url (stringlengths 17-172), text (stringlengths 44-1.14M), metadata (stringlengths 820-832)
http://mathoverflow.net/unanswered
## Unanswered Questions 4answers 17k views ### Polynomial bijection from QxQ to Q? Is there any polynomial $f(x,y)\in{\mathbb Q}[x,y]{}\$ such that $f:\mathbb{Q}\times\mathbb{Q} \rightarrow\mathbb{Q}$ is a bijection? 0answers 7k views ### Ultrafilters and automorphisms of the complex field It is well-known that it is consistent with $ZF$ that the only automorphisms of the complex field $\mathbb{C}$ are the identity map and complex conjugation. For example, we have th … 0answers 3k views ### Dropping three bodies Consider the usual three-body problem with Newtonian $1/r^2$ force between masses. Let the three masses start off at rest, and not collinear. Then they will become collinear … 0answers 2k views ### Volumes of Sets of Constant Width in High Dimensions Background The n dimensional Euclidean ball of radius 1/2 has width 1 in every direction. Namely, when you consider a pair of parallel tangent hyperplanes in any direction the dis … 1answer 2k views ### A function whose fixed points are the primes If $a(n) = (\text{largest proper divisor of } n)$, let $f:\mathbb{N} \setminus \{ 0,1\} \to \mathbb{N}$ be defined by $f(n) = n+a(n)-1$. For instance, $f(100)=100+50-1=149$. Clearl … 0answers 643 views ### Local structure of rational varieties I've been asked this question by a colleague who's not an algebraic geometer; we both feel that the answer should be "no", but I don't have a clue how to prove it. Here's the quest … 0answers 2k views ### 2, 3, and 4 (a possible fixed point result ?) The question below is related to the classical Browder-Goehde-Kirk fixed point theorem. Let $K$ be the closed unit ball of $\ell^{2}$, and let $T:K\rightarrow K$ be a mapping such … 0answers 4k views ### Does Godel’s incompleteness theorem admit a converse? Let me set up a strawman: One might entertain the following criticism of Godel's incompleteness theorem: why did we ever expect completeness for the theory of PA or ZF in the firs … 0answers 1k views ### the topology of arithmetic progressions of primes The primary motivation for this question is the following: I would like to extract some topological statistics which capture how arithmetic progressions of prime numbers "fit toget … 0answers 2k views ### Grothendieck-Teichmuller conjecture (1) In "Esquisse d'un programme", Grothendieck conjectures Grothendieck-Teichmuller conjecture: the morphism $$G_{\mathbb{Q}} \longrightarrow Aut(\widehat{T})$$ is an isomor … 0answers 1k views ### two tetrahedra in R^4 It is relatively easy to show (see below) that if we have two equilateral triangles of side 1 in $R^3$, such that their union has diameter 1, then they must share a vertex. I wonde … 0answers 1k views ### What does the theta divisor of a number field know about its arithmetic? This question is about a remark made by van der Geer and Schoof in their beautiful article "Effectivity of Arakelov divisors and the theta divisor of a number field" (from '98) (li … 0answers 1k views ### To what extent does Spec R determine Spec of the Witt vector ring over R? Let $R$ be a perfect $\mathbb{F}_p$-algebra and write $W(R)$ for the Witt ring [i.e., ring of Witt vectors -- PLC] on $R$. I want to know how much we can deduce about \$\text{Spec } … 0answers 1k views ### Constructing non-torsion rational points (over Q) on elliptic curves of rank > 1. Consider an elliptic curve E defined over Q. Assume that the rank of E(Q) is >=2. (Assume the Birch-Swinnerton-Dyer conjecture if needed, so that analytic rank = algebraic rank.) 
H … 0answers 559 views ### Homotopy type of TOP(4)/PL(4) It is known (e.g. the Kirby-Siebenmann book) that $\mathrm{TOP}(n)/\mathrm{PL}(n)\simeq K({\mathbb Z}/2,3)$ for $n>4$. I believe it is also known (Freedman-Quinn) that \$\mathrm{TOP …
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8940079212188721, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/211314-question-conditional-probability.html
# Thread: 1. ## Question on conditional probability I'm a bit stumped on this question. It seems simple but I don't know where to begin. "There is a 50% chance of hard drive damage if a power line to which a computer is connected is hit during an electrical storm. There is a 5% chance that an electrical storm will occur on any given summer day in a given area. If there is a 0.1% chance that the line will be hit during a storm, what is the probability that the line will be hit and there will be hard drive damage during the next electrical storm?" So it is the intersection between the line being hit and HDD damage. But how do you get there? 2. ## Re: Question on conditional probability Originally Posted by showstopperx I'm a bit stumped on this question. It seems simple but I don't know where to begin. "Assume that there is a 50% chance of hard drive damage if a power line to which a computer is connected is hit during an electrical storm. There is a 5% chance that an electrical storm will occur on any given summer day in a given area. If there is a 0.1% chance that the line will be hit during a storm, what is the probability that the line will be hit and there will be hard drive damage during the next electrical storm in this area?" So it is the intersection between the line being hit and HDD damage. But how do you get there? Let's begin by defining events: $A$ is the event that the hard drive is damaged. $B$ is the event that the line is hit. $C$ is the event that an electrical storm occurs. $P(C) = 0.05$ $P(B|C) = 0.001$ $P(B|C)= \tfrac{P(B\cap C)}{P(C)} \implies 0.001 = \tfrac{P(B\cap C)}{0.05} \implies P(B\cap C)=0.00005$ $P\left[A|(B\cap C)\right] = 0.5$ $P\left[A|(B\cap C)\right] = \tfrac{P\left[A\cap (B\cap C)\right]}{P(B\cap C)} = \tfrac{P(A\cap B\cap C)}{P(B\cap C)} \implies 0.5 = \tfrac{P(A\cap B\cap C)}{0.00005}$ $\implies P(A\cap B\cap C)=0.000025$ $P\left[(A\cap B) | C\right] = \tfrac{P\left[A\cap (B\cap C)\right]}{P(C)} = \tfrac{P(A\cap B\cap C)}{P(C)} = \tfrac{0.000025}{0.05} = 0.0005$ -Andy 3. ## Re: Question on conditional probability Thank you sir!
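A quick numerical check of the chain of conditional probabilities above, assuming the numbers from the problem statement (5% chance of a storm, 0.1% chance the line is hit given a storm, 50% chance of damage given a hit):

```python
# Sanity check of the conditional-probability chain worked out above.
p_C = 0.05            # P(electrical storm on a given summer day)
p_B_given_C = 0.001   # P(line hit | storm) = 0.1%
p_A_given_BC = 0.5    # P(hard drive damage | line hit during a storm)

p_BC = p_B_given_C * p_C        # P(B and C) = 0.00005
p_ABC = p_A_given_BC * p_BC     # P(A and B and C) = 0.000025
p_AB_given_C = p_ABC / p_C      # P(line hit and damage | storm) = 0.0005

print(p_BC, p_ABC, p_AB_given_C)
```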
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9601856470108032, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-applied-math/33531-forced-damped-oscillations.html
# Thread: 1. ## Forced, Damped Oscillations Hi people. Here's the file: http://www.mth.uct.ac.za/Courses/MAM...roject1_07.pdf Question 1(a) is the one I have a problem with. I just don't know what he's getting at. Is y(x) the function that describes the road? And comparing y(t) and y(x) implies, to me, that x=vt, so it has a constant velocity with respect to the x-axis (in the direction of the x-axis); a very odd thing to do... Is Y then the vertical displacement of the vehicle from the x-axis? So the car is like a mass on a spring on the road? I have no idea how he derived that Differential Equation. Any help is much appreciated, thanks. (P.S. I need this pronto please!) 2. Even a simple: "Yes I agree, the question is confusing and badly written. I do not know what he's on about" would be very nice, thank you (my prof is Russian). 3. Unfortunately...that link isn't working for me. I'm not sure if that's why you're not getting replies, but I'm definitely unable to reach the URL you provided. 4. That makes two of us. I'll put this on my "subscribed" list and check back on it later. -Dan 5. Hmm, works fine for me, but that's not the point. Oh, yes, it isn't working, it was a moment ago. Argh! This is frustrating! It appears the entire site is down. Ok, the question is: Suppose that a car oscillates vertically as if it were a mass m on a single spring with constant k, attached to a single dashpot (dashpot provides resistance) with constant c. Suppose that this car is driven along a washboard road surface with an amplitude a and a wavelength L (Mathematically the 'washboard surface' road is one with the elevation given by y=asin(2*pi*x/L).) (a) Show that the upward displacement of the car Y satisfies the equation: $m\ddot{Y} + c\dot{Y} + kY = c\dot{y} + ky$ where y(t) = asin(2*pi*v*t/L) and v is the velocity of the car. 6. As far as I can tell, your original assessment of the problem is correct. y(t) and y(x) are very much related, except for the obvious distinction of one depending on t and the other depending on x, but the two equations for y you have listed both make sense considering that x=vt. That would lead me to believe that Y is the displacement, and y(t) is just the height of the surface itself, because the washboard surface you're working with is sinusoidal as described by the y(t) function. 7. This is how I understand the question: The car can be regarded as a mass on top of a spring. The car moves along a road in the xy-plane, the road is described by y(x). The x-component of the velocity of the car is constant and x=vt (this is very odd). The distance from the x-axis to the mass on the spring is Y. From that we must deduce the differential equation, but how? The LHS, with the RHS = 0, looks like the equation of the free decay of a damped oscillator, so the whole equation looks like that of a driven, damped oscillator. I cannot see how the road described by y(x) would give a driving force like the RHS. What I tried to do is this: Y = y + p + l where l is the equilibrium length of the spring with the mass (a constant), and p the extension of the spring. Re-arrange for p: p = Y - y - l and find the equation of the free decay of a damped oscillator in terms of the variable p, but this does not give the right answer. 8. Never mind trying to solve for Y; the solution is extremely ugly, and he still wants us to find the amplitude as a function of velocity, puh!
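The equation quoted in post 5 is easy to explore numerically. The sketch below integrates $m\ddot{Y} + c\dot{Y} + kY = c\dot{y} + ky$ with the washboard forcing $y(t) = a\sin(2\pi v t/L)$ using SciPy; every parameter value (m, c, k, a, L, v) is an arbitrary placeholder chosen for illustration, not a value from the assignment. Sweeping v and recording the steady-state amplitude is one way to see the resonance behaviour the last post alludes to.

```python
# Numerical integration of  m*Y'' + c*Y' + k*Y = c*y'(t) + k*y(t)
# with the washboard forcing y(t) = a*sin(2*pi*v*t/L).
# All parameter values below are arbitrary placeholders, not from the problem sheet.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1000.0, 3000.0, 50000.0   # mass, damping constant, spring constant (assumed)
a, L, v = 0.05, 5.0, 10.0           # road amplitude, wavelength, car speed (assumed)
w = 2 * np.pi * v / L               # forcing frequency

def rhs(t, state):
    Y, Ydot = state
    y = a * np.sin(w * t)
    ydot = a * w * np.cos(w * t)
    Yddot = (c * ydot + k * y - c * Ydot - k * Y) / m
    return [Ydot, Yddot]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], max_step=0.01)
print("late-time amplitude approx:", np.abs(sol.y[0][-500:]).max())
```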
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9636690020561218, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/91858-centroid-hemisphere-print.html
# centroid of a hemisphere • June 4th 2009, 10:39 PM manyarrows centroid of a hemisphere I must find the z centroid of a hemisphere with radius a. Its base is on the x-y plane and its dome extends up the z axis. I am using the following equations to determine the centroid. $\overline{z}=\frac{\int_V\tilde{z} dV}{\int_V dV}$ I am using $dV=\pi a^2 dz$ and $\tilde{z}=z$ and integrating from 0 to a $\overline{z}=\frac{\int_{0}^{a} z \pi a^2 dz}{\int_{0}^{a} \pi a^2 dz}$ $\overline{z}=\frac{\pi a^2 \int_{0}^{a} z dz}{\pi a^2\int_{0}^{a} dz}$ $\overline{z}=\frac{ \int_{0}^{a} z dz}{\int_{0}^{a} dz}$ $\overline{z}=\frac{\frac{z^2}{2}|_{0}^{a}}{z|_{0}^{a}}$ $\overline{z}=\frac{a}{2}$ I know the answer is supposed to be $\overline{z}=\frac{3a}{8}$ Where did I mess up? Thanks • June 4th 2009, 11:58 PM matheagle These are supposed to be triple integrals. You should switch to spherical co-ordinates with the region being $0\le \rho\le a$, $0\le \theta \le 2\pi$ and $0\le \phi \le \pi/2$. • June 5th 2009, 12:20 AM Random Variable $dV = \pi r^{2}dz$, where r is the radius of each circular disc. Then you need to write r in terms of z. • June 5th 2009, 12:29 AM manyarrows The hemisphere is centered on the origin so I know that the x and y centroids are 0. Is this what you are referring to by triple integrals? I also would like to be able to do this in cartesian if that is possible, as those were the coordinates the problem specified. If I write a in terms of z I get $a^2=y^2 + z^2$ Then whenever I try to get the integrand in terms of one variable I wind back up at $a^2$ Thanks again • June 5th 2009, 12:47 AM manyarrows I found my mistake dV should be $dV=\pi y^2 dz$ then $dV= \pi (a^2 -z^2) dz$ thanks again • June 5th 2009, 06:35 AM matheagle I read dV as dxdydz. Changing to spherical with $J=\rho^2\sin\phi$ and $z=\rho\cos\phi$ we have ${ \int_0^a\int_0^{2\pi}\int_0^{\pi /2} \rho^3\sin\phi \cos\phi d\phi d\theta d\rho\over \int_0^a\int_0^{2\pi}\int_0^{\pi /2} \rho^2 \sin\phi d\phi d\theta d\rho}$ $= { \int_0^a \rho^3d\rho \int_0^{2\pi}d\theta \int_0^{\pi /2} \sin\phi \cos\phi d\phi \over \int_0^a \rho^2 d\rho \int_0^{2\pi}d\theta \int_0^{\pi /2} \sin\phi d\phi }$ $= { (a^4/4)(2\pi)(1/2) \over (a^3/3)(2\pi)(1)}={3a\over 8}$. • June 5th 2009, 07:45 AM manyarrows I also did it this way $hemisphere = a^2=y^2+x^2$ $\tilde{z}=\frac{\int_{V}\tilde{z} dV}{\int_{V} dV}$ $dV= \pi r^2$ $r=y$ $\tilde{z}=z$ $y^2=a^2-z^2$ $\tilde{z}=\frac{\int_{0}^{a}z \pi y^2 dz}{\int_{0}^{a}\pi y^2 dz}$ $\tilde{z}=\frac{\int_{0}^{a}z \pi (a^2-z^2) dz}{\int_{0}^{a}\pi (a^2-z^2) dz}$ $\tilde{z}=\frac{\pi \int_{0}^{a}z (a^2-z^2) dz}{\pi \int_{0}^{a}(a^2-z^2) dz}$ $\tilde{z}=\frac{\int_{0}^{a}z (a^2-z^2) dz}{ \int_{0}^{a}(a^2-z^2) dz}$ Your way looks much easier, so thanks • June 5th 2009, 08:44 PM matheagle Quote: Originally Posted by manyarrows The hemisphere is centered on the origin so I know that the x and y centroids are 0. Is this what you are referring to by triple integrals? I also would like to be able to do this in cartesian if that is possible, as those were the coordinates the problem specified. If I write a in terms of z I get $a^2=y^2 + z^2$ Then whenever I try to get the integrand in terms of one variable I wind back up at $a^2$ Thanks again You can do this via (x,y,z) but to solve the integrals you will need various trig substitutions. It's smarter to switch to spherical immediately. For example the bound for z would be $0\le z\le \sqrt{a^2-x^2-y^2}$.
Then the (x,y) base is a circle of radius a, also screaming out for trig substitution. The other bounds of integration would be $-\sqrt{a^2-x^2}\le y \le \sqrt{a^2-x^2}$ and $-a\le x \le a$, which is begging for polar, i.e., trig substitution. • June 5th 2009, 09:24 PM Random Variable He did it correctly the second time. You don't have to set up a triple integral. • April 19th 2012, 01:01 PM DrewMeadow Re: centroid of a hemisphere Here's the problem worked in a bit more detail, hope it reads: Calculus: Centroid of a Hemisphere, Math 251
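The corrected disk-slicing integral from the thread can also be checked symbolically; the snippet below is a minimal SymPy verification that the two integrals really give $\overline{z} = \frac{3a}{8}$.

```python
# Symbolic check of the disk-slicing computation above:
# z_bar = integral of z*pi*(a^2 - z^2) dz / integral of pi*(a^2 - z^2) dz, both from 0 to a.
import sympy as sp

z, a = sp.symbols('z a', positive=True)
numerator = sp.integrate(z * sp.pi * (a**2 - z**2), (z, 0, a))
denominator = sp.integrate(sp.pi * (a**2 - z**2), (z, 0, a))
print(sp.simplify(numerator / denominator))   # prints 3*a/8
```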
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 37, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9461962580680847, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/64061/how-does-one-determine-n-spheres-of-curvature
# How does one determine $n$-spheres of curvature? I am aware of circles of curvature and I am simply wondering to what extent this generalizes to $n$ dimensions. Specifically, if some surface in $n$-dimensional space is represented parametrically, how does one determine the $n$-sphere of curvature at any given point? - 1 – Mike Spivey Sep 13 '11 at 4:00 3 But note that this is a sphere in contact with a curve, not with a two-dimensional surface. In general, a (hyper)surface doesn't have a sphere of curvature but a (hyper)ellipsoid of curvature, since it can have different curvature in different directions. By the way, if a (hyper)surface in $n$-dimensional space had a sphere of curvature, it would be an $(n-1)$-sphere, not an $n$-sphere. – joriki Sep 13 '11 at 8:07 Space curves do have osculating spheres and osculating circles (the circle formed by the intersection of the osculating sphere and osculating plane). – J. M. Sep 13 '11 at 8:57
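For the curve case mentioned in the comments, the radius of the circle of curvature at a point is $1/\kappa$, where $\kappa$ is the curvature. A minimal numerical sketch for a parametric plane curve; the ellipse and the parameter value are arbitrary examples:

```python
# Radius of the circle of curvature for a parametric plane curve,
# kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), evaluated at one parameter value.
import numpy as np

def radius_of_curvature(dx, dy, ddx, ddy):
    kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return 1.0 / kappa

# Example curve: x = 2 cos t, y = sin t (an ellipse), evaluated at t = pi/4.
t = np.pi / 4
dx, dy = -2 * np.sin(t), np.cos(t)      # first derivatives
ddx, ddy = -2 * np.cos(t), -np.sin(t)   # second derivatives
print(radius_of_curvature(dx, dy, ddx, ddy))
```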
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9205528497695923, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/geometry?page=3&sort=newest&pagesize=15
Tagged Questions shape, congruence, similarity, transformations, properties of classes of figures, points, lines, angles 2answers 30 views How to find length when viewing at some angle? I have a question on angles. I have a rectangular tile. when looking straight I can find the width of the tile, but how do I find the apparent width when I see the same rectangular tile at some ... 1answer 37 views How to find length of a rectangular tile when viewing at some angle [duplicate] I have a question on angles. I have a rectangular tile. when looking straight I can find the width of the tile, but how do I find the apparent width when I see the same rectangular tile at some ... 1answer 26 views Prove triangles with same perimeter and point of tangency of excircle and nine-point circle. In triangle ABC let X be the point of tangency of the excircle opposite A with side BC. (A) Prove that the segment AX divides triangle ABC into two triangles, each having the same perimeter. (B) Prove ... 1answer 28 views calculate rotation from 2 3d lines I am trying to extract the transformation of a segment described by two 3d points $(a_0,b_0)$ into the transformed points $(a_1,b_1)$. I have been able to calculate the translation, and I am assuming ... 1answer 29 views How do I prove that the triangle must be obtuse? Suppose you are given a triangle where the center of the nine-point circle lies on the circumcircle of the triangle. It is obvious that the triangle is obtuse, but how can you formally prove that the ... 1answer 26 views How to sketch the graph of $\varphi(X)=d(X,A)+d(X,B)$ when $A$ and $B$ are not given? $A,B$ are points in an axis, disposed in this order. Sketch the graph of the following function: $$\varphi(X)=d(X,A)+d(X,B)$$ $d(A,B)$ is the distance from point $A$ to point $B$. I'm ... 1answer 19 views Question about the dimension of the intersection of two subspaces of a vector space $V$. Let $M, N$ be two subspaces of a vector space $V$ with dimension $k$. Suppose that $\dim M=m$, $\dim N =n$. It is said that $\dim M \cap N \geq m+n-k$. Suppose that $M, N$ are two parallel planes in ... 1answer 29 views The best softwares to understand the intersections of the 3D objects in the Euclidean space What is the best software (Easy to follow and clear graphics) to draw the intersections between two spheres, Two spheres and a pyramid, for example. The centre and the radius of the spheres are given ... 2answers 42 views If the diagonals of a trapezoid are congruent, then the trapezoid is isosceles. Thanks in advance to anyone who can help me out on this. I'm currently a junior in high school taking and doing well my school's honors pre-calc class, but of all of the math I've ever learned, proofs ... 3answers 60 views Distance between two antennas I am trying to find out the formula to calculate how high antennas need to be for Line of Sight (LoS) propagation. I found: d = 3.57sqrt(h) also ... 3answers 61 views Permutations and Cross-ratios Pick four distinct numbers, list all 24 permutations, and compute the cross-ratio of each permutation. Show that at most six numbers have occurred, given by the cross-ratio group \$y, \frac{1}{y}, 1-y, ... 2answers 45 views Constructing Points Given that you have two lines intersecting at the origin "0", with the unit "1" marked on each line, and "2" marked on the second line, clearly show how you would construct the point (2+1), the point ... 2answers 48 views Showing that f cannot be a linear fractional transformation Let $f(x) = \frac{x^2}{x+1}$. 
Show that f cannot be a linear fractional transformation. (Hint: do not try to argue that f cannot be put into the form of $\frac{ax+b}{cx+d}$). 0answers 29 views Generating Linear Fractional Transformations Let $f(x)$ be a linear fractional transformation of your choosing, as long as $a, b, c, d \neq 0$ and $ad-bc \neq 0$. i) Express $f(x)$ as a composition of generating transformations. ii) Pick four ... 1answer 50 views Limits of the Minkowski distance as related to the generalized mean Given that the Minkowski distance is $$d(X=(x_1,...,x_n),Y=(y_1,...,y_n))=(\sum_{i=1}^n|x_i−y_i|^p)^{1/p}$$ I understand that $$\lim_{p\to\infty}d(X,Y)=\max_{i=1}^n|x_i-y_i|$$ ... 1answer 19 views Given two lat/long/altitude points, how do I find the north/east/up vector between the two points? I have two lat/long/altitude points $(\phi_1,\lambda_1,h_1)$ and $(\phi_2,\lambda_2,h_2)$. I wanted to find the distances in the east and north directions (up is fairly obvious, I think?) between ... 0answers 21 views maximum length of a scaled vector in a triangle (simplex) Given a triangle (or, in general, a simplex) $T$ and a vector $\vec{s}$, I'd like to compute the quantity $$\max\{|x-y|: x,y\in T, x-y = \alpha \vec{s}, \alpha\in\mathbb{R}\}$$ i.e., the maximum ... 1answer 84 views contest problem in geometry Suppose the inscribed circle of $\triangle A_1A_2A_3$ touches the sides $A_2A_3, A_3A_1, A_1A_2$ at $T_1,T_2,T_3$. From the midpoints $M_1,M_2,M_3$ of $A_2A_3,A_3A_1,A_1A_2$, draw lines perpendicular ... 3answers 59 views Finding mass of a sphere whose density is given I want to find the mass of a sphere of radius $a$ whose density at a point is proportional to the distance of a point from a plane passing through a diameter of a sphere 1answer 22 views equation of a plane passing through a diameter of a sphere I want to find the equation of plane passing through a diameter of a sphere, For simplicity let us assume that origin,$(0,0,1)$ and $(0,0,-1)$ are on a diameter, then the points lie on the plane ... 0answers 23 views Imaginary line passing through non-collinear points in R3. I have come to a problem where n points are provided in 3-Dimensional plane. I need a imaginary line which can be assumed that it is passing through these points. 3answers 64 views Equation of Cone vs Elliptic Paraboloid I can't understand why $$\frac{z}{c} = \frac{x^2}{a^2} + \frac{y^2}{b^2} \tag{*}$$ corresponds to an elliptic paraboloid and $$\frac{z^2}{c^2} = \frac{x^2}{a^2} + \frac{y^2}{b^2} \tag{**}$$ to a cone, ... 0answers 43 views Good textbooks on Non-Euclidean Geometry? I'm currently taking a class called Foundations of Geometry. We started with the stereographic projection and carried onward through fractional linear transformations, and now we are working with the ... 1answer 30 views Problem with finding the intersection point between a line and triangle I have a mathematical problem that I'm trying to solve, but the equations I have derived don't give the correct output when utilised on concrete problems. However, I can't figure out what the problem ... 2answers 55 views Looking for a (nonlinear) map from n-dimensional cube to an n-dimensional simplex I am looking for a (nonlinear) map from n-dimensional cube to an n-dimensional simplex; to make it simple, assume the following figure which is showing a sample transformation for the case when $n=2$. ... 2answers 63 views A part of an I.M.O problem Let $a$ be the base of a triangle and $a+b$ be its perimeter. 
Using the fact that area of triangle is maximum when the other two sides are equal, prove that among all quadrilaterals with fixed ... 2answers 43 views Calculate new positon of rectangle corners based on angle. I am trying to make a re-sizable touch view with rotation in android. I re-size rectangle successfully. You can find code here It has 4 corners. You can re-size that rectangle by dragging one of ... 0answers 57 views Does this graph have a name? Does graph shown below from the paper Dissection Graphs of Planar Point Sets by P. Erdos, L. Lovasz, A. Simmons, and E.G. Straus have a name? Does it come from a family of related graphs? 1answer 41 views Length of DNA strand The DNA molecule has a double helix structure. The radius of each helix is approximately $10$ angstroms ($1$ angstrom $=10^{-8}$cm). Each helix goes up by approximately $34$ angstroms every ... 0answers 28 views A rectangular prism has the surface area of 300 square inches. what whole number dimensions will give the prism the greatest volume. [duplicate] it is a tough geometrical algebra problem It is tough and involves geometry and algebra. thank you 1answer 220 views Maximizing the volume of a rectangular prism A rectangular prism has a surface area of $300$ square inches. What whole number dimensions give the prism the greatest volume? This is a math olympiad problem. It involves the volume and surface ... 3answers 36 views Problem with determining cylinder height Here is a question that I have, but I have no idea where to do go from here. Here is the question: The vase company designs a new vase that is shaped like a cylinder on the bottom with a cone on ... 1answer 18 views How to I calculate a second plot point given the first point and the slope? Is there a formula to calculate the second point in a segment given a starting point, segment length, and slope? Thanks 3answers 40 views Please help me find a formula to find the 3rd point in a right triangle I'm trying to figure out how to plot a 3rd point on a graph Given the following line segments and angles Is there a formula for the 3rd point? Note: This image is just for an example. The base ... 1answer 50 views If I have three points on a circle, how do I calculate other points on the same circle? I have circle which I know intersects the x axis at -11.5 and 11.5. It intersects the Y axis at 1. How can I calculate the (positive) Y value for any X value between -11.5 and 11.5? This is to ... 1answer 24 views finding an equation through these two points in upper half plane I have to find an equation going through $(-1,y)$ and $(1,y)$. The equation my book uses is $x^2+y^2+ax=b$. So I get two equations when I plug in the two points. I get $1+y^2-a=b$ and $1+y^2+a=b$ ... 1answer 27 views Please help me to find an equation to find the 3rd point in an arc. Long story short, I want to animate the rotation of an object that's based off a circle. Given the center point of the circle, the radius, and one of the points in the arc, is it possible to find the ... 3answers 63 views Parametric equation of an ellipse How do I show that the parametric equations $$x(t) = \sin(t+a)$$ $$y(t) = \sin(t+b)$$ define an ellipse? I tried graphing it and I'm certain it is a rotated ellipse. My first idea is to write it ... 0answers 61 views Geometrical Inequality Let $ABCD$ be a quadrilateral on the unit circle, and the diagonals $AC$ and $BD$ intersects at $E$. If the shortest height of the triangle $ACD$ equals the radius of the incircle of the triangle ... 
2answers 121 views A concise distance problem A falsely simple Euclidian geometry problem: Points $A$, $B$, $C$ are collinear; $\|AB\|=\|BD\|=\|CD\|=1$; $\|AC\|=\|AD\|$. What is the set of possible $\|AC\|$ ? I'm after a concise answer, ... 5answers 138 views Does $e$ have a geometric representation? [duplicate] Just like $\pi$ is the ratio of a circle's circumference to its diameter? I know that the tangent line to the function $e^x$ has a slope of $e^x$ at that point, but is there some other geometric ... 1answer 71 views circle of inversion Determine the equation of the circle reflection of the line $x = 2$ if the circle of reflection is $x^2 + y^2 + 2x = 0$ which in standard form is $(x+1)^2+y^2=1$ where $radius=1$ and center is ... 1answer 49 views Center of circle that has two points on its circumference and a known tangent I've found a related question, which helped me get started on this. I can get it to work for the example on the question, but I'm running into an issue when the tangent is not y = 0. Other question ... 1answer 21 views Condition for a quadrilateral to be tangential Define a quadrilateral to be tangential iff all four of its internal angle bisectors meet a a single point. Prove the following: A quadrilateral is tangential if and only if three of of its ... 1answer 22 views Pascal's theorem in geometry We denote $P= WX \cap YZ$ to mean point $P$ is the intersection of lines $WX$ and $YZ$. The problem is about pascal's theorem: Let $ABCD$ be a cyclic quadrilateral. Let the tangent lines at A and at ... 1answer 36 views Can one use Pick's theorm to prove that area size 5 covers at least 6 grid points? According to Pick's Theorem, the size of an area $A$ can be calculated by the sum of the interior lattice points located in the polygon $i$ and the number of lattice points on the boundary placed on ... 1answer 80 views help on a geometry problem $ABCD$ is a convex quadrilaterial such that $AC=BD$. $AC$ and $BD$ intersect at $E$ and $\angle AEB=66^{\circ}$. $F$ and $G$ are the midpoints of $AD$ and $BC$, respectively. $FG$ intersects $AC$ ... 2answers 18 views What's the geometric interpretation of a semidenifite matrix smaller than identity matrix? What's the geometric interpretation of a semidenifite matrix in terms of eigenvalues/eigenvectors with the condition: $$0 \preceq W \preceq I$$ 4answers 111 views What's the best 3D angular co-ordinate system for working with smartfone apps This is very much an applied maths question. I'm having trouble with Euler angles in the context of smartphone apps. I've been working with Android, but I would guess that the same problem arises ... 1answer 55 views Inscribing equilateral triangle in rectangle Problem: What is the area of the largest equilateral triangle that can be inscribed in a rectangle with sides $10$ and $11$? (The problem comes from an old high school math contest. I believe it's ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 88, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9273434281349182, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/109442/what-are-some-applications-of-mathematics-to-the-medical-field/109446
# What are some applications of Mathematics to the medical field? This semester I'm charged with finding a senior capstone project for next year. I've given it a lot of thought and can't seem to find any interesting ideas that are appropriate for my level of mathematics: Junior with A's in: • ODEs/PDEs • Linear Algebra • Topology • Organic Chemistry 1/2 • Cellular Biology • Introduction to Biochemistry And currently enrolled in Combinatorics and Complex Variables. I've approached several of my professors and asked for some project ideas but they've not given me anything at all. I'm not asking for you to give me a project to do without me having to do anything. I just want to get a feel for what some applications are to some of the more interesting fields, like Combinatorics and something that is somewhat related to all this, Chaos Theory. - 7 – Zev Chonoles♦ Feb 15 '12 at 4:32 ## 13 Answers Tomography, which is now used heavily nowadays as a diagnostic tool, relies on rather deep mathematics to work properly. There is in particular the Radon transform and its inverse, which are useful for reconstructing a three-dimensional visualization of body parts from "slices" taken by a CAT scanner. - So, that sounds really interesting. I know how to program a little, so producing a render from some supplied slices would be an awesome project. Do you know of any supplemental material to get me started? – MentatOfDune Feb 15 '12 at 0:37 1 @Mentat: this book should cover the theory nicely. For code, try searching around; you'll find stuff like this. – J. M. Feb 15 '12 at 0:46 I was aware of medical imaging, but the more and more I look into this stuff, the more I think this is what I'm going to do my project on. This is really awesome stuff. Thanks! – MentatOfDune Feb 15 '12 at 0:53 There are many topics you could choose from, the field of mathematical biology is vast. Here are a few ideas, in no particular order. The Hodgkin-Huxley equations in neurobiology provide an incredibly accurate quantitative description of action potentials in neurons/myocytes/excitable cells. The Logistic Equation is a simple model of population growth, and the Lotka-Volterra Equation describes population growth in a predator-prey situation. David M. Eddy's, statistical work in public health prompted the American Cancer Society to change its recommendation for the frequency of Pap smears from one year to three years. The Genetic Code is an interesting piece of combinatorics in itself, and I can not help but mention Genetic Algorithms which are a beautiful example of biology inspiring mathematics, rather than the converse. Other potential topics are the application of mathematics to Genomics, Phylogenetics and the Topology/Geometry of proteins and macromolecules. - 1 – J. M. Feb 15 '12 at 1:10 You might be interested in the answers to a question about Math and cancer research at the MathOverflow site, http://mathoverflow.net/questions/87575/mathematics-and-cancer-research - One area in which combinatorics interacts with the biological sciences is in the Ewens sampling formula of population genetics. Population genetics generally relies pretty heavily on mathematics. And there's this book. Statistics generally is heavily relied on in medical research. Since you mention combinatorics, one area of statistics that relies on combinatorics and gets applied to medicine is design of experiments. - I know statistics is heavily relied on, but I find the subject matter dry and boring. 
Population genetics is an interesting topic though, what would I do with that though? Trace a gene through its development? I'm not really sure. – MentatOfDune Feb 15 '12 at 0:55 Related to J.M.'s answer, one really neat area is source localization of EEG. The idea is similar to other brain scanning methods such as fMRI or PET, but instead of measuring blood flow in the brain or nuclear physics, EEG data are collected from a subject and electrical sources of these surface voltages are reconstructed using inverse problem approaches. Inverse problems are a venerable area of applied math, with a long history spanning many disciplines. While this is more interesting from a research standpoint than a clinical one, this is a really cool area of work for understanding brain activity. For a good overview of different algorithms, check out this paper (warning: PDF). - I can't read that paper. I don't have access to that journal. I'm trying to see if I can access it through the university system but no luck. It's a shame, I really wanted to read it. – MentatOfDune Feb 16 '12 at 4:23 @MentatOfDune Try searching through scholar.google.com. I thought it was the link to the PDF directly, but maybe not. I know there are freely available reviews on the subject around. – joshin4colours Feb 16 '12 at 5:20 On top of what has already been said in other great answers: Bernoulli’s equation and the Navier–Stokes equations come up often when studying blood-flow. - The Basic Local Alignment Search Tool (BLAST) is one of the most widely used bioinformatic programs, developed by mathematicians (Altschul, Gish, and Lipman) in the most cited paper of the 1990's (and the most cited biology paper of all time). BLAST is used in medical resequencing, and genome sequencing is having an increasing impact on medicine. The mathematical content is stochastic analysis. - 3D surface curves are used to model tumors to correctly apply heat treatments. This particular application is the motivation for Larson's Calculus's 13th Chapter: Multiple Integrals. He provides the examples: $$\rho = 1+0.2\cdot\sin8\theta \cdot\sin\phi$$ $$\rho = 1+0.2\cdot\sin8\theta \cdot\sin4\phi$$ as models and asks the reader to calculate their volume. As a reference he leaves: "Heat Therapy for Tumors" Leah Edelstein-Keshet (UMAP Journal), Summer '91. - Image processing relies heavily on mathematics and is widely used to produce and analyze medical data. This book might give you some ideas: Mathematical problems in image processing: partial differential equations ... By Gilles Aubert, Pierre Kornprobst - There exists an entire volume devoted to "Fuzzy Logic and Medicine". The link there can tell you more, as well as a google search. - For applications of chaos theory or non-linear dynamics to heart-rate, you can check the paper by A.L. Goldberger titled Non-linear dynamics for clinicians: chaos theory, fractals, and complexity at the bedside (link). - I heard of a student at Sherbrooke University who made a program that modelled human brain tissue very accurately. It might have nothing to do with what you want, but it helped map vital tissues in brain surgeries and assisted doctors in making irreversible decisions accurately. - That's actually like what I'm looking into doing now, the post about Tomography sparked my interest and I've been looking into it all day.
– MentatOfDune Feb 16 '12 at 4:01 If you're interested in applications of differential equations in the biological/medical area, I suggest looking at Clifford Taubes, "Modeling differential equations in biology" http://books.google.ca/books?id=Y464SAAACAAJ There are lots of things in there that would make a good project. -
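Returning to the tomography answer above: the Radon transform and its filtered back-projection inverse are available in scikit-image, so the reconstruction-from-slices idea can be sketched in a few lines. This assumes scikit-image is installed, and the exact keyword names (e.g. `filter_name`) vary a little between versions.

```python
# Minimal sketch of tomographic reconstruction: forward Radon transform of a
# standard test "slice", then filtered back-projection to recover it.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.25)            # small test slice
angles = np.linspace(0.0, 180.0, 120, endpoint=False)   # projection angles (degrees)
sinogram = radon(image, theta=angles)                   # simulated scanner data
reconstruction = iradon(sinogram, theta=angles, filter_name='ramp')

print("RMS reconstruction error:", np.sqrt(np.mean((reconstruction - image) ** 2)))
```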
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9350074529647827, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/20453/are-g-infinity-algebras-b-infinity-vice-versa/20530
## Are G_infinity algebras B_infinity? Vice versa? What is the relationship between $G_\infty$ (homotopy Gerstenhaber) and $B_\infty$ algebras? In Getzler & Jones "Operads, homotopy algebra, and iterated integrals for double loop spaces" (a paper I don't understand well) a $B_\infty$ algebra is defined to be a graded vector space $V$ together with a dg-bialgebra structure on $BV = \oplus_{i \geq 0} (V[1])^{\otimes i}$, that is a square-zero, degree one coderivation $\delta$ of the canonical coalgebra structure (stopping here, we have defined an $A_\infty$ algebra) and an associative multiplication $m:BV \otimes BV \to BV$ that is a morphism of coalgebras and such that $\delta$ is a derivation of $m$. A $G_\infty$ algebra is more complicated. The $G_\infty$ operad is a dg-operad whose underlying graded operad is free and such that its cohomology is the operad controlling Gerstenhaber algebras. I believe that the operad of chains on the little 2-discs operad is a model for the $G_\infty$ operad. Yes? It is now known (the famous Deligne conjecture) that the Hochschild cochain complex of an associative algebra carries the structure of a $G_\infty$ algebra. It also carries the structure of a $B_\infty$ algebra. Some articles discuss the $G_\infty$ structure while others discuss the $B_\infty$ structure. So I wonder: How are these structures related in this case? In general? - 4 I have no answer, nor do I have a sensible comment since I don't know what you mean by these words. I hesitate to waste a down vote --- the question could be interesting --- but may we please have some more background? – Scott Carter Apr 6 2010 at 1:55 5 I think the question is aimed at people who know what the words in it mean. – Ed Segal Jul 17 2010 at 10:13 ## 1 Answer There is a nice summary of the relationship between B infinity and G infinity in the first chapter of the book "Operads in Algebra, Topology and Physics" by Markl, Shnider and Stasheff. The short answer is G infinity is the minimal model for the homology of the little disks operad (the G operad). B infinity is an operad of operations on the Hochschild complex. Many of the proofs of Deligne's conjecture involve constructing a map between these two operads. - 1 More precisely, there is an operad map (constructed by Tamarkin, and depending on the choice of an associator) $G_\infty\to B_\infty$. In other words, any $B_\infty$-algebra is a $G_\infty$-algebra. To see this, one has to remember that a $G_\infty$-structure can be expressed in terms of a DG Lie bialgebra structure on a cofree Lie coalgebra. On the other hand a $B_\infty$-algebra structure can be expressed in terms of a DG bialgebra structure on a cofree coassociative algebra. The relation between the two is given by the Etingof-Kazhdan dequantization theorem. – DamienC Apr 25 2011 at 21:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9225919842720032, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/172988-hermitian-matrix-over-c.html
# Thread: 1. ## hermitian matrix over C Does anyone have an example of an nxn hermitian matrix with complex entries which has repeated eigenvalues? If you can, make the n as small as possible. Thank you Sorry for the mistake I made before in my question. 2. Originally Posted by guin Does anyone have an example of an nxn hermitian matrix over the complex numbers which has n distinct eigenvalues? If you can, make the n as small as possible. Thank you If you want a simple one: $\displaystyle \left [ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 5 \end{array} \right ]$ You can add window dressing to it, but it fits what you asked for. Did you want one less "trivial" to work on yourself? -Dan 3. Although the question has already been solved by topsquark, I'd like to add that $D=\textrm{diag}(\lambda_1,\ldots,\lambda_n)\in\mathbb{C}^{n\times n}$ is hermitian iff $\lambda_i\in\mathbb{R}$ for all $i=1,\ldots,n$. So, all diagonal real matrices with $\lambda_i\neq \lambda_j$ for all $i\neq j$ are hermitian and have $n$ distinct eigenvalues.
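For the edited version of the question (complex entries and a repeated eigenvalue), a small concrete example can be checked numerically; the matrix below is one arbitrary choice, built from a $2\times 2$ Hermitian block with eigenvalues 1 and 3 plus a diagonal entry 3.

```python
# A 3x3 Hermitian matrix with genuinely complex entries and a repeated eigenvalue.
import numpy as np

A = np.array([[2, 1j, 0],
              [-1j, 2, 0],
              [0, 0, 3]], dtype=complex)

print(np.allclose(A, A.conj().T))   # True: A equals its conjugate transpose
print(np.linalg.eigvalsh(A))        # [1. 3. 3.] -> the eigenvalue 3 is repeated
```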
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9638739824295044, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/21990/proof-square-matrix-has-maximal-rank-if-and-only-if-it-is-invertible/21994
Proof - Square Matrix has maximal rank if and only if it is invertible Could someone help me with the proof that a square matrix has maximal rank if and only if it is invertible? Thanks to everybody - What is a quadratic matrix? – Qiaochu Yuan Feb 14 '11 at 12:43 @Qiaochu Yuan he obviously means square matrix – Listing Feb 14 '11 at 12:46 Yeah sorry for my english :) – markzzz Feb 14 '11 at 13:27 @user3123: I asked because it sounded like the OP could have been referring to a quadratic form rather than a matrix. – Qiaochu Yuan Feb 14 '11 at 14:01 please make your posts self-contained. Don't rely on the subject: put the entire information on the body. – Arturo Magidin Feb 14 '11 at 14:18 1 Answer Suppose $A\in F^{n \times n}$. If A is invertible then there is a matrix B such that $AB=I$ so the standard basis $e_i$ (the columns of I) is in the image of A (these vectors are just the image Av where v are the columns of B) - this shows that $\dim (Im(A)) = n$. On the other hand, if $\dim (Im (A))=n$ then for every i there is $v_i$ such that $A v_i = e_i$. Let B be the matrix with columns $v_i$ then $AB=I$ and A is invertible. - Nice elementary proof. – Listing Feb 14 '11 at 12:36
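The same equivalence is easy to see numerically: a square matrix of maximal rank has an inverse, while a rank-deficient one does not. A small NumPy illustration (the two matrices are arbitrary examples):

```python
# Maximal rank <-> invertible, illustrated on two 2x2 examples.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # rank 2
B = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank 1 (second row is twice the first)

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))   # 2 1
print(np.allclose(np.linalg.inv(A) @ A, np.eye(2)))         # True: inv(A) exists and inv(A)A = I
try:
    np.linalg.inv(B)
except np.linalg.LinAlgError as err:
    print("B is not invertible:", err)
```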
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9229728579521179, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/170271-uniform-random-multivariate-interval-0-1-a-print.html
uniform random multivariate on interval (0,1) • February 5th 2011, 11:27 AM nikie1o2 uniform random multivariate on interval (0,1) Hi guys, I'm given X, Y, Z are independent uniform random variables on the interval (0,1). The question is to find P(x<y<z). I'm thinking it's a triple integration of f(x,y,z) dxdydz? Not too sure of the bounds though. • February 5th 2011, 01:05 PM theodds Quote: Originally Posted by nikie1o2 Hi guys, I'm given X, Y, Z are independent uniform random variables on the interval (0,1). The question is to find P(x<y<z). I'm thinking it's a triple integration of f(x,y,z) dxdydz? Not too sure of the bounds though. No need to integrate. You can solve this by counting, since $(X < Y < Z) \cup (X < Z < Y) \cup \cdots \cup (Z < Y < X)$ is the whole space (except for a set of measure 0) and each of those events is disjoint and equally likely. Hence, the probability is 1/6. If you did want to integrate, which I wouldn't recommend, the integral would be of this form: $\displaystyle \int_0 ^ 1 \int_0 ^ z \int_0 ^ y \ dx \ dy \ dz$ • February 6th 2011, 12:03 PM nikie1o2 Nice. So you're saying those 6 possibilities x<y<z, x<z<y, z<x<y, z<y<x, y<x<z, y<z<x equal our sample space and there's a 1/6 probability that x<y<z. I get it! The next question I have to answer is what's the P(XY<Z); can I solve that in a similar way? • February 6th 2011, 02:53 PM theodds Quote: Originally Posted by nikie1o2 Nice. So you're saying those 6 possibilities x<y<z, x<z<y, z<x<y, z<y<x, y<x<z, y<z<x equal our sample space and there's a 1/6 probability that x<y<z. I get it! The next question I have to answer is what's the P(XY<Z); can I solve that in a similar way? That one I integrate, although if you apply the same idea I initially used it makes things a bit easier. First, I would calculate 1 - P(XY > Z) and note that $(0 < Z < XY < X < Y) \cup (0 < Z < XY < Y < X)$ is equivalent to (XY > Z) (except on a set of probability 0); moreover we have a disjoint union of equally likely sets, so it suffices to calculate $1 - 2 \cdot \int_0 ^ 1 \int_0 ^ x \int_0 ^ {xy} \ dz \ dy \ dx.$ Another approach would be to take the -log of both sides. You end up needing to find P(Gamma(2, 1) > Exponential(1)) with this method, which is easy enough. There may be an easier way that I'm not seeing that lets you get this immediately. • February 7th 2011, 05:36 PM nikie1o2 OK, I'm just confused: when you take the complement of P(XY<Z), why does it equal 1-P(Z<XY)? Is that a typo? Or am I just not seeing why that's correct? • February 7th 2011, 07:04 PM theodds Quote: Originally Posted by nikie1o2 OK, I'm just confused: when you take the complement of P(XY<Z), why does it equal 1-P(Z<XY)? Is that a typo? Or am I just not seeing why that's correct? No typo, that is what it is. Clearly $XY < Z$ or $XY \ge Z$, correct? I can make the inequality strict because equality happens on a set of probability 0; it doesn't matter at all, but I made the inequalities strict so that I could get a disjoint union. Both the methods I listed give the same answer, which leads me to believe I'm not making a stupid mistake.
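Both probabilities discussed in the thread are easy to confirm by simulation; the triple integral above evaluates to $1 - 2\cdot\tfrac{1}{8} = \tfrac{3}{4}$, and a quick Monte Carlo check agrees. A minimal sketch:

```python
# Monte Carlo check of P(X < Y < Z) = 1/6 and P(XY < Z) = 3/4
# for independent Uniform(0,1) random variables.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x, y, z = rng.random(n), rng.random(n), rng.random(n)

print("P(X < Y < Z) ~", np.mean((x < y) & (y < z)))   # about 0.1667
print("P(XY < Z)    ~", np.mean(x * y < z))           # about 0.75
```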
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9361312985420227, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/76608/chain-complexes-of-vector-bundles
## Chain complexes of vector bundles In his paper "Categories and cohomology theories" Graeme Segal considers the category of finite length chain complexes of finite dimensional vector spaces: Let $n = (n_i)_{i \in \mathbb{Z}}$ be a sequence of non-negative integers, almost all zero. Then he claims that the space $K_n$ of chain complexes $E$ with $E^i = \mathbb{R}^{n_i}$ is a real algebraic variety. I am no algebraic geometer, but I suspect this to be similar to flag varieties somehow. So my first question is: What is the topology on $K_n$? As morphisms in the above mentioned category he takes chain homotopy equivalences between these complexes and claims that these also form a topological space. So: What is the topology on Mor($E,F$) for two chain complexes $E$ and $F$ as described above? - $K_n$ is the subset of $\oplus_{i}{\rm Hom}_{\bf R}(E_i,E_{i+1})= \oplus_i M^{n_{i+1}\times n_{i}}({\bf R})$ (where $M^{n_{i+1}\times n_{i}}({\bf R})$ are the $n_{i+1}\times n_{i}$-matrices with real coefficients) consisting of the elements $(d_i)$ such that $d_{i+1}\circ d_i=0$. So this suggests that the topology should be induced by the natural topology of the real vector space $\oplus_i M^{n_{i+1}\times n_{i}}({\bf R})$. – Damian Rössler Sep 28 2011 at 7:45 ## 1 Answer The space $K_n$ sits inside the space of sequences of linear maps $$L_n = \Pi_i Hom(E^i,E^{i+1}).$$ This is just a space of sequences of matrices, so it is a real vector space of dimension $\sum_i (n_i \cdot n_{i+1})$. We give it the usual euclidean topology for real vector spaces. The subspace $K_n$ consists of those sequences of linear maps ${f_i}$ which form a chain complex - i.e., $f_{i+1} \circ f_i = 0$. This condition is polynomial in the entries in the matrices, so $K_n$ is a real algebraic affine subvariety inside $L_n$. We give $K_n$ the subspace topology. The topology on the space of morphisms can be described similarly by embedding the morphism set $Mor(E,F)$ into the space of sequences of linear maps $\Pi_i Hom(E^i,F^i)$, which is again a real vector space. The condition of being a morphism of chain complexes is real algebraic so the morphism space is again a real algebraic affine variety. - Thanks! That was much easier than I thought. – Ulrich Pennig Sep 28 2011 at 8:21
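The answer's description is concrete enough to write down in a couple of lines: a point of $K_n$ is a sequence of matrices $(d_i)$ with $d_{i+1} d_i = 0$ inside the vector space $L_n$. A small numerical illustration, using the arbitrary choice $n = (2, 3, 2)$ (all other $n_i = 0$):

```python
# A point of K_n: matrices d0: R^2 -> R^3 and d1: R^3 -> R^2 with d1 . d0 = 0,
# sitting inside L_n, whose dimension is sum_i n_i * n_{i+1} = 2*3 + 3*2 = 12.
import numpy as np

d0 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0]])
d1 = np.array([[0.0, 0.0, 1.0],
               [0.0, 0.0, 0.0]])

print(np.allclose(d1 @ d0, 0))          # True: the chain complex condition holds
print("dim L_n =", d0.size + d1.size)   # 12
```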
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.915381908416748, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/8452/is-there-an-equation-for-the-strong-nuclear-force
# Is there an equation for the strong nuclear force? The equation describing the force due to gravity is $$F = G \frac{m_1 m_2}{r^2}.$$ Similarly the force due to the electrostatic force is $$F = k \frac{q_1 q_2}{r^2}.$$ 1. Is there a similar equation that describes the force due to the strong nuclear force? 2. What are the equivalent of masses/charges if there is? 3. Is it still inverse square or something more complicated? - ## 5 Answers From the study of the spectrum of quarkonium (bound system of quark and antiquark) and the comparison with positronium one finds as potential for the strong force $V(r) = - \frac{4}{3} \frac{\alpha_s(r) \hbar c}{r} + kr$ where the constant $k$ determines the field energy per unit length and is called string tension. For short distances this resembles the Coulomb law, while for large distances the $kr$ factor dominates (confinement). It is important to note that the coupling $\alpha_s$ also depends on the distance between the quarks. This formula is valid and in agreement with theoretical predictions only for the quarkonium system and its typical energies and distances. For example charmonium: $r \approx 0,4 fm$. So it is not as universal as eg. the gravity law in Newtonian gravity. - . +1 This potential makes more physical sense for quarks since it includes both the QED-like $-1/r$ and the confining $+kr$. – dbrane Apr 11 '11 at 20:48 1 Nice. Of course, the "breaking of the flux tube" has no classical or semi-classical analogue, making this formulation better for hand waving than calculation. – dmckee♦ Apr 11 '11 at 21:19 No, there is none such equation. Reason is that these equations are highly classical and invalid in both relativistic (there is an action at a distance, incompatible with finite speed of light) and quantum mechanical regime (distances strong force is important at are quite microscopic). Also, strong force is confining, meaning you can't ever observe individual color charged particles (color is a property associated with strong force), so there can't really be a macroscopic equation for them. You obviously need at least quantum mechanics to account for strong force, because distances are so tiny (on the scale of nucleus or smaller). But it turns out you need relativity too. The complete theory which incorporates both QM and relativity is called quantum field theory and individual forces are described by QFT Lagrangians which essentially tell you which particles interact with which other particles (e.g. photons with electrically charged particles, gluons with color charged particles, etc.). This is the fundamental theory and the electric force equation you described can be derived from it in classical (both non-QM and nonrelativistic) limit. As for gravitation law, that too can be derived but from different theory, namely general relativity. - Thanks, I guess that the goal is to formulate gravity in the same language of quantum field theory? That is the goal of stringtheory and other such unification theories? – ergodicsum Apr 11 '11 at 18:21 @ergodicsum: yep, pretty much. (Either that, or formulate the standard model in the language of GR, or formulate both in some new theoretical framework yet to be discovered) – David Zaslavsky♦ Apr 11 '11 at 18:28 @ergodicsum: that would be the intuitive proposition, right. But it turns out gravity doesn't play well with QFT in the way other forces do. So the language will probably be of some other theory (e.g. string theory) from which both QFT and GR can be derived in some limits. 
– Marek Apr 11 '11 at 18:28 4 Well, it's still true that at short distances, much shorter than a Fermi, and in the non-relativistic limit, the strong force is still governed by the Coulomb's law. It's not a terribly useful limit for the strong force but it is misleading to suggest that the strong force is something "entirely different". – Luboš Motl Apr 11 '11 at 18:38 @Lubos Motl thanks for your clarification. My intuition told me something like that should be true, but my intuition is often wrong :). – anna v Apr 11 '11 at 19:11 show 1 more comment At the level of quantum hadron dynamics (i.e. the level of nuclear physics, not the level of particle physics where the real strong force lives) one can talk about a Yukawa potential of the form $$V(r) = - \frac{g^2}{4 \pi c^2} \frac{e^{-mr}}{r}$$ where $m$ is roughly the pion mass and $g$ is an effective coupling constant. To get the force related to this you would take the derivative in $r$. This is a semi-classical approximation, but it is good enough that Walecka uses it breifly in his book. - Let me add one obvious thing: There is an exact equation for the strong force. It is what Gross, Politzer and Wilczek got the Nobel prize for. It is called quantum chromodynamics (QCD). Google it or look it up in Wikipedia, and you can see the Lagrangian for QCD, and compare it to the Lagrangian for electrodynamics. Of course, you could argue about the similarities and differences of a Lagrangian, and a force equation, such as your two examples. - The strong force as seen in nuclear matter The nuclear force, is now understood as a residual effect of the even more powerful strong force, or strong interaction, which is the attractive force that binds particles called quarks together, to form the nucleons themselves. This more powerful force is mediated by particles called gluons. Gluons hold quarks together with a force like that of electric charge, but of far greater power. Marek is talking of the strong force that binds the quarks within the protons and neutrons. There are charges, called colored charges on the quarks, but protons and neutrons are color neutral. Nuclei are bound by the interplay between the residual strong force , the part that is not shielded by the color neutrality of the nucleons, and the electromagnetic force due to the charge of the protons. That also cannot be simply described. Various potentials are used to calculate nuclear interactions. Simplicity and similarity of form for all forces comes not in the formalism of forces, but as Marek said the formalism of quantum field theory. -
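To make the two closed-form expressions above concrete, here is a minimal Python sketch that evaluates the quarkonium ("Cornell") potential from the first answer and the Yukawa form from the later answer. The numerical values of $\alpha_s$, the string tension $k$, the coupling strength and the pion range are illustrative assumptions only, and $\alpha_s$ is treated as a constant even though, as noted above, it actually depends on the distance.

```python
import math

hbar_c = 0.1973      # GeV * fm
alpha_s = 0.3        # treated as constant here; in reality it runs with r (simplifying assumption)
k = 0.9              # GeV / fm -- an illustrative ballpark for the string tension, not a fitted value

def cornell_potential(r_fm):
    """Quarkonium ('Cornell') potential from the first answer, in GeV."""
    return -4.0 / 3.0 * alpha_s * hbar_c / r_fm + k * r_fm

g2_over_4pi = 14.0   # illustrative pion-nucleon coupling strength (assumed value)
pion_range = 1.4     # fm, roughly hbar / (m_pi * c)

def yukawa_shape(r_fm):
    """Shape of the Yukawa potential for the residual nuclear force (arbitrary overall units)."""
    return -g2_over_4pi * math.exp(-r_fm / pion_range) / r_fm

for r in [0.1, 0.4, 1.0, 2.0]:
    print(f"r = {r:3.1f} fm   Cornell: {cornell_potential(r):+7.3f} GeV   "
          f"Yukawa shape: {yukawa_shape(r):+8.3f}")
```

The printout shows the Coulomb-like (negative, steep) behaviour at short distance and the linear confining growth at larger distance, which is the qualitative point of the first answer.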
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9396395683288574, "perplexity_flag": "middle"}
http://johncarlosbaez.wordpress.com/2012/05/30/fluid-flows-and-infinite-dimensional-manifolds-part-3/
# Azimuth ## Fluid Flows and Infinite Dimensional Manifolds (Part 3) ### Or: Twisting on the Borderline guest post by Tim van Beek In Part 2 of this series, I told you what ideal incompressible fluids are. Then I explained how the equation of motion for such fluids, Euler’s equation, can be seen as the equation for geodesics in $\mathrm{SDiff}(M)$, the infinite dimensional manifold of volume preserving diffeomorphisms. Last time I promised to talk about the role of pressure in Euler’s equation. I also mentioned that Arnold used this geometric setup to put a bound on how good weather forecasts can be. I will try to fulfill both promises in the next blog post! But last time I also mentioned that the ideal fluid has serious drawbacks as a model. This is an important topic, too, so I would like to explain this a little bit further first, in this blog post. So, this time we will talk a little bit about how we can get viscosity, and therefore turbulence, back into the picture. This will lead us to the Navier–Stokes equation. Can ideal fluids, which solve Euler’s equation, model fluids with a very small viscosity? This depends on what happens to solutions when one lets the viscosity go to zero in the Navier–Stokes equation, so I will show you a result that answers this question in a specific context. I’ll also throw in a few graphics that illustrate the transition from laminar flow to turbulent flow at boundaries, starting with the one above. These are all from: • Milton Van Dyke, An Album of Fluid Motion, Parabolic Press, 12th edition, 1982. ### Re-introducing viscosity: The Navier-Stokes equation The motion of an incompressible, homogeneous, ideal fluid is described by Euler’s equation: $\displaystyle{ \partial_t u + u \cdot \nabla u = - \nabla p }$ Ideal fluids are very nice mathematically. Nicest of all are potential flows, where the velocity vector field is the gradient of a potential. In two dimensions can be studied using complex analysis! One could say that a whole ‘industry’ evolved around the treatment of these kinds of fluid flows. It was even taught to some extend to engineers, before computers took over. Here’s a very nice, somewhat nostalgic book to read about that: • L. M. Milne-Thomson, Theoretical Aerodynamics, 4th edition, Dover Publications, New York, 2011. (Reprint of the 1958 edition.) The assumption of ‘incompressibility’ is not restrictive for most applications involving fluid flows of water and air, for example. Maybe you are a little bit surprised that I mention air, because the compressibility of air is a part of every day life, for example when you pump up a cycle tire. It is, however, not necessary to include this property when you model air flowing at velocities that are significantly lower than the speed of sound in air. The rule of thumb for engineers seems to be that one needs to include compressibility for speeds around Mach 0.3 or more: • Compressible aerodynamics, Wikipedia. However, the concept of an ‘ideal’ fluid takes viscosity out of the picture—and therefore also turbulence, and the drag that a body immersed in fluid feels. As I mentioned last time, this is called the D’Alembert’s paradox. The simplest way to introduce viscosity is by considering a Newtonian fluid. This is a fluid where the viscosity is a constant, and the relation of velocity differences and resulting shear forces is strictly linear. 
This leads to the the Navier–Stokes equation for incompressible fluids: $\displaystyle{ \partial_t u + u \cdot \nabla u - \nu \Delta u = - \nabla p }$ If you think about molten plastic, or honey, you will notice that the viscosity actually depends on the temperature, and maybe also on the pressure and other parameters, of the fluid. The science that is concerned with the exploration of these effects is called rheology. This is an important research topic and the reason why producers of, say, plastic sheets, sometimes keep physicists around. But let’s stick to Newtownian fluids for now. ### Sending Viscosity to Zero: Boundary Layers Since we get Euler’s equation if we set $\nu = 0$ in the above equation, the question is, if in some sense or another solutions of the Navier-Stokes equation converge to a solution of Euler’s equation in the limit of vanishing viscosity? If you had asked me, I would have guessed: No. The mathematical reason is that we have a transition from a second order partial differential equation to a first order one. This is usually called a singular perturbation. The physical reason is that a nonzero viscosity will give rise to phenomena like turbulence and energy dissipation that cannot occur in an ideal fluid. Well, the last argument shows that we cannot expect convergence for long times if eddies are present, so there certainly is a loophole. The precise formulation of the last statement depends on the boundary conditions one chooses. One way is this: Let $u$ be a smooth solution of Euler’s equation in $\mathbb{R}^3$ with sufficiently fast decay at infinity (this is our boundary condition), then the kinetic energy $E$ $\displaystyle{ E = \frac{1}{2} \int \| u \|^2 \; \mathrm{d x} }$ is conserved for all time. This is not the only conserved quantity for Euler’s equation, but it’s a very important one. But now, suppose $u$ is a smooth solution of the Navier–Stokes equation in $\mathbb{R}^3$ with sufficiently fast decay at infinity. In this case we have $\displaystyle{ \frac{d E}{d t} = - \nu \int \| \nabla \times u \|^2 \mathrm{d x} }$ So, the presence of viscosity turns a conserved quantity into a decaying quantity! Since the 20th century, engineers have taken these effects into account following the idea of ‘boundary layers’ introduced by Ludwig Prandtl, as I already mentioned last time. Actually the whole technique of singular perturbation theory has been developed following this ansatz. This has become a mathematical technique to get asymptotic expansions of solutions of complicated nonlinear partial differential equations. The idea is that the concept of ‘ideal’ fluid is good except at boundaries, where effects due to viscosity cannot be ignored. This is true for a lot of fluids like air and water, which have a very low viscosity. Therefore one tries to match a solution describing an ideal fluid flow far away from the boundaries with a specific solution for a viscous fluid with prescribed boundary conditions valid in a thin layer on the boundaries. This works quite well in applications. One of the major textbooks about this topic has been around for over 60 years now and has reached its 10th edition in German. It is: • H. Schlichting and K. Gersten: Boundary-Layer Theory, 8th edition, Springer, Berlin, 2000. Since I am also interested in numerical models and applications in engineering, I should probably read it. (I don’t know when the 10th edition will be published in English.) 
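As a small numerical check of the energy-decay identity above, here is a sketch using the two-dimensional Taylor–Green vortex, an exact Navier–Stokes solution on a periodic box. The post states the identity in $\mathbb{R}^3$, but the analogous 2D statement is easier to test on a grid; the vorticity is entered analytically rather than computed by finite differences, and the grid size and viscosity are arbitrary choices.

```python
import numpy as np

# 2-D Taylor-Green vortex on the periodic box [0, 2*pi]^2:
#   u = cos(x) sin(y) exp(-2 nu t),  v = -sin(x) cos(y) exp(-2 nu t)
# Its kinetic energy is E(t) = pi^2 exp(-4 nu t), so dE/dt = -4 nu pi^2 at t = 0.
# The identity to check: dE/dt = -nu * integral of |curl u|^2.

nu = 0.01
N = 256
xs = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing='ij')
dA = (2.0 * np.pi / N) ** 2

omega = -2.0 * np.cos(X) * np.cos(Y)   # scalar vorticity (curl) of the field at t = 0

enstrophy_integral = np.sum(omega ** 2) * dA
print("-nu * int |curl u|^2 :", -nu * enstrophy_integral)   # ~ -4 nu pi^2
print("analytic dE/dt       :", -4.0 * nu * np.pi ** 2)
```

The two printed numbers agree, which is just the statement that the viscous term drains kinetic energy at the rate given above.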
### Sending Viscosity to Zero: Convergence Results However, this approach does not tell us under what circumstances we can expect convergence of solutions $u_{\nu}$ to the viscous Navier–Stokes equations with viscosity $\nu > 0$ to a solution $u_{0}$ of Euler’s equation with zero viscosity. That is, are there such solutions, boundary and initial conditions and a topology on an appropriate topological vector space, such that $\lim_{\nu \to 0} u_{\nu} = u_{0} \; \text{?}$ I asked this question over on Mathoverflow and got some interesting answers. Obviously, a lot of brain power has gone into this question, and there are both interesting positive and negative results. As an example, let me describe a very interesting positive result. I learned about it from this book: • Andrew J. Majda and Andrea L. Bertozzi, Vorticity and Incompressible Flow, Cambridge University Press, Cambridge, 2001. It’s Proposition 3.2 in this book. There are three assumptions that we need to make in order for things to work out: • First, we need to fix an interval $[0, T]$ for the time. As mentioned above, we should not expect that we can get convergence for an unbounded time interval like $[0, \infty].$ • Secondly, we need to assume that solutions $u_{\nu}$ of the Navier–Stokes equation and a solution $u_0$ of Euler’s equation exist and are smooth. • Thirdly we will dodge the issue of boundary layers by assuming that the solutions exist on the whole of $\mathbb{R}^3$ with sufficiently fast decay. As I already mentioned above, a viscous fluid will of course show very different behavior at a boundary than an ideal (that is, nonviscous) one. Our third assumption means that there is no such boundary layer present. We will denote the $L^{\infty}$ norm on vector fields by $\| \cdot \|$ and use the big O notation. Given our three assumptions, Proposition 3.2 says that: $\displaystyle{ \mathrm{sup}_{0 \le t \le T} \; \| u_{\nu} - u_0 \| = O(\nu) }$ It also gives the convergence of the derivatives: $\displaystyle{ \int_0^T \| \nabla (u_{\nu} - u_0) \| \; d t = O(\nu^{1/2}) }$ This result illustrates that the boundary layer ansatz may work, because the ideal fluid approximation could be a good one away from boundaries and for fluids with low viscosity like water and air. So, when John von Neumann joked that “ideal water” is like “dry water”, I would have said: “Well, that is half right”. This entry was posted on Wednesday, May 30th, 2012 at 8:46 am and is filed under mathematics, physics. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site. ### 9 Responses to Fluid Flows and Infinite Dimensional Manifolds (Part 3) 1. John Baez says: Nice post! You wrote: The idea is that the concept of ‘ideal’ fluid is good except at boundaries, where effects due to viscosity cannot be ignored. This is true for a lot of fluids like air and water, which have a very low viscosity. Therefore one tries to match a solution describing an ideal fluid flow far away from the boundaries with a specific solution for a viscous fluid with prescribed boundary conditions valid in a thin layer on the boundaries. What are these boundary conditions like? My guess is that for a viscous fluid, the velocity of its flow at a boundary should be zero. Obviously the normal component of the velocity should vanish, but due to the ‘stickiness’, I’d guess the tangential component should vanish too. 
A practical website on nuclear power plant design shows this picture: Obviously for turbulent flow this velocity profile must show some sort of time-averaged velocity. 2. Tim van Beek says: The boundary condition is that the velocity at the wall is zero (relative to the wall). The velocity profile that we see in the graphic is one of the examples where the Naviers-Stokes equations have a solution in closed form, it is called “Hagen-Poiseuille” flow. This solution describes laminar flow only. It is an experimental fact that this laminar flow can turn into a turbulent flow, though. This is called in German “Rohrströmungsparadoxon” (I can try to look up the English term later :-) I believe that the profile that is labeled “Turbulent Flow” is actually the laminar Hagen-Poiseuille flow at the Reynolds number that marks the experimentally determined transition to turbulent flow. 3. Tim van Beek says: BTW: The boundary layer is described by the “Prandtl boundary layer equations”. 4. Dmitri Manin says: There are a number of other boundary layers in other interesting setups. The one I’m most familiar with is the Ekman boundary layer, which occurs in rotating fluids. Imagine a saucepan on a turntable that is rotating at 33 rpm as a solid, and then the turntable is switched to 45 rpm. While the water is adjusting to the new rotation rate, it continues in an almost solid rotation, except for a thin boundary layer at the bottom, where there is adjustment in the rotational velocity, and radial outflow (because of centrifugal force) modified by Coriolis effect. • Tim van Beek says: Dumb question: Is the Ekman boundary layer a special kind of boundary layer solution but with the usual boundary condition that the fluid velocity at the boundary is zero? Or has it a different boundary condition? • nick says: It depends on which Ekman layer you are referring to! For instance, the Ekman layer most commonly discussed in Oceanography 1 happens at the surface of the ocean, where the boundary condition is a prescribed surface (tangential) stress, parametrizing forcing (usually taken to be a constant wind). This phenomena has important implications for transport, since integration of the velocity profiles for all depths implies a non zero mass transport at $\pm \pi/2$ to the direction of the wind, (where the sign depends on what hemisphere you are in). An example of this is quite common during this time of year in California. Wind blows from the north, this leads to a transport of water to the right of the wind (f is greater than zero in the northern hemisphere) which causes surface waters to move offshore. Mass conservation then implies that water from depth moves up to replace the transported water, which tends to be much colder. Besides having biological implications, this makes surf sessions a bit chillier. See for instance, here http://en.wikipedia.org/wiki/Ekman_layer It is interesting to note that a phenomena so easy to mathematically model has been so difficult to observe. Indeed, Ekman presented his model in 1905, yet it was only very recently (somewhere in the ballpark of the 70s/80s) for the field techniques to reach a state where they could accurately measure the velocity profile in the Ekman layer. What people tend to observe is an “Ekman Spiral” that’s been flattened out vs what the model predicts. Some possible explanations include the additional vortex force due to the interaction of the Stokes drift and the Coriolis force and the fact that we are assuming the eddy viscosity is constant with depth. 
• Tim van Beek says: Thanks. I should have remembered the name Ekman from the book I browsed some time ago, * Vallis: “Atmospheric and Oceanic Fluid Dynamics” 5. Robert says: You also need sufficiently fast decay to avoid viscosity becoming significant at large distances, which is pretty much the opposite of normal boundary layers. Basically, near an object in an infinite fluid, the appropriate length scale is the object’s diameter, but at large distances, much greater than the diameter, the length scale is set by the distance from the object. If the velocity scale at large distances doesn’t decay fast enough, the Reynolds number will increase towards infinity, meaning that viscosity becomes dominant. As I recall, this happens with flow past an infinite cylinder, though I don’t have references to hand. • Tim van Beek says: That looks like a mathematical artefact that is of little importance for practical purposes: just the right stuff to get mathematicians interested :-) I’ll see if I can find a reference…
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 25, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9398735761642456, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/31552/dirac-equation-as-canonical-quantization/31583
# Dirac equation as canonical quantization? First of all, I'm not a physicist, I'm a mathematics PhD student, but I have one elementary physical question and was not able to find an answer in standard textbooks. The motivation is quite simple: let me fix some finite dimensional vector space $V$. Then we can think about the Clifford algebra $Cl(V \oplus V^*)$ as the algebra of odd differential operators on $\bigwedge V$, i.e. as the canonical quantization of the algebra of classical fermionic observables. ($\bigwedge V$ is an analog of functions on a space and $Cl(V \oplus V^*)$ is an analog of a Weil algebra.) On the other hand, the action of a Clifford algebra is used in the construction of the Dirac operator. Thus my question: can one write a Lagrangian for a classical fermion on the space-time $\mathbb{R}^{3,1}$ such that canonical quantization of such a system gives the Dirac operator as the quantum Hamiltonian? - a related question: With the Dirac Hamiltonian $H=\int d^3 x (i\bar{\psi}\gamma^i \partial_i \psi +m \bar{\psi}\psi)$ how does one cast this into a form that clearly shows the constraints? I mean the theory is invariant under certain transformations, and the constraints should be the generators for these symmetries; how do I expose them in the Hamiltonian formulation? – kηives Aug 7 '12 at 5:20 @kηives: Through the relation $\bar{\psi} = (\gamma_0 \psi)^\dagger$. – Ron Maimon Aug 7 '12 at 22:22 A nitpick--- you shouldn't use "classical fermionic observables", they aren't "observables" because they are fermionic. You should say "classical fermionic variables" instead. The answer to your question is yes, but this point of view is not new, it is how people standardly do higher dimensional Dirac operators. – Ron Maimon Aug 9 '12 at 8:18 ## 2 Answers The mathese in your question makes it difficult to understand; it is best to be more concrete rather than abstract. The answer is yes, this is how higher dimensional Dirac operators are standardly constructed. If you have the Dirac algebra (Clifford algebra) on a 2n-dimensional space $$\{ \gamma_\mu, \gamma_\nu \} = 2 g_{\mu\nu}$$ say Euclidean, then you can split the space-coordinates into even and odd pairs, and define the raising and lowering operators: $$\sqrt{2}\gamma^-_{i} = \gamma_{2i} + i \gamma_{2i+1}$$ $$\sqrt{2}\gamma^+_{i} = \gamma_{2i} - i \gamma_{2i+1}$$ These anticommute, and obey the usual fermionic raising and lowering operator algebra; you can define a 0 dimensional fermionic system for which the state space is the spin-states. The state space the gamma matrices act on can be labelled starting with the spin-state called |0>, which is annihilated by all the lowering operators, and the other states are found using raising operators applied to |0>. Then the Dirac Hamiltonian is automatically a Hamiltonian defined on a system consisting of a particle at position x, and a fermionic variable going across the finite dimensional state space of spin-states. - This is not an answer to your question but a simpler analog of the question you asked: Consider the 1 dimensional case. Take your phase space to be $T^*R$ (cotangent bundle of the real line), and your 'velocity space' to be $TR$. In this case the Dirac operator is $-i\partial/\partial x$ (I got this cool information from Wikipedia:) which acts on the space of complex valued functions on $R$. The classical analog of this operator would be the "momentum function" on $T^*R$ (right?). Now apply the rules of the Legendre transformation to see what is the corresponding Lagrangian on $TR$.
What you will find is that Legendre transformation is not well defined. I think the same will be the case in higher dimensions but I am not sure. -
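The raising/lowering construction in the first answer is easy to check numerically. Below is a minimal Python/NumPy sketch; the particular $4\times 4$ Euclidean representation of the gamma matrices (tensor products of Pauli matrices) is just one convenient choice and is not taken from the answer. Note that with the $\sqrt{2}$ normalization used above, the anticommutator $\{\gamma^-_i, \gamma^+_i\}$ comes out as $2$ rather than $1$.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# One Hermitian representation of the Euclidean Clifford algebra in 4 dimensions:
# {gamma_mu, gamma_nu} = 2 delta_{mu nu}
gammas = [np.kron(sx, sx), np.kron(sy, sx), np.kron(sz, sx), np.kron(I2, sy)]

def anticomm(a, b):
    return a @ b + b @ a

# Check the Clifford relations
for mu in range(4):
    for nu in range(4):
        assert np.allclose(anticomm(gammas[mu], gammas[nu]), 2 * (mu == nu) * np.eye(4))

# Raising/lowering combinations as in the answer, pairing the axes (0,1) and (2,3)
for a, b in [(0, 1), (2, 3)]:
    lower = (gammas[a] + 1j * gammas[b]) / np.sqrt(2)
    raise_ = (gammas[a] - 1j * gammas[b]) / np.sqrt(2)
    assert np.allclose(lower @ lower, 0)    # nilpotent
    assert np.allclose(raise_ @ raise_, 0)
    # equals 2*I with the sqrt(2) normalization (it would be the identity with a factor 1/2)
    assert np.allclose(anticomm(lower, raise_), 2 * np.eye(4))

print("Clifford and fermionic oscillator relations verified")
```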
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9164053797721863, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/10330/thermal-energy-while-calculating-langevin-forces
# thermal energy while calculating Langevin Forces I have a quick question from thermodynamics. I remember that we take kT/2 as the kinetic energy per degree of freedom in the kinetic theory of gases. But when we work with Langevin forces (for example in cavity dynamics), do we take the average thermal energy as kT/2 or kT? - ## 1 Answer The average thermal energy is $kT$ for each harmonic oscillator, which is split equally between $kT/2$ kinetic and $kT/2$ potential. The average kinetic energy in a nonrelativistic system (or one with a quadratic kinetic energy) is always $kT/2$; for a quadratic potential, you get an equal potential contribution, while for a confining box-potential you get no potential energy contribution on average, because the potential acts in a negligible fraction of the total trajectory. To find the Langevin dynamics appropriate to a given system coupled to a thermal bath, the coefficient of the thermal noise is determined by the condition that the Boltzmann distribution is stationary. Whether this is $1/2 kT$ or $kT$ depends on the Boltzmann distribution in question, but it's always known, since you know the energy as a function of the coordinates and velocities (or field values). Once you find the coefficient of the Brownian noise, you make a Smoluchowski approximation to find the pure Brownian limit of long times. See this answer for more detail on this limit: Cross-field diffusion from Smoluchowski approximation. -
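As an illustration of the answer's point that the noise coefficient is fixed by requiring the Boltzmann distribution to be stationary, here is a minimal sketch of an overdamped Langevin simulation of a harmonic oscillator. All parameter values are arbitrary placeholders; the check is that the average potential energy comes out near $kT/2$, consistent with equipartition for a quadratic potential.

```python
import numpy as np

rng = np.random.default_rng(0)

kB_T = 1.0        # thermal energy k_B * T (units chosen so it equals 1)
kappa = 3.0       # spring constant of the harmonic potential V(x) = kappa * x**2 / 2
gamma = 1.0       # friction coefficient
dt = 1e-3
n_steps = 500_000

# Overdamped Langevin: dx = -(kappa/gamma) x dt + sqrt(2 kB_T dt / gamma) * xi
# The noise prefactor is exactly what makes exp(-V(x)/kB_T) the stationary distribution.
noise_amp = np.sqrt(2.0 * kB_T * dt / gamma)

x = 0.0
samples = np.empty(n_steps)
for n in range(n_steps):
    x += -(kappa / gamma) * x * dt + noise_amp * rng.normal()
    samples[n] = x

mean_potential = 0.5 * kappa * np.mean(samples[n_steps // 10:] ** 2)  # discard initial transient
print(mean_potential)   # should come out close to kB_T / 2 = 0.5
```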
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8830277323722839, "perplexity_flag": "head"}
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Irreducible_polynomial
# Irreducible polynomial In mathematics, the adjective irreducible means that an object cannot be expressed as a product of at least two non-trivial factors in a given ring. See also factorization. For any field F, the ring of polynomials with coefficients in F is denoted by F[x]. A polynomial p(x) in F[x] is called irreducible over F if it is non-constant and cannot be represented as the product of two or more non-constant polynomials from F[x]. This definition depends on the field F. Some simple examples will be discussed below. Galois theory studies the relationship between a field, its Galois group, and its irreducible polynomials in depth. Interesting and non-trivial applications can be found in the study of finite fields. It is helpful to compare irreducible polynomials to prime numbers: prime numbers (together with the corresponding negative numbers of equal modulus) are the irreducible integers. They exhibit many of the general properties of the concept 'irreducibility' that equally apply to irreducible polynomials, such as the essentially unique factorization into prime or irreducible factors: every polynomial p(x) in F[x] can be factorized into polynomials that are irreducible over F. This factorization is unique up to permutation of the factors and multiplication of the factors by constants from F. ## Simple examples The following three polynomials demonstrate some elementary properties of reducible and irreducible polynomials: $p_1(x)=x^2-4\,=(x-2)(x+2)$, $p_2(x)=x^2-2\,=(x-\sqrt{2})(x+\sqrt{2})$, $p_3(x)=x^2+1\,=(x-i)(x+i)$. Over the field Q of rational numbers, the first polynomial $p_1(x)$ is reducible, but the other two polynomials are irreducible. Over the field R of real numbers, the two polynomials $p_1(x)$ and $p_2(x)$ are reducible, but $p_3(x)$ is still irreducible. Over the field C of complex numbers, all three polynomials are reducible. In fact over C, every non-constant polynomial can be factored into linear factors $p(z)=a_n (z-z_1)(z-z_2)\cdots(z-z_n)$ where $a_n$ is the leading coefficient of the polynomial and $z_1,\ldots,z_n$ are the zeros of p(z). Hence, all irreducible polynomials are of degree 1. This is the Fundamental theorem of algebra. Note: the existence of an essentially unique factorization $p_3(x) = x^2 + 1 = (x - i)(x + i)$ of $p_3(x)$ into factors that do not belong to Q[x] implies that this polynomial is irreducible over Q: there cannot be another factorization. These examples demonstrate the relationship between the zeros of a polynomial (solutions of an algebraic equation) and the factorization of the polynomial into linear factors. The existence of irreducible polynomials of degree greater than one (without zeros in the original field) historically motivated the extension of that original number field so that even these polynomials can be reduced into linear factors: from rational numbers to real numbers and further to complex numbers.
For algebraic purposes, the extension from rational numbers to real numbers is often too 'radical': It introduces transcendental numbers (that are not the solutions of algebraic equations with rational coefficients). These numbers are not needed for the algebraic purpose of factorizing polynomials (but they are necessary for the use of real numbers in analysis). Thus, there is a purely algebraic process to extend a given field F with a given polynomial p(x) to a larger field where this polynomial p(x) can be reduced into linear factors. The study of such extensions is the starting point of Galois theory. ### Generalization If R is an integral domain, an element f of R which is neither zero nor a unit is called irreducible if there are no non-units g and h with f = gh. One can show that every prime element is irreducible; the converse is not true in general but holds in unique factorization domains. The polynomial ring F[x] over a field F is a unique factorization domain.
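The three example polynomials above can be checked directly with a computer algebra system. Here is a short SymPy sketch: `factor` works over the rationals by default, and the `extension` and `gaussian` keywords adjoin $\sqrt{2}$ and $i$ respectively, mimicking the field extensions discussed in the article.

```python
from sympy import symbols, factor, sqrt

x = symbols('x')

p1 = x**2 - 4
p2 = x**2 - 2
p3 = x**2 + 1

# Over Q (the default domain for integer-coefficient input):
print(factor(p1))                     # (x - 2)*(x + 2)   -> reducible over Q
print(factor(p2))                     # x**2 - 2          -> irreducible over Q
print(factor(p3))                     # x**2 + 1          -> irreducible over Q

# Adjoining sqrt(2) (a step toward R) makes p2 split:
print(factor(p2, extension=sqrt(2)))  # (x - sqrt(2))*(x + sqrt(2))

# Adjoining i (a step toward C) makes p3 split:
print(factor(p3, gaussian=True))      # (x - I)*(x + I)
```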
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8939692378044128, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?s=4482fb7b33faf904df250768cec326f2&p=4260031
Physics Forums ## Divergence question I see this identity in a mathematics book: $$div \vec{A}(r)=\frac{\partial \vec{A}}{\partial r} \cdot grad\, r$$ How? From which equation does it follow? Recognitions: Homework Help what does ##gradr## mean? Do you mean: $$\text{div}(\vec{A}(r)) = \frac{\partial\vec{A}(r)}{\partial r}\text{grad}(r)$$ ... still not sure it makes sense. But it looks a bit like it might be a special case of the grad operator in spherical or polar coordinates. I'm afraid you'll have to provide the reference - the book could simply be wrong. Is r a vector or not? Either way it seems that there is a problem with that equation. Recognitions: Gold Member Science Advisor Staff Emeritus Assuming that r is the distance from (0, 0) to (x, y) then that equation is correct and is just the chain rule (although, strictly speaking, that partial derivative ought to be an ordinary derivative since A is assumed to be a function of r only). Yes. But I'm not sure why that is correct. Could you explain it to me? Recognitions: Gold Member Science Advisor Staff Emeritus As I said, it is the chain rule. We have $r= \sqrt{x^2+ y^2+ z^2}$ so that $\partial r/\partial x= x(x^2+ y^2+ z^2)^{-1/2}= x/r$, $\partial r/\partial y= y(x^2+ y^2+ z^2)^{-1/2}= y/r$, $\partial r/\partial z= z(x^2+ y^2+ z^2)^{-1/2}= z/r$. So $grad\, r= (x\vec{i}+ y\vec{j}+ z\vec{k})/r$. If we write $\vec{A}(r)= A_1(r)\vec{i}+ A_2(r)\vec{j}+ A_3(r)\vec{k}$ then $d\vec{A}(r)/dr= (dA_1/dr)\vec{i}+ (dA_2/dr)\vec{j}+ (dA_3/dr)\vec{k}$ and $(d\vec{A}/dr)\cdot grad\, r= (x(dA_1/dr)+ y(dA_2/dr)+ z(dA_3/dr))/r$. On the left, the chain rule gives $\partial A_1/\partial x= (dA_1/dr)(\partial r/\partial x)= x(dA_1/dr)/r$, and similarly for the other two components, so $div\, \vec{A}(r)= \partial A_1/\partial x+ \partial A_2/\partial y+ \partial A_3/\partial z= (x(dA_1/dr)+ y(dA_2/dr)+ z(dA_3/dr))/r$, which is exactly the right-hand side above.
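The identity can also be checked symbolically for a concrete test field. The following SymPy sketch uses the arbitrary choice $\vec{A}(r) = (r^2, e^{-r}, 1/r)$ (any smooth functions of $r$ alone would do) and confirms that $div\, \vec{A}(r)$ and $(d\vec{A}/dr)\cdot grad\, r$ agree.

```python
from sympy import symbols, sqrt, exp, diff, simplify, Matrix

x, y, z = symbols('x y z', positive=True)
r = sqrt(x**2 + y**2 + z**2)

# A concrete test field whose components depend on position only through r:
#   A(r) = (r**2, exp(-r), 1/r)
A = Matrix([r**2, exp(-r), 1/r])

div_A = diff(A[0], x) + diff(A[1], y) + diff(A[2], z)

# Right-hand side: (dA/dr) . grad(r), computed by differentiating in a scalar s, then substituting s -> r
s = symbols('s', positive=True)
A_of_s = Matrix([s**2, exp(-s), 1/s])
dA_dr = A_of_s.diff(s).subs(s, r)
grad_r = Matrix([diff(r, x), diff(r, y), diff(r, z)])

print(simplify(div_A - dA_dr.dot(grad_r)))   # prints 0
```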
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9178407788276672, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/85290/simple-log-inequality-cal-1/85305
# Simple log inequality (CAL 1) I am an amateur when it comes to math. I am currently taking CAL 1 and have a question about one of my assignments. Any help is appreciated. Let $g(x)=x^3-x^3-2x$ and $f(x) = \ln(g(x))$. I have to find the domain of $y = f(x)$. I've figured out that when I solve $x^3-x^2-2x > 0$, I end up with $0, 2, -1$. But I'm still not sure what the domain is.... maybe I'm close? maybe it's obvious? Any help much appreciated!!! Thanks. - When you define $g(x)$ you have the $x^3$ term appearing twice. I can't edit since it's only one character. – platinumtucan Dec 24 '11 at 21:14 ## 2 Answers You need to figure out under what conditions on $x$ we have $x^3 - x^2 - 2x >0$. Factoring the left-hand side gives $x (x^2-x-2)$, which factors further as $x (x-2) (x+1)$. Thus, you need to identify the set of $x$ for which $x (x-2) (x+1) > 0$. Can you take it from here? - yes, I got there, and I end up with 3 numbers, but I still do not fully understand what the domain is... – Sam Nov 24 '11 at 17:07 No, you end up with the three numbers $0$, $2$ and $-1$ only if you insist that the $>$ is a $=$ sign. What do you know must be true about three numbers $a$, $b$, $c$ if the product of these three numbers is greater than $0$ (i.e., $a b c >0$)? – tards Nov 24 '11 at 17:10 they must all be positive numbers? – Sam Nov 24 '11 at 17:17 What about $-1 \cdot -2 \cdot 3$? That is also positive. So,... – tards Nov 24 '11 at 17:24 hmmmm well either an even number of negatives and any number of positives, or all positives – Sam Nov 24 '11 at 17:25 If we define $P(x)=x(x-2)(x+1)$, then we can find the solution of $P(x)>0$ by checking the sign of each factor on the intervals determined by the roots $-1$, $0$ and $2$. So the domain is $x\in (-1,0) \cup (2,+\infty)$. -
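For what it's worth, the sign analysis in the answers can be confirmed with SymPy's inequality solver; this is just a mechanical check of the interval answer above.

```python
from sympy import symbols, solve_univariate_inequality

x = symbols('x', real=True)
solution = solve_univariate_inequality(x**3 - x**2 - 2*x > 0, x, relational=False)
print(solution)   # Union(Interval.open(-1, 0), Interval.open(2, oo))
```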
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9480756521224976, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/3119/reaching-speed-of-light?answertab=active
# Reaching speed of light [duplicate] Possible Duplicate: Rotate a long bar in space and reach c Sorry this is very naive, but it's bugging me. If you had a straight solid stick attached on one end and rotating around that attachment at a certain rpm, there would be a length at which the end of the stick would theoretically reach, with that rpm, the speed of light. Well, doesn't seem possible - what specifically would be the limitations that would prevent the end of the stick to reach the speed of light? What would happen? - simply as a practical matter, it's doubtful you could find a material strong enough to withstand the tension to supply the necessary centripetal force. – JustJeff Jan 17 '11 at 0:53 1 Voted to close as duplicate: the question has the same answer. – Sklivvz♦ Jan 17 '11 at 1:06 I agree, it's a duplicate; closed. – Noldorin Jan 17 '11 at 1:16 Maybe we should all just admit once and for all that relativity applies to everything in the universe except really long sticks ;-) – Greg P Jan 17 '11 at 16:30 ## marked as duplicate by Colin K, Sklivvz♦, NoldorinJan 17 '11 at 1:16 This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question. ## 2 Answers Something else to consider is that if you begin with a rigid rod rotating about an axis penetrating its center of mass at a constant angular velocity, and then gradually extend the length of the rod by adding mass to the end(s) of the rod, by doing so you are increasing the rod's moment of inertia. The expression for rotational kinetic energy is 1/2*I*omega^2, where I is the moment of inertia of the rod, given a specific choice of axis. For one of the ends to reach the speed of light would require infinite mass to be added to the ends, and hence, an infinite amount of energy. This seems to be a physically unattainable situation. - In order for the bit of matter of mass $m$ at the very end of the stick to continue moving in a circular path of radius $R$ at a speed approaching the speed of light, it would need to be pulled toward the center with a force whose magnitude is $|F| = |p|\frac{|V|}{R} = \frac{1}{\sqrt{1-(v/c)^2}}\frac{mv^2}{R}$ (the centripetal force you learn about in introductory physics). That force becomes infinitely large as the speed v approaches the speed of light, very rapidly, and eventually exceeds the strength of any interatomic or intermolecular forces that might be trying to hold the object together. - obviously something's going to stop you from getting to c, but what about getting to interesting fractions of c, say, just enough for relativistic effects to become noticable? – JustJeff Jan 17 '11 at 1:10
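A quick numerical illustration of the second answer: with the relativistic centripetal force $|F| = \frac{1}{\sqrt{1-(v/c)^2}}\frac{mv^2}{R}$, the required inward force grows without bound as $v \to c$. The mass and radius below are placeholder values, chosen only to show the divergence.

```python
import math

m = 1.0    # kg  -- placeholder mass of the bit of matter at the end of the stick
R = 1.0    # m   -- placeholder radius of the circular path
c = 299_792_458.0

def centripetal_force(v):
    """|F| = gamma * m * v**2 / R, which grows without bound as v -> c."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * m * v ** 2 / R

for frac in [0.5, 0.9, 0.99, 0.999, 0.999999]:
    print(f"v = {frac:8} c   ->   F = {centripetal_force(frac * c):.3e} N")
```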
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9559004306793213, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3800758
Physics Forums Thread Closed Page 3 of 4 < 1 2 3 4 > ## How does a clock measure time? Quote by Gokul43201 Why can't you measure time by a direct comparison as well? What your doctor does with his hand on your pulse and his eye on a wristwatch is essentially how we all measure time - by a direct comparison to some other calibrated time interval. I think most any predator that actively hunts prey (not the kind that ambush) has a pretty good intuitive sense of time. In the example above, one process is compared to another process - where does physical "time" enter the equation? Recognitions: Gold Member Science Advisor Staff Emeritus Quote by mangaroosh In the example above, one process is compared to another process - where does physical "time" enter the equation? In what way is the time interval between two ticks of a clock different from the length interval between two ends of a ruler? (Other than that the first interval is measured along a dimension we call 'time' and the second is measured along a dimension we call 'length'.) Mentor Quote by lisab Me too. In a lab, one measures length by comparing it to another (calibrated) length; it's a direct comparison. But to measure time, there's more involved (time = distance/rate). That's only true if you use a mechanical clock! As I said above, you (and M_E) have the dependency backwards when it comes to the definition/most accurate ways of measuring: Time is counted, length is calculated from measuring time and converting. I've seen an awful lot of people saying they just don't like time or feel uncomfortable with it, but I've never seen it be based on a logical reason. It's just a feeling and that's not good enough to really say there is a problem with the concept of time. Mentor Quote by Mech_Engineer And of course time isn't constant- its measurement depends on your speed, further complicating things! No, that doesn't further complicate it any more than it complicates measuring length (due to length contraction). You're misunderstanding the point of Relativity. Mentor Quote by mangaroosh The use of a periodic cycle simply gives us a common unit of comparison, in which we can express information about a process. We can then compare other processes by expressing them in terms of this common unit. The periodic cycle exists, and the processes that are expressed in terms of the cycle exist, but physical "time" cannot be deduced from that. Why not!? Length is measured using a proxy as well. There is no difference. "Length", much like "time", is just a concept; what happens when we use a ruler to measure the physical dimensions of an object is, in a simplistic example, we take a standard unit and hold it beside a physical object, and see how many of those standard units can be held beside the object. This allows us to express the physical dimensions of the object in a standard unit, which allows us to compare the physical dimensions of other objects expressed in the same units. That objects have spatial dimensions is self-evident. The measurement of the temporal dimension is not quite as straight forward - impossible if it doesn't exist - because the attempted temporal measurement of an object can only ever be carried out in the present; that is, the actual time co-ordinate will always be "now". We may of course remember a past state, and project a future state, but those are just mental constructs. The object only ever exists in the present. That is completely wrong. 
Both distance and time are only measured in a single reference at a time, using some anchor to another reference. For distance, the obvious example is with a tape measure, where it is fixed at one end and read at the other. For longer distances, you can completely lose the starting reference and still have an accurate measurement (such as with the odometer in your car). Here and "now" are the same concept. Quote by mangaroosh Whether it is estimated or not, how does the counting of periodic events allow us to deduce that there is a temporal dimension; bearing in mind that the memory of a past event is just a mental construct, and only events in the present can be said to be real, without assuming that past or future events are real? That is an assertion/assumption without basis and one that is wrong for the same reason as the last. How, for example, can you say all the places you visited in your car, recorded by your odometer, are real? They only exist in the past for you, both in terms of time and space. Quote by mangaroosh The question is about what do we actually observe. Physical theories, make ontological claims about the nature of things like time and space; to have more accurate physical theories we need to see if those claims are justifiable. Indeed, our subconscious belief in time, just as our other subconscious beliefs, can affect our experience of reality, or more pointedly, how we live our lives. Again, it is the emboldened in the last paragraph that is being questioned. LOL ontology is metaphysics, not science, and Relativity can describe time entirely by its relationship to other phenomena without any ontological explanation or reference to metaphysics whatsoever. Reality, existence, being, and the subconscious are for mystics and philosophers to debate. Science can state unequivocally that rainbows are optical illusions, but not whether they exist or are real in any ontological sense. Mentor Quote by russ_watters That's only true if you use a mechanical clock! As I said above, you (and M_E) have the dependency backwards when it comes to the definition/most accurate ways of measuring: Time is counted, length is calculated from measuring time and converting. In my lab, we measure properties just like I said in my previous post. Nearly all of the clocks we use are mechanical clocks. We use tape measures and metersticks to measure distance, most of the time. Sometimes a caliper. When measuring the depth of a glulam beam, we don't ever invoke the length of the path travelled by light in vacuum in 1 ⁄ 299,792,458 of a second. I work in an engineering lab that tests building materials. It ain't rocket science. Blog Entries: 2 Recognitions: Gold Member Science Advisor Well Russ you can explain till you're blue in the face, I still find the absolute definition of time to be a bit of a wishy-washy subject That's not to say I can't utilize it in equations or understand HOW it can be utilized, I'm just saying it's got a certain smell about it... It's probably just a lack of fundamental understanding of where there are/aren't circular arguments. If I was asked to prove the accuracy of a CMM or a micrometer or even the flatness of an optical surface that's probably not a problem (given access to NIST references haha). But if someone asked me to prove the accuracy of an atomic clock, not sure where I would go other than to say "well it's got cesium in it, and this definition of a second says it should be right..." 
Recognitions: Gold Member Homework Help Quote by mangaroosh The question is, as per the title of the thread, how exactly does a clock measure time? Since we've established (I hope) that the flow of time is now defined by counting the periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom in atomic clocks, perhaps one should ask, "Okay, what makes the atomic clock 'tick.' What drives the electron to regularly, seemingly change state in the first place?" Without thinking, my first thought might be the evolution of the wavefunction, of course. But that leads to a sort of chicken-or-the-egg problem here. The wavefunction is modeled by the time-dependent Schrödinger equation. And the $t$ in the $i \hbar \frac{\partial \Psi}{\partial t}$ term is based on the outcome of the evolution. So that doesn't really help us much here in this particular case. Instead, perhaps there's another approach to understanding this. Consider a very long hallway. In this hallway there are many, many clones/copies of yourself, all lined up one after the other. Each of these clones has a big box containing a stack of loose-leaf papers, in varying levels of organization. The first clone in the line represents some copy of your much younger self. That clone's box of papers is fairly well organized. The box belonging to the next clone in line is identical to the first's, except one of the papers has been moved out of order. As the line continues back, the corresponding stack of papers belonging to that clone are slightly more disorderly than the preceding clone's stack. This goes all the way back to the last clone in line, an older version of yourself who's stack of papers is fairly disorganized. Now suppose you go up to one of these clones, and ask, "are you in the past, present or future?" The answer that you will invariably get is: "I am in the present." And it doesn't matter which clone you ask. They all think that they are in the present. Each clone thinks that it is he/she that is the one in the present, and all the others are either past or future. If you haven't figured out my analogy yet, the line of clones represents a line on the "time" dimension of spacetime. And on this 4-dimensional (or 10 or 11 dimensional, whatever) chunk of spacetime, all versions of you equally exist. No version is more valid or less valid that any other. And each version is fooling himself/herself into thinking that he/she is the only version in the present. What's really the case is that each version is in its own present, and is stuck there. Remember that stack of papers? Each stack of papers represents a particular quantum state of the universe. And each version of yourself is associated with one and only one quantum state. Each version of yourself is able to recall/look at notes/look at records, etc. regarding versions ahead of it, because those versions are almost identical to the clone in question, except they have stacks of paper that are more orderly, not less. What I'm getting at is this: time, whether one considers it an illusion or not, is suspected to be related to entropy. And the arrow of time is always in the direction of increasing entropy. ------- There's more to it than that still. Take an atomic clock, put it in a refrigerator and let it cool down. The entropy of the atomic clock as decreased, and yet it certainly does not tick backwards. As a matter of fact, it doesn't even slow down or change its rate at all. 
So there's more to it than the entropy of only the atomic clock itself. At the heart of the atomic clock is that caesium 133 atom which contains an electron which emits radiation as it transitions states. And when a photon of radiation is detected by a detector, an electron in an atom in that detector becomes entangled with the original electron in thecaesium 133 atom. Almost immediately both of those electrons become entangled with other particles in the detector, and then the apparatus holding onto the detector and then with the all the atoms in the atomic clock, then room containing the atomic clock, then the Earth, and then throughout the universe. This is the process of decoherence: quantum state leaking out into the universe via quantum entanglement, becoming entangled with ever more and more particles. Decoherence happens fast. And it makes the wavefunction appear to collapse [almost] instantly. And the wavefunction collapses simultaneously (whether treated as instant or "almost" instant) across all space. Decoherence is akin to the second law of thermodynamics acting at its most basic level. And I speculate that resulting quantum entanglement between the caesium 133 atom and the rest of the universe has a large role in evolution of the wavefunction, and the eventual caesium 133 atom's changing of states again (even if that is the 'illusion' of time by talking to a different clone in that hallway analogy: moving to a different quantum state within spacetime). (Tangent: Historically, this idea of instantaneous and simultaneous-across-space of the wavefunction collapse has caused many debates and experiments. According to Einstein's relativity, there is no such thing as absolute simultaneity. Events that are simultaneous in one frame of reference are not simultaneous in another frame. Since then the arguments have been mostly worked out by realizing (a) the wavefunction "collapse" should not be thought of as a classical event: it's not valid to describe it that way. And (b) any real events involving a given wavefunction collapse cannot be brought together for comparison faster than the speed of light. With those realizations in mind, special relativity and quantum mechanics are not in conflict. But it's these realizations that have given birth to various interpretations of quantum mechanics.) Although putting an atomic clock in a refrigerator won't cause it to change its rate of time, relativity will. Put two atomic clocks in airplanes and let one fly around the world toward East, and the other West, and when they return they will show a difference. Put an atomic clock in a gravity well, and it will slow down. That last point I find quite interesting. Consider a simple, non-rotating black hole. The volume of space contained within the event horizon of a black hole is at maximum entropy. One simply cannot put more entropy into a black hole without making it bigger (btw, the entropy of a black hole is proportional to the event horizon's surface area). Now consider carefully lowering an atomic clock such that it hovers just above the event horizon (assume you the observer are a safe distance away). Or just let the clock fall toward the black hole, whatever. When the atomic clock approaches the event horizon, its rate of 'ticking' approaches zero. Of course its slower rate of time can be explained with general relativity. But is there a connection with entropy and how general relativity affects entropy, and thus time, or is that just a coincidence? 
I'm betting one would have to find more connections to quantum entropy, quantum entanglement, decoherence, and how they are affected by special and general relativity, to really understand how a clock measures time. Further reading: http://arxiv.org/abs/quant-ph/0203033 http://www.fqxi.org/data/essay-conte..._McGucke_7.pdf http://fqxi.org/data/essay-contest-files/Kiefer_fqx.pdf http://lmgtfy.com/?q=quantum +entropy+relativity# Quote by Gokul43201 In what way is the time interval between two ticks of a clock different from the length interval between two ends of a ruler? (Other than that the first interval is measured along a dimension we call 'time' and the second is measured along a dimension we call 'length'.) The interval between events is always in the present though. If we caricature the operations of a caesium clock: as a man sitting on a chair counting the events (or whatever the more specific term is) as they appear in front of him. One pops into view, he counts one, and it disappears again; then another one pops into view, he counts two, and it disappears again; and so on. Here, the temporal "relation" between events is only an imagined one. The man observes them in a sequence, but each one ceases to exist after he counts it. To say that one event exists in the past is simply to imagine that it does. Alternatively, if we imagine that each event is on a conveyor belt such that, as it comes into view he counts it and it continues along the conveyor belt (with or without disappearing, or ceasing to exist). Again, there is no past/present/future relationship, other than in the imagination of the counter. As an event occurs i.e. as he counts the object on the conveyor belt, it continues to exist in the present, but he retains the memory of it having passed him and designates is as being "in the past" - again, this is only imagined, because the object continues to exist in the present and the physical relationship between it and the other events is entirely spacial, not temporal. The temporal relationship is imagined, on the basis of his capacity for memory. Thankyou for that detailed response collinsmark: very insightful. Quote by mangaroosh I would say that time is merely a concept, with no physical existence ... Time refers to changes in physical configurations. Quote by mangaroosh ... this doesn't appear to be how time is treated in physical theories, however. Einsteinian relativity appears to treat it as being physical and dynamical. Right. Quote by mangaroosh I'm wondering how a clock measures time in this context, without simply assuming that it does; or how a temporal dimension can be deduced from the processes of a clock. A clock is a more or less periodically changing physical configuration. The underlying assumption is that any and all physical evolutions are ultimately due to some fundamental physical dynamic(s). So, counting the vibrations of a quartz crystal, or counting the revolutions of the earth around the sun, or counting the ticks of a mechanical wound up clock, or counting the full moons, etc., are all ways of indexing the evolution of our universe. Quote by mangaroosh But just to try and stick to the strict topic of the thread; if we consider the processes of a clock, where is the physical entity called "time" actually measured? Time is the changing, the evolution, of configurations, from the largest to the smallest scale. So, one can measure time on virtually any scale. 
Quote by mangaroosh The counter of an atomic clock counts the number of events (or oscillations) in the clock i.e. it measures the number of events; where does the measurement of the temporal (and physical) element of spacetime occur? It occurs in the detection of the discrete oscillations. Which are enumerated. And which provide an index for comparison. Time isn't some mysterious dimension. It's just the changing configurations of various ponderable objects. We, or instruments which augment our senses, record physical configurations. The changing of those configurations is what we call the passing of time.

To add to what ThomasT said, back in the ancient days the points of reference were things like the position of the earth relative to the solar system and relative to the stars. This was before all of the computers, grandfather clocks and all of that: a lot of it was based on understanding various periodic mechanisms that eventually gave us things like the calendars which you can find in many ancient documents from all across the globe.

Quote by russ_watters Why not!? Length is measured using a proxy as well. There is no difference. An explanation of how physical time, or a temporal dimension, cannot be deduced from counting the events of a periodic cycle is proffered here. The difference between the physical dimensions of an object - bear in mind "length" is just a concept - and the temporal dimensions, is that it is possible to hold that proxy beside the object being measured and "measure its length". The same cannot be said of time, unless we assume that time exists a priori; we can't hold a clock beside time and measure the physical property of "time".

Quote by russ_watters That is completely wrong. Both distance and time are only measured in a single reference at a time, using some anchor to another reference. For distance, the obvious example is with a tape measure, where it is fixed at one end and read at the other. For longer distances, you can completely lose the starting reference and still have an accurate measurement (such as with the odometer in your car). Here and "now" are the same concept. Apologies, I was a little narrow in my explanation; the explanation, like the one above, represents a possibility that is distinctly lacking when it comes to time. With regard to the odometer of a car, it must be highlighted again that "distance", like "length" and "time", is just a concept. Indeed, what is measured by the odometer of a car is the amount of road in between the two locations. Again, the measurement device (the wheel) is in direct contact with the object of measurement; something which is not possible with time.

Quote by russ_watters That is an assertion/assumption without basis and one that is wrong for the same reason as the last. How, for example, can you say all the places you visited in your car, recorded by your odometer, are real? They only exist in the past for you, both in terms of time and space. It is not so much an assertion or an assumption as reasoning based on lack of evidence. Have you ever existed in a moment that wasn't the present? Bear in mind that the memories you have of "the past" are just mental constructs of the present moment, which has subsequently changed. Is there any evidence, that doesn't exist in the present, of either "the past" or "the future" - bear in mind that antiques or ancient buildings have always existed in the present, and will continue to do so.
It is safe enough to say that no experiment has ever been, or ever will be, conducted outside of the present, for anyone. This means that there is no possible evidence of the continuing existence of past or future events, only present events. To say that they exist is to assume they do. The difference between the places we visit in our car and events in the past and future, is that we can return to the places we visit in our car, but we can never visit past or future events. We can even set up webcams in the places we visit, or make phone calls to people located there, but we cannot do the same with past or future events.

Quote by wuliheron LOL ontology is metaphysics, not science, and Relativity can describe time entirely by its relationship to other phenomena without any ontological explanation or reference to metaphysics whatsoever. Reality, existence, being, and the subconscious are for mystics and philosophers to debate. Science can state unequivocally that rainbows are optical illusions, but not whether they exist or are real in any ontological sense. No doubt it can, but things like the existence of spacetime and "the block universe" are ontological claims.

Mentor Quote by lisab In my lab, we measure properties just like I said in my previous post. Nearly all of the clocks we use are mechanical clocks. We use tape measures and metersticks to measure distance, most of the time. Sometimes a caliper. When measuring the depth of a glulam beam, we don't ever invoke the length of the path travelled by light in vacuum in 1 ⁄ 299,792,458 of a second. I work in an engineering lab that tests building materials. It ain't rocket science. That's fine as long as you recognize that you are using relatively crude instruments that work differently than the official definitions of the units. I wouldn't complain that a sundial doesn't have a second hand! I just find it so odd for otherwise scientific people to analyze a scientific concept according to feelings.

Mentor Quote by mangaroosh An explanation of how physical time, or a temporal dimension, cannot be deduced from counting the events of a periodic cycle is proffered here. That is completely wrong and your conveyor belt example shows exactly the error you made earlier with "here" and "now". Both the time and position of the starting point are lost on a conveyor belt. They are only kept track of. The difference between the physical dimensions of an object - bear in mind "length" is just a concept - and the temporal dimensions, is that it is possible to hold that proxy beside the object being measured and "measure its length". Sometimes yes, sometimes no. Don't fool yourself into thinking that since sometimes you can see both ends of a ruler at the same time that it makes a difference. It doesn't. I'm sure you wouldn't concede the point for time, that both ends of an interval of time that is shorter than human perception happen "now", would you? The same cannot be said of time, unless we assume that time exists a priori; we can't hold a clock beside time and measure the physical property of "time". Nonsense. The concept of time was developed because it was needed. It was not assumed to exist, it was observed to exist. Again, the measurement device (the wheel) is in direct contact with the object of measurement; something which is not possible with time. What do you mean "again"? That's a new objection that has nothing to do with anything. There is no requirement that a distance measuring device be in contact with what it is measuring.
It is not so much an assertion or an assumption as reasoning based on lack of evidence. Have you ever existed in a moment that wasn't the present? Have you ever measured a distance from the wrong end of a tape measure? Bear in mind that the memories you have of "the past" are just mental constructs of the present moment, which has subsequently changed. Is there any evidence, that doesn't exist in the present, of either "the past" or "the future" - bear in mind that antiques or ancient buildings have always existed in the present, and will continue to do so. That is pseudophilosophical nonsense. Again, if that were true, which it isn't, the fact that you are always here would mean distance exists only in your head as well. Both time and distance are measured by establishing one end point, then tracking, with an instrument, to another. It is safe enough to say that no experiment has ever been, or ever will be, conducted outside of the present, for anyone. This means that there is no possible evidence of the continuing existence of past or future events, only present events. To say that they exist is to assume they do. Nonsense - that's "not even wrong" (it would have to improve in order to merely be wrong). There is no claim nor requirement for an event that happened in the past to still exist. They did exist. They don't anymore. Moreover, not only do experiments occur in the past, it is actually more correct to say they all occur in the past -- not to mention, they all occur there, not here! But that's true of both time and distance, so again an irrelevant concern. The difference between the places we visit in our car and events in the past and future, is that we can return to the places we visit in our car, but we can never visit past or future events. Not necessarily true. I once visited the World Trade Center twin towers, but I can't again. Clearly, this shows us that "location" really needs 4 coordinates, not three. Yes, there is a difference between distance and time in that time flows and distance does not. But who are you to call that a flaw? It's just a difference and when you oversimplify examples to try to make it into a flaw, you typically only show that time is real. We can even set up webcams in the places we visit, or make phone calls to people located there, but we cannot do the same with past or future events. Both webcams and phone calls record in the past and bring the recordings to you in the present. You're arguing against yourself.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9597491025924683, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/1444/how-to-control-the-output-of-a-hash-function-to-output-to-specific-data-accordin
# How to control the output of a hash function to output to specific data according to similarity? I do not know if the question lies exactly in this field, but I'll give it a try unless it gets rejected. I want to study methods of applying LSH functions to fit in a specific area of digest values. Briefly, I would like to control the results of hash functions such that "similar" inputs (similarity is defined by an LSH algorithm given a distance metric, e.g. Hamming distance) fall to a specific value. That is, I want to control the output of the LSH to result in a specific range of values. - Is your question about existing Locality-sensitive hashing functions? Or, is it how to take a Cryptographical hash function, and use it to create a Locality-sensitive hash function? – poncho Dec 12 '11 at 15:59 @poncho The second one, but also how I can control the output range. I.e., I want to direct such outputs to a specific range of values – curious Dec 12 '11 at 16:33 ## 1 Answer Well, if the question is 'how do we select inputs to a cryptographical hash function so that the outputs are within a specific range', well, you're pretty much limited to: • Rejection methods -- that is, you hash the input with some salt, and if the resulting hash value isn't in the range you want, you keep on trying different salt values until it is. For example, if you want the MSBit of the hash to be zero, you keep on trying different salt values until you find a hash which has an MSBit of zero. • Postprocessing methods -- that is, you hash the input as usual, and then map it into the range you want. For example, if you want the MSBit of the hash to be zero, you simply take the result of the cryptographical hash function, and set the MSBit to zero. If you're looking for a more efficient method of selecting hash inputs to constrict the hash output, well, they're not known to exist for cryptographical hash functions. In fact, if there was a large, easily computable subset of inputs that generated a biased hash output, that would imply a cryptographical weakness of the hash function. For example, to find a collision in the hash function, one could just take inputs from the subset, and hash those; if there was a bias in the hash outputs, one would find a collision with probability $0.5$ with a number of hashes strictly less than $1.17741 \cdot 2^{N/2}$ attempts (where $N$ is the size of the hash function); this is a weakness (although a small one if the bias is small, or it takes a large amount of computation to find elements in the subset). - Let's say that I have point 1 p1 and point 2 p2. According to an LSH function, whose details I do not care about, the probability is very high that they belong to the same bucket after applying this LSH, in the sense that P[LSH(p1)==LSH(p2)] > P1 if those two points are close together at some fraction. So what I want is to direct the output of inputs which are "close" together to a specific output – curious Dec 12 '11 at 17:41 2 @curious: Well, if you want similar points to hash to the same value with high probability, the obvious thing to do is to find a mapping function $map$ such that $P[map(p1)==map(p2)]>P1$, and then define $LSH(p) = Hash(map(p))$. However, I suspect that doesn't answer your question. What is the underlying problem you're trying to solve? – poncho Dec 12 '11 at 18:18
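To make the rejection method above concrete, here is a minimal Python sketch (an illustration added by the editor, not part of the answer; the constraint "most significant bit of the digest is zero" and the helper name are just examples):

```python
import hashlib
import os

def hash_into_range(data: bytes, max_tries: int = 1000):
    """Rejection method: keep re-salting until the digest satisfies the constraint.

    Example constraint: the most significant bit of the SHA-256 digest is zero,
    so on average about 2 salts need to be tried.
    """
    for _ in range(max_tries):
        salt = os.urandom(16)
        digest = hashlib.sha256(salt + data).digest()
        if digest[0] < 0x80:          # MSBit of the first byte is 0
            return salt, digest
    raise RuntimeError("no suitable salt found")

salt, digest = hash_into_range(b"some input")
print(salt.hex(), digest.hex())
```

The postprocessing method is even simpler: compute the digest as usual and then force it into the desired range, e.g. clear the top bit of the first byte with `digest[0] & 0x7F`.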
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9300411939620972, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/3872/number-of-regular-cardinals-in-a-weakly-inaccessible-cardinal
# number of regular cardinals in a weakly inaccessible cardinal Let $\kappa$ be a weakly inaccessible cardinal. Why are there $\kappa$ regular cardinals $\lambda < \kappa$? I've tried a recursive construction, but I don't know what to do in the limit step. Supremum does not work, since then we lose regularity. - ## 1 Answer Suppose that $\kappa$ is weakly inaccessible. Thus, it is a regular limit cardinal. So $\kappa=\aleph_\beta$ for some ordinal $\beta$. Since $\kappa$ is a limit cardinal, it must be that $\beta$ is a limit ordinal. Since $\kappa$ is regular, it cannot be that $\beta<\kappa$ (otherwise, since $\beta$ is a limit ordinal, $\kappa=\aleph_\beta=\sup_{\gamma<\beta}\aleph_\gamma$ would be a supremum of fewer than $\kappa$ smaller cardinals, contradicting regularity). So $\kappa=\aleph_\kappa$. Thus, there are $\kappa$ many cardinals below $\kappa$. All the successor cardinals $\aleph_{\beta+1}$ for $\beta<\kappa$ are regular, and there are $\kappa$ many of these. - Ah, of course! Thanks. – Martin Brandenburg Sep 2 '10 at 22:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9371882677078247, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/130140/directional-cosines-of-a-line/130156
# Directional cosines of a line. Show that if the lines with the directional cosines $(l, m, 0)$ and $(p, 0, q)$ are perpendicular then either $m = \frac {1}{\sqrt{p^2 + q^2}}$ or $q = \frac {1}{\sqrt {l^2 + m^2}}$. - I am assuming you mean $m = \frac {1}{\sqrt {p^2 + q^2}}$ or $q = \frac {1}{\sqrt {l^2 + m^2}}$. – Bidit Acharya Apr 10 '12 at 19:00 @BiditAcharya Yes you are right. Sorry for the typo. – Faisal Apr 10 '12 at 19:04 ## 1 Answer Note here that when you write $(l,m,0)$ and $(p,0,q)$, these are the unit vectors along the respective lines. (Check the definition of a direction cosine.) So, this'd mean that $$\sqrt{l^2+m^2}=1 ~~ \implies \frac1{\sqrt{l^2+m^2}}=1 \tag{1}$$ Similar reasoning allows us to deduce $$\sqrt{p^2+q^2}=1 ~~ \implies \frac1{\sqrt{p^2+q^2}}=1\tag{2}$$ Now note that $(l,m,0)$ and $(p,0,q)$ are perpendicular, so $$(l,m,0) \cdot(p,0,q)=0~~ \implies lp=0$$ One of them has to be zero, right? Let's assume that $l=0$. Then from our result $(1)$, $m=1$ (it makes sense too, right? if $m \neq 1$ then our vector wouldn't be a unit vector, would it?). So one of our unit vectors reduces just to $(0,1,0)$, which is just a vector along the y-axis, or $\hat{\jmath}$. And what other unit vector do you know with the y-component $0$ (in the form $(p,0,q)$) and perpendicular to $\hat{\jmath}$? Obviously, it is the vector $\hat{k}$, or the vector $(0,0,1)$. Now you should notice that from our deductions, $m=q=1$. If you revisit $(1)$ and $(2)$ you have $$q=\frac1{\sqrt{l^2+m^2}}$$ and, $$m=\frac1{\sqrt{p^2+q^2}}$$ Hope it Helps! -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9280518293380737, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/281788/estimate-on-the-hausdorff-dimension-of-boundary-of-balls
# Estimate on the Hausdorff dimension of boundary of balls I am reading Evans and Gariepy's book on GMT and I have a couple of questions: 1) if E is a set of locally finite perimeter, is it true that E is $\|\partial E\|$-measurable? 2) At a certain point, he uses the estimate $\mathcal{H}^{n-1} ( \partial B(x,r)) \leq c r^{n-1}$ where x is in the reduced boundary of E, and c is some constant. I don't know why this estimate is true. Thank you for any help - ## 1 Answer 1. Not necessarily. Recall that in the definition of sets of finite perimeter, as well as of the measure $\|\partial E\|$, the set $E\subset \mathbb R^n$ is identified with an element of $L^1$ represented by its characteristic function. In particular, we can add to $E$ any set of $n$-dimensional measure zero without affecting $\|\partial E\|$. If $D$ is an open disk in $\mathbb R^2$, then $\|\partial D\|$ is the linear measure on its boundary (up to a constant multiple). Adding to $D$ a non-measurable subset of $\partial D$, we obtain a set $E$ which is non-measurable with respect to $\|\partial E\|$. 2. This estimate has nothing to do with $E$ or its boundary. By the scaling property of Hausdorff measure (page 63), it suffices to prove that $\mathcal{H}^{n-1}(\partial B(0,1))$ is finite. This can be proved directly by observing that $\partial B(0,1)$ is the Lipschitz image of an $(n-1)$-dimensional cube. An alternative, and perhaps more natural way, is to appeal to surface area formulas on page 101 of the book. -
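(Editorial note, not part of the original answer: the scaling property being appealed to in point 2 is that Hausdorff measure is translation invariant and satisfies $\mathcal{H}^{s}(\lambda A)=\lambda^{s}\,\mathcal{H}^{s}(A)$ for $\lambda>0$. Applied to the sphere this gives

$$\mathcal{H}^{n-1}(\partial B(x,r)) = \mathcal{H}^{n-1}\big(x + r\,\partial B(0,1)\big) = r^{n-1}\,\mathcal{H}^{n-1}(\partial B(0,1)) = c\,r^{n-1},$$

so the whole estimate reduces to the finiteness of $\mathcal{H}^{n-1}(\partial B(0,1))$.)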
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9598520398139954, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/model-categories?sort=faq&pagesize=15
# Tagged Questions The model-categories tag has no wiki summary. Let $\text{Ch}⁺(R)$ be the category of non-negative chain complexes of $R$-modules where $R$ is a commutative ring. What is a cylinder object, in the sense of model categories, for a given complex ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8045381307601929, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/69615-integration-parts.html
# Thread: 1. ## Integration by Parts Hello, I just have two problems. I have used integration by parts, but I get stuck. The work you show will be greatly appreciated. $\int \sin \sqrt x~dx$ $\int e^{6x} \sin(e^{2x})~dx$ Thanks!

2. Originally Posted by Mr. Engineer Hello, I just have two problems. I have used integration by parts, but I get stuck. The work you show will be greatly appreciated. $\int \sin \sqrt x~dx$ $\int e^{6x} \sin(e^{2x})~dx$ Thanks! For the first one, do you mean $\sqrt{x} \sin(x)$ ??

3. No, it's the way it is. I tried just using u-substitution, but the answer was not what I got; we were supposed to use integration by parts to solve that one.

4. Originally Posted by Mr. Engineer Hello, I just have two problems. I have used integration by parts, but I get stuck. The work you show will be greatly appreciated. $\int \sin \sqrt x~dx$ $\int e^{6x} \sin(e^{2x})~dx$ Thanks! Well, what are you waiting for? Do it! $\int \sin(\sqrt{x})\,dx = x \cdot \sin(\sqrt{x}) - \int x \, d\left(\sin(\sqrt{x})\right) = x \cdot \sin(\sqrt{x}) - \int x \cdot \frac{\cos(\sqrt{x})}{2\sqrt{x}}\,dx$ Now what?

5. Do you have to use parts? I think a sub is easier. $\int e^{6x}\sin(e^{2x})\,dx$ Now, let $u=e^{2x}$, so $\frac{1}{2}\,du=e^{2x}\,dx$. Make the subs and we get: $\frac{1}{2}\int u^{2}\sin(u)\,du$ Now, let's use tabular integration: this works well when we have something such as e or sin that repeats as we take derivatives. It's based off parts, though. So, I reckon we are using parts.

| sign | u and its derivatives | v' and its antiderivatives |
|------|-----------------------|----------------------------|
| +    | $u^{2}$               | $v'=\sin(u)$               |
| −    | $2u$                  | $-\cos(u)$                 |
| +    | $2$                   | $-\sin(u)$                 |
| −    | $0$                   | $\cos(u)$                  |

Add up the signed products diagonally and alternate signs. $-u^{2}\cos(u)+2u\sin(u)+2\cos(u)+C$ Resub: $\boxed{\frac{1}{2}\left[-(e^{2x})^{2}\cos(e^{2x})+2e^{2x}\sin(e^{2x})+2\cos(e^{2x})\right]}$ Someone will come by saying, "this way is easier". Oh well. I wanted to show you tabular integration in the event you have not seen it. The more one knows the better.

6. Incidentally, a substitution works nicely with the first one also: $\int \sin \sqrt{x}~dx$ Let $u = \sqrt{x}$, that is, $u^2 = x$ $\Rightarrow 2u~du = dx$ So our integral becomes $2 \int u \sin u~du$ which is easy to do by parts. You can also use the tabular method as galactus showed you, but it would be overkill here, I think.

7. $\int \sin \sqrt x \,dx = \int \sqrt x\, \frac{\sin \sqrt x}{\sqrt x}\, dx = -2\int \sqrt x \,d\left(\cos \sqrt x\right) = -2\sqrt x \cos \sqrt x + 2\int \cos \sqrt x \,d\left(\sqrt x\right) = -2\sqrt x \cos \sqrt x + 2\sin \sqrt x + C.$
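As a quick sanity check on the antiderivatives derived in this thread, here is a short SymPy snippet (an addition by the editor, not part of the original posts); both printed differences should simplify to 0:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Antiderivative of sin(sqrt(x)) from posts 6 and 7
F1 = -2*sp.sqrt(x)*sp.cos(sp.sqrt(x)) + 2*sp.sin(sp.sqrt(x))
print(sp.simplify(sp.diff(F1, x) - sp.sin(sp.sqrt(x))))               # expect 0

# Antiderivative of exp(6x)*sin(exp(2x)) from post 5 (u = exp(2x))
u = sp.exp(2*x)
F2 = sp.Rational(1, 2)*(-u**2*sp.cos(u) + 2*u*sp.sin(u) + 2*sp.cos(u))
print(sp.simplify(sp.diff(F2, x) - sp.exp(6*x)*sp.sin(sp.exp(2*x))))  # expect 0
```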
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9676701426506042, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/34487/what-are-the-most-important-results-and-papers-in-complexity-theory-that-every/34655
## What are the most important results (and papers) in complexity theory that everyone should know? A few years ago Lance Fortnow listed his favorite theorems in complexity theory: (1965-1974) (1975-1984) (1985-1994) (1995-2004) But he restricted himself (check the third one) and his last post is now 6 years old. An updated and more comprehensive list can be helpful. What are the most important results (and papers) in complexity theory that everyone should know? What are your favorites? - 5 The list of important results in complexity theory that every complexity theorist "should" know is enormous. I think a better question would be: "what are the most important results in complexity theory that every mathematician should know?" – Ryan Williams Aug 4 2010 at 14:14 1 How do you expect answers to this question to be different from what Lance did? Do you want also things not on his lists? Do you want extra 'votes' for things already mentioned by him? (I find his lists pretty comprehensive: his favorites ~ results complexity theorists should know) – Mitch Harris Aug 4 2010 at 14:22 @Ryan: That is also a nice question. (Maybe we should start a community wiki for it for that also?) But I was more thinking about having a list of things that a first year graduate student who is going to work in complexity theory should learn (or know). – Kaveh Aug 4 2010 at 16:33 @Mitch: Repeating the results from Lance is OK, but I would like to have other perspectives and results not mentioned by him, i.e. a more comprehensive list. His lists do not have anything from the last 6 years. – Kaveh Aug 4 2010 at 16:33 ## 5 Answers I think Lance's choices from the past are pretty comprehensive, although I might add a couple more from the lower bounds department which for some reason are not well-known: John E. Hopcroft, Wolfgang J. Paul, Leslie G. Valiant: On Time Versus Space. J. ACM 24(2): 332-337 (1977) Wolfgang J. Paul, Nicholas Pippenger, Endre Szemerédi, William T. Trotter: On Determinism versus Non-Determinism and Related Problems (Preliminary Version) FOCS 1983: 429-438 The first paper shows that $TIME[t] \subseteq SPACE[t/\log t]$ (so, $SPACE[t]$ is not contained in $TIME[o(t \log t)]$). This result has since been generalized (from Turing machines) to all the "modern" models of computation. (For references, look at citations on Google scholar.) The second paper shows that for multitape Turing machines, $NTIME[n] \neq TIME[n]$. This is really the only generic separation of nondeterministic and deterministic time that we know. It is not known whether this result extends to more modern models of computation. Perhaps one reason why these results are not better known is that many seem to believe that their approaches are a dead end, more or less. (There's some mathematical evidence for that: the techniques do break down if you try to push them any further, but it's always possible these techniques could be combined with something new.) As for the last 6 years... I'll have to think about my choices for the "best papers" since then. Expect an update to this answer later. I think the following work over the last six years should be among those that everyone should know about. That doesn't mean that I think they're "best", it just means I am trying to answer the original question. It's a very biased list.
• Irit Dinur's combinatorial proof of the PCP theorem • Omer Reingold's logspace algorithm for st-connectivity • Ketan Mulmuley's geometric complexity theory program • Subhash Khot's Unique Games Conjecture and what it entails (this was initiated earlier than 6 years ago but it has become much more important in the last 6 years) • Russell Impagliazzo and Valentine Kabanets' "Derandomizing polynomial identity testing means proving circuit lower bounds" • Lance Fortnow et al.'s time-space lower bounds for SAT (this is excluding all work that I have personally done on this, you can decide for yourself if you should know about that) I left out a bunch of very important things because the list is 6 items. Sorry. - My favourite results are (1) the existence of NP-complete problems (Cook), (2) the Baker-Gill-Solovay theorem that whether P=NP holds relative to an oracle depends on the oracle, and (3) Fagin's characterization of NP in terms of second order logic. I am not so much interested in the large number of proofs that show that a certain problem is NP-complete, but the fact that there is some problem that is NP-complete is remarkable and important. And Cook's SAT is actually natural. (2) shows that several approaches will not work when one wants to settle P versus NP. (3) gives a much more natural definition of the class NP. Fagin's formulation (NP is the class of graph properties (of finite graphs) that can be expressed with a formula that has an n-ary second order existential quantifier in front, followed by a first order formula) indicates that NP vs co-NP is a very fundamental question as well (can second order existential quantification be replaced by second order universal quantification?). - The mere fact that NP-complete problems exist is (or should be) obvious and immediate once one has the insight to consider the concept in the first place: the problem "Given a nondeterministic machine P, and a number N in unary, determine if it is possible for P to halt in N steps" is clearly NP-complete. The fact that so many other naturally arising problems turn out to be NP-complete is what makes it interesting. – Sridhar Ramesh Aug 9 2010 at 20:37 I think you should add as a recent result the proof for QIP=IP=PSPACE - Is there a particular reason you chose this result? – András Salamon Aug 5 2010 at 17:49 Well, I have a bias here to be honest but I propose this as a result for the 2005-2010 period. First, to my knowledge, this is the best relation we have between classical and quantum classes. There are other good results on upper bounds for BQP, but this is the only result where a quantum complexity class is completely characterized. Second, although I don't know the complete details, the proof seems to be non-relativizing. And that's important because we can try to learn from here and use it to prove other non-relativizing results. Although, other people already tried that. – Marcos Villagra Aug 5 2010 at 23:29 2 Two corrections: first, there were several previous results that completely characterized a quantum complexity class in terms of a classical class (for example, QRG=EXP, NQP=coC_{=}P, PostBQP=PP, and BQP_CTC=PSPACE). Second, while PSPACE in QIP is nonrelativizing, the "new" direction (QIP in PSPACE) is relativizing. – Scott Aaronson Aug 6 2010 at 1:31 Thanks for the info.
But what I wanted to point out is that for the "lower" classical complexity classes (PSPACE and below) this is the best, is that correct? Although the NQP=coC_{=}P result seems to be at a really low level. – Marcos Villagra Aug 6 2010 at 2:22 Also BQPSPACE=PSPACE. – Robin Kothari Aug 7 2010 at 14:27 There's the Bazzi/Razborov/Braverman sequence on fooling AC0 circuits. - Well I guess after Cook, Karp's paper "Reducibility among combinatorial problems" is the second most obligatory and canonical thing to mention. This paper was the first to demonstrate to the world the diversity and ubiquity of NP-complete problems. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.948943018913269, "perplexity_flag": "middle"}
http://divisbyzero.com/2009/09/27/is-or-an-inclusive-or-or-an-exclusive-or/?like=1&_wpnonce=1e4d043b68
# Division by Zero A blog about math, puzzles, teaching, and academic technology Posted by: Dave Richeson | September 27, 2009 ## Is or an inclusive or or an exclusive or? (That was a fun title to write!) At the start of our discrete mathematics course we talk about symbolic logic. Students are often confused by the logical operator "OR." If p and q are statements then p OR q is true if either p is true or q is true or if both p and q are true. This is easily expressed in a truth table:

| p | q | p OR q |
|---|---|--------|
| T | T | T |
| T | F | T |
| F | T | T |
| F | F | F |

The reason this confuses students is that sometimes when we say "or" in everyday conversation we mean p is true or q is true, but p and q are not both true. (For example, "the door is open or the door is closed.") This brings to mind the logical operation exclusive or, "XOR" (the usual "or" is inclusive or). The truth table for XOR is shown below.

| p | q | p XOR q |
|---|---|---------|
| T | T | F |
| T | F | T |
| F | T | T |
| F | F | F |

It seems like we use "or" as exclusive sometimes and inclusive other times. My colleagues and I were talking about this at the lunch table the other day. One of my colleagues presented a simple example that illustrates this confusion. Waiter: "Would you like tea or coffee?" (exclusive or) Waiter: "Would you like cream or sugar?" (inclusive or) Patron: "I'd like both, thank you." I thought this was a great example. Another one of my lunchmates is a linguist and he asserted that when we say "or" we always mean inclusive or—even though it seems that we're using exclusive or. For example, the patron above could have asked for both coffee and tea, it is just that that isn't usually done. What about the door being open/closed example? A door can't be both open and closed. In particular, "the door is open" and "the door is closed" can't both be true at the same time. But this doesn't mean that that use of "or" is exclusive or. Exclusive or means that when both statements p and q are true, p XOR q is false. In the door example, we never encounter the "true or true" situation! According to Wikipedia the source of this argument is a 1971 article by Barrett and Stenner called "The Myth of the Exclusive 'Or'" (Mind, 80 (317), 116–121). No author has produced an example of an English or-sentence that appears to be false because both of its inputs are true. Certainly there are many or-sentences such as "The light bulb is either on or off" in which it is obvious that both disjuncts cannot be true. But it is not obvious that this is due to the nature of the word "or" rather than to particular facts about the world. Update: I had another example to illustrate this misconception. The sentence $x<0\text{ OR } x\ge 0$ is true for all $x$, right? And this is the logical (inclusive) OR, right? But this is exactly the same as "the door is open or the door is closed." Just as the door is either open or closed, but can't be both open and closed, one of the two inequalities $x<0$ and $x\ge 0$ must be true, but both can't be true simultaneously. The fact that both halves can't be true at the same time (mathematically) doesn't mean that two trues joined by this "or" is false. Posted in Math, Teaching | Tags: discrete mathematics, exclusive or, inclusive or, logic ## Responses 1. Reblogged this on imasciencegeek and commented: A fresh recap on logical use of inclusive and exclusive or. Been thinking about this recently in association with other systems.
By: imasciencegeek on July 9, 2012 at 7:47 pm 2. Thank you for addressing this topic. Most people seem to take the simplistic interpretation: a or b implies if a then not b. But as you point out, both common situations like the waiter, and set theory illustrated by Venn diagrams imply that, in my words “or includes and”. By: Thomas Swanson on April 1, 2013 at 5:25 pm
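(A small illustration appended by the editor, not part of the post or its comments: the two truth tables above are easy to generate in Python, where `or` is the inclusive or and `!=` on booleans behaves as exclusive or.)

```python
from itertools import product

print("p      q      p OR q   p XOR q")
for p, q in product([True, False], repeat=2):
    inclusive = p or q    # logical (inclusive) or
    exclusive = p != q    # exclusive or on booleans
    print(f"{str(p):6} {str(q):6} {str(inclusive):8} {str(exclusive):8}")
```

The only row where the two columns differ is the true/true row, which is exactly the case discussed in the post.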
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9501305222511292, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/10842/are-all-hawaiian-earrings-homeomorphic
Are all Hawaiian Earrings homeomorphic? The Hawaiian Earring is usually constructed as the union of circles of radius 1/n centered at (0,1/n): $\bigcup_1^\infty \left[ (0, \frac{1}{n}) + \frac{1}{n}S^1 \right]$. However, nothing stops us from using the sequence of radii $1/n^2$ or any other sequence of numbers $a_n$. I will call the Hawaiian Earring for a sequence (of distinct real numbers) $A = \{a_n\}$ the union of circles of radius $a_n$ centered at $(0, a_n)$. Let the union inherit its topological structure from $\mathbb{R}^2$. Are all of these spaces homeomorphic? If $a_n$ is a monotone decreasing sequence converging to 0, is its Hawaiian Earring homeomorphic to that of the sequence $\{1/n\}$? - 2 This is a comment about the tag. Nobody studies Hawaiian earring in general topology. The first encounter is when you study algebraic topology -- more precisely, the fundamental group. – Anweshi Jan 6 2010 at 0:53 1 Right, but it is not necessary to study algebraic topology to determine the (general) topological properties of the Hawaiian earring and/or to use it as a counterexample in (general) topology. – Qiaochu Yuan Jan 6 2010 at 2:57 What counterexample in general topology did you use the Hawaiian earring for? I would like to know. – Anweshi Jan 6 2010 at 12:50 1 I think Anweshi's right in that the Hawaiian earring is not at all pathological from the viewpoint of general topology (hence not a probable source of counterexamples): it's a compact, connected, locally path-connected subset of the Euclidean plane. I make this point in the commentary in John Armstrong's blog (linked to in my answer below). However, this particular question is a question about general topology, right? – Pete L. Clark Jan 6 2010 at 12:55 3 @Pete and Anweshi, I originally came across the Hawaiian earring in general topology. It was given as an example of something which is compact, connected, locally path-connected but NOT locally simply connected (due to problems at (0,0)) i.e. as a counterexample to the idea that locally path-connected implies locally simply connected. – qwerty1793 Aug 10 2010 at 22:17 4 Answers The Hawaiian earring is the one-point compactification of a countable union of open intervals. This description is independent of the radii used to construct it. A beautiful reference about this space is [Cannon, J. W.; Conner, G. R. The combinatorial structure of the Hawaiian earring group. Topology Appl. 106 (2000), no. 3, 225--271, MR1775709] If I recall correctly, they prove my claim there. - 1 See Example 1.25 in "Algebraic Topology" by Allen Hatcher. The Hawaiian Earring has a different fundamental group than the infinite wedge of circles. – John Mangual Jan 5 2010 at 20:57 2 In the wedge of infinitely many circles, you can construct a sequence (whose points are the antipodal points in each circle, of the joining point) which does not have a convergent subsequence: the wedge is, therefore, not compact. – Mariano Suárez-Alvarez Jan 5 2010 at 20:57 That is the wedge of infinitely many circles. Isn't that non-homeomorphic to the Hawaiian? – Anweshi Jan 5 2010 at 20:57 Oops, I screwed up by deleting my comment and re-posting in a different form.. The one of mine above, should have been the first. Sorry.
– Anweshi Jan 5 2010 at 20:59 4 @Anweshi: the special point in an infinite wedge of circles does not have a countable basis of neighborhoods, so in particular that wedge is not metrizable and, as a consequence, it is not a subspace of $\mathbb R^2$. – Mariano Suárez-Alvarez Jan 5 2010 at 21:03 Mariano is absolutely right. [And so, for the record, is Joel.] Coincidentally, an equivalent question came up on the blogosphere in late 2008, and I answered it. The original website is http://mathphdthoughts.blogspot.com/2008/09/hawaiian-earring.html Here is my post on that page: In order to understand the difference [between the Hawaiian earring and the wedge of circles, that is -- 1/5/10] you need to look explicitly at the definition of the topology on a CW-complex with infinitely many cells. By definition, this is the direct limit over the topologies of the subcomplexes with only finitely many cells. In other words, a subset of the CW complex is open iff its intersection with each individual cell is open. (Another way of saying this is that this is the strongest topology on the entire complex such that each of the inclusion maps from the cells is continuous. This is an instance of a "final topology." Why it is also sometimes called a "weak topology" is not so clear to me: the only reasonable explanation is that the meanings of 'weak' and 'strong' used to be the reverse of what they now are, which I believe is unfortunately the case.) Anyway, to compare the Hawaiian earring to the infinite bouquet of circles, look at the neighborhood bases of the central point P. On the Hawaiian earring, any open set containing P must contain the entire nth circle for all sufficiently large n, and for the remaining finitely many circles must contain an open interval about P on that circle. However, on the bouquet of circles, the neighborhoods of P are exactly the subsets which contain an open interval around P on each circle. This is a much larger collection of neighborhoods, and indeed the CW-topology is strictly finer than the earring topology. From this it is easy to see that the CW-topology is not compact, in any number of ways: (i) Find a closed, discrete infinite subset. (ii) Note that it is Hausdorff and apply the fact (which can be found in Rudin's Real and Complex Analysis) that any two compact Hausdorff topologies on the same set are incomparable. (iii) Convince yourself that any CW-complex is compact iff it has finitely many cells. This generated some discussion on John Armstrong's blog: http://unapologetic.wordpress.com/2008/09/12/hawai%ca%bbian-earrings/#comments The very last comment mentions that the earring is the one-point compactification of a countably infinite disjoint union of open intervals. This is a nice observation, the more so since it's completely obvious: the earring is a closed, bounded subset of the Euclidean plane, hence compact. Remove the central point from it and you do indeed get an infinite disjoint union of open intervals. (And, of course, the one-point compactification of a locally compact Hausdorff space is unique up to unique isomorphism.) This makes clear that the homeomorphism type of the earring does not depend on the radii of the circles -- so long as they converge to 0, of course. (Note that the monotonicity is a superfluous hypothesis.
If a sequence converges to $0$, you can reorder it so as to be monotonically decreasing, and the resulting subset of the plane can't tell the difference.) - You've been caught red handed! Locally "compact Hausdorff" indeed! – Harry Gindi Aug 20 2010 at 4:36 Well, if red-handed counts for things that happened over seven months ago... – Harry Gindi Aug 20 2010 at 4:37 The answer to your first question is No, they are not all homeomorphic. In the first question you did not insist that the $a_n$ converge to 0, and so let us entertain the idea of other crazy sequences. For example, we might let $a_n$ enumerate all the rational numbers. In this case, we would have circles of every rational radius. This is clearly not homeomorphic to the ordinary Hawaiian earring. For example, every convergent sequence in the ordinary Hawaiian earring lies on a path, but this is not true for the crazy dense version, since every point will be a limit of points on other circles. You can make a less-crazy counterexample by having just two limit points in the sequence $a_n$. For example, let $a_{2n}$ converge to 1/2 and $a_{2n+1}$ converge to 0. This example would be compact, but still different from the classical earring. A similar argument shows that any two sequences with different finite numbers of limit points will be non-homeomorphic. I believe that the homeomorphism type of the resulting earring will be determined by the homeomorphism type of the set $\{a_n\}$, plus the question of whether 0 is a limit point. - Well, that one isn't compact, so it isn't the same as the converging to 0 case, right? But you're right, this info will affect the homeomorphism type. – Joel David Hamkins Jan 6 2010 at 0:29 Wait a minute. I think the case of converging to infinity is the same as having any nonconvergent sequence, no? I don't think infinity is special in the way that 0 is special in this construction. – Joel David Hamkins Jan 6 2010 at 0:45 But the homeomorphism type of a sequence converging to infinity is the same as the homeomorphism type of a sequence converging to a finite number (but not including that number). – Pete L. Clark Jan 6 2010 at 0:50 Wait -- no, you're right again. The sequences $a_n = 2 - \frac{1}{n}$ and $a_n = n$ give rise to homeomorphic earrings. – Pete L. Clark Jan 6 2010 at 0:53 Yes, I think that's right. – Joel David Hamkins Jan 6 2010 at 1:13 Consider $\mathbb{R}^2 \subset \mathbb{C}\cup i \infty$. The inversion $z \mapsto \frac{1}{z}$ sends the circles $(a_n,0)+a_nS^1$ to the vertical lines $\{\frac{1}{2a_n}+i\mathbb{R}\}\cup i\infty$. Let $b_i=\frac{1}{a_i}$ for $i\geq 1$ and $b_0=0$. The function $f(x)=b_{\lfloor x \rfloor} + \{x\}(b_{\lfloor x + 1\rfloor} - b_{\lfloor x \rfloor})$ is a homeomorphism of $\mathbb{R}^+$ that sends the positive integers to $b_1, b_2,...$. Letting $\phi(x+iy)=\frac{1}{2}f(2x)+iy$, one gets an automorphism of the right-hand plane that sends the vertical lines $\frac{n}{2}+i\mathbb{R}$ to $\frac{1}{2a_n}+i\mathbb{R}$. Extending $\phi$ so that it sends $i\infty$ to $i\infty$, the function $z\mapsto \frac{1}{\phi(1/z)}$ then is a homeomorphism between the earring defined with $a_i$ and the standard one. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9280890822410583, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=2798155
Physics Forums ## finding equation of curve from simple graph Hi, can anyone please tell me how to find an equation from a graph. It's a fairly simple graph. A symmetrical curve with a maximum at y=0.5, and y=0 at x=0 and x=7.5 ... any suggestions appreciated! Recognitions: Gold Member Homework Help Science Advisor Generally, it is impossible to find a unique equation describing some particular graph. What you might do, is to construct equations that show SOME of the properties of the graph. You say you have a symmetric curve with maximum at (0,0.5) and a zero at (7.5,0). Well, is it a parabola, perhaps? I don't know. It might as well be the graph of a fourth-order polynomial, or some other function. If it IS the graph of a parabola, i.e., that it is a graph of a second-order polynomial, we might make some headway: 1. We have that: $$y=ax^{2}+bx+c$$, and we are to determine a,b,c. From your info, we have that the maximum occurs at x=0. Since, in general, we have that the maximum will occur at $$x=-\frac{b}{2a}$$, we see that b=0 in your case. 2. The zero: We know this occurs at x=7.5, therefore we have, using the info in (1): $$0=a*7.5^{2}+c\to{c}=-a*7.5^{2}$$ 3. Preliminary expression: We now have that: $$y=a*(x^{2}-7.5^{2})$$ 4. Value at maximum: We have that at x=0, y=0.5, we therefore get: $$0.5=-a*7.5^{2}\to{a}=-\frac{0.5}{7.5^{2}}$$ 5. Final expression We have now strictly determined what a second-order polynomial that fulfills all the given info looks like, namely, for example: $$y=-\frac{1}{112.5}(x-7.5)(x+7.5)$$ Recognitions: Homework Help arildno you've misread the info. Assuming it's a parabola, it's given that the zeroes are x=0 and x=7.5 and the maximum is at y=0.5. So far we can conclude it will be of the form y=kx(x-7.5) for some constant k. Since y=0.5 is a maximum and this occurs when x is in between the roots 0 and 7.5 - because parabolas are symmetric about their turning point, we can substitute in x=3.75 and y=0.5 to find k. ## finding equation of curve from simple graph Quote by arildno Generally, it is impossible to find a unique equation describing some particular graph. What you might do, is to construct equations that show SOME of the properties of the graph. You say you have a symmetric curve with maximum at (0,0.5) and a zero at (7.5,0). Well, is it a parabola, perhaps? I don't know. It might as well be the graph of a fourth-order polynomial, or some other function. If it IS the graph of a parabola, i.e., that it is a graph of a second-order polynomial, we might make some headway: 1. We have that: $$y=ax^{2}+bx+c$$, and we are to determine a,b,c. From your info, we have that the maximum occurs at x=0. Since, in general, we have that the maximum will occur at $$x=-\frac{b}{2a}$$, we see that b=0 in your case. 2. The zero: We know this occurs at x=7.5, therefore we have, using the info in (1): $$0=a*7.5^{2}+c\to{c}=-a*7.5^{2}$$ 3. Preliminary expression: We now have that: $$y=a*(x^{2}-7.5^{2})$$ 4. Value at maximum: We have that at x=0, y=0.5, we therefore get: $$0.5=-a*7.5^{2}\to{a}=-\frac{0.5}{7.5^{2}}$$ 5. Final expression We have now strictly determined what a second-order polynomial that fulfills all the given info looks like, namely, for example: $$y=-\frac{1}{112.5}(x-7.5)(x+7.5)$$ Sorry I didn't make it clear: the midpoint maximum for this symmetrical curve is at (3.75, 0.5). Yes it's a parabola though, and your answer is very helpful so thank you very much indeed!
Recognitions: Gold Member Homework Help Science Advisor Quote by Mentallic arildno you've misread the info. Assuming it's a parabola, it's given that the zeroes are x=0 and x=7.5 and the maximum is at y=0.5 So far we can conclude it will be of the form y=kx(x-7.5) for some constant k. Since y=0.5 is a maximum and this occurs when x is in between the roots 0 and 7.5 - because parabola's are symmetric about their turning point, we can substitute in x=3.75 and y=0.5 to find k. Oh, you are right! Me being overhasty, contradicting my entish motto.. Anyhow, it seems that the OP has gotten the gist of the idea.. Sorry I'm having another look at this and I'm not so sure I understand it now. Some questions about the method ... Quote by arildno $$0=a*7.5^{2}+c\to{c}=-a*7.5^{2}$$ What does the arrow signify ? Quote by arildno $$0.5=-a*7.5^{2}\to{a}=-\frac{0.5}{7.5^{2}}$$ How can 0.5 = -0.5/7.5^2 ? Recognitions: Homework Help The arrow is just like another line, you're re-arranging the equation. So rather than $$x+2=5$$ $$x=3$$ We instead have $$x+2=5 \rightarrow x=3$$ Quote by hurliehoo How can 0.5 = -0.5/7.5^2 ? It's not, take a look at the arrow Ok in that case I am definitely doing something wrong here. Using my understanding of the above method : 1--> max occurs at (3.75, 0.5). Therefore if x = -b / 2a then b = -7.5a 2--> 0 = (7.5^2)a - 7.5a + c therefore c = 7.5a - (7.5^2)a 3--> y = a((x^2) - (7.5^2) + 7.5) 4--> 0.5 = a((3.75^2) - (7.5^2) + 7.5) so a = -.0144 and I can then get b and c from there, however this seems to be completely wrong. Recognitions: Homework Help Quote by hurliehoo 3--> y = a((x^2) - (7.5^2) + 7.5) What's this line? Particularly, what is the a in front doing there? The equation is $$y=ax^2+bx+c$$ and you've already found that $$b=-7.5a$$ and $$c=7.5a - (7.5^2)a$$ $$c=7.5(1-7.5)a$$ $$c=7.5(-6.5)a$$ $$c=-48.75a$$ So now, knowing that the point (3.75,0.5) satisfies the equation, plug this into the general form of the equation and use these values of b and c to find a. $$y=ax^2+bx+c$$ $$0.5=a(3.75^2)-7.5a(3.75)-48.75a$$ You would then solve to find a, then use another point like (0,0) to find c (which would be 0) and then find b with (7.5,0). This is a pretty tedious method though. You already know that the parabola cuts the x-axis at 0 and 7.5 so you should instantly turn it into a factored parabola of the form $$y=kx(x-7.5)$$ for some constant value k, which you can easily find by plugging in the other known point (3.75,0.5). Note that plugging in (0,0) or (7.5,0) won't work because then you end up with 0=0 and that doesn't help find the value of k. Quote by Mentallic What's this line? Particularly, what is the a in front doing there? The equation is $$y=ax^2+bx+c$$ and you've already found that $$b=-7.5a$$ and $$c=7.5a - (7.5^2)a$$ $$c=7.5(1-7.5)a$$ $$c=7.5(-6.5)a$$ $$c=-48.75a$$ So now, knowing that the point (3.75,0.5) satisfies the equation, plug this into the general form of the equation and use these values of b and c to find a. $$y=ax^2+bx+c$$ $$0.5=a(3.75^2)-7.5a(3.75)-48.75a$$ You would then solve to find a, then use another point like (0,0) to find c (which would be 0) and then find b with (7.5,0). This is a pretty tedious method though. You already know that the parabola cuts the x-axis at 0 and 7.5 so you should instantly turn it into a factored parabola of the form $$y=kx(x-7.5)$$ for some constant value k, which you can easily find by plugging in the other known point (3.75,0.5). 
Note that plugging in (0,0) or (7.5,0) won't work because then you end up with 0=0 and that doesn't help find the value of k.

Regarding my first working, which I posted, of the first (agreed tedious) method: I forgot to include the x for b (duh). However, the next time I tried I got a similar working to yours ... this didn't work though, I think because c=-48.75a makes it non-zero. So even when c=0, which it has to be for the other two coordinates, I got y = -0.00796x^2 + 0.0597x, which when plugging in x=3.75 gives y=0.11194 instead of 0.5. Anyway ... your second method was infinitely simpler and does indeed work very well. Thanks!

Recognitions: Homework Help
Oh yes that's right, I totally missed that part too! We found that b=-7.5a and used the point (7.5,0) $$y=ax^2+bx+c$$ $$0=a(7.5)^2+(-7.5a)(7.5)+c$$ $$c=0$$ We screwed up the bx part, we substituted b=-7.5a but forgot about the x=7.5 haha ^^ Yeah, better if you stick to the simpler method... saves you easily making an inevitable mistake like we both did
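To make the final recipe concrete, here is a quick numerical check (a Python sketch added for illustration; it is not part of the original thread). It fits y = kx(x - 7.5) through the known maximum (3.75, 0.5) and verifies the zeros and the peak value.

```python
# Fit y = k*x*(x - 7.5) through the known maximum (3.75, 0.5)
# and check the zeros and the peak value.  Plain Python, no extra libraries.

def fit_k(x_peak=3.75, y_peak=0.5, root=7.5):
    # Solve y_peak = k * x_peak * (x_peak - root) for k.
    return y_peak / (x_peak * (x_peak - root))

def parabola(x, k, root=7.5):
    return k * x * (x - root)

k = fit_k()                                   # k = 0.5 / (3.75 * -3.75) = -0.0355...
assert abs(parabola(0.0, k)) < 1e-12          # zero at x = 0
assert abs(parabola(7.5, k)) < 1e-12          # zero at x = 7.5
assert abs(parabola(3.75, k) - 0.5) < 1e-12   # maximum value 0.5 at x = 3.75
print(f"k = {k:.6f}, so y = {k:.6f} * x * (x - 7.5)")
```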
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 38, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9513432383537292, "perplexity_flag": "middle"}
http://www.mywikibiz.com/Crank-Nicolson_method
# Crank-Nicolson method

In the mathematical subfield of numerical analysis, the Crank-Nicolson method is a finite difference method used for numerically solving the heat equation and similar partial differential equations. It is a second-order method in time, implicit in time, and is numerically stable. The method was developed by John Crank and Phyllis Nicolson in the mid 20th century. For diffusion equations (and many other equations), it can be shown that the Crank-Nicolson method is unconditionally stable. However, the approximate solutions can still contain (decaying) spurious oscillations if the ratio of time step to the square of space step is large (typically larger than 1/2). For this reason, whenever large time steps or high spatial resolution is necessary, the less accurate backward Euler method is often used, which is both stable and immune to oscillations.

## The method

Figure: The Crank-Nicolson stencil on a 1D problem.

The Crank-Nicolson method is based on central difference in space and the trapezoidal rule in time, giving second-order convergence in time. Equivalently, it is the average of forward Euler and backward Euler in time. For example, in one dimension, if the partial differential equation is $\frac{\partial u}{\partial t} = F\left(u, x, t, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}\right)$ then, letting $u(i \Delta x, n \Delta t) = u_{i}^{n}\,$, the Crank-Nicolson method is the average of the forward Euler method at n and the backward Euler method at n + 1: $\frac{u_{i}^{n + 1} - u_{i}^{n}}{\Delta t} = F_{i}^{n}\left(u, x, t, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}\right) \qquad \mbox{(forward Euler)}$ $\frac{u_{i}^{n + 1} - u_{i}^{n}}{\Delta t} = F_{i}^{n + 1}\left(u, x, t, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}\right) \qquad \mbox{(backward Euler)}$ $\frac{u_{i}^{n + 1} - u_{i}^{n}}{\Delta t} = \frac{1}{2}\left( F_{i}^{n + 1}\left(u, x, t, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}\right) + F_{i}^{n}\left(u, x, t, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}\right) \right) \qquad \mbox{(Crank-Nicolson)}$ The function F must be discretized spatially with a central difference. Note that this is an implicit method: to get the "next" value of u in time, a system of algebraic equations must be solved. If the partial differential equation is nonlinear, the discretization will also be nonlinear, so that advancing in time will involve the solution of a system of nonlinear algebraic equations, though linearizations are possible. In many problems, especially linear diffusion, the algebraic problem is tridiagonal and may be efficiently solved with the tridiagonal matrix algorithm, avoiding a costly full matrix inversion.

## Example: 1D diffusion

The Crank-Nicolson method is often applied to diffusion problems.
As an example, consider linear diffusion, $\frac{\partial u}{\partial t} = a \frac{\partial^2 u}{\partial x^2}$ whose Crank-Nicolson discretization is: $\frac{u_{i}^{n + 1} - u_{i}^{n}}{\Delta t} = \frac{a}{2 (\Delta x)^2}\left( (u_{i + 1}^{n + 1} - 2 u_{i}^{n + 1} + u_{i - 1}^{n + 1}) + (u_{i + 1}^{n} - 2 u_{i}^{n} + u_{i - 1}^{n}) \right)$ or, letting $r = \frac{a \Delta t}{2 (\Delta x)^2}$: $-r u_{i + 1}^{n + 1} + (1 + 2 r)u_{i}^{n + 1} - r u_{i - 1}^{n + 1} = r u_{i + 1}^{n} + (1 - 2 r)u_{i}^{n} + r u_{i - 1}^{n}\,$ which is a tridiagonal problem, so that $u_{i}^{n + 1}\,$ may be efficiently solved for with the tridiagonal matrix algorithm rather than by a much more costly full matrix inversion.

A quasilinear equation, such as (a minimal example, not the general case) $\frac{\partial u}{\partial t} = a(u) \frac{\partial^2 u}{\partial x^2}$ would lead to a nonlinear system of algebraic equations which could not be easily solved as above; however, it is possible in some cases to linearize the problem by using the old value for a, that is $a_{i}^{n}(u)\,$ instead of $a_{i}^{n + 1}(u)\,$. Other times, it may be possible to estimate $a_{i}^{n + 1}(u)\,$ using an explicit method and maintain stability.

## Example: 1D diffusion with advection for steady flow, with multiple channel connections

This formulation is commonly used for contamination problems in streams or rivers under steady flow conditions when information is only available in one dimension. Some information about the behavior across the cross section is often still wanted, and coupling several one-dimensional channels laterally is one way to obtain it without moving to a full two- or three-dimensional model. The quantity modeled here is, for example, the concentration of a solute contaminant in water. The problem is composed of three parts: the familiar diffusion equation (with Dx chosen constant), an advective component describing transport by a velocity field (with Ux also chosen constant), and a lateral interaction between neighboring longitudinal channels with rate constant k. $<0>\frac{\partial C}{\partial t} = D_x \frac{\partial^2 C}{\partial x^2} - U_x \frac{\partial C}{\partial x}- k (C-C_N)-k(C-C_M)$ where C is the concentration of the contaminant, and the subscripts N and M refer to the previous and the next channel.
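Before the multi-channel scheme is worked out below, here is a minimal numerical sketch of the plain linear-diffusion Crank-Nicolson step just derived. This illustration is an addition, not part of the original article; the grid size, time step, boundary values and initial profile are arbitrary example choices. For clarity it assembles small dense matrices and uses a direct solve, mirroring the matrix form $[AA][C^{j+1}]=[BB][C^{j}]+[d]$ used for the multi-channel problem later in this section; a production code would use the tridiagonal matrix algorithm instead.

```python
import numpy as np

def crank_nicolson_diffusion(u0, a, dx, dt, steps, left=0.0, right=0.0):
    """Advance u_t = a u_xx with Crank-Nicolson and fixed (Dirichlet) end values.

    u0 holds the interior values; 'left'/'right' are the boundary values.
    Dense matrices are used for clarity; a tridiagonal solver would be the
    efficient choice.
    """
    n = len(u0)
    r = a * dt / (2.0 * dx**2)

    # A u^{n+1} = B u^n + boundary terms, per the discretization above.
    A = (np.diag((1 + 2*r) * np.ones(n))
         + np.diag(-r * np.ones(n - 1), 1)
         + np.diag(-r * np.ones(n - 1), -1))
    B = (np.diag((1 - 2*r) * np.ones(n))
         + np.diag(r * np.ones(n - 1), 1)
         + np.diag(r * np.ones(n - 1), -1))

    u = np.array(u0, dtype=float)
    bc = np.zeros(n)
    bc[0], bc[-1] = 2*r*left, 2*r*right  # boundary value appears at both time levels
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u + bc)
    return u

# Example: diffusion of a narrow bump on [0, 1] with u = 0 at both ends.
nx = 49
dx = 1.0 / (nx + 1)
x = np.linspace(dx, 1 - dx, nx)
u0 = np.exp(-200 * (x - 0.5)**2)
u = crank_nicolson_diffusion(u0, a=1.0, dx=dx, dt=1e-3, steps=100)
print(u.max())  # the bump has spread out and lost height
```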
The Crank-Nicolson discretization (with i the index in space and j the index in time) is as follows: $<1> \frac{\partial C}{\partial t} = \frac{C_{i}^{j + 1} - C_{i}^{j}}{\Delta t}$ $<2>\frac{\partial^2 C}{\partial x^2}= \frac{1}{2 (\Delta x)^2}\left( (C_{i + 1}^{j + 1} - 2 C_{i}^{j + 1} + C_{i - 1}^{j + 1}) + (C_{i + 1}^{j} - 2 C_{i}^{j} + C_{i - 1}^{j}) \right)$ $<3>\frac{\partial C}{\partial x} = \frac{1}{2}\left( \frac{(C_{i + 1}^{j + 1} - C_{i - 1}^{j + 1})}{2 (\Delta x)} + \frac{(C_{i + 1}^{j} - C_{i - 1}^{j})}{2 (\Delta x)} \right)$ $<4> C= \frac{1}{2} (C_{i}^{j+1} + C_{i}^{j})$ $<5> C_N= \frac{1}{2} (C_{Ni}^{j+1} + C_{Ni}^{j})$ $<6> C_M= \frac{1}{2} (C_{Mi}^{j+1} + C_{Mi}^{j})$ Now we define the following constants, which will be helpful in the algebra: $\lambda= \frac{D_x\Delta t}{2 \Delta x^2}$ $\alpha= \frac{U_x\Delta t}{4 \Delta x}$ $\beta= \frac{k\Delta t}{2}$ Substituting <1>–<6>, together with α, β and λ, into <0>, and collecting the terms for the new time level (j+1) on the left and the terms for the present time level (j) on the right, we get: $-\beta C_{Ni}^{j+1}-(\lambda+\alpha)C_{i-1}^{j+1} +(1+2\lambda+2\beta)C_{i}^{j+1}-(\lambda-\alpha)C_{i+1}^{j+1}-\beta C_{Mi}^{j+1} = \beta C_{Ni}^{j}+(\lambda+\alpha)C_{i-1}^{j} +(1-2\lambda-2\beta)C_{i}^{j}+(\lambda-\alpha)C_{i+1}^{j}+\beta C_{Mi}^{j}$ If we are modeling the first channel, it can only be in contact with the following channel (M), so the expression simplifies to: $-(\lambda+\alpha)C_{i-1}^{j+1} +(1+2\lambda+\beta)C_{i}^{j+1}-(\lambda-\alpha)C_{i+1}^{j+1}-\beta C_{Mi}^{j+1} = +(\lambda+\alpha)C_{i-1}^{j} +(1-2\lambda-\beta)C_{i}^{j}+(\lambda-\alpha)C_{i+1}^{j}+\beta C_{Mi}^{j}$ In the same way, if the model is for the last channel, it can only be in contact with the previous channel (N), so the expression simplifies to: $-\beta C_{Ni}^{j+1}-(\lambda+\alpha)C_{i-1}^{j+1} +(1+2\lambda+\beta)C_{i}^{j+1}-(\lambda-\alpha)C_{i+1}^{j+1}= \beta C_{Ni}^{j}+(\lambda+\alpha)C_{i-1}^{j} +(1-2\lambda-\beta)C_{i}^{j}+(\lambda-\alpha)C_{i+1}^{j}$ To solve this kind of linear system of equations, boundary conditions must be given at the beginning of the channels: $C_0^{j}$: initial condition for the channel at the present time step; $C_{0}^{j+1}$: initial condition for the channel at the next time step; $C_{N0}^{j}$: initial condition for the previous channel to the one analyzed, at the present time step; $C_{M0}^{j}$: initial condition for the next channel to the one analyzed, at the present time step. For the last cell of the channels (z), the most convenient condition is an adiabatic one, which means $\frac{\partial C}{\partial x}_{x=z} = \frac{(C_{i + 1} - C_{i - 1})}{2 \Delta x} = 0$ This condition is satisfied if and only if $C_{i + 1}^{j+1} = C_{i - 1}^{j+1}$ Now let us see what happens if the problem is solved (in matrix form) using 3 channels and 5 nodes (including the initial boundary condition). We can express this as a linear system: $\begin{bmatrix}AA\end{bmatrix}\begin{bmatrix}C^{j+1}\end{bmatrix}=[BB][C^{j}]+[d]$ where $\mathbf{C^{j+1}} = \begin{bmatrix} C_{11}^{j+1}\\ C_{12}^{j+1} \\ C_{13}^{j+1} \\ C_{14}^{j+1} \\ C_{21}^{j+1}\\ C_{22}^{j+1} \\ C_{23}^{j+1} \\ C_{24}^{j+1} \\ C_{31}^{j+1}\\ C_{32}^{j+1} \\ C_{33}^{j+1} \\ C_{34}^{j+1} \end{bmatrix}$   and   $\mathbf{C^{j}} = \begin{bmatrix} C_{11}^{j}\\ C_{12}^{j} \\ C_{13}^{j} \\ C_{14}^{j} \\ C_{21}^{j}\\ C_{22}^{j} \\ C_{23}^{j} \\ C_{24}^{j} \\ C_{31}^{j}\\ C_{32}^{j} \\ C_{33}^{j} \\ C_{34}^{j} \end{bmatrix}$ The matrices AA and BB are built out of 4 different subarrays (remember that only three channels are considered in this example, but it covers the main cases discussed above): $\mathbf{AA} = \begin{bmatrix} AA1 & AA3 & 0\\ AA3 & AA2 & AA3\\ 0 & AA3 & AA1\end{bmatrix}$   and $\mathbf{BB} = \begin{bmatrix} BB1 & -AA3 & 0\\ -AA3 & BB2 & -AA3\\ 0 & -AA3 & BB1\end{bmatrix}$ where the blocks are the 4×4 arrays given next, together with a 4×4 block of zeros. Note that the AA and BB matrices have size 12×12: $\mathbf{AA1} = \begin{bmatrix} (1+2\lambda+\beta) & -(\lambda-\alpha) & 0 & 0 \\ -(\lambda+\alpha) & (1+2\lambda+\beta) & -(\lambda-\alpha) & 0 \\ 0 & -(\lambda+\alpha) & (1+2\lambda+\beta) & -(\lambda-\alpha)\\ 0 & 0 & -2\lambda & (1+2\lambda+\beta)\end{bmatrix}$   , $\mathbf{AA2} = \begin{bmatrix} (1+2\lambda+2\beta) & -(\lambda-\alpha) & 0 & 0 \\ -(\lambda+\alpha) & (1+2\lambda+2\beta) & -(\lambda-\alpha) & 0 \\ 0 & -(\lambda+\alpha) & (1+2\lambda+2\beta) & -(\lambda-\alpha)\\ 0 & 0 & -2\lambda & (1+2\lambda+2\beta) \end{bmatrix}$   , $\mathbf{AA3} = \begin{bmatrix} -\beta & 0 & 0 & 0 \\ 0 & -\beta & 0 & 0 \\ 0 & 0 & -\beta & 0 \\ 0 & 0 & 0 & -\beta \end{bmatrix}$   , $\mathbf{BB1} = \begin{bmatrix} (1-2\lambda-\beta) & (\lambda-\alpha) & 0 & 0 \\ (\lambda+\alpha) & (1-2\lambda-\beta) & (\lambda-\alpha) & 0 \\ 0 & (\lambda+\alpha) & (1-2\lambda-\beta) & (\lambda-\alpha)\\ 0 & 0 & 2\lambda & (1-2\lambda-\beta)\end{bmatrix}$   & $\mathbf{BB2} = \begin{bmatrix} (1-2\lambda-2\beta) & (\lambda-\alpha) & 0 & 0 \\ (\lambda+\alpha) & (1-2\lambda-2\beta) & (\lambda-\alpha) & 0 \\ 0 & (\lambda+\alpha) & (1-2\lambda-2\beta) & (\lambda-\alpha)\\ 0 & 0 & 2\lambda & (1-2\lambda-2\beta) \end{bmatrix}$ The d vector imposes the boundary conditions; in this example it is a 12×1 vector: $\mathbf{d} = \begin{bmatrix} (\lambda+\alpha)(C_{10}^{j+1}+C_{10}^{j}) \\ 0 \\ 0 \\ 0 \\ (\lambda+\alpha)(C_{20}^{j+1}+C_{20}^{j}) \\ 0 \\ 0 \\ 0 \\ (\lambda+\alpha)(C_{30}^{j+1}+C_{30}^{j}) \\ 0\\ 0\\ 0\end{bmatrix}$ One must then iterate the following equation to advance the solution in time: $\begin{bmatrix}C^{j+1}\end{bmatrix}=\begin{bmatrix}AA^{-1}\end{bmatrix}([BB][C^{j}]+[d])$

## Example: 2D diffusion

When extending into two dimensions on a uniform Cartesian grid, the derivation is similar and the results may lead to a system of band-diagonal equations rather than tridiagonal ones. The two-dimensional heat equation $\frac{\partial u}{\partial t} = a \left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right)$ can be solved with the Crank-Nicolson discretization of $\begin{align}u_{i,j}^{n+1} &= u_{i,j}^n + \frac{1}{2} \frac{a \Delta t}{(\Delta x)^2} \big[(u_{i+1,j}^{n+1} + u_{i-1,j}^{n+1} + u_{i,j+1}^{n+1} + u_{i,j-1}^{n+1} - 4u_{i,j}^{n+1}) \\ & \qquad {} + (u_{i+1,j}^{n} + u_{i-1,j}^{n} + u_{i,j+1}^{n} + u_{i,j-1}^{n} - 4u_{i,j}^{n})\big]\end{align}$ assuming that a square grid is used so that Δx = Δy. This equation can be simplified somewhat by rearranging terms and using the CFL number $\mu = \frac{a \Delta t}{(\Delta x)^2}$ For the Crank-Nicolson numerical scheme, a low CFL number is not required for stability; however, it is required for numerical accuracy.
We can now write the scheme as: $\begin{align}&(1 + 2\mu)u_{i,j}^{n+1} - \frac{\mu}{2}\left(u_{i+1,j}^{n+1} + u_{i-1,j}^{n+1} + u_{i,j+1}^{n+1} + u_{i,j-1}^{n+1}\right) \\ & \quad = (1 - 2\mu)u_{i,j}^{n} + \frac{\mu}{2}\left(u_{i+1,j}^{n} + u_{i-1,j}^{n} + u_{i,j+1}^{n} + u_{i,j-1}^{n}\right)\end{align}$

## Application in financial mathematics

Because a number of other phenomena can be modeled with the heat equation (often called the diffusion equation in financial mathematics), the Crank-Nicolson method has been applied to those areas as well. In particular, the differential equation of the Black-Scholes option pricing model can be transformed into the heat equation, and thus numerical solutions for option pricing can be obtained with the Crank-Nicolson method. This matters for extensions of the option pricing model that cannot be expressed in a closed-form analytic solution: numerical solutions remain available for them. However, for non-smooth final conditions (which occur for most financial instruments), the Crank-Nicolson method is not satisfactory, as numerical oscillations are not damped. For vanilla options, this results in oscillation in the gamma value around the strike price. Therefore, special damping initialization steps are necessary (e.g., a fully implicit finite difference method).

## References

• Crank, J. and Nicolson, P. (1947). "A practical method for numerical evaluation of solutions of partial differential equations of the heat conduction type". Proceedings of the Cambridge Philosophical Society 43, 50–64.
• Wilmott, P., Howison, S., Dewynne, J. (1995). The Mathematics of Financial Derivatives: A Student Introduction. Cambridge University Press.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 49, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9178689122200012, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/11226/list
## Return to Question Post Made Community Wiki by Harry Gindi 5 "locally cover" doesn't mean anything. I just meant cover. Typically, in the functor of points approach, one constructs the category of algebraic spaces by first constructing the category of locally representable sheaves for the global Zariski topology (Schemes) on $CRing^{op}$. That is, taking the full subcategory of $Psh(CRing^{op})$ which consists of objects $S$ such that $S$ is a sheaf in the global Zariski topology and $S$ has a cover by representables in the induced topology on $Psh(CRing^{op})$. This is the category of schemes. Then, one takes this category and equips it with the etale topology and repeats the construction of locally representable sheaves on this site (Sch with the etale topology) to get the category of algebraic spaces. Can we "skip" the category of schemes entirely by putting a different topology on $CRing^{op}$? My intuition is that since every scheme can be locally covered by affines, and every algebraic space can be locally covered by schemes, we can cut out the middle-man and just define algebraic spaces as locally representable sheaves for the global etale topology on $CRing^{op}$. If this ends up being the case, is there any sort of interesting further generalization before stacks, perhaps taking locally representable sheaves in a flat Zariski-friendly topology like fppf or fpqc? Some motivation: In algebraic geometry, all of our data comes from commutative rings in a functorial way (intentionally vague). All of the grothendieck topologies with nice notions of descent used in Algebraic geometry can be expressed in terms of commutative rings, e.g., the algebraic and geometric forms of Zariski's Main theorem are equivalent, we can describe etale morphisms in terms of etale ring maps, et cetera. What I'm trying to see is whether or not we can really express all of algebraic geometry as "left-handed commutative algebra + sheaves (including higher sheaves like stacks)". The functor of points approach for schemes validates this intuition in the simplest case, but does it actually generalize further? The main question is italicized, but feel free to tell me if I've incorrectly characterized something in the motivation or the background. 4 added a paragraph Typically, in the functor of points approach, one constructs the category of algebraic spaces by first constructing the category of locally representable sheaves for the global Zariski topology (Schemes) on $CRing^{op}$. That is, taking the full subcategory of $Psh(CRing^{op})$ which consists of objects $S$ such that $S$ is a sheaf in the global Zariski topology and $S$ has a cover by representables in the induced topology on $Psh(CRing^{op})$. This is the category of schemes. Then, one takes this category and equips it with the etale topology and repeats the construction of locally representable sheaves on this site (Sch with the etale topology) to get the category of algebraic spaces. Can we "skip" the category of schemes entirely by putting a different topology on $CRing^{op}$? My intuition is that since every scheme can be locally covered by affines, and every algebraic space can be locally covered by schemes, we can cut out the middle-man and just define algebraic spaces as locally representable sheaves for the global etale topology on $CRing^{op}$. 
If this ends up being the case, is there any sort of interesting further generalization before stacks, perhaps taking locally representable sheaves in a flat Zariski-friendly topology like fppf or fpqc? Some motivation: In algebraic geometry, all of our data comes from commutative rings in a functorial way (intentionally vague). All of the grothendieck topologies with nice notions of descent used in Algebraic geometry can be expressed in terms of commutative rings, e.g., the algebraic and geometric forms of Zariski's Main theorem are equivalent, we can describe etale morphisms in terms of etale ring maps, et cetera. What I'm trying to see is whether or not we can really express all of algebraic geometry as "left-handed commutative algebra + sheaves (including higher sheaves like stacks)". The functor of points approach for schemes validates this intuition in the simplest case, but does it actually generalize further? The main question is italicized, but feel free to tell me if I've incorrectly characterized something in the motivation or the background. 3 added 87 characters in body Typically, in the functor of points approach, one first constructs the category of algebraic spaces by first constructing the category of locally representable sheaves for the global Zariski topology (Schemes) on $CRing^{op}$. That is, taking the full subcategory of $Psh(CRing^{op})$ which consists of objects $S$ such that $S$ is a sheaf in the global Zariski topology and $S$ has a cover by representables in the induced topology on $Psh(CRing^{op})$. This is the category of schemes. Then, one takes this category and equips it with the etale topology and repeats the construction of locally representable sheaves on this site (Sch with the etale topology) to get the category of algebraic spaces. Can we "skip" the category of schemes entirely by putting a different topology on $CRing^{op}$? Some motivation: In algebraic geometry, all of our data comes from commutative rings in a functorial way (intentionally vague). All of the grothendieck topologies with nice notions of descent used in Algebraic geometry can be expressed in terms of commutative rings, e.g., the algebraic and geometric forms of Zariski's Main theorem are equivalent, we can describe etale morphisms in terms of etale ring maps, et cetera. What I'm trying to see is whether or not we can really express all of algebraic geometry as "left-handed commutative algebra + sheaves (including higher sheaves like stacks)". The functor of points approach for schemes validates this intuition in the simplest case, but does it actually generalize further? The main question is italicized, but feel free to tell me if I've incorrectly characterized something in the motivation or the background. 2 added 71 characters in body Typically, in the functor of points approach, one first constructs the category of algebraic spaces by first constructing the category of locally representable sheaves for the global Zariski topology (Schemes) on $CRing^{op}$. That is, taking the full subcategory of $Psh(CRing^{op})$ which consists of objects $S$ such that $S$ is a sheaf in the global Zariski topology and $S$ has a cover by representables in the induced topology on $Psh(CRing^{op})$. This is the category of schemes. Then, one takes this category and equips it with the etale topology and repeats to get the category of algebraic spaces. Can we "skip" the category of schemes entirely by putting a different topology on $CRing^{op}$? 
Some motivation: In algebraic geometry, all of our data comes from commutative rings in a functorial way (intentionally vague). All of the grothendieck topologies with nice notions of descent used in Algebraic geometry can be expressed in terms of commutative rings, e.g., the algebraic and geometric forms of Zariski's Main theorem are equivalent, we can describe etale morphisms in terms of etale ring maps, et cetera. What I'm trying to see is whether or not we can really express all of algebraic geometry as "left-handed commutative algebra + sheaves (including higher sheaves like stacks)". The functor of points approach for schemes validates this intuition in the simplest case, but does it actually generalize further? The main question is italicized, but feel free to tell me if I've incorrectly characterized something in the motivation or the background. 1 # Commutative rings to algebraic spaces in one jump? Typically, in the functor of points approach, one first constructs the category of algebraic spaces by first constructing the category of locally representable sheaves for the global Zariski topology (Schemes) on $CRing^{op}$. That is, taking the full subcategory of $Psh(CRing^{op})$ which consists of objects $S$ such that $S$ is a sheaf in the global Zariski topology and $S$ has a cover by representables in the induced topology on $Psh(CRing^{op})$. This is the category of schemes. Then, one takes this category and equips it with the etale topology and repeats to get the category of algebraic spaces. Can we "skip" the category of schemes entirely by putting a different topology on $CRing^{op}$? Some motivation: In algebraic geometry, all of our data comes from commutative rings in a functorial way (intentionally vague). All of the grothendieck topologies with nice notions of descent used in Algebraic geometry can be expressed in terms of commutative rings, e.g., the algebraic and geometric forms of Zariski's Main theorem are equivalent. What I'm trying to see is whether or not we can really express all of algebraic geometry as "left-handed commutative algebra + sheaves (including higher sheaves like stacks)". The functor of points approach for schemes validates this intuition in the simplest case, but does it actually generalize further? The main question is italicized, but feel free to tell me if I've incorrectly characterized something in the motivation or the background.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9156585931777954, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/280401/equivalence-of-intrinsic-and-extrinsic-metrics-of-embedded-manifolds
# Equivalence of intrinsic and extrinsic metrics of embedded manifolds. Say a compact n-manifold $\mathcal{M}$ is embedded in $\mathbb{R}^m$, $m > n$. If $d_{\mathcal{M}}$ is the geodesic distance on $\mathcal{M}$, and $d$ the Euclidean distance in $\mathbb{R}^m$, then clearly small $d_{\mathcal{M}}$ implies small $d$. It seems that small $d$ should imply small $d_{\mathcal{M}}$ (since $\mathcal{M}$ is compact, it should have positive reach $\sigma > 0$). Is this known to be true? Thank you. - ## 1 Answer Welcome to Math.SE! The answer is affirmative. Otherwise you would have two sequences $x_k,y_k$ of points in $\mathcal{M}$ such that $d(x_k,y_k)\to 0$ but $d_\mathcal{M}(x_k,y_k)\ge \epsilon$. Since $\mathcal{M}$ is compact, there is a point $p$ to which these sequences converge in the $d$ metric. The point $p$ has a neighborhood $U$ in $\mathbb R^m$ such that there is a diffeomorphism $\Phi$ of $U$ onto some $V\subset\mathbb R^m$ which straightens $U\cap\mathcal{M}$ into a piece of a hyperplane. Since the geodesic distance between $\Phi(x_k)$ and $\Phi(y_k)$ tends to zero as $k\to\infty$, we have a contradiction. The argument is nonconstructive, as it has to be. The example of a very flat ellipse $x^2+(y/\epsilon)^2=1$ shows that there is no universal upper estimate on $d_{\mathcal M}$ in terms of $d$. (In contrast to $d\le d_{\mathcal M}$). -
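The flat-ellipse example is easy to check numerically. The sketch below is an added illustration, not part of the original answer; the parameter values and the polygonal approximation of arc length are arbitrary choices. It compares the Euclidean distance and the intrinsic distance between the points $(0,\epsilon)$ and $(0,-\epsilon)$ on the ellipse $x^2+(y/\epsilon)^2=1$: as $\epsilon\to 0$ the Euclidean distance tends to $0$ while the intrinsic distance stays close to $2$, so no uniform bound on $d_{\mathcal M}$ in terms of $d$ is possible.

```python
import numpy as np

def distances(eps, n=200000):
    # Parametrize the ellipse as (cos t, eps*sin t).
    # The shortest path on the curve from (0, eps) at t = pi/2 to
    # (0, -eps) at t = -pi/2 goes around a tip; by symmetry both arcs
    # have the same length, approximated here by a fine polygon.
    t = np.linspace(-np.pi / 2, np.pi / 2, n)
    pts = np.column_stack((np.cos(t), eps * np.sin(t)))
    intrinsic = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    euclidean = 2 * eps
    return euclidean, intrinsic

for eps in (0.1, 0.01, 0.001):
    d, dM = distances(eps)
    print(f"eps={eps}: d = {d:.4f}, d_M approx {dM:.4f}")
# d shrinks with eps while d_M stays close to 2.
```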
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9662750959396362, "perplexity_flag": "head"}
http://mathoverflow.net/users/10475?tab=recent
# Jeff Harvey 2,724 Reputation 2540 views ## Registered User Name Jeff Harvey Member for 2 years Seen 22 mins ago Website Location Chicago Age 58 Professor of Physics at the University of Chicago | | | | |-------|----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 20h | comment | Derivation of Bessel functionsNot a comment on the math, but if you really want to reformulate Quantum Mechanics in terms of classical fluid dynamics and want to be taken seriously rather than viewed as a crank then you are first obligated to understand how Quantum Mechanics is currently formulated and used in some detail. | | May14 | comment | What is the “fundamental theorem of invariant theory” ?@user6818 I think you need to read the paper you are citing more carefully. They do specify the $SU(N_c)$ representations on the bottom of p. 9 and top of p.10. They consider four cases and in each case they specify the $SU(N_c)$ representation content (the $R_i$ in their notation). | | May14 | comment | What is the “fundamental theorem of invariant theory” ?@user6818 As in the question is not well posed to start with and second it is not written in language that most mathematicians will understand. I partially understand what you are asking because I happen to be a physicist. I'd suggest that you either ask the question on physics stack exchange or make the effort to translate your question into a precise mathematical question framed in language that mathematicians will understand. Otherwise your question will be and should be closed since this is a site for research level math questions. | | May13 | comment | What is the “fundamental theorem of invariant theory” ?Your question doesn't contain enough information for a sensible answer until you also specify the $SU(N_c)$ representation of the fields and also their statistics (bosons or fermions). | | May1 | comment | Is there a “right” proof of Riemann’s Theta Relation?It is so annoying not to be able to edit comments! Please in the above read $t A A$ to be the transpose of $A$ times $A$ and interpret the $\frac{1}{2}$ as a prefactor in front of a $4 \times 4$ matrix. | | May1 | comment | Is there a “right” proof of Riemann’s Theta Relation?Mumford explains that the relation depends on a matrix $A$ satisfying $tAA=I$ with $I$ the identity matrix ( I have chosen $m=2$ in his notation on p.14 of Tata I). I think you must have a typo because your matrix does not obey this identity. I believe you want $$A= \frac{1}{2} \begin{matrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \end{matrix}$$ What is special about this matrix is that it acts as a triality transformation on the weight space of so(8)\$, transforming vector weights into spinor weights. This is related to my answer to this question. 
| | Apr13 | comment | Is there a “right” proof of Riemann’s Theta Relation?Isn't the usual $\theta$ function as defined in Mumford's Tata lectures in the first chapter $\theta: {\mathbb C} \times {\mathbb H} \rightarrow {\mathbb C}$ with ${\mathbb H}$ the upper half plane? What is your "usual" theta function? | | Apr13 | answered | Is there a “right” proof of Riemann’s Theta Relation? | | Apr13 | comment | Is there a “right” proof of Riemann’s Theta Relation?There is a nice interpretation in terms of characters of affine $Spin(8)$ that involves triality and is related to supersymmetry in string theory. I can write up some details tomorrow unless someone else beats me to it. | | Apr10 | comment | Why are currents named currents?To a physicist it is strange to see a current in a Lorentz invariant theory written as a 2-form in space rather than as a 3-form in space-time. | | Mar27 | comment | Algebraic independence of $E_2$, $E_4$ and $E_6$Dear Jonas, To expand slightly on Emerton's comment, $E_4$ and $E_6$ are modular forms, so in particular $E_4(-1/\tau)=\tau^4 E_4(\tau)$ and $E_6(-1/\tau)= \tau^6 E_6(\tau)$. On the other hand $E_2$ is not a modular form, it is only quasimodular. It obeys $E_2(-1/\tau)= \tau^2 E_2(\tau)- 6 i \tau/\pi$. No algebraic combination of $E_4,E_6$ can transform this way under $\tau \rightarrow -1/\tau$ so $E_2$ is algebraically independent of $E_4,E_6$. | | Mar13 | comment | Geometric treatment of the Ward-Takahashi identityIn many cases no invariant measure exists. There are anomalies. The passage from finite dimensional integrals to path integrals is more subtle than you indicate here. | | Mar13 | comment | Geometric treatment of the Ward-Takahashi identityYour questions tend to be too terse in my opinion. Some context would be helpful. Why in the context of differential geometry? Modern from what point of view? Many symmetries of classical field theory Lagrangians do not survive in the quantum theory. There are anomalies, aka anomalous Ward-Takahashi identities. This is actually a huge subject so it is hard to answer without knowing in more detail what you are actually interested in. | | Mar11 | comment | Applications of n-dimensional crystallographic groupsI added a few early references. | | Mar11 | revised | Applications of n-dimensional crystallographic groupsadded references | | Mar11 | answered | Applications of n-dimensional crystallographic groups | | Mar4 | comment | What’s about “quantum modular forms”? | | Feb22 | comment | How should a professor feel peace of mind when a student leaves academia?Do you have children? | | Jan31 | comment | Mock modular forms and (indefinite) quadratic formsI don't know the answer to your question, but one place that Appell-Lerch sums show up is in the Polar part of meromorphic Jacobi forms. The recent paper arXiv:1208.4074 by Dabholkar, Murthy and Zagier has a detailed treatment of the relation between meromorphic Jacobi forms and mock modular forms along with many examples. | | Jan30 | comment | Mock modular forms and (indefinite) quadratic formsWhy isn't there any $z$ dependence on the rhs of your equation for $f(q,z,-1)$? Also, can you provide details on the relation between f(q,z,1) and a specific mock modular form? | | Jan28 | comment | Inclusion of information about external particles to calculate scattering amplitudes in string theoryI have my doubts that this is a good question for MO as it is standard textbook material. 
String backgrounds determine a CFT, in this CFT there is a state-operator correspondence and the vertex operators used in string scattering computations are given by this correspondence in terms of the external scattering states one is interested in. This is discussed in Chapters 2,3 of volume I of Polchinski's book on string theory. Perhaps you want something else though as I find the mention of bound states and lifetimes and the integral over $d\tau$ confusing. Where did you get your schematic equation? | | Jan27 | comment | Quantization of a classical system (e.g. the case of a billard)I see no reason at all why one needs to have a classical Hamiltonian as a starting point for the description of an intrinsically quantum system. The logical map is quantum --> classical in physics, not the other way around since the world is fundamentally described by quantum mechanics and classical behavior only arises in a limit. Classical systems do play a more fundamental role in defining measurement in quantum mechanics, but that is a different and more complicated can of worms. | | Jan27 | comment | Quantization of a classical system (e.g. the case of a billard)@Uwe If you are a physicist then the "right" quantization is determined by comparison to experiment. | | Jan17 | accepted | Relation between TQFT and Wilson lines, boundary conditions, surface defects etc | | Jan16 | answered | Relation between TQFT and Wilson lines, boundary conditions, surface defects etc | | Jan8 | revised | Meaning of a phrase from “The algebra of grand unified theories”.additional explanation added | | Jan8 | answered | Meaning of a phrase from “The algebra of grand unified theories”. | | Dec28 | comment | What do correlation functions compute in CFT? Of course CFT also shows up in string theory where the fields have a different interpretation, but I thought I would give you the historical origins of CFT since it has the most direct physical interpretation. | | Dec28 | answered | What do correlation functions compute in CFT? | | Dec28 | comment | What do correlation functions compute in CFT? Or are you asking about a purely mathematical interpretation of the correlation functions, which is how I read your question? | | Dec24 | comment | Beginner question on constraints of a wave function in quantum mechanicsI don't think you are describing Griffith's argument correctly. He does not claim that $\psi \rightarrow 0$ as $x \rightarrow \infty$ as a consequence of the time independence of the normalization of $\psi$. He claims $\psi \rightarrow 0$ as a consequence of $\psi$ being normalizable. However he then has a horrible footnote warning about good mathematicians with pathological counterexamples and saying that in physics the wave function always goes to zero at infinity. Except of course he later uses a formalism for scattering problems where $\psi(x,t)$ behaves as $e^{ikx}$ at infinity. ;) | | Dec18 | answered | Explanation for E_8’s torsion | | Dec17 | comment | The Unreasonable Effectiveness of Physics in Mathematics. Why ? What/how to catch?@Alexander Chervov The Dyson quote is from his "Missed Opportunities" Gibbs lecture in 1972. He says "As a working physicist, I am acutely aware of the fact that the marriage between mathematics and physics, which was so enormously fruitful in past centuries, has recently ended in divorce. 
Discussing this divorce, the physicist Res Jost remarked the other day, `as usual in such affairs, one of the two parties has clearly got the worst of it.'" Dyson clearly thought that physics had got the worst of the divorce. | | Dec14 | awarded | ● Nice Answer | | Dec14 | comment | Mathematician trying to learn string theoryIt would be fun, but would require a lot of time and dedication from both parties. I'm tempted to ask the converse, how does a physicist who knows QM, QFT and string theory learn algebraic geometry, or at least the parts that are most relevant to string theory? The standard answer seems to be to read the first few chapters of Griffiths&Harris and lecture notes by Candelas and others but I wonder if there is a better answer that doesn't involve a willing algebraic geometer. | | Dec13 | answered | Mathematician trying to learn string theory | | Dec13 | comment | Mathematician trying to learn string theoryUnderstanding Lagrangians and symmetries is important, but some theories don't have Lagrangians. In fact such theories, like the (2,0) theory in six dimensions, are the focus of much recent research. Also, while Petr Horava has done much excellent work, including work that foreshadowed the discovery of D-branes, Joe Polchinski is the person who is generally credited with discovering D-branes in string theory. | | Dec9 | awarded | ● Guru | | Dec9 | accepted | Does Physics need non-analytic smooth functions? | | Dec4 | comment | Is there “harmonic potential” for classical bosonic string?There seem to be many gins with <10 reputation points. It is not necessary to create a new id every time you ask a question. | | Dec3 | comment | Stringy version of RRBarton Zwiebach's book "A first course on string theory" will give you the simplest answer to this question and your other question. This question is not really appropriate for MO. Also, you should not answer your own question with a follow-up question. | | Nov28 | comment | Does Physics need non-analytic smooth functions? | | Nov28 | revised | Does Physics need non-analytic smooth functions?improved grammar | | Nov26 | awarded | ● Good Answer | | Nov26 | comment | Quantum mechanics basicsWhen you quantize the electromagnetic field the wave function is most definitely not the electric and magnetic fields. The fields are operators which act on a complex valued wave function, just as in the non-relativistic Schrodinger equation. You are confusing complex valued fields with the complex valued wave function of quantum mechanics. They are two different things. | | Nov26 | awarded | ● Mortarboard | | Nov26 | awarded | ● Nice Answer | | Nov26 | answered | Does Physics need non-analytic smooth functions? |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9333238005638123, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Range_of_a_projectile
# Range of a projectile

Figure: The path of this projectile launched from a height y0 has a range d.

In physics, assuming a flat Earth with a uniform gravity field, and no air resistance, a projectile launched with specific initial conditions will have a predictable range. The following applies for ranges which are small compared to the size of the Earth. For longer ranges see sub-orbital spaceflight.

• g: the gravitational acceleration—usually taken to be 9.81 m/s² (32 ft/s²) near the Earth's surface
• θ: the angle at which the projectile is launched
• v: the velocity at which the projectile is launched
• y0: the initial height of the projectile
• d: the total horizontal distance travelled by the projectile

When neglecting air resistance, the range of a projectile will be $d = \frac{v \cos \theta}{g} \left( v \sin \theta + \sqrt{v^2 \sin^2 \theta + 2gy_0} \right)$ If (y0) is taken to be zero, meaning the object is being launched on flat ground, the range of the projectile will then simplify to $d = \frac{v^2 \sin 2 \theta}{g}$

## Ideal projectile motion

Ideal projectile motion assumes that there is no air resistance and no change in gravitational acceleration. This assumption simplifies the mathematics greatly, and is a close approximation of actual projectile motion in cases where the distances travelled are small. Ideal projectile motion is also a good introduction to the topic before adding the complications of air resistance.

### Derivations

Figure: A launch angle of 45 degrees goes the farthest.

#### Flat ground

Figure: Range of a projectile (in vacuum).

First we examine the case where (y0) is zero. The horizontal position of the projectile is $x(t) = v t \cos \theta$ In the vertical direction $y(t) = v t \sin \theta - \frac{1} {2} g t^2$ We are interested in the time when the projectile returns to the same height it originated at, so we set y to zero: $0 = v t \sin \theta - \frac{1} {2} g t^2$ By factoring: $t = 0$ or $t = \frac{2 v \sin \theta} {g}$ The first solution corresponds to when the projectile is first launched. The second solution is the useful one for determining the range of the projectile. Plugging this value for (t) into the horizontal equation yields $x = \frac {2 v^2 \cos \theta \, \sin \theta } {g}$ Applying the trigonometric identity $\sin(x+y) = \sin x \, \cos y \ + \ \sin y \, \cos x$ with x and y equal, $\sin 2x = 2 \sin x \, \cos x$, allows us to simplify the solution to $d = \frac {v^2 \sin 2 \theta}{g}$ Note that when (θ) is 45°, the solution becomes $d = \frac {v^2} {g}$

#### Uneven ground

Now we will allow (y0) to be nonzero. Our equations of motion are now $x(t) = v t \cos \theta$ and $y(t) = y_0 + v t \sin \theta - \frac{1}{2} g t^2$ Once again we solve for (t) in the case where the (y) position of the projectile is at zero (since this is how we defined our starting height to begin with) $0 = y_0 + v t \sin \theta - \frac{1} {2} g t^2$ By applying the quadratic formula we find two solutions for the time. After several steps of algebraic manipulation $t = \frac {v \sin \theta} {g} \pm \frac {\sqrt{v^2 \sin^2 \theta + 2 g y_0}} {g}$ The square root must be a positive number, and since the velocity and the cosine of the launch angle can also be assumed to be positive, the solution with the greater time will occur when the positive of the plus or minus sign is used.
Thus, the solution is $t = \frac {v \sin \theta} {g} + \frac {\sqrt{v^2 \sin^2 \theta + 2 g y_0}} {g}$ Solving for the range once again $d = \frac {v \cos \theta} {g} \left [ v \sin \theta + \sqrt{v^2 \sin^2 \theta + 2 g y_0} \right]$

#### Angle of impact

The angle ψ at which the projectile lands is given by: $\tan \psi = \frac {-v_y(t_d)} {v_x(t_d)} = \frac {\sqrt { v^2 \sin^2 \theta + 2 g y_0 }} { v \cos \theta}$ For maximum range, this results in the following equation (writing $C = \frac{2 g y_0}{v^2}$): $\tan^2 \psi = \frac { 2 g y_0 + v^2 } { v^2 } = C+1$ Rewriting the solution for the range-maximizing launch angle θ, we get: $\tan^2 \theta = \frac { 1 - \cos^2 \theta } { \cos^2 \theta } = \frac { v^2 } { 2 g y_0 + v^2 } = \frac { 1 } { C + 1 }$ Multiplying this by the equation for $\tan^2 \psi$ gives: $\tan^2 \psi \, \tan^2 \theta = \frac { 2 g y_0 + v^2 } { v^2 } \frac { v^2 } { 2 g y_0 + v^2 } = 1$ Because of the trigonometric identity $\tan (\theta + \psi) = \frac { \tan \theta + \tan \psi } { 1 - \tan \theta \tan \psi }$, this means that θ + ψ must be 90 degrees.

## Actual projectile motion

In addition to air resistance, which slows a projectile and reduces its range, many other factors also have to be accounted for when actual projectile motion is considered.

### Projectile characteristics

Generally speaking, a projectile with greater volume faces greater air resistance, reducing the range of the projectile. This can be modified by the projectile shape: a tall and wide, but short projectile will face greater air resistance than a low and narrow, but long, projectile of the same volume. The surface of the projectile also must be considered: a smooth projectile will face less air resistance than a rough-surfaced one, and irregularities on the surface of a projectile may change its trajectory if they create more drag on one side of the projectile than on the other. Mass also becomes important, as a more massive projectile will have more kinetic energy, and will thus be less affected by air resistance. The distribution of mass within the projectile can also be important, as an unevenly weighted projectile may spin undesirably, causing irregularities in its trajectory due to the Magnus effect. If a projectile is given rotation along its axis of travel, irregularities in the projectile's shape and weight distribution tend to be canceled out. See rifling for a fuller explanation.

### Firearm barrels

For projectiles that are launched by firearms and artillery, the nature of the gun's barrel is also important. Longer barrels allow more of the propellant's energy to be given to the projectile, yielding greater range. Rifling, while it may not increase the average (arithmetic mean) range of many shots from the same gun, will increase the accuracy and precision of the gun.
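As a quick numerical illustration of the formulas above (an added sketch, not part of the original article; the launch speed and height are arbitrary example values), the following computes the flight time, range, and impact angle for a throw from an initial height, and checks that θ + ψ comes out to 90° at the range-maximizing angle implied by the relation for tan²θ given in the angle-of-impact derivation.

```python
import math

g = 9.81  # m/s^2

def range_and_impact(v, theta, y0):
    """Flight time, horizontal range and impact angle for launch speed v,
    launch angle theta (radians) and initial height y0, with no air resistance."""
    t = (v * math.sin(theta) + math.sqrt((v * math.sin(theta))**2 + 2 * g * y0)) / g
    d = v * math.cos(theta) * t
    psi = math.atan2(math.sqrt((v * math.sin(theta))**2 + 2 * g * y0),
                     v * math.cos(theta))
    return t, d, psi

v, y0 = 10.0, 2.0
# Range-maximizing angle: tan(theta) = v / sqrt(v^2 + 2*g*y0), from tan^2(theta) = 1/(C+1).
theta_opt = math.atan(v / math.sqrt(v**2 + 2 * g * y0))
t, d, psi = range_and_impact(v, theta_opt, y0)
print(f"optimal launch angle = {math.degrees(theta_opt):.2f} deg")
print(f"range = {d:.3f} m, impact angle = {math.degrees(psi):.2f} deg")
print(f"theta + psi = {math.degrees(theta_opt + psi):.2f} deg")  # ~90
```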
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 23, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8943004608154297, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/39184?sort=votes
## Regular vs. Irregular Vertices in a Mesh

Hi everybody, Reading about Geometry Processing, I have realized that people in this area are very interested in regular vertices (degree = 6) rather than irregular ones. Can anybody give me reasons why this is an important property? Suppose that we can have a mesh with all regular vertices (genus 1) but with very poor geometric positioning of the vertices. So, is there any specific reason why connectivity is so important regardless of the geometry? Thanks. - 1 What does this have to do with Riemannian geometry? – Darsh Ranjan Sep 18 2010 at 3:05 You might want to change the tags for this topic to mathematical modeling, rather than riemannian-geometry, unless you have an explanation for why you feel riemannian-geometry is an appropriate tag. – sleepless in beantown Sep 18 2010 at 4:40

## 2 Answers

Nima, geometry processing is used in meshes for 3-dimensional modeling of real world structures or physical items. If you try to create a regularly spaced lattice in 3-dimensions such as $\mathbb{R}^3$, a starting point would be the regions of $\mathbb{Z}^3$ which are enclosed by the object in question. That would be the lattice which contains a point at every integer value of $x, y,$ and $z$ within the object's boundaries. If you connect an edge between every two lattice points that are Euclidean distance $1$ apart, $(\Delta x)^2+(\Delta y)^2+(\Delta z)^2 = 1$, then each vertex will have six edges from it leading to the six lattice points at the relative positions $\Delta x=-1$, $\Delta x=+1$, $\Delta y=-1$, $\Delta y=+1$, $\Delta z=-1$, $\Delta z=+1$. Notice that the boundaries of such a mesh would be clipped and would not match the boundaries of the object. Such a mesh could be extended by having the outermost lattice points moved from coinciding with $(x,y,z) \in \mathbb{Z}^3$ to a position in $\mathbb{R}^3$ which is on the boundary of the object. Now, the lattice is no longer regular, but the vertex connectivity is still regular. (Alternatively, additional lattice points could be added on the object boundary in $\mathbb{R}^3$ and connected with the underlying regular lattice, leading to irregular vertex connectivity.) This sort of 6-connectivity at vertices in 3-dimensions for lattice simulations allows for simulations such as 3-d lattice Boltzmann numerical simulations. This type of 6-connectivity is also known as the Von Neumann neighborhood in cellular automata simulations, and in the 2-dimensional case the analogous 4-connectivity Von Neumann neighborhood for a point at $(a,b)$ consists of the lattice points $(a-1,b), (a+1,b), (a,b-1),$ and $(a,b+1)$. It is possible to associate a value at every point of the lattice and allow that value to represent a particular material property. If you take a solid object represented by a 3-d lattice and model applying a physical pressure to it, then the regular object modeled by a regular lattice with regular geometry will become deformed. The lattice points, originally defined as being located at integer values, can be allowed to move in $\mathbb{R}^3$, associating a real value with each of its 3-dimensional coordinates. Thus, deformation modeling can be carried out, such as when an automobile's fender changes its shape in response to a collision with another object. In the realm of hydrodynamics and computational fluid dynamics, there are two ways to use lattice models with 6-connectivity at each vertex.
In Eulerian models, the lattice points stay fixed while the values associated with the lattice points are recalculated and changed during each step of the numerical simulation. In Lagrangian models, the lattice points are defined at particular positions (not necessarily coinciding with integer lattice points in $\mathbb{Z}^3$), and the vertices in the mesh are moved in space during steps of the simulation. If you give me more information about what specific topic you are studying, or what you wish to use "geometry processing" for (mesh creation?, mesh assessment?, mesh refinement?, mesh validation?), perhaps I could provide more detailed answers. In response to your comment about which is better, what is better has to be defined in terms of the end goal to be reached: numerical precision, model fidelity to the underlying physical objects, speed of computation. If the numerical precision of the simulator will blow up with slim triangles (for example, triangles with one angle of less than 5 degrees), then it's better to accept irregular vertices. If the simulation would perform better (faster, fewer errors, etc.) with regular and similar vertex connectivity, then allow the irregular triangles and banish irregular nonconforming vertices. There is always a trade-off to make: precision vs. computational time; computational complexity vs. fidelity to the physical object, etc. It is easier to calculate "subdivision surfaces" for regular meshes. It is easier to speed up calculations at homogeneous regions by using larger triangulations and to have more accuracy at inhomogeneous regions and irregular boundaries by using smaller triangulations. Tetrahedral meshing has to use non-identical tetrahedra, because it is impossible to completely fill space with identical regular tetrahedra. It is possible to fill space with regular cubes or rectangular boxes. That is one reason that 6-connectivity at vertices is frequently used in 3-d meshes. - Thanks for your explanations. I am wondering if we have an arbitrary mesh with some irregular vertices, let's say vertices of degree 12, 15, ..., is there any benefit in doing a remeshing which reduces the variance of the vertices' degrees? Because there are some existing remeshing papers in Computer Graphics that try to remove high valence vertices. Now, I wonder which one is better? 1. a mesh with some irregular vertices but good looking triangles. 2. a remeshing of that mesh with fewer irregular vertices but with very slim and weird triangles. – Nima Sep 18 2010 at 5:06 @Nima, I would think that "better" has to be defined by your ultimate goal. If the numerical precision of the simulator will blow up with slim triangles (for example, triangles with one angle of less than 5 degrees), then it's better to accept irregular vertices. If the simulation would perform better (faster, fewer errors, etc.) with regular and similar vertex connectivity, then allow the irregular triangles and banish irregular nonconforming vertices. There is always a trade-off to make: precision vs. computational time; computational complexity vs. fidelity to the physical object, etc. – sleepless in beantown Sep 18 2010 at 22:47

Some geometric algorithms work better around regular vertices. For example, Loop subdivision is a procedure to turn a triangular mesh into a smooth surface (i.
e., a "subdivision surface"). The amount of smoothness varies, however: IIRC, the normal vector to the surface is continuous everywhere, but the curvature is only continuous away from vertices of degree other than 6. - Exactly, I am looking for reasons such as Loop Subdivision. Now, I can only see Subdivision Surfaces and Mesh Compression. Do you know any more? – Nima Sep 18 2010 at 5:07
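To make the notion of vertex regularity concrete, here is a small Python sketch (the helper name and the toy mesh are made up purely for illustration) that computes the valence of every vertex of a triangle mesh given as vertex-index triples and counts how many vertices are regular in the sense used above, i.e. have valence 6:

```python
from collections import defaultdict

def vertex_valences(triangles):
    """Valence (degree) of each vertex in a triangle mesh.

    `triangles` is an iterable of 3-tuples of vertex indices; each
    undirected edge is counted once, so shared edges are deduplicated.
    """
    edges = set()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))
    valence = defaultdict(int)
    for u, v in edges:
        valence[u] += 1
        valence[v] += 1
    return dict(valence)

# Toy example: a tetrahedron, where every vertex has valence 3 (irregular).
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
vals = vertex_valences(tetra)
print(vals)                                             # {0: 3, 1: 3, 2: 3, 3: 3}
print(sum(1 for d in vals.values() if d == 6), "regular vertices")
```

Incidentally, for a closed triangulated surface Euler's formula forces the average valence to be $6 - 6\chi/V$, so an all-regular triangulation is only possible when $\chi = 0$, i.e. on the torus; this is consistent with the "(genus 1)" caveat in the question.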
http://math.stackexchange.com/questions/69051/sum-of-products-boolean-algebra?answertab=active
# Sum of Products (Boolean Algebra)

I am having real trouble getting to the correct answers when asked to simplify Sum of Products expressions. For instance: Determine whether the left and right hand sides represent the same function:

a) $x_1\bar{x}_3+x_2x_3+\bar{x}_2\bar{x}_3 = (x_1+\bar{x}_2+x_3)(x_1+x_2+\bar{x}_3)(\bar{x}_1+x_2+\bar{x}_3)$

The answer is that they are equivalent. Here is my logic for the left hand side: I first expand so each term has 3 variables: $$=x_1x_2\bar{x}_3+x_1\bar{x}_2\bar{x}_3+x_1x_2x_3+\bar{x}_1x_2x_3+x_1\bar{x}_2\bar{x}_3+\bar{x}_1\bar{x}_2\bar{x}_3$$ combining terms $$\begin{align*} &=x_1x_2(\bar{x}_3+x_3)+(x_1+\bar{x}_1)\bar{x}_2\bar{x}_3+x_1\bar{x}_2\bar{x}_3\\ &=x_1x_2+\bar{x}_2\bar{x}_3+x_1\bar{x}_2\bar{x}_3 \end{align*}$$ Then if I apply De Morgan's law in order to get it into Product of Sums form, I get nothing close to the right hand side.

- 3 Try expanding the RHS first. – user2468 Oct 1 '11 at 17:12 By expand do you mean distribute the terms through? – Nick Oct 1 '11 at 17:16 Or I could apply De Morgan's law to the right side – Nick Oct 1 '11 at 17:25 Distribute the terms in the RHS, and use identities such as commutativity, idempotence, etc. to simplify it. – user2468 Oct 1 '11 at 18:15 1 When I expanded the right hand side, I ended up with an extra $x_1x_2$ summand (in addition to the other three summands in your expression). However, one can check that as functions in which each $x_i$ can be either $0$ or $1$, the two expressions are equal functions: just verify that in each of the three cases in which the right hand side is zero the left hand side is zero as well (easy), and that in every other situation the left hand side is $1$. – Arturo Magidin Oct 1 '11 at 22:01 show 1 more comment

## 2 Answers

After expanding the RHS using the distributive law and reducing it, you should get: $x_1\bar{x}_3+\bar{x}_2\bar{x}_3+x_2x_3+x_1x_2$. Now make a Karnaugh map in order to get the minimal disjunctive form. If you group the ones as in the picture above, you will get the following expression: $x_1\bar{x}_3+\bar{x}_2\bar{x}_3+x_2x_3$

- Just as when proving trig identities, in general it's better to start by trying to simplify the more complicated side, which in this case is the right-hand side. If you expand the RHS by brute force and use the identities $xx=x$, $x\bar{x}=0$, $x+\bar{x}=1$, and $x+xy=x$ repeatedly, you shouldn't have much trouble reducing it to $$x_1x_2+x_1\bar{x}_3+x_2x_3+\bar{x}_2\bar{x}_3,$$ which is almost what you want. Now write $$x_1x_2 = x_1x_2 \cdot 1 = x_1x_2 (x_3 + \bar{x}_3),$$ expand, and use the absorption identity again to get rid of the extra terms.

- (+1), nice use of the absorption identity at the end. – Quixotic Oct 2 '11 at 5:11 This was very helpful, thank you! – Nick Oct 2 '11 at 15:08
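With only three variables, the quickest independent check is a brute-force truth table over all $2^3$ assignments. A minimal Python sketch, written just for this check:

```python
from itertools import product

def lhs(x1, x2, x3):
    # x1*!x3 + x2*x3 + !x2*!x3
    return (x1 and not x3) or (x2 and x3) or (not x2 and not x3)

def rhs(x1, x2, x3):
    # (x1 + !x2 + x3)(x1 + x2 + !x3)(!x1 + x2 + !x3)
    return (x1 or not x2 or x3) and (x1 or x2 or not x3) and (not x1 or x2 or not x3)

# Compare the two sides on every 0/1 assignment of (x1, x2, x3).
print(all(bool(lhs(*p)) == bool(rhs(*p)) for p in product((0, 1), repeat=3)))  # True
```

This agrees with the comment above: the two sides are equal as Boolean functions even though the intermediate expansions look different.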
http://www.reference.com/browse/Hyperbolic+3-manifold
# 3-manifold

In mathematics, a 3-manifold is a 3-dimensional manifold. The topological, piecewise-linear, and smooth categories are all equivalent in three dimensions, so little distinction is usually made as to whether we are dealing with, say, topological 3-manifolds or smooth 3-manifolds.

Phenomena in three dimensions can be strikingly different from those in other dimensions, and so there is a prevalence of very specialized techniques that do not generalize to dimensions greater than three. Perhaps surprisingly, this special role has led to the discovery of close connections to a diversity of other fields, such as knot theory, geometric group theory, hyperbolic geometry, number theory, Teichmüller theory, topological quantum field theory, gauge theory, Floer homology, and partial differential equations. 3-manifold theory is considered a part of low-dimensional topology or geometric topology.

A key idea in the theory is to study a 3-manifold by considering special surfaces embedded in it. One can choose the surface to be nicely placed in the 3-manifold, which leads to the idea of an incompressible surface and the theory of Haken manifolds, or one can choose the complementary pieces to be as nice as possible, leading to structures such as Heegaard splittings, which are useful even in the non-Haken case. Thurston's contributions to the theory allow one to also consider, in many cases, the additional structure given by a particular Thurston model geometry (of which there are eight). The most prevalent geometry is hyperbolic geometry. Using a geometry in addition to special surfaces is often fruitful.

The fundamental groups of 3-manifolds strongly reflect the geometric and topological information belonging to a 3-manifold. Thus, there is an interplay between group theory and topological methods.

## Important examples of 3-manifolds

### Hyperbolic link complements

The following examples are particularly well-known and studied.

## Some important classes of 3-manifolds

The classes are not necessarily mutually exclusive!

## Foundational results

Some results are named as conjectures as a result of historical artifacts. We begin with the purely topological:

• Moise's theorem - Every 3-manifold has a triangulation, unique up to common subdivision
• As a corollary, every compact 3-manifold has a Heegaard splitting.
• Prime decomposition theorem
• Kneser-Haken finiteness
• Loop and sphere theorems
• Annulus and torus theorem
• JSJ decomposition, also known as the toral decomposition
• Scott core theorem
• Lickorish-Wallace theorem
• Waldhausen's theorems on topological rigidity
• Waldhausen conjecture on Heegaard splittings

Theorems where geometry plays an important role in the proof:

Results explicitly linking geometry and topology:

• Thurston's hyperbolic Dehn surgery theorem
• Jorgensen-Thurston theorem that the set of finite volumes of hyperbolic 3-manifolds has order type $\omega^\omega$.
• Thurston's geometrization theorem for Haken manifolds
• Tameness conjecture, also called the Marden conjecture or tame ends conjecture
• Ending lamination conjecture

## Important conjectures

Some of these are thought to be solved, as of March 2007. Please see specific articles for more information.
## References • Hempel, 3-manifolds, American Mathematical Society, ISBN 0-8218-3695-1 • Jaco, Lectures on three-manifold topology, American Mathematical Society, ISBN 0-8218-1693-4 • Rolfsen, Knots and Links, American Mathematical Society, ISBN 0-914098-16-0 • Thurston, Three-dimensional geometry and topology, Princeton University Press, ISBN 0-691-08304-5 • Adams, The Knot Book, American Mathematical Society, ISBN 0-8050-7380-9 • Hatcher, Notes on basic 3-manifold topology, available online • R. H. Bing, The Geometric Topology of 3-Manifolds, (1983) American Mathematical Society Colloquium Publications Volume 40, Providence RI, ISBN 0-8218-1040-5.
http://mathhelpforum.com/calculus/4605-sherical-coordinates.html
# Thread:

1. ## Homework Problem

Wondering if someone could help me get this answer. I don't get spherical coordinates at all.

The volume of the region given in spherical coordinates by the inequalities $3 \le \rho \le 5$, $0 \le \phi \le \pi/6$, $-\pi/6 \le \theta \le \pi/6$ is filled with uniform material. Find the $x$-coordinate of the centre of mass.

Thanks for any help. John

2. Originally Posted by OntarioStud: Wondering if someone could help me get this answer. I don't get spherical coordinates at all. The volume of the region given in spherical coordinates by the inequalities $3 \le \rho \le 5$, $0 \le \phi \le \pi/6$, $-\pi/6 \le \theta \le \pi/6$ is filled with uniform material. Find the $x$-coordinate of the centre of mass. Thanks for any help. John

The mass in the region is: $$M=\int_{\rho=3}^5 \int_{\phi=0}^{\pi/6} \int_{\theta=-\pi/6}^{\pi/6} \kappa(\bold{r})\ \rho^2 \sin(\phi)\ d\theta \ d\phi \ d\rho$$ (Note $\rho^2 \sin(\phi)\ d\theta \ d\phi \ d\rho$ is the volume element in spherical polars) and the centre of mass is: $$\bold{R}=\frac{1}{M} \int_{\rho=3}^5 \int_{\phi=0}^{\pi/6} \int_{\theta=-\pi/6}^{\pi/6} \kappa(\bold{r})\ \bold{r}\ \rho^2 \sin(\phi)\ d\theta \ d\phi \ d\rho$$ where $\kappa(\bold{r})$ is the density at $\bold{r}$, which we are told is a constant, and so independent of $\bold{r}$. Now the $x$ component of the centre of mass is obtained from the above by replacing $\bold{r}$ by its $x$ component $\bold{r}_x=\rho \sin(\phi) \cos(\theta)$: $$\bold{R}_x=\frac{1}{M} \int_{\rho=3}^5 \int_{\phi=0}^{\pi/6} \int_{\theta=-\pi/6}^{\pi/6} \kappa\ \rho \sin(\phi) \cos(\theta) \ \rho^2 \sin(\phi)\ d\theta \ d\phi \ d\rho$$

RonL

3. Hello, John!

The volume of the region given in spherical coordinates by the inequalities $3 \leq \rho \leq 5$, $0 \leq \phi \leq \frac{\pi}{6}$, $-\frac{\pi}{6} \leq \theta \leq \frac{\pi}{6}$ is filled with uniform material. Find the $x$-coordinate of the centre of mass.

A good sketch might help, or try to visualize the region.

$3 \leq \rho \leq 5$: We have a hollow sphere at the origin: inner radius 3, outer radius 5.

$0 \leq \phi \leq \frac{\pi}{6}$: The upper polar region only, cut off by a cone with vertex angle $\frac{\pi}{3}\;(60^\circ)$. The region is like an angel-food cake.

$-\frac{\pi}{6} \leq \theta \leq \frac{\pi}{6}$: Cut a $60^\circ$ slice of the "cake". And there is the region.

4. Thanks a lot. I got it.
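Since the integrand factors into separate $\rho$, $\phi$, and $\theta$ pieces, the ratio $\bold{R}_x$ above can be checked symbolically in a few lines. A minimal SymPy sketch (the uniform density cancels between numerator and denominator, so it is simply omitted):

```python
import sympy as sp

rho, phi, theta = sp.symbols('rho phi theta')

dV = rho**2 * sp.sin(phi)   # spherical volume element factor
limits = [(rho, 3, 5), (phi, 0, sp.pi/6), (theta, -sp.pi/6, sp.pi/6)]

M  = sp.integrate(dV, *limits)                                  # volume = mass / density
Mx = sp.integrate(rho*sp.sin(phi)*sp.cos(theta)*dV, *limits)    # first moment in x

Rx = sp.simplify(Mx / M)
print(Rx, float(Rx))   # about 1.34, in the same length units as rho
```

By symmetry the $y$-coordinate of the centre of mass is $0$; the $z$-coordinate could be found the same way with $\rho\cos(\phi)$ in place of $\rho\sin(\phi)\cos(\theta)$.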
http://unapologetic.wordpress.com/2010/08/13/baire-sets/?like=1&source=post_flair&_wpnonce=5d5a17dad5
# The Unapologetic Mathematician

## Baire Sets

Looking over my notes from topology it seems I completely skipped over Baire sets. This was always one of those annoying topics that I never had much use for, partly because I didn't do point-set topology or analysis. Also, even in my day the usual approach was a very classical and awkward one. Today I'm going to do a much more modern and streamlined one, and I can motivate it better from a measure-theoretic context to boot!

Basically, the idea of a Baire set is one that can't be filled up by "negligible" sets. We've used that term in measure theory to denote a subset of a set of measure zero. But in topology we don't have a "measure" to work with. Instead, we use the idea of a closed "nowhere dense" set — one for which there is no open set on which it is dense. The original motivation was a set like the boundary of a region; in the context of Jordan content we saw that such a set was negligible. Clearly such a set has no interior — no open set completely contained inside — and any finite union of them is still nowhere dense. However, if we add up countably infinitely many we might have enough points to be dense on some open set. But we don't want to be able to actually fill such an open set. In the measure-theoretic context, this corresponds to the way any countable union of negligible sets is still negligible.

So, let's be more specific: a "Baire set" is one for which the interior of every countable union of closed, nowhere dense sets is empty. Equivalently, we can characterize Baire sets in complementary terms: every countable intersection of dense open sets is dense. We can also use the contrapositive of the original definition: if a countable union of closed sets has an interior point, then one of the sets must itself have an interior point.

We're interested in part of the famous "Baire category theorem" — the name is an artifact of the old, awkward approach and has nothing to do with category theory — which tells us that every complete metric space $X$ is a Baire space. Let $\{U_n\}$ be a countable collection of open dense subsets of $X$. We will show that their intersection is dense by showing that any nonempty open set $W$ has some point $x$ — the same point — in common with all the $U_n$.

Okay, since $U_1$ is dense, $U_1\cap W$ is nonempty, and it contains a point $x_1$. As the intersection of two open sets, it's open, and so it contains an open neighborhood of $x_1$, which we can take to be an open metric ball of radius $r_1>0$. But then $B(x_1,r_1)$ is an open set, which will intersect $U_2$. This process will continue, and for every $n$ we will find a point $x_n$ and a radius $r_n$ so that $B(x_n,r_n)\subseteq B(x_{n-1},r_{n-1})\cap U_n$; shrinking $r_n$ if necessary, we can even arrange that the closed ball $\bar{B}(x_n,r_n)$ lies inside $B(x_{n-1},r_{n-1})\cap U_n$. We can also at each step pick $r_n<\frac{1}{n}$.

And so we come up with a sequence of points $\{x_n\}$. At each step, the ball $B(x_n,r_n)$ contains the whole tail of the sequence past $x_n$, and so all of these points are within $r_n$ of each other. Since $r_n$ gets arbitrarily small, this shows that the sequence is Cauchy, and since $X$ is complete, it must converge to a limit $x$. This point $x$ will be in each set $U_n$, since $x\in \bar{B}(x_n,r_n)\subseteq U_n$, and it's obviously in $W$, as desired.

The other part of the Baire category theorem says that any locally compact Hausdorff space is a Baire space. In this case the proof proceeds very similarly, but with the finite intersection property for compact spaces standing in for completeness.
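A standard example to keep alongside this: the rational numbers are not a Baire space. Writing $\mathbb{Q}=\bigcup_n\{q_n\}$ for any enumeration $q_1, q_2, \dots$ of the rationals expresses $\mathbb{Q}$ as a countable union of closed, nowhere dense sets whose union is all of $\mathbb{Q}$, and so certainly has nonempty interior; the defining condition fails. Combined with the theorem above, this shows that $\mathbb{Q}$ admits no complete metric compatible with its usual topology.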
Posted by John Armstrong | Point-Set Topology, Topology

## 5 Comments »

1. Should the last word be 'completeness'? Also, thanks for the interesting and well-written stuff. Comment by Greg Simon | August 13, 2010 | Reply
2. Yes, sorry, fixing… Comment by | August 13, 2010 | Reply
3. I'm glad that you are including this historically interesting topic. Comment by | August 15, 2010 | Reply
4. [...] union of these closed subsets has an interior point. But since is a complete metric space, it is a Baire space as well. And thus one of the must have an interior point as [...] Pingback by | August 16, 2010 | Reply
5. [...] have measurable sets, but we do have something almost as good. Just as when we discussed Baire spaces, we can use nowhere-denseness as a topological stand-in for negligibility. In fact, we'll [...] Pingback by | August 20, 2010 | Reply
http://mathoverflow.net/questions/66601/smoothing-subvarieties
## Smoothing subvarieties

Suppose I have a smooth complex projective variety $X$ and a singular subvariety $Z$. Can I find a general complete intersection subvariety $W$ of the same dimension as $Z$, and another smooth subvariety $Y$, so that we have an algebraic equivalence of cycles $$a[W]+ b[Z] \sim [Y]$$ for some positive rational numbers $a, b$? You can ask it for homological equivalence if that is somehow easier. What about if I ask to actually smooth the union $W\cup Z$?

-

## 1 Answer

I think the answer to your last question as posed is "no", but the first question may be actually quite difficult.

## #1

If you hadn't required $W$ to be a general complete intersection, then the answer to both questions would be "yes": Let $I$ be the homogeneous ideal of $Z$ in $S$, the homogeneous coordinate ring of $X$, and pick $f_1,\dots,f_q\in I$ general homogeneous elements where $q=\mathrm{codim}_XZ$. Let $V:=W\cup Z= V(f_1,\dots,f_q)$. Then $V$ is a complete intersection and hence smoothable to a $Y$ as required.

## #2

The fact is, there are non-smoothable varieties. For instance, let $X$ be a big enough projective space and $Z$ a cone over an abelian variety of dimension at least $2$ (any non-Cohen-Macaulay isolated log canonical singularity would work). Then $Z$ has a single singular point which is non-smoothable. Adding $W$ does not help: since $W$ is general, it will miss the singular point, so $W\cup Z$ is still non-smoothable.

## #3

In your first question, since you are asking about the cycles, you can drop the word "general". The point is that you can "add" a $W$ to make a non-smoothable singularity smoothable, but it would have to go through the singular set, and not just randomly. As in #1, you can find some $W$, but I am not sure how to guarantee that this $W$ be a complete intersection. My guess is still that there should be singularities with which you can't do this, but this is certainly a more subtle question.

- Is it obvious that "general" in this context means "generic"? He wants to find one, and you don't have to find generic things. – Will Sawin Dec 2 2011 at 18:12
http://physics.stackexchange.com/questions/tagged/electronic-band-theory?sort=votes
# Tagged Questions The electronic-band-theory tag has no wiki summary. 2answers 658 views ### Modern and complete references for the $k\cdot p$ method? I've recently started studying the $k\cdot p$ method for describing electronic bandstructures near the centre of the Brillouin zone and I've been finding it hard to find any pedagogical references on ... 1answer 168 views ### What is the simplest possible topological Bloch function? Kohmoto (1985) pointed out in Topological Invariant and the Quantization of the Hall Conductance how TKNN's calcuation of Hall conducance is related to topology, in which topologically nontriviality ... 2answers 157 views ### A question on the existence of Dirac points in graphene? As we know, there are two distinct Dirac points for the free electrons in graphene. Which means that the energy spectrum of the 2$\times$2 Hermitian matrix $H(k_x,k_y)$ has two degenerate points $K$ ... 4answers 5k views ### What's the difference between Fermi Energy and Fermi Level? I'm a bit confused about the difference between these two concepts. According to Wikipedia the Fermi energy and Fermi level are closely related concepts. From my understanding, the Fermi energy is the ... 1answer 221 views ### Counterexamples to the bulk-boundary correspondence (topological insulators) In the literature on topological insulators and superconductors the 'bulk-boundary correspondence' features quite heavily. One version of this conjecture says roughly: "At an interface between two ... 1answer 322 views ### Can surface dipoles/charges change the work function of a metal? As typically drawn in simplified band diagrams (see picture below), the metal Fermi Level is shown as the top of the conduction band, with the entire band filled. In many situations, including ... 0answers 99 views ### Intuition on topologically nontrivial 2D-band structures? I want to get more intuition on topologically nontrivial band structures. There's this popular 2D two-band model for a topological insulator where $H=\sum_{k}h(\boldsymbol{k})$ (see Qi, Hughes, and ... 2answers 350 views ### How does Bloch's theorem generalize to a finite sized crystal? I would be fine with a one dimensional lattice for the purpose of answering this question. I am trying to figure out what more general theorem (if any) gives Bloch's theorem as the number of unit ... 1answer 36 views ### Dopant concentration and changes in band gap energy Thanks to this lovely website, I was able to pop out reasonable values for my band gap energies from a translucent material. As expected, I found a decrease in band gap energy due to my treatments. ... 1answer 268 views ### Transparency of solids using bandgaps and relation to conduction and valence bands I think I understand how a solid can appear transparent as long as the energy of the photons travelling through it are not absorbed in the material's bandgap. But how does this band gap relate to ... 1answer 129 views ### Confused about charge seperation in solar cells I'm a bit confused about how solar cells work. My understanding is that there is a p-n junction. A photon is absorbed which creates an electron-hole pair, and the idea is to separate the electron ... 2answers 164 views ### Bloch oscillations - Scattering to other bands In the free electron approximation, a Bloch state $|k\rangle$ is the linear superposition of free plane wave states $\sum_G C_G(k) |k+G\rangle$, where $G$ are the conjugate lattice. Since the ... 
1answer 327 views ### What does the Fermi Energy really signify in a Semiconductor? In understanding the behavior of semiconductors, I'm coming across a description of the Fermi Energy here and at Wikipedia's page (Fermi Energy, Fermi Level). If I understand correctly, the Fermi ... 1answer 326 views ### Energy band diagram of a system of Silicon Quantum dots Suppose that we have a system of Silicon nanoparticles embedded in ZnO dielectric matrix. i'm thinking about how to construct the energy band structure of this system , suppose that we already have ... 0answers 75 views ### Topological band theory [closed] Why topological insulators were discovered so late? While the band theory was known long time ago! I mean why the topological properties of electronic bands were not noticed in the past? 1answer 64 views ### Are electronic wavefunctions in band gap insulators localized? is a single-particle picture sufficient in this case? I am having trouble understanding the physics of band gap insulators. Usually in undergrad solid state physics one looks at non-interacting electrons in a periodic potential, with no disorder. Then, ... 0answers 44 views ### Why does silicon have an indirect gap? Is there an intuitive explanation as to why silicon has an indirect gap? I have heard that this can explained using pseudopotentials. 1answer 109 views ### Semiconductors and localization of the electrons When looking at the band diagram of a semi-conductor, direct conclusion of the invariance under discrete translations, for a filled state with an electron, one does know precisely it's momentum, so my ... 1answer 453 views ### Effective Mass and Fermi Velocity of Electrons in Graphene: In graphene, we have (in the low energy limit) the linear energy-momentum dispersion relation: $E=\hbar v_{\rm{F}}|k|$. This expression arises from a tight-binding model, in fact \$E =\frac{3\hbar ... 1answer 126 views ### Band Structure and Carrier Recombination/Generation So i've been a bit confused, looking at PN junction, semiconductors and the like (trying to nail down how exactly semiconductors work, transistors and such). I've read the wiki on band structure ... 1answer 48 views ### What is the reasoning behind hole carriers being able to carry heat? In the Peltier effect, we consider charge carriers being able to carry heat. As for electrons or ions, this attitude makes sense, since external electric potential drives particles with mass in a ... 1answer 55 views ### DFT for bandstructure Density Functional Theory (DFT) is not appropriate in predicting the band gap of the materials. However, which functional gives close value to the experimentally observed band gap of semiconductors? ... 1answer 330 views ### PN Junction Depletion Region So it took me a little bit to understand this, but I want to make sure I have a few things right. First of all, when a Crystal Structure with One side N-Doped, One Side P-Doped are in the same ... 2answers 424 views ### Band Gap/Energy Bands in Semiconductors? I think i've finally nailed down the Semiconductor Physics (Well the general part, whats and why's etc, as per my previous question) Anyways there is one small part that confuses me, and thats BAND ... 1answer 41 views ### Regarding “Holes” in bands, and Photons So from learning Band theory, and PN Junction and such, I've learned that photons are created when "holes" are filled in a band, and this is what can create light (Isn't this how LEDs work?) Anyways, ... 
1answer 99 views ### Does anyone know the difference and relation between $k\cdot p$ method and tight binding (TB) method? Among the methods of calculating energy bands for crystals, first-principles method is the most accurate. Besides first principles, two commonly used modeling methods are the $k\cdot p$ method and ...
http://mathoverflow.net/questions/77195/how-has-modern-algebraic-geometry-affected-other-areas-of-math/77216
## How has modern algebraic geometry affected other areas of math?

I have a friend who is very biased against algebraic geometry altogether. He says it's because it's about polynomials and he hates polynomials. I try to tell him about modern algebraic geometry, scheme theory, and especially the relative approach, things like algebraic spaces and stacks, etc., but he still thinks it sounds stupid. This stuff is very appealing to me and I think it's one of the most beautiful theories of math, and that's enough for me to love it, but in our last talk about this he asked me how the modern view of algebraic geometry has been useful or given cool results in math outside of algebraic geometry itself. I guess since I couldn't convince him that studying it for its own sake was interesting, he wanted to know why else he'd want to study it if he isn't going to be an algebraic geometer. But I found myself unable to give him a good answer that involved anything outside of algebraic geometry or number theory (which he dislikes even more than polynomials).

He really likes algebraic topology and homotopy theory and says he wants to learn more about the categorical approaches to algebraic topology, and is also interested in differential and noncommutative geometry because of their applications to mathematical physics. I know that recently there's been a lot of overlap between algebraic topology/homotopy theory and algebraic geometry (A1 homotopy theory and such), and applications of algebraic geometry to string theory/mirror symmetry and the Kontsevich school of noncommutative geometry. However, I am far from qualified to explain any of these things and have only picked up enough to know they will be extremely interesting to me when I get to the point that I can understand them, but that's not a satisfactory answer for him. I don't know enough to explain how modern algebraic geometry has affected math outside of itself and number theory well enough to spark interest in someone who doesn't just find it intrinsically interesting.

So my questions are specifically as follows: How would one explain how the modern view of algebraic geometry has affected or inspired or in any way advanced math outside of algebraic geometry and number theory? How would one explain why modern algebraic geometry is useful and interesting for someone who's not at all interested in classical algebraic geometry or number theory? Specifically, why should someone who wants to learn modern algebraic topology/homotopy theory care about or appreciate modern algebraic geometry?

I'm not sure if this should be CW or not, so tell me if it should.

- 28 How can you hate polynomials? What a strange attitude. – Qiaochu Yuan Oct 5 2011 at 3:25 26 Seriously, if you hate polynomials, you hate mathematics. I believe the only reason someone in mathematics can seriously hate polynomials is if his only experience with polynomials comes from artificial math-contest problems (unfortunately, almost all polynomial problems from math contests are extremely artificial and boring). Most of quantitative (= containing actual equality-type results) mathematics is about polynomials in one way or another; for example, characteristic class theory has lots of them. Isn't that your friend's algebraic topology? – darij grinberg Oct 5 2011 at 3:45 8 You can lead a horse to water, but you can't make it drink...
– Yemon Choi Oct 5 2011 at 4:07 41 You friend is young, he'll grow up. – Angelo Oct 5 2011 at 6:22 7 Dori, I am embarrassed to admit that I also once 'hated polynomials', and even had the same attitude towards algebraic geometry because that was what it appeared to be about to me at first glance. But now I blame it on mostly being ignorant of the ubiquity of polynomials in mathematics. So like your friend, I was interested in non-commutative geometry and was really enamored with Gel`fand duality - and when I found out that one of the points of view of algebraic geometry is that rings are all viewed as being functions over their spectrum, my attitude immediately changed because of the – Jon Paprocki Oct 5 2011 at 14:58 show 15 more comments ## 12 Answers As others have suggested, your friend is getting it backwards. He's like a hammer asking what a carpenter is useful for. Given a field (of mathematics, say), there are typically some fields that are more structured than it and others that are less structured. In mathematics, people often say the more structured ones are 'harder', and the less structured are 'softer'. For instance, in increasing order of hardness, we have sets, topological spaces, topological manifolds, differential manifolds, complex manifolds, complex algebraic varieties, algebraic varieties over the rational numbers, integral algebraic varieties. These are in a linear order, but if you throw in other subjects, you'll get a non-linear one. (p-adic algebraic geometry and Riemannian geometry immediately come to mind.) (I think Gromov has some remarks at the end of an ICM address where he talks about this and gives other examples. Also, don't confuse 'harder' and 'softer' in this sense with what they mean in the sciences, which is essentially 'more precise' and 'less precise'. For instance, in science people say that biology is softer than chemistry. In fact, the two meanings are opposites because in science, more structured objects are less amenable to a precise analysis. But this typically isn't the case in mathematics.) Now given a subject S and a harder subject H, it's usually true that most objects in S don't admit the structure of an object in H. For instance, most topological manifolds don't admit a complex structure. On the other hand, for the objects of S that do admit such a structure, their theory from the point of view of H is typically much richer than that from the point of view of S. For instance, the study of Riemann surfaces as topological spaces is less rich than their study as complex manifolds. You might say that softer subjects are broad and flexible and harder ones are rich and rigid. Mathematicians tend to view subjects that are softer than their specialty as general nonsense, and harder ones as excessively particular. This is not to say that a soft field is easier or less interesting than a harder one. Even if it is true that the directly analogous question in the soft subject is easier (e.g. classify Riemann surfaces topologically rather than holomorphically), it just means that the people in the soft subject can move on and study more sophisticated objects. So they just get stuck later rather than sooner. For instance, over the past 50 years, a big fraction of the best number theorists have been studying elliptic curves over number fields. 
Now elliptic curves over the complex numbers are much easier (I think there hasn't been much new since the 19th century), so the algebraic geometers just moved on to higher genus or higher dimension and are grappling with the issues there, issues that are way out of reach in the presence of arithmetic structure. Now my main point here is that soft subjects were typically invented to break up the study of harder ones into smaller pieces. (This is surely something of a creation myth, but one with a fair amount of truth.) For instance, the real numbers were invented to break up the study of polynomial equations into two steps: when a polynomial has a real solution and when that real solution is rational. I know very little about modern analysis, but I think that much of it was invented to do the same with differential equations. You first find solutions in some soft sense and then see whether it comes from a solution in the harder sense of original interest. So the role of soft subjects is to aid in the study of harder ones---people usually don't ask for applications of partial differential equations to the study of topological vector spaces, but it's considered a mark of respectability to ask for the opposite. Similarly, no one talks about applications of engineering to mathematics. Since algebraic geometry is at the hard end of the spectrum above, there aren't many fields in which it is natural to ask for applications. Number theory, or arithmetic algebraic geometry, is harder and of course there are zillions of applications there, but that's not what your friend wants. Just about all mathematicians work in a subject that is softer than some and harder than others (and if you include non-mathematical subjects, then all mathematicians do). That's all good---it takes a whole food chain to make an ecosystem. But it's backwards to ask about the nutritional value of something that typically eats you. [This picture of mathematics is of course simplistic. There are examples of hard subjects with applications to softer ones. See Donu Arapura's answer, for example. There are also applications of arithmetic algebraic geometry to complex algebraic geometry. For instance, Grothendieck's proof of the Ax-Grothendieck theorem, or the proof of the decomposition theorem for perverse sheaves using the theory of weights and the Weil conjectures. But I think it's fair to say that such applications are the exception---and are prized because of it---rather than the rule.] - 4 Well, it's a pretty standard usage of the terms, but also I think it captures well the sense of weak structure vs strong structure. And I did spend a whole paragraph saying exactly that it has nothing to do with being easier or more difficult, so I didn't think there would be any confusion about what I meant... – James Borger Oct 5 2011 at 12:31 6 This answer surely holds the record for the most individual sentences that would deserve an upvote in and of themselves. – Cam McLeman Oct 5 2011 at 15:29 27 +1 for this sentence alone. "Mathematicians tend to view subjects that are softer than their specialty as general nonsense, and harder ones as excessively particular." – Jim Bryan Oct 5 2011 at 17:26 4 BSteinhurst: Here are 2 quotes in the direction of your concern. First, David Mumford wrote in his Red Book "Algebraic geometry seems to have acquired the reputation of being esoteric, exclusive, and very abstract, with adherents who are secretly plotting to take over all the rest of mathematics." Miles Reid wrote in his undergrad alg. geom. 
book "Thanks to the systematic use of the notion of scheme [...] algebraic geometry was able to absorb practically all the advances made in topology, homological algebra, number theory, etc., and even to play a dominant role in their development." [cont.] – KConrad Oct 5 2011 at 18:37 6 "So the role of soft subjects is to aid in the study of harder ones." This reminds me of the aphorism of Peter Freyd: "Perhaps the purpose of categorical algebra is to show that which is trivial is trivially trivial." Which may mean: "one thing category theory does is help make the softer bits seem utterly natural and obvious, so as to quickly get to the heart of the matter, isolating the hard nuggets, which one may then attack with abandon." (From the nLab page nlab.mathforge.org/nlab/show/nPOV ) – Todd Trimble Oct 6 2011 at 19:25 show 12 more comments

I tend to shy away from questions like this, but for some reason I'm up at a strange hour with nothing better to do. And the news is too depressing. Actually I agree with your friend, "algebraic geometry" does sound a bit dull. We should have come up with a cooler name, but it's too late. Anyway, here's something. Say you have a compact Riemann surface equipped with a positively curved metric. Then by Gauss-Bonnet, the Euler characteristic is positive. Therefore, modulo basic facts about Riemann surfaces, it must be the Riemann sphere. Now consider a higher dimensional version. Suppose that $X$ is a compact complex manifold with a Kähler metric with positive curvature in a suitable sense (i.e. positive bisectional curvature). Then Frankel conjectured that it must be a projective space. There are two proofs: one, due to Siu and Yau, uses harmonic maps, and another, due to Mori, uses algebraic geometry in positive characteristic. For the second proof, first observe that $X$ is projective algebraic by Kodaira's embedding theorem. Then the curvature condition implies that the tangent bundle is positive in the sense of algebraic geometry (i.e. ample). Mori proved that projective spaces are the only varieties with positive tangent bundle. Scheme theory is needed to move the problem into characteristic $p$, where the main argument takes place.

- 11 That you find the label "algebraic geometry" dull is a contrast to what Mumford wrote about attending a course by Zariski as a freshman: "When he spoke the words 'algebraic variety', there was a certain resonance in his voice that said distinctly that he was looking into a secret garden. I immediately wanted to be able to do this too." – KConrad Oct 5 2011 at 18:42 3 Well to be honest, if there was a tongue in cheek button, I would have clicked it while typing that. It is a nice quote by the way. – Donu Arapura Oct 5 2011 at 18:57

Usually it works the other way around: things appear in topology first and then people realize that those things may have analogs in algebraic geometry. Etale cohomology is perhaps the best known example. But let me give a counter-example (I wrote something similar as a comment to a recent question but I can't find it now). In topology there is the Lefschetz formula, which expresses the alternating sum of the traces of the cohomology endomorphisms induced by a smooth self-mapping of a smooth manifold in terms of the local contributions of the fixed points, assuming those are non-degenerate.
There is a generalization of this for self-maps of arbitrary finite CW-complexes. The contribution of each fixed point is local, i.e. it can be determined by looking at the map in an arbitrarily small neighborhood of the point. In particular, if there are no fixed points, the alternating sum of the traces is zero. Inspired by this, Grothendieck proved in SGA 5 an algebraic version of the Lefschetz formula, without smoothness or completeness assumptions. It also works for more general sheaves than the constant sheaf. Inspired by this, Goresky and MacPherson gave a topological version of the formula, which, under some assumptions, allows one to calculate the contribution of each component of the fixed point set. See "The local contribution to the Lefschetz fixed point formula", Inv. Math. 111, 1993, 1-33.

- 9 Here is a quote from Kontsevich and Zagier, which at once puts algebraic geometry in a very broad context: "It can be said without much overstretching that a large part of algebraic geometry is (in a hidden form) the study of integrals of rational functions of several variables." The Hodge conjecture can be seen as a statement about integrals. Now, I think nobody would argue that this is not a fundamental pursuit. – Damian Rössler Oct 5 2011 at 16:36

A lot of recent research in algebraic topology, particularly in stable homotopy theory, makes essential use of perspectives coming from algebraic geometry. It particularly aids in the conceptualization of computational results in the subject and the so-called "chromatic filtration". Rather than me writing at length, you should see Mike Hopkins' ICM lecture for a readable introduction to this connection.

- 2 Also cf. everything ever written by Jacob Lurie. – Dylan Wilson Oct 6 2011 at 5:14

Atiyah and Singer proved their famous "index theorem" in topology using ideas and methods directly inspired by Grothendieck's proof of his huge generalization of the Riemann-Roch theorem, a proof that makes prototypical use of several of the newest aspects of modern (that is, Grothendieck's) algebraic geometry, for example the insistence on working in a relative rather than absolute situation, that is, with morphisms rather than simply with objects.

- This is not likely to satisfy the OP, but the study of the dynamics of billiards in polygons has a lot to do with things like variations of Hodge structures and slopes of divisors on moduli space. The connection is via Teichmüller curves and related objects.

- If you study representation theory of groups or algebras, then representation varieties are useful.

- The size of Fourier coefficients of modular forms can only be studied (so far) via the use of very sophisticated tools from Algebraic Geometry. Of course, one could argue that modular forms are part of Number Theory, but this is not how they arose, and they appear in many other branches of mathematics (combinatorics or theoretical physics, for example).

- My own knowledge of algebraic geometry is not yet even at a "cocktail party" level, but I'd also love to learn a bit about why (and how) I should care about AG. But I have two points to mention here. 1. Lior Pachter and Bernd Sturmfels have edited a book called Algebraic Statistics for Computational Biology, and in that book they argue how the language of algebraic geometry offers some help for tackling problems in computational biology + statistics. 2. Another exciting perspective might be offered by the work of Ketan Mulmuley on geometric complexity theory, where Mulmuley is using algebraic-geometric ideas to tackle the P vs NP problem.

- The notion of a (Grothendieck) topos comes right from algebraic geometry, yet topoi are very useful in homotopy theory; see for example Lurie's book "Higher Topos Theory". In particular, the homotopy category is the archetypal example of an $(\infty,1)$-topos.

- Here are some recent papers in discrete geometry where algebraic methods are used; in particular, some famous theorems of algebraic geometry, like Bezout's theorem or the Milnor-Thom theorem, are frequently applied in this area. http://arxiv.org/abs/1011.4105 http://arxiv.org/abs/0812.1043 http://arxiv.org/abs/1102.5391 http://arxiv.org/abs/0905.1583

- Algebraic geometry is not about polynomials. Learning AG is not going to help you solve polynomial equations. But I think there is software that does this quite well. -
http://physics.stackexchange.com/questions/43707/another-faster-than-light-question
# Another faster-than-light question

Imagine we have something very heavy (e.g. a supermassive black hole) and some object that we can throw at 0.999999 times the speed of light (e.g. a proton). We throw our particle in the direction of the hole. The black hole is so heavy that we can assume that at some moment the gravitational acceleration is, say, 0.0001 of the speed of light per second squared. So the question is: what will the speed of the proton be a few seconds later, assuming the distances are such that it will not hit the black hole before then?

- 3 General relativity only restricts the speed of objects so close to each other that the variation in the gravitational field is close to zero. For such observers, there will be no observable gravitational force, and thus your object will still be travelling slightly under the speed of light. Relative to a distant observer, the particle will be moving "faster than light" once it crosses the black hole's horizon. – Jerry Schirmer Nov 8 '12 at 14:08 In this way we also put a laser at the same distance and direct it at the black hole. We turn on the laser and our proton overtakes the ray? – mirt Nov 8 '12 at 14:21 1 No. The ray moves in the same background geometry and is always moving faster, locally, than the proton. – Jerry Schirmer Nov 8 '12 at 14:23 Actually I didn't understand what a distant observer will see, let's say, 3 seconds after the start. – mirt Nov 8 '12 at 14:26 The light would outpace the proton, full stop. The actual experiment would depend a lot on the details of the setup. Both would appear to be travelling faster than light relative to the distant observer after they crossed the horizon (let's assume the BH is expanding so that distant observers observe a crossing) – Jerry Schirmer Nov 8 '12 at 14:28 show 2 more comments

## 2 Answers

This is the classic "hurling a stone into a black hole" problem. It's described in detail in sample problem 3 in chapter 3 of Exploring Black Holes by Edwin F. Taylor and John Archibald Wheeler. Incidentally, I strongly recommend this book if you're interested in learning about black holes. It does require some maths, so it's not a book for the general public, but the maths is fairly basic compared to the usual GR textbooks.

The answer to your question is that no one observes the stone (the proton in your example) to move faster than light, no matter how fast you throw it towards the black hole. I've phrased this carefully because in GR it doesn't make sense to ask questions like "how fast is the stone moving" unless you specify what observer you're talking about. Generally we consider two different types of observer. The Schwarzschild observer sits at infinity (or far enough away to be effectively at infinity) and the shell observer sits at a fixed distance from the event horizon (firing the rockets of his spaceship to stay in place). These two observers see very different things.

For the Schwarzschild observer the stone initially accelerates, but then slows to a stop as it meets the horizon. The Schwarzschild observer will never see the stone cross the event horizon, or not unless they're prepared to wait an infinite time. The shell observer sees the stone fly past at a velocity less than the speed of light, and the nearer the shell observer gets to the event horizon, the faster they see the stone pass. If the shell observer could sit at the event horizon (they can't without an infinitely powerful rocket) they'd see the stone pass at the speed of light.
To calculate the trajectory of a hurled stone you start by calculating the trajectory of a stone falling from rest at infinity. I'm not going to repeat all the details from the Taylor and Wheeler book since they're a bit involved and you can check the book. Instead I'll simply quote the result:

For the Schwarzschild observer: $$\frac{dr}{dt} = - \left( 1 - \frac{2M}{r} \right) \left( \frac{2M}{r} \right)^{1/2}$$

For the shell observer: $$\frac{dr_{shell}}{dt_{shell}} = - \left( \frac{2M}{r} \right)^{1/2}$$

These equations use geometric units so the speed of light is 1. If you put $r = 2M$ to find the velocities at the event horizon you'll find the Schwarzschild observer gets $v = 0$ and the (hypothetical) shell observer gets $v = 1$ (i.e. $c$).

But this was for a stone that started at rest from infinity. Suppose we give the stone some extra energy by throwing it. This means it corresponds to an object that starts from infinity with a finite velocity $v_\infty$. We'll define $\gamma_\infty$ as the corresponding value of the Lorentz factor. Again I'm only going to give the result, which is:

For the Schwarzschild observer: $$\frac{dr}{dt} = - \left( 1 - \frac{2M}{r} \right) \left[ 1 - \frac{1}{\gamma_\infty^2}\left( 1 - \frac{2M}{r} \right) \right]^{1/2}$$

For the shell observer: $$\frac{dr_{shell}}{dt_{shell}} = - \left[ 1 - \frac{1}{\gamma_\infty^2}\left( 1 - \frac{2M}{r} \right) \right] ^{1/2}$$

Maybe it's not obvious from a quick glance at the equations that neither $dr/dt$ nor $dr_{shell}/dt_{shell}$ exceeds the speed of light, but if you increase your stone's initial velocity to near $c$ the value of $\gamma_\infty$ goes to $\infty$ and hence $1/\gamma_\infty^2$ goes to zero. In this limit it's easy to see that the velocity never exceeds $c$.

In his comments Jerry says several times that the velocity exceeds $c$ only after crossing the event horizon. While Jerry knows vaaaaastly more than me about GR, I would take him to task for this. It certainly isn't true for the Schwarzschild observer, and you can't even in principle have a shell observer within the event horizon.

- I'm certainly playing a bit fast and loose with "moving" in this context. You are right to say that things inside the horizon are unobservable, and so there is a bit of a wrongness to the claim. The formal statement would be that a freely falling frame inside the horizon would appear to be moving superluminally with respect to a freely falling frame outside the horizon--the frames are dragged radially past any static limit--it's basically thinking of the Schwarzschild horizon the same way that you would think of the cosmological horizon. – Jerry Schirmer Nov 8 '12 at 17:15

Jerry's comment is perfect. Just explaining something I've understood: I'd advise using ordinary black holes rather than supermassive ones, because the supermassive ones are the largest and hence have effectively less gravitational effect on your proton. Anyway, the answer is NO, because of two things. First of all, Newton's laws (like the acceleration of protons) are unusable relative to a black hole's event horizon. And second, relativity generally forbids motion faster than light. Relativity concludes that you'd measure these motions relatively and not absolutely. So we're using an observer like yourself. General relativity says that gravity bends both space and time, so that objects take a shorter path, which is called "geodesic motion". Ok. Now, to your question...
Let's assume that you're sending something similar to a laser beam of protons. If you're able to see those protons, you'll definitely see a red-shifted beam (becoming dimmer and dimmer with distance) as it approaches the horizon (let's just ignore that it disappears). Now, all paths of the protons turn toward the event horizon of the black hole, where space-time curves further and further. Even light bends, thereby taking the shortest path (which seems to be accelerated); instead of saying "accelerated", relativity says the path is "curved". Hence, I'd conclude that you'd never cross light speed at any time. I'm also echoing Jerry's comment: the protons would appear to travel faster than light relative to you, but you can't observe that, because we can't observe anything inside the black hole. -
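As a quick numerical sanity check of the velocity formulas quoted in the first answer (geometric units with $c = 1$ and $M = 1$; the value of $\gamma_\infty$ below corresponds to the $0.999999c$ launch speed in the question), one can confirm that both expressions stay below $1$ everywhere outside the horizon. A minimal Python sketch:

```python
import numpy as np

M = 1.0
gamma_inf = 1.0 / np.sqrt(1.0 - 0.999999**2)      # Lorentz factor for v = 0.999999 c

r = np.linspace(2.0001 * M, 100.0 * M, 100000)    # radii outside the horizon at r = 2M
f = 1.0 - 2.0 * M / r

v_shell = np.sqrt(1.0 - f / gamma_inf**2)         # |dr_shell/dt_shell|
v_schw  = f * v_shell                             # |dr/dt| for the distant observer

print(v_shell.max() < 1.0, v_schw.max() < 1.0)    # True True
```

The shell-observer speed approaches (but never reaches) $1$ as $r \to 2M$, while the Schwarzschild-observer speed goes to $0$ there, matching the description in the answer.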
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9349733591079712, "perplexity_flag": "middle"}
http://nrich.maths.org/2508
# Weekly Problem 20 - 2013 ##### Stage: 3 Challenge Level: The diagram shows that $1 + 3 + 5 + 7 + 5 + 3 + 1 = 3^2 + 4^2$. What is $1 + 3 + 5 + \dots + 1999 + 2001 + 1999 + \dots + 5 + 3 + 1$? If you liked this problem, here is an NRICH task which challenges you to use similar mathematical ideas. This problem is taken from the UKMT Mathematical Challenges.
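Here is a quick numerical check of the pattern suggested by the diagram (my own sketch, not part of the NRICH page): a rising-then-falling sum of odd numbers with peak $2n-1$ equals $n^2 + (n-1)^2$, just as $1+3+5+7+5+3+1 = 4^2 + 3^2$.

```python
def symmetric_odd_sum(peak):
    """Sum 1 + 3 + ... + peak + ... + 3 + 1 for an odd peak."""
    rising = list(range(1, peak + 1, 2))        # 1, 3, ..., peak
    return sum(rising) + sum(rising[:-1])       # the falling part repeats all but the peak

# The example from the diagram: 1+3+5+7+5+3+1 = 3^2 + 4^2 = 25
assert symmetric_odd_sum(7) == 3**2 + 4**2

n = (2001 + 1) // 2                             # 2001 is the 1001st odd number
print(symmetric_odd_sum(2001), n**2 + (n - 1)**2)   # both give the same value
```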
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8915669322013855, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/257599/the-existence-of-an-inverse-to-a-differentiable-function?answertab=oldest
# The existence of an inverse to a differentiable function Let $f:\mathbb{R}^2\to\mathbb{R}^3$ be differentiable. Can there exist a $g:\mathbb{R}^3\to\mathbb{R}^2$ such that $gf=\text{id}_{\mathbb{R}^2}$? What about such that $fg=\text{id}_{\mathbb{R}^3}$? How does one approach such a problem? Chain rule, I suppose, but I can't manage to make it work. Edited: Made clear that the question is asking if it is possible that $f$ has such a left or right inverse. - 3 If $gf = Id_{\mathbb{R}^2}$, then the derivative $(gf)'(p)$ would be the identity at every point $p$. Using the chain rule, how big is the kernel of $(gf)'(p)$? Similarly, if $fg = Id_{\mathbb{R}^3}$, then $(fg)'(p)$ is the identity. But how big is the rank of $(fg)'(p)$? – froggie Dec 13 '12 at 2:21 ## 1 Answer For the first question, consider the inclusion $(x,y) \mapsto (x,y,0)$ and in the other direction $(x,y,z) \mapsto (x,y)$. For the second, the hint of froggie totally works! -
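To spell out froggie's hint for the second question (this elaboration is mine, not part of the thread): if $fg=\text{id}_{\mathbb{R}^3}$, the chain rule at any $p\in\mathbb{R}^3$ gives $$(fg)'(p) = f'(g(p))\,g'(p) = I_3, \qquad\text{yet}\qquad \operatorname{rank}\bigl(f'(g(p))\,g'(p)\bigr)\le\min\{\operatorname{rank} f'(g(p)),\operatorname{rank} g'(p)\}\le 2 < 3,$$ a contradiction, so no such right inverse exists. For $gf=\text{id}_{\mathbb{R}^2}$ there is no obstruction: the required rank is only $2$, and the inclusion/projection pair in the answer realizes it.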
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.934330403804779, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/general-relativity+observer
# Tagged Questions 1answer 50 views ### Does non-mass-energy generate a gravitational field? At a very basic level I know that gravity isn't generated by mass but rather the stress-energy tensor and when I wave my hands a lot it seems like that implies that energy in $E^2 = (pc)^2 + (mc^2)^2$ ... 1answer 59 views ### An infalling object in a black hole looks “paused” for a far away observer, for how long? As I understand, to an observer well outside a black hole, anything going towards it will appear to slow down, and eventually come to a halt, never even touching the event horizon. What happens if ... 4answers 197 views ### The bigger the mass, the more time slows down. Why is this? If I were to stand by a pyramid, which weighs about 20 million tons, I would slow down by a trillion million million million of second. Don't know if that's exactly right, but you get the point. Also, ... 1answer 101 views ### Would dense matter around a black hole event horizon eventually form a secondary black hole? [duplicate] Possible Duplicate: Black hole formation as seen by a distant observer Given that matter can never cross the event horizon of a black hole (from an external observer point of view), if a ... 0answers 36 views ### Can a black hole actually grow, from the point of view of a distant observer? [duplicate] Possible Duplicate: Black hole formation as seen by a distant observer I've read in several places that from the PoV of a distant observer it will take an infinite amount of time for new ... 1answer 209 views ### In general relativity (GR), does time stop at the event horizon or in the central singularity of a black hole? I was reading through this question on time and big bang, and @John Rennie's answer surprised me. In the immediate environment of a black hole, where does time stop ticking if one were to follow a ... 3answers 179 views ### Black hole formation as seen by a distant observer [duplicate] Possible Duplicate: How can anything ever fall into a black hole as seen from an outside observer? Is black hole formation observable for a distant observer in finite amount of time? ... 2answers 207 views ### What do you feel when crossing the event horizon? I have heard the claim over and over that you won't feel anything when crossing the event horizon as the curvature is not very large. But the fundamental fact remains that information cannot pass ... 1answer 248 views ### Time dilation - why the observers see each other the slow one but then one of them is older or younger? I'm in trouble with time dilation: Suppose that there's two people on the Earth (A,B), they are twins and each other has a clock. (So they are at the same reference frame). B travels in a spaceship ... 1answer 193 views ### Falling into a black hole I've heard it mentioned many times that "nothing special" happens for an infalling observer who crosses the event horizon of a black hole, but I've never been completely satisfied with that statement. ... 2answers 104 views ### Why is matter drawn into a black hole condensed into a single point within the singularity? [duplicate] Possible Duplicate: Why is matter drawn into a black hole not condensed into a single point within the singularity? When we speak of black holes and their associated singularity, why is ... 2answers 131 views ### Effect of gravity at near-lightspeeds Let's say I'm in a space station, hurtling towards our galaxy nearly close to the speed of light. From my reference frame, I see the galaxy coming towards my ship at the same speed. 
I pass the Sun, ... 8answers 2k views ### How can anything ever fall into a black hole as seen from an outside observer? The event horizon of a black hole is where gravity is such that not even light can escape. This is also the point I understand that according to Einstein time dilation will be infinite for a ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9524304866790771, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/97756-differential-funtion.html
# Thread: 1. ## a differentiable function Assume $f:R \rightarrow R$ has a finite derivative everywhere on $(a,b)$ except possibly at a point $c \in (a,b)$. If we have $\lim_{x \rightarrow c} f'(x)=A$, then prove $f'(c)$ exists and is equal to $A$. 2. Hello, Hint : use the mean value theorem. 3. ## mean value theorem I thought about it, but is it possible to even use it? f has to be continuous at the endpoints and has to be differentiable on (a,b), but we don't know if it is continuous at a and b, or whether it is differentiable at c. 4. You cannot hope to prove this unless you are given that f is continuous at c. Otherwise you can have a function that is continuous and differentiable everywhere in the interval apart from c but which takes an arbitrary value at c itself, or you could have a function that looks like tan(x) but with a small section round pi/2 removed. Then there is a huge discontinuity at c, and yet the derivative is the same both sides of c. 5. Originally Posted by Kat-M I thought about it, but is it possible to even use it? f has to be continuous at the endpoints and has to be differentiable on (a,b), but we don't know if it is continuous at a and b, or whether it is differentiable at c. We must be allowed to assume that f is continuous on (a,b), otherwise the proposition would be clearly false (because in that case f(c) could be anything). So, assuming f is continuous on (a,b), I suggest that you use the mean value theorem, applied to (a,c] and [c,b), respectively, to show that the left and the right derivative of f at c both exist and are both equal to A. 6. Originally Posted by Kat-M Assume $f:R \rightarrow R$ has a finite derivative everywhere on $(a,b)$ except possibly at a point $c \in (a,b)$. If we have $\lim_{x \rightarrow c} f'(x)=A$, then prove $f'(c)$ exists and is equal to $A$. The question does not actually say that $f$ is continuous at $c$. But it had better be, as noted by alunw and Failure. With that assumption, the result is true, but it seems to be quite a tricky piece of analysis, requiring a very careful proof. We are told that $\lim_{x\to c}f'(x) = A$. So given $\varepsilon>0$ there exists $\delta>0$ such that $|f'(\xi)-A|<\varepsilon$ whenever $|\xi-c|<\delta$. Now suppose that $|x-c|<\delta$ and $|y-c|<\delta$. By the mean value theorem (as hinted by flyingsquirrel) $\frac{f(x) - f(y)}{x-y} = f'(\xi)$ for some $\xi$ between $x$ and $y$. But if $x$ and $y$ are both within distance $\delta$ from $c$ then so is $\xi$. It follows that $\left|\frac{f(x) - f(y)}{x-y} - A\right|<\varepsilon$. Next, let $y\to c$ in that last inequality. By continuity of $f$ at $c$, this gives $\left|\frac{f(x) - f(c)}{x-c} - A\right|\leqslant\varepsilon$. Since this holds whenever $|x-c|<\delta$, and $\varepsilon$ is an arbitrary positive number, it follows that $f$ is differentiable at $c$, with derivative $A$. 7. Originally Posted by Opalg Now suppose that $|x-c|<\delta$ and $|y-c|<\delta$. By the mean value theorem (as hinted by flyingsquirrel) $\frac{f(x) - f(y)}{x-y} = f'(\xi)$ for some $\xi$ between $x$ and $y$. How can you apply the mean value theorem here if you don't know (yet) that f is differentiable in the entire interval between x and y? 8. Originally Posted by Failure How can you apply the mean value theorem here if you don't know (yet) that f is differentiable in the entire interval between x and y? Good point. I should have said that y must be on the same side of c as x. Then I think the proof should work.
But in fact your comment #5 above gives a shorter proof: once you know that f is continuous on the closed interval [c,x], you can apply the MVT on that interval and the result comes out straight away.
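As a quick numerical illustration of the statement (my own, not from the thread): take $f(x) = x\,|x|$, which is differentiable away from $0$ with $f'(x) = 2|x| \to 0$ as $x \to 0$ and is continuous at $0$; the result then says $f'(0)$ exists and equals $A = 0$, which the difference quotient confirms.

```python
def f(x):
    return x * abs(x)        # differentiable for x != 0, with f'(x) = 2|x|

def fprime(x):
    return 2 * abs(x)        # valid for x != 0; its limit at 0 is A = 0

c = 0.0
for h in (1e-1, 1e-3, 1e-5, 1e-7):
    diff_quot = (f(c + h) - f(c)) / h
    print(f"h={h:.0e}  difference quotient={diff_quot:.1e}  f'(c+h)={fprime(c + h):.1e}")
# Both columns tend to 0, consistent with f'(0) = A = 0.
```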
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 46, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.973671555519104, "perplexity_flag": "head"}
http://mathoverflow.net/questions/15897?sort=newest
## In what topology DM stacks are stacks Background/motivation One of the main reasons to introduce (algebraic) stacks is to build "fine moduli spaces" for functors which, strictly speaking, are not representable. The yoga is more or less as follows. One notices that a representable functor on the category of schemes is a sheaf in the fpqc topology. In particular it is a sheaf in coarser topologies, like the fppf or étale topologies. Now some naturally defined functors (for instance the functor $\mathcal{M}_{1,1}$ of elliptic curves) are not sheaves in the fpqc topology (actually $\mathcal{M}_{1,1}$ is not even an étale sheaf) so there is no hope to represent them. Enter the $2$-categorical world: we introduce fibered categories and stacks. Many functors which are not sheaves arise by collapsing fibered categories which ARE stacks, so not all hope is lost. But, as not every fpqc sheaf is representable, we should not expect that every fpqc stack is in some sense "represented by a generalized space", so we make a definition of what we mean by an algebraic stack. Let me stick with the Deligne-Mumford case. Then a DM stack is a fibered category (in groupoids) over the category of schemes, which 1) is a stack in the étale topology 2) has a "nice" diagonal 3) is in some sense étale locally similar to a scheme. I don't need to make precise what 2) and 3) mean. By the preceding philosophy we should expect that DM stacks generalize schemes in the same way that stacks generalize sheaves. In particular I would expect that DM stacks turn out to be stacks in finer topologies, just as schemes are sheaves not only in the Zariski topology (which is trivial) but also in the fpqc topology (which is a theorem of Grothendieck). Question Is it true that DM stacks are actually stacks in the fpqc topology? And if not, did someone propose a notion of "generalized space" in the context of stacks, so that this result holds? - ## 1 Answer The rule of thumb is this: Your DM (or Artin) stack will be a sheaf in the fppf/fpqc topology if the condition imposed on its diagonal is fppf/fpqc local on the target ("satisfies descent"). In other words, in condition 2 you asked that the diagonal be a relative scheme/relative algebraic space perhaps with some extra properties. If there is fppf descent for morphisms of this type (e.g., "relative algebraic space", "relative monomorphism of schemes"), you'll have something satisfying fppf descent. If there is fpqc descent for morphisms of this type (e.g., "relative quasi-affine scheme"), then you'll have something satisfying fpqc descent. See for instance LMB (=Laumon, Moret-Bailly. Champs algebriques), Corollary 10.7. Alternatively: earlier this year I wrote up some notes (PDF link) that included an Appendix collecting in one place the equivalences of some standard definitions of stacks, including statements of the type above. - 5 The suggested references appeal to fpqc-sheafification, a somewhat far-out operation. (I generally view appeal to universes as a bit of laziness when it can be avoided with a bit more effort to unravel what is actually going on.) I think it is a very instructive exercise to unravel the arguments at this step in the suggested references to see that fpqc-sheafification is not needed at all. It gives one a more "hands-on" understanding of what makes the argument work to make it more explicit in this way. Give it a try!
– BCnrd Feb 20 2010 at 18:11 Thank you very much. Sorry that it took me so much time to accept the answer; I didn't have time to read your notes before. – Andrea Ferretti Mar 2 2010 at 21:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9345406293869019, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/03/14/braid-groups/?like=1&source=post_flair&_wpnonce=c8d80d4293
# The Unapologetic Mathematician ## Braid groups Okay, time for a group I really like. Imagine you're playing the shell game. You're mixing up some shells on the surface of a table, and you can't lift them up. How can you rearrange them? At first, you might think this is just a permutation group all over again, but not quite. Let's move two shells around each other, taking a picture of the table each moment, and then stack those pictures like a flip-book movie. I've drawn just such a picture. We read this movie from the bottom to the top. The shell on the right moves behind the shell on the left as they switch places. It could also have moved in front of the left shell, though, and that picture would show the paths crossing the other way. We don't want to consider those two pictures the same. So why don't we want to? Because the paths those shells trace out look a lot like the strands of knots! The two dimensions of the table and one of time make a three-dimensional space we can use to embed knots. Since these pictures just have a bunch of strands running up and down, crossing over and under each other as they go, we call them "braids". In fact, these movies form a group. We compose movies by running them one after another. We always bring the $n$ shells back where they started so we can always start the next movie with no jumps. We get the identity by just leaving the shells alone. Finally, we can run a movie backwards to invert it. There's one such braid group $B_n$ for each number $n$ of shells. The first one, $B_1$, is trivial since there's nothing to do — there's no "braiding" going on with one strand. The second one, $B_2$, is just a copy of the integers again, counting how many twists like the one pictured above we've done. Count the opposite twist as $-1$. Notice that this is already different from the symmetric groups, where $S_2$ just has the two moves, "swap the letters" or "leave them alone". Beyond here the groups $B_n$ and $S_n$ get more and more different, but they're also pretty tightly related. If we perform a braiding and then forget which direction we made each crossing we're just left with a permutation. Clearly every permutation can arise from some braiding, so we have an epimorphism from $B_n$ onto $S_n$. In fact, this shows up when we try to give a presentation of the braid group. Recall that the symmetric group has presentation: $<s_1,...,s_{n-1}|s_i^2 (1\leq i\leq n-1),s_is_js_i^{-1}s_j^{-1} (|i-j|\geq 2),$ $s_is_{i+1}s_is_{i+1}^{-1}s_i^{-1}s_{i+1}^{-1} (1\leq i\leq n-2)>$ The generator $s_i$ swaps the contents of places $i$ and $i+1$. The relations mean that swapping twice undoes a swap, widely spaced swaps can be done in either order, and another seemingly more confusing relation that's at least easily verified. The braid group looks just like this, except now a twist is not its own inverse. So get rid of that first relation: $<s_1,...,s_{n-1}|s_is_js_i^{-1}s_j^{-1} (|i-j|\geq 2),s_is_{i+1}s_is_{i+1}^{-1}s_i^{-1}s_{i+1}^{-1} (1\leq i\leq n-2)>$ The fact that we get from the braid group to the symmetric group by adding relations reflects the fact that $S_n$ is a quotient of $B_n$. It's interesting to play with this projection and compute its kernel. ## 8 Comments » 1. One question occurs to me: when can we find a neat free resolution of the braid group? Do we know which braid groups have finite resolutions? Do we know the complexities of braid group resolutions? Comment by | March 15, 2007 | Reply 2. [...]
a “braided monoidal category”, for a very good reason I'll talk about tomorrow (hint). Now if by chance the braiding is its own inverse, we call it a "symmetry", and call [...] Pingback by | July 2, 2007 | Reply 3. I have been looking through some knot theory books, and I have started leafing through some abstract algebra books. I am curious about chirality tests for knots in 3-space, and, in particular, for what integer values n there will be achiral knots. I found out there is a proof that there are some whenever n is even, and none when n is prime. For composite odd numbers n, there are some that have achiral knots. In the tables I have found to date, there are no 9 crossing knots listed as being achiral. One achiral knot with 15 crossings was found in 1998 by M. Thistlethwaite and friends. Has one been found for 21, 33, 35, or 39 yet? I suspect there will be at least one or more for each of those cases. However, I do not think there will be any for 25, 27, 49, 81, 121, 125, …. Has any progress been made on this front? I'm rather curious. I have an idea I am investigating, but I am just starting and working my way into group theory. Jim aka Maddy Comment by Jim Balliette | February 13, 2009 | Reply 4. I have to say, Jim, I really don't know much about chirality results like that, but I can ask around. Comment by | February 13, 2009 | Reply 5. Thanks John, I think it is a rather tough question. It has been long since I looked at group theory, so I am having to start from scratch. I have some background, but it is mostly dormant. I did some analysis on Hoste's 15 crossing knot, but I am not sure I found anything that will help me. Perhaps I should look up Professor Thistlethwaite and see if he has anything. I thought I would run a test on my computer, but the size of the sets I need to manipulate are too big. The symmetric groups get rather large rather quickly! I am hitting a website here and one there, but I am not finding what I am looking for as of yet. Thanks for maintaining this site! Jim Comment by Jim Balliette | February 14, 2009 | Reply 6. Ok John, there is a 9 crossing achiral knot. I was bothered by the fact I didn't find one in the tables I was using. I'm going to compare the number of chiral knots with n crossings to the number of Sylow subgroups of a cyclic group of order n. I believe there is a relationship, a strong one, but I could be way off. If I'm right, then there WILL be chiral knots for prime powers, though I'm not sure about nonalternating chiral knots when an odd prime power is involved. I am going to try to draw a 21 crossing non-alternating chiral knot this weekend. This should be a challenge! Hope you are having a great time this weekend! Jim Comment by Jim Balliette | February 14, 2009 | Reply 7. I seem to be using the word chiral when I mean ACHIRAL or AMPHICHIRAL. Dang. Those other ones are rather easy to draw. Jim Comment by Jim Balliette | February 14, 2009 | Reply 8. Ooops…I misread the book. There is NOT a 9 crossing achiral knot. Perhaps powers of odd primes will not have a non-alternating achiral knot, or maybe no achiral knots either. If so, what is the connection between this, if any, and cyclic groups of prime power order having unbranched normal decomposition series? ( I hope I am using the right terminology!! ) My attempt to draw the knot was a failure, which I expected. I ended up guessing quite a bit and created a 3-component link consisting of a free link and an alternating link of two components. I don't have enough data to see what is happening yet.
Still, trying to find a pattern is challenging and a wee bit of fun. So far I have been able to distinguish all of the knots I have tried from the unknot using a very simple test. Every attempt to make a nasty unknot that I cannot tell is the unknot has failed, but I haven't made unknots with too many crossings yet. I am convinced the method is invariant for Reidemeister moves Type I and Type II, but type III is a bit harder to examine. This is interesting, but it doesn't mean the method will always work. However, if it is a test that will work, all of the computer code needed to do it is already written. I do my tests in GAP, but I suppose the test could be done using any computer algebra system. A very competent typist can type in the data using a single line. Unfortunately, I am NOT a good typist, so typing in the data in my case takes anywhere from 5 to 50 times! Ok, off to start over with more care to be VERY systematic. I sure hope I don't find out I made a typing error that destroys all of the fun stuff I think I am seeing. Jim Comment by Jim Balliette | February 15, 2009 | Reply
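As a small companion to the projection $B_n \to S_n$ described in the post above (my own illustration, not from the post): the generator $s_i$ maps to the adjacent transposition of places $i$ and $i+1$, and one can check by brute force that these images satisfy the braid relation $s_is_{i+1}s_i = s_{i+1}s_is_{i+1}$ while also squaring to the identity — the extra relation imposed in the quotient.

```python
def adjacent_transposition(i, n):
    """Permutation of {0,...,n-1}, as a tuple, swapping positions i and i+1."""
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def compose(p, q):
    """(p o q)(k) = p[q[k]]."""
    return tuple(p[q[k]] for k in range(len(p)))

n = 5
identity = tuple(range(n))
for i in range(n - 2):
    s_i = adjacent_transposition(i, n)
    s_next = adjacent_transposition(i + 1, n)
    lhs = compose(s_i, compose(s_next, s_i))
    rhs = compose(s_next, compose(s_i, s_next))
    assert lhs == rhs                        # the braid relation survives in S_n
    assert compose(s_i, s_i) == identity     # the extra relation s_i^2 = 1
print("braid relation and s_i^2 = 1 verified for the images in S_%d" % n)
```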
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 19, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9537075757980347, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/78469?sort=oldest
a question on continuity of a $G$-module for a profinite group $G$ I have seen the following statement somewhere, for example in Appendix B2 of Silverman's book "The Arithmetic of Elliptic Curves": Let $M$ be an abelian group with the discrete topology and $G$ be a profinite group. Then a linear action (which means that $\sigma(m_1+m_2)=\sigma(m_1)+\sigma(m_2)$, i.e. it is a $G$-module) $\phi : G \times M \rightarrow M$ is continuous if and only if the stabilizer $\{\sigma \in G \mid \sigma(m)=m\}$ has finite index in $G$ for all $m \in M$. But what we need is that this stabilizer is open in $G$. I also saw that in a profinite group, not every subgroup of finite index is open. So is this statement correct? Or how does one see that this stabilizer is open if it has finite index? - If the group is finitely generated, finite-index subgroups are open: ams.org/mathscinet-getitem?mr=2276769 – Agol Oct 18 2011 at 17:13 2 It is false as stated. For instance, consider any faithful action of a quotient of G by a non-closed finite index subgroup. "Finite index" should be replaced with either "open" or "closed and finite index". – Kevin Ventullo Oct 18 2011 at 21:47 @Kevin Ventullo: Thanks. But I made a mistake in my question. In fact, the statement is for $G$-modules, i.e. it requires the action to satisfy $\sigma (m_1 + m_2) = \sigma(m_1) + \sigma(m_2)$. The example that $G$ acts on $G/H$ for a subgroup $H$ of $G$ is not a $G$-module. I should add this condition in my question, sorry. – unknown (google) Oct 18 2011 at 22:18 1 Answer Not every finite index subgroup is open, but closed subgroups of finite index are open. So if the stabilizer is closed, that would be sufficient... - 1 Without continuity, I can't see why the stabilizer is closed... :( Could you give me a hint? Thanks. – unknown (google) Oct 18 2011 at 19:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.911521315574646, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=671105
Physics Forums ## Moving Charges and Magnetic Moments Does any moving charge generate a magnetic moment? I thought so because a moving charge generates a magnetic field. I know that charges moving in a circle generate a magnetic moment, but I was wondering if charges travelling in a straight line also generate a magnetic moment. If a moving charge travelling in a straight line generates a magnetic field, wouldn't it also have a magnetic moment? Well, consider the dipole moment in electrostatics... it's created by placing two charges... So we can say that there will be a magnetic moment if there are two magnetic poles. Now when a charge is moving in a closed path, it creates a magnetic field similar to what would have been created if two poles had been there instead of the charge. What I mean is that when a charge rotates, it creates two poles, North in the anticlockwise sense and South in the clockwise. So we get a magnetic moment in this case. But when a charge is moving in an open path, it doesn't create these two poles and hence we get no magnetic moment. That's what I think. Let's wait for others' opinions as well. What do you think? I think that since a magnetic field should have an associated magnetic moment, which you can use to calculate the field, a moving charge that generates a magnetic field should have a magnetic moment no matter what kind of motion it has. Quote by quantumfoam I think that since a magnetic field should have an associated magnetic moment, which you can use to calculate the field, a moving charge that generates a magnetic field should have a magnetic moment no matter what kind of motion it has. Well, you don't always need the magnetic moment to calculate the magnetic field. Consider a straight wire! We can find the field at a distance by simple integration from the Biot-Savart Law! Moreover we have Ampere's Circuital Law for a more general case. These don't require any knowledge of magnetic moments. But if I wanted to, would I be able to describe the magnetic field of a moving charge in terms of its magnetic moment? Quote by quantumfoam But if I wanted to, would I be able to describe the magnetic field of a moving charge in terms of its magnetic moment? I don't think anyone can give a negative answer to that! If you are capable enough, then certainly you can! But I can't help you with that. Talk to someone whom you know... like your teachers and Heads of Departments... and if you get an answer from them, positive or negative, post it here so that I can know as well. Taking a charge on a circular path, the moment at the centre of rotation is calculated by $\vec m = I\vec A$ (A = the circular area). And in the case of a single charge the current is $I= \frac{Q}{T}$ where $T=\frac{2\pi r}{v}$, and the moment becomes $\vec m=\frac{Q\,\vec r \times \vec v}{2}$. Now if we start increasing the size of the circular path towards ∞, the moment increases linearly. But the "reference point" of the moment approaches infinite distance from the charge. Note that the moment itself is not a field; all calculations using moments are done assuming a moment at a definite point.
Another formula for the moment is $\vec m=\frac{I}{2}\int \vec {r} \times \vec {dr}$, which can be used to calculate the magnetic moment at the origin when the conductor is the vertical line of points $(a,y)$ at distance $a$ from the origin: $\vec m = \frac{I}{2}\int ^{\infty}_{-\infty}(a\vec i+y\vec j)\times(\vec j \, dy)$ $\vec m = \frac{I}{2}\int ^{\infty}_{-\infty}a\,\vec i\times\vec j \, dy$ $\vec m = \frac{aI}{2}\vec k \int ^{\infty}_{-\infty} dy$, which diverges. Thus the magnetic moment for a straight line, calculated at distance a from the line, is infinite, if a>0. An analogy in mechanics would be calculating the moment of inertia of a disc at a point which is NOT the center of mass. Which seems ridiculous. I would say that the magnetic moment is a property of a system, and valid only at the "centre" of the system (how to define the centre is another problem). As it is not a field, it cannot be used as such for calculating field quantities. BR, -Topi Quote by TopiRinkinen An analogy in mechanics would be calculating the moment of inertia of a disc at a point which is NOT the center of mass. Which seems ridiculous. Why do you say that finding the moment of inertia about an arbitrary point is ridiculous??? It's perfectly valid and there are well-defined moments of inertia about any axis through any point in a system!! And about the magnetic moment, why do you say that it is valid only at the "center"? Is it not distributed over the entire surface? I mean any arbitrarily coiled loop carrying charges has a well-defined magnetic moment! You said it yourself... the product of the current and the area...!! The magnetic moment points from the south pole to the north pole of a magnet, which I believe to imply magnetic dipoles. If a moving point charge generates a magnetic field according to the Biot-Savart law (which shows the two magnetic poles of the moving point charge), shouldn't the moving charge also generate a magnetic moment regardless of what type of motion it is executing? Please correct me if I am wrong. Quote by quantumfoam The magnetic moment points from the south pole to the north pole of a magnet, which I believe to imply magnetic dipoles. If a moving point charge generates a magnetic field according to the Biot-Savart law (which shows the two magnetic poles of the moving point charge), shouldn't the moving charge also generate a magnetic moment regardless of what type of motion it is executing? Please correct me if I am wrong. But the field due to a "not coiled wire", like a straight wire, isn't separated as north or south. The field rewinds back on itself after 360°. I mean, the field due to a straight wire is curved in the shape of a circle. The direction of the field at a point is a tangent to this circle, at that point. Hence the field itself changes direction from one point to another. What will appear as a north pole at one point will appear to be a south pole at the point which is 180° from the previous point. So how can you say that there is a well-defined north or south pole? Yes, you are right about the field appearing to be circular, but isn't the field zero in the direction of velocity of the charge? This would give our moving charge a kind of "donut" shape of some sort. We can't define which one is south or north, but we can sure see the poles Quote by quantumfoam This would give our moving charge a kind of "donut" shape of some sort. We can't define which one is south or north, but we can sure see the poles I don't understand what you mean by that. Could you elaborate? If you can't "define" the north or south, then how can you "see" the poles?
What I meant by "define" is that it depends on the observer what he chooses to be the south and the north pole. The way you can see that the poles are there is by seeing the two opposite points that have no field at them. I am very sure you are familiar with the Biot-Savart equation. According to the equation, the magnetic field of the moving point charge in both the direction of the velocity and in the direction opposite to the velocity is zero, hence the "donut" shape of the magnetic field. Since it has two endpoints (which, by the way, I'm assuming implies magnetic dipoles), the magnetic moment of the moving charge can be defined, since it is a constant vector pointing from the south pole to the north pole of the magnetic field. Again, this horrible way of seeing it may be wrong, so please correct me. If it makes sense, please let me know Quote by quantumfoam hence the "donut" shape of the magnetic field I still don't understand this... the field is in the shape of a cylinder with the wire as its axis. Quote by quantumfoam Since it has two endpoints (which, by the way, I'm assuming implies magnetic dipoles)... Even if I take it to be a donut, where do you find endpoints in a 'donut'? Quote by quantumfoam ... it is a constant vector pointing from the south pole to the north pole of the magnetic field. That's what I pointed out earlier... If the direction of the field keeps on changing with position, how can there be a fixed north pole for the magnetic moment to point at!!! I am sorry if my answers are not good, but honestly, I can't visualize what you are trying to say! I just can't see the two poles at two distinct places throughout the spread of the field and hence I can't see any sign of the magnetic moment. I am really sorry that after almost a week, I couldn't provide you with a satisfactory answer! :( Nooooo! I don't think we are on the same page here is all. (: I don't think imagining our charge to be a wire is going to help because a wire and a point charge have different magnetic field shapes. What I meant by the donut shape of our point charge is that it is not completely in the shape of a donut, but it has some of the characteristics. In a real donut there is a hole, which is why I totally understand why you got confused (that was my mistake by not being clear enough), but in the magnetic field of our point charge there is no hole. It has the same curving features as a donut but it doesn't have a hole. It has the same magnetic field shape as a solenoid. Our moving point charge is moving at a constant velocity and through a vacuum (classical vacuum). If it is not accelerating or near any force, how will the direction between north and south change?
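As a numerical aside on the circular-orbit moment derived earlier in the thread (my own sketch, with made-up illustrative numbers): for a single charge on a circular orbit, $m = I\pi r^2$ with $I = Qv/(2\pi r)$ reduces to $m = Qvr/2$, and the two expressions agree.

```python
import math

Q = 1.602e-19     # charge (C) -- a proton, say
v = 1.0e6         # orbital speed (m/s), illustrative value
r = 1.0e-3        # orbit radius (m), illustrative value

I = Q * v / (2 * math.pi * r)      # one charge going round the loop once per period
m_from_IA = I * math.pi * r**2     # m = I * (loop area)
m_direct = 0.5 * Q * v * r         # m = Q v r / 2

print(m_from_IA, m_direct)
assert math.isclose(m_from_IA, m_direct)   # identical, as the algebra promises
```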
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9475700259208679, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2008/11/10/sweedler-notation/?like=1&source=post_flair&_wpnonce=eb8d9d939c
# The Unapologetic Mathematician ## Sweedler notation As we work with coalgebras, we'll need a nice way to write out the comultiplication of an element. In the group algebra we've been using as an example, we just have $\Delta(e_g)=e_g\otimes e_g$, but not all elements are so cleanly sent to two copies of themselves. And other comultiplications in other coalgebras aren't even defined so nicely on any basis. So we introduce the so-called "Sweedler notation". If you didn't like the summation convention, you're going to hate this. Okay, first of all, we know that the comultiplication of an element $c\in C$ is an element of the tensor square $C\otimes C$. Thus it can be written as a finite sum $\displaystyle\Delta(c)=\sum\limits_{i=1}^{n(c)}a_i\otimes b_i$ Now, this uses two whole new letters, $a$ and $b$, which might be really awkward to come up with in practice. Instead, let's call them $c_{(1)}$ and $c_{(2)}$, to denote the first and second factors of the comultiplication. We'll also move the indices to superscripts, just to get them out of the way. $\displaystyle\Delta(c)=\sum\limits_{i=1}^{n(c)}c_{(1)}^i\otimes c_{(2)}^i$ The whole index-summing thing is a bit awkward, especially because the number of summands is different for each coalgebra element $c$. Let's just say we're adding up all the terms we need to for a given $c$: $\displaystyle\Delta(c)=\sum\limits_{(c)}c_{(1)}\otimes c_{(2)}$ Then if we're really pressed for space we can just write $\Delta(c)=c_{(1)}\otimes c_{(2)}$. Since we don't use a subscript in parentheses for anything else, we remember that this is implicitly a summation. Let's check out the counit laws $(1_M\otimes\epsilon)\circ\Delta=1_M=(\epsilon\otimes1_M)\circ\Delta$ in this notation. Now they read $c_{(1)}\epsilon(c_{(2)})=c=\epsilon(c_{(1)})c_{(2)}$. Or, more expansively: $\displaystyle\sum\limits_{(c)}c_{(1)}\epsilon\left(c_{(2)}\right)=c=\sum\limits_{(c)}\epsilon\left(c_{(1)}\right)c_{(2)}$ Similarly, the coassociativity condition now reads $\displaystyle\sum\limits_{(c)}\left(\sum\limits_{\left(c_{(1)}\right)}\left(c_{(1)}\right)_{(1)}\otimes\left(c_{(1)}\right)_{(2)}\right)\otimes c_{(2)}=\sum\limits_{(c)}c_{(1)}\otimes\left(\sum\limits_{\left(c_{(2)}\right)}\left(c_{(2)}\right)_{(1)}\otimes\left(c_{(2)}\right)_{(2)}\right)$ In the Sweedler notation we'll write both of these equal sums as $\displaystyle\sum\limits_{(c)}c_{(1)}\otimes c_{(2)}\otimes c_{(3)}$ Or more simply as $c_{(1)}\otimes c_{(2)}\otimes c_{(3)}$. As a bit more practice, let's write out the condition that a linear map $f:C\rightarrow D$ between coalgebras is a coalgebra morphism. The answer is that $f$ must satisfy $f\left(c_{(1)}\right)\otimes f\left(c_{(2)}\right)=f(c)_{(1)}\otimes f(c)_{(2)}$ Notice that there are implied summations here. We are not asserting that all the summands are equal, and definitely not that $f\left(c_{(1)}\right)=f(c)_{(1)}$ (for instance). Sweedler notation hides a lot more than the summation convention ever did, but it's still possible to expand it back out to a proper summation-heavy format when we need to. Posted by John Armstrong | Algebra ## 7 Comments » 1. And unsurprisingly, now that I've seen the Sweedler notation, I find it about as horrible and unreadable as ever I found the summation convention. Give me my sum signs. By all means fudge the decorations as long as there is a context, but having at least ONE sigma hanging around and a bunch of indexing variables vaguely indicated makes it all more readable and not less.
And I had this rant a while back too, and I know we disagree on this. Comment by | November 11, 2008 | Reply 2. Actually, I rather dislike Sweedler notation myself. But for writing out formulas it's sort of difficult to escape. The thing is, I've found that if you're writing out a lot of things in Sweedler notation, you're thinking too explicitly. Similarly, if you're writing out a lot of matrix indices, you're missing the point. As for actual use of Sweedler notation, I need it for writing the two equivalent monoidal triple products of three representations. Beyond that, I hope to not need it again. Comment by | November 11, 2008 | Reply 3. [...] remember that this doesn't mean that the two tensorands are always equal, but only that the results after (implicitly) summing up [...] Pingback by | November 19, 2008 | Reply 4. I personally love the summation convention, and loved it from the first time I encountered it. The problem I had with Sweedler notation is that nobody seems to explain it in detail. However I think I get it now – and I think I'll learn to like it. Comment by Blake | December 4, 2008 | Reply 5. So am I correct to understand that, when we write $c_{(1)}) \otimes c_{(2)})$ and $c_{(1)}) \otimes c_{(2)}) \otimes c_{(3)}$, the $c_{(2)}$ in the former is different from the $c_{(2)}$ in the latter? That is, there's an implicit additional index here that you (hopefully) get by counting up the size of the tensor product you're working in? Comment by Daniel | March 26, 2011 | Reply 6. (modulo the stray parens that I managed to put in there somehow…) Comment by Daniel | March 26, 2011 | Reply 7. That's correct, Daniel, and that's one of the most confusing things about it. In fact, in $c_{(1)}$ and $c_{(2)}$ the $1$ and $2$ aren't really index values at all, and $c_{(1)}$ and $c_{(2)}$ aren't particular values of some indexed quantity. For example, there's a certain common situation where we write $\Delta(X)=X\otimes1+1\otimes X$. In this case, $c_{(1)}$ can be $X$ and $c_{(2)}$ can be $1$, or vice versa, depending on which of the two summands we're talking about. Comment by | March 26, 2011 | Reply
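To make the "implicit sum of tensor pairs" concrete (my own sketch, not from the post): represent $\Delta(c)$ as an explicit list of coefficient-tagged pairs $(c_{(1)}, c_{(2)})$ and check the counit law for the element $X$ with $\Delta(X)=X\otimes1+1\otimes X$, $\epsilon(X)=0$, $\epsilon(1)=1$, the example mentioned in the last comment.

```python
# Basis elements are the strings "1" and "X"; Delta(c) is a list of
# (coefficient, c1, c2) triples standing for the implicit Sweedler sum.

delta = {
    "1": [(1, "1", "1")],                   # 1 is group-like
    "X": [(1, "X", "1"), (1, "1", "X")],    # X is primitive
}
epsilon = {"1": 1, "X": 0}

def counit_left(c):
    """(id x eps) applied to Delta(c): collect the sum of eps(c2) * c1."""
    out = {}
    for coeff, c1, c2 in delta[c]:
        out[c1] = out.get(c1, 0) + coeff * epsilon[c2]
    return {k: v for k, v in out.items() if v != 0}

def counit_right(c):
    """(eps x id) applied to Delta(c): collect the sum of eps(c1) * c2."""
    out = {}
    for coeff, c1, c2 in delta[c]:
        out[c2] = out.get(c2, 0) + coeff * epsilon[c1]
    return {k: v for k, v in out.items() if v != 0}

for c in ("1", "X"):
    assert counit_left(c) == counit_right(c) == {c: 1}   # the counit laws hold
print("counit laws verified for 1 and X")
```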
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 38, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9277414679527283, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-equations/129387-cannot-solve-differential-equation.html
# Thread: 1. ## Cannot Solve this Differential Equation I would like some help solving this differential equation if possible: y'' + 2y' + y = xe^(-x) I end up with the solution of the equation being: y = C1*e^(-x) + C2*xe^(-x) + yp, where the first two terms are the complementary solution. However, here I get stumped for how to solve for yp. I set the trial solution as (Ax+B)e^(-x) and then differentiate to find y' and y'' and plug it into the equation. But then everything cancels out, giving me x = 0. Can someone explain how to solve the rest of this? 2. Originally Posted by zerobladex I would like some help solving this differential equation if possible: y'' + 2y' + y = xe^(-x) I end up with the solution of the equation being: y = C1*e^(-x) + C2*xe^(-x) + yp, where the first two terms are the complementary solution. However, here I get stumped for how to solve for yp. I set the trial solution as (Ax+B)e^(-x) and then differentiate to find y' and y'' and plug it into the equation. But then everything cancels out, giving me x = 0. Can someone explain how to solve the rest of this? Since your complementary solution has the same form as the trial particular solution, you need to increase the degree of the polynomial in $y_p$ by 2. So the form will be $(Ax^3+Bx^2+Cx+D)e^{-x}$ If you write the ODE out as an operator you get $(D+1)^2y=xe^{-x}$ To annihilate the right hand side you need to act on the equation by $(D+1)^2$ again. This gives $(D+1)^4y=0$ This gives the form $(Ax^3+Bx^2+Cx+D)e^{-x}$ as above
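For what it's worth, here is a quick symbolic check of where that trial form leads (my own sketch; carrying out undetermined coefficients with it gives the particular solution $y_p = x^3 e^{-x}/6$).

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 2) + 2*y(x).diff(x) + y(x), x*sp.exp(-x))

# Let sympy solve the whole equation...
print(sp.dsolve(ode, y(x)))    # something equivalent to y(x) = (C1 + C2*x + x**3/6)*exp(-x)

# ...and verify the particular solution by direct substitution.
yp = x**3 * sp.exp(-x) / 6
residual = sp.simplify(yp.diff(x, 2) + 2*yp.diff(x) + yp - x*sp.exp(-x))
print(residual)                # prints 0
```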
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9260638952255249, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/7043/list
I've just realized I was being a little bit slow. I had already found on the internet that $n^{-2}\sum_{k=1}^nφ(k)$ is roughly $3/π^2$ and stupidly didn't notice that I could "differentiate" this to get exactly what I want. That is, $\sum_1^N φ(k)$ is about $3N^2/π^2$, so the difference between the sum to $N+M$ and the sum to $N$ is around $6NM/π^2$, from which it follows that the average value near $N$ is around $6N/π^2$, which is entirely consistent with the well-known fact that the probability that two random integers are coprime is $6/π^2$. I'm adding this paragraph after Greg's comment. To argue that the probability that two random integers are coprime is $6/π^2$, you observe that the probability that they do not have $p$ as a common factor is $(1-1/p^2)$. If you take the product of that over all $p$ then you get the reciprocal of the Euler product formula for $ζ(2) = 1^{-2}+2^{-2}+\cdots = π^2/6$. It's not that hard to turn these formal arguments into a rigorous proof, since everything converges nicely.
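A small numerical check of these asymptotics (my own sketch, not part of the answer): sum $φ(k)$ up to $N$ and compare with $3N^2/π^2$, then compare the average of $φ$ over a window just above $N$ with $6N/π^2$.

```python
import math

def phi(n):
    """Euler's totient by the naive gcd definition (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1)

def phi_sum(n):
    return sum(phi(k) for k in range(1, n + 1))

N, M = 2000, 200
S_N, S_NM = phi_sum(N), phi_sum(N + M)

print(S_N, 3 * N**2 / math.pi**2)            # close; the relative error shrinks as N grows
print((S_NM - S_N) / M, 6 * N / math.pi**2)  # local average of phi near N vs 6N/pi^2
```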
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9846765995025635, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/36467/nl-vs-nl-choose-n?answertab=votes
$N^L$ vs. ${N+L\choose N}$ Any ideas on finding a good estimate/approximation for $A/B$ where $A = N^L$ and $B = {N+L\choose N}$? - 2 You could try applying Stirling on the factorials implicit in the binomial coefficient... – J. M. May 2 '11 at 17:07 1 I don't understand the notations $N^L$ and $C_{N+L}^N$. What do those mean? – Mitch May 2 '11 at 17:17 @Mitch: I am taking $N^L$ as the exponential and $C_{N+L}^N$ as the binomial coefficient of $N+L$ choose $N$ – Ross Millikan May 2 '11 at 17:52 1 In what regime? If $L$ is fixed and $N$ is allowed to grow then the ratio approaches $L!$. – Qiaochu Yuan May 2 '11 at 17:54 Wow, just the slightest change in notation (from lower case to upper) made me misunderstand. That's not a problem with the notation, but a problem with my reading ability. – Mitch May 2 '11 at 17:56 show 1 more comment 3 Answers If you expand $B$ as $\frac{(N+L)!}{N!L!}$ and then use Stirling's approximation on the factorials, you will be very close. - $\log(A/B) = \log(L!) - \sum_{j=1}^L \log(1+j/N)$. You can approximate or bound the sum in various ways, depending on your needs. - I'm assuming that by $C_{N+L}^N$ you mean the binomial coefficient $(N+L)!/N!L!$. (I would denote this by ${N+L \choose N}$.) If you're thinking of $L$ as a constant then you can write this as $${(N+L)(N+L-1) \cdots (N+1) \over L!}.$$ And you can expand out the numerator; you get $${(N^L + {L(L+1) \over 2} N^{L-1} + \cdots) \over L!}$$ - Thank you, this is a good hint as well. – Leo May 2 '11 at 18:59
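A quick numerical companion to the answers above (my own sketch): for fixed $L$ and growing $N$ the ratio $A/B = N^L/{N+L\choose N}$ approaches $L!$, and the logarithmic form $\log(A/B)=\log(L!)-\sum_{j=1}^L\log(1+j/N)$ reproduces it exactly (up to floating point).

```python
import math

def ratio(N, L):
    return N**L / math.comb(N + L, N)

def log_ratio(N, L):
    # log(A/B) = log(L!) - sum_{j=1}^{L} log(1 + j/N)
    return math.lgamma(L + 1) - sum(math.log(1 + j / N) for j in range(1, L + 1))

L = 5
for N in (10, 100, 1000, 10000):
    print(N, ratio(N, L), math.exp(log_ratio(N, L)), math.factorial(L))
# ratio and exp(log_ratio) agree, and both climb toward L! = 120 as N grows.
```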
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9342897534370422, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/90402/rational-forms-of-simple-lie-algebras/93063
## Rational forms of simple Lie algebras I am more or less familiar with the classification of real forms of complex semisimple Lie algebras. But as soon as I wander off into the domain of very-non-algebraically closed fields, things seem to become considerably more complex. Is there a classification of the rational forms of complex semisimple finite-dimensional Lie algebras? Of course, there is such a classification, and one can in principle carry it out using standard ideas involving Galois cohomology &c. The question is, rather, has it been written down explicitly and with a nice parametrization? In particular: How many rational forms does the complex algebra of type $G_2$ have? - ## 3 Answers I think the answer to the first question is clearcut: yes, there is a (sort of) classification. On the other hand, it gets extremely complicated when you start over `$\mathbb{Q}$` rather than `$\mathbb{R}$`. As far as I know, all detailed treatments require (beyond an early stage) case-by-case examination of the simple types over `$\mathbb{C}$`. As you note, Galois cohomology gets implicated here, along with the careful study of inner and outer automorphism groups of each simple Lie algebra. The final chapter of Jacobson's 1962 book Lie Algebras lays out the algebraic program, which at that time hadn't disposed of all the exceptional types (some of my fellow graduate students were still occupied with that). The main problem is that you get involved with the classification of various types of associative algebras along the way. This is easier to control over local fields, but there are infinitely many of those fields in the background of the study of Lie algebras over `$\mathbb{Q}$`. Not having spent time with this problem for many decades, I can't comment in more detail on how well satisfied one might be with the answers in the existing literature. But beyond Jacobson's book, there is a somewhat more "rational" approach developed in considerable detail by one of his former students George Seligman, which I reviewed for the AMS Bulletin here. The exceptional Lie algebra of type `$G_2$` was apparently first studied in depth by Jacobson himself in Duke Math. J. 5 (1939); here the relevant (non-associative) algebra is the 8-dimensional Cayley (octonion) algebra whose forms over the ground field have to be identified. None of this literature makes for easy reading, but in retrospect my impression (possibly false) is that infinitely many forms can exist over `$\mathbb{Q}$`. I'd be curious as to whether any recent literature simplifies the whole matter at all. P.S. George McNinch has addressed some of this more precisely from the viewpoint of Galois cohomology. Maybe I should just add that the big obstacle in general to finding all forms of a simple Lie algebra over a given field is the possible existence of many anisotropic forms. Over `$\mathbb{R}$` one is lucky enough to find unique anisotropic (=compact) forms, but the classification of such forms is highly sensitive to the nature of the field. Classification methods for algebraic groups (Tits) or Lie algebras (discussed above) usually just reduce the problem in a unified way to the anisotropic case. - Sorry: I was being a dim-wit.
(My answer now reflects a fair amount of editing -- sorry.) There are only finitely many isomorphism classes of $\mathbf{Q}$-forms of simple groups of type $G_2$. I expect that means there are only finitely many iso classes of $\mathbf{Q}$-forms of simple Lie algebras of type $G_2$ though I confess I didn't carefully think through the transition from algebraic groups to Lie algebras. Indeed, over any field $k$, $k$-forms of a simple algebraic group of type $G_2$ are classified by the cohomology set $H^1(k,H)$ where $H$ is a split group of type $G_2$. [We use here that $H$ is simple -- i.e. adjoint -- and that every $k$-automorphism of $H$ is inner]. Now, one knows that $H^1(\mathbf{Q}_p,H)$ is trivial for all primes p (since $\mathbf{Q}_p$ is a local field and $G$ is simply connected). And $H^1(\mathbf{R},H)$ is finite [there are only finitely many (two, I think?) real forms]. Hence the Hasse principle implies that $H^1(\mathbf{Q},H)$ is finite. For a general field $k$, note that the cohomology set $H^1(k,G)$ identifies with (1) the set of isomorphism classes of octonion algebras over $k$, and (2) the set of isomorphism classes of certain quadratic forms known as 3-Pfister forms. For all this, see e.g. Serre's part of the book [Garibaldi, Merkurjev, Serre "Cohomological Invariants and Galois cohomology". At least if the characteristic of $k$ is 0 and $H_1$ and $H_2$ are two $k$-forms of the group $G_2$, I hope that $H_1 \not \simeq H_2$ should imply that $\operatorname{Lie}(H_1) \not \simeq \operatorname{Lie}(H_2)$. So to decide if there are infinitely many $k$-forms of the simple Lie algebra of type $G_2$, it is enough to exhibit infinitely many 3-Pfister forms over $k$ which are not isometric. Now, Theorem 18.1 in [Serre, loc. cit.] calculates the group of cohomological invariants Inv$_k($Pfister$_3,\mathbf{Z}/2\mathbf{Z})$; it is a free module of rank 2 over the (infinite) cohomology ring $H^\bullet(k,\mathbf{Z}/2\mathbf{Z})$. And there is an invariant e such that for $\alpha_1,\alpha_2,\alpha_3 \in k^\times$, the value of e on the $3$-Pfister form $Q_\alpha = \langle \langle \alpha_1$, $\alpha_2$, $\alpha_3 \rangle \rangle$ is the cup-product $(\alpha_1)\cup (\alpha_2) \cup (\alpha_3)$ in $H^3(k,\mathbf{Z}/2\mathbf{Z})$ of the classes in $H^1(k,\mathbf{Z}/2\mathbf{Z}) = k^\times/k^{\times 2}$ determined by the $\alpha_i$. In fact, any "normalized invariant" is a $H^\bullet(k,\mathbf{Z}/2\mathbf{Z})$-multiple of $e$. So to use these invariants to find lots of non-isometric 3-Pfister forms, you'd need at least that $H^3(k,\mathbf{Z}/2\mathbf{Z})$ is non-zero. Now, $H^3$ is non-zero for $k=\mathbf{Q}((T))$ (or even $\mathbf{Q}_p((T))$) and I believe I would expect there to be infinitely many isometry classes of Pfister forms in those cases (but I'd be interested in seeing an argument...) - If I understand you correctly: the Hasse principle implies that for $G$ split over $\mathbb Q$ we have $H^1(\mathbb Q,G)=H^1(\mathbb R,G)$ and, taking $G$ of split type $G_2$ this allows us to conclude that there are two rational forms of $G_2$, giving after extension of scalars to $\mathbb R$ the compact form and the one other form shown in the tables of, say, [Onishchik, Vinberg, Lie Groups and Lie Algebras III]. 
(Since this second real form is not compact, their rational Lie algebras cannot be isomorphic) – Mariano Suárez-Alvarez Mar 8 2012 at 23:26 @Mario: Yes, for $G$ split of type $G_2$, and more generally for $G$ simply connected, $H^1(k,G)$ vanishes for $k = \mathbf{Q}_p$ so the Hasse principle implies what you say. When $G$ is no longer simply connected, that vanishing may fail. And the Hasse principle may fail. The issue about Lie algebras that gave me pause was this: does every $\mathbf{Q}$ form of a simple Lie algebra come from a form of the corresponding algebraic group? I guess the answer is "yes" since the automorphism group of the Lie algebra and algebraic group should coincide; but I didn't think too carefully about it – George McNinch Mar 9 2012 at 2:13 (Disclaimer: I am not an expert and easily may overlook something). Results about forms of exceptional types are technically difficult and scattered over the literature. For example, the only known to me full description of forms of $D_4$ is buried (somewhat implicitly) inside the book: Knus, Merkurjev, Rost, Tignol, The Book of Involutions, AMS Colloq. Publ, Vol. 44., 1998, http://www.mathematik.uni-bielefeld.de/~rost/BoI.html . The works of Skip Garibaldi (using the language of algebraic groups) are also highly relevant. - Thanks for the reference! – Mariano Suárez-Alvarez Apr 4 2012 at 2:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 65, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9402952194213867, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/111605-find-solution-set-equation.html
# Thread: 1. ## Find the solution set of the equation Question : Find the solution set of the equation $\begin{bmatrix} x & 3 & 7 \\ 2 & x & 2\\ 7 & 6 & x \end{bmatrix} = 0$ It is given that x = -9 is one of the roots 2. Originally Posted by zorro Question : Find the solution set of the equation $\begin{bmatrix} x & 3 & 7 \\ 2 & x & 2\\ 7 & 6 & x \end{bmatrix} = 0$ It is given that x = -9 is one of the roots Your equation does not make any sense. The zero matrix is $\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}$ which is CLEARLY not the same as $\begin{bmatrix} x & 3 & 7 \\ 2 & x & 2\\ 7 & 6 & x \end{bmatrix}$. 3. ## Maybe this would make some sense Originally Posted by Prove It Your equation does not make any sense. The zero matrix is $\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}$ which is CLEARLY not the same as $\begin{bmatrix} x & 3 & 7 \\ 2 & x & 2\\ 7 & 6 & x \end{bmatrix}$. Maybe this will make things clearer ...... The matrix is not this $\begin{bmatrix} x & 3 & 7 \\ 2 & x & 2\\ 7 & 6 & x \end{bmatrix}$ = 0 , but this $\begin{vmatrix} x & 3 & 7 \\ 2 & x & 2\\ 7 & 6 & x \end{vmatrix}$ = 0 4. Originally Posted by zorro Maybe this will make things clearer ...... The matrix is not this $\begin{bmatrix} x & 3 & 7 \\ 2 & x & 2\\ 7 & 6 & x \end{bmatrix}$ = 0 , but this $\begin{vmatrix} x & 3 & 7 \\ 2 & x & 2\\ 7 & 6 & x \end{vmatrix}$ = 0 The first step is to get an expression for the determinant. Then equate this expression to zero and solve for x. Please show what you've done and where you get stuck. 5. The determinant will clearly be a cubic in x. Since you are already given that x = -9 is a root, you can factor it as (x + 9) times some quadratic and then, if necessary, use the quadratic formula. 6. ## Is this correct Originally Posted by mr fantastic The first step is to get an expression for the determinant. Then equate this expression to zero and solve for x. Please show what you've done and where you get stuck. After taking the determinant this is what I have got $x(x^2 - 12) - 3(2x - 14) + 7(12 - 7x) = 0$ $x^3 - 12x - 6x + 42 + 84 - 49x = 0$ $x^3 - 67x + 126 = 0$ $(x + 9) (x^2 - 9x + 14)$ $(x + 9) (x - 7) (x - 2)$ So what is the answer as $x = -9 , \ 7 , \ 2$ I didn't actually understand the question, therefore I don't know what to do after this ??????? 7. Wasn't it just asking you to solve for $x$? You've done this... 8. If you want to get really, really "technical" (and mathematicians are notorious for that!), since the problem asked for "solution set", the answer is the set of those solutions: {-9, 7, 2}. (Warning: I am assuming those are the correct solutions; I didn't check them.) 9. ## Thank u for helping me Originally Posted by HallsofIvy If you want to get really, really "technical" (and mathematicians are notorious for that!), since the problem asked for "solution set", the answer is the set of those solutions: {-9, 7, 2}. (Warning: I am assuming those are the correct solutions; I didn't check them.) Thanks HallsofIvy and Prove It for helping me Cheers
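As a quick independent check of the algebra in the thread above (this sketch is an editorial addition, not part of the original posts, and assumes sympy is available), one can recompute the determinant and its roots:

```python
# Recompute the determinant from the thread and confirm the solution set {-9, 2, 7}.
from sympy import symbols, Matrix, factor, solve

x = symbols('x')
M = Matrix([[x, 3, 7],
            [2, x, 2],
            [7, 6, x]])

p = M.det().expand()
print(p)            # x**3 - 67*x + 126
print(factor(p))    # product of (x + 9), (x - 2), (x - 7), in some order
print(solve(p, x))  # [-9, 2, 7], in some order
```

This agrees with the expansion $x^3 - 67x + 126 = 0$ and the factorization $(x+9)(x-7)(x-2)$ reached in the thread.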
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.949033260345459, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/1406/what-does-the-property-that-path-connectedness-implies-arc-connectedness-imply/102825
## What does the property that path-connectedness implies arc-connectedness imply? A space X is path-connected if any two points are the endpoints of a path, that is, the image of a map $[0,1] \to X$. A space is arc-connected if any two points are the endpoints of a path, that is, the image of a map $[0,1] \to X$ which is a homeomorphism onto its image. If X is Hausdorff, then path-connected implies arc-connected. I was wondering about the converse: What properties must X have if path-connected implies arc-connected? In particular, what are equivalent properties? - More precisely: what properties must a space have where two points that can be joined by a path can be joined by an arc. And more vaguely: why is it needed (or simply useful) to know that two points can be joined by an arc, and not simply a path? – Benoit Jubin Oct 21 2009 at 1:09 ## 2 Answers I don't have an answer, but here is an example to show it's not a local property that decides it. Consider the real line with two inseparable zeros, 0 and 0'. Clearly there is a path from 0 to 0' but not an arc. On the other hand, if you adjoin a point at infinity, making a circle with a double point on it, you can make such an arc going through infinity, and so the space is arc-connected. - 3 That's not an embedding, though - the image is the whole space, and the circle with a double point isn't Hausdorff. – Harry Altman May 30 2010 at 1:55 It suffices that $X$ be Hausdorff: the path is then a compact metric image of [0,1] and as such arc-wise connected (do Problem 6.3.11 of Engelking's General Topology). - But it is not necessary. The real line with a double origin is a counterexample. – skupers May 24 2010 at 3:13 The best I can come up with then is an artificial "any two points are connected by a locally connected metric continuum". Any arc is such a continuum and the exercise mentioned above establishes that such continua are arcwise connected. It looks to me like that exercise, a step towards the Hahn-Mazurkiewicz theorem, will have to be a main part of any argument and any iff-condition that should set things up for a construction of an arc will be as artificial as what I wrote above. – KP Hart May 25 2010 at 9:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9454508423805237, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/80657-complex-variables.html
# Thread: 1. ## Complex variables Find all functions $f (z )$ satisfying the following two conditions: (1) $f (z )$ is analytic in the disk $|z - 1| < 1$ . (2) $f ( \frac{n}{n + 1} ) = 1 - \frac{1}{2n^2 + 2n + 1}$. 2. Originally Posted by vincisonfire Find all functions $f (z )$ satisfying the following two conditions: (1) $f (z )$ is analytic in the disk $|z - 1| < 1$ . (2) $f ( \frac{n}{n + 1} ) = 1 - \frac{1}{2n^2 + 2n + 1}$. $\frac1{2n^2+2n+1} = \frac1{(n+1)^2+n^2} = \frac{\frac1{(n+1)^2}}{1+\bigl(\frac n{n+1}\bigr)^2} = \frac{\bigl(1-\frac n{n+1}\bigr)^2}{1+\bigl(\frac n{n+1}\bigr)^2}$, so you can take $f(z) = 1 - \frac{(1-z)^2}{1+z^2}$. (But could there be any other analytic functions taking those values at the points n/(n+1)?) 3. Originally Posted by Opalg ... but could there be any other analytic functions taking those values at the points n/(n+1)?... In a problem I'm working on, one important step is to prove this lemma... Let $f(*)$ be an analytic function whose values $f_{n}$ are known for $z=0,1,...,n, ...$. In this case, under certain conditions, there is only one analytic $f(*)$ for which $f(n)= f_{n}$. Does Opalg think that is an interesting question to be discussed in MHF?... if yes, in which section?... Kind regards $\chi$ $\sigma$ 4. Originally Posted by chisigma In a problem I'm working on, one important step is to prove this lemma... Let $f(*)$ be an analytic function whose values $f_{n}$ are known for $z=0,1,...,n, ...$. In this case, under certain conditions, there is only one analytic $f(*)$ for which $f(n)= f_{n}$. Does Opalg think that is an interesting question to be discussed in MHF?... if yes, in which section?... Yes, that's a nice question. You could ask it as a new thread in this section.
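A small check of Opalg's candidate function (an editorial sketch, not part of the thread; it assumes sympy is available) confirms the interpolation property. As for the parenthetical question: the points $n/(n+1)$ accumulate at $1$, the centre of the disk $|z-1|<1$, so by the identity theorem this is in fact the only analytic function on that disk taking the prescribed values.

```python
# Verify that f(z) = 1 - (1-z)^2/(1+z^2) satisfies f(n/(n+1)) = 1 - 1/(2n^2+2n+1).
from sympy import symbols, simplify

n, z = symbols('n z')
f = 1 - (1 - z)**2 / (1 + z**2)

lhs = f.subs(z, n / (n + 1))
rhs = 1 - 1 / (2*n**2 + 2*n + 1)
print(simplify(lhs - rhs))  # 0
```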
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9300572276115417, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/89475/list
Return to Question 2 added 103 characters in body As far as I know, there are four possible ways to generalize algebraic geometry by 'simply' replacing the basic category of rings with something similar but more general: $\bullet$ In the approach by Toen-Vaquié we fix a nice symmetric monoidal category $C$, also called a relative context. An affine scheme is defined to be an algebra object in $C$, and an arbitrary scheme is a certain presheaf on affine schemes. We optain the category $\mathrm{Sch}(C)$ of schemes relative to $C$. $\bullet$ In Durov's theory a generalized ring is an algebraic monad which is commutative in a certain sense. Then affine schemes are defined to be the spectra of generalized rings and arbitrary schemes are optained by gluing. This results in the category $\mathrm{genSch}$. $\bullet$ In his book Categories of commutative algebras Yves Diers considers Zariski categories, which seem to axiomatize familiar properties of categories of commutative algebras. If $\mathcal{A}$ is such a Zariski category, then one can develope commutative algebra internal to $\mathcal{A}$, construct affine schemes and then by gluing also schemes as usual. We optain the category $\mathrm{Sch}(\mathcal{A})$. $\bullet$ In derived algebraic geometry one replaces the category of rings with the category of simplicial rings (but I don't really know enough about that, yet). My question is: What are the connections between these 'generalized algebraic geometries'? Fortunately there is a map of $\mathbb{F}_1$-land which draws connections between all these various approaches to schemes over $\mathbb{F}_1$. For example monoid schemes à la Deitmar/Kato are in the intersection of Toen-Vaquiè and Durov. Note, however, that the theories mentioned above are far more general. Specifically, one might ask the following questions: Is the category of generalized rings a Zariski category and is Durov's theory (say, with the unary localization theory) a special case of the one by Yves Diers? What is the relationship between Toen-Vaquié schemes relative to the symmetric monoidal category of simplicial rings and derived schemes? If $C$ is a relative context, is then the category of algebra objects in $C$ a Zariski category and do the corresponding schemes coincide? Probably not because Diers never mentions monoids as an example, but perhaps it's the other way round? Of course, many more questions are out there ... Probably I'm not the first one with this question, therefore I've also put the "reference-request" tag. It would be great if there is some paper like "Mapping AG-land". 1 Connections between various generalized algebraic geometries (Toen-Vaquié, Durov, Diers, Lurie)? As far as I know, there are four possible ways to generalize algebraic geometry by 'simply' replacing the basic category of rings with something similar but more general: $\bullet$ In the approach by Toen-Vaquié we fix a nice symmetric monoidal category $C$, also called a relative context. An affine scheme is defined to be an algebra object in $C$, and an arbitrary scheme is a certain presheaf on affine schemes. We optain the category $\mathrm{Sch}(C)$ of schemes relative to $C$. $\bullet$ In Durov's theory a generalized ring is an algebraic monad which is commutative in a certain sense. Then affine schemes are defined to be the spectra of generalized rings and arbitrary schemes are optained by gluing. This results in the category $\mathrm{genSch}$. 
$\bullet$ In his book Categories of commutative algebras Yves Diers considers Zariski categories, which seem to axiomatize familiar properties of categories of commutative algebras. If $\mathcal{A}$ is such a Zariski category, then one can develope commutative algebra internal to $\mathcal{A}$, construct affine schemes and then by gluing also schemes as usual. We optain the category $\mathrm{Sch}(\mathcal{A})$. $\bullet$ In derived algebraic geometry one replaces the category of rings with the category of simplicial rings (but I don't really know enough about that, yet). My question is: What are the connections between these 'generalized algebraic geometries'? Fortunately there is a map of $\mathbb{F}_1$-land which draws connections between all these various approaches to schemes over $\mathbb{F}_1$. For example monoid schemes à la Deitmar/Kato are in the intersection of Toen-Vaquiè and Durov. Note, however, that the theories mentioned above are far more general. Specifically, one might ask the following questions: Is the category of generalized rings a Zariski category and is Durov's theory (say, with the unary localization theory) a special case of the one by Yves Diers? What is the relationship between Toen-Vaquié schemes relative to the symmetric monoidal category of simplicial rings and derived schemes? If $C$ is a relative context, is then the category of algebra objects in $C$ a Zariski category and do the corresponding schemes coincide? Of course, many more questions are out there ... Probably I'm not the first one with this question, therefore I've also put the "reference-request" tag. It would be great if there is some paper like "Mapping AG-land".
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9256580471992493, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-equations/202276-general-solution-pde.html
3Thanks • 1 Post By GJA • 1 Post By GJA • 1 Post By GJA # Thread: 1. ## General Solution of a PDE Find the general solution of the following PDE. uxxyy = 0 I know I have to go from right to left i.e integrate with respect to y, then y then x and x. So here it goes uxxy = F(x) Where F(x) is a constant in terms of x. uxx =yF(x) +F1(x) Where F1(x) is a constant in terms of x. ux = yxF(x) + xF1(x) + F2(y) Where F2(y) is a constant in terms of y. u = u(x,y) = 1/2 (yx2F(x) ) + 1/2 (x2F1(x) ) + xF2(y) + F3(y) Where F3(y) is a constant in terms of y. Is that correct? Thanks 2. ## Re: General Solution of a PDE Hi, princessmath. Nice work so far. Something did catch my eye. You have the line $u_{xx}=yF(x) + F_{1}(x)$ . We need to be careful at this point, because the next step is to take an antiderivative with respect to x. Since there are functions of x on the right hand side we cannot do this by putting x's in front of them like we were able to put y in front of F(x) in the previous integration. Does this make sense? Let me know if there are any questions. Good luck! 3. ## Re: General Solution of a PDE Hey GJA, Do you mean I have to change F(x) to F(x^2/2) if we were to integrate wrt x? The y in front is fine because the LHS has no y. is that what you mean? 4. ## Re: General Solution of a PDE Hi again. I'm glad you're sticking with it on this tricky little problem! Let's start from $u_{xx}=yF(x)+F_{1}(x).$ We want to integrate both sides with respect to $x$ now. But we have a problem because we don't know what $F(x)$ and $F_{1}(x)$ are; all we know is that they are functions of $x$. I think if we use a concrete function for $F_{1}(x)$ what I'm getting at might be a little more clear. Note: I'm ignoring $F(x)$ for now because I'm trying to demonstrate why after integrating we don't have $xF_{1}(x).$ For example, pretend $F_{1}(x)=e^{x}.$ Then we have $u_{xx}=yF(x)+e^{x}.$ When you take an antiderivative of the right hand side $e^{x}$ stays $e^{x}$, it does NOT become $xe^{x}=xF_{1}(x).$ Does that help demonstrate what we're getting at? Again, the main issue is that we don't know what $F(x)$ and $F_{1}(x)$ are specifically; they can be any functions with $x$'s in it. So, at best, after we take an antiderivative we can write $u_{x}=y\int F(x)dx+\int F_{1}(x)dx+F_{2}(y),$ where $F_{2}(y)$ is a function of $y$ only, and so is constant with respect to $x.$ Again, good work. Keep working hard and asking questions and it will make sense! Good luck! 5. ## Re: General Solution of a PDE Oh I see what you mean. Yeah I never thought about it like that. So it's a function in terms of x. Hmm let me think about it and see if I can produce a final solution. Thanks a lot! 6. ## Re: General Solution of a PDE Can you generalize it to say Integral of F(x) is equal to I(x) where I(x) = S F(x) dx and then will continue the same process? Do you think that is valid? 7. ## Re: General Solution of a PDE If you want to simplify the notation a bit and write $I(x)=\int F(x)dx$ and $I_{1}(x)=\int F_{1}(x)dx,$ that's no problem at all 8. ## Re: General Solution of a PDE I got it! So basically I get this general solution where I have something that looks like this. U(x,y) = yM(x) + M1(x) + xF2(y) +F3(y) Where M(x) = S I(x) and where I(x) = S F(x) sames goes for M1 = S I1(x) where I1(x) = S F1 (x). Is there a simpler way of doing it? Thanks
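To see that the form reached at the end of the thread really does solve the equation, here is a short verification (an editorial sketch, not part of the thread, assuming sympy is available); the four sample functions below are arbitrary, hypothetical choices standing in for M(x), M1(x), F2(y) and F3(y). The point is simply that two x-derivatives kill the x*F2(y) and F3(y) terms, leaving y*M''(x) + M1''(x), which the two y-derivatives then kill.

```python
# Check that u = y*M(x) + M1(x) + x*F2(y) + F3(y) satisfies u_xxyy = 0
# for sample choices of the four arbitrary functions.
from sympy import symbols, diff, exp, sin, cos, simplify

x, y = symbols('x y')

M  = exp(x)      # stands in for M(x)
M1 = sin(3*x)    # stands in for M1(x)
F2 = cos(y)      # stands in for F2(y)
F3 = y**5        # stands in for F3(y)

u = y*M + M1 + x*F2 + F3
print(simplify(diff(u, x, 2, y, 2)))  # 0
```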
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9417272806167603, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/95742?sort=votes
What is the name of $\frac{e^z-1}{z}$ and how to invert it? I came across this complex function in my work $f(z)=\frac{e^z-1}{z}$. Is there a reference to $f(z)$? What is its name in the literature? More importantly, is the function invertible? If so, what is $f^{-1}(z)$? Thanks. - I don't know about the name, but certainly it can't be globally invertible. For example, it assumes the value 0 infinitely often. The only invertible global holomorphic functions are the polynomials of degree 1. – Angelo May 2 2012 at 10:08 4 en.wikipedia.org/wiki/… is relevant. – Neil Strickland May 2 2012 at 10:31 Thanks, if I write the function as a series instead, i.e. $f(z) = \sum_{k=0}^\infty \frac{z^k}{(k+1)!}$, is it invertible? – Minh-Tri Pham May 2 2012 at 11:19 It is strictly increasing on the reals, so it is invertible there. – Gerald Edgar May 2 2012 at 12:51 2 The Wikipedia article gives a series for 1/f(z), but the question was about $f^{-1}(z)$. – Michael Renardy May 2 2012 at 15:00 2 Answers Let $y=(e^z-1)/z$ and $x=-1/y$. Then $xe^x=(x-z)e^{x-z}$. Hence $$x-z=W(xe^x).$$ Here W is an appropriately chosen branch of the Lambert function (ProductLog[-1,.] in Mathematica). - Interesting. Of course $x=W(x e^x)$ for some other branch of $W$. – Gerald Edgar May 2 2012 at 15:25 1 ...so I guess this means: solutions to $y=f(z)$ are $$z=-\frac{1 + W_k \left(-\frac{\operatorname{e} ^{-1/y}}{y}\right) y}{y}$$ where $W_k$ are the branches of the Lambert W function. – Gerald Edgar May 2 2012 at 15:37 We need to rule out the principal branch of the Lambert function, which would simply give z=0. – Michael Renardy May 2 2012 at 15:51 Thanks. This is precisely the answer I have been looking for. – Minh-Tri Pham May 2 2012 at 16:32 As for the name, according to wikipedia the Todd genus is given by: $$\mathrm{Td}(z)=\frac{z}{1-e^{-z}}.$$ So, $f(z)=1/\mathrm{Td}(-z)$. - 2 Interesting, but does not answer the question. Why the upvotes??? – András Bátkai May 2 2012 at 19:55 2 @András Bátkai: probably because it's a near-answer ["$1/{\rm Td}(-z)$" feels closer to a named function than "$(e^z-1)/z$"] and makes a possibly unexpected connection with research-level mathematics. – Noam D. Elkies May 3 2012 at 0:56 1 @András Bátkai: Let's not be competitive - if someone posts a useful answer/remark I'm glad to upvote it. – Qfwfq May 3 2012 at 7:50 3 There was no competitiveness. I was just surprised that the accepted answer got (at the time) fewer upvotes than this one. Which is, of course, informative. – András Bátkai May 3 2012 at 8:20
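As a numerical sanity check of the accepted answer (an editorial addition, not part of the thread; it assumes numpy and scipy are available), one can invert the map for a real argument using the $W_{-1}$ branch, i.e. the ProductLog[-1, .] mentioned above. The branch choice below works for this real example with $z>0$; in general the right branch depends on where $z$ lies, and, as noted in the comments, the principal branch just returns the trivial solution $z=0$.

```python
# Recover z from y = (e^z - 1)/z via x = -1/y and z = x - W_{-1}(x e^x).
import numpy as np
from scipy.special import lambertw

z0 = 1.0
y = (np.exp(z0) - 1.0) / z0             # forward map
x = -1.0 / y
z = x - lambertw(x * np.exp(x), k=-1)   # branch k = -1
print(z)  # approximately (1+0j): z0 is recovered up to floating-point error
```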
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9260492324829102, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/65835?sort=votes
## Reference for functors in Kadeishvili’s C_\infty paper In his paper Cohomology $C_\infty$-algebra and rational homotopy type, Tornike Kadeishvili describes how the rational cohomology of a simply-connected space carries the structure of a $C_\infty$-algebra, and how this structure determines the rational homotopy type of the space. (The result has been mentioned before on MathOverflow, eg in these answers.) I'm trying to follow the proofs, which are somewhat light on details. In particular, they rely on an adjoint pair of functors $$\Gamma\colon CDGAlg \rightleftarrows DGLieCoalg \colon \mathcal{A}$$ introduced in section 4.3, between the categories of commutative differential graded algebras, and differential graded Lie coalgebras. The functor $\Gamma$ is given as the composition $$\Gamma\colon CDGAlg\stackrel{B}{\to}DGBialg\stackrel{Q}{\to}DGLieCoalg$$ where $B$ is a bar construction and $Q$ is the functor of indecomposables. The adjoint functor $\mathcal{A}$ is dual to the Chevalley-Eilenberg functor. There is a standard weak equivalence $\mathcal{A}\Gamma(A)\to A$. I am struggling to find any reference to these functors in the papers cited in the bibliography. For the proof of Theorem 9.1 we seem to need that the functor $\mathcal{A}\Gamma$ applied to the weak equivalences of $C_\infty$-algebras $$\lbrace f_i\rbrace\colon (H(A),\lbrace m_i\rbrace )\to (A,\lbrace d, \mu, 0,\ldots\rbrace)$$ yields a weak equivalence $\mathcal{A}\Gamma(H(A))\to\mathcal{A}\Gamma(A)$ in $CDGAlg$, but this is not stated anywhere and is not obvious to me. Is the above true, and can anyone explain why? Where can I read more about the functors $\Gamma$ and $\mathcal{A}$ and their properties, in particular in the setting of $C_\infty$-algebras? - This is standard Koszul duality. Take your favourite book on rational homotopy theory. – Fernando Muro May 25 2011 at 10:10 Dear Fernando, please could you expand on your comment a little? I have two favourite books on RHT, and neither of them has Koszul duality in the index. – Mark Grant May 25 2011 at 14:42 See for instance IV.22 in MR1802847 (2002d:55014) Félix, Yves; Halperin, Stephen; Thomas, Jean-Claude Rational homotopy theory. Graduate Texts in Mathematics, 205. Springer-Verlag, New York, 2001. xxxiv+535 pp. ISBN: 0-387-95068-0 (Reviewer: John F. Oprea), 55P62 (18Gxx 55U35) – Fernando Muro May 25 2011 at 15:40 @Fernando: Thanks. Do you mean that the functors in that chapter are (linearly) dual to the ones described above? What is still worrying me is that the weak equivalence $H(A)\to A$ is not a CDGA map, in particular it is only multiplicative up to boundaries. – Mark Grant May 26 2011 at 20:50 ## 2 Answers You can find all the arguments in Chapter 11 of the book downloadable at http://math.unice.fr/~brunov/Operads.html. This chapter deals with the bar and the cobar constructions for algebras over a Koszul operad. The last section [11.4] treats the extension to homotopy algebras. The theorem "the bar-cobar construction for $C_\infty$-algebras sends $\infty$-quasi-isomorphisms to quasi-isomorphisms" is exactly Proposition 11.4.11 applied to the operad $P=Com$. [Needless to say that this reference does not provide the very first proof of this fact for $C_\infty$-algebras. Let's just say that it is freely available on the net, so easily accessible.] - Thanks Bruno.
This certainly seems to be the right general framework to prove this, and a lot more besides! I've begun reading. – Mark Grant May 31 2011 at 19:03 Feel free to ask if you have any further question. I imagine that you were looking for a more ad hoc reference. Read this one with $P=Com$ and $P^{anti !}=Lie^*$ in mind. – Bruno V. Jun 1 2011 at 12:11 Ben Walter and I make the functors $\Gamma$ and $A$ more explicit, by using an explicit model for the cofree Lie Coalgebra functor, in this paper. We do not discuss the application to $\infty$-algebras as Bruno does in much greater generality. (We were interested in using explicit models to be able to compute, in particular in the long exact sequence of a fibration as we do in a sequel to this paper on Hopf invariants.) - Thanks Dev. The paper looks interesting, I'll take a look. – Mark Grant Jun 1 2011 at 10:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9090741872787476, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/134012/two-sides-and-angle-between-them-triangle-question
# two sides and angle between them triangle question. Is it possible to find the third side of a triangle if you know the lengths of the other two and the angle between the known sides? The triangle is not equilateral. We're using the Kinect camera and we can find the distance from the camera to the start of a line and the distance to the end of a line, and we can calculate the angle between the two lines knowing the maximum vertical and horizontal angle, but would it be possible to calculate the length of the line on the ground? The problem is that the camera won't be exactly above the line, so the triangle we get wouldn't be equilateral. - 1 Yes, you can use the law of cosines... – J. M. Apr 19 '12 at 17:22 ## 4 Answers How about the law of cosines? Consider the following triangle $\triangle ABC$, with sides $\color{maroon}{\overline{AB}=c}$ and $\color{maroon}{\overline{AC}=b}$ known. Further, the angle between them, $\color{green}\alpha$, is known. (The diagram accompanying the original answer is omitted here.) Then the law of cosines tells you that $$\color{maroon}{a^2=b^2+c^2-2bc\;\cos }\color{green}{\alpha}$$ - This is precisely what the cosine theorem allows you to compute: $$c^2 = a^2 + b^2 - 2ab\cos(\gamma)$$ where $\gamma$ is the angle opposite to $c$ (it is probably called the 'law of cosines' rather than the 'cosine theorem'). - Use the law of cosines... $c^2 = a^2 + b^2 - 2ab \cdot \cos{\theta}$ ... where $a$ and $b$ are the sides you know, $\theta$ the angle between them, and $c$ the side you seek, opposite $\theta$. - If you are interested in doing calculations with specific angles and sides when there is information which forces a specific triangle, as is true in Euclidean geometry with angle-side-angle, rather than "theory" you can do this at this on-line site: http://www.calculatorsoup.com/calculators/geometry-plane/triangle-theorems.php -
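For the Kinect use case described in the question, the computation is a one-liner once the two distances and the included angle are known. The following sketch (an editorial addition, not from the answers; the function name is just illustrative) uses the first answer's notation $a^2 = b^2 + c^2 - 2bc\cos\alpha$:

```python
# Third side of a triangle from two sides and the included angle (law of cosines).
from math import sqrt, cos, radians

def third_side(b: float, c: float, alpha_deg: float) -> float:
    """Length of the side opposite the included angle alpha."""
    alpha = radians(alpha_deg)
    return sqrt(b*b + c*c - 2.0*b*c*cos(alpha))

# e.g. measured distances of 2.0 m and 3.0 m with 60 degrees between them
print(third_side(2.0, 3.0, 60.0))  # ~2.6458
```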
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9384514093399048, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/68813/inhomogeneous-second-order-pde/69840
# Inhomogeneous Second Order PDE Given $3u_{tt} + 10u_{xt} + 3u_{xx} = \sin(x+t)$ find the general solution. I have yet to solve any inhomogeneous second order PDE (or even first order ones at that). For homogeneous PDE of same order, I managed to solve them by factoring the operators and so forth. Being new to PDEs (self studying via Strauss PDE book) I lack the intuition to find a clever way of solving these, however from my experience with ODEs I reckon there is a way to solve these by first solving the associated homogeneous first by factoring operators and so forth and stuff.. but not finding much progress on incorporating the $\sin(x+t)$ term. Any help & direction to solving this would be greatly appreciated. - Did you try to change the coordinates? For example, write $u(t,x)=v(\phi(t,x))$ where $\phi$ is a linear bijective transformation such that $3u_{tt}+10u_{xt}+3u_{xx}$ is simpler. – Davide Giraudo Sep 30 '11 at 17:27 I tried but failed to find a suitable candidate for the change as the $sin(x+t)$ and the various partial terms were throwing me off (It was easily doable in first order but second order adds more to take into factor) – Room Sep 30 '11 at 17:54 Put $u(t,x)=v(at+bx,x)$. The partial derivatives are simpler. – Davide Giraudo Sep 30 '11 at 18:20 ## 1 Answer I'm sure you have some change of variables formulas in your PDE course, meant to simplify the given equation. I don't remember the formulas, but there is a way to avoid them. The equation can be written in the following way: $$A u = \sin(t+x)$$ where $A$ is the operator $$A= 3\frac{\partial^2}{\partial t^2}+10\frac{\partial^2}{\partial t \partial x} +3 \frac{\partial^2}{\partial x^2}.$$ In this case, the operator can be factored (just like a binomial expression) in the following way: $$A =\left( \frac{\partial }{\partial t}+3\frac{\partial}{\partial x}\right)\left(3\frac{\partial}{\partial t}+\frac{\partial }{\partial x} \right).$$ Now, change the variables such that each of the two factors will be a partial derivation. $$\frac{\partial}{\partial y}=\left( \frac{\partial }{\partial t}+3\frac{\partial}{\partial x}\right), \frac{\partial}{\partial z}=\left(3\frac{\partial}{\partial t}+\frac{\partial }{\partial x} \right)$$ For this, change the variables to $\displaystyle \begin{cases} t=y+3z \\ x=3y+z \end{cases}$ and define $w(y,z)=u(t,x)$. Then we have $$\frac{\partial^2w }{\partial y \partial z}=Au= \sin(x+t)=\sin(4y+4z)$$ Integrate two times, with respect to $y,z$ and you will find $w$. From there it is easy to get to $u$. This might not be the most efficient method (for an exam, for example), but it might show from where the change of variables formulas come from. I did this in my exam, once, because I didn't remember the formulas. (I got zero points, because my teacher didn't like the method and there were some small mistakes). It is best to learn all the cases for change of variable, because that gains you time in an exam, but is also important to know from where those change of variable formulas come from. -
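Carrying the answer one step further (this is an editorial sketch, not part of the original exchange): integrating $w_{yz}=\sin(4y+4z)$ once in $y$ and once in $z$ gives the particular part $-\tfrac{1}{16}\sin(4y+4z)=-\tfrac{1}{16}\sin(x+t)$, while the factored operator gives homogeneous solutions of the form $f(x-3t)+g(3x-t)$. The sympy check below (assuming sympy is available) confirms that this combination satisfies the original equation:

```python
# Verify 3*u_tt + 10*u_xt + 3*u_xx = sin(x + t) for u = f(x-3t) + g(3x-t) - sin(x+t)/16.
from sympy import symbols, Function, sin, diff, simplify

t, x = symbols('t x')
f, g = Function('f'), Function('g')

u = f(x - 3*t) + g(3*x - t) - sin(x + t) / 16
lhs = 3*diff(u, t, 2) + 10*diff(u, t, x) + 3*diff(u, x, 2)
print(simplify(lhs - sin(x + t)))  # 0
```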
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9564851522445679, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/177333-continuity-complex-function.html
# Thread: 1. ## Continuity of complex function Note: Sorry if this is the wrong section, the course this is from is 'Complex Analysis', so I assumed it belonged here. The question f(z) = im(z) Where is f continuous? My attempt Unless I'm mistaken, we need to show that $\lim_{z \to z_0} f(z) = f(z_0)$ The limit is $y_0$, and f(z) is also $y_0$. So is the function continuous everywhere, or am I doing this incorrectly? Thanks. 2. Originally Posted by Glitch The question f(z) = im(z) Where is f continuous? My attempt Unless I'm mistaken, we need to show that $\lim_{z \to z_0} f(z) = f(z_0)$ The answer depends level of rigor required. If you need a $\varepsilon /\delta$ proof here is a suggestion. If $|z-z_0|<\delta$ then $|y-y_0|\le\sqrt{(x-x_0)^2+(y-y_0)^2}=|z-z_0|$. 3. I don't think we have to use epsilon delta proofs. I'm quite sure my lecturer mentioned that it was non-examinable. 4. An alternative is to use the property $\displaystyle\lim_{z\to z_0}f(z)=w_0\Leftrightarrow \displaystyle\lim_{z\to z_0}\textrm{Re}(f(z))=\textrm{Re}(w_0)\;\wedge\; \displaystyle\lim_{z\to z_0}\textrm{Im}(f(z))=\textrm{Im}(w_0)$
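For what it's worth, here is a small numerical illustration of the bound in the epsilon-delta hint above (an editorial addition, not part of the thread; it assumes numpy is available): the imaginary part can move by at most $|z-z_0|$, which is exactly why $f(z)=\operatorname{Im}(z)$ is continuous everywhere.

```python
# Spot-check |Im z - Im z0| <= |z - z0| for many random points near z0.
import numpy as np

rng = np.random.default_rng(0)
z0 = 2.0 + 1.5j
z = z0 + 0.01 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
ok = np.abs(z.imag - z0.imag) <= np.abs(z - z0) + 1e-12  # tiny slack for rounding
print(ok.all())  # True
```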
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9433562755584717, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/140428/continuous-versus-differentiable
Continuous versus differentiable A function is "differentiable" if it has a derivative. A function is "continuous" if it has no sudden jumps in it. Until today, I thought these were merely two equivalent definitions of the same concept. But I've read some stuff today which seems to be claiming that this is not the case. The obvious next question is "why?" Apparently somebody has already asked: Are Continuous Functions Always Differentiable? Several answers were given, but I don't understand any of them. In particular, Wikipedia and one of the replies above both claim that $|x|$ has no derivative. Can anyone explain this extremely unexpected result? Edit: Apparently some people dislike the fact that this is non-obvious to me. To be clear: I am not saying that the result is untrue. (I'm sure many great mathematicians have analysed the question very carefuly and are quite sure of the answer.) I am saying that it is extremely perplexing. (As a general rule, mathematics has a habit of doing that. Which is one of the reasons why we demand proof of everything.) In particular, can anyone explain precisely why the derivative of $|x|$ at zero is not simply zero? After all, the function is neither increasing nor decreasing, which ought to mean the derivative is zero. Alternatively, the expression $$\frac{|x + a| - |x - a|}{a}$$ becomes closer and closer to zero as $a$ becomes closer to zero when $x=0$. (In fact, it is exactly zero for all $a$!) Is that not how derivatives work? Several answers have suggested that the derivative is not defined here "because there would be a jump in the derivative at that point". This seems to assert that a continuous function must never have a discontinuous derivative; I'm not convinced that this is the case. Can anyone confirm or refuse this argument? - 9 Roughly speaking, a function is differentiable at a point if, the closer you zoom into that point, the more the function resembles a line (the slope of which is the derivative at that point). No matter how far you zoom in on $|x|$ at $x = 0$, it never resembles a line: it always looks like a corner. – Qiaochu Yuan May 3 '12 at 15:14 14 Here is some unasked-for advice: when several independent authorities agree on some fact that seems mistaken to me, it has often been my experience that the mistake is mine, not theirs. Then there is an opportunity to learn something new, which is not well-served by labeling the claim as "nonsensical". In my experience it has been more useful to ask "People say x, but this seems wrong to me because of y; what have I misunderstood?" Sometimes I reject the answer anyway, but usually it turns out that not all these people are dummies, and some of them understand better than I do. – MJD May 3 '12 at 15:29 8 @Mark Dominus's unasked for advice is excellent. As a professional mathematician reading the question, I had a visceral negative reaction to the word "nonsensical" which my conscious brain struggled to dampen. To paraphrase: something which doesn't (yet) make sense to you need not be nonsense. To assume -- even at the level of vocabulary -- otherwise is going to close a lot of doors that would otherwise have remained open. – Pete L. Clark May 3 '12 at 16:37 3 Tough ​​​crowd. – BlueRaja - Danny Pflughoeft May 3 '12 at 20:39 3 @Matt: It's not true that the derivative of any differentiable function is continuous. If it were, we wouldn't need the term "continuously differentiable". 
An example of a differentiable function with discontinuous derivative is $x^2\sin(1/x)$, whose derivative exists everywhere but is discontinuous at $x=0$ (see also here). A derivative need not even be Riemann-integrable; see e.g. Volterra's function. However, a differentiable function is continuous. – joriki May 4 '12 at 12:21 show 8 more comments 8 Answers Let's be clear: continuity and differentiability begin as a concept at a point. That is, we talk about a function being: 1. Defined at a point $a$; 2. Continuous at a point $a$; 3. Differentiable at a point $a$; 4. Continuously differentiable at a point $a$; 5. Twice differentiable at a point $a$; 6. Continuously twice differentiable at a point $a$; and so on, until we get to "analytic at the point $a$" after infinitely many steps. I'll concentrate on the first three and you can ignore the rest; I'm just putting it in a slightly larger context. A function is defined at $a$ if it has a value at $a$. Not every function is defined everywhere: $f(x) = \frac{1}{x}$ is not defined at $0$, $g(x)=\sqrt{x}$ is not defined at negative numbers, etc. Before we can talk about how the function behaves at a point, we need the function to be defined at the point. Now, let us say that the function is defined at $a$. The intuitive notion we want to refer to when we talk about the function being "continuous at $a$" is that the graph does not have any holes, breaks, or jumps at $a$. Now, this is intuitive, and as such it makes it very hard to actually check or test functions, especially when we don't have their graphs. So we need a definition that is mathematical, and that allows for testing and falsification. One such definition, apt for functions of real numbers, is: We say that $f$ is continuous at $a$ if and only if three things happens: 1. $f$ is defined at $a$; and 2. $f$ has a limit as $x$ approaches $a$; and 3. $\lim\limits_{x\to a}f(x) = f(a)$. The first condition guarantees that there are no holes in the graph; the second condition guarantees that there are no jumps at $a$; and the third condition that there are no breaks (e.g., taking a horizontal line and shifting a single point one unit up would be what I call a "break"). Once we have this condition, we can actually test functions. It will turn out that everything we think should be "continuous at $a$" actually is according to this definition, but there are also functions that might seem like they ought not to be "continuous at $a$" under this definition but are. For example, the function $$f(x) = \left\{\begin{array}{ll} 0 & \text{if }x\text{ is a rational number,}\\ x & \text{if }x\text{ is not a rational number.} \end{array}\right.$$ turns out to be continuous at $a=0$ under the definition above, even though it has lots and lots of jumps and breaks. (In fact, it is continuous only at $0$, and nowhere else). Well, too bad. The definition is clear, powerful, usable, and captures the notion of continuity, so we'll just have to let a few undesirables into the club if that's the price for having it. We say a function is continuous (as opposed to "continuous at $a$") if it is continuous at every point where it is defined. We say a function is continuous everywhere if it is continuous at each and every point (in particular, it has to be defined everywhere). 
This is perhaps unfortunate terminology: for instance, $f(x) = \frac{1}{x}$ is not continuous at $0$ (it is not defined at $0$), but it is a continuous function (it is continuous at every point where it is defined), but not continuous everywhere (not continuous at $0$). Well, language is not always logical, we just learn to live with it (witness "flammable" and "inflammable", which mean the same thing). Now, what about differentiability at $a$? We say a function is differentiable at $a$ if the graph has a well-defined tangent at the point $(a,f(a))$ that is not vertical. What is a tangent? A tangent is a line that affords the best possible linear approximation to the function, in such a way that the relative error goes to $0$. That's a mouthful, you can see this explained in more detail here and here. We exclude vertical tangents because the derivative is actually the slope of the tangent at the point, and vertical lines have no slope. Turns out that, intuitively, in order for there to be a tangent at the point, we need the graph to have no holes, no jumps, no breaks, and no sharp corners or "vertical segments". From that intuitive notion, it should be clear that in order to be differentiable at $a$ the function has to be continuous at $a$ (to satisfy the "no holes, no jumps, no breaks"), but it needs more than that. The example of $f(x) = |x|$ is a function that is continuous at $x=0$, but has a sharp corner there; that sharp corner means that you don't have a well-defined tangent at $x=0$. You might think the line $y=0$ is the tangent there, but it turns out that it does not satisfy the condition of being a good approximation to the function, so it's not actually the tangent. There is no tangent at $x=0$. To formalize this we end up using limits: the function has a non-vertical tangent at the point $a$ if and only if $$\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}\text{ exists}.$$ What this does is just saying "there is a line that affords the best linear approximation with a relative error going to $0$." Once you check, it turns out it does capture what we had above in the sense that every function that we think should be differentiable (have a nonvertical tangent) at $a$ will be differentiable under this definition. Again, turns out that it does open the door of the club for functions that might seem like they ought not to be differentiable but are. Again, that's the price of doing business. A function is differentiable if it is differentiable at each point of its domain. It is differentiable everywhere if it is differentiable at every point (in particular, $f$ is defined at every point). Because of the definitions, continuity is a prerequisite for differentiability, but it is not enough. A function may be continuous at $a$, but not differentiable at $a$. In fact, functions can get very wild. In the late 19th century, it was shown that you can have functions that are continuous everywhere, but that do not have a derivative anywhere (they are "really spiky" functions). Hope that helps a bit. Added. You ask about $|x|$ and specifically, about considering $$\frac{|x+a|-|x-a|}{a}$$ as $a\to 0$. I'll first note that you actually want to consider $$\frac{f(x+a)-f(x-a)}{2a}$$ rather than over $a$. To see this, consider the simple example of the function $y=x$, where we want the derivative to be $1$ at every point. 
If we consider the quotient you give, we get $2$ instead: $$\frac{f(x+a)-f(x-a)}{a} = \frac{(x+a)-(x-a)}{a} = \frac{2a}{a} = 2.$$ You really want to divide by $2a$, because that's the distance between the points $x+a$ and $x-a$. The problem is that this is not always a good way of finding the tangent; if there is a well-defined tangent, then the difference $$\frac{f(x+a)-f(x-a)}{2a}$$ will give the correct answer. However, it turns out that there are situations where this gives you an answer, but not the right answer because there is no tangent. Again: the tangent is defined to be the unique line, if one exists, in which the relative error goes to $0$. The only possible candidate for a tangent at $0$ for $f(x) = |x|$ is the line $y=0$, so the question is why this is not the tangent; the answer is that the relative error does not go to $0$. That is, the ratio between how big the error is if you use the line $y=0$ instead of the function (which is the value $|x|-0$) and the size of the input (how far we are from $0$, which is $x$) is always $1$ when $x\gt 0$, $$\frac{|x|-0}{x} = \frac{x}{x} = 1\quad\text{if }x\gt 0,$$ and is always $-1$ when $x\lt 0$: $$\frac{|x|-0}{x} = \frac{-x}{x} = -1\quad\text{if }x\lt 0.$$ That is: this line is not a good approximation to the graph of the function near $0$: even as you get closer and closer and closer to $0$, if you use $y=0$ as an approximation your error continues to be large relative to the input: it's not getting better and better relative to the size of the input. But the tangent is supposed to make the error get smaller and smaller relative to how far we are from $0$ as we get closer and closer to zero. That is, if we use the line $y=mx$, then it must be the case that $$\frac{f(x) - mx}{x}$$ approaches $0$ as $x$ approaches $0$ in order to say that $y=mx$ is "the tangent to the graph of $y=f(x)$ at $x=0$". This is not the case for any value of $m$ when $f(x)=|x|$, so $f(x)=|x|$ does not have a tangent at $0$. The "symmetric difference" that you are using is hiding the fact that the graph of $y=f(x)$ does not flatten out as we approach $0$, even though the line you are using is horizontal all the time. Geometrically, the graph does not get closer and closer to the line as you approach $0$: it's always a pretty bad error. - 2 I know this is slightly pedantic, but you forgot that there exists $C^\infty$ functions that are not $C^\omega$. – kahen May 3 '12 at 17:42 2 @kahen: It's in the "and so on until we get to" (i.e.,. not so much forgot as didn't want to get into it). (-; – Arturo Magidin May 3 '12 at 17:51 1 I see. "Deliberately didn't mention" it is then. Fair enough. – kahen May 3 '12 at 18:10 1 Best answer yet. – MathematicalOrchid May 4 '12 at 8:57 I think it is worth pointing out that some authors allow a function to be differentiable at an accumulation point of the domain even when it is not defined there. – Michael Greinecker May 4 '12 at 9:15 show 1 more comment The derivative, in simple words, is the slope of the function at the point. If you consider $|x|$ at $x > 0$ the slope is clearly $1$ since there $|x| = x$. Similarly, for $x<0$ the slope is $-1$. Thus, if you consider $x = 0$ then you cannot define the slope at that point, i.e. right and left directional derivatives do not agree at $x = 0$. So that's why the function is not differentiable at $x=0$. 
Just to extend a perfect comment by Qiaochu to a more striking example, the sample path of a Brownian motion is continuous but nowhere differentiable: Note also that this curve exhibits self-similarity property, so if you zoom it, it looks the same and never will look any similar to a line. Also, the Brownian motion can be considered as a measure (even a probability distribution) on the space of continuous functions. The set of differentiable functions has this measure zero. So one can say that it is very unlikely that a continuous function is differentiable (I guess, that is what André meant in his comment). - 1 Your graph doesn't look to me like the usual Brownian motion: the variance of $y(x+h) - y(x)$ appears to be increasing as $x$ increases. – Robert Israel May 3 '12 at 17:44 @Robert: I changed the illustration, took it from Wikipedia's article on Wiener process. Hope that it shows that zooming does not lead us to a line. – Ilya May 4 '12 at 9:49 So, to be clear, you're saying that all functions with infinite fine detail lack a derivative? – MathematicalOrchid May 4 '12 at 10:54 @MathematicalOrchid: to be clear, would you tell what do you mean with an infinite fine detail? – Ilya May 4 '12 at 11:01 Touché. :-) OK, well how about this: Any function that possesses fractal self-similarity on all scales. – MathematicalOrchid May 4 '12 at 11:09 show 2 more comments The absolute value function has a derivative everywhere except at $x=0$. The reason there is no derivative at $x=0$ is that if the definition of the derivative is applied from the left, $$\lim_{h\rightarrow0-}\frac{\lvert 0+h\rvert-\lvert0\rvert}{h}=-1,$$ you get a different answer than if it is applied from the right, $$\lim_{h\rightarrow0+}\frac{\lvert 0+h\rvert-\lvert0\rvert}{h}=1.$$ Intuitively, the derivative is the slope of the tangent line, which changes abruptly at $x=0$. The graph of the derivative of $\lvert x\rvert$ would have a jump at $x=0$, and so would be discontinuous there. On the other hand, the absolute value function itself is continuous everywhere, including at $x=0$. This example illustrates the fact that continuity does not imply differentiability. On the other had, differentiability does imply continuity. Intuitively, this is because, in order for the quotient $\dfrac{f(x+h)-f(x)}{h}$ to have a limit as $h\rightarrow0$, we must have $f(x+h)\rightarrow f(x)$ as $h\rightarrow0$, which is the limit definition of continuity. Edit: To respond to your edit, you ask a good question! What you are seeing is that you can get different answers to the question "What is the derivative of $\lvert x\rvert$ at $x=0$?" depending on how you set up the difference quotient before taking the limit: the limit from the left gives $-1$, the limit from the right gives $1$ and a symmetrical quotient gives 0. You can get other answers as well. For example, if we position our small interval around $x=0$ asymmetrically, $$\lim_{h\rightarrow0}\frac{\lvert0+\frac{2}{3}h\rvert-\lvert0-\frac{1}{3}h\rvert}{h}$$ we get $1/3$. If, at a certain point, all of these methods give the same answer, we say that the function is differentiable at that point. In some sense, the definition of derivative is robust at such points, since we can make natural modifications to it and still get the same result. 
On the other hand, if, by fine-tuning the procedure, we can get different answers, then we say that the function is not differentiable at that point - the definition of derivative is not so robust there, since by making natural modifications, we can get different answers. You can probably imagine that at points where we have that robustness, we can prove all sorts of strong statements about the behavior of the function. At points where we don't have it, we can't prove so much. Hence it makes sense to invent a term to capture this distinction between the two types of point. - You got many answers to the general question. I'd like to spend a few words on your quotient $$\frac{|x+a|-|x-a|}{a}.$$ Let's use a more conventional notation: $$\frac{|x+h|-|x-h|}{h}.$$ Now, it is pretty easy to prove that $$\lim_{h \to 0} \frac{|h|-|-h|}{h}=0.$$ You ask why this does not imply that $x \mapsto |x|$ is differentiable at $x=0$. The answer is, on one hand, simple: you did not use the correct definition of derivative :-) On the other hand, "your" definition is used in mathematics, under different names. In general, we can consider $$\lim_{h \to 0} \frac{f(x_0+h)-f(x_0-h)}{2h}, \tag{1}$$ and this limit cooincides with $f'(x_0)$ provided that $f$ is differentiable at $x_0$. However, (1) may exist and yet $f$ is not differentiable at $x_0$. The limit (1) is often called symmetric derivative of $f$ at $x_0$. - Thanks for the info. – MathematicalOrchid May 4 '12 at 8:57 @Siminore: there should be $2h$ instead of only $h$ in the denumerator -- limit of such a quotient is called symmetric derivative. – Damian Sobota May 5 '12 at 12:28 @DamianSobota: you are right, I fixed the misprint. Thank you. – Siminore May 5 '12 at 12:29 The basic concept of a derivative is slope. The derivative gives you the slope at any given point on the graph. So, as Matt hinted at, look at a graph that has a sharp point on it. What is the slope of a point? (There isn't one, it is undefined) Just because it is pointed does not mean that it is discontinuous, but it does mean that it is not differentiable everywhere. So to be differentiable, a function must be both smooth and continuous. Hope that helps - So, the answer to this question really depends on your notion of differentiability. Let us start with the classical notion of differentiability. A function, $f(x)$ is differentiable at a point, $x_0$ if the following limit exists:$lim_{h\rightarrow 0}\frac{f(x+h)-f(x)}{h}$. The other answers give a good explanation of why $|x|$ is not differentiable (in the classical way). That said, their are other notions of differentiability that one may contemplate! For example the function that you mentioned $f(x)=|x|$, one may assign a derivative of 0 when $x=0$, if we use good generalization of derivative. One way to do this is to see that the derivative of $|x|$ is $+/-1$ depending on whether $x$ is greater than or less than zero. But what happens at zero. Well if you write this derivative function (that is undefined at 0 for the moment) as a Fouier series, then evaluate the series at $x=0$, you will get that the "derivative" that you obtain is zero (I am sure their are better ways of doing this such as approximating the +/- function by smooth functions). That said, if we are speaking about non-classical notions of differentiability, one may even differentiate discontinuous functions in the distributional way. 
For instance the "second derivative" again non-classical of $|x|$ is the dirac delta distribution (which is no longer a real valued function but a certain type of limit of real valued functions). Also the derivative mentioned of $|x|$ is also of distributional type. The wikipedia artical on distributional derivatives. http://en.wikipedia.org/wiki/Distribution_%28mathematics%29 - There are two ways Two ways in which a continuous function can fail to be differentiable (assuming it is a function whose input and output are each a real number): • By having a vertical tangent, as in the case of $f(x) = \sqrt[3]{x}$ (the cube-root function), which has a vertical tangent at $x=0$. • By having a "sharp corner" in its graph, as in the case of $f(x)=|x|$, which has a sharp corner at $x=0$. At that point the slope abruptly changes from $-1$ to $+1$. - I think you mean "at least two ways". As written, your answer seems to suggest that every continuous function has a one-sided derivative in $[-\infty,\infty]$...which is certainly not true, as I'm sure you know. – Pete L. Clark May 4 '12 at 12:54 @PeteL.Clark : If one construes "sharp corner" somewhat broadly, I think that covers it. The "sawtooth" function that is nowhere differentiable and everywhere continuous has lots of sharp corners. – Michael Hardy May 4 '12 at 18:26 So you're saying that the function $f(x) = x \sin (1/x)$ for $x \neq 0$ for $x \neq 0$ and $f(0) = 0$ has a "sharp corner" at $x = 0$? If so, what's your definition of "sharp corner"? (I assumed you meant that the left and right handed derivatives both exist but are unequal, because that's what's happening in the example you give.) – Pete L. Clark May 4 '12 at 19:01 OK, I've rephrased it, since I'd rather not get into what the term "sharp corner" ought to mean. – Michael Hardy May 4 '12 at 21:44 Okay, then: +1. – Pete L. Clark May 4 '12 at 22:09 Here is a function which can be written explicitly, in a simple form (i.e. not in terms of Brownian Motion). Take the following: $$f(x) = \sum_{n=0}^\infty \alpha^n \cos(\beta^n\pi x)$$ where $\alpha \in (0,1)$, $\alpha \beta \geq 1$. This is an example of a function which is continuous everywhere, but differentiable nowhere. To prove continuity everywhere, the partial sums are continuous (being a finite sum of continuous functions). From this, prove that the series is uniformly convergent, and then you can prove that the function (being the limit of the partial sums) is the uniform limit of a sequence of continuous functions, and is hence continuous. (Use the Weierstrass-M test). To prove that $f$ is nowhere differentiable is a bit more complicated, but to prove it, you could prove explicitly that the limit in the definition of the derivative does not exist, which is one of the most direct proofs. Another possible proof arises from Fourier Analysis, and roughly goes as follows. The function is expressed explicitly as a Fourier cosine series, which is uniformly convergent. Differentiate the partial sums termwise, and prove that the limit of the partial sums doesn't exist. There are details left out here, but this is another approach. This function is actually an extension of the original construction by Weierstrass, and the desired properties of this function were established by G.H. Hardy. One of the comments alluded to the fact that in some well-defined sense, almost every continuous function is nowhere differentiable. 
If we restrict ourselves to functions which are continuous on the compact interval $[0,1]$, this is in the sense of (classical) Wiener measure, but that is likely well beyond the scope of this question. (See this.) Another example of a continuous but nowhere differentiable function is the Blancmange function. There's another interesting example, but this one might be even harder to justify than the Weierstrass function. The Devil's Staircase (i.e. the Cantor-Lebesgue function) is a function which is continuous, but is not differentiable at any point in the Cantor set. Further, the derivative is zero wherever it is defined. This function can actually be further generalized to create a strictly monotone continuous function whose derivative exists almost everywhere, and whose derivative is zero where defined. (The set of points where the derivative is not defined contains the Cantor set.) -
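For readers who want to look at the Weierstrass-type series described a couple of answers above, here is a rough R sketch (my own addition, with the sample parameter choices $\alpha = 0.5$, $\beta = 3$, which satisfy $\alpha\beta \ge 1$); increasing n.max adds finer and finer oscillations, which is what destroys differentiability in the limit.
```
## Partial sums of f(x) = sum_{n>=0} alpha^n * cos(beta^n * pi * x),
## with alpha in (0,1) and alpha * beta >= 1 (Hardy's condition).
weier <- function(x, alpha = 0.5, beta = 3, n.max = 30){
  rowSums(sapply(0:n.max, function(n) alpha^n * cos(beta^n * pi * x)))
}
x <- seq(-2, 2, length.out = 2000)
plot(x, weier(x), type = "l",
     main = "Partial sum of a Weierstrass-type series")
```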
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 167, "mathjax_display_tex": 18, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.960728645324707, "perplexity_flag": "head"}
http://www.euro-math-soc.eu/jemsaccess/books/book.php?proj_nr=146&titleindex=1
# Book Details

EMS Monographs in Mathematics

Joachim Krieger (EPFL Lausanne, Switzerland)
Wilhelm Schlag (University of Chicago, USA)

#### Concentration Compactness for Critical Wave Maps

ISBN 978-3-03719-106-4
DOI 10.4171/106
February 2012, 490 pages, hardcover, 16.5 x 23.5 cm.
88.00 Euro

Wave maps are the simplest wave equations taking their values in a Riemannian manifold $(M,g)$. Their Lagrangian is the same as for the scalar equation, the only difference being that lengths are measured with respect to the metric $g$. By Noether's theorem, symmetries of the Lagrangian imply conservation laws for wave maps, such as conservation of energy. In coordinates, wave maps are given by a system of semilinear wave equations. Over the past 20 years important methods have emerged which address the problem of local and global wellposedness of this system. Due to weak dispersive effects, wave maps defined on Minkowski spaces of low dimensions, such as $\mathbb R^{2+1}_{t,x}$, present particular technical difficulties. This class of wave maps has the additional important feature of being energy critical, which refers to the fact that the energy scales exactly like the equation.

Around 2000 Daniel Tataru and Terence Tao, building on earlier work of Klainerman–Machedon, proved that smooth data of small energy lead to global smooth solutions for wave maps from 2+1 dimensions into target manifolds satisfying some natural conditions. In contrast, for large data, singularities may occur in finite time for $M =\mathbb S^2$ as target. This monograph establishes that for $\mathbb H$ as target the wave map evolution of any smooth data exists globally as a smooth function. While we restrict ourselves to the hyperbolic plane as target, the implementation of the concentration-compactness method, the most challenging piece of this exposition, yields more detailed information on the solution. This monograph will be of interest to experts in nonlinear dispersive equations, in particular to those working on geometric evolution equations.

#### Further Information

Review in Zentralblatt MATH 06004782
Review in MR 2895939
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8721268177032471, "perplexity_flag": "head"}
http://www.openwetware.org/index.php?title=User:Timothee_Flutre/Notebook/Postdoc/2011/11/10&curid=109436&diff=665463&oldid=658224
# User:Timothee Flutre/Notebook/Postdoc/2011/11/10

### From OpenWetWare

## Revision as of 19:01, 3 January 2013

## Bayesian model of univariate linear regression for QTL detection

This page aims at helping people like me, interested in quantitative genetics, to get a better understanding of some Bayesian models, most importantly the impact of the modeling assumptions as well as the underlying maths. It starts with a simple model, and gradually increases the scope to relax assumptions. See references to scientific articles at the end.

• Data: let's assume that we obtained data from N individuals. We note $y_1,\ldots,y_N$ the (quantitative) phenotypes (e.g. expression levels at a given gene), and $g_1,\ldots,g_N$ the genotypes at a given SNP (encoded as allele dose: 0, 1 or 2).

• Goal: we want to assess the evidence in the data for an effect of the genotype on the phenotype.

• Assumptions: the relationship between genotype and phenotype is linear; the individuals are not genetically related; there are no hidden confounding factors in the phenotypes.

• Likelihood: we start by writing the usual linear regression for one individual $\forall i \in \{1,\ldots,N\}, \; y_i = \mu + \beta_1 g_i + \beta_2 \mathbf{1}_{g_i=1} + \epsilon_i \; \text{ with } \; \epsilon_i \; \overset{i.i.d}{\sim} \; \mathcal{N}(0,\tau^{-1})$ where $\beta_1$ is in fact the additive effect of the SNP, noted $a$ from now on, and $\beta_2$ is the dominance effect of the SNP, $d = ak$. Let's now write the model in matrix notation: $Y = X B + E \text{ where } B = [ \mu \; a \; d ]^T$ This gives the following multivariate Normal distribution for the phenotypes: $Y | X, \tau, B \sim \mathcal{N}(XB, \tau^{-1} I_N)$ Even though we can write the likelihood as a multivariate Normal, I still keep the term "univariate" in the title because the regression has a single response, Y. It is usual to keep the term "multivariate" for the case where there is a matrix of responses (i.e. multiple phenotypes).
The likelihood of the parameters given the data is therefore: $\mathcal{L}(\tau, B) = \mathsf{P}(Y | X, \tau, B)$ $\mathcal{L}(\tau, B) = \left(\frac{\tau}{2 \pi}\right)^{\frac{N}{2}} exp \left( -\frac{\tau}{2} (Y - XB)^T (Y - XB) \right)$ • Priors: we use the usual conjugate prior $\mathsf{P}(\tau, B) = \mathsf{P}(\tau) \mathsf{P}(B | \tau)$ A Gamma distribution for τ: $\tau \sim \Gamma(\kappa/2, \, \lambda/2)$ which means: $\mathsf{P}(\tau) = \frac{\frac{\lambda}{2}^{\kappa/2}}{\Gamma(\frac{\kappa}{2})} \tau^{\frac{\kappa}{2}-1} e^{-\frac{\lambda}{2} \tau}$ And a multivariate Normal distribution for B: $B | \tau \sim \mathcal{N}(\vec{0}, \, \tau^{-1} \Sigma_B) \text{ with } \Sigma_B = diag(\sigma_{\mu}^2, \sigma_a^2, \sigma_d^2)$ which means: $\mathsf{P}(B | \tau) = \left(\frac{\tau}{2 \pi}\right)^{\frac{3}{2}} |\Sigma_B|^{-\frac{1}{2}} exp \left(-\frac{\tau}{2} B^T \Sigma_B^{-1} B \right)$ • Joint posterior (1): $\mathsf{P}(\tau, B | Y, X) = \mathsf{P}(\tau | Y, X) \mathsf{P}(B | Y, X, \tau)$ • Conditional posterior of B: $\mathsf{P}(B | Y, X, \tau) = \frac{\mathsf{P}(B, Y | X, \tau)}{\mathsf{P}(Y | X, \tau)}$ Let's neglect the normalization constant for now: $\mathsf{P}(B | Y, X, \tau) \propto \mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B)$ Similarly, let's keep only the terms in B for the moment: $\mathsf{P}(B | Y, X, \tau) \propto exp(B^T \Sigma_B^{-1} B) exp((Y-XB)^T(Y-XB))$ We expand: $\mathsf{P}(B | Y, X, \tau) \propto exp(B^T \Sigma_B^{-1} B - Y^TXB -B^TX^TY + B^TX^TXB)$ We factorize some terms: $\mathsf{P}(B | Y, X, \tau) \propto exp(B^T (\Sigma_B^{-1} + X^TX) B - Y^TXB -B^TX^TY)$ Importantly, let's define: $\Omega = (\Sigma_B^{-1} + X^TX)^{-1}$ We can see that ΩT = Ω, which means that Ω is a symmetric matrix. This is particularly useful here because we can use the following equality: Ω − 1ΩT = I. $\mathsf{P}(B | Y, X, \tau) \propto exp(B^T \Omega^{-1} B - (X^TY)^T\Omega^{-1}\Omega^TB -B^T\Omega^{-1}\Omega^TX^TY)$ This now becomes easy to factorizes totally: $\mathsf{P}(B | Y, X, \tau) \propto exp((B^T - \Omega X^TY)^T\Omega^{-1}(B - \Omega X^TY))$ We recognize the kernel of a Normal distribution, allowing us to write the conditional posterior as: $B | Y, X, \tau \sim \mathcal{N}(\Omega X^TY, \tau^{-1} \Omega)$ • Posterior of τ: Similarly to the equations above: $\mathsf{P}(\tau | Y, X) \propto \mathsf{P}(\tau) \mathsf{P}(Y | X, \tau)$ But now, to handle the second term, we need to integrate over B, thus effectively taking into account the uncertainty in B: $\mathsf{P}(\tau | Y, X) \propto \mathsf{P}(\tau) \int \mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B) \mathsf{d}B$ Again, we use the priors and likelihoods specified above (but everything inside the integral is kept inside it, even if it doesn't depend on B!): $\mathsf{P}(\tau | Y, X) \propto \tau^{\frac{\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} \int \tau^{3/2} \tau^{N/2} exp(-\frac{\tau}{2} B^T \Sigma_B^{-1} B) exp(-\frac{\tau}{2} (Y - XB)^T (Y - XB)) \mathsf{d}B$ As we used a conjugate prior for τ, we know that we expect a Gamma distribution for the posterior. Therefore, we can take τN / 2 out of the integral and start guessing what looks like a Gamma distribution. We also factorize inside the exponential: $\mathsf{P}(\tau | Y, X) \propto \tau^{\frac{N+\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} \int \tau^{3/2} exp \left[-\frac{\tau}{2} \left( (B - \Omega X^T Y)^T \Omega^{-1} (B - \Omega X^T Y) - Y^T X \Omega X^T Y + Y^T Y \right) \right] \mathsf{d}B$ We recognize the conditional posterior of B. 
This allows us to use the fact that the pdf of the Normal distribution integrates to one: $\mathsf{P}(\tau | Y, X) \propto \tau^{\frac{N+\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} exp\left[-\frac{\tau}{2} (Y^T Y - Y^T X \Omega X^T Y) \right]$ We finally recognize a Gamma distribution, allowing us to write the posterior as: $\tau | Y, X \sim \Gamma \left( \frac{N+\kappa}{2}, \; \frac{1}{2} (Y^T Y - Y^T X \Omega X^T Y + \lambda) \right)$ • Joint posterior (2): sometimes it is said that the joint posterior follows a Normal Inverse Gamma distribution: $B, \tau | Y, X \sim \mathcal{N}IG(\Omega X^TY, \; \tau^{-1}\Omega, \; \frac{N+\kappa}{2}, \; \frac{\lambda^\ast}{2})$ where $\lambda^\ast = Y^T Y - Y^T X \Omega X^T Y + \lambda$ • Marginal posterior of B: we can now integrate out τ: $\mathsf{P}(B | Y, X) = \int \mathsf{P}(\tau) \mathsf{P}(B | Y, X, \tau) \mathsf{d}\tau$ $\mathsf{P}(B | Y, X) = \frac{\frac{\lambda^\ast}{2}^{\frac{N+\kappa}{2}}}{(2\pi)^\frac{3}{2} |\Omega|^{\frac{1}{2}} \Gamma(\frac{N+\kappa}{2})} \int \tau^{\frac{N+\kappa+3}{2}-1} exp \left[-\tau \left( \frac{\lambda^\ast}{2} + (B - \Omega X^TY)^T \Omega^{-1} (B - \Omega X^TY) \right) \right] \mathsf{d}\tau$ Here we recognize the formula to integrate the Gamma function: $\mathsf{P}(B | Y, X) = \frac{\frac{\lambda^\ast}{2}^{\frac{N+\kappa}{2}} \Gamma(\frac{N+\kappa+3}{2})}{(2\pi)^\frac{3}{2} |\Omega|^{\frac{1}{2}} \Gamma(\frac{N+\kappa}{2})} \left( \frac{\lambda^\ast}{2} + (B - \Omega X^TY)^T \Omega^{-1} (B - \Omega X^TY) \right)^{-\frac{N+\kappa+3}{2}}$ And we now recognize a multivariate Student's t-distribution: $\mathsf{P}(B | Y, X) = \frac{\Gamma(\frac{N+\kappa+3}{2})}{\Gamma(\frac{N+\kappa}{2}) \pi^\frac{3}{2} |\lambda^\ast \Omega|^{\frac{1}{2}} } \left( 1 + \frac{(B - \Omega X^TY)^T \Omega^{-1} (B - \Omega X^TY)}{\lambda^\ast} \right)^{-\frac{N+\kappa+3}{2}}$ We hence can write: $B | Y, X \sim \mathcal{S}_{N+\kappa}(\Omega X^TY, \; (Y^T Y - Y^T X \Omega X^T Y + \lambda) \Omega)$ • Bayes Factor: one way to answer our goal above ("is there an effect of the genotype on the phenotype?") is to do hypothesis testing. We want to test the following null hypothesis: $H_0: \; a = d = 0$ In Bayesian modeling, hypothesis testing is performed with a Bayes factor, which in our case can be written as: $\mathrm{BF} = \frac{\mathsf{P}(Y | X, a \neq 0, d \neq 0)}{\mathsf{P}(Y | X, a = 0, d = 0)}$ We can shorten this into: $\mathrm{BF} = \frac{\mathsf{P}(Y | X)}{\mathsf{P}_0(Y)}$ Note that, compare to frequentist hypothesis testing which focuses on the null, the Bayes factor requires to explicitly model the data under the alternative. This makes a big difference when interpreting the results (see below). 
$\mathsf{P}(Y | X) = \int \mathsf{P}(\tau) \mathsf{P}(Y | X, \tau) \mathsf{d}\tau$ First, let's calculate what is inside the integral: $\mathsf{P}(Y | X, \tau) = \frac{\mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B)}{\mathsf{P}(B | Y, X, \tau)}$ Using the formula obtained previously and doing some algebra gives: $\mathsf{P}(Y | X, \tau) = \left( \frac{\tau}{2 \pi} \right)^{\frac{N}{2}} \left( \frac{|\Omega|}{|\Sigma_B|} \right)^{\frac{1}{2}} exp\left( -\frac{\tau}{2} (Y^TY - Y^TX\Omega X^TY) \right)$ Now we can integrate out τ (note the small typo in equation 9 of supplementary text S1 of Servin & Stephens): $\mathsf{P}(Y | X) = (2\pi)^{-\frac{N}{2}} \left( \frac{|\Omega|}{|\Sigma_B|} \right)^{\frac{1}{2}} \frac{\frac{\lambda}{2}^{\frac{\kappa}{2}}}{\Gamma(\frac{\kappa}{2})} \int \tau^{\frac{N+\kappa}{2}-1} exp \left( -\frac{\tau}{2} (Y^TY - Y^TX\Omega X^TY + \lambda) \right)$ Inside the integral, we recognize the almost-complete pdf of a Gamma distribution. As it has to integrate to one, we get: $\mathsf{P}(Y | X) = (2\pi)^{-\frac{N}{2}} \left( \frac{|\Omega|}{|\Sigma_B|} \right)^{\frac{1}{2}} \left( \frac{\lambda}{2} \right)^{\frac{\kappa}{2}} \frac{\Gamma(\frac{N+\kappa}{2})}{\Gamma(\frac{\kappa}{2})} \left( \frac{Y^TY - Y^TX\Omega X^TY + \lambda}{2} \right)^{-\frac{N+\kappa}{2}}$ We can use this expression also under the null. In this case, as we need neither a nor d, B is simply μ, ΣB is $\sigma_{\mu}^2$ and X is a vector of 1's. We can also defines $\Omega_0 = ((\sigma_{\mu}^2)^{-1} + N)^{-1}$. In the end, this gives: $\mathsf{P}_0(Y) = (2\pi)^{-\frac{N}{2}} \frac{|\Omega_0|^{\frac{1}{2}}}{\sigma_{\mu}} \left( \frac{\lambda}{2} \right)^{\frac{\kappa}{2}} \frac{\Gamma(\frac{N+\kappa}{2})}{\Gamma(\frac{\kappa}{2})} \left( \frac{Y^TY - \Omega_0 N^2 \bar{Y}^2 + \lambda}{2} \right)^{-\frac{N+\kappa}{2}}$ We can therefore write the Bayes factor: $\mathrm{BF} = \left( \frac{|\Omega|}{\Omega_0} \right)^{\frac{1}{2}} \frac{1}{\sigma_a \sigma_d} \left( \frac{Y^TY - Y^TX\Omega X^TY + \lambda}{Y^TY - \Omega_0 N^2 \bar{Y}^2 + \lambda} \right)^{-\frac{N+\kappa}{2}}$ When the Bayes factor is large, we say that there is enough evidence in the data to support the alternative. Indeed, the Bayesian testing procedure corresponds to measuring support for the specific alternative hypothesis compared to the null hypothesis. Importantly, note that, for a frequentist testing procedure, we would say that there is enough evidence in the data to reject the null. However we wouldn't say anything about the alternative as we don't model it. The threshold to say that a Bayes factor is large depends on the field. It is possible to use the Bayes factor as a test statistic when doing permutation testing, and then control the false discovery rate. This can give an idea of a reasonable threshold. • Hyperparameters: the model has 5 hyperparameters, $\{\kappa, \, \lambda, \, \sigma_{\mu}, \, \sigma_a, \, \sigma_d\}$. How should we choose them? Such a question is never easy to answer. But note that all hyperparameters are not that important, especially in typical quantitative genetics applications. For instance, we are mostly interested in those that determine the magnitude of the effects, σa and σd, so let's deal with the others first. As explained in Servin & Stephens, the posteriors for τ and B change appropriately with shifts (y + c) and scaling ($y \times c$) in the phenotype when taking their limits. 
This also gives us a new Bayes factor, the one used in practice (see Guan & Stephens, 2008): $\mathrm{lim}_{\sigma_{\mu} \rightarrow \infty \; ; \; \lambda \rightarrow 0 \; ; \; \kappa \rightarrow 0 } \; \mathrm{BF} = \left( \frac{N}{|\Sigma_B^{-1} + X^TX|} \right)^{\frac{1}{2}} \frac{1}{\sigma_a \sigma_d} \left( \frac{Y^TY - Y^TX (\Sigma_B^{-1} + X^TX)^{-1} X^TY}{Y^TY - N \bar{Y}^2} \right)^{-\frac{N}{2}}$ Now, for the important hyperparameters, σa and σd, it is usual to specify a grid of values, i.e. M pairs (σa,σd). For instance, Guan & Stephens used the following grid: $M=4 \; ; \; \sigma_a \in \{0.05, 0.1, 0.2, 0.4\} \; ; \; \sigma_d = \frac{\sigma_a}{4}$ Then, we can average the Bayes factors obtained over the grid using, as a first approximation, equal weights: $\mathrm{BF} = \sum_{m \, \in \, \text{grid}} \frac{1}{M} \, \mathrm{BF}(\sigma_a^{(m)}, \sigma_d^{(m)})$ In eQTL studies, the weights can be estimated from the data using a hierarchical model (see below), by pooling all genes together as in Veyrieras et al (PLoS Genetics, 2010). • Implementation: the following R function is adapted from Servin & Stephens supplementary text 1. ```BF <- function(G=NULL, Y=NULL, sigma.a=NULL, sigma.d=NULL, get.log10=TRUE){ stopifnot(! is.null(G), ! is.null(Y), ! is.null(sigma.a), ! is.null(sigma.d)) subset <- complete.cases(Y) & complete.cases(G) Y <- Y[subset] G <- G[subset] stopifnot(length(Y) == length(G)) N <- length(G) X <- cbind(rep(1,N), G, G == 1) inv.Sigma.B <- diag(c(0, 1/sigma.a^2, 1/sigma.d^2)) inv.Omega <- inv.Sigma.B + t(X) %*% X inv.Omega0 <- N tY.Y <- t(Y) %*% Y log10.BF <- as.numeric(0.5 * log10(inv.Omega0) - 0.5 * log10(det(inv.Omega)) - log10(sigma.a) - log10(sigma.d) - (N/2) * (log10(tY.Y - t(Y) %*% X %*% solve(inv.Omega) %*% t(X) %*% cbind(Y)) - log10(tY.Y - N*mean(Y)^2))) if(get.log10) return(log10.BF) else return(10^log10.BF) } ``` In the same vein as what is explained here, we can simulate data under different scenarios and check the BFs: ```N <- 300 # play with it PVE <- 0.1 # play with it grid <- c(0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2) MAF <- 0.3 G <- rbinom(n=N, size=2, prob=MAF) tau <- 1 a <- sqrt((2/5) * (PVE / (tau * MAF * (1-MAF) * (1-PVE)))) d <- a / 2 mu <- rnorm(n=1, mean=0, sd=10) Y <- mu + a * G + d * (G == 1) + rnorm(n=N, mean=0, sd=tau) for(m in 1:length(grid)) print(BF(G, Y, grid[m], grid[m]/4)) ``` • Binary phenotype: using a similar notation, we model case-control studies with a logistic regression where the probability to be a case is $\mathsf{P}(y_i = 1) = p_i$. There are many equivalent ways to write the likelihood, the usual one being: $y_i \; \overset{i.i.d}{\sim} \; Bernoulli(p_i) \; \text{ with } \; \mathrm{ln} \frac{p_i}{1 - p_i} = \mu + a \, g_i + d \, \mathbf{1}_{g_i=1}$ Using Xi to denote the i-th row of the design matrix X and keeping the same definition as above for B, we have: $p_i = \frac{e^{X_i^TB}}{1 + e^{X_i^TB}}$ As the yi's can only take 0 and 1 as values, the likelihood can be written as: $\mathcal{L}(B) = \prod_{i=1}^N p_i^{y_i} (1-p_i)^{1-y_i}$ We still use the same prior as above for B (but there is no τ anymore) and the Bayes factor now is: $\mathrm{BF} = \frac{\int \mathsf{P}(B) \mathsf{P}(Y | X, B) \mathrm{d}B}{\int \mathsf{P}(\mu) \mathsf{P}(Y | X, \mu) \mathrm{d}\mu}$ The interesting point here is that there is no way to calculate these integrals analytically. Therefore, we will use Laplace's method to approximate them, as in Guan & Stephens (2008). 
$\mathsf{P} (Y|X) = \int \exp \left( \mathrm{ln} \, \mathsf{P}(B) + \mathrm{ln} \, \mathsf{P}(Y | X, B) \right) \mathsf{d}B$

$\mathsf{P} (Y|X) = \int \exp \left[ \mathrm{ln} \left( (2 \pi)^{-\frac{3}{2}} \, \frac{1}{\sigma_\mu \sigma_a \sigma_d} \, \exp\left( -\frac{1}{2} (\frac{\mu^2}{\sigma_\mu^2} + \frac{a^2}{\sigma_a^2} + \frac{d^2}{\sigma_d^2}) \right) \right) + \sum_{i=1}^N y_i \, \mathrm{ln} (p_i) + \sum_{i=1}^N (1-y_i) \, \mathrm{ln} (1-p_i) \right] \mathsf{d}B$

Using $f$ to denote the function inside the exponential:

$\mathsf{P} (Y|X) = \int \exp \left( f(B) \right) \mathsf{d}B$

First we need to calculate the first derivatives of $f$, by noting that they all have a very similar form:

$\frac{\partial f}{\partial x} = - \frac{x}{\sigma_x^2} + \sum_{i=1}^N \left(\frac{y_i}{p_i} - \frac{1-y_i}{1-p_i} \right) \frac{\partial p_i}{\partial x}$

where $x$ is $\mu$, $a$ or $d$. Also, there is a simple form for the first derivatives of $p_i$:

$\frac{\partial p_i}{\partial x} = \frac{ (\frac{\partial X_i^TB}{\partial x} e^{X_i^TB}) (1+e^{X_i^TB}) - e^{X_i^TB} (\frac{\partial X_i^TB}{\partial x} e^{X_i^TB})}{(1+e^{X_i^TB})^2} = \frac{p_i^2}{e^{X_i^TB}} \frac{\partial X_i^TB}{\partial x}$

where $\frac{\partial X_i^TB}{\partial x}$ is $1$ for $\mu$, $g_i$ for $a$, and $\mathbf{1}_{g_i=1}$ for $d$. This gives us one equation per parameter:

$\frac{\partial f}{\partial \mu} = - \frac{\mu}{\sigma_\mu^2} + \sum_{i=1}^N \left(\frac{y_i}{p_i} - \frac{1-y_i}{1-p_i} \right)$

$\frac{\partial f}{\partial a} = - \frac{a}{\sigma_a^2} + \sum_{i=1}^N \left(\frac{y_i}{p_i} - \frac{1-y_i}{1-p_i} \right) g_i$

$\frac{\partial f}{\partial d} = - \frac{d}{\sigma_d^2} + \sum_{i=1}^N \left(\frac{y_i}{p_i} - \frac{1-y_i}{1-p_i} \right) \mathbf{1}_{g_i=1}$

Second, we need to calculate the second derivatives of $f$: ...

The second derivatives of $f$ are strictly negative. Therefore, $f$ is globally concave and hence has a unique global maximum, at $B^\star = [\mu^\star a^\star d^\star]^T$. As a consequence, we have the right to use Laplace's method to approximate each integral of the Bayes factor around its respective maximum.

Finding the maxima: iterative procedure or generic solver -> to do

Implementation: in R -> to do

Finding the effect sizes and their standard errors: to do

• Link between Bayes factor and P-value: see Wakefield (2008) to do

• Hierarchical model: pooling genes, learn weights for grid and genomic annotations, see Veyrieras et al (PLoS Genetics, 2010) to do

• Multiple SNPs with LD: joint analysis of multiple SNPs, handle correlation between them, see Guan & Stephens (Annals of Applied Statistics, 2011) for MCMC, see Carbonetto & Stephens (Bayesian Analysis, 2012) for Variational Bayes to do

• Confounding factors in phenotype: factor analysis, see Stegle et al (PLoS Computational Biology, 2010) to do

• Genetic relatedness: linear mixed model, see Zhou & Stephens (Nature Genetics, 2012) to do

• Discrete phenotype: count data as from RNA-seq, Poisson-like likelihood, see Sun (Biometrics, 2012) to do

• Multiple phenotypes: matrix-variate distributions, tensors to do

• Non-independent genes: enrichment in known pathways, learn "modules" to do

• References: • Servin & Stephens (PLoS Genetics, 2007) • Guan & Stephens (PLoS Genetics, 2008) • Stephens & Balding (Nature Reviews Genetics, 2009)
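As one possible way to address the "finding the maxima" and "implementation in R" items above, here is a hedged sketch of my own (not from the original notebook): it maximizes $f(B)$ with a generic optimizer instead of a hand-rolled iterative procedure, and then applies Laplace's approximation $\mathrm{ln}\,\mathsf{P}(Y|X) \approx f(B^\star) + \frac{\mathrm{dim}}{2}\mathrm{ln}(2\pi) - \frac{1}{2}\mathrm{ln}\,\mathrm{det}(-\mathrm{Hess}\,f(B^\star))$ under both hypotheses. The function name, defaults, and the use of `optim` are my own choices, not the author's implementation.
```
## Hypothetical sketch of the Bayes factor for the logistic model via Laplace's method.
## Y: 0/1 case-control vector; G: allele-dose vector; sigma.* : prior standard deviations.
BF.logistic.laplace <- function(G, Y, sigma.mu=1, sigma.a=0.2, sigma.d=0.05){
  N <- length(Y)
  X1 <- cbind(rep(1,N), G, as.numeric(G == 1))   # design matrix under the alternative
  X0 <- matrix(1, nrow=N, ncol=1)                # design matrix under the null (mu only)
  log.marg.lik <- function(X, prior.sd){
    f <- function(B){                            # f(B) = log prior + log likelihood
      p <- 1 / (1 + exp(- as.vector(X %*% B)))
      p <- pmin(pmax(p, 1e-12), 1 - 1e-12)       # guard against log(0)
      sum(dnorm(B, mean=0, sd=prior.sd, log=TRUE)) +
        sum(Y * log(p) + (1 - Y) * log(1 - p))
    }
    opt <- optim(par=rep(0, ncol(X)), fn=function(B) -f(B),
                 method="BFGS", hessian=TRUE)
    ## opt$hessian approximates -(Hessian of f) at the mode; f is concave, so it is
    ## positive definite and the Laplace approximation around the unique mode applies.
    as.numeric(f(opt$par) + 0.5 * ncol(X) * log(2*pi) -
               0.5 * determinant(opt$hessian, logarithm=TRUE)$modulus)
  }
  ## log10 Bayes factor = difference of log marginal likelihoods, converted to base 10
  (log.marg.lik(X1, c(sigma.mu, sigma.a, sigma.d)) - log.marg.lik(X0, sigma.mu)) / log(10)
}
```
One would still want to check the mode found by the generic solver against a Newton-Raphson iteration, and to average the resulting Bayes factors over a grid of $(\sigma_a, \sigma_d)$ values, as in the linear case above.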
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 68, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8228769302368164, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/05/08/algebras/?like=1&_wpnonce=cb13afa090
# The Unapologetic Mathematician ## Algebras We have defined a ring as a $\mathbb{Z}$-module (abelian group) $R$ with a linear function $R\otimes R\rightarrow R$ satisfying certain properties. The concept of an algebra takes this definition and extends it to work over more general base rings than $\mathbb{Z}$. Let $A$ be a module over a commutative ring $R$ with unit. Then $A$ has both a left and a right action by $R$, since $R$ is commutative. Thus, when we take the tensor product $A\otimes_RA$, the result is also an $R$ module. It makes sense, then, to talk about an $R$-module homomorphism $A\otimes_RA\rightarrow A$. Equivalently, this is a “multiplication” function $m:A\times A\rightarrow A$ such that • $m(a,b_1+b_2)=m(a,b_1)+m(a,b_2)$ • $m(a_1+a_2,b)=m(a_1,b)+m(a_2,b)$ • $m(ra,b)=m(a,rb)=rm(a,b)$ An $R$-module equipped with such a multiplication is called an $R$-algebra. We will often write the multiplication as $m(a,b)=ab$. In many cases of interest, the base ring will be a field $\mathbb{F}$, but any ring is an algebra over $\mathbb{Z}$. Usually the term “algebra” on its own will refer to an associative algebra. This imposes an additional condition like the one we had in the definition of a ring: $(ab)c=a(bc)$. An algebra may also have a unit $1$ so that $1a=a=a1$ for all $a\in A$. Algebras can also be commutative if $ab=ba$ for all elements $a,b\in A$. There are other kinds of algebras we’ll get to later that are not associative. Pretty much everything I’ve said about rings works for associative algebras as well, substituting “$R$-module” for “abelian group”. An $R$-module $M$ is a left $A$-module if there is an $R$-linear function $A\otimes_RM\rightarrow M$, and a similar definition works for right $A$-modules. We can take direct sums and tensor products of $A$-modules, and we have an $R$-module of homomorphisms $\hom_A(M_1,M_2)$. All these constructions are clear from what we’ve said about modules over rings if we consider that $A$ is a ring, and that an $A$-module is an abelian group with actions of both $R$ and $A$ which commute with each other. The standard constructions of rings also work for algebras. In particular, we can start with an $R$-module $M$ and build the free $R$-algebra on $M$ like we built the free ring on an abelian group. Just use $\bigoplus_{n\in\mathbb{N}}M^{\otimes_R n}$, where the tensor powers over $R$ make sense because $R$ is commutative. We can also start with any semigroup $S$ and build the semigroup algebra $R[S]$ just like we did for the semigroup ring $\mathbb{Z}[S]$. As a special case, we can take $S$ to be the free commutative monoid on $n$ generators and get the algebra $R[x_1,...,x_n]$ of polynomials in $n$ variables over $R$. In fact, almost all of “high school algebra” is really about studying the algebra $\mathbb{Q}[x_1,...,x_n]$, where $\mathbb{Q}$ is the field of rational numbers I’m almost ready to define. Another source of $R$-algebras extends the notion of the ring of endomorphisms. If $M$ is any $R$-module, then ${\rm End}_R(M)=\hom_R(M,M)$ is again an $R$-module, and composition is $R$-bilinear, making this into an $R$-algebra. Algebras over more general commutative rings than $\mathbb{Z}$ — particularly over fields — are extremely useful objects of study mostly because the linear substrate can often be much simpler. 
Building everything on abelian groups can get complicated because abelian groups can be complicated, but building everything on vector spaces over a field is generally pretty straightforward since vector spaces and their linear transformations are so simple.

Posted by John Armstrong | Ring theory
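As a small worked illustration of the semigroup algebra construction mentioned in the post (my own added example, not part of the original): multiplication in $R[S]$ extends the multiplication of $S$ bilinearly, so for finitely supported formal sums

$\left(\sum_{s\in S} a_s s\right)\left(\sum_{t\in S} b_t t\right) = \sum_{u\in S}\Bigl(\sum_{st=u} a_s b_t\Bigr)u,$

and when $S$ is the free commutative monoid on one generator $x$ this is just ordinary polynomial multiplication in $R[x]$, for instance $(1+2x)(3+x) = 3 + 7x + 2x^2$.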
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 67, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9317700862884521, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/25241/proof-there-is-a-1-1-correspondence-between-an-uncountable-set-and-itself-minus
# Proof there is a 1-1 correspondence between an uncountable set and itself minus a countable part of it Problem statement: Let A be an uncountable set, B a countable subset of A, and C the complement of B in A. Prove that there exists a one-to-one correspondence between A and C. My thoughts: There's a bijection between A and A (the identity function). There's a bijection between C and C (the identity function). There's a bijection between B and $\mathbb{N}$. That's all I know. - I take it you don't have cardinal arithmetic on hand yet? – Arturo Magidin Mar 6 '11 at 3:07 Cardinal numbers are in the next chapter, so I guess not. This chapter proves that a countable union of countable sets is countable, $\mathbb{N}$ is countable, $\mathbb{Q}$ is countable, the algebraic numbers are countable, and $\mathbb{R}$ is not countable. – Matt Gregory Mar 6 '11 at 3:08 ## 1 Answer Since $A\setminus B$ is uncountable, assuming countable choice it has a countably infinite subset $B'$. Then $B'\cup B$ is countable, so there is a bijection $g:B'\cup B\to B'$. Define $f:A\to A\setminus B$ by $f\vert_{B'\cup B}=g$ and $f(a)=a$ for $a\in A\setminus(B'\cup B)$. Basically, you just take chunks off of $A$ and $A\setminus B$ that have equal size, respectively $B'\cup B$ and $B'$, leaving the remaining sets equal to $A\setminus(B'\cup B)$, then piece together the two bijections. -
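For a concrete instance of the construction in the answer (my own added example, not part of the original thread): take $A=\mathbb{R}$ and $B=\mathbb{N}=\{0,1,2,\ldots\}$, and choose $B'=\{-1,-2,-3,\ldots\}\subset A\setminus B$. Then $B\cup B'=\mathbb{Z}$, and one explicit choice of $g:B\cup B'\to B'$ and the resulting $f:A\to A\setminus B$ is $$g(n)=\begin{cases}-(2n+1) & n\ge 0,\\ 2n & n<0,\end{cases} \qquad f(x)=\begin{cases}g(x) & x\in\mathbb{Z},\\ x & \text{otherwise},\end{cases}$$ so $f$ is the identity off the integers and shuffles $\mathbb{Z}$ bijectively onto the negative integers.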
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.943336009979248, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/110737/list
## Return to Question 6 Incorporated suggestion from my last paragraph into the question as a whole Let $E/\mathbb{Q}$ be an elliptic curve. By the modularity theorem, the prime indexed coefficients of its $L$-function agree with those of a weight $2$ cusp eigenform $f$ with integer coefficients. This immediately imply that the coefficients are congruent (mod $p$) p^k$) for every prime$p$. k > 0$. However, the converse is also true: if the coefficients of the $L$-series of $E$ and that of $f$ are congruent (mod $p$) p^k$) for every prime$p$, k > 0$, then the $L$-series agree. One could try to get congruence of coefficients (mod $p$) p^k$) for some value of$p$and for larger values of$p$k$ by a method analogous to the method via Langlands and Tunnell rather than as a consequence of modularity lifting theorems. One immediately runs into a stumbling block because when $p > 3$ k$or$p\$ is big enough the group is nonsolvable and the methods of Langlands and Tunnell can't be applied (in a known way) to prove relevant cases of the strong Artin conjecture. Nevertheless, there exists an $n$ such that there is an injective representation $\rho: GL(2, \mathbb{Z}/p\mathbb{Z}) mathbb{Z}/p^k\mathbb{Z}) \to GL(n, \mathbb{C})$. If one can take this representation to be irreducible, then according to the strong Artin conjecture, its $L$-function should be automorphic. Even assuming that this is the case, it's not at all immediately clear (at least, without knowing the modularity theorem) that the $L$-function of the corresponding automorphic representation is related to that a weight $2$ holomorphic cusp eigenform for $GL(2)$. But functoriality can sometimes be used to relate $L$-functions for automorphic representations on one group to $L$-functions of automorphic representations on another group. The arrows only go one way, and in this case it looks like the wrong way, but sometimes one can characterize the arrows' images. Given that we know ex post that there is a relationship between the (mod $p$) p^k$) Galois representation attached to an elliptic curve and that of$f$(uniform over$p$!), k$!), one can ask whether one can see the relationship directly'' from functoriality, without passing through the modularity lifting theorems. Moreover, if one could do this for infinitely many $p$, k$, perhaps one could show that the coefficients of the$L$-function of the elliptic curve$E$match up with those of the associated modular form$f$in characteristic$0\$ so as to obtain a different proof of the modularity theorem (conditional, of course, on functoriality). The picture that I've sketched above is full of holes like Swiss cheese and it will take me years to understand the precise statements that I allude to above (never mind the proofs!). Nevertheless, I feel that there's a kernel of a well-defined question in what I write. I assume that the strategy that I allude to breaks down somewhere, because otherwise I would have heard about it. I would be grateful to anybody who would be willing to enlighten me as to what goes wrong. To make one closing remark, the above strategy seems unnatural insofar as one is switching between different Galois representations when one could instead be looking at $p$-adic families.Perhaps one it would be better [Edited 10/27/12 to ask if one incorporate my last paragraph.] 
One could prove that the (mod $p^k$) Galois representations are modular also attempt to get a result as $k$ p$varies by the strategy rather than$k$if there are technical problems that I outlined above.come up with varying$k$but not with varying$p\$. 5 added 9 characters in body; deleted 78 characters in body Let $E/\mathbb{Q}$ be an elliptic curve. By the modularity theorem, the prime indexed coefficients of its $L$-function agree with those of a weight $2$ cusp eigenform $f$ with integer coefficients. This immediately imply that the coefficients are congruent (mod $p$) for every prime $p$. However, the converse is also true: if the coefficients of the $L$-series of $E$ and that of $f$ are congruent (mod $p$) for every prime $p$, then the $L$-series agree. The work of Langlands and Tunnell can be used to show that if the elliptic curve has irreducible (mod $3$) Galois representation, then coefficients of the $L$-function agree with those of a weight $2$ cusp form with coefficients in some algebraic number field $K$ (mod $v$), where $v$ is a prime above $3$ in $K$. This was the starting point of Wiles' proof of the modularity theorem for semi-stable elliptic curves over $\mathbb{Q}$. One could try to get congruence of coefficients (mod $p$) for larger values of $p$ by a method analogous to the method via Langlands and Tunnell rather than as a consequence of modularity lifting theorems. One immediately runs into a stumbling block because when $p > 3$ the group is nonsolvable and the methods of Langlands and Tunnell can't be applied (in a known way) to prove relevant cases of the strong Artin conjecture. Nevertheless, there exists an $n$ such that there is an injective representation $\rho: GL(2, \mathbb{Z}/p\mathbb{Z}) \to GL(n, \mathbb{C})$. If one can take this representation to be irreducible, then according to the strong Artin conjecture, its $L$-function should be automorphic. Even assuming that this is the case, it's not at all immediately clear (at least, without knowing the modularity theorem) that the $L$-function of the corresponding automorphic representation is related to that a weight $2$ holomorphic cusp eigenform for $GL(2)$. But functoriality can sometimes be used to relate $L$-functions for automorphic representations on one group to $L$-functions of automorphic representations on another group. The arrows only go one way, and in this case it looks like the wrong way, but sometimes one can characterize the arrows' images. Given that we know ex post that there is a relationship between the (mod $p$) Galois representation attached to an elliptic curve and that of $f$ (uniform over $p$!), one can ask whether one can see the relationship directly'' from functoriality, without passing through the modularity lifting theorems. Moreover, if one could do this for infinitely many $p$, perhaps one could show that the coefficients of the $L$-function of the elliptic curve $E$ match up with those of the associated modular form $f$ in characteristic $0$ so as to obtain a different proof of the modularity theorem (conditional, of course, on functoriality). The picture that I've sketched above is full of holes like Swiss cheese and it will take me years to understand the precise statements that I allude to above (never mind the proofs!). Nevertheless, I feel that there's a kernel of a well-defined question in what I write. I assume that the strategy that I allude to breaks down somewhere, because otherwise I would have heard about it. 
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 274, "mathjax_display_tex": 0, "mathjax_asciimath": 6, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9545089602470398, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/196003-find-equation-line-cb-print.html
# Find the equation of the line CB

• March 15th 2012, 11:07 AM andyboy179

Find the equation of the line CB

hello, this is my question: The point A(3,2) is one end of the diameter AB of a circle. Both A and B lie on the line y=1/3x+1. The point C(1,4) lies on the circumference of the circle. Find the equation of CB. im not too sure at all how i would do this question so i would require some help please! could you write up step by step what i have to do and explain fully so i can try and understand what is going on?! thanks!

• March 15th 2012, 11:32 AM Wilmer

Re: Find the equation of the line CB

Quote: Originally Posted by andyboy179 .....could you write up step by step what i have to do and explain fully so i can try and understand what is going on?!

• March 15th 2012, 11:42 AM princeps

Re: Find the equation of the line CB

Quote: Originally Posted by andyboy179 hello, this is my question: The point A(3,2) is one end of the diameter AB of a circle. Both A and B lie on the line y=1/3x+1. The point C(1,4) lies on the circumference of the circle. Find the equation of CB.

Let's denote the center of the circle as $O(x_O,y_O)$.

Step 1. Solve the following system of equations in order to find the center of the circle: $\begin{cases}(x_O-x_A)^2+(y_O-y_A)^2=(x_O-x_C)^2+(y_O-y_C)^2 \\y_O=\frac{1}{3}x_O+1\end{cases}$

Step 2. Calculate the point B: $x_B=2x_O-x_A$, $y_B=2y_O-y_A$

Step 3. Write the equation of the line CB using the equation of the line through two points: $y-y_C=\frac{y_C-y_B}{x_C-x_B}(x-x_C)$

• March 16th 2012, 04:13 PM bjhopper

Re: Find the equation of the line CB

Hi andyboy179, the following may be of more help:

1. The perpendicular bisector of AC passes through the circle center.
2. Solving y=1/3x+1 together with 1 above gives the center.
3. The radius of the circle equals the distance between A and the center.
4. Determine point B using the radius and y=1/3x+1.
5. CB has the same slope as the line in 1. Use the point-slope formula to get the equation of CB.
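For anyone checking the arithmetic, princeps's three steps can be carried out with a short sympy script. This sketch is not part of the original thread; it simply applies the method to the given data A(3,2), C(1,4) and the line y = (1/3)x + 1.

```python
# Hypothetical check of the steps above: find the centre O on y = x/3 + 1 that is
# equidistant from A and C, reflect A through O to get B, then form the line CB.
from sympy import symbols, Eq, solve, Rational

x, y = symbols('x y')
A, C = (3, 2), (1, 4)

centre = solve(
    [Eq((x - A[0])**2 + (y - A[1])**2, (x - C[0])**2 + (y - C[1])**2),
     Eq(y, Rational(1, 3) * x + 1)],
    [x, y], dict=True)[0]
Ox, Oy = centre[x], centre[y]          # centre O = (0, 1)

Bx, By = 2 * Ox - A[0], 2 * Oy - A[1]  # B = (-3, 0), diametrically opposite A

m = (C[1] - By) / (C[0] - Bx)          # slope of CB
print(f"O = ({Ox}, {Oy}), B = ({Bx}, {By}), CB: y = {m}*x + {C[1] - m * C[0]}")
# O = (0, 1), B = (-3, 0), CB: y = 1*x + 3
```

This also matches bjhopper's outline: the perpendicular bisector of AC is y = x + 1, it meets y = (1/3)x + 1 at the centre (0, 1), and CB comes out parallel to it with equation y = x + 3.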
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.928253710269928, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/110270-domain-function.html
# Thread:

1. ## domain of a function

f(x;y) = 1/ (9x^2 + y^2) what is the domain of this function? i know that 9x^2 + y^2 cannot be equal to 0 so where do i go from there?

2. Originally Posted by yen yen: f(x;y) = 1/ (9x^2 + y^2) what is the domain of this function? i know that 9x^2 + y^2 cannot be equal to 0 so where do i go from there?

The domain is the set of all ordered pairs $(x, y) \in \mathbb{R}^2$ except the origin $(0, 0)$, since $9x^2 + y^2 = 0$ forces both $x = 0$ and $y = 0$.

3. hi, $9x^{2}+y^{2}$ equals zero only when $x$ and $y$ are both equal to zero, so the domain is all of $\mathbb{R}^2$ with the point $(0,0)$ removed.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8910837769508362, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/83920-taylor-s-inequality.html
# Thread:

1. ## Taylor's Inequality

ok i dont know how to find the error bound plz help? (a) Approximate f by a Taylor polynomial with degree n at the number a. T2(x) = 12+(1/4)*(x-4)-(1/64)*(x-4)^2 (b) Use Taylor's Inequality to estimate the accuracy of the approximation f ≈ Tn(x) when x lies in the given interval. (Round the answer to six decimal places.) i think i find M to be .001953125 to use in the formula (M/(n+1)!)|x-a|^n+1 but i dont know what to do on this anyone help? THANKS!

2. Originally Posted by ahawk1: ok i dont know how to find the error bound plz help? (a) Approximate f by a Taylor polynomial with degree n at the number a. T2(x) = 12+(1/4)*(x-4)-(1/64)*(x-4)^2 (b) Use Taylor's Inequality to estimate the accuracy of the approximation f ≈ Tn(x) when x lies in the given interval. (Round the answer to six decimal places.) i think i find M to be .001953125 to use in the formula (M/(n+1)!)|x-a|^n+1 but i dont know what to do on this anyone help? THANKS!

I believe the approximation of the function using a Taylor polynomial of degree 2 should be: $2 + \frac{x - 4}{4} - \frac{(x-4)^2}{64}$

3. Originally Posted by icemanfan: I believe the approximation of the function using a Taylor polynomial of degree 2 should be: $2 + \frac{x - 4}{4} - \frac{(x-4)^2}{64}$

i already had that in my answer up top but i need to know the error bound not that actual polynomial but thanks!
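For what it's worth, here is one way such an error bound can be computed once the interval is fixed. This is a hedged sketch rather than the thread's solution: it assumes $f(x)=\sqrt{x}$ and $a=4$ (consistent with icemanfan's corrected polynomial) and an assumed interval $[4, 4.4]$, since the actual interval is not quoted in the thread.

```python
# Hypothetical worked example of Taylor's inequality |R_2(x)| <= M/(n+1)! * |x-a|^(n+1),
# assuming f(x) = sqrt(x), a = 4, n = 2, and an assumed interval [4, 4.4].
from sympy import sqrt, symbols, diff, factorial, Rational

x = symbols('x', positive=True)
f = sqrt(x)
a, n = 4, 2

f3 = diff(f, x, n + 1)            # f'''(x) = (3/8) * x**(-5/2)
M = abs(f3.subs(x, a))            # |f'''| decreases on [4, 4.4], so its max M is at x = a: 3/256
b = Rational(44, 10)              # right endpoint of the assumed interval
bound = M / factorial(n + 1) * (b - a)**(n + 1)
print(M, bound, float(bound))     # 3/256 1/8000 0.000125
```

For a different interval the same recipe applies; only M and the largest value of |x - a| change.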
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8804898262023926, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/115212/transform-a-polynomial-into-another-one-upto-a-constant
## transform a polynomial into another one up to a constant

I have a polynomial $p(x)=a_Nx^N+a_{N-1}x^{N-1}+\dots+a_0$. I want to convert this into another polynomial of the same order, say $b_Ny^N+b_{N-1}y^{N-1}+\dots+b_0$. Is it possible to find a transformation $y=f(x)$ that will transform the coefficients $a_n$ into $b_n$? For the case of a second order polynomial, a linear transform does the job.

I am asking this in order to generate random variables. For the Gaussian random variable case I can sample from $N(0,1)$, the density of which is a second order polynomial inside an exponential. By a linear transformation $ax+b$, I can get samples from $N(b,a^2)$. What I am trying to do is, assuming I have samples from $x \sim$ a density $\propto \exp(a_Nx^N+a_{N-1}x^{N-1}+\dots+a_0)$, whether or not I can get samples from $y \sim$ a density $\propto \exp(b_Ny^N+b_{N-1}y^{N-1}+\dots+b_0)$ just by a deterministic transform $y=f(x)$. Btw, $a_N$ is a negative constant and $N$ is always even, so that $\exp(\text{polynomial})$ makes a sensible probability density function. Thanks a lot in advance.

edit As Robert Israel stated in his comment, the constant term is to be a proper normalization term. The basic question is this: a Gaussian corresponds to a log-polynomial density where the polynomial is of 2nd order, and one can sample $N(\mu,\sigma^2)$ from $N(0,1)$ just by the transform $\mu+\sigma x$. Hence this corresponds to converting a polynomial into another one. The constant term is not of importance as it is just a normalization term. So for a higher order polynomial of degree N, is there a possible way to do such a thing?

- I don't understand the question. What relationship do you want the first and second polynomial to have? – Qiaochu Yuan Dec 2 at 22:46
- so in $p(x)$ if I replace x with f(x), or x with a y, I will get the polynomial with new coefficients. – YBE Dec 2 at 22:50
- 3 You are given the $a_n$'s and the $b_n$'s? In general, this is not possible. E.g., $p(x) = -x^2$, $q(y) = -x^2-1$. Assuming that you are only interested in real parameters, $p(x)$ assumes the value $0$, but $q(y)$ never does, so there is no $f$ such that $p(x) = q(f(x))$ for all $x$. – Goldstern Dec 2 at 23:20
- In the quadratic case a linear transformation is not enough, even over the complex numbers. The reason being that you have only two parameters to choose (a and b), while you have to solve for three equations. To be concrete, $p(ax+b)=a_{2} a^{2} x^{2} + (2 a_{2} a b + a_{1} a) x + a_{2} b^{2} + a_{1} b + a_{0}$, so in order for the linear transformation to turn one polynomial into the other you have to have $a_{2} a^{2}=b_2$, $(2 a_{2} a b + a_{1} a)=b_1$ and $a_{2} b^{2} + a_{1} b + a_{0}=b_0$. These three equations will rarely be solvable at the same time. – Maarten Derickx Dec 3 at 0:50
- 1 Perhaps the confusion is due to the fact that YBE is implicitly allowing the constant term in the quadratic polynomial to be adjusted to obtain a probability density function. So for $p(x)$ and $q(y)$ quadratics with negative leading coefficient, you can get $p(ax+b) = q(x) + c$ for some constant $c$ by solving two equations in two unknowns. – Robert Israel Dec 3 at 1:16

## 1 Answer

It is possible to do that using the algorithm known as the Independent Metropolis-Hastings sampler, without having to do any transformation and without having to compute the constant normalizing terms.
Assume you can sample from the density $q(x)\propto \exp(g(x))$ where $g(x)$ is a polynomial (say, you are sampling from the normal distribution) and your objective is to get a sample from the density $p(x)\propto \exp(f(x))$ where $f(x)$ is another polynomial (naturally both polynomials need to have negative leading term to ensure those densities exist). Then the algorithm will produce a Markov process whose invariant distribution is $p(x)$. Starting from some arbitrary initial value, say $X^0$, let's say that you already have $M$ draws. To get a new sample draw $X^{M+1}$, draw a random variable from $q$. Let's call that draw $X'$. Then accept or reject $X'$ with probability $$1 \wedge \exp[f(X')-f(X^M)+g(X^M)-g(X')]$$ If you have accepted, then set $X^{M+1}=X'$; otherwise set $X^{M+1}=X^M$. Eventually that Markov process will have converged, and you could consider, say, the last $N$ draws as such a sample from the desired distribution. A very good introduction to that algorithm is section 7.4 of this excellent book

Edit As indicated by the author of the question in a comment below, if the question is to transform a sample from one continuous distribution having density $q(x)$ into a sample from another continuous distribution having density $p(x)$, then this could be achieved in the following way. Let $Q$ and $P$ be the CDF's associated with $q$ and $p$ respectively. Assume we have an i.i.d. sample $X_1$,...,$X_n$ from $q$; then $P^{-1}(Q(X_1))$,...,$P^{-1}(Q(X_n))$ is an i.i.d. sample from $p$. (This is straightforward to prove.) Now, the issue in your case is that $P$ and the quantile function $P^{-1}$ may not be known in closed form (even the normalizing constant of the density associated with $P$ may not be known in closed form). However, numerical integration and interpolation could work here. -

- Thanks for the answer, I know that I can achieve sampling from arbitrary densities by the Metropolis-Hastings algorithm. My question is more on the order of: if I have samples from $exp(polynomial_1)$ can I get the samples from $exp(polynomial_2)$ just by transforming the samples I get from the first log-polynomial density. I will be sampling from such densities in an online fashion so speed is a concern and I cannot wait for the M-H chain to mix. I pretty much want a way to sample directly from such densities. I already sample from log-polynomial densities using slice sampling. – YBE Dec 4 at 8:39
- I edited my answer to reply to your comment about transforming a sample from one continuous distribution into a sample from another. – an12 Dec 4 at 9:11
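To make the accept/reject step concrete, here is a minimal numpy sketch of the independence Metropolis-Hastings sampler described above. The target $f$, the $N(0,1)$ proposal, and the chain length are illustrative choices, not anything fixed by the question.

```python
# Minimal sketch of the independence Metropolis-Hastings sampler described above.
# Target: p(x) ∝ exp(f(x)) with f an even-degree polynomial with negative leading term.
# Proposal: q(x) ∝ exp(g(x)), taken here to be N(0, 1).  All specific choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def f(x):            # log target density, up to an additive constant
    return -x**4 + 2 * x**2

def g(x):            # log proposal density N(0, 1), up to an additive constant
    return -0.5 * x**2

def independence_mh(n_steps, x0=0.0):
    x, out = x0, np.empty(n_steps)
    for i in range(n_steps):
        x_new = rng.standard_normal()                 # draw X' from the proposal q
        log_accept = f(x_new) - f(x) + g(x) - g(x_new)
        if np.log(rng.uniform()) < log_accept:        # accept with probability 1 ∧ exp(log_accept)
            x = x_new
        out[i] = x
    return out

draws = independence_mh(20_000)[5_000:]   # drop an initial stretch as burn-in
print(draws.mean(), draws.std())
```

The deterministic transform $y = P^{-1}(Q(x))$ from the edit can be prototyped in the same spirit by tabulating both CDFs on a grid (numerical integration) and interpolating, at the cost of those two extra numerical steps.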
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 67, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9490344524383545, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Hilbert_Transform
# Hilbert transform

In mathematics and in signal processing, the Hilbert transform is a linear operator which takes a function, u(t), and produces a function, H(u)(t), with the same domain. The Hilbert transform is named after David Hilbert, who first introduced the operator in order to solve a special case of the Riemann–Hilbert problem for holomorphic functions. It is a basic tool in Fourier analysis, and provides a concrete means for realizing the harmonic conjugate of a given function or Fourier series. Furthermore, in harmonic analysis, it is an example of a singular integral operator, and of a Fourier multiplier. The Hilbert transform is also important in the field of signal processing where it is used to derive the analytic representation of a signal u(t). The Hilbert transform was originally defined for periodic functions, or equivalently for functions on the circle, in which case it is given by convolution with the Hilbert kernel. More commonly, however, the Hilbert transform refers to a convolution with the Cauchy kernel, for functions defined on the real line R (the boundary of the upper half-plane). The Hilbert transform is closely related to the Paley–Wiener theorem, another result relating holomorphic functions in the upper half-plane and Fourier transforms of functions on the real line.

The Hilbert transform, in red, of a square wave, in blue

## Introduction

The Hilbert transform of u can be thought of as the convolution of u(t) with the function h(t) = 1/(πt). Because h(t) is not integrable the integrals defining the convolution do not converge. Instead, the Hilbert transform is defined using the Cauchy principal value (denoted here by p.v.). Explicitly, the Hilbert transform of a function (or signal) u(t) is given by $H(u)(t) = \text{p.v.} \int_{-\infty}^{\infty}u(\tau) h(t-\tau)\, d\tau =\frac{1}{\pi} \ \text{p.v.} \int_{-\infty}^{\infty} \frac{u(\tau)}{t-\tau}\, d\tau,$ provided this integral exists as a principal value. This is precisely the convolution of u with the tempered distribution p.v. 1/πt (due to Schwartz (1950); see Pandey (1996, Chapter 3)). Alternatively, by changing variables, the principal value integral can be written explicitly (Zygmund 1968, §XVI.1) as $H(u)(t) = -\frac{1}{\pi}\lim_{\varepsilon\downarrow 0}\int_{\varepsilon}^\infty \frac{u(t+\tau)-u(t-\tau)}{\tau}\,d\tau.$ When the Hilbert transform is applied twice in succession to a function u, the result is negative u: $H(H(u))(t) = -u(t),\,$ provided the integrals defining both iterations converge in a suitable sense. In particular, the inverse transform is −H. This fact can most easily be seen by considering the effect of the Hilbert transform on the Fourier transform of u(t) (see Relationship with the Fourier transform, below). For an analytic function in the upper half-plane, the Hilbert transform describes the relationship between the real part and the imaginary part of the boundary values. That is, if f(z) is analytic in the plane Im z > 0 and u(t) = Re f(t + 0·i), then Im f(t + 0·i) = H(u)(t) up to an additive constant, provided this Hilbert transform exists.

### Notation

In signal processing the Hilbert transform of u(t) is commonly denoted by $\widehat u(t).\,$ However, in mathematics, this notation is already extensively used to denote the Fourier transform of u(t).
Occasionally, the Hilbert transform may be denoted by $\tilde{u}(t)$. Furthermore, many sources define the Hilbert transform as the negative of the one defined here. ## History The Hilbert transform arose in Hilbert's 1905 work on a problem posed by Riemann concerning analytic functions (Kress (1989); Bitsadze (2001)) which has come to be known as the Riemann–Hilbert problem. Hilbert's work was mainly concerned with the Hilbert transform for functions defined on the circle (Khvedelidze 2001; Hilbert 1953). Some of his earlier work related to the Discrete Hilbert Transform dates back to lectures he gave in Göttingen. The results were later published by Hermann Weyl in his dissertation (Hardy, Littlewood & Polya 1952, §9.1). Schur improved Hilbert's results about the discrete Hilbert transform and extended them to the integral case (Hardy, Littlewood & Polya 1952, §9.2). These results were restricted to the spaces L2 and ℓ2. In 1928, Marcel Riesz proved that the Hilbert transform can be defined for u in Lp(R) for 1 ≤ p ≤ ∞, that the Hilbert transform is a bounded operator on Lp(R) for the same range of p, and that similar results hold for the Hilbert transform on the circle as well as the discrete Hilbert transform (Riesz 1928). The Hilbert transform was a motivating example for Antoni Zygmund and Alberto Calderón during their study of singular integrals (Calderón & Zygmund 1952). Their investigations have played a fundamental role in modern harmonic analysis. Various generalizations of the Hilbert transform, such as the bilinear and trilinear Hilbert transforms are still active areas of research today. ## Relationship with the Fourier transform The Hilbert transform is a multiplier operator (Duoandikoetxea 2000, Chapter 3). The symbol of H is σH(ω) = −i sgn(ω) where sgn is the signum function. Therefore: $\mathcal{F}(H(u))(\omega) = (-i\,\operatorname{sgn}(\omega))\cdot \mathcal{F}(u)(\omega)\,$ where $\mathcal{F}$ denotes the Fourier transform. Since sgn(x) = sgn(2πx), it follows that this result applies to the three common definitions of $\mathcal{F}.$ By Euler's formula, $\sigma_H(\omega) \, \ =\ \begin{cases} \ \ i = e^{+i\pi/2}, & \mbox{for } \omega < 0\\ \ \ \ \ 0, & \mbox{for } \omega = 0\\ \ \ -i = e^{-i\pi/2}, & \mbox{for } \omega > 0. \end{cases}$ Therefore H(u)(t) has the effect of shifting the phase of the negative frequency components of u(t) by +90° (π/2 radians) and the phase of the positive frequency components by −90°. And i·H(u)(t) has the effect of restoring the positive frequency components while shifting the negative frequency ones an additional +90°, resulting in their negation. When the Hilbert transform is applied twice, the phase of the negative and positive frequency components of u(t) are respectively shifted by +180° and −180°, which are equivalent amounts. 
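As a purely numerical aside (not part of the article), the multiplier $\sigma_H(\omega) = -i\,\operatorname{sgn}(\omega)$ gives a direct recipe for approximating the Hilbert transform of sampled data: take an FFT, multiply each bin by $-i\,\operatorname{sgn}(\omega)$, and invert. The test signal below is an arbitrary choice; for $\cos(3t)$ the output should approximate $\sin(3t)$.

```python
# Numerical illustration (not from the article) of the multiplier sigma_H(omega) = -i*sgn(omega):
# FFT the samples, multiply by -i*sgn(omega), then take the inverse FFT.
import numpy as np

N = 1024
t = np.arange(N) * 2 * np.pi / N
u = np.cos(3 * t)                            # its Hilbert transform should be sin(3t)

U = np.fft.fft(u)
omega_sign = np.sign(np.fft.fftfreq(N))      # sgn of the (signed) frequency of each bin
Hu = np.fft.ifft(-1j * omega_sign * U).real

print(np.max(np.abs(Hu - np.sin(3 * t))))    # round-off small: matches H(cos) = sin
```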
The signal is negated, i.e., H(H(u)) = −u, because: $\big(\sigma_H(\omega)\big)^2 = e^{\pm i\pi} = -1 \qquad \text{for } \omega\neq 0.$

## Table of selected Hilbert transforms

| Signal $u(t)\,$ | Hilbert transform[fn 1] $H(u)(t)\,$ |
|---|---|
| $\sin(t)\,$ [fn 2] | $-\cos(t)\,$ |
| $\cos(t)\,$ [fn 2] | $\sin(t)\,$ |
| $\exp \left( i t \right)$ | $- i \exp \left( i t \right)$ |
| $\exp \left( -i t \right)$ | $i \exp \left( -i t \right)$ |
| $1 \over t^2 + 1$ | $t \over t^2 + 1$ |
| Sinc function $\sin(t) \over t$ | $1- \cos(t)\over t$ |
| Rectangular function $\sqcap(t)$ | ${1 \over \pi} \log \left\vert {t+{1 \over 2} \over t-{1 \over 2}} \right\vert$ |
| Dirac delta function $\delta(t) \,$ | ${1 \over \pi t}$ |
| Characteristic function $\chi_{[a,b]}(t) \,$ | $\frac{1}{\pi}\log \left\vert \frac{t-a}{t-b}\right\vert \,$ |

Notes

1. Some authors, e.g., Bracewell, use our −H as their definition of the forward transform. A consequence is that the right column of this table would be negated.
2. The Hilbert transform of the sin and cos functions can be defined in a distributional sense, if there is a concern that the integral defining them is otherwise conditionally convergent. In the periodic setting this result holds without any difficulty.

An extensive table of Hilbert transforms is available (King 2009). Note that the Hilbert transform of a constant is zero.

## Domain of definition

It is by no means obvious that the Hilbert transform is well-defined at all, as the improper integral defining it must converge in a suitable sense. However, the Hilbert transform is well-defined for a broad class of functions, namely those in Lp(R) for 1 < p < ∞. More precisely, if u is in Lp(R) for 1 < p < ∞, then the limit defining the improper integral $H(u)(t) = -\frac{1}{\pi}\lim_{\epsilon\downarrow 0}\int_\epsilon^\infty \frac{u(t+\tau)-u(t-\tau)}{\tau}\,d\tau$ exists for almost every t. The limit function is also in Lp(R), and is in fact the limit in the mean of the improper integral as well. That is, $-\frac{1}{\pi}\int_\epsilon^\infty \frac{u(t+\tau)-u(t-\tau)}{\tau}\,d\tau\to H(u)(t)$ as ε→0 in the Lp-norm, as well as pointwise almost everywhere, by the Titchmarsh theorem (Titchmarsh 1948, Chapter 5). In the case p=1, the Hilbert transform still converges pointwise almost everywhere, but may fail to be itself integrable even locally (Titchmarsh 1948, §5.14). In particular, convergence in the mean does not in general happen in this case. The Hilbert transform of an L1 function does converge, however, in L1-weak, and the Hilbert transform is a bounded operator from L1 to L1,w (Stein & Weiss 1971, Lemma V.2.8). (In particular, since the Hilbert transform is also a multiplier operator on L2, Marcinkiewicz interpolation and a duality argument furnishes an alternative proof that H is bounded on Lp.)

## Properties

### Boundedness

If 1 < p < ∞, then the Hilbert transform on Lp(R) is a bounded linear operator, meaning that there exists a constant Cp such that $\|Hu\|_p \le C_p\| u\|_p$ for all u∈Lp(R). This theorem is due to Riesz (1928, VII); see also Titchmarsh (1948, Theorem 101). The best constant Cp is given by $C_p=\begin{cases}\tan \frac{\pi}{2p} & \text{for } 1 < p\leq 2\\ \cot\frac{\pi}{2p} & \text{for } 2<p<\infty. \end{cases}$ This result is due to (Pichorides 1972); see also Grafakos (2004, Remark 4.1.8). The same best constants hold for the periodic Hilbert transform.
The boundedness of the Hilbert transform implies the Lp(R) convergence of the symmetric partial sum operator $S_R f = \int_{-R}^{R}\hat{f}(\xi)e^{2\pi i x\xi}\,d\xi$ to f in Lp(R), see for example (Duoandikoetxea 2000, p. 59).

### Anti-self adjointness

The Hilbert transform is an anti-self adjoint operator relative to the duality pairing between Lp(R) and the dual space Lq(R), where p and q are Hölder conjugates and 1 < p,q < ∞. Symbolically, $\langle Hu, v\rangle = \langle u, -Hv\rangle$ for u ∈ Lp(R) and v ∈ Lq(R) (Titchmarsh 1948, Theorem 102).

### Inverse transform

The Hilbert transform is an anti-involution (Titchmarsh 1948, p. 120), meaning that $H(H(u)) = -u\,$ provided each transform is well-defined. Since H preserves the space Lp(R), this implies in particular that the Hilbert transform is invertible on Lp(R), and that $H^{-1} = -H.\,$

### Differentiation

Formally, the derivative of the Hilbert transform is the Hilbert transform of the derivative, i.e. these two linear operators commute: $H\left(\frac{du}{dt}\right) = \frac{d}{dt}H(u).$ Iterating this identity, $H\left(\frac{d^ku}{dt^k}\right) = \frac{d^k}{dt^k}H(u).$ This is rigorously true as stated provided u and its first k derivatives belong to Lp(R) (Pandey 1996, §3.3). One can check this easily in the frequency domain, where differentiation becomes multiplication by ω.

### Convolutions

The Hilbert transform can formally be realized as a convolution with the tempered distribution (Duistermaat & Kolk 2010, p. 211) $h(t) = \text{p.v. }\frac{1}{\pi t}.$ Thus formally, $H(u) = h*u.\,$ However, a priori this may only be defined for u a distribution of compact support. It is possible to work somewhat rigorously with this since compactly supported functions (which are distributions a fortiori) are dense in Lp. Alternatively, one may use the fact that h(t) is the distributional derivative of the function log|t|/π; to wit $H(u)(t) = \frac{d}{dt}\left(\frac{1}{\pi} (u*\log|\cdot|)(t)\right).$ For most operational purposes the Hilbert transform can be treated as a convolution. For example, in a formal sense, the Hilbert transform of a convolution is the convolution of the Hilbert transform on either factor: $H(u*v) = H(u)*v = u*H(v).\,$ This is rigorously true if u and v are compactly supported distributions since, in that case, $h*(u*v) = (h*u)*v = u*(h*v).\,$ By passing to an appropriate limit, it is thus also true if u ∈ Lp and v ∈ Lr provided $1 < \frac{1}{p} + \frac{1}{r},$ a theorem due to Titchmarsh (1948, Theorem 104).

### Invariance

The Hilbert transform has the following invariance properties on L2(R).

• It commutes with translations. That is, it commutes with the operators Taƒ(x) = ƒ(x + a) for all a in R.
• It commutes with positive dilations. That is, it commutes with the operators Mλƒ(x) = ƒ(λx) for all λ > 0.
• It anticommutes with the reflection Rƒ(x) = ƒ(−x).

Up to a multiplicative constant, the Hilbert transform is the only bounded operator on L2 with these properties (Stein 1970, §III.1). In fact there is a larger group of operators commuting with the Hilbert transform. The group SL(2,R) acts by unitary operators Ug on the space L2(R) by the formula $\displaystyle{U_{g}^{-1}f(x) =(cx+d)^{-1} f\left({ax+b\over cx +d}\right),\,\,\,g=\begin{pmatrix} a & b \\ c & d \end{pmatrix}.}$ This unitary representation is an example of a principal series representation of SL(2,R). In this case it is reducible, splitting as the orthogonal sum of two invariant subspaces, Hardy space H2(R) and its conjugate.
These are the spaces of L2 boundary values of holomorphic functions on the upper and lower halfplanes. H2(R) and its conjugate consist of exactly those L2 functions with Fourier transforms vanishing on the negative and positive parts of the real axis respectively. Since the Hilbert transform is equal to H = -i (2P - I), with P being the orthogonal projection from L2(R) onto H2(R), it follows that H2(R) and its orthogonal are eigenspaces of H for the eigenvalues ± i. In other words H commutes with the operators Ug. The restrictions of the operators Ug to H2(R) and its conjugate give irreducible representations of SL(2,R)—the so-called limit of discrete series representations.[1] ## Extending the domain of definition ### Hilbert transform of distributions It is further possible to extend the Hilbert transform to certain spaces of distributions (Pandey 1996, Chapter 3). Since the Hilbert transform commutes with differentiation, and is a bounded operator on Lp, H restricts to give a continuous transform on the inverse limit of Sobolev spaces: $\mathcal{D}_{L^p} = \underset{n\to\infty}{\underset{\longleftarrow}{\lim}} W^{n,p}(\mathbb{R}).$ The Hilbert transform can then be defined on the dual space of $\mathcal{D}_{L^p}$, denoted $\mathcal{D}_{L^p}'$, consisting of Lp distributions. This is accomplished by the duality pairing: for $u\in \mathcal{D}'_{L^p}$, define $H(u)\in \mathcal{D}'_{L^p}$ by $\langle Hu,v\rangle \overset{\mathrm{def}}{=} \langle u, -Hv\rangle$ for all $v\in\mathcal{D}_{L^p}$. It is possible to define the Hilbert transform on the space of tempered distributions as well by an approach due to Gel'fand & Shilov (1967)[page needed], but considerably more care is needed because of the singularity in the integral. ### Hilbert transform of bounded functions The Hilbert transform can be defined for functions in L∞(R) as well, but it requires some modifications and caveats. Properly understood, the Hilbert transform maps L∞(R) to the Banach space of bounded mean oscillation (BMO) classes. Interpreted naively, the Hilbert transform of a bounded function is clearly ill-defined. For instance, with u = sgn(x), the integral defining H(u) diverges almost everywhere to ±∞. To alleviate such difficulties, the Hilbert transform of an L∞-function is therefore defined by the following regularized form of the integral $H(u)(t) = \text{p.v.} \int_{-\infty}^\infty u(\tau)\left\{h(t-\tau)- h_0(-\tau)\right\}\,d\tau$ where as above h(x) = 1/πx and $h_0(x) = \begin{cases} 0&\mathrm{if\ }|x|<1\\ \frac{1}{\pi x} &\mathrm{otherwise} \end{cases}$ The modified transform H agrees with the original transform on functions of compact support by a general result of Calderón & Zygmund (1952); see Fefferman (1971). The resulting integral, furthermore, converges pointwise almost everywhere, and with respect to the BMO norm, to a function of bounded mean oscillation. A deep result of Fefferman (1971) and Fefferman & Stein (1972) is that a function is of bounded mean oscillation if and only if it has the form ƒ + H(g) for some ƒ, g ∈ L∞(R). ## Conjugate functions The Hilbert transform can be understood in terms of a pair of functions f(x) and g(x) such that the function $F(x) = f(x) + ig(x)$ is the boundary value of a holomorphic function F(z) in the upper half-plane (Titchmarsh 1948, Chapter V). Under these circumstances, if f and g are sufficiently integrable, then one is the Hilbert transform of the other. Suppose that f ∈ Lp(R). 
Then, by the theory of the Poisson integral, f admits a unique harmonic extension into the upper half-plane, and this extension is given by $u(x+iy) = u(x,y) = \frac{1}{\pi}\int_{-\infty}^\infty f(s)\frac{y}{(x-s)^2+y^2}\,ds$ which is the convolution of f with the Poisson kernel $P(x,y) = \frac{1}{\pi}\frac{y}{x^2+y^2}.$ Furthermore, there is a unique harmonic function v defined in the upper half-plane such that F(z) = u(z) + iv(z) is holomorphic and $\lim_{y\to\infty} v(x+iy) = 0.$ This harmonic function is obtained from f by taking a convolution with the conjugate Poisson kernel $Q(x,y) = \frac{1}{\pi}\frac{x}{x^2+y^2}.$ Thus $v(x,y) = \frac{1}{\pi}\int_{-\infty}^\infty f(s)\frac{x-s}{(x-s)^2+y^2}\,ds.$ Indeed, the real and imaginary parts of the Cauchy kernel are $\frac{i}{\pi z} = P(x,y) + iQ(x,y),$ so that F = u + iv is holomorphic by the Cauchy integral theorem. The function v obtained from u in this way is called the harmonic conjugate of u. The (non-tangential) boundary limit of v(x,y) as y → 0 is the Hilbert transform of f. Thus, succinctly, $H(f) = \lim_{y\to 0} Q(-,y)\star f.$ ### Titchmarsh's theorem A theorem due to Edward Charles Titchmarsh makes precise the relationship between the boundary values of holomorphic functions in the upper half-plane and the Hilbert transform (Titchmarsh 1948, Theorem 95). It gives necessary and sufficient conditions for a complex-valued square-integrable function F(x) on the real line to be the boundary value of a function in the Hardy space H2(U) of holomorphic functions in the upper half-plane U. The theorem states that the following conditions for a complex-valued square-integrable function F : R → C are equivalent: • F(x) is the limit as z → x of a holomorphic function F(z) in the upper half-plane such that $\int_{-\infty}^\infty |F(x+iy)|^2\,dx < K.$ • −Im(F) is the Hilbert transform of Re(F), where Re(F) and Im(F) are real-valued functions with F = Re(F) + i Im(F). • The Fourier transform $\mathcal{F}(F)(x)$ vanishes for x < 0. A weaker result is true for functions of class Lp for p > 1 (Titchmarsh 1948, Theorem 103). Specifically, if F(z) is a holomorphic function such that $\int_{-\infty}^\infty |F(x+iy)|^p\,dx < K$ for all y, then there is a complex-valued function F(x) in Lp(R) such that F(x + iy) → F(x) in the Lp norm as y → 0 (as well as holding pointwise almost everywhere). Furthermore, $F(x) = f(x) - i g(x)\,$ where ƒ is a real-valued function in Lp(R) and g is the Hilbert transform (of class Lp) of ƒ. This is not true in the case p = 1. In fact, the Hilbert transform of an L1 function ƒ need not converge in the mean to another L1 function. Nevertheless (Titchmarsh 1948, Theorem 105), the Hilbert transform of ƒ does converge almost everywhere to a finite function g such that $\int_{-\infty}^\infty \frac{|g(x)|^p}{1+x^2}\,dx < \infty.$ This result is directly analogous to one by Andrey Kolmogorov for Hardy functions in the disc (Duren 1970, Theorem 4.2). ### Riemann–Hilbert problem One form of the Riemann–Hilbert problem seeks to identify pairs of functions F+ and F− such that F+ is holomorphic on the upper half-plane and F− is holomorphic on the lower half-plane, such that for x along the real axis, $F_+(x) - F_-(x) = f(x)$ where f(x) is some given real-valued function of x ∈ R. The left-hand side of this equation may be understood either as the difference of the limits of F± from the appropriate half-planes, or as a hyperfunction distribution. Two functions of this form are a solution of the Riemann–Hilbert problem. 
Formally, if F± solve the Riemann–Hilbert problem $f(x) = F_+(x) - F_-(x),$ then the Hilbert transform of f(x) is given by $H(f)(x) = \frac{1}{i}(F_+(x) + F_-(x))$ (Pandey 1996, Chapter 2).

## Hilbert transform on the circle

See also: Hardy space

For a periodic function f the circular Hilbert transform is defined as $\tilde f(x)=\frac{1}{2\pi}\text{ p.v.}\int_0^{2\pi}f(t)\cot\left(\frac{x-t}{2}\right)\,dt.$ The circular Hilbert transform is used in giving a characterization of Hardy space and in the study of the conjugate function in Fourier series. The kernel, $\cot\left(\frac{x-t}{2}\right)$ is known as the Hilbert kernel since it was in this form the Hilbert transform was originally studied (Khvedelidze 2001). The Hilbert kernel (for the circular Hilbert transform) can be obtained by making the Cauchy kernel 1/x periodic. More precisely, for x≠0 $\frac{1}{2}\cot\left(\frac{x}{2}\right)=\frac{1}{x}+\sum_{n=1}^\infty \left( \frac{1}{x+2n\pi} + \frac{1}{x-2n\pi} \right) .$ Many results about the circular Hilbert transform may be derived from the corresponding results for the Hilbert transform from this correspondence. Another more direct connection is provided by the Cayley transform C(x) = (x – i) / (x + i), which carries the real line onto the circle and the upper half plane onto the unit disk. It induces a unitary map $\displaystyle{Uf(x)=\pi^{-1/2} (x+i)^{-1} f(C(x))}$ of L2(T) onto L2(R). The operator U carries the Hardy space H2(T) onto the Hardy space H2(R).[2]

## Hilbert transform in signal processing

### Bedrosian's theorem

Bedrosian's theorem states that the Hilbert transform of the product of a low-pass and a high-pass signal with non-overlapping spectra is given by the product of the low-pass signal and the Hilbert transform of the high-pass signal, or $H(f_{LP}(t) f_{HP}(t)) = f_{LP}(t) H(f_{HP}(t))\,$ where fLP and fHP are the low- and high-pass signals respectively (Schreier & Scharf 2010, 14). Amplitude modulated signals are modeled as the product of a bandlimited "message" waveform, um(t), and a sinusoidal "carrier": $u(t) = u_m(t) \cdot \cos(\omega t + \phi)\,$ When um(t) has no frequency content above the carrier frequency, $\frac{\omega}{2\pi}\text{ Hz,}$ then by Bedrosian's theorem: $H(u)(t)= u_m(t) \cdot \sin(\omega t + \phi).$ (Bedrosian 1962)

### Analytic representation

Main article: analytic signal

In the context of signal processing, the conjugate function interpretation of the Hilbert transform, discussed above, gives the analytic representation of a signal u(t): $u_a(t) = u(t) + i\cdot H(u)(t),\,$ which is a holomorphic function in the upper half plane. For the narrowband model [above], the analytic representation is: $u_a(t) = u_m(t) \cdot \cos(\omega t + \phi) + i\cdot u_m(t) \cdot \sin(\omega t + \phi) = u_m(t) \cdot \left[\cos(\omega t + \phi) + i\cdot \sin(\omega t + \phi)\right] = u_m(t) \cdot e^{i(\omega t + \phi)}$   (by Euler's formula)   (Eq.1)

This complex heterodyne operation shifts all the frequency components of um(t) above 0 Hz. In that case, the imaginary part of the result is a Hilbert transform of the real part. This is an indirect way to produce Hilbert transforms.
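As a concrete illustration of the analytic representation, the following sketch uses scipy.signal.hilbert, which returns $u(t) + i\cdot H(u)(t)$. It is not part of the article, and the sample rate, carrier, and message waveform are illustrative choices.

```python
# Small numerical sketch (not from the article) of the analytic representation:
# scipy.signal.hilbert returns u(t) + i*H(u)(t), so its magnitude recovers the envelope
# of a narrowband AM signal.  The sample rate, carrier and message are assumptions.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                   # samples per second (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
u_m = 1.0 + 0.5 * np.cos(2 * np.pi * 2 * t)   # slowly varying, strictly positive "message"
u = u_m * np.cos(2 * np.pi * 50 * t)          # AM signal with a 50 Hz carrier

u_a = hilbert(u)                              # analytic representation u + i*H(u)
envelope = np.abs(u_a)                        # should reproduce u_m
print(np.max(np.abs(envelope - u_m)))         # tiny here, since the window holds whole periods
```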
### Phase/Frequency modulation

The form $u(t) = A\cdot \cos(\omega t + \phi_m(t))\,$ is called phase (or frequency) modulation. The instantaneous frequency is $\omega + \phi_m^\prime(t).$ For sufficiently large ω, compared to $\phi_m^\prime$: $H(u)(t) \approx A\cdot \sin(\omega t + \phi_m(t)),\,$ and: $u_a(t) \approx A \cdot e^{i(\omega t + \phi_m(t))}.$

### Single sideband modulation (SSB)

Main article: Single-sideband_modulation

When um(t) in Eq.1 is also an analytic representation (of a message waveform), that is: $u_m(t) = m(t) + i\cdot \widehat{m}(t),$ the result is single-sideband modulation: $u_a(t) = (m(t) + i\cdot \widehat{m}(t)) \cdot e^{i(\omega t + \phi)},$ whose transmitted component is: $\begin{align} u(t) &= \operatorname{Re}\{u_a(t)\}\\ &= m(t)\cdot \cos(\omega t + \phi) - \widehat{m}(t)\cdot \sin(\omega t + \phi). \end{align}$

### Causality

The function h with h(t) = 1/(πt) is a non-causal filter and therefore cannot be implemented as is, if u is a time-dependent signal. If u is a function of a non-temporal variable, e.g., spatial, the non-causality might not be a problem. The filter is also of infinite support, which may be a problem in certain applications. Another issue relates to what happens with the zero frequency (DC), which can be avoided by assuring that u does not contain a DC-component. A practical implementation in many cases implies that a finite support filter, which in addition is made causal by means of a suitable delay, is used to approximate the computation. The approximation may also imply that only a specific frequency range is subject to the characteristic phase shift related to the Hilbert transform. See also quadrature filter.

## Discrete Hilbert transform

Figure 1: Filter whose frequency response is bandlimited to about 95% of the Nyquist frequency
Figure 2: Hilbert transform filter with a highpass frequency response
Figure 3.
Figure 4: The Hilbert transform of cos(wt) is sin(wt). This figure shows the difference between sin(wt) and an approximate Hilbert transform computed by the MATLAB library function, hilbert(·)

For a discrete function, u[n], with discrete-time Fourier transform (DTFT), U(ω), the Hilbert transform is given by: $H(u)[n]\ =\ \scriptstyle{DTFT}^{-1} \displaystyle \{U(\omega)\cdot \sigma_H(\omega)\},$ where: $\sigma_H(\omega)\ \stackrel{\mathrm{def}}{=}\ \begin{cases} e^{+i\pi/2}, & -\pi < \omega < 0 \\ e^{-i\pi/2}, & 0 < \omega < \pi\\ 0, & \omega=-\pi, 0, \pi. \end{cases}$ And by the convolution theorem, an equivalent formulation is: $H(u)[n] = u[n] * h[n],\,$ where: $h[n]\ \stackrel{\mathrm{def}}{=}\ \scriptstyle{DTFT}^{-1} \big \{\displaystyle \sigma_H(\omega)\big \} = \begin{cases} 0, & \mbox{for }n\mbox{ even},\\ \frac2{\pi n} & \mbox{for }n\mbox{ odd}. \end{cases}$ When the convolution is performed numerically, an FIR approximation is substituted for h[n], as shown in Figure 1, and we see rolloff of the passband at the low and high ends (0 and Nyquist), resulting in a bandpass filter.
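As a concrete illustration of that FIR approximation, here is a minimal numpy sketch. It is not the article's (or MATLAB's) implementation; the filter length, the window, and the test tone are arbitrary choices.

```python
# Illustrative FIR sketch (not from the article) of the approximation just described:
# truncate h[n] = 2/(pi*n) for odd n (0 for even n), apply a window, and filter a tone.
import numpy as np

M = 101                                            # odd FIR length (assumed)
n = np.arange(-(M // 2), M // 2 + 1)
h = np.where(n % 2 != 0, 2.0 / (np.pi * np.where(n == 0, 1, n)), 0.0)
h *= np.hamming(M)                                 # taper to reduce truncation ripple

k = np.arange(2048)
u = np.cos(0.2 * np.pi * k)                        # tone well inside the passband
Hu = np.convolve(u, h, mode="same")                # approximates H(u) = sin(0.2*pi*k)

err = np.max(np.abs(Hu[200:-200] - np.sin(0.2 * np.pi * k[200:-200])))
print(err)   # small for a mid-band tone; the error grows near DC and Nyquist, as described above
```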
The high end can be restored, as shown in Figure 2, by an FIR that more closely resembles samples of the smooth, continuous-time h(t). But as a practical matter, a properly-sampled u[n] sequence has no useful components at those frequencies. As the impulse response gets longer, the low end frequencies are also restored.[3] With an FIR approximation to h[n], a method called overlap-save is an efficient way to perform the convolution on a long u[n] sequence. Sometimes the array FFT{h[n]} is replaced by corresponding samples of σH(ω). That has the effect of convolving with the periodic summation[4]: $h_N[n]\ \stackrel{\text{def}}{=}\ \sum_{m=-\infty}^{\infty} h[n-mN].$ Figure 3 compares a half-cycle of hN[n] with an equivalent length portion of h[n]. The difference between them and the fact that they are not shorter than the segment length (N) are sources of distortion that are managed (reduced) by increasing the segment length and overlap parameters. The popular MATLAB function, hilbert(u,N), returns an approximate discrete Hilbert transform of u[n] in the imaginary part of the complex output sequence. The real part is the original input sequence, so that the complex output is an analytic representation of u[n]. Similar to the discussion above, hilbert(u, N) only uses samples of the sgn(ω) distribution and therefore convolves with hN[n]. Distortion can be managed by choosing N larger than the actual u[n] sequence and discarding an appropriate number of output samples. An example of this type of distortion is shown in Figure 4. ## Notes 1. See: 2. Hilbert studied the discrete transform : $\frac{1}{n} * u[n]=\sum_{m=-\infty}^\infty \frac{u(m)}{n-m}\qquad m\neq n,$ and showed that for u(n) in ℓ2 the sequence H(u)[n] is also in ℓ2 (see Hilbert's inequality). An elementary proof of this fact can be found in (Grafakos 1994). This transform was used by E. C. Titchmarsh to give alternate proofs of the results of M. Riesz in the continuous case (Titchmarsh 1926; Hardy, Littlewood & Polya 1952, ¶314), but it is not used for pragmatic signal processing. ## References • Bargmann, V. (1947), "Irreducible unitary representations of the Lorentz group", Ann. of Math. 48: 568–640 • Bedrosian, E. (December 1962), "A Product Theorem for Hilbert Transforms", Rand Corporation Memorandum (RM-3439-PR) • Benedetto, John J. (1996). Harmonic analysis and applications. Boca Raton, FL: CRC Press. ISBN 0849378796. • Bitsadze, A.V. (2001), "Boundary value problems of analytic function theory", in Hazewinkel, Michiel, , Springer, ISBN 978-1-55608-010-4 . • Bracewell, R. (1986), The Fourier Transform and Its Applications (2nd ed.), McGraw-Hill, ISBN 0-07-116043-4 . • Calderón, A.P.; Zygmund, A. (1952), "On the existence of certain singular integrals", Acta Mathematica 88 (1): 85–139, doi:10.1007/BF02392130 . • Carlson, Crilly, and Rutledge (2002), Communication Systems (4th ed.), ISBN 0-07-011127-8 . • Duoandikoetxea, J. (2000), Fourier Analysis, American Mathematical Society, ISBN 0-8218-2172-5 . • Duistermaat, J.J. (2010), Distributions, Birkhäuser, doi:10.1007/978-0-8176-4675-2, ISBN 978-0-8176-4672-1 . • Duren, P. (1970), Theory of $H^p$-Spaces, New York: Academic Press . • Fefferman, C. (1971), "Characterizations of bounded mean oscillation", Bull. Amer. Math. Soc. 77 (4): 587–588, doi:10.1090/S0002-9904-1971-12763-5, MR 0280994 . • Fefferman, C.; Stein, E.M. (1972), "Hp spaces of several variables", Acta Math. 129: 137–193, doi:10.1007/BF02392215, MR 0447953 . • Gel'fand, I.M.; Shilov, G.E. 
(1967), Generalized Functions, Vol. 2, Academic Press . • Grafakos, Loukas (1994), "An Elementary Proof of the Square Summability of the Discrete Hilbert Transform", American Mathematical Monthly (Mathematical Association of America) 101 (5): 456–458, doi:10.2307/2974910, JSTOR 2974910 . • Grafakos, Loukas (2004), Classical and Modern Fourier Analysis, Pearson Education, Inc., pp. 253–257, ISBN 0-13-035399-X . • Hardy, G. H.; Littlewood, J. E.; Polya, G. (1952), Inequalities, Cambridge: Cambridge University Press, ISBN 0-521-35880-9 . • Hilbert, David (1953), Grundzüge einer allgemeinen Theorie der linearen Integralgleichungen, Chelsea Pub. Co. • Khvedelidze, B.V. (2001), "Hilbert transform", in Hazewinkel, Michiel, , Springer, ISBN 978-1-55608-010-4 . • King, Frederick W. (2009), Hilbert Transforms 2, Cambridge: Cambridge University Press, p. 453, ISBN 978-0-521-51720-1 . • Kress, Rainer (1989), Linear Integral Equations, New York: Springer-Verlag, p. 91, ISBN 3-540-50616-0 . • Lang, Serge (1985), SL(2,R), Graduate Texts in Mathematics 105, Springer-Verlag, ISBN 0-387-96198-4 • Pandey, J.N. (1996), The Hilbert transform of Schwartz distributions and applications, Wiley-Interscience, ISBN 0-471-03373-1 • Pichorides, S. (1972), "On the best value of the constants in the theorems of Riesz, Zygmund, and Kolmogorov", Studia Mathematica 44: 165–179 • Riesz, Marcel (1928), "Sur les fonctions conjuguées", Mathematische Zeitschrift 27 (1): 218–244, doi:10.1007/BF01171098 • Rosenblum, Marvin; Rovnyak, James (1997), Hardy classes and operator theory, Dover, ISBN 0-486-69536-0 • Schwartz, Laurent (1950), Théorie des distributions, Paris: Hermann . • Schreier, P.; Scharf, L. (2010), Statistical signal processing of complex-valued data: the theory of improper and noncircular signals, Cambridge University Press • Stein, Elias (1970), Singular integrals and differentiability properties of functions, Princeton University Press, ISBN 0-691-08079-8 . • Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, ISBN 0-691-08078-X . • Sugiura, Mitsuo (1990), Unitary Representations and Harmonic Analysis: An Introduction, North-Holland Mathematical Library 44 (2nd ed.), Elsevier, ISBN 0444885935 • Titchmarsh, E (1926), "Reciprocal formulae involving series and integrals", Mathematische Zeitschrift 25 (1): 321–347, doi:10.1007/BF01283842 . • Titchmarsh, E (1948), Introduction to the theory of Fourier integrals (2nd ed.), Oxford University: Clarendon Press (published 1986), ISBN 978-0-8284-0324-5 . • Zygmund, Antoni (1968), Trigonometric series (2nd ed.), Cambridge University Press (published 1988), ISBN 978-0-521-35885-9 .
http://mathoverflow.net/questions/67743?sort=newest
## Maximal Abelian Subgroups of p-groups

A non-abelian group of order $p^n$ ($n\geq 4$) always has a normal abelian subgroup of order $p^3$, and this theorem is useful in the enumeration/classification of groups of order $p^4$. So abelian normal subgroups of $p$-groups are useful in the classification problem. Alperin, in his paper "Large Abelian Subgroups of $p$-groups", stated a result of Burnside, namely "a group of order $p^n$ has normal abelian subgroups of order $p^m$ with $n\leq m(m-1)/2$".

Question: For a (non-abelian) group $G$ of order $2^5$, by the result of Burnside, there will be normal abelian subgroups of order $p^m$ with $5\leq m(m-1)/2$, which means $m\geq 4$. So the conclusion is that $G$ always has a normal abelian subgroup of order $2^4$. But if we check the list of groups of order $2^5$, then there are some non-abelian groups in which the maximum order of an abelian (normal) subgroup is $2^3$. Can one explain what is going wrong here? (I am confused by this theorem.) Do all maximal abelian subgroups of a non-abelian finite $p$-group have the same order? Also, please suggest some references for results on maximal abelian subgroups of $p$-groups. -

A very similar question can be found at [math.stackexchange.com](math.stackexchange.com/questions/44275/…). The [answer](math.stackexchange.com/questions/44275/…) by Derek Holt to the first question is quite good. It might be better if you restrict yourself to asking only one question per question (I see 3 questions here). – Someone Jun 14 2011 at 8:34
@Someone: I went through some papers of Alperin and Burnside, but still I am not satisfied. I didn't get enough material. If someone gives some direction for these questions, then it's fine. – unknown (google) Jun 14 2011 at 8:38
1 This MO question is also relevant: mathoverflow.net/questions/57104 . – Emil Jeřábek Jun 14 2011 at 10:58

## 1 Answer

As I explained in my answer on maths.stackexchange, what Alperin wrote is clearly wrong. He has misquoted what Burnside proved, which was that a group of order $p^n$ with centre of order $p^c$ contains a normal abelian subgroup of order $p^m$ for some $m$ with $n\leq m+(m-c)(m+c-1)/2$. Burnside cites a related result of Miller that there is a normal abelian subgroup of order $p^m$, for any $m$ with $n>m(m-1)/2$. What is it that you are still confused about?

The answer to your second question is no. For example a dihedral group of order 16 has a maximal cyclic subgroup of order 8, but it also has subgroups of order 4 isomorphic to $C_2^2$, which are maximal subject to being abelian. -
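The counterexample in the last paragraph can be checked by brute force. Here is a short sketch in plain Python (no group-theory library assumed; encoding the dihedral group of order 16 as pairs $(i,s)$ standing for $r^i f^s$ is just a convenient choice): it lists the orders of its maximal abelian subgroups and prints [4, 8], confirming that they need not all have the same order.

```python
from itertools import product

# Dihedral group of order 16: elements r^i f^s stored as (i, s), i mod 8, s in {0, 1},
# with relations r^8 = f^2 = 1 and f r f = r^{-1}.
def mul(a, b):
    i, s = a
    j, t = b
    return ((i + (j if s == 0 else -j)) % 8, (s + t) % 2)

elements = [(i, s) for i in range(8) for s in range(2)]

def generated(gens):
    """Subgroup generated by gens, computed as the closure under multiplication."""
    H = {(0, 0)} | set(gens)
    while True:
        new = {mul(a, b) for a in H for b in H} - H
        if not new:
            return frozenset(H)
        H |= new

def is_abelian(H):
    return all(mul(a, b) == mul(b, a) for a in H for b in H)

# Subgroups generated by at most two elements; every abelian subgroup of a dihedral
# group is cyclic or Klein four, so two generators suffice here.
subgroups = {generated(g) for g in product(elements, repeat=2)}
abelian = [H for H in subgroups if is_abelian(H)]
maximal = [H for H in abelian if not any(H < K for K in abelian)]
print(sorted({len(H) for H in maximal}))   # expected output: [4, 8]
```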
http://mathoverflow.net/questions/20622?sort=newest
## Construction of the petit Zariski topos out of the gros topos of a scheme ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let S be a scheme. Let (Sch/S) be a small category of schemes over S (including essentially all finitely presented schemes affine over S). Let E = (Sch/S)zar denote the gros Zariski topos with its local ring object A1. Is there a nice way to construct the petit Zariski topos X = Szar out of the locally ringed topos E? (By nice I mean, for example, that there is a universal property that the locally ringed topos X possesses with respect to E.) There are variations of this question in which I am also interested: For example, one can replace E by the gros étale (or fppf or fpqc) topos (Sch/S)ét and ask for the construction of Szar out of (Sch/S)ét. Or one can replace X by the petit étale (or fppf or fpqc) topos Sét and ask for the construction of it out of E = (Sch/S)zar. - 2 There was quite a long discussion on this topic over at the n-Category Cafe, here: golem.ph.utexas.edu/category/2009/01/… Have you seen it? – Manny Reyes Apr 16 2010 at 22:54 @Manny: Thanks for pointing out that discussion to me. Denis-Charles Cisinski's comments there were quite helpful to me. Furthermore, the discussion contains a link to a paper by Mathieu Anel, "Grothendieck topologies from unique factorisation systems" where my question is completely answered in the case that my scheme S is affine, i.e. Spec A: E classifies local rings in topoi while X classifies local rings that are localisations of A, so X is the subtopos of those objects Y of E such that A^1 restricted to Y is a localisation of A. This should generalise easily to general schemes S. – Marc Nieper-Wißkirchen Nov 19 2010 at 11:20 ## 1 Answer I will deal with étale toposes because they behave much better in every possible way. They are also easier to define, althought they require substantially more commutative algebra to work with in practice: The gros étale topos for $S$ is just $((Shv(Aff_{\acute Et})\downarrow S).$ We can construct from it the petit étale topos by considering $Shv(\acute Et \downarrow S)$, where $(\acute Et \downarrow S)$ is the subcategory of the gros étale topos consisting of étale morphisms $A\to S$ where $A$ is affine. This site is equipped with the induced topology. Now for the ring object. For the petit topos, we let $\mathcal{O}_S$ be defined simply the sheaf sending any affine scheme to its corresponding ring (exercise: Show that this is a sheaf). This defines a ring object in the category of sheaves on the small site (exercise: Prove this. (Hint: Think of the definition of a group object and recall that the Yoneda embedding is full.)). For the large topos, we just let it be the base change of the affine line. It's not hard to show that they agree on étale morphisms $A\to S$ for A affine. It turns out that the gros and petit toposes have a geometric morphism induced by the inclusion of the small site into the large site. I don't know if there is a specific universal property, per se, but it turns out that they are "homotopy equivalent" in a suitable sense. For an explanation of the homotopy condition, see Mac Lane and Moerdijk - Sheaves in Geometry and Logic Chapter 7. Edit: If I remember correctly, the statement about "homotopy equivalence" does not work in the fppf or fpqc topologies. The small flat sites are too small, in some sense. - Harry, thanks for your answer. However, it is not the answer I was looking for. 
It is clear that one can construct the petit (étale) topos out of the gros (étale) topos and that the ring object can be constructed the way you described it. But the question was whether this construction can be viewed as a natural one or whether the geometric morphisms between the gros and the petit toposes fulfill some universal properties. (For example, if I am not mistaken, constructing the petit étale topos out of the petit Zariski topos has a nice description; see M. Hakim's thesis.) Marc – Marc Nieper-Wißkirchen Apr 13 2010 at 9:11 Where in M. Hakim's thesis? – Harry Gindi Apr 13 2010 at 10:13 See for example in section IV.5 of her thesis. A good account on what it says about the étale spectrum can be found here: springerlink.com/content/10x9002103602132/… – Marc Nieper-Wißkirchen Apr 16 2010 at 15:24 I was able to find her thesis in one of the usual places =). – Harry Gindi Apr 16 2010 at 15:37
http://physics.stackexchange.com/questions/38560/low-energy-effective-action-but-in-what-sense?answertab=oldest
# “low-energy effective action” but in what sense? In string theory, consistency with Weyl invariance imposes dynamics on the background fields through the vanishing of the beta functions. Those dynamics can also be derived from the so-called low energy effective action: $$S = \frac{1}{2\kappa_0^2}\int d^{26} X\; \sqrt{-G}\; \mathrm{e}^{-2\Phi}\,(R-\frac{1}{12}H_{\mu\nu\lambda}H^{\mu\nu\lambda}+4 \partial_{\mu}\Phi\partial^{\mu}\Phi)$$ (at least in bosonic string theory) Maybe I shouldn't worry over lexical denomination, but I find this naming of "low-energy" a bit obscure. In what sense is it used? Would it be because the background fields are supposed to emerge from the fundamental strings? Or because we forget the massive excited states of the string (with masses around the Planck scale and irrelevant for low energy phenomenology)? - ## 1 Answer It's a standard terminology – and set of insights – not only in string theory but in quantum field theories or anything that can be approximated by (other) quantum field theories at... low energies. Such a low-energy action becomes very accurate for the calculation of interaction of particles (quanta of the fields) of low energies, in this case $E\ll m_{\rm string}$. Equivalently, the frequencies of the quanta must be much smaller than the characteristic frequency of string theory. The previous sentence may also be applied in the classical theory: the low-energy effective action becomes accurate for calculations of interactions of waves whose frequency is much lower than the stringy frequency or, equivalently, whose wavelength is much longer than the string scale, $\lambda\gg l_{\rm string}$. Low-energy effective actions may completely neglect particles whose mass is (equal to or) higher than the characteristic energy scale, in this case $m_{\rm string}$, because such heavy particles can't be produced by the scattering of low-energy particles at all – so they may be consistently removed from the spectrum in this approximation. The scattering of the light and massless particles that are kept may be approximately calculated from the low-energy effective action and this approximation only creates errors that are proportional to positive powers of $(E/m_{\rm string})$ so these errors may be ignored for $E\ll m_{\rm string}$. You may imagine that there are corrections in the action proportional to $\alpha'$ or its higher powers that would make the effective action more accurate at higher energies but become negligible for low-energy processes. There are lots of insights – conceptual ones as well as calculations – surrounding similar approximations and they're a part of the "renormalization group" pioneered mainly by Ken Wilson in the 1970s. In particular, by "low-energy effective actions", we usually mean the Wilsonian effective actions. But they're pretty much interchangeable concepts to the 1PI (one-particle-irreducible) effective actions, up to a different treatment of massless particles. It is impossible to teach everything about the renormalization group and effective theories in a single Stack Exchange answer. This is a topic for numerous chapters of quantum field theory textbooks – and for whole graduate courses. So I just conclude with a sentence relevant for your stringy example: string theory may be approximated by quantum field theories for all processes in which only particles much lighter than the string mass are participating and in which they have energies much smaller than the string scale, too. 
If that's the case, predictions of string theory for the amplitudes are equal to the predictions of a quantum field theory, the low-energy effective field theory, up to corrections proportional to powers $(E/E_{\rm string})$. - Thanks for the time you put in this detailed answer Lubos. I am familiar with the concept of renormalization group and effective field theory. My interrogation was about arguments specific to the derivation of this effective action, or differences between in an "effective field theory"-description in string theory in comparison to quantum field theory. But you answered that. – Just_a_wannabe Sep 28 '12 at 14:07
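To see the pattern of corrections suppressed by powers of $E/m_{\rm string}$ in a much simpler toy setting (not string theory; the generic heavy mediator of mass $M$ below is just a stand-in), one can expand a heavy propagator for momenta $q \ll M$. A short sympy sketch:

```python
import sympy as sp

q, M = sp.symbols('q M', positive=True)
propagator = 1 / (q**2 + M**2)                 # toy heavy-mediator propagator
expansion = sp.series(propagator, q, 0, 8)     # expand for q << M
print(sp.simplify(expansion.removeO() * M**2))
# 1 - q**2/M**2 + q**4/M**4 - q**6/M**6 : each extra term carries another factor (q/M)**2
```

Each successive term is suppressed by a further factor of $(q/M)^2$; this is the sense in which the leading low-energy action is accurate and the higher corrections (the analogues of the $\alpha'$ terms mentioned above) may be neglected for $E\ll M$.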
http://crypto.stackexchange.com/questions/3774/what-is-a-rewinding-argument/3804
# What is a “rewinding argument”? I've been reading a bit about cryptographic protocols and I keep seeing the phrase "rewinding argument". I've been unable to find a good source that would explain what is meant by this. It seems like proofs that use this technique cause trouble against active adversaries? I would appreciate if someone would explain what a rewinding argument is and why this is the case. - 2 Do you have some examples where this phrase is used? – Paŭlo Ebermann♦ Sep 11 '12 at 18:08 – Jaska Sep 11 '12 at 18:42 2 – PulpSpy Sep 11 '12 at 19:24 ## 2 Answers Rewinding is used in all sorts of interactive protocols, but it's perhaps easiest to understand it for a zero-knowledge property. In proving zero-knowledge, we consider a cheating verifier interacting with an honest prover. The prover knows something that the verifier doesn't (say, the factorization of an RSA modulus), and we worry that by cheating, the verifier could gain some information. The zero-knowledge property means that "whatever the verifier outputs from this interaction, he could have generated without interacting at all". If that's true, then the interaction must not have conveyed any meaningful "knowledge" to the verifier. So how do we prove the statement in quotes? We say: first suppose you have a cheating verifier $V$. When $V$ talks to an honest prover, it outputs (a distribution of) some transcript $t$. We have to show how to sample the same (or very close) distribution of $t$, without talking to any honest prover. It's not likely that we can analyze the code of $V$ to "figure out what it's doing." Instead, we have to treat $V$ as a kind of black-box. Recall that $V$ is designed to operate in an interactive fashion, so we have to feed protocol messages into $V$, pretending to be the honest prover. We might feed into $V$ a simulated "message 1" from the prover, and then later a simulated "message 2". Then, after seeing how $V$ responded, we might go back to a previous internal state of $V$ and feed in a different simulated "message 2" -- that's rewinding. We can rewind and invoke $V$ many different times, as long as we are careful to spend only polynomial time overall (assuming $V$ itself is polynomial-time). BTW, there are some security frameworks (e.g., Universal Composability) which do not allow rewinding. - The short version is: a "rewinding argument" is a proof technique used to demonstrate the security of a zero-knowledge proof (i.e., to show that an interactive protocol is zero-knowledge). Rewinding arguments can be used to show soundness, or to show that the zero-knowledge property is met. For more details, see PulpSpy's answer to another question (as PulpSpy suggests). Or, read any introductory reference on zero-knowledge proofs. Rewinding is a fundamental technique for proving the security of zero-knowledge proofs, so it should be covered in any good introduction to the subject. - It's also used to show soundness (for arguments and/or knowledge extraction). $\hspace{1.1 in}$ – Ricky Demer Sep 15 '12 at 7:27 Thanks, @RickyDemer. You have a good point: rewinding arguments can be used in showing soundness of other interactive arguments / proofs of knowledge (it is not limited to just zero-knowledge proofs). In my defense, I did mention in my answer that "rewinding arguments can be used to show soundness", but I appreciate your reminder that this applies more broadly than zero-knowledge proofs. That's a great point. – D.W. Sep 15 '12 at 7:56 Oh, yeah, I somehow missed that you mentioned soundness. 
$\;$ – Ricky Demer Sep 15 '12 at 9:51
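To make rewinding concrete, here is a toy sketch of the use mentioned in the last comments, knowledge extraction, for the Schnorr identification protocol (plain Python; the tiny group parameters and the device of fixing the prover's randomness by a seed are illustrative assumptions, not how a real implementation would look): the extractor runs the prover, records the commitment, then rewinds to the same internal state, answers a second challenge, and recovers the secret from the two transcripts.

```python
import random

# Toy Schnorr setup: subgroup of prime order q inside Z_p^*, generator g (tiny numbers).
p, q, g = 23, 11, 4                   # 4 has order 11 modulo 23
x = 7                                 # prover's secret
y = pow(g, x, p)                      # public key

def prover(rand_seed):
    """A prover whose behaviour is determined by its randomness, so it can be 'rewound'."""
    rng = random.Random(rand_seed)
    k = rng.randrange(1, q)           # ephemeral randomness
    commitment = pow(g, k, p)         # first message a = g^k
    def respond(challenge):
        return (k + challenge * x) % q
    return commitment, respond

def extract(rand_seed):
    # First run: obtain the commitment and answer to challenge c1.
    commitment, respond = prover(rand_seed)
    c1 = 3
    s1 = respond(c1)
    assert pow(g, s1, p) == commitment * pow(y, c1, p) % p   # transcript verifies
    # "Rewind": restart the prover with the same randomness, so the commitment repeats,
    # then feed a different challenge c2.
    commitment2, respond2 = prover(rand_seed)
    assert commitment2 == commitment
    c2 = 5
    s2 = respond2(c2)
    # From g^s1 = commitment*y^c1 and g^s2 = commitment*y^c2 we get x = (s1-s2)/(c1-c2) mod q.
    return (s1 - s2) * pow((c1 - c2) % q, -1, q) % q

print(extract(rand_seed=42))          # recovers 7, the secret, without being told it
```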
http://mathhelpforum.com/algebra/90569-license-print.html
# License

• May 26th 2009, 08:18 AM Sampras
In Florida, the last three digits of a female's license number, for birth month $m$ and birth date $b$, are represented by $40(m-1)+b+500$. For both males and females, the fourth and fifth digits from the end give the year of birth. Determine the dates of birth of the people whose numbers have last five digits $42218$ and $53953$. So the years of birth for the two people are $1918$ and $1953$. Now $40(m-1)+b+500 = 218$ and $40(m-1)+b+500 = 953$, assuming both people are female. Then just guess and check to get $m$ and $b$? What if both are male?

• May 26th 2009, 11:31 AM HallsofIvy
Quote: Originally Posted by Sampras [question quoted above]
I would interpret "the fourth and fifth digits from the end" of 42218 and 53953 as being 42 and 53 respectively, so the years of birth are 1942 and 1953, not 1918 and 1953. I notice that you then use 218 and 953, dropping the first two digits; was that "1918" a typo? The smallest that m or b can be is 1, so the smallest possible value for 40(m-1)+b+500 is 500. That "218" is impossible. Perhaps this was a male? 40(m-1)+b+500 = 40m-40+b+500 = 953, or 40m+b = 493. Again, the largest that m can be is 12 and 40(12) = 480, so it is possible that m = 12, b = 13. If we were to try m = 11, 40(11) = 440 and 493-440 = 53. The only possible answer is m = 12, b = 13. The birthday is Dec. 13, 1953.

• May 26th 2009, 11:41 AM Sampras
Quote: Originally Posted by HallsofIvy [reply quoted above]
isn't the smallest possible value $501$?
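A quick brute-force check of both numbers (plain Python; the assumption that a male's last three digits use the same formula without the +500 offset is mine and is not stated in the thread):

```python
def decode(last5):
    """Recover (year, month, day, sex guess) from the last five digits of a Florida number."""
    year = 1900 + int(last5[:2])          # fourth and fifth digits from the end
    code = int(last5[2:])                 # last three digits
    for offset, sex in ((500, "female"), (0, "male")):   # male offset 0 is an assumption
        for m in range(1, 13):
            b = code - 40 * (m - 1) - offset
            if 1 <= b <= 31:
                return year, m, b, sex
    return None

print(decode("42218"))   # -> (1942, 6, 18, 'male')     i.e. June 18, 1942
print(decode("53953"))   # -> (1953, 12, 13, 'female')  i.e. Dec. 13, 1953
```

With that assumption, 42218 decodes as June 18, 1942 for a male, and 53953 as December 13, 1953 for a female, the second of which matches HallsofIvy's answer.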
http://math.stackexchange.com/questions/111006/show-fx-x-ln-x-is-not-uniformly-continuous
Show $f(x)=x\ln x$ is not uniformly continuous Show $f:(0,\infty)\rightarrow \mathbb{R}, f(x)=x\ln x$ on $(0,\infty)$ is not uniformly continuous. I think that the general way to prove that something is not continuous in a metric space is to let $\epsilon=...$ and show that $\forall\delta>0$, $d'(f(x)-f(y))>\epsilon$. I can't use the Mean Value Theorem because we haven't gone over it yet. Here's an attempt: Let $\epsilon=1$. Without loss of generality, let $x>y>0$. $|x\ln x-y\ln y|<|(x-y)(\ln x)|\leq|x-y||\ln x|$...$\delta=\frac{\epsilon}{\ln x}$ and therefore doesn't work for all $x$? - What is the domain of the function? – azarel Feb 19 '12 at 18:38 $X=(0,\infty)$. – Emir Feb 19 '12 at 18:42 1 Answer You are not proving that $f$ is not continuous, you are proving that $f$ (which is, in fact, continuous) is not uniformly continuous. So you want to show that for some $\epsilon$ there is no $\delta$ that works for all $x$. In fact you can take $\epsilon = 1$. Hint: if $y > x > 0$, $y \ln y - x \ln x > (y - x) \ln x$. - Ah, that's right. How can I tell whether a given function is not uniformly continuous on a given metric space by inspection? If I try to prove that it is uniformly continuous and get stuck I have a hard time telling whether I'm going about it the wrong way or whether it is actually not uniformly continuous. – Emir Feb 19 '12 at 19:15 I doubt that there's a general way of doing it "by inspection". Basically you want to see if you can find points $x, y$ that are arbitrarily close ($d(x,y) < \delta$) but where $f(x)$ and $f(y)$ are not close ($d'(f(x),f(y)) \ge \epsilon$). Since a continuous function on a compact metric space is uniformly continuous, this often involves $x$ and $y$ going off "to infinity" in some sense. – Robert Israel Feb 19 '12 at 20:00
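A numerical illustration of the hint (a small Python sketch; the particular pairs $y = x + 1/\ln x$ are just one convenient choice): by the hint, $f(y)-f(x) > (y-x)\ln x = 1$ for these pairs, so the points get arbitrarily close while their images stay at least $1$ apart, which is exactly what is needed to defeat every $\delta$ for $\epsilon = 1$.

```python
import math

f = lambda x: x * math.log(x)

for k in range(1, 7):
    x = 10.0 ** k
    y = x + 1.0 / math.log(x)          # the pair gets closer and closer together
    print(f"|x-y| = {y - x:.2e},  |f(x)-f(y)| = {f(y) - f(x):.4f}")
# |x-y| shrinks toward 0 while |f(x)-f(y)| stays above 1, so no single delta works for eps = 1.
```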
http://mathoverflow.net/questions/69002?sort=newest
## On the place where $\mathrm{Hilb}_{lines}^{x}(X)$ is smooth.

Let $X\subset \mathbb{P}_{\mathbb{C}}^N$ be an irreducible, generically smooth closed subscheme and let $\mathrm{Hilb}_{lines}^{x}(X)$ denote the Hilbert scheme of lines contained in $X$ and passing through the point $x\in X$. Is it true that the set $$ \{ x\in X : \mathrm{Hilb}_{lines}^{x}(X) \mbox{ is smooth } \} $$ is constructible? Thanks in advance. -

## 1 Answer

Yes. This can be seen as follows: Let $Hilb_{lines}(X)$ denote the Hilbert scheme of lines in $X$ and let $\Gamma \subset X \times Hilb_{lines}(X)$ be the corresponding universal family. Then $Hilb_{lines}^x(X) = p^{-1}(x)$ where $p:\Gamma \to X$ is induced by the first projection. So we are reduced to the following (well-known) statement: Let $f:Y \to X$ be a proper morphism of finite-type schemes over a field. Then the set $\{x \in X | f^{-1}(x) \mbox{ is smooth } \}$ is constructible. To prove this, by replacing $X$ by $X_{red}$ and $Y$ by $Y \times_X X_{red}$ we may assume $X$ is reduced (since we only care about the fibres). By generic flatness, we may find a finite stratification of $X$ by locally closed reduced subschemes $X_i$ so that the induced morphisms $Y \times_X X_i \to X_i$ are all flat. For a flat proper morphism the locus of points in the base such that the fibres are smooth is open. It follows that the set we are interested in is a finite union of open subsets of closed subsets of $X$, so is constructible. - Thank you very much. – gio Jun 28 2011 at 17:07
http://mathoverflow.net/questions/42820/expressions-for-the-square-of-an-integral/42869
## Expressions for the Square of an Integral

Is there a way to simplify the following expression: $\lgroup{\int^A_0 x(s)ds}\rgroup ^2$ I'm looking for an expression that can possibly get rid of the squared term, so that I can have just an integral of the first order. -

1 If $I$ is the value of the integral, why can't you just take $s(x) = I^2/(Au(x))$? This is probably not the answer you were looking for, so can you be more specific? – Justin Shih Oct 19 2010 at 19:27
2 Yes, unless the OP is OK with an s depending on r and u. I agree: the question should be more precise, and maybe redirected to math.stackexchange.com – Pietro Majer Oct 19 2010 at 19:52
It's still confusing. What are $r(t)$, $s(t)$? And you minimize the cost function under what? – Hung Tran Oct 20 2010 at 1:22

## 2 Answers

I'm not sure about simplifying, but you can easily write your objective functional in Bolza form like this: $$ \begin{align} &\min_{u(t) \in \Omega(t)} \, J = z(T)^2 + \int_{0}^{T} s(t)u(t)dt \\ \text{s.t. } &\frac{dz(t)}{dt} = r(t)u(t),\quad z(0) = 0 \end{align} $$ -

For instance, the derivatives with respect to $A$ of the two expressions coincide if one chooses $s(x):=2r(x)\int_0^xr(\xi)u(\xi)d\xi$. So the two expressions coincide for all $A$, since they both vanish at $A=0$. -
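The choice of $s$ in the last answer is easy to verify for concrete $r$ and $u$ (a sympy sketch; $r(x)=x$ and $u(x)=1$ are arbitrary test functions):

```python
import sympy as sp

x, xi, A = sp.symbols('x xi A', positive=True)
r = lambda t: t          # arbitrary test choice
u = lambda t: 1          # arbitrary test choice

lhs = sp.integrate(r(xi) * u(xi), (xi, 0, A)) ** 2        # (integral_0^A r*u dxi)^2
s = 2 * r(x) * sp.integrate(r(xi) * u(xi), (xi, 0, x))    # s(x) = 2 r(x) integral_0^x r*u dxi
rhs = sp.integrate(s, (x, 0, A))                           # integral_0^A s(x) dx
print(sp.simplify(lhs - rhs))                              # 0, so the two expressions agree
```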
http://math.stackexchange.com/questions/190453/is-it-morally-right-and-pedagogically-right-to-google-answers-to-homework/190624
# Is it morally right and pedagogically right to google answers to homework? [closed] This is a soft question that I have been struggling with lately. My professor sets tough questions for homework (around 10 per week). The difficulty is such that if I attempt the questions entirely on my own, I usually get stuck for over 2 hours per question, with no guarantee of succeeding. Some of the questions are in fact theorems proved by famous mathematicians like Gauss, or results from research papers. As much as I dislike to search for answers on the internet, I am often forced to by time constraints if I even expect to complete the homework in time for submission. (I am taking 2 other modules and writing an undergraduate thesis too). My school does not have explicit rules against googling for homework, so I guess it is not a legal issue. However, it often goes against my conscience, and I wonder if this practice is counterproductive for my mathematical development. Any suggestions and experience dealing with this? - 30 Of course you should ASK THE PROFESSOR if you have doubts about what outside help is allowed or expected. Also: if stuck you can go and see the professor for help. That should be FAR MORE useful to you than getting something on the Internet. – GEdgar Sep 3 '12 at 13:46 7 In the real world, no one cares how you come up with answers to questions, so long as the answers are found soon enough to be useful, and that any outside sources are reasonably compensated and or attributed. Why systems of education fail to pass on this information is beyond me. – zzzzBov Sep 3 '12 at 18:36 7 @zzzzBov: Being able to do things by yourself is considered valuable, otherwise noone can be expected to do new things. In the Tour de France, you not only need to reahc the finish line, you are expected to do it without doping. And even I could win the Tour if I were allowed to go by car. – Hagen von Eitzen Sep 3 '12 at 18:41 8 Well, this isn't a mathematical question. It's more like a moral one. Shall I suggest it to be migrated? Meta, maybe. – jmendeth Sep 3 '12 at 18:43 3 @zzzzBov The problem is that students which google the assignments often submit dubious work, and many times understand nothing from what they submit. In real life, no matter how fast you can get the answer, if you do this your boss will definitely be not pleased. In school, the instructor can usually see the mistakes, and explain to the students, in real life, you are usually the "expert"..... The goal of the assignments is not for the student to solve the problem, it is to help the student understand the concept.In real life the goal is to solve the problem. Google cannot help in the first – N. S. Sep 3 '12 at 21:17 show 17 more comments ## closed as not constructive by Andres Caicedo, Carl Mummert, William, Austin Mohr, Michael GreineckerSep 4 '12 at 14:33 As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or specific expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, see the FAQ for guidance. ## 12 Answers Personal anecdote. In the late 1970's i was taking topology from Munkres, Topology: a first course. The professor was Joel Spencer, a wonderful teacher, who is up for an AMS Trustee position, see the current Notices. In particular, he made up his own assignments that might not be questions in the book, which takes extra care and work. 
We had gone through compactness and the more intuitive sequential compactness and limit point compactness. We did most of the proof in class, that the product of just two compact spaces was also compact. the homework was to complete the proof for compactness, and throw in proofs that the product of two sequentially compact spaces was also sequentially compact, and the product of two limit point compact spaces was also limit point compact. Two of them were easy enough, but i struggled with the limit point one for at least a couple of days. Eventually I handed in a paper saying just that "I couldn't do this one." It came back from the grader with "Excellent" written on top, because the supposed fact is false. I was mystified, I asked Prof. Spencer what was so great about it. It took years for me to understand that not being able to prove something false was exactly right. I still have the book. I see on page 182, problem 5(e) that Munkres was well aware of this, referring to Counterexamples in Topology by Steen and Seebach. Putting it together, two hours on a mathematics problem does not seem very much to me. Oh, meanwhile, I am not in favor of cheating, or asking (anonymously) for others to support cheating. - 3 +1 and pity I can't make it +5 – John Wordsworth Sep 3 '12 at 19:40 2 (+1) This is certainly an interesting twist on the Prove or disprove... style of questioning. There is an inherent bias when one is told that a certain statement is true from the outset which tends to make such questions easier and, in my opinion, less useful for training towards mathematical research. This flips that bias completely on its head. – cardinal Sep 4 '12 at 0:00 1 "Do you expect a student to spend 20 hours a week outside of class doing homework?" Yes. A typical undergraduate where I went to school takes 4 classes per term. At average of 3 hours per class that is 12 hours of class time per week. Compare to my contracted work hours, the students have at least 30 hours left in the week that they should fill with school work and related activities. – Willie Wong♦ Sep 4 '12 at 14:33 1 @WillieWong: I think Graphth means 20 hours a week for each course. – Jesse Madnick Sep 4 '12 at 15:46 1 @WillieWong Yes, that is exactly what I mean. 20 hours for one course. – Graphth Sep 4 '12 at 16:24 show 11 more comments Let me explain why I, and almost all faculty members I know, do not want students searching for homework problems online. • It destroys our ability to calibrate the course difficulty. Twenty hours of homework a week is very high for a math course; higher than I would expect from any course that was not promoted as a "boot camp" style course. Either you are falling behind the rest of the class, or other people are turning in much scantier work than you are, or everyone is googling the problems. The first two situations are obvious, and your professor should be adjusting to it. The last situation is invisible. We had an analysis course at MI last year pedagogically ruined because everyone kept solving the homework problems, so the professor kept increasing his pace, until an in class test revealed that no one was actually doing the homework themselves. • It forces us to use more obscure, and often not as good, problems. There are some fields where there are computations every student should do -- and, as a result, they are written up in books and online sources everywhere. It hurts my ability to design good problem sets if I can't put this fundamental problems on the problem set. 
Even in fields where there are not such key problems, there are often only so many ways to set up an example so that it is doable in a reasonable amount of time. If I can't use the examples which are already online, then I need to pick larger and stranger values for my parameters, which makes the problem set harder. • I do not believe that students will learn as much from reading a solution as finding it themselves; this is probably uncontroversial. Moreover, I think that hearing a solution from a classmate with whom you have been discussing the problem together is better than hearing it from a classmate who solved it separately; hearing it from a classmate is better than hearing it from a faculty member; and hearing it from a faculty member is better than reading it in a textbook or here on math.SE. I think that the more interactive and the less polished the presentation, the more you have to engage your own understanding to process and take in the answer. This is why I almost never leave full answers to questions that look like homework here; I think it is harmful. Let me quote the policy I will have for the combinatorial representation theory course I will be teaching this Fall: Homework Policy: You are welcome to consult each other provided (1) you list all people and sources who aided you, or whom you aided and (2) you write-up the solutions independently, in your own language. If you seek help from other mathematicians/math students, you should be seeking general advice, not specific solutions, and must disclose this help. I am, of course, glad to provide help! I don't intend for you to need to consult books and papers outside your notes. If you do consult such, you should be looking for better/other understanding of the definitions and concepts, not solutions to the problems. You MAY NOT post homework problems to internet fora seeking solutions. Although I know of cases where such fora are valuable, and I participate in some, I feel that they have a major tendency to be too explicit in their help. You may post questions asking for clarification and alternate perspectives on concepts and results we have covered. You should ask your professor for his or her policy, but I think that this is on the permissive side of what most math professors would write if they thought about a policy. - 2 It is less permissive than mine was. I was teaching undergraduates, and it was generally pretty obvious, even in the more advanced courses, whether they understood the solutions that they were offering. Besides, the longer I taught, the more I came to think that it was more the students’ problem than mine: they’re the ones who fail to benefit if they misuse resources. – Brian M. Scott Sep 3 '12 at 18:09 5 A telling approach I have used on more than one occasion is to include verbatim a question (or two) from the homeworks on the test. This is usually a very strong gauge on whether the work is being done and understood outside of class. – cardinal Sep 3 '12 at 18:58 Firstly, you should always appropriately reference any information you find out in this way. Secondly, I think this process can actually be helpful to your learning, provided you spend a reasonable amount of time thinking about the problem first, as you are likely to collaterally learn other things while looking for the information you want. I would also recommend talking to other people on your course (and/or the professor) about the problem before you search the net. 
Thirdly, if you don't understand what you read online, then don't hand it in as a solution. It's usually better to give whoever is reading your homework assignments an accurate idea of what you do and don't understand. As an aside, there are a number of classical theorems proved by mathematicians like Gauss that are not unreasonable to set as homework exercises. You will likely have been presented with a completely different theoretical framework to the one that existed historically, which can make these results much easier to prove than they would have been at the time. - 4 Why do you have to reference information in homework when the instructor didn't reference the exercises? – Michael Greinecker Sep 3 '12 at 13:36 5 This was advice given to me as an undergraduate which I think is worth following. For what it's worth, I think the instructor should reference the exercises as well, so you might as well lead by example! – Matt Pressland Sep 3 '12 at 13:37 1 @Michael: Because you have some integrity (in this regard)...? :-) – cardinal Sep 3 '12 at 13:38 2 @cardinal I think using something on homework is very different from taking credit for something publicly, i.e. plagiarism. One can certainly ask an instructor whether using other sources is okay. An instructor can actually design exercises to get students to read papers... – Michael Greinecker Sep 3 '12 at 13:42 2 – Joel Reyes Noche Sep 3 '12 at 13:43 show 5 more comments In my opinion, restricting study materials is counterproductive (particularly if no computer-searchable version of the course textbooks exist.) I realize that blindly copying answers is bad, but cheating on coursework has always been a problem and it is an issue that is independent of the Internet. One common complaint is that students will learn less by Googling than they will by reading the textbook. This may or may not be true, but being able to search gives the learner access to much more targeted information. The difference between needing to skim through fifty pages you already understand in hopes of finding a paragraph you didn't, and being able to immediately enrich yourself on the topic desired, is phenomenal. The thing is, the anti-Google teachers are right about one thing - you aren't going to remember how to use it practically if you don't actually use it. One answer here said that the degree of the problem became apparent when an in-class test revealed that the students, who up until then had been passing relentlessly difficult questions with ease, knew next to nothing. This is actually a really useful thing to know, because armed with that knowledge the real problem becomes apparent - the students aren't using their research, which is why it isn't 'sticking.' A great option would be to hold a brief, three- or five-question test before each class - placing numerous, smaller checkpoints along the way will teach the students how to learn the material and retain it for use far better than either cramming or Googling together a paper. I'm going to go one step further, though, and say that this also illustrates a deeper need for education to evolve. We don't live in the dark anymore - we live in an age of effulgence, where learning of any sort is a phrase away. To educate successfully, it will become necessary to embrace this by teaching more applied mathematics and asking more questions. To wit, if the course itself demands knowledge, the students will learn. 
That said, I do not at all approve of students asking for (or receiving) verbatim solutions, either online or from classmates. This is cheating no matter where it takes place. - The following excerpt from an answer JDH gave on a thread on meta might serve as a useful standard of comparison. It is much more permissive than the approach of David Speyer: My opinion is that there is nothing wrong at all with posting homework questions here, particularly interesting ones, and I find much of the negative reaction to homework-question posters to be somewhat strange, alien to my way of learning mathematics in a give-and-take exchange of mathematical ideas. Surely posting questions here and studying the answers is not much different than studying hard in the library, talking mathematics with one's colleagues at math tea or talking to one's professor, which are all excellent ways to learn mathematics. In particular, I expect that students who post questions here might learn just as much if not more from the resulting answers as from their professors---we have a number of talented mathematicians, who are very good at explaining things---and that math.SE provides a valuable service to students having unapproachable professors, having professors who do not explain well, or who have few colleagues able to help them. Furthermore, the math.SE community strongly benefits from the questions and the insightful answers that might be posted. (...)In particular, I hereby give all of my own students complete permission to post any and all their homework problems here, and indeed I encourage them to post their questions here and to study the answers well and thereby to learn some mathematics. I will be testing them on their understanding at the exam. I would also encourage all mathematics professors to adopt a policy of encouraging collaboration on homework among their students, as talking about mathematics with one's colleagues is assuredly one of the best ways to learn mathematics. Indeed, I recommend that all professors should actively encourage their students to form study groups in order to work on their homework problems together. Learning as a group, they will go very far. - Michael, do we vote on this to indicate the value of your answer, or our agreement or otherwise with JDH? – user16299 Sep 4 '12 at 11:44 @YemonChoi I guess voting about agreement. I think this question is actually a bit too soft, to fit the Q&A format. – Michael Greinecker Sep 4 '12 at 11:46 First of all, be relax and take things easier. If some problems are hard and you cannot solve them then I see no problem to ask for help as long as you want to understand and learn the tackling ways of the problems, not only to hand in a solution. The real fact is that some teachers do really poor at their classes and expect a lot from their students. I don't know how things in your case are, but if you like mathematics you may learn a lot on your own. Moreover, if you study the problems and the solutions posted on this site you'll learn a lot! - The aim of doing homework is practicing what you theoretically learned at school. When you do your homework, you gain experience in the subject. If you are having difficulties on doing your homework, that is the sign of your lack of comprehension in the subject. If every homework question is taking your two hours, then that means that you seriously are lacking theoretical background on the subject. I suggest you repeat it before rushing into homework. 
Or first try solving easier questions, and gain experience in the progress. Don't leave your homework on your difficult courses to the last night before the dead-line. If you do so, when you realize that it won't finish in time, you will most probably give up doing it, or try to make it solved on the internet. Start doing your homework a few days earlier than the dead-line. If you start earlier, you will have enough time to overcome any hindrances on your path. If you think won't be able to solve a problem, don't ask the problem itself to the other people. Ask the part of the problem in which you are stuck. If you ask the whole question to someone else, the solution won't be your original work and you won't get much benefit from that homework. If you had no choice and made your question solved to someone else, at least try to solve a similar question (e.g.; change the numbers in the original question) yourself. Remember that homework is for your own benefit. If you want to succeed, don't cheat. - I remember a class from graduate school. Microbial Physiology. There was one exam that we all had two weeks to work on and we were told it was fine to collaborate with others. The answers to the questions were not multiple choice or fill in the blank rather they were answers that required detailed explanations. The exam was also very difficult. I remember working with other grad students throughout those two weeks trying to help each other research the answers. We would gather periodically in someones lab or perhaps get together for lunch to go over what we had figured out so far. We divided up tasks so different people would look at different aspects of the problem. The end result was that we all learned a lot about the physiologic structures we were studying. I will always remember that exam as being very difficult but one of the most enjoyable and challenging to complete. If the intent of the homework is to research and discover an answer, then by all means do so by seeking out advice from others, but if the intent was to give a student practice in learning a particular subject then it should be done by that student on their own. - In the end, the real danger in googling the answer is that in the end your understanding won't be good enough. You are in school to learn and you want to make darned sure, given tuition rates today, that you are getting your money's worth. From this perspective, the goal isn't to get the right answer so much as to understand the problem. If you can't understand the work well enough to do it now, then what happens the next time? So I don't know about the moral issues. After all is it morally wrong to cheat yourself? I think the pedagogical issues are very real and you want to be careful about what you are doing. The thing your question says to me though is that you already are somewhat lost in the class in which case meeting with your professor is a very good idea and seeing what sort of help is recommended. A second thing is I would recommend getting a study buddy you can bounce thoughts and ideas off. That's a good way to develop understanding too. Personally I don't have any problem with doing methodology research for homework (I am not a professor though). But googling for how to solve a specific kind of problem is not the same thing as googling the answer. With the first at least you can hold out hope that you will understand the process better at the end. The latter, not so much. - Be your own book, create your own. You have a free mind to explore. 
Discover your potential. If you resort to googling, you will not be complete, and you will miss the fun of failure, which breeds success. - Write up your homework solutions in TeX and cite your sources. Soon the professor will want you to help with research. - 21 No, please don't use TeX :) Spend your time solving the problems, not typing up their solutions. Among dozens of beginning students I had who tried their hands at TeX, most of the time they spent on typing was completely wasted because crappy solutions were written up in crappy TeX... Only a handful of students did a good job both with solving a problem and typing it up. – t.b. Sep 3 '12 at 16:20 @t.b.: You are right, in fact we should actively discourage math beginners from using TeX, so that its usage lets us quickly discriminate math ability. – binn Sep 3 '12 at 17:32 7 While I agree that typing homework problems in (La)TeX doesn't particularly help with what the OP is asking, I wouldn't be so quick to dismiss its value. I learned to use LaTeX precisely by typing up homework solutions, and it has turned out to be an invaluable skill for other projects. Besides, it means I still have access to all those homework solutions without having to worry about losing the papers, and on a number of occasions it's been helpful to go back and read them again to remind myself of something. – David Zaslavsky Sep 3 '12 at 18:12 Yes, I also used to TeX my solutions in some courses. But only because a) my handwriting is horrible for outsiders (which is still better than one of the profs, who had invented the "universal letter" $i=\iota=1=\ell$), b) it was a fun experience to learn TeX in non-contrived exercises, c) it encourages you to write legible texts with argumentation instead of wild heaps of uncommented formulas, d) I had time enough to do so because I found all solutions by myself instead of spending fruitless hours on Google (which didn't even exist) – Hagen von Eitzen Sep 3 '12 at 18:36 1 At this point, TeX is faster than hand-writing for me, especially because I usually go through multiple drafts while writing in pen. – Potato Sep 3 '12 at 19:13 show 1 more comment You go to a university to learn something. In the first place it's your decision; no one forced you. There are many ways to obtain a degree. What matters is whether you gain what you require for the future you have planned. For example, if your plan is to become a maths professor and you chose a maths degree as a step of the journey towards your goal, then it is useless to go around and Google for the answer, because you will have a hard time when you actually become a professor, which is when your ability to work such problems out will matter. But if your goal is to become an analytical engineer then googling may not harm you as much, because you can use the time you save to learn something else that matters. It doesn't have to be reading a textbook. For example, the skill of finding the answer to a tough maths question on the Internet can come in handy in your future career, as it may improve your ability to deliver in short time spans and think out of the box. After all, what matters is not how you learn. What matters is how you use what you learnt, and how you use it to make money. Whether we like to accept it or not, we do everything, including learning, to earn money. So ideally everything, and practically at least most of the things we do, must increase our value in the targeted future job market. If you keep that end in mind then you will not get lost, ever.
- 2 What kind of point of view is this? – timur Sep 4 '12 at 15:43
http://mathhelpforum.com/differential-equations/41525-find-general-solution-differential-equation.html
# Thread: 1. ## Find the general solution for the differential equation $\frac{dy}{dx} - 2y\operatorname{cosec}x=\tan\frac{x}{2}$, for $0<x<\pi$. Having some trouble with this one - I have taken $e^{\int 2\,\operatorname{cosec}x\,dx}$, which I got to be $\tan^2\frac{x}{2}$ - then I multiplied all terms by this, and I am not sure if what I'm getting is working, so a full explanation would be really appreciated. Many thanks, Bryn 2. Originally Posted by Bryn $\frac{dy}{dx} - 2y\operatorname{cosec}x=\tan\frac{x}{2}$, for $0<x<\pi$. Having some trouble with this one - I have taken $e^{\int 2\,\operatorname{cosec}x\,dx}$, which I got to be $\tan^2\frac{x}{2}$ - then I multiplied all terms by this, and I am not sure if what I'm getting is working, so a full explanation would be really appreciated. Many thanks, Bryn Close ..... you need $e^{\int {\color{red}-}2 \, \text{cosec} \, x \, dx}$. So the integrating factor will be $\frac{1}{\tan^2 \frac{x}{2}}$.
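To make the hint concrete, here is a short continuation I am adding (not part of the original thread) showing how the corrected integrating factor finishes the problem:
$$\int -2\operatorname{cosec}x\,dx = -2\ln\left|\tan\tfrac{x}{2}\right| \;\Longrightarrow\; \mu(x)=e^{\int -2\operatorname{cosec}x\,dx}=\frac{1}{\tan^{2}\frac{x}{2}},$$
$$\frac{d}{dx}\left(\frac{y}{\tan^{2}\frac{x}{2}}\right)=\frac{\tan\frac{x}{2}}{\tan^{2}\frac{x}{2}}=\cot\tfrac{x}{2} \;\Longrightarrow\; \frac{y}{\tan^{2}\frac{x}{2}}=2\ln\left(\sin\tfrac{x}{2}\right)+C,$$
so $y=\tan^{2}\frac{x}{2}\left(2\ln\sin\frac{x}{2}+C\right)$ on $0<x<\pi$, where $\sin\frac{x}{2}>0$ so no absolute value is needed.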
http://mathhelpforum.com/discrete-math/171066-given-definition-there-one-two-rs.html
# Thread: 1. ## Given this definition, is there one or two Rs? The definition uses R for relation but I am wondering if the R near the "element of" symbol is also the "relation" notation or if it is used for "reals". Sorry if this is a dumb question but any input would be greatly appreciated! Thanks in advance! [Attached thumbnail: the definition being asked about] 2. It's all the same $R$. Remember a relation can be written as ordered pairs: $aRb\Leftrightarrow (a,b)\in R$. 3. So, the right hand side means "Point (a,b) is part of the relation R"?
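As a concrete illustration of the answer (my own example, not from the thread): take the "less than" relation on $\{1,2,3\}$, i.e. $R=\{(1,2),(1,3),(2,3)\}$. Then $1R2$ holds precisely because $(1,2)\in R$, while $2R1$ fails because $(2,1)\notin R$. So yes, the right-hand side reads "the ordered pair $(a,b)$ belongs to the relation $R$".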
http://mathoverflow.net/questions/95843?sort=votes
## Proto-Euclidean algorithm Consider the Euclidean algorithm (EA) as a way to measure the relative length $b/a$ of a shorter stick $b$ compared to a longer one $a$ by recursively determining $$q_i = \left\lfloor \frac{r_i}{r_{i+1}} \right\rfloor\qquad (*)$$ $$r_{i+2} = r_i\bmod r_{i+1}$$ with $r_0 = a$, $r_1 = b$. The relative length $b/a$ is then given by the (finite or infinite) continued fraction $$\cfrac{1}{q_0 + \cfrac{1}{q_1 + \cfrac{1}{q_2 + \cfrac{1}{\ddots }}}} =:\ [ q_0, q_1, q_2, \ldots ]^{-1}$$ A rather similar and somehow simpler algorithm is the following which I call proto-Euclidean algorithm (PEA): $$q_i = \left\lfloor \frac{r_0}{r_{i+1}} \right\rfloor$$ $$r_{i+2} = r_0\bmod r_{i+1}$$ The relative length $b/a$ is then given by the (finite or infinite) continued product $$\frac{1}{q_0}(1- \frac{1}{q_1}(1- \frac{1}{q_2}(1-\cdots))) =:\ \langle q_0, q_1, q_2, \ldots \rangle$$ [Update: The one and crucial difference between the two algorithms is the numerator in $(*)$ which represents the reference length against which the current "remainder" is measured: in EA it is adjusted in every step to the last "remainder", in PEA it is held fixed to $r_0$.] For comparison’s sake, with $a=1071$, $b=462$, the Euclidean algorithm yields $$[2, 3, 7]^{-1} = \cfrac{1}{2 + \cfrac{1}{3 + \cfrac{1}{7}}} = \frac{22}{51}$$ while the proto-Euclidean algorithm yields $$\langle2,7,25,51\rangle = \frac{1}{2}(1- \frac{1}{7}(1- \frac{1}{25}(1-\frac{1}{51}))) = \frac{22}{51}$$. Under which name is the proto-Euclidean algorithm known? Where is it investigated and compared to the Euclidean algorithm? Or is it just folklore? I am especially interested in the following questions: • How fast does PEA converge compared to EA? (Just a side note: the first approximations in the sample above are equal: $[2, 3]^{-1} = \frac{3}{7} = \langle2,7\rangle$). One advantage of EA over PEA seems to be that it takes fewer steps, and smaller numbers are involved in the course of calculation, since the numerator in $(*)$ decreases. • Is PEA significantly less efficient than EA? - If efficiency is your goal, then shouldn't you modify the Euclidean algorithm to deal with "remainders closest to zero" instead of "negative remainders"? – Gjergji Zaimi May 3 2012 at 11:34 1 @Gjergji: I have no efficiency goals and don't intend to use PEA, just hoped to understand EA even better, eventually, by comparing it to PEA. Where do you see negative remainders? PEA's remainders go as close to zero as possible, at least closer than EA's. – Hans Stricker May 3 2012 at 11:44 @Emil: Thanks for the corrections, looks much better now! – Hans Stricker May 3 2012 at 11:54 ## 3 Answers Sometimes your method is much faster. For the golden ratio $\tau=\frac{1+\sqrt5}{2},$ the Euclidean algorithm gives all quotients $1$ so $[1,1,1,1,1,1,\cdots]$. Your method gives $<1, 2, 4, 17, 19, 5777, 5779, 192900153617, 192900153619, \cdots>$ where the terms after the first appear to come in pairs $\lceil \tau^{2\cdot3^j} \rceil-1,\lceil \tau^{2\cdot3^j} \rceil+1$. So taking $b,a$ to be successive Fibonacci numbers can sometimes give a large advantage to your method. Actually a ratio of $\tau+1$ is slightly more dramatic. By my calculations $b,a=F_{53},F_{51}=86267571272, 32951280099$ gives $6$ terms $<2,4,17,19,5777,5779>$ vs $51$ terms $[2,1,1,\cdots,1,2]$.
At the other extreme, the Euclidean algorithm gives $[n-1,1,L-1]$ for $\frac{nL-1}{L}.$ It would appear that taking $L=\frac{\mathop{lcm}(1,2,\cdots,n)}{n}$ requires $n-2$ terms for your method. Hence with $n=12$ and $L=2310$ one has for $\frac{27719}{2310}$ the expansions $[11,1,2309]$ vs $<11, 12, 2519, 2771, 3079, 3464, 3959, 4619, 5543, 6929>.$ - @Aaron: I'd like to give 10 upvotes but can only give one. Do you see a chance to give (general) conditions instead of (extreme) examples? And how to relate such conditions to the algorithms? – Hans Stricker May 3 2012 at 18:09 It seems like cheating, but if the continued fraction has lots of small q values showing up then that speaks against EA being efficient. So quadratic surds may be better for your method. Search Pierce Expansion and Engel Expansion to see what is known. It is worth remarking that in your expansion the sequence of numbers grows exponentially, so in terms of the number of digits required to write it down it might not be as competitive. – Aaron Meyerowitz May 3 2012 at 18:47 @Aaron: Thanks for the hints to Pierce and Engel - I'll take a trip to Egypt, where Aeryk has already sent me. (The historical route seems to be from the Egyptians to Pythagoras to Euclid and finally to Fibonacci-Sylvester and beyond.) – Hans Stricker May 3 2012 at 18:58 Your proto-Euclidean algorithm is basically equivalent to the Greedy algorithm for finding the alternating Egyptian fraction representation of a rational. For instance, in your example, if we expand the nested parentheses we get: $$\frac{22}{51} = \frac{1}{2}(1-\frac{1}{7}(1-\frac{1}{25}(1-\frac{1}{51}))) = \frac{1}{2}-\frac{1}{14}+\frac{1}{350}-\frac{1}{17850}$$ UPDATE: This is almost the Fibonacci-Sylvester Algorithm for finding Egyptian Fractions, the difference being the alternating signs between the fractions that the proto-Euclidean algorithm creates. I'm not sure how that affects the rate of convergence and such. You could probably eliminate the sign changes by choosing the signs on the $r_i$ and/or adding/subtracting 1 from each of them. (This is what I had to do for a similar project but can't remember which turned out to give the right answer.) Some heuristics on the F-S method can be found here. UPDATE #2: Here's a more detailed explanation of the similarity. The Greedy/Fibonacci-Sylvester algorithm can be rephrased to look like a Euclidean-ish Algorithm. Here is the example above: $$51 = 3 \cdot 22 - 15$$ $$51 \cdot 3 = 11 \cdot 15 - 12$$ $$51 \cdot 3 \cdot 11 = 141\cdot 12-9$$ $$51\cdot 3 \cdot 11 \cdot 141 = 26367 \cdot 9 - 0$$ so the Greedy/F-S algorithm gives $$\frac{22}{51} = \frac{1}{3}+\frac{1}{11}+\frac{1}{141}+\frac{1}{26367}$$ So the Greedy/F-S algorithm for $a/b$ at the $n$th step is doing a modified division algorithm with $bq_1q_2q_3\cdots q_{n-1}$ as the dividend and $r_{n-1}$ as the divisor (where $q_i$ is the $i$th quotient and $r_i$ is the $i$th remainder) and the Egyptian fraction is given by $\sum 1/q_i$. I say "modified division algorithm" because instead of the usual $b=aq+r$, the $+$ is replaced by a $-$. In your PEA (I think), you just kept the plus. This is why I conjecture that the heuristics and such are the same. It seems like for every long $a$, $b$ pair in the Greedy/F-S algorithm, there should be an analogous long $a$, $b$ pair for the PEA.
I don't have anything at this time other than a gut feeling to back me up. Maybe I'll try to construct an example... - @Aeryk: Thanks. Do you have a reference where Euclid's algorithm is discussed as an improvement of the greedy algorithm? – Hans Stricker May 3 2012 at 13:34 @Aeryk's UPDATE: I don't want to change the PE algorithm: it stands as it is, one instance of a family of algorithms of which Euclid's is another - and maybe better - one. The commonality is the approach (how to commensurate different quantities), the recursion schema (with $r_i$ replaced by $r_0$), and in the end two continued expressions which are reciprocally related. – Hans Stricker May 3 2012 at 17:55 @Aeryk: In the meantime I have learned that there is no such thing as the Egyptian fraction representation; there are many, and the Greedy algorithm just delivers one. Can you tell me more about the Greedy algorithm for an alternating Egyptian fraction representation? (I guess in general there will be more than one such representation.) – Hans Stricker May 3 2012 at 23:29 Let me point out that PEA is sometimes considerably "better" and more "to the point" than Fibonacci-Sylvester (see here): By FS: $$\frac{5}{91} = \frac{1}{19} + \frac{1}{433} + \frac{1}{249553} + \frac{1}{93414800161} + \frac{1}{17452649778145716451681}$$ $$\frac{5}{121} = \frac{1}{25} + \frac{1}{757} + \frac{1}{763309} + \frac{1}{873960180913} + \frac{1}{1527612795642093418846225}$$ By PEA: $$\frac{5}{91} = \frac{1}{18} - \frac{1}{1638}$$ $$\frac{5}{121} = \frac{1}{24} - \frac{1}{2904}$$ The claim that PEA is "basically equivalent to the Greedy algorithm", which in turn is "almost the Fibonacci-Sylvester Algorithm", needs further explanation. -
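For readers who want to experiment with the comparison, here is a small Python sketch of the two recursions exactly as defined in the question (my own illustration, not part of the thread; the function names are made up):

````
from fractions import Fraction

def ea_quotients(a, b):
    # Euclidean algorithm: q_i = floor(r_i / r_{i+1}), r_{i+2} = r_i mod r_{i+1}.
    qs = []
    while b:
        qs.append(a // b)
        a, b = b, a % b
    return qs

def pea_quotients(a, b):
    # Proto-Euclidean algorithm: q_i = floor(r_0 / r_{i+1}), r_{i+2} = r_0 mod r_{i+1}.
    qs, r = [], b
    while r:
        qs.append(a // r)
        r = a % r
    return qs

def pea_value(qs):
    # Evaluate the continued product <q_0, q_1, ...> = (1/q_0)(1 - (1/q_1)(1 - ...)) exactly.
    val = Fraction(0)
    for q in reversed(qs):
        val = Fraction(1, q) * (1 - val)
    return val

# The worked example from the question:
assert ea_quotients(1071, 462) == [2, 3, 7]
assert pea_quotients(1071, 462) == [2, 7, 25, 51]
assert pea_value([2, 7, 25, 51]) == Fraction(462, 1071) == Fraction(22, 51)
````

Counting the lengths of the two quotient lists for many pairs $(a,b)$ is then a quick way to get a feel for the step-count comparison raised in the question.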
http://math.stackexchange.com/questions/tagged/distribution-theory?page=4&sort=newest&pagesize=15
# Tagged Questions Use this tag for questions about Schwartz distributions, also known as Generalised Functions. For questions about "probability distributions", use (probability-distributions). For questions about distributions as sub-bundles of a vector bundle, use (differential-geometry). 1answer 50 views ### Identify the distributional derivative with the classical derivative? I am reading Rudin's Functional Analysis and got quite confused by his proof for theorem 7.25, which he calls Sobolev's Lemma. In proving the theorem, he defines the function $F$, and calculates its ... 1answer 168 views ### Easy question on derivative in the sense of distribution I would like help proving this elementary result: Let $f\in L^{1}_{loc}(a,b)$. Let $x_0 \in (a,b)$ Let $F(x)=\int^{x}_{x_0} f$. Then $F'=f$ in the sense of distributions. i.e How do I show ... 2answers 107 views ### How to show that a function is a representation for the delta function via complex path integrals? So given is the definition: $$f(x):=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}e^{ikx}dk$$ I'm supposed to show that this is a representation of the Dirac delta "function" ($f(x) = \delta(x)$) ... 1answer 192 views ### Delta Dirac Function Show that $\delta$, the Dirac function, defined by $\left<{\delta_0,\phi}\right> = \phi(0)$, belongs to $W^{-1,p}(]-1,1[)$ and $\delta_0 \notin L^p(-1,1)$ $\forall p\geq 1$. How I will be able ... 2answers 77 views ### Why begin with distributions and then move to tempered ones? After reading several books on distribution theory, I got a strange feeling. Why do they all begin with the theory of distributions and then move on to tempered distributions? Why can't we just start ... 2answers 129 views ### Paley-Wiener type theorems for distributions? In general a theorem of Paley-Wiener type gives a relation between the decay of a function and the smoothness of its Fourier transformation, and there are plenty of them since there are many kinds of ... 1answer 96 views ### If $f$ is a bounded tempered distribution and $g \in L^1$, is then $\int_{\Bbb R^n}(f\ast\tilde\varphi)(x)\tilde g(x)\,dx$ a tempered distribution? Let $f$ be a bounded tempered distribution, that is, $f\ast\varphi \in L^\infty(\mathbb R^n)$ for every Schwartz function $\varphi$. If $g \in L^1(\mathbb R^n)$, does the following definition define ... 1answer 45 views ### Laplacian in $\Bbb R^2$ acting on a compact test-function I am trying to follow an argument in Strichartz's "A Guide to Distribution Theory and Fourier Transforms". We consider $\langle \Delta u, \rho \rangle$ where $\Delta u$ is the two dimensional ... 1answer 121 views ### What is good about homogeneous functions? Given $r>0$ and $f:\mathbb{R}^n\to \mathbb{R}$, $d_rf$ is the function defined by \begin{equation}d_rf(x_1,x_2,\dots,x_n)=f(rx_1,rx_2,\dots,rx_n)\end{equation} and is called the $r$-dilation of ... 2answers 104 views ### How to cook up test functions? Let $\Omega\subset \mathbb{R}$ be open. A test function is a $C^\infty$ function with compact support. This is a rather strong restriction, for instance, no analytic function is a test function. But ... 0answers 78 views ### What is $\overline{\partial} 1/z^2$? It is all in the title: what is $\overline{\partial} \frac{1}{z^2}$ in the sense of distributions? I remember that $\overline{\partial} \frac{1}{z}$ is a Dirac at 0, but I can't seem to find a way ... 1answer 170 views ### Fractional derivatives of the delta function $\delta (x)$ How can I define the fractional derivative of the Delta function?
I mean $D^{\alpha}= \frac{d^{\alpha}}{dx^{\alpha}}$ where $\alpha$ can be any real number, then if we define \$D^{\alpha} \delta (x) ... 1answer 95 views ### generalized functions (Distributions) elementary question I am working with Strichartz's "A Guide to Distribution Theory and Fourier Transforms" (self-study -> not a homework question). He says none of the distributions that correspond to $1/|x|$ are ... 1answer 111 views ### Confused by a proof in Rudin's Functional Analysis I am referring to a proof in Part II of Rudin's Functional Analysis. I got confused by his proof of Thm 6.26 (page 167). He says by applying (2) successively we can get inequality (4), but I do not ... 1answer 51 views ### Nice example where $D^{\alpha}\Lambda_{f}\neq\Lambda_{D^{\alpha}f}$? Let $\Omega\subset\mathbb{R}^n$ be open and $f$ be a locally integrable function. The distribution associated with $f$, $\Lambda_{f}\in D'(\Omega)$, is defined via \begin{equation} ... 0answers 23 views ### Notation Issues I am reading a paper, and have come across a notation I don't understand, it says: To the resulting sequence of orthonormal eigenfunctions we may associate a sequence of distributions ${dU_{k_i}}$ in ... 0answers 47 views ### Good references on Distribution Theory [duplicate] Possible Duplicate: Distribution theory book Two books I have been reading are Strichartz's A Guide to Distribution Theory and Fourier Transforms and PartII of Rudin's Functional Analysis . ... 0answers 83 views ### Liouville's Theorem in $\mathbb{R}^n$ Liouville's Theorem states that if a tempered distribution is harmonic, $\Delta{u}=0$, then $u$ is given by a polynomial. For the argument, we take Fourier transform of $\Delta{u}=0$ to obtain ... 2answers 174 views ### Convergence of test-functions is not induced by any metric. By $\mathcal{D}(\mathbb{R})$ we denote linear space of smooth compactly supported functions. We say that $\{\varphi_n:n\in\mathbb{N}\}\subset\mathcal{D}(\mathbb{R})$ converges to ... 1answer 102 views ### Delta function question Given the functions $$f(x)= \delta (x-a)$$ $$g(x)= \frac{1}{a} \delta \left(x- \frac{1}{a}\right)$$ for a real constant $a\gt0$, is there a relationship between $f$ and $g$? I believe that \$ ... 2answers 219 views ### Square root of compactly supported C-infinity function Given $u \in \mathcal{C}^\infty_0(\mathbb{R}^n)$, $u \geq 0$ everywhere, is $v(x) = \sqrt{u(x)}$ also in $\mathcal{C}^\infty_0$? It is clear that the only problematic points are the boundary of the ... 2answers 143 views ### Regarding the definition of Schwartz Space of functions I came across a definition of Schwartz Space where they were defined as functions $f$ such that $\mathrm{lim}_{|x|\to \infty} |x^{\alpha}D^{\beta}f(x)|=0$ for any pair of multiindices \$\alpha,\ ... 1answer 77 views ### how to compute the convolution of two measures explicitly Here is my example:u and v are the surface measures on the spheres {${x;|x|=a}$} and {${x;|x|=b}$} in $\mathbb{R}^{3}$.Then what's $u\ast v$ ? And what if in $\mathbb{R}^{n}$? 0answers 80 views ### What's the Fourier transform of these functions? The Fourier transform of $|x|^{\alpha}$. This is the Fourier transform of a homogeneous function, and there are several cases of various $\alpha$: when $a\leq -n$, it's not a temperate distribution; ... 1answer 126 views ### a question about convolution of two distributions Generally,when taking convolution of two distributions,at least one of which is supposed to be of compact support. 
But when u,$v\in S'(\mathbb{R})$ ( temperate distributions) have suports on the ... 1answer 78 views ### Some questions about distribution theorem Given an equation $P(D)u=0$, where $P$ is a polynomial (not equal to a constant). Here are some basic information about the distributional solution $u$: If $P$ has at least one real root, then there ... 1answer 367 views ### Proving the mean value property of harmonic functions using distributions? A professor I talked to showed me a proof of the mean value property. (He actually showed it for functions solving the heat equation instead of Laplace's equation, but it seems like the argument is ... 1answer 179 views ### The distribution $\Delta u$ (where $u = \ln|\vec{x}|$) Problem Consider the function $u(\vec{x})=\ln|\vec{x}|$ as a distribution on $\mathbb{R}^3$ and $\mathbb{R}^2$. We want to determine $\Delta u$ in the distribution sense. First calculate $\Delta u$ ... 1answer 161 views ### What is the sum of only half the exponential terms that give the Dirac comb? The following infinite sum of exponential terms gives a Dirac comb: $$\sum_{n=-\infty}^\infty e^{i n x} = 2 \pi \sum_{n=-\infty}^\infty \delta(x - 2 \pi n)$$ Of course the sum doesn't strictly ... 1answer 231 views ### Normalization parameter, properties of Dirac delta functions Suppose $\psi_E (x)=N(E)\exp (ikx)$ where $\psi_E (x)$ is a momentum eigenfunction, $N(E)$ is the normalization constant on the energy scale such that \$\langle E'|E\rangle=\int_{-\infty}^\infty ... 0answers 106 views ### Integration methods for functions with Delta distributions Which Monte-Carlo methods are available for computing a multidimensional integral with Delta distributions (in case one cannot sample them explicitly)? PS: I also asked a similar question at ... 0answers 73 views ### Colombeau product of distributions How can use the Colombeau generalized function method to evaluate the product of distributions $\delta (x) \times \delta (x)$ or $\delta ^{n} (x) \times \delta ^{m} (x)$ (derivatives of dirac ... 0answers 49 views ### Why we have “$\Delta \Gamma = \delta$”, in $\mathbb{R}^{n}$, where $\Gamma$ is fundamental solution and $\delta$ is the dirac measure? Why we have "$\Delta \Gamma = \delta$", in $\mathbb{R}^{n}$, where $\Gamma$ is fundamental solution and $\delta$ is the dirac measure? 2answers 241 views ### limit of an integral with a Lorentzian function We want to calculate the $\lim_{\epsilon \to 0} \int_{-\infty}^{\infty} \frac{f(x)}{x^2 + \epsilon^2} dx$ for a function $f(x)$ such that $f(0)=0$. We are physicist, so the function $f(x)$ is smooth ... 1answer 45 views ### Uniform convergence and convergence in $S'(\mathbb{R}^n)$ Let $$\hat{f_\epsilon}: \xi \mapsto \exp(-\epsilon |\xi|) \frac{\sin(|\xi|t)}{|\xi| t}$$ denote to the Fourier transform of $f$. How do I see $\hat{f_\epsilon}$ converges uniformly on ... 1answer 117 views ### Approximating an integral Might be simple, but i don't get it. Why is the integral in the last line approximately equal to $n(\varphi(\frac{-1}{2n}) - \varphi(\frac{1}{2n}))$? 1answer 231 views ### Application of closed graph theorem. I'm having a problem applying the closed graph theorem, which I think stems from distributions still being very new to me. I am reading a proof in Stein and Weiss, Introduction to Fourier Analysis ... 0answers 54 views ### Why $\partial_{i}(A^{*}u)=A^{*}(\sum^{n}_{j=1}a_{ji}\partial_{j} u)$? 
We define the affine transformation on distributions by $$\langle A^{*}u, \phi \rangle=\frac{1}{\det(A)}\langle u,\phi(A^{-1}x)\rangle$$ Assume this we should have \langle \partial_{i}(A^{*}u), ... 0answers 47 views ### wavefront set of a distribution If $(x_0,\xi_0)\in\mathbb{R}^{2n}$ is a given point in phase space, how do I construct a compactly supported distribution $u$ which has WF$(u)=\{(x_0,t\xi_0) | t>0\}$ ? 1answer 334 views ### Fourier transformation of sin, cos, sinh and cosh I am trying to solve the following exercise Use $\mathcal{F}(e^{xb}) = 2\pi \delta_{ib}$ to calculate the Fourier-Transformation of $\sin x$, $\cos x$, $\sinh x$ and $\cosh x$ Now I am a little ... 0answers 119 views ### Poisson equation on half-space Let $H$ be the open half-space of $\Bbb R^n$ defined by $x_n > 0$. Let $f : \overline H \to \Bbb R$ continuous and harmonic on $H$. Define the function $F : \Bbb R^n \to \Bbb R$ by ... 1answer 78 views ### How to use the Malgrange-Ehrenpreis-Theorem In the Wikipedia article of this theorem http://en.wikipedia.org/wiki/Malgrange%E2%80%93Ehrenpreis_theorem it is said that i could be used to prove that $P(\partial/\partial x_i)u(x)=f(x)$ has a ... 1answer 134 views ### Questions concerning a proof that $\mathcal{D}$ is dense in $\mathcal{S}$. I am currently working through this lecture notes and on page 164, there it is said The space of $\mathcal{D}(\mathbb{R}^n)$ of smooth complex-valued functions with compact support is contained ... 0answers 68 views ### Why does the following define a distribution and of which order? I want to show that $$\phi\mapsto\underset{\varepsilon\searrow 0}{lim}\int_{-\infty}^{\infty}\frac{\phi(x)}{x+i\varepsilon}dx$$ defines a distribution on $\mathcal{D}(\mathbb{R})$ but I just don't ... 1answer 239 views ### Does zero distributional derivative imply constant function? If a real function $f\colon[a,b]\to\mathbb{R}$ is differentiable and its derivative $f'$ is zero, then $f$ is constant. Does this result still hold when $f$ has a weak derivative? Explicitly, suppose ... 1answer 45 views ### whats the order of a distributional derivate? I have to calculate the derivatives of order $\le 2$ of for example $f(x) = |x|$, is it the same as the second derivate, what does this "of order $\le 2$" mean? calculating distributionell derivatives ... 1answer 36 views ### connection between the support and the representation of a distribution I want to show, that for $u' \in \mathcal{D}'(\mathbb{R}^n)$ supp $u$ = $\{ 0 \}$ iff there exist numbers $m \in \mathbb{N}, c_{\alpha} \in \mathbb{K}$ such that \$u = \sum_{|\alpha| \le m} c_{\alpha} ... 2answers 65 views ### why is $\frac{\phi(x)-\phi(-x)}{x}$ for smooth $\phi$ bounded at $x=0$ Why is $\frac{\phi(x)-\phi(-x)}{x}$ for smooth $\phi$ bounded at $x=0$? If i set $\phi(x) = \sqrt{|x|}$, it definitely not bounded. I saw this on page 293 of ... 2answers 215 views ### principal value as distribution, written as integral over singularity Let $C_0^\infty(\mathbb{R})$ be the set of smooth functions with compact support on the real line $\mathbb{R}.$ Then, the map \operatorname{p.\!v.}\left(\frac{1}{x}\right)\,: ... 1answer 73 views ### what means a integral exists in the distributional sense? what exactly means if an integral exists just in the distributional sense, for example the fourier-transform of $x^2 e^{-\lambda x}$ or of $H(R-|x|)$ where $R > 0$ and $H$ is the ...
http://gilkalai.wordpress.com/2010/11/20/janos-pach-guth-and-katzs-solution-of-erdos-distinct-distances-problem/?like=1&source=post_flair&_wpnonce=78d053c4d8
Gil Kalai’s blog ## János Pach: Guth and Katz’s Solution of Erdős’s Distinct Distances Problem Posted on November 20, 2010. Click here for the most recent polymath3 research thread. Erdős and Pach celebrating another November day many years ago. The Wolf disguised as Little Red Riding Hood. Pach disguised as another Pach. This post is authored by János Pach ### A Festive Day: November 19 Today is a festive day. It was on this day, November 19, 1863, that Abraham Lincoln presented his famous Gettysburg Address. Seventy-nine years later, on the same day (on his own birthday!), Georgy Zhukov, Marshal of the Soviet Union, launched Operation Uranus, turning the tide of the battle of Stalingrad and of World War II. Now, sixty-eight years later, here we stand (or sit) and experience my very first attempt to contribute to a blog, as Gil has suggested so many times during the past couple of years. But above all, this is a festive day, because earlier today Larry Guth and Nets Hawk Katz posted on arXiv (http://arxiv.org/PS_cache/arxiv/pdf/1011/1011.4105v1.pdf) an almost complete solution of Erdős’s Distinct Distances Problem. The story started with Erdős’s 1946 paper published in the American Mathematical Monthly. In this paper, he posed two general questions about the distribution of distances determined by a finite set of points in a metric space. 1. Unit Distance Problem: At most how many times can the same distance (say, distance 1) occur among a set of n points? 2. Distinct Distances Problem: What is the minimum number of distinct distances determined by a set of n points? Because of the many failed attempts to give reasonable bounds on these functions even in the plane, one had to realize that these questions are not merely “gems” in recreational mathematics. They have raised deep problems, some of which can be solved using graph theoretic and combinatorial ideas. In fact, the discovery of many important combinatorial techniques and results was motivated by their expected geometric consequences. (For more about the history of this problem, read my book with Pankaj Agarwal: Combinatorial Geometry, and for many related open problems, my book with Peter Brass and Willy Moser: Research Problems in Discrete Geometry.) Erdős conjectured that in the plane the number of unit distances determined by n points is at most $n^{1+c/\log\log n}$, for a positive constant c, but the best known upper bound, due to Spencer, Szemeredi, and Trotter, is only $O(n^{4/3})$. As for the Distinct Distances Problem, the order of magnitude of the conjectured minimum is $n/\sqrt{\log n}$, while the best lower bound was $n^{0.8641...}$, thanks to combined efforts by J. Solymosi – C.D. Toth (2001) and N.H. Katz – G. Tardos (2004). This was the situation until today! The sensational new paper of Guth and Katz presents a proof of an almost tight lower bound of the order of $n/\log n$. Let us celebrate this fantastic development! In this area of research, it is already considered a great achievement if by introducing an ingenious new idea one is able to improve a bound by a factor of $n^{\delta}$ for some positive δ. Indeed, it took more than 30 years before F. Chung (1984) slightly improved on the first nontrivial lower bound of $n^{2/3}$ for the Distinct Distances Problem, due to L. Moser (1952). It took another decade or so to further improve the exponent from 2/3, first to 3/4, and then to 4/5 (see K. Clarkson et al. (1990), F. Chung – E. Szemeredi – W. Trotter (1992), L. Szekely (1997)).
Of course, the Unit Distance Problem and the Distinct Distances Problem are closely related. The maximum number of times the same distance can occur among n points, multiplied by the minimum number of distinct distances, is at least n choose 2. Therefore, an $n^{1+\epsilon}$ upper bound on the first problem would immediately imply an $(1/2)n^{1-\epsilon}$ lower bound on the second. However, no one has managed to break the $n^{4/3}$ barrier for the first problem for more than a quarter of a century. Another common aspect of the two problems is that, according to Erdős’s conjectures, for both of them the order of magnitude of the optimum is attained for the integer grid. One has to be careful with such conjectures. It is a curious feature of this subject that it is very hard to come up with interesting nontrivial constructions. The situation is somewhat similar to Hilbert’s problem on the densest packing of spheres (also known, somewhat incorrectly, as Kepler’s conjecture, in 3 dimensions). In the absence of good constructions, most researchers conjecture that for any fixed d, the densest packing of spheres in d-dimensional space is lattice-like. Referring to the 3-dimensional space, C. A. Rogers famously remarked that “many mathematicians believe, and all physicists know” that this is the case. Following the work of T. Hales, it is now generally believed that the physicists were right. But why did they “know” that the densest packings were lattice-like? Because most molecular structures in nature that emerge under high pressure take some crystal formation. The physicists argued that if there existed more economical arrangements, “Nature” would have surely discovered them. At the moment we have no idea about Nature’s analogous behavior in higher dimensions. Similarly, we know of no satisfying physical model for simulating the problems of repeated distances that would give us a clue about Erdős’s conjectures. Nevertheless, without any help from Nature, Guth and Katz managed to show that Erdős’s conjecture on repeated distances was not far from the truth. This is a great achievement. I do not really know what blogging is about. Therefore, I simply follow Gil’s instructions. He spent the last 24 hours in Lausanne, and gave a wonderful talk: the 3rd Bernoulli Lecture at the Special Semester on Discrete and Computational Geometry (http://dcgprogram.epfl.ch/). When he asked me to write this entry, he told me to “express my feelings about the new results.” I am not quite sure what he meant. But here are two things that come to my mind. First, I have always felt the Distinct Distances Problem may have a nice solution that can be explained to a bright undergraduate. I was hoping that someone would come up with the right elegant approach. It seems that my gut feeling was correct. (I am afraid that it will take much longer to settle the Unit Distance Problem.) Finally, a few words about the “elegant approach.” György Elekes, who died 2 years ago at the age of 60, was an excellent teacher, a great mathematician, and a very nice and straightforward person. I had the privilege to take some of his classes at Eotvos University and to have many conversations with him over the years on mathematics and just about everything else. Most of Elekes’s work was related to Erdős-type problems in geometry and number theory, and most of it was brilliantly elegant.
It is hard to forget, for example, his “book proof” of a sum-product estimate based on the Szemeredi-Trotter theorem on incidences, or his beautiful construction showing that n points in the plane can determine $n^{3/2}$ unit circles. He had a “vision”: a plan for how to get an $n/\log n$ lower bound on the Distinct Distances problem. It had been sitting in the back of his mind for almost a decade, and he persistently returned to it and tried to complete the elements of the proof. He gave several talks about the subject, and left some unfinished manuscripts behind. To tell the truth, I did not believe in Elekes’s plan. It sounded too brave and too complicated at the same time. I was not convinced at all that either of the steps he wanted to break the proof into was true and only a matter of hard work and better insight. Even the first obstacles appeared to be insurmountable. Luckily, Micha Sharir was much wiser. He believed in the program, and at the request of Marton Elekes (Gyuri’s son, a great mathematician in his own right), he completed Elekes’s notes, started exploring them, and published them in a joint paper. His presentation was fascinating, but still not sufficiently convincing for me. Larry Guth and Nets Hawk Katz did not share my feeling. As Paul Erdős would say, “their brains were open.” They found some beautiful shortcuts and brought this plan to fruition. What a treat! This is a festive day, indeed. I can imagine how Erdős would react to the news: how happy he would be to pay the prize that he probably offered for such a breakthrough… János Pach An exposition of the main ideas of the proof of Guth and Katz can be found on Terry Tao’s Blog. William Gasarch wrote about it in computational complexity and gave a link to a page listing previous results, and mentioned a forthcoming book by Julia Garibaldi, Alex Iosevich, and Steven Senger due out in Jan 2011 entitled “The Erdos Distance Problem“. A beautiful post listing several applications of continuous methods to geometric problems by Nabil Mustafa can be found here on the Geomblog. This entry was posted in Combinatorics, Geometry, Guest blogger, Open problems and tagged Larry Guth, Nets Hawk Katz. ### 13 Responses to János Pach: Guth and Katz’s Solution of Erdős’s Distinct Distances Problem 1. domotorp says: Just don’t show that top photo to Oleg Pikhurko! 2. Gil Kalai says: Let me make one further comment on the method: One ingredient of Elekes’ programme which is discussed and studied in the paper of Elekes and Sharir was relating Erdos’s distance problem to certain incidence questions in three dimensions. A very effective method for studying those incidence problems is algebraic. The algebraic method plays a role in the paper by Elekes and Sharir as well as in the earlier solution by Katz and Guth to the “joints problem”. The solution of the “joints problem” relies on an idea by Zeev Dvir in Zeev’s solution of the finite field Kakeya problem. • Nets Katz says: Gil, Your comment is really on point. It was the algebraic method introduced by Dvir together with the Elekes-Sharir approach relating the Erdos problem to point-line incidences which made the problem seem possibly approachable. However, the main part of the incidence problem cannot be done by purely algebraic means, as it does not hold in finite fields.
To carry out our argument, we needed to combine the algebraic method with the more topological approach associated with the Szemeredi-Trotter theorem. The tool which allows these approaches to be combined is the polynomial ham sandwich theorem of Stone and Tukey. The polynomial ham sandwich theorem was introduced into incidence theory when Larry Guth used it in his paper on the endpoint result for Bennett-Carbery-Tao’s multilinear Kakeya theorem. However, the use he and I find for it here is a little different. In his paper, he used it to find a polynomial bisecting delta-balls, thus giving him an ability to make in a continuous problem an analogous statement to “the polynomial vanishes on all the points.” Our problem, however, was discrete. Instead of using the polynomial ham sandwich theorem to cut volumes in two, we used it to divide in two the set of points. Doing this repeatedly, we create a cell decomposition with very three dimensional properties … unless the points are all in the zero set of a polynomial. If they are, it has low degree and we can apply the algebraic method. The reason the proof could not have happened until now is that although cell theoretic methods were very well developed, it was known that not all sets of points in space had a nice cell decomposition because they might be structured as a quite two dimensional set. The alternative of having them in the zero-set of a low degree polynomial could not be appreciated until the very recent development of the algebraic method. Nets • Gil Kalai says: Many thanks, Nets, for this interesting explanation of the beautiful picture. Gil • Gil Kalai says: Dear Nets, as there are two or more ingredients of your proof with Larry that are related to Kakeya’s conjecture, an obvious question is if there are some implications or expected applications of yours new ideas to the Kakeya’s conjecture? • Nets Katz says: Gil, Thanks for asking. For many years, I had always felt that there was a deep connection between the Erdos distance problem and the Kakeya problem. For instance, my work with Tardos on the sums-entries problem, which gave the previous record was in a kind of perfect analogy to work with Tao on the sums differences problem. But I never quite understood why the two problems should be so closely related. In light of the work of Elekes and Sharir, I think it is rather clear. Really, it was a big part of their contribution to transform a problem on counting distances in the plane, to a problem about counting incidences between lines and points in space. Now as to your question, I think it is important to understand that Kakeya is not strictly an incidence problem but in the language of this paper: http://lanl.arxiv.org/abs/math/0101195 a delta-discretized problem. The most dramatic contribution of our paper is the use of the polynomial ham sandwich theorem to create a cell decomposition. Cell decompositions have a history of failing badly in delta-discretized problems. I do not really expect them to do much better now. However, the Elekes-Sharir construction is much more robust. Take two points at distance 1 and the set of “reasonable” rigid motions taking one to the other looks like a delta tube. What the work of Elekes and Sharir can do is convert delta-discretized distance problems like the Falconer conjecture into the same world as 3d Kakeya. This could take results from the 3d Kakeya problem even under somewhat heavy self-similarity conditions and possibly give quite satisfying looking results in Falconer. 
Nets • Gil Kalai says: That’s very interesting, Nets! –Gil 3. Pingback: The Guth-Katz bound on the Erdős distance problem « What’s new 4. Michael Lacey says: My warm congratulations to both Nets and Larry on a breakthrough result, one with a beautiful proof. 5. Pingback: I went to Europe for the past two and a half months « P 는 NP 6. Raitis Ozols says: Hello! I am a student from Latvia; my main interest is mathematics. Can anybody send me Erdős’s proof that n points in the plane can be placed so that at least $n^{1+c/\log\log n}$ unit distances occur? And what is the value of the constant c? Thank you! 7. Gil Kalai says: Dear Raitis, The basic idea is to take a $\sqrt n$ by $\sqrt n$ planar grid and choose the most popular distance occurring there. I don’t know what c is. 8. satbir singh malhi says: Hi Nets Hawk Katz, I want to work on your research paper “Kakeya sets in Cantor directions”. Can you help me understand some initial points of this paper? I have some questions about it; can you guide me on how and from where I can start my reading?
http://mathhelpforum.com/calculus/205055-tangent-line-constant.html
# Thread: 1. ## tangent line is constant? show that the length of the portion of any tangent line to the astroid $x^{2/3}+y^{2/3}=a^{2/3}$ cut off by the coordinate axes is constant 2. ## Re: tangent line is constant? Hey pnfuller. Can you show us what you have tried? 3. ## Re: tangent line is constant? Here are the steps I used to solve the problem: 1.) Use implicit differentiation to find $\frac{dy}{dx}$. 2.) Use the slope from step 1, and the point-slope formula to write the tangent line, using the given implicit relationship to simplify. 3.) Express the tangent line in the two-intercept form $\frac{x}{a}+\frac{y}{b}=1$. 4.) Use the distance formula to find the distance between the two intercepts, again using the given implicit relationship to simplify. 5.) You should find this distance depends only on the given constant a. 4. ## Re: tangent line is constant? how do you find the slope from dy/dx... i got $\frac{dy}{dx}=-\frac{\frac{2}{3}x^{-1/3}}{\frac{2}{3}y^{-1/3}}$ Originally Posted by MarkFL2 Here are the steps I used to solve the problem: 1.) Use implicit differentiation to find $\frac{dy}{dx}$. 2.) Use the slope from step 1, and the point-slope formula to write the tangent line, using the given implicit relationship to simplify. 3.) Express the tangent line in the two-intercept form $\frac{x}{a}+\frac{y}{b}=1$. 4.) Use the distance formula to find the distance between the two intercepts, again using the given implicit relationship to simplify. 5.) You should find this distance depends only on the given constant a. 5. ## Re: tangent line is constant? I would simplify, and write: $\frac{dy}{dx}=-\left(\frac{y}{x} \right)^{\frac{1}{3}}$ Now, use a general point, such as $(x_0,y_0)$ and write the tangent line at this point (using the point-slope formula) as: $y=-\left(\frac{y_0}{x_0} \right)^{\frac{1}{3}}(x-x_0)+y_0$ Now first, write this in the slope-intercept form, and you will find a nice simplification using the original implicit relation. Then, write the line in the two-intercept form, and compute the distance between the two intercepts. edit: Once you have the line from the point-slope formula, you could simply compute the intercepts directly from it.
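Following those steps through (a sketch I am adding for completeness, not posted in the original thread): from $\frac{2}{3}x^{-1/3}+\frac{2}{3}y^{-1/3}\frac{dy}{dx}=0$ we get $\frac{dy}{dx}=-\left(\frac{y}{x}\right)^{1/3}$, so the tangent at $(x_0,y_0)$ meets the axes at
$$x\text{-intercept: } x_0+y_0\left(\frac{x_0}{y_0}\right)^{1/3}=x_0^{1/3}\left(x_0^{2/3}+y_0^{2/3}\right)=a^{2/3}x_0^{1/3},\qquad y\text{-intercept: } a^{2/3}y_0^{1/3},$$
and the length cut off by the axes is
$$\sqrt{a^{4/3}x_0^{2/3}+a^{4/3}y_0^{2/3}}=\sqrt{a^{4/3}\cdot a^{2/3}}=a,$$
which is indeed independent of the point of tangency.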
http://math.stackexchange.com/questions/21530/if-a-product-of-relatively-prime-integers-is-an-nth-power-then-each-is-an-n?answertab=active
# If a product of relatively prime integers is an $n$th power, then each is an $n$th power Show that if $a$, $b$, and $c$ are positive integers with $\gcd(a, b) = 1$ and $ab = c^n$, then there are positive integers $d$, and $e$ such that $a = d^n$ and $b = e^n$. - 3 Have you considered the prime factorizations of $a$, $b$, and $c$? – Jon Feb 11 '11 at 15:03 1=ax+by => 1=gcd(a^n,b^n) – kira Feb 11 '11 at 15:43 @kira: Please ask questions, don't give orders. Also: please make your titles informative, not sentence fragments. – Arturo Magidin Feb 11 '11 at 16:21 @kira: The title describes your problem exactly; why did you change it back to not having mark-up and to the old phrasing? – Arturo Magidin Feb 11 '11 at 19:46 Because of this:@kira: Please ask questions, don't give orders. Also: please make your titles informative, not sentence fragments. – Arturo Magidin 3 hours ago – kira Feb 11 '11 at 20:16 show 1 more comment ## 3 Answers Of course it's trivial using unique factorization. Here's a more general proof using gcd's (or ideals) that has the benefit of giving an explicit closed form: LEMMA $\rm\ \ c|ab,\ (a,b,c)=1\ \ \Rightarrow\ \ (a,c)^n\ (b,c)^n\ =\ (c)^n$ Proof $\rm\quad (a,c)^n\ (b,c)^n\ =\ ((a,c)\:(b,c))^n\ =\ (ab,c(a,b,c))^n\ =\ (ab,c)^n\ =\ (c)^n$ This may be considered as the essence of Fermat's method of infinite descent. It generalizes to rings of algebraic integers but depends upon much deeper results in this more general context, viz. the finiteness of the class number and Dirichlet's unit theorem. For further discussion see my post here, esp. the quote by Weil. - Thanks! – kira Feb 13 '11 at 20:15 Been a while since I did any maths, so this is probably the wrong way of going about it. I'd take logs with base $a$ of $ab=c^n$, this will give: $$1 + \log_a(b) = n\log_a(c) \rightarrow 1 = n\log_a(c) - \log_a(b).$$ For the right hand side to equal 1, we need $b = e^n$ (this wouldn't necessarily be the case if $\gcd(a,b) \neq 1$): $$1 = n(\log_a(c) - \log_a(e)).$$ I'd take the inverse log from here, giving $a = \left(\frac{c}{e}\right)^n$. Simple to explain why $\frac{c}{e}$ must be a whole number, $d$. Bet that's the worst possible solution to this problem - obvious assuming the fundamental theorem of arithmetic -
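For completeness, the "trivial using unique factorization" argument mentioned at the start of the first answer can be spelled out in a few lines (a sketch I am adding, not part of the original answers): write $a=\prod p_i^{\alpha_i}$ and $b=\prod q_j^{\beta_j}$. Since $\gcd(a,b)=1$, no prime occurs in both lists, so the prime factorization of $ab=c^n$ is exactly $\prod p_i^{\alpha_i}\prod q_j^{\beta_j}$. In $c^n$ every prime exponent is a multiple of $n$, hence each $\alpha_i$ and each $\beta_j$ is a multiple of $n$, and $d=\prod p_i^{\alpha_i/n}$, $e=\prod q_j^{\beta_j/n}$ satisfy $a=d^n$ and $b=e^n$.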
http://math.stackexchange.com/questions/217655/how-random-is-my-deck-of-cards
# How random is my deck of cards? I play a few card games, and I thought it would be fun to write a card shuffling program, to see how many shuffles it takes to randomize the deck. The algorithm is: 1. Cut the deck in the middle ± a random offset. 2. While one hand is still full, place a small but random number of cards into the second hand at the front/back/both. 3. Repeat until random. The question is, how do I check for randomness? I've considered that this might be a 52-spin Ising model, and I can make some function that 'costs energy' when the cards are ordered (i.e., the ace of clubs followed by the two of clubs 'costs' more than being next to, say, the seven of hearts), but this might be overkill... Is there a mathematical test for randomness I could use here, to check the order of my cards? - 1 By the way, Donald Knuth's TAOCP has some extended material on shuffling decks and analysis thereof. Just thought you might be interested. – Kaz Oct 21 '12 at 2:05 ## 3 Answers Randomness is not a property of a deck, it is a property of a probability distribution over decks. xkcd said it best: [xkcd comic] This is not the algorithm you want. What you actually want to check is whether your algorithm, after $n$ iterations, produces a distribution over decks that is approximately uniform. Fortunately, someone has already done the work for you: Persi Diaconis showed that it takes about $7$ riffle shuffles. If you don't want to rely on that result, just run your algorithm lots of times and keep track of which decks show up depending on how many times in a row you run it. (Probably you shouldn't keep track of decks themselves but rather the first $k$ cards in them for some reasonable $k$.) - 2 To make my point about evaluating decks vs. probability distributions more explicitly, consider an algorithm which returns the same very good shuffle each time. This is not the algorithm you want even though, every time you run it, a "randomness test" would certify that its output is very "random"! For a more realistic example, consider an algorithm which returns one of $10^{10}$ very good shuffles each time. This is still not the algorithm you want even though you probably wouldn't be able to tell for awhile. – Qiaochu Yuan Oct 20 '12 at 22:16 1 There are actually $52! \approx 8 \cdot 10^{67}$ possible decks and $10^{10}$ of them is really not very many! – Qiaochu Yuan Oct 20 '12 at 22:19 Persi Diaconis is actually my inspiration, but my algorithm doesn't do a riffle shuffle. I don't know the official name of the shuffle I get the program to do. You're right, however; I want a uniform distribution of cards after x iterations of this algorithm, and I want to find x, so that between games the same runs don't appear. – Pureferret Oct 21 '12 at 7:00 If you do a few thousand or tens of thousands of runs of your algorithm, you can look at the distribution of where the first (and each other) card ends up. It should be reasonably uniform, and the chi-square test can tell you if the unevenness that you see is to be expected. I would also check how far apart cards that started out together finish. It looks like your algorithm might not split them apart soon enough. - Valid or useful or interesting notions of "random-ness" need more than basic probability notions, since a run of 100 "heads" in flipping a fair coin has the same probability as any other specific sequence of outcomes, but is arguably implausible as a "random" outcome. To my mind, the "Kolmogorov-Solomonoff-Chaitin" notion of "complexity" is the apt notion.
This is discussed wonderfully and at length in the first part of Li and Vitanyi's book on the subject. A crude approximation of the idea is that a "thing" is "random" if it admits no simpler description than itself (!). Yes, of course, this depends on the language/descriptive apparatus, but it has provable sense when suitably qualified. Given that most card games refer to discernible "patterns" (things with compressible descriptions), a "random" hand would be one lacking two-of-a-kind, and so on. A "random" distribution in a deck would, in particular, have no more pattern in it than might be "expected". The question of whether there is a notion of "too-violent-to-be-random" non-pattern-forming in a given context seems to be ambiguous: while long runs of all heads or all tails are suspicious, a lack of them is also suspicious. This kind of example suggests that a configuration of a deck of cards such that no one has a playable hand might also be suspicious... depending on the context. The operationally significant question of whether or not an innocent-seeming "mixing" can produce "randomness" with relatively few iterations is slightly different. However, from the viewpoint of "complexity", surely the answer is "no", since the hands-of-cards which arise in this way immediately admit a much simpler description than themselves. Nevertheless, or perhaps because of this observation, we can decide to declare a merely relative notion of randomness for a deck of cards, in terms of a small proper subset of "genuine" tests of "randomness/compressibility". Of course, if the only "deals" of hands of cards that were allowed were "random" in any strong sense, the probability would be very low that anyone would have a playable hand... -
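Following the chi-square suggestion in the second answer, here is a rough Python sketch (assuming SciPy is available) of how one might test whether a chosen card's final position is approximately uniform after a given number of shuffles. The `riffle_once` routine is only a stand-in for the asker's not-fully-specified algorithm, and a uniform marginal distribution of positions is a necessary but not sufficient condition for the distribution over whole decks to be uniform, as the first answer emphasizes.

````python
import random
from collections import Counter
from scipy.stats import chisquare  # one-way chi-square goodness-of-fit test

def riffle_once(deck):
    """One rough shuffle: cut near the middle, then drop small random packets
    from each half.  A stand-in for the algorithm in the question."""
    cut = len(deck) // 2 + random.randint(-5, 5)
    left, right = deck[:cut], deck[cut:]
    out = []
    while left or right:
        for pile in (left, right):
            take = min(len(pile), random.randint(1, 4))
            out.extend(pile[:take])
            del pile[:take]
    return out

def position_counts(n_shuffles, trials=20000, deck_size=52, card=0):
    """Track where one chosen card ends up after n_shuffles, over many trials."""
    counts = Counter()
    for _ in range(trials):
        deck = list(range(deck_size))
        for _ in range(n_shuffles):
            deck = riffle_once(deck)
        counts[deck.index(card)] += 1
    return [counts.get(pos, 0) for pos in range(deck_size)]

# Test the observed position counts against a uniform distribution over the 52 slots.
stat, p = chisquare(position_counts(n_shuffles=7))
print(stat, p)  # a tiny p-value after only a few shuffles signals non-uniformity
````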
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9630985856056213, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/4503/how-do-you-explain-the-volatility-smile-in-the-black-scholes-framework?answertab=active
# How do you explain the volatility smile in the Black-Scholes framework? Does anyone have an explanation for the volatility smile (and its variations) that naturally forms in the market? -

The part of this question appropriate to this site is whether the assumptions of the Black-Scholes equation lead to the solution they reach. That question has been vetted by many and the conclusion is that the analysis is correct. Whether these assumptions are correct is not a mathematical question. – Ross Millikan Nov 9 '12 at 4:09

After the 1987 crash people realized that extreme events were more likely than the lognormal distribution suggests. They developed better option models, leading out-of-the-money options to be priced more expensively to account for the greater risk. People still talk and think in terms of BS implied vol because 1) it is convenient, 2) many other models can be considered extensions of Black-Scholes, and 3) they can use the volatility surface from the market to price exotic options. – John Nov 9 '12 at 4:32

@JoeCoderGuy, yes I think you make it a lot harder than it is. Trading volatility is not a whole lot different from trading other asset classes. The crash of 1987 simply showed market participants that the IVs far away from the money (particularly downside puts) were not fairly valued (not in terms of the pricing model, because the pricing model just translates IVs -> currency denominated prices, but in terms of absolute level), but please see my answer for details. – Freddy Nov 12 '12 at 2:52

## 5 Answers

The Black-Scholes model is based on the assumption of lognormal returns of the underlying asset. There is much evidence and argument that stock market returns are not normal on a logarithmic basis, and there is no particular reason to assume a normal distribution, either. In particular, an implied volatility smile is evidence of "fat tails" in the returns expected by market participants---excess kurtosis, if you will. It's been pointed out by Taleb that with over 100 years of history, we simply don't have enough data to estimate the tails or the higher moments of the distribution of stock market returns. Edit: in R:

````
> library("moments")
> SPX <- read.csv("http://ichart.finance.yahoo.com/table.csv?s=%5EGSPC&d=10&e=10&f=2012&g=d&a=10&b=10&c=1992&ignore=.csv")
> kurtosis(diff(log(SPX$Adj.Close)))
[1] 11.36604
````

That's for the last twenty years less a day of data, and does not include the crash of 1987. For a normal distribution, we would expect the kurtosis to be 3. The excess kurtosis in this case is $11.36604-3=8.36604$, which is still quite significantly non-normal. Excess kurtosis means that a distribution has a more peaked center and fatter tails than a normal distribution, which means in the case of options a higher probability that the underlying will take on values far away from its current value come expiration time. This is why, when excess kurtosis is not taken into account, options further from the money appear to imply a higher volatility for the underlying asset. -

1 Just did what you said --- I was curious myself --- the returns are quite leptokurtic, not normal. – justin-- Nov 10 '12 at 1:05

The double-sided exponential distribution happens to be the fattest-tailed distribution that has a moment-generating function---but just barely---its mgf has vertical asymptotes, which may impair one's ability to derive an equivalent martingale distribution (of the same exponential family) suitable for pricing options as in the Black-Scholes model.
– justin-- Nov 10 '12 at 1:40

1 @JoeCoderGuy You might want to check this article out: Shalom Benaim and Peter Friz. Smile Asymptotics II: Models with Known Moment Generating Functions. Journal of Applied Probability, Vol. 45, No. 1 (Mar., 2008), pp. 16-32 – justin-- Nov 13 '12 at 5:49

I agree with some of the arguments above. One can find several explanations for the volatility smile:
• Against the BS framework's assumption, volatility is not constant and traders don't expect it to be constant
• This is a matter of option supply and demand
• The volatility smile incorporates the kurtosis seen in the underlying
One can find hints on this issue in P. Wilmott's Frequently Asked Questions in Quantitative Finance. -

Consider a more financially plausible model than Black-Scholes: one where the stock can suddenly go bankrupt due to fraud, and the volatility varies over time. Neither model is perfect, but the new one (call it SVJ) will be "less wrong". Mathematically, we no longer have the Black-Scholes SDE based on a single stochastic generator $W$ $$\frac{dS}{S} = \mu dt + \sigma dW$$ but rather an SDE with 3 generators: $W,Z$ and a jump process $J$ $$\frac{dS}{S} = \mu dt + \sigma dW - dJ \\ d\sigma^2= \kappa(\bar{\sigma}^2-\sigma^2) dt + \eta \sigma dZ$$ It is possible (though not particularly easy) to fit this more complicated, realistic model to the market. Big banks do it all the time. Any model, including both BS and SVJ, can be run "backwards", by which I mean that it can start with an option price and derive an implied parameter. If the model has $M$ parameters $p_1, p_2, \dots, p_M$ that are normally used to find a model price $V$, then we can also choose any one of the parameters, call it $p_n$, to derive from an observed price $P$ (normally by root-finding techniques). Let's say we run backwards from market prices to get implied values of $\sigma$ for both Black-Scholes and SVJ. You will observe a far flatter skew for the SVJ. This is true even if we remove either the jumps $J$ or the stochastic volatility $Z$. Here, for example, is a skew in Black-Scholes volatilities arising from pricing an array of options in a proprietary jump-diffusion model with flat (by strike) volatility of 20%. We see that a constant volatility parameter in the jump-diffusion is equivalent to a skew of Black-Scholes volatilities. Conclusion: the smile comes from the model being too strong a simplification of reality. -

1 Oh thank God you agree with me. – SRKX♦ Nov 12 '12 at 20:30

Implied volatility has very little to do with any particular pricing model, especially not much with BS. BS is a translation tool between prices and volatility, with many model deficiencies of its own. I won't get into such model assumptions because my point is an entirely different one. Even the smile/smirk is entirely unrelated to the Black-Scholes model, and I read your question as wanting to understand why IVs far from the money are higher than close-to-the-money ones. My point is that IVs are entirely supply and demand driven. Whether it's the IVs of an option that expires a year from now or tomorrow, whether it's the IV of a 90 put (equity option), a 30-delta strike (fx), or any other rates option, commodity option, what have you... Someone correctly pointed out that most of the origin of the smile/smirk can be traced back to the October 1987 shock ( http://en.wikipedia.org/wiki/Black_Monday_(1987) ). Before that, pretty much all IVs, regardless of moneyness, were priced equally.
However, after the stock market crash the market found that downside protection in particular had been priced way too cheaply. Now I guess the core of the question is why: most of it can be attributed to those who generally wrote such options, such as sell-side trading desks, but especially floor traders who made markets in such options. They suffered steep losses and found out that they were not compensated enough for writing options so deep out of the money. It has to do with the fact that the market under-estimated the probability of such extreme events happening. However, another important finding that causes many market makers to trade IVs far away from the money higher than at-the-money ones has to do with the belief that return volatility far away from current price levels tends to be a lot higher than current return volatility. For example, return volatility is expected to be higher at 20% lower market levels than current return volatility. I guess the rationale behind that is in a sense a paradox: nobody generally would need to care about an instantaneous move 10% or so lower from current levels, thus in a sense it does not matter which implied volatilities are attached to a 90 put. However, should the unexpected happen, then the expectation is for sharply higher return volatility 10% below current levels. Keep in mind the smile/smirk is dynamic and only reflects current expectations. I have seen on numerous occasions "inverted smirks" in equity index and stock options, in that out-of-the-money calls exhibited higher IVs than out-of-the-money puts. But generally in this asset class, out-of-the-money puts exhibit higher IVs than equally spaced out-of-the-money calls. Another point that supports this general shape of the "smirk" is that empirically stock markets exhibit much higher down-side return volatility than up-side return volatility. Panic and fear in a sense cause more irrational behavior in people than exuberance. So, in summary, it all has to do with how the market prices the probability of shocks (to the downside and upside) occurring and their severity. EDIT: Please keep in mind that almost all IVs away from current spot/forward levels exhibit the property of being more richly priced than ATM IVs. On the call side this is the case because nowadays options are often utilized as leveraged bets instead of buying the underlying outright. Put IVs that are struck below current spots/forwards are bought more aggressively because of the desire to protect long positions in the underlying (the buy-side fund industry especially). IVs away from the money are simply more in demand than ATM ones because of the above reasons. I recommend you do not read a whole lot more into it than this, because trading volatility at the end of the day is nothing other than trading any other asset; prices are set as a pure function of supply and demand. The only difference is that most underlying assets exhibit linear payoff functions, whereas volatility is non-linear in nature, and market participants can exactly define how this non-linearity is structured by trading different strikes or option combinations. Research suggests that there is a relationship between how pronounced the smirk is and future return expectations in the underlying.
This hardly surprises me because in a sense it reinforces what I explained above: people protect with downside puts if they expect the underlying to exhibit negative future returns, and equally buy upside calls (often as a leveraged bet instead of the underlying) if they expect future returns in the underlying to be positive; hence the smile = higher IVs away from the money. -

I have to disagree that down-side return volatility is much higher than up-side return volatility---historical returns are in fact negatively skewed, but only slightly. If you look at actual data, you will find many extreme upside moves to balance the downside. – justin-- Nov 10 '12 at 17:52

@Freddy do you agree that IVs are computed using market prices and some formula $f(...,\sigma)=p$? – SRKX♦ Nov 10 '12 at 18:43

@justin, feel free to run your own analysis over a time series that covers a longer period of time. You will find that realized vol was highest when markets crashed, not when markets topped out (on average). This is a pretty well established fact, and you can feel free to google the numerous academic papers that investigated this. – Freddy Nov 11 '12 at 1:39

@SRKX, I absolutely disagree, and I disagreed in my comment to your answer already. Implied vols are not the outcome of any pricing model; implied vols are a market consensus, and, for example, the BS model is a mere translation tool from implied vols -> currency denominated prices. Listed option prices are not a result of supply and demand for an option but of supply and demand for implied volatility. If you insist IVs are the result of some formula then I highly doubt you ever worked as a volatility trader. – Freddy Nov 11 '12 at 1:43

The volatility smile is made out of implied volatilities. This means that you take as input $K,S,r,T$ and the price of the option $p$ and you use it to find $\sigma$ such that $$p = BS(K,S,r,T,\sigma)$$ But $p$ is defined by the market, so the $\sigma$ you find are the volatilities estimated by market participants, if they believe in the Black-Scholes framework. The shape of the graph (a smile) shows that, in the BS framework, market participants estimate different volatilities for the same asset depending on the moneyness of the option. This means that the BS framework does not hold in reality (at least market participants don't believe in it), as for the same asset it assumes that $\sigma$ is constant. -

I don't think it's only missing a variable; there are various weak points to the theory (normality of returns, for example). Your question was about interpreting implied volatilities, and I'm answering that IV can't really be interpreted because they rely on a "wrong" model yielding contradictory results. – SRKX♦ Nov 10 '12 at 11:11

I am not sure I agree with you saying "IV can't really be interpreted because they rely on a "wrong" model...". IV are not underlying any model at all. IVs across the smile are how market consensus prices risk. It has nothing to do with any model, in a very similar way that absolute stock prices have everything to do with market consensus and very little to do with any pricing model. Please see my answer for what I mean by that in more detail. – Freddy Nov 10 '12 at 11:22

2 @Freddy you compute IV using the BS formula.... if that's not assuming a model then I don't know what it is. It means "what would be the volatility taken by market participants if they used the BS formula to price their option?". Well that's pretty much depending on BS to me.
– SRKX♦ Nov 10 '12 at 12:00

@SRKX, sorry but I strongly disagree. Pretty much all direct market participants who manage risk on the volatility side trade volatility, not option prices. The translated prices are an agreed, faulty way of paying for the bought or sold volatility. The asset that is traded and priced is volatility. Nobody cares about BS when considering whether to take risk in buying or selling an option; BS comes into play when looking to calculate the price that is paid/received for the volatility that is traded. – Freddy Nov 10 '12 at 12:14

Just in case the following causes confusion: I did not want to say "IV are not underlying any model at all" but wanted to say "IV are not relying on any model at all". Sorry for the confusion, but I could not edit the original comment anymore. – Freddy Nov 10 '12 at 12:23
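To make the "run the model backwards" step concrete, here is a rough Python sketch (assuming SciPy is available) that backs out a Black-Scholes implied volatility from a quoted call price by root-finding, which is all the $p = BS(K,S,r,T,\sigma)$ inversion in the last answer amounts to. The strikes and quotes below are made-up numbers purely for illustration; if the quotes came from a fat-tailed or jump model, the resulting implied vols would trace out a smile across strikes.

````python
from math import log, sqrt, exp
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, r, T, sigma):
    """Black-Scholes price of a European call on a non-dividend-paying asset."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, r, T):
    """Invert the BS formula: find sigma such that bs_call(...) equals the quote."""
    return brentq(lambda sig: bs_call(S, K, r, T, sig) - price, 1e-6, 5.0)

# Hypothetical quotes for three strikes on the same underlying and expiry.
S, r, T = 100.0, 0.01, 0.5
for K, market_price in [(80, 21.5), (100, 6.5), (120, 0.9)]:
    print(K, round(implied_vol(market_price, S, K, r, T), 4))
````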
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9456781148910522, "perplexity_flag": "middle"}
http://en.m.wikibooks.org/wiki/Linear_Algebra/Column_and_Row_Spaces
Linear Algebra/Column and Row Spaces

The column space is the other important vector space used in studying an m x n matrix. If we consider multiplication by a matrix as a sort of transformation that the vectors undergo, then the null space and the column space are the two natural collections of vectors which need to be studied to understand how this transformation works. While the null space focuses on those vectors which vanish under the action of the matrix (i.e. the solutions of Ax = 0), the column space corresponds to the transformed vectors themselves (i.e. all Ax). It is the totality of all the vectors after the transformation. Those readers who have studied abstract algebra can relate the null space to the kernel of a homomorphism and the column space to the range. Another important space associated with the matrix is the row space. As its name suggests, it is built entirely out of the rows of the matrix. We shall later see that the row space can be identified with the column space in a particular sense. In the special case of an invertible matrix, the row space and the column space are exactly equal.

The Column Space

Given an m x n matrix A, the column space of A, denoted by C(A), is the collection of all the vectors formed by linear combinations of the columns of A. More precisely, if the columns of A are $c_1,c_2 \cdots c_n$, where each $c_i$ is an m-dimensional vector, then $\hbox{C}(A) = \{\mathbf{v} \in \mathbb{R}^m : \mathbf{v} = \mathbf{\alpha_1 c_1 + \alpha_2 c_2 + \cdots \alpha_n c_n},\ \alpha_i\in \mathbb{R}\}$ The column space is thus essentially built out of the columns of the matrix. It is the linear span of the columns and in that capacity is also a vector space with the standard operations. (Recall that the span of any collection of vectors is always a vector space.) Also, as the columns are vectors in the m-dimensional space, the column space is naturally a subspace of $\mathbb{R}^m$. Clearly, if we let $x = \begin{pmatrix} \alpha_1\\ \alpha_2\\ \vdots\\ \alpha_n \end{pmatrix}$ then we can say that $Ax = \begin{pmatrix} c_1 & c_2 & \ldots & c_n \end{pmatrix} \begin{pmatrix} \alpha_1\\ \alpha_2\\ \vdots\\ \alpha_n \end{pmatrix} = \alpha_1c_1 + \alpha_2c_2 + \ldots + \alpha_nc_n$ Hence our definition can be modified to $\hbox{C}(A) = \{\mathbf{v} \in \mathbb{R}^m : \mathbf{v} = \mathbf{Ax},\ x\in\mathbb{R}^n \}$ This gives us the idea that the column space is precisely the set of vectors obtained by the action of multiplication by the matrix.

Basis for the column space

We shall now prove a theorem regarding the basis for the column space of a matrix. This constructive proof will also allow us to state a very important theorem, called the rank-nullity theorem, as a corollary. Theorem: The columns of the m x n matrix A corresponding to the basic variables form a basis of the column space of A. Proof: First note that the column space of A is formed by the span of all the columns, $c_1,c_2$...$c_n$. If we take the basis of the null space of A as outlined previously, then as $v_1 \in \hbox{Null Space}(A)$ we have $Av_1 = 0$. But this shows us that the column associated with $v_1$ is a linear combination of the columns associated with the basic variables. (Note that the contribution of the other free variable columns is 0 and that of the column linked to $v_1$ is 1.) In this way all the columns associated with free variables are linear combinations of the columns associated with basic variables. So all the basic variable linked columns span the entire column space.
Now suppose that the row echelon form of $A$ is $U$. If we keep only the basic variable columns, let the submatrix obtained from A be $A_1$ and that obtained from U be $U_1$. Clearly $U_1$ is the row echelon form of $A_1$ (see exercises) and so the two share the same null space. Now $U_1$ has only basic variables associated with it, because we take only basic variables from A to $A_1$, and so its nullity is zero. This tells us that $U_1 x = 0$ has only the trivial solution. Hence $A_1 x = 0$ also has only the trivial solution and so its nullity is zero as well. It follows that the columns of $A_1$, which were the basic variable linked columns of A, are linearly independent. Thus the basic variable columns are both linearly independent and spanning, as a result of which they form a basis of the column space. Q.E.D.

Let us look at an example: Suppose $A = \begin{pmatrix} 1 & 2 & 0 & 2 & 5 \\ -2 & -5 & 1 & -1 & -8 \\ 0 & -3 & 3 & 4 & 1 \\ 3 & 6 & 0 & -7 & 2 \end{pmatrix}$ The first step involves reducing A to its row echelon form U. Now $U = \begin{pmatrix} 1 & 0 & 2 & 0 & 1 \\ 0 & 1 & -1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$ We mark the first non zero entry in each row with brackets: $U = \begin{pmatrix} (1) & 0 & 2 & 0 & 1 \\ 0 & (1) & -1 & 0 & 1 \\ 0 & 0 & 0 & (1) & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$ Clearly the basic variables are $x_1, x_2$ and $x_4$ and the free variables are $x_3$ and $x_5$. So by the theorem, the basis of the column space of A consists of the columns $c_1, c_2$ and $c_4$ and is given by: $\Bigg\{ \begin{pmatrix} 1\\ -2\\ 0\\ 3 \end{pmatrix},\begin{pmatrix} 2\\ -5\\ -3\\ 6 \end{pmatrix},\begin{pmatrix} 2\\ -1\\ 4\\ -7 \end{pmatrix} \Bigg\}$ Note that the nullity of A is 2, which is equal to the difference between the total number of columns and the number of elements in the basis of the column space.

The Row Space

The row space, as the name suggests, is the space built out of the rows of the matrix. Given an m x n matrix A, the row space of A, denoted by R(A), is defined as the collection of all the vectors formed by linear combinations of the rows of A. More precisely, if the rows of A are $r_1,r_2$...$r_m$, where each $r_i$ is an n-dimensional vector, then $\hbox{R}(A) = \{\mathbf{v} \in \mathbb{R}^n : \mathbf{v} = \mathbf{\alpha_1 r_1 + \alpha_2 r_2 + \cdots \alpha_m r_m},\ \alpha_i\in \mathbb{R}\}$ The row space is thus the linear span of the rows and so is also a vector space with the standard operations. It is a subspace of $\mathbb{R}^n$.

Basis of the row space

The basis of the row space of A consists of precisely the non zero rows of U, where U is the row echelon form of A. This fact is derived by combining two results: 1. R(A) = R(U) if U is the row echelon form of A. 2. The non zero rows of a matrix in row echelon form are linearly independent. The proofs are outlined in the exercises.
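As a quick computational check of the bases described above (a sketch, assuming SymPy is installed): the pivot columns reported by `rref()` are exactly the basic-variable columns, so selecting those columns of A reproduces the column space basis found by hand, and `rowspace()` returns the non zero rows of an echelon form of A.

````python
import sympy as sp

# The example matrix A used above.
A = sp.Matrix([
    [ 1,  2, 0,  2,  5],
    [-2, -5, 1, -1, -8],
    [ 0, -3, 3,  4,  1],
    [ 3,  6, 0, -7,  2],
])

R, pivots = A.rref()                 # reduced row echelon form and pivot column indices
print(pivots)                        # (0, 1, 3): columns c1, c2, c4 (0-indexed here)
basis = [A.col(j) for j in pivots]   # the pivot columns of A form a basis of C(A)
print(basis)
print(A.columnspace())               # SymPy's built-in routine returns the same columns
print(A.rowspace())                  # spanning rows for R(A), as in the next example
````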
Let us look at an example: Suppose $A = \begin{pmatrix} 1 & 2 & 0 & 2 & 5 \\ -2 & -5 & 1 & -1 & -8 \\ 0 & -3 & 3 & 4 & 1 \\ 3 & 6 & 0 & -7 & 2 \end{pmatrix}$ If we take its row echelon form then we have $U = \begin{pmatrix} 1 & 0 & 2 & 0 & 1 \\ 0 & 1 & -1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$ The first three rows are non zero and so the basis of the row space is: $\{(1,\ 0,\ 2,\ 0,\ 1),(0,\ 1,\ -1,\ 0,\ 1),(0,\ 0,\ 0,\ 1,\ 1) \}$

Rank of a matrix

Notice that in our example the basis of the row space has 3 elements, which is the same number as in the basis of the column space which we derived earlier. This is not a coincidence. It is in fact always true that the number of elements in the basis of the row space and the number in the basis of the column space are equal. The steps in the reasoning are as follows: 1. The number of non zero rows in U is equal to the number of pivots (or leading non zero entries in each row) in U. 2. The number of basic variables, by definition, equals the number of pivot-containing columns (and so equals the number of pivots). 3. By the previous theorem, the number of basic variables equals the number of elements in the basis of the column space. 4. It follows that the bases have an equal number of elements. This common number of elements in the two bases is called the rank of the matrix. Clearly, since the rows of a matrix are the columns of its transpose, a matrix and its transpose have the same rank. The rank of a matrix A is often written as Rank (A), just as the nullity is written as Nullity (A).

The rank nullity theorem

We now proceed to a very important theorem of linear algebra, called the rank nullity theorem. Another form of this theorem will appear in the chapter on linear transformations. The theorem, in our context, is: For an m x n matrix A, Rank (A) + Nullity (A) = n = Number of columns of A. The logic behind this theorem is clear. The rank, as we have seen earlier, corresponds to the number of basic variables and the nullity to the number of free variables. Since any column is either basic or free (but not both), the theorem follows. Readers acquainted with the first isomorphism theorem of groups can note that this theorem can be related to it. We shall later see how this is done. (A small computational check of the theorem appears after the exercises below.)

Exercises

1. Evaluate bases for the column and row spaces of: 1. $\begin{pmatrix} 2 & 1 & -2 \\ 1 & -2 & 1 \\ -3 & 1 & 1 \end{pmatrix}$ 2. $\begin{pmatrix} -2 & -1 & 4 & 2 \\ 1 & -2 & 1 & 1 \\ -3 & 3 & -5 & -3 \end{pmatrix}$ 3. $\begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}$
2. Show that if A is an m x n matrix then the row space of A coincides with that of its row echelon form U. Hint: Prove that an elementary row transformation on any row of A doesn't alter the fact that it is a linear combination of the rows of A.
3. Show that the non zero rows of a matrix in row echelon form are linearly independent. Hint: Suppose $\alpha_1r_1+\cdots \alpha_mr_m=0$ where the $\alpha_i$ are scalars and the $r_i$ the non zero rows. Now if the first pivot occurs at the (1,j)th position of the matrix, then the jth component of each row other than $r_1$ is 0. So $\alpha_1$ is zero. Similarly all the other $\alpha_i$'s are also necessarily zero.
4. Combine the above two results to show that the non zero rows of a matrix in its row echelon form U form a basis of both R(A) and R(U).
5. Show that a square matrix of order n is invertible $\iff$ its rank is n $\iff$ R(A) = C(A) = $\mathbb{R}^n$ $\iff$ Ax = b has a unique solution for each n-dimensional vector b.
6. Show that if A is an m x n and B an n x p matrix, then: 1. C(AB) $\subseteq$ C(A). 2. R(AB) $\subseteq$ R(B). Hint: For (1) note that $\{ABx : x\in\mathbb{R}^p\}\subseteq\{Ay : y\in\mathbb{R}^n\}$ and for (2) note that $R(AB) = C((AB)^T) = C(B^TA^T) \subseteq C(B^T) = R(B)$.
7. A submatrix of a matrix A is a matrix obtained by deleting some rows and/or columns of A. Show that for every submatrix C of A, we have Rank (C) $\le$ Rank (A). Hint: Consider the matrix B formed by deleting the rows of A not in C. Then Rank (B) $\le$ Rank (A) and Rank (C) $\le$ Rank (B).
8. Show that an m x n matrix A of rank r has at least one r x r submatrix of rank r, that is, A has an invertible submatrix of order r. Hint: Let B be the matrix consisting of r linearly independent row vectors of A. Since Rank (B) = r, we can take r linearly independent columns of B to get an r x r invertible submatrix C.
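As mentioned after the rank nullity theorem, here is a small SymPy sketch (an illustration, not a proof) that checks rank + nullity = number of columns on the worked example and spot-checks exercise 6 on random small integer matrices.

````python
import sympy as sp

# Rank-nullity on the worked example: rank + nullity = number of columns.
A = sp.Matrix([
    [ 1,  2, 0,  2,  5],
    [-2, -5, 1, -1, -8],
    [ 0, -3, 3,  4,  1],
    [ 3,  6, 0, -7,  2],
])
rank, nullity = A.rank(), len(A.nullspace())
print(rank, nullity, rank + nullity)        # 3, 2, 5 = number of columns

# Exercise 6 spot-check: every column of MB lies in C(M) and every row in R(B).
# A vector v is in C(M) iff appending it to M does not increase the rank.
M = sp.randMatrix(3, 4, min=-3, max=3)
B = sp.randMatrix(4, 2, min=-3, max=3)
MB = M * B
print(all(M.row_join(MB.col(j)).rank() == M.rank() for j in range(MB.cols)))  # True
print(all(B.col_join(MB.row(i)).rank() == B.rank() for i in range(MB.rows)))  # True
````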
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 61, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9237751364707947, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/78526/finding-elements-from-venn-diagram
# Finding elements from Venn Diagram I am stuck on this question: In a group of 42 theatrical performers, we have 21 singers, 22 actors and 21 dancers. There are some managers who cannot perform on stage. 10 people can sing and act. 10 people can dance and act. 8 people can sing and dance. 5 people can write poetry. All poets are actors, but 2 of them can dance too. 3 people can direct plays. All directors are dancers, but 1 can sing too. 3 people can sing, act and dance. Find the number of managers. How many people can do any two things together? I made the Venn diagram. The problem I am facing is that when I try to find just the actors, the count comes out negative, so I am not sure about my result. I would be thankful if anyone could help me out with it. Thanks. ## EDIT 2: Thank you for the help. I am drawing the Venn diagram again. This time I am having a small problem in determining whether A should contain 5 (as Brian mentioned) or 2 (this is also as Brian mentioned, but out of the 5, 3 are in $P$, so it should be 2), right? -

1 Your Venn diagram is not correct. Try to draw it again. – Hassan Muhammad Nov 3 '11 at 9:06

## 1 Answer

Work out from the middle: you know that there are $3$ people who can sing, dance, and act. There are $10$ who can sing and act, but $3$ of them are already accounted for, so the shaded region at the top where you have $10$ should really have $7$: $7$ of the people who can sing and act cannot dance, and the other $3$ can. Similarly, there are $7$ who can act and dance but cannot sing, and there are $5$ who can dance and sing but cannot act. Now consider the singers. There are $21$ of them altogether. $7$ of these can also act but cannot dance; $3$ can also act and dance; and $5$ can also dance but cannot act. These singers who can do something else as well account for $15$ of the $21$ singers, so there must be $6$ singers who can neither act nor dance. Similar reasoning, which I'll leave you to try, shows that there are $6$ dancers who can neither sing nor act and $5$ actors who can neither dance nor sing. If you now add up the figures in all seven regions within the three circles, you should get a total of $39$. Since there are $42$ people altogether, this means that there must be $42-39=3$ managers. Note that up to this point the directors and poets are red herrings: you can ignore them completely. The last question is ambiguous. I can't tell whether it wants the total number of people who can do exactly two things, the total number who can do at least two things, or for every pair of things the number who can do at least (or exactly) those two things. If it's asking for the number of people who can do at least two things, we can start with the $3+7+7+5=22$ people in the various intersections of the circles. Now consider the poets: all of them are actors, so all of them can do at least two things. Two of them, however, are actor-dancers, so we've already counted them in the $22$. The other $3$, however, are actors who neither dance nor sing, so they weren't counted and need to be added; this brings the total to $25$. Similarly, one of the $3$ directors has already been counted (as a dancer-singer), but the other two are dancers who neither sing nor act, so they have to be added to the total of those with multiple talents, bringing it to $27$. -

Thank you so very much for the answer. Please check out the edit. – Fahad Uddin Nov 3 '11 at 9:40 @Akito: The new diagram looks good.
If you want to finish it off completely, you should have $4$ in the just-dancer category and $3$ in the region outside the circles. – Brian M. Scott Nov 3 '11 at 10:10 Thanks again. It worked out perfectly. One thing I wanted to ask: can we name the sets like $DANCERS$, $PLAYERS$, etc., or is it necessary to have a single-character name (like $P$, $Q$, etc.)? – Fahad Uddin Nov 3 '11 at 11:28 @Akito: When you start using the names of the sets in formulas, writing expressions like $S\cap D$, for instance, it's conventional and much handier to have single-character names. If you're just labelling regions in a Venn diagram, it doesn't really matter. – Brian M. Scott Nov 3 '11 at 22:45
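For anyone who wants to sanity-check the arithmetic in the accepted answer, here is a short Python sketch that works outward from the triple overlap exactly as described; the region names are ad hoc labels for this illustration, not notation from the question.

````python
# Data from the problem statement.
total, S, A, D = 42, 21, 22, 21       # group size, singers, actors, dancers
SA, AD, SD, SAD = 10, 10, 8, 3        # pairwise overlaps and the triple overlap

only_SA = SA - SAD                    # sing & act but not dance: 7
only_AD = AD - SAD                    # act & dance but not sing: 7
only_SD = SD - SAD                    # sing & dance but not act: 5

only_S = S - (only_SA + only_SD + SAD)   # singers only: 6
only_A = A - (only_SA + only_AD + SAD)   # actors only: 5
only_D = D - (only_AD + only_SD + SAD)   # dancers only: 6

performers = only_S + only_A + only_D + only_SA + only_AD + only_SD + SAD
managers = total - performers
print(performers, managers)           # 39 and 3

# "At least two talents": the 3 poets who only act and the 2 directors who only
# dance get added to the 22 people already in the overlaps.
at_least_two = (only_SA + only_AD + only_SD + SAD) + 3 + 2
print(at_least_two)                   # 27
````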
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9683736562728882, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/11600/why-so-many-arguments-for-the-transformation-equations-of-generalized-coordinate?answertab=active
# Why so many arguments for the transformation equations of generalized coordinates? For a system of $N$ particles with $k$ holonomic constraints, their Cartesian coordinates are expressed in terms of generalized coordinates as $$\mathbf{r}_1 = \mathbf{r}_1(q_1, q_2,..., q_{3N-k}, t)$$ $$...$$ $$\mathbf{r}_N = \mathbf{r}_N(q_1, q_2,..., q_{3N-k}, t)$$ Each particle in space can be uniquely identified by 3 independent variables, so why aren't the above of the form $$\mathbf{r}_i = \mathbf{r}(q_{i1}, q_{i2}, q_{i3})?$$ Note there is only one transformation $\mathbf{r}$ for all $\mathbf{r}_i$, a function of only three generalised coordinates and independent of $t$. - +1 I really like this question :-) – David Zaslavsky♦ Jun 26 '11 at 21:08 ## 2 Answers The $k$ holonomic constraints are used to eliminate $k$ $q$s, so reducing their number from $3N$ to $3N-k$. This then introduces the dependence of some of the transformation equations on t and other $q$s. You have k holonomic constraints of the form $$\mathbf{f}_1(q_1, q_2,..., q_{3N},t) = 0$$ $$...$$ $$\mathbf{f}_k(q_1, q_2,...,q_{3N}, t) = 0$$ $3N$ q coordinates for the $N$ particles $$(q_1,q_2,q_3), (q_4,q_5,q_6),..., (q_{3N-2},q_{3N-1},q_{3N})$$ $3N$ transformation equations relating cartesian to generalised coordinates $$\mathbf{r}_1 = \mathbf{r}_1(q_1,q_2,q_3)$$ $$...$$ $$\mathbf{r}_N = \mathbf{r}_N(q_{3N-2},q_{3N-1},q_{3N})$$ Using the first constraint to eliminate $q_1$ gives $\mathbf{r}_1= \mathbf{g}_1(q_2,q_3,..,q_{3N},t)$ which is of the same form as one of the transformation equations you quoted. Which $q$ can be eliminated depends upon the constraints of the problem and so in general, the transformation equations are of the form $$\mathbf{r}_i= \mathbf{r}_i(q_1,q_2,..,q_{3N-k},t)$$ - The generalized coordinates of a system of $N$ particles apply to the system as a whole, not the individual particles, and accordingly they can (and often do) combine the coordinates of multiple particles. One common example is that of two-body orbital motion: one generalized coordinate is the position of the center of mass of the system, $$\mathbf{q}_1 = \frac{m_1 \mathbf{r}_1 + m_2 \mathbf{r}_2}{m_1 + m_2}$$ and the other is the displacement vector between the two bodies, $$\mathbf{q}_2 = \mathbf{r}_2 - \mathbf{r}_1$$ Each of the generalized coordinates depends on physical coordinates from both objects, and if you invert the transformation you'll find that the physical coordinates of each object depend on both generalized coordinates. So you can't just express $\mathbf{r}_i$ in terms of $q_{i(1,2,3)}$ alone. -
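To see explicitly that inverting the two-body transformation makes each physical coordinate depend on both generalized coordinates, here is a small SymPy sketch; it uses scalar stand-ins for the vectors $\mathbf{r}_i$ and $\mathbf{q}_i$, which is enough since the relations hold componentwise.

````python
import sympy as sp

m1, m2 = sp.symbols('m1 m2', positive=True)
r1, r2, q1, q2 = sp.symbols('r1 r2 q1 q2')   # scalar stand-ins for the vectors

# Center of mass and relative displacement as generalized coordinates.
eqs = [sp.Eq(q1, (m1*r1 + m2*r2) / (m1 + m2)),
       sp.Eq(q2, r2 - r1)]

sol = sp.solve(eqs, [r1, r2], dict=True)[0]
print(sp.simplify(sol[r1]))   # q1 - m2*q2/(m1 + m2): depends on both q1 and q2
print(sp.simplify(sol[r2]))   # q1 + m1*q2/(m1 + m2): likewise
````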
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 14, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9029231071472168, "perplexity_flag": "head"}