url
stringlengths 15
1.13k
| text
stringlengths 100
1.04M
| metadata
stringlengths 1.06k
1.1k
|
---|---|---|
https://jetestelavirb.com/2-3-2/ | # 2/3 2
2/3 2. The calculator helps in finding fraction value from multiple fractions operations. To solve the equation, factor the left hand side by.
Steps using factoring by grouping. This calculator performs basic and advanced fraction operations, expressions with fractions combined with integers, decimals, and mixed numbers. 2/3 * 2/3 all the following questions represent 2/3 times 2/3 in fraction form, so it's very much important to observe the different variations of this question.
### = 2/3 = 2 ÷ 3 = 0.66666666666667.
Raise 3 3 to the power of 2 2. Look at the note at the bottom of the. Steps using factoring by grouping.
### The Calculator Helps In Finding Fraction Value From Multiple Fractions Operations.
Another way to do it is to simply measure out 2 teaspoons (to add to the 10 tablespoons) since 2/3 tablespoon is also equal to 2 teaspoons. This calculator does not provide result in the form of a mixed number. Where, 2/3 is the multiplicand, 2/3 is the multiplier, 4/9 is the simplest form of product of fractions.
### A Reduced Fraction Is A Common Fraction In Its Simplest Possible Form.
Color(red)(a^x * a^y = a^(x+y)) 2^3 * 2^2 = 2^(3+2) = 2^5 = 2*2*2*2*2 = 32 Check whether the first polynomial is a factor of the second polynomial by dividing the second polynomial by. Or how to add 2/3 to 2/3?
### You Can Show When Your Child Isn't With Either Parent By Marking 3Rd Party Time.
This schedule usually involves 4 teams who work for 2 days, then get 2 rest days, followed by 3 days of work. Here is the answer to questions like: 2/3 as a decimal is 0.66666666666667.
### Operators In The Same Box Group Left To Right (Except For Exponentiation, Which Groups From Right To Left ).
What is 2/3 plus 2/3? First, 2^3 * 2^2 = (222) * (2*2) = 8 * 4 = 32 second, we can use the following rule of exponents: It’s a simple problem, calculate it using radical property, i.e., a^{n/m}=\sqrt[m]{a^n} now, rewrite it, 2^{3/2}=\sqrt[2]{2^3} =\sqrt[2]{2^2×2. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8860793709754944, "perplexity": 907.7462222838449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571198.57/warc/CC-MAIN-20220810161541-20220810191541-00462.warc.gz"} |
http://math.stackexchange.com/questions/229915/simple-set-theoretic-question-of-subsets-of-mathbb-rd/229935 | # simple set theoretic question of subsets of $\mathbb R^d$
Let $A,B\subseteq\mathbb R^d$ with $A$ closed such that $A\subset\overline{B}$. Does there exist $B'\subset B$ such that $A=\overline{B'}$?
-
I know, but this is not the question! – Andy Teich Nov 5 '12 at 19:33
I am inclined to say that this set is $A\cap B$. Well I think if you take the closure of that you end up with $A$. – tst Nov 5 '12 at 19:35
$A\cap B$ can be empty! – Andy Teich Nov 5 '12 at 19:37
ok, then I think it depends on whether the set $A$ has empty interior or not. Well actually I think this can happen when $cl(int(A))=A$. – tst Nov 5 '12 at 19:40
If $A=\{1\}$ and $B=(0,1)$ then clearly the answer is no. – tst Nov 5 '12 at 19:42
Sorry, I misread the question initially.
The answer is no in general. For example, take $B$ to be plane minus $x$-axis, and $A$ to be $x$-axis. If $B'$ exists, it must be a subset of both $A$ and $B$, which is empty.
-
Here is a counterexample that is essentially the same as Sanchez's answer, but a dimension simpler. Let
• $A = \{0 \}$ and
• $B = (0,1)$,
so that $A \subseteq \overline{B}$. The only set whose closure is a singleton is the set itself, but that is not a valid choice in our instance since $A \not\subseteq B$.
-
If $cl(in(A))=A$ then there exists $B'$ as requested.
If $cl(in(A))=A$ then all points in $A$ are accumulation points.
We have $cl(A\cap B)\subset A\cap cl(B)=A$.
Let $x\in A\backslash cl(A\cap B)$, then since $x$ is an accumulation point in $A$, $\exists U \subset A\backslash cl(A\cap B)$, $\lambda(U)\ne 0$, with $\lambda$ the Lebesque measure in $\mathbb{R}^d$.
However $\lambda(A)=\lambda(A\cap B)$, so $A\backslash cl(A\cap B)=\emptyset$.
I believe (but I cannot prove it) that if $cl(in(A))\ne A$, $B'$ exists only when $A\subset B$.
-
Take the set of all inner points of $A$. If $x$ is an inner point of $A$, then it is an inner point of $\overline B$ as well, so it is an inner point of $B$ (as the closure doesn't add inner points). So $IP(A) \subset B$ and $cl(IP(A)) = A$.
-
Closures can in fact add inner points. For example, take $$B=\{x\in\Bbb R^d:0<\lVert x\rVert<1\}.$$ – Cameron Buie Nov 5 '12 at 19:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9752806425094604, "perplexity": 280.2051976942105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507450767.7/warc/CC-MAIN-20141017005730-00370-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/20522/for-which-p-can-one-claim-the-existence-of-the-set-of-all-sets-that-satisfy-p/22902 | # For which P can one claim the existence of the set of all sets that satisfy P?
Given a property P, is there some rules that are sufficient or necessary to determine if there exists a set of all sets with property P?
-
In what logical theory? – Qiaochu Yuan Feb 5 '11 at 12:42
@Qiao In the weakest possible system, and/or in ZFC. – Holowitz Feb 5 '11 at 12:53
@yunone: most probably, he wants to know how close to the Axiom of Unrestricted Comprehension* you can get, rather. – Mariano Suárez-Alvarez Feb 5 '11 at 13:55
In ZFC, the following are equivalent:
1. The class $\{x\mid P(x)\}$ is a set.
2. There is an ordinal $\alpha$ such that whenever $P(x)$, then $x$ has rank at most $\alpha$.
3. There is a set $B$ such that whenever $P(x)$, then $x\in B$.
4. There is no class function $F$ from the class $\{x\mid P(x)\}$ onto the ordinals.
5. There is an ordinal $\theta$ such that there is no function mapping $\{x\mid P(x)\}$ onto $\theta$.
6. There is a set $C$ such that there is no function mapping $\{x\mid P(x)\}$ onto $C$.
7. There is an ordinal $\theta$ that does not map injectively into $\{x\mid P(x)\}$.
8. There is a set $D$ that does not map injectively into $\{x\mid P(x)\}$.
Proof. (1 iff 2) If the class is a set, then it must be contained in some $V_\alpha$, and so every element will have rank at most $\alpha$. The converse is the Separation axiom.
(2 iff 3) Use that every $V_\alpha$ is a set, and every set is contained in some $V_\alpha$.
(1 implies 4) The ordinals are not a set, so this follows by the Replacement axiom.
(4 implies 2) Map each $x$ for which $P(x)$ holds to its rank.
(1 implies 5) For every set, there is an ordinal onto which it does not map, namely, the successor of its cardinality.
(5 iff 6) Every set is bijective with an ordinal.
(5 implies 7) If a class does not map surjectively onto $\theta$, then $\theta$ cannot map injectively into the class.
(7 iff 8) Every set is bijective with an ordinal.
(7 implies 2) If $\theta$ does not map injectively into $\{x\mid P(x)\}$, then that class cannot contain sets of arbitrarily large rank.
QED
Meanwhile, the following notions are strictly weaker in ZFC, if ZFC is consistent:
• There is a map from the ordinals onto $\{x \mid P(x)\}$.
• There is a bijection of $\{x\mid P(x)\}$ with the ordinals.
• There is a bijection of${x\mid P(x)}$ with $V$, the entire set-theoretic universe.
There reason that they are weaker in general is that it is relatively consistent with ZFC that there is no definable (from parameters) well-ordering of the universe. In such a model $V$, there is no class surjection or bijection from the ordinals to $V$, since this would provide the desired well-ordering, but $V$ is not a set. Similarly, in such a model, there is no bijection from the class of ordinals to the entire universe, but the class of ordinals is not a set. Such a model can be constructed using the forcing technique, by an Easton support iteration that adds a Cohen subset to unboundedly many regular cardinals.
Addendum. Let me add that there can be no purely syntactic characterization of the properties $P$ for which $\{x\mid P(x)\}$ is a set. The reason is that some properties determine sets in some models of ZFC, but not in others. So the question of whether this class is a set depends not just on the syntactic features of $P$, but on the properties of the universe in which the class is to be formed. An example of this is the property $P(x)\iff CH\wedge x=x$, which determines a set just in case $\neg CH$.
-
If the property $\mbox{P}$ is expressible by an equivalent first-order formula $\varphi(x)$ of set theory in the free variable $x$, an exact (i.e. sufficient and necessary) condition for the existence of some set $A$ of all sets $x$, which satisfy $\mbox{P}$ in $\mbox{ZFC}$, i.e. for
$\models_{ZFC} \exists A \forall x (x \in A \leftrightarrow \varphi(x))$,
is by some axiom of separation the condition
$\models_{ZFC} \exists B \forall x (\varphi(x) \rightarrow x \in B)$.
I.e. you need only by the completeness theorem of first-order logic to
1. express $\mbox{P}$ as an equivalent first-order formula $\varphi(x)$ of set theory,
2. find some suitable set $B$, and
3. prove $\vdash_{ZFC} \forall x (\varphi(x) \rightarrow x \in B)$ in your metatheory.
-
This is dangerously close to the answer "This set exists if there exists a set that contains it." – Dylan Wilson Feb 20 '11 at 8:47
@Dylan: More precisely: "The class A is a set iff there is some set B with A is a subclass of B." – Steffen Schuler Feb 20 '11 at 9:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9677674770355225, "perplexity": 136.36342510016732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383160/warc/CC-MAIN-20130516092623-00078-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://mathhelpforum.com/differential-equations/165225-another-general-solution-problem.html | # Math Help - Another general solution problem
1. ## Another general solution problem
y`-y=x^2 +1
y=ax^2+bx+c ( as it x^2 +1 is a quadratic) I also found that y´= 2ax+b
I then plugged it into the original equation ( y´-y= x^2 +1)
and got
(2ax+b)-(ax^2 +bx+c)= x^2+1
I am suppose to find a,b,c ( constants) but I dont really know how to start as they are 3 different constants. I wasnt able to come up with 3 different equations.
2. $\displaystyle \frac{dy}{dx} - y = x^2 + 1$.
This is first order linear, so use the integrating factor method. The integrating factor is $\displaystyle e^{\int{-1\,dx}} = e^{-x}$.
Multiplying both sides of the DE by the integrating factor gives
$\displaystyle e^{-x}\frac{dy}{dx} - e^{-x}y = e^{-x}(x^2 + 1)$
$\displaystyle \frac{d}{dx}(e^{-x}y) = e^{-x}(x^2 + 1)$
$\displaystyle e^{-x}y = \int{e^{-x}(x^2 + 1)\,dx}$
$\displaystyle e^{-x}y = -e^{-x}(x^2 + 1) - \int{-2x\,e^{-x}\,dx}$
$\displaystyle e^{-x}y = -e^{-x}(x^2 + 1) + 2\int{x\,e^{-x}\,dx}$
$\displaystyle e^{-x}y = -e^{-x}(x^2 + 1) + 2\left(-x\,e^{-x} - \int{-e^{-x}\,dx}\right)$
$\displaystyle e^{-x}y = -e^{-x}(x^2 + 1) - 2x\,e^{-x} + 2\int{e^{-x}\,dx}$
$\displaystyle e^{-x}y = -e^{-x}(x^2 + 1) - 2x\,e^{-x} - 2e^{-x} + C$
$\displaystyle y = -x^2 - 1 - 2x - 2 + Ce^{-x}$
$\displaystyle y = Ce^{-x} - x^2 - 2x - 3$.
3. Why did you integrate both sides and Where are the constants, shouldnt you plug what y is (ax^2+bx+c) and solve for them. Could you explain
4. Originally Posted by aa751
Why did you integrate both sides and Where are the constants, shouldnt you plug what y is (ax^2+bx+c) and solve for them. Could you explain
You don't know that $\displaystyle y$ is a quadratic. In fact, you should realise that the function is $\displaystyle ax^2 + bx + c + 0$, the zero part means you need to find a function that creates $\displaystyle 0$ when it is subtracted from its derivative, in other words, an exponential part.
I suggest you research the Integrating Factor method.
5. Its just that when the teacher explained in class he did it differently but thank you alot for your help and I will research | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8495305776596069, "perplexity": 564.5178211016398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096290.39/warc/CC-MAIN-20150627031816-00160-ip-10-179-60-89.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/429326/a-new-question-about-solvability-of-a-direct-product | # A new question about solvability of a direct product
I had asked that if $G$ is a direct product of a $2$-group and a simple group, then is it possible that $G$ be a solvable group. That the answer is no! But by a Remark on T.M. Gagen, Topics in Finite Groups, London Math. Soc. Lecture Note Ser., vol. 16, Cambridge Univ. Press, Cambridge, 1976. I am confused! I will write the Remark below:
Definition 11.3. A group $G$ with an abelian Sylow $2$-subgroup is said to be an $A^*$-group, if $G$ has a normal series $1\subseteq N \subseteq M \subseteq G$ where $N$ and $G/M$ are of odd order and $M/N$ is a direct product of a $2$-group and simple groups of type $L_2(q)$ or $JR$.
Theorem A. Let $G$ be a finite group with an abelian Sylow $2$-subgroup. Then $G$ is an $A^*$-group.
Remark. If a group $G$ has an abelian Sylow $2$-subgroup $T$ of rank $1$, then $G$ is solvable and $2$-nilpotent and clearly an $A^*$-group. If $T$ has rank $2$, then $G$ is $2$-nilpotent unless $T$ is of type $( 2^\alpha , 2^\alpha )$ . But then if $\alpha > 1$, $G$ is solvable by [7] and clearly an $A^*$-group since it has $2$-length $1$. Thus we may assume that $|T | = 4$ and then we can apply the results of [13]. Again $G$ is an $A^*$-group.
By the Remark, it is possible that $G$ (with abelian sylow $2$-group) be a solvable group. In this case $M$ and then $M/N$ must be solvable, and can not be have a simple group as its direct factor. a contradiction with Definition!
Am I right?!
• I think you are. Either some confusion exists there with some definition (do you have a direct link to that paper?) or else that could be a rather major blunder...Unless the remark should have remarked the rank 1 thing, which perhaps is what makes $\,G\,$ solvable! – DonAntonio Jun 25 '13 at 18:57
• Please see here for how to use Markdown formatting (italics is not the same thing as math mode). – Zev Chonoles Jun 25 '13 at 18:58
• Hmmm...yet there still is a problem, me believes... – DonAntonio Jun 25 '13 at 19:03
This is a matter of convention. When $G$ is solvable, $M/N$ is an (abelian) 2-group. It is still the case that $M/N$ is the direct product of a 2-group and a set of simple groups, each isomorphic to $L_2(q)$ or $J_1$ or $^2G_2(q)$. In case $G$ is solvable, that set of simple groups is empty.
The remark is trying to clarify what happens in the low rank cases, where typically the result is a solvable group. In the $2^a$ case, Cayley showed you can take $G/M=1$, $M/N$ the cyclic Sylow 2-subgroup. In the $2^a \times 2^a$ ($a>1$) case Brauer's result shows $G/M$ can be taken to have order 1 or 3, and $M/N$ to be the homocyclic Sylow 2-subgroup. In the non-homocyclic case, Burnside or Frobenius shows you can take $G/M=1$ and $M/N$ to be the Sylow 2-subgroup. Only in the case $C_2 \times C_2$ (amongst rank 1 or 2) do you get to the case supporting a non-solvable $M/N$.
Rank 3 is interesting because of $J_1$ and $^2G_2(q)$.
I'll mention that Gagen, Bender, and Gorenstein all use the same phrasing, and all intend to allow the groups to be solvable as well as non-solvable.
Another small point: Gagen and Bender use the name “$A^*$-group” rather than “$A$-group” used by Gorenstein. The definitions are actually nearly identical; I believe they are trying to avoid confusion with a similar (weaker) result classifying A-groups in the sense of Hall: groups in which all Sylow subgroups are abelian.
• Thanks a lot. Is there any thing(book, paper, ...) about solvable groups. I need some property of solvable groups for my thesis, but I can't find what I need!!! I have read Rose and Gorenstein. Thanks again. – Adeleh Jun 25 '13 at 20:02
• Doerk–Hawk Finite Solvable Groups and Huppert Endliche Gruppen are pretty thorough for solvable groups. What sort of (already known) results are you looking for? – Jack Schmidt Jun 25 '13 at 20:27
• Thanks for your suggestion. I need some things about property of sylow subgroups of a solvable group and some thing else that may be very simple or may not be true!! For example is it always true that $Z(G)\leq \Phi(G)$ where $G$ is solvable? and other things like this. I hope that the first book be helpful. The second is germany, and I can't use it however I think that is very useful book. – Adeleh Jun 26 '13 at 7:34
• Generally “central” and “Frattini” have a similar feel, but they are independent. For instance the center of the cyclic group of order 2 is large, while the frattini is small, but the dihedral group of order 18 has a large frattini subgroup and a small center. The central chief factors are covered by the system normalizers, and the frattini chief factors are covered by the pre-frattini subgroups. The largest normal subgroup contained in a system normalizer is the hypercenter (formed by the longest chain of centrals); in a prefrattini is the frattini (formed by the longest chain of frattinis). – Jack Schmidt Jun 26 '13 at 14:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8235588073730469, "perplexity": 282.21849950484165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527828.69/warc/CC-MAIN-20190722072309-20190722094309-00016.warc.gz"} |
http://www.crm.umontreal.ca/~physmath/home.dir/RANDOM.dir/RANDOM-ABS/strahov.html | # Gambielli point processes
## Evgeny Strahov
Abstract:
We distinguish a subclass of stochastic point processes which we call "Giambelli compatible point processes". A striking feature of these processes is that the Giambelli formula for the Schur functions remains invariant under the averaging over random configurations.
We prove that orthogonal polynomial ensembles, z-measures on partitions, and the spectral measures of characters of generalized regular representations of S(\infty) induce point processes of that type.
We show that the Giambelli compatible point processes are characterized by certain identities which are reduced to formulae for averages of characteristic polynomials in the case of orthogonal polynomial ensembles. These identities imply the determinantal structure of the correlation functions. Based on these identities we provide the most straightforward (among those known so far) derivation of the associated correlation kernels. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9179666638374329, "perplexity": 878.7224242435685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585916.29/warc/CC-MAIN-20211024081003-20211024111003-00296.warc.gz"} |
https://www.newton.ac.uk/seminar/20190815090010001 | skip to content
# Water-wave forcing on submerged plates
Presented by:
Malte Peter Universität Augsburg
Date:
Thursday 15th August 2019 - 09:00 to 10:00
Venue:
INI Seminar Room 1
Abstract:
We discuss the application of the Wiener-Hopf method to linear water-wave interactions with submerged plates. As the guiding problem, the Wiener-Hopf method is used to derive an explicit expression for the reflection coefficient when a plane wave is obliquely incident upon a submerged semi-infinite porous plate in water of finite depth. Having used the Cauchy Integral Method in the factorisation, the expression does not rely on knowledge of any of the complex-valued eigenvalues or corresponding vertical eigenfunctions in the region occupied by the plate. It is shown that the Residue Calculus technique yields the same result as the Wiener-Hopf method for this problem and this is also used to derive an analytical expression for the solution of the corresponding finite-plate problem. Applications to submerged rigid plates and elastic plates are discussed as well.
The video for this talk should appear here if JavaScript is enabled.
If it doesn't, something may have gone wrong with our embedded player.
We'll get it fixed as soon as possible. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8174113035202026, "perplexity": 826.2349940900252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875142323.84/warc/CC-MAIN-20200217115308-20200217145308-00056.warc.gz"} |
https://www.physicsforums.com/threads/dart-trajectory-trig-proof.257481/ | # Dart trajectory, trig proof.
1. Sep 18, 2008
### LANS
A target is suspended on a platform. A dart launcher is placed at ground level and aimed directly at the target along the line of sight (the distance between dart and target can vary infinitely). Assume a bottomless pit below the target. The dart is launched, and regardless of speed, it will hit the target (although the height at which it hits the target varies). Prove that this will always happen.
Half of the assignment is to solve it with given numbers (10m horizontal distance, initial velocity at 24 m/s, 35 deg above horizontal). I had no trouble solving and verifying that part. However, when I try to repeat the same equations without numbers, I end up at -1 = 1.
I scanned the work I've done so far. I'd appreciate it if someone could point out where I went wrong.
My scanned work is available at:
http://webserver.smphoto.com/physics12/
2. Sep 19, 2008
### alphysicist
Hi LANS,
I believe you have two closely related errors in your work. The first occurs when you are deriving a form for $\Delta h$. You first write the general formula:
$$\Delta h=v_i\ \Delta t+\frac{1}{2}a\ \Delta t^2$$
and below that you have:
$$\Delta h=\sin\theta\ z\ \left(\frac{\Delta d}{\cos\theta\ z}\right)+\frac{1}{2}a \left(\frac{\Delta d}{\cos\theta\ z }\right)^2$$
but this is not right. You have chosen downwards to be positive, but the term with the initial velocity is providing an upwards displacement, and so must be negative. (The point is since the two terms on the right are in different directions they must have different signs.) So this equation should be:
$$\Delta h=\ -\sin\theta\ z\ \left(\frac{\Delta d}{\cos\theta\ z}\right)+\frac{1}{2}a \left(\frac{\Delta d}{\cos\theta\ z }\right)^2$$
The next problem occurs right after that, where you say:
$$\tan\theta\ \Delta y -\Delta y = \Delta h$$
which is not correct based on the way you have defined $\Delta y$ and $\Delta h$. If you look back at the diagram at the top of the page, what this equation is saying is
$$y=\Delta y +\Delta h$$
Now y is a length, so it is a positive number. $\Delta y$ is the displacement of the target, and since it is moving downwards, that would be a positive number. But $\Delta h$ is the vertical displacement of the dart, and since that is upwards, that would be a negative number.
So if $y$ is some number like 8m, and $\Delta y$ is some number like 2m, then $\Delta h$ would be -6m. And so your equation above should be:
\begin{align} y&=\Delta y -\Delta h\nonumber\\ \tan\theta\ \Delta y -\Delta y &= \ -\Delta h\nonumber \end{align}
(More mathematically we would say the lengths of these three add together, so that what we would actually use is absolute values:
$$|y|=|\Delta y| +|\Delta h|$$
and then since $\Delta h$ is negative, $|\Delta h|=\ -\Delta h$.)
Once you make these changes, I think you'll see that all the terms cancel out in the line immediate below where you have a circled 1 in your work. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9571599960327148, "perplexity": 370.46715719910156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00091-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://www.physicsforums.com/showthread.php?p=4021556 | ## What is work?
Okay i know that energy is the ability or capacity to do work. But how do you define work, as like the isnt work just the transference of energy in one form to another? Which is kinda circular reasoning. Like the rely on each other for a defintion. Is that right?
PhysOrg.com physics news on PhysOrg.com >> Promising doped zirconia>> New X-ray method shows how frog embryos could help thwart disease>> Bringing life into focus
Mentor The appropriate definition of work depends on what theory you are using. One is the mechanical definition: "work is f.d". This definition is used in basic Newtonian mechanics. In it forces are the primary things and everything else is defined based on forces. Then energy is non-circularly defined as the capacity to do work. The other is the thermodynamic definition: "work is a transfer of energy other than through heat". This definition is used in Lagrangian and Hamiltonian mechanics as well as all field theories. In these, the Lagrangian is the primary thing and everything else is defined based on the Lagrangian. Then energy is non-circularly defined as the conserved quantity associated with time invariance of the Lagrangian.
Mentor
Blog Entries: 27
Quote by hwall95 Okay i know that energy is the ability or capacity to do work. But how do you define work, as like the isnt work just the transference of energy in one form to another? Which is kinda circular reasoning. Like the rely on each other for a defintion. Is that right?
You may want to bookmark the Hyperphysics website and use it as a starting point for future querries.
http://hyperphysics.phy-astr.gsu.edu/hbase/wcon.html
Zz.
## What is work?
Quote by DaleSpam The appropriate definition of work depends on what theory you are using. One is the mechanical definition: "work is f.d". This definition is used in basic Newtonian mechanics. In it forces are the primary things and everything else is defined based on forces. Then energy is non-circularly defined as the capacity to do work. The other is the thermodynamic definition: "work is a transfer of energy other than through heat". This definition is used in Lagrangian and Hamiltonian mechanics as well as all field theories. In these, the Lagrangian is the primary thing and everything else is defined based on the Lagrangian. Then energy is non-circularly defined as the conserved quantity associated with time invariance of the Lagrangian.
hahah yeah thanks i was meaning the thermodynamics definition sorry, okay thanks but when you say "other then heat energy", is that because thermal energy itself is just kinetic energy?
Mentor
Quote by hwall95 hahah yeah thanks i was meaning the thermodynamics definition sorry, okay thanks but when you say "other then heat energy", is that because thermal energy itself is just kinetic energy?
No, that is just the definition of work in thermodynamics. Energy can be transfered between systems either through heat, or through anything else. Anything else is called work.
The basic distinction is that heat is a rather disorganized microscopic transfer that you cannot see in detail, as opposed to things like macroscopic fields and forces.
Quote by DaleSpam No, that is just the definition of work in thermodynamics. Energy can be transfered between systems either through heat, or through anything else. Anything else is called work. The basic distinction is that heat is a rather disorganized microscopic transfer that you cannot see in detail, as opposed to things like macroscopic fields and forces.
ohhh okay thanks i understand that now, thanks heaps :) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8912671208381653, "perplexity": 819.5588657506463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383508/warc/CC-MAIN-20130516092623-00043-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://tutorial.math.lamar.edu/classes/calciii/SphericalCoords.aspx | Paul's Online Notes
Paul's Online Notes
Home / Calculus III / 3-Dimensional Space / Spherical Coordinates
Show Mobile Notice Show All Notes Hide All Notes
Mobile Notice
You appear to be on a device with a "narrow" screen width (i.e. you are probably on a mobile phone). Due to the nature of the mathematics on this site it is best views in landscape mode. If your device is not in landscape mode many of the equations will run off the side of your device (should be able to scroll to see them) and some of the menu items will be cut off due to the narrow screen width.
### Section 12.13 : Spherical Coordinates
In this section we will introduce spherical coordinates. Spherical coordinates can take a little getting used to. It’s probably easiest to start things off with a sketch.
Spherical coordinates consist of the following three quantities.
First there is $$\rho$$. This is the distance from the origin to the point and we will require $$\rho \ge 0$$.
Next there is $$\theta$$. This is the same angle that we saw in polar/cylindrical coordinates. It is the angle between the positive $$x$$-axis and the line above denoted by $$r$$ (which is also the same $$r$$ as in polar/cylindrical coordinates). There are no restrictions on $$\theta$$.
Finally, there is $$\varphi$$. This is the angle between the positive $$z$$-axis and the line from the origin to the point. We will require $$0 \le \varphi \le \pi$$.
In summary, $$\rho$$ is the distance from the origin to the point, $$\varphi$$ is the angle that we need to rotate down from the positive z-axis to get to the point and $$\theta$$ is how much we need to rotate around the $$z$$-axis to get to the point.
We should first derive some conversion formulas. Let’s first start with a point in spherical coordinates and ask what the cylindrical coordinates of the point are. So, we know $$\left( {\rho ,\theta ,\varphi } \right)$$ and want to find $$\left( {r,\theta ,z} \right)$$. Of course, we really only need to find $$r$$ and $$z$$ since $$\theta$$ is the same in both coordinate systems.
If we look at the sketch above from directly in front of the triangle we get the following sketch,
We know that the angle between the $$z$$-axis and $$\rho$$ is $$\varphi$$ and with a little geometry we also know that the angle between $$\rho$$ and the vertical side of the right triangle is also $$\varphi$$.
Then, with a little right triangle trig we get,
\begin{align*}z & = \rho \cos \varphi \\ r & = \rho \sin \varphi \end{align*}
and these are exactly the formulas that we were looking for. So, given a point in spherical coordinates the cylindrical coordinates of the point will be,
\begin{align*}r & = \rho \sin \varphi \\ \theta & = \theta \\ z & = \rho \cos \varphi \end{align*}
Note as well from the Pythagorean theorem we also get,
${\rho ^2} = {r^2} + {z^2}$
Next, let’s find the Cartesian coordinates of the same point. To do this we’ll start with the cylindrical conversion formulas from the previous section.
\begin{align*}x & = r\cos \theta \\ y & = r\sin \theta \\ z & = z\end{align*}
Now all that we need to do is use the formulas from above for $$r$$ and $$z$$ to get,
\begin{align*}x & = \rho \sin \varphi \cos \theta \\ y & = \rho \sin \varphi \sin \theta \\ z & = \rho \cos \varphi \end{align*}
Also note that since we know that $${r^2} = {x^2} + {y^2}$$ we get,
${\rho ^2} = {x^2} + {y^2} + {z^2}$
Converting points from Cartesian or cylindrical coordinates into spherical coordinates is usually done with the same conversion formulas. To see how this is done let’s work an example of each.
Example 1 Perform each of the following conversions.
1. Convert the point $$\displaystyle \left( {\sqrt 6 ,\frac{\pi }{4},\sqrt 2 } \right)$$ from cylindrical to spherical coordinates.
2. Convert the point $$\left( { - 1,1, - \sqrt 2 } \right)$$ from Cartesian to spherical coordinates.
Show All Solutions Hide All Solutions
a Convert the point $$\displaystyle \left( {\sqrt 6 ,\frac{\pi }{4},\sqrt 2 } \right)$$ from cylindrical to spherical coordinates. Show Solution
We’ll start by acknowledging that $$\theta$$ is the same in both coordinate systems and so we don’t need to do anything with that.
Next, let’s find$$\rho$$.
$\rho = \sqrt {{r^2} + {z^2}} = \sqrt {6 + 2} = \sqrt 8 = 2\sqrt 2$
Finally, let’s get $$\varphi$$. To do this we can use either the conversion for $$r$$ or $$z$$. We’ll use the conversion for $$z$$.
$z = \rho \cos \varphi \hspace{0.25in} \Rightarrow \hspace{0.25in}\cos \varphi = \frac{z}{\rho } = \frac{{\sqrt 2 }}{{2\sqrt 2 }}\hspace{0.25in} \Rightarrow \hspace{0.25in}\varphi = {\cos ^{ - 1}}\left( {\frac{1}{2}} \right) = \frac{\pi }{3}$
Notice that there are many possible values of $$\varphi$$ that will give $$\cos \varphi = \frac{1}{2}$$, however, we have restricted $$\varphi$$ to the range $$0 \le \varphi \le \pi$$ and so this is the only possible value in that range.
So, the spherical coordinates of this point will are $$\left( {2\sqrt 2 ,\frac{\pi }{4},\frac{\pi }{3}} \right)$$.
b Convert the point $$\left( { - 1,1, - \sqrt 2 } \right)$$ from Cartesian to spherical coordinates. Show Solution
The first thing that we’ll do here is find $$\rho$$.
$\rho = \sqrt {{x^2} + {y^2} + {z^2}} = \sqrt {1 + 1 + 2} = 2$
Now we’ll need to find $$\varphi$$. We can do this using the conversion for $$z$$.
$z = \rho \cos \varphi \hspace{0.25in} \Rightarrow \hspace{0.25in}\cos \varphi = \frac{z}{\rho } = \frac{{ - \sqrt 2 }}{2}\hspace{0.25in} \Rightarrow \hspace{0.25in}\varphi = {\cos ^{ - 1}}\left( {\frac{{ - \sqrt 2 }}{2}} \right) = \frac{{3\pi }}{4}$
As with the last parts this will be the only possible $$\varphi$$ in the range allowed.
Finally, let’s find $$\theta$$. To do this we can use the conversion for $$x$$ or $$y$$. We will use the conversion for $$y$$ in this case.
$\sin \theta = \frac{y}{{\rho \sin \varphi }} = \frac{1}{{2\left( {\frac{{\sqrt 2 }}{2}} \right)}} = \frac{1}{{\sqrt 2 }} = \frac{{\sqrt 2 }}{2}\hspace{0.5in} \Rightarrow \hspace{0.25in}\theta = \frac{\pi }{4}\,\,{\mbox{or}}\,\,\theta = \frac{{3\pi }}{4}$
Now, we actually have more possible choices for $$\theta$$ but all of them will reduce down to one of the two angles above since they will just be one of these two angles with one or more complete rotations around the unit circle added on.
We will however, need to decide which one is the correct angle since only one will be. To do this let’s notice that, in two dimensions, the point with coordinates $$x = - 1$$ and $$y = 1$$ lies in the second quadrant. This means that $$\theta$$ must be angle that will put the point into the second quadrant. Therefore, the second angle, $$\theta = \frac{{3\pi }}{4}$$, must be the correct one.
The spherical coordinates of this point are then $$\left( {2,\frac{{3\pi }}{4},\frac{{3\pi }}{4}} \right)$$.
Now, let’s take a look at some equations and identify the surfaces that they represent.
Example 2 Identify the surface for each of the following equations.
1. $$\rho = 5$$
2. $$\displaystyle \varphi = \frac{\pi }{3}$$
3. $$\displaystyle \theta = \frac{{2\pi }}{3}$$
4. $$\rho \sin \varphi = 2$$
Show All Solutions Hide All Solutions
a $$\rho = 5$$ Show Solution
First, think about what this equation is saying. This equation says that, no matter what $$\theta$$ and $$\varphi$$ are, the distance from the origin must be 5. So, we can rotate as much as we want away from the $$z$$-axis and around the $$z$$-axis, but we must always remain at a fixed distance from the origin. This is exactly what a sphere is. So, this is a sphere of radius 5 centered at the origin.
The other way to think about it is to just convert to Cartesian coordinates.
\begin{align*}\rho & = 5\\ {\rho ^2} & = 25\\ {x^2} + {y^2} + {z^2} & = 25\end{align*}
Sure enough a sphere of radius 5 centered at the origin.
b $$\displaystyle \varphi = \frac{\pi }{3}$$ Show Solution
In this case there isn’t an easy way to convert to Cartesian coordinates so we’ll just need to think about this one a little. This equation says that no matter how far away from the origin that we move and no matter how much we rotate around the $$z$$-axis the point must always be at an angle of $$\frac{\pi }{3}$$ from the $$z$$-axis.
This is exactly what happens in a cone. All of the points on a cone are a fixed angle from the $$z$$-axis. So, we have a cone whose points are all at an angle of $$\frac{\pi }{3}$$ from the $$z$$-axis.
c $$\displaystyle \theta = \frac{{2\pi }}{3}$$ Show Solution
As with the last part we won’t be able to easily convert to Cartesian coordinates here. In this case no matter how far from the origin we get or how much we rotate down from the positive $$z$$-axis the points must always form an angle of $$\frac{{2\pi }}{3}$$ with the $$x$$-axis.
Points in a vertical plane will do this. So, we have a vertical plane that forms an angle of $$\frac{{2\pi }}{3}$$ with the positive $$x$$-axis.
d $$\rho \sin \varphi = 2$$ Show Solution
In this case we can convert to Cartesian coordinates so let’s do that. There are actually two ways to do this conversion. We will look at both since both will be used on occasion.
Solution 1
In this solution method we will convert directly to Cartesian coordinates. To do this we will first need to square both sides of the equation.
${\rho ^2}{\sin ^2}\varphi = 4$
Now, for no apparent reason add $${\rho ^2}{\cos ^2}\varphi$$ to both sides.
\begin{align*}{\rho ^2}{\sin ^2}\varphi + {\rho ^2}{\cos ^2}\varphi & = 4 + {\rho ^2}{\cos ^2}\varphi \\ {\rho ^2}\left( {{{\sin }^2}\varphi + {{\cos }^2}\varphi } \right) & = 4 + {\rho ^2}{\cos ^2}\varphi \\ {\rho ^2} & = 4 + {\left( {\rho \cos \varphi } \right)^2}\end{align*}
Now we can convert to Cartesian coordinates.
\begin{align*}{x^2} + {y^2} + {z^2} & = 4 + {z^2}\\ {x^2} + {y^2} & = 4\end{align*}
So, we have a cylinder of radius 2 centered on the $$z$$-axis.
This solution method wasn’t too bad, but it did require some not so obvious steps to complete.
Solution 2
This method is much shorter, but also involves something that you may not see the first time around. In this case instead of going straight to Cartesian coordinates we’ll first convert to cylindrical coordinates.
This won’t always work, but in this case all we need to do is recognize that $$r = \rho \sin \varphi$$ and we will get something we can recognize. Using this we get,
\begin{align*}\rho \sin \varphi & = 2\\ r & = 2\end{align*}
At this point we know this is a cylinder (remember that we’re in three dimensions and so this isn’t a circle!). However, let’s go ahead and finish the conversion process out.
\begin{align*}{r^2} & = 4\\ {x^2} + {y^2} & = 4\end{align*}
So, as we saw in the last part of the previous example it will sometimes be easier to convert equations in spherical coordinates into cylindrical coordinates before converting into Cartesian coordinates. This won’t always be easier, but it can make some of the conversions quicker and easier.
The last thing that we want to do in this section is generalize the first three parts of the previous example.
\begin{align*}\rho & = a\hspace{0.5in}{\mbox{sphere of radius }}a{\mbox{ centered at the origin}}\\ \varphi & = \alpha \hspace{0.5in}{\mbox{cone that makes an angle of }}\alpha {\mbox{ with the positive }}z - {\mbox{axis}}\\ \theta & = \beta \hspace{0.5in}{\mbox{vertical plane that makes an angle of }}\beta {\mbox{ with the positive }}x - {\mbox{axis}}\end{align*} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.987589418888092, "perplexity": 216.76000706427357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948871.42/warc/CC-MAIN-20230328201715-20230328231715-00053.warc.gz"} |
https://www.physicsforums.com/threads/proof-everywhere-tangent-to-curve.233476/ | # Proof: Everywhere Tangent to Curve?
1. May 6, 2008
### bakra904
Proof: Everywhere Tangent to Curve??
If the function v depends on x and y, v(x,y) and we know there exists some function psi(x,y) such that
vx = partial w.r.t (y) of psi
vy= -(partial w.r.t (x) of psi)
show that the curves psi(x,y) = constant, are everywhere tangent to v.
Last edited: May 6, 2008
2. May 6, 2008
### DavidWhitbeck
Usually you are supposed to show effort to get there, but I think this is a case where either you get it or you don't.
$$\nabla \psi$$ is normal to surfaces of constant $$\psi$$ and $$v\cdot \nabla \psi = 0$$. Fill in the rest.
3. May 6, 2008
### bakra904
Thanks a bunch! I'm a new poster and did not know about the effort rule...I had worked on it but did not post what I had worked on.
I was trying to use the fact that if v = $$\nabla \times$$ $$\psi$$,
then that would imply that $$\psi$$ is a stream function, which in cartesian co-ordinates would reduce to:
Vx = $$\frac{\partial\psi}{\partial y}$$ and Vy = - $$\frac{\partial\psi}{\partial x}$$
which is basically what the problem had to begin with. Then, since I know that $$\psi$$ (x,y) is a stream function, doesn't it have to be tangent to v by virtue of the fact that its a streamline?
4. May 6, 2008
### DavidWhitbeck
Are you trying to curl a scalar field??
5. May 7, 2008
### bakra904
oh right...i overlooked that part. thanks!
6. May 9, 2008
### bakra904
so basically $$v. \nabla\psi = 0$$ which proves that $$v$$ and $$\nabla\psi$$ are perpendicular (since their dot product is 0) and so $$\psi$$ must be tangent to $$v$$
Similar Discussions: Proof: Everywhere Tangent to Curve? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.899485170841217, "perplexity": 1136.3579528342952}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609837.56/warc/CC-MAIN-20170528141209-20170528161209-00493.warc.gz"} |
http://mathhelpforum.com/algebra/117141-simplify.html | # Math Help - Simplify
1. ## Simplify
Hi
I need to simplify the following, giving the result without fractional indices:
$\frac{(x^2-1)^2 \sqrt{x+1}}{(x-1)^{3/2}}$
The solution: $(x+1)^2 \sqrt{x^2-1}$
I don't see how they obtain this solution, can somebody give me a few steps?
Thank you
2. Originally Posted by Coxx
Hi
I need to simplify the following, giving the result without fractional indices:
$\frac{(x^2-1)^2 \sqrt{x+1}}{(x-1)^{3/2}}$
The solution: $(x+1)^2 \sqrt{x^2-1}$
I don't see how they obtain this solution, can somebody give me a few steps?
Thank you
simplify:
$\frac{(x^2-1)^2 \sqrt{x+1}}{(x-1)^{3/2}}$
----------------------------------------------------------
remember:
$(a^2-b^2)=(a-b)(a+b)$
so,
$(x^2-1)^2= ((x-1)(x+1))^2=(x+1)^2 (x-1)^2$
-----------------------------------------------------------
$\frac{(x+1)^2 (x-1)^2 \sqrt{x+1}}{(x-1)^{3/2}}$
u can cancel (x-1)...
3. Hello, Coxx!
What a strange way to leave the answer . . .
Simplify: . $\frac{(x^2-1)^2 \sqrt{x+1}}{(x-1)^{3/2}}$
The solution: $(x+1)^2 \sqrt{x^2-1}$
We have: . $\frac{\bigg[(x-1)(x+1)\bigg]^2\sqrt{x+1}}{(x-1)^{\frac{3}{2}}}$ . $= \;\frac{(x-1)^2(x+1)^2(x+1)^{\frac{1}{2}}}{(x-1)^{\frac{3}{2}}}$
. . . . $=\; \frac{(x-1)^2}{(x-1)^{\frac{3}{2}}} \cdot\frac{(x+1)^2(x+1)^{\frac{1}{2}}}{1} \;=\;(x-1)^{\frac{1}{2}}(x+1)^{\frac{1}{2}}(x+1)^2$
. . . . $=\;\bigg[(x-1)(x+1)\bigg]^{\frac{1}{2}}(x+1)^2 \;=\;(x^2-1)^{\frac{1}{2}}(x+1)^2$
. . . . $= \;\sqrt{x^2-1}\,(x+1)^2$
4. hi,
So obvious many thanks
I just started to improve my maths skills so i'm really a newbie in this field!
greets | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9906808137893677, "perplexity": 1120.6938908751931}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257832942.23/warc/CC-MAIN-20160723071032-00136-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://socratic.org/questions/how-do-you-prove-the-statement-lim-as-x-approaches-6-for-x-4-3-9-2-using-the-eps#251435 | Calculus
Topics
# How do you prove the statement lim as x approaches 6 for ((x/4)+3) = 9/2 using the epsilon and delta definition?
Apr 8, 2016
Please see the explanation section below.
#### Explanation:
Preliminary analysis
${\lim}_{x \rightarrow \textcolor{g r e e n}{a}} \textcolor{red}{f \left(x\right)} = \textcolor{b l u e}{L}$ if and only if
for every $\epsilon > 0$, there is a $\delta > 0$ such that:
for all $x$, $\text{ }$ if $0 < \left\mid x - \textcolor{g r e e n}{a} \right\mid < \delta$, then $\left\mid \textcolor{red}{f \left(x\right)} - \textcolor{b l u e}{L} \right\mid < \epsilon$.
So we want to make $\left\mid {\underbrace{\textcolor{red}{\left(\frac{x}{4} + 3\right)}}}_{\textcolor{red}{f \left(x\right)}} - {\underbrace{\textcolor{b l u e}{\frac{9}{2}}}}_{\textcolor{b l u e}{L}} \right\mid$ less than some given $\epsilon$ and we control (through our control of $\delta$) the size of $\left\mid x - {\underbrace{\textcolor{g r e e n}{6}}}_{\textcolor{g r e e n}{a}} \right\mid$
Look at the thing we want to make small:
$\left\mid \left(\frac{x}{4} + 3\right) - \frac{9}{2} \right\mid = \left\mid \frac{x}{4} - \frac{3}{2} \right\mid = \left\mid \frac{x - 6}{4} \right\mid = \frac{\left\mid x - 6 \right\mid}{\left\mid 4 \right\mid} = \frac{\left\mid x - 6 \right\mid}{4}$
And there's the thing we control, in the numerator!
We can make $\frac{\left\mid x - 6 \right\mid}{4} < \epsilon$ by making $\left\mid x - 6 \right\mid < 4 \epsilon$.
So we will choose $\delta = 4 \epsilon$. (Any lesser $\delta$ would also work.)
(Detail: if $\left\mid x - 6 \right\mid < 4 \epsilon$, then we can multiply on both sides by the positive number $\frac{1}{4}$ to get $\frac{\left\mid x - 6 \right\mid}{4} < \epsilon$.)
Now we need to actually write up the proof:
Proof
Given $\epsilon > 0$, choose $\delta = 4 \epsilon$. $\text{ }$ (note that $\delta$ is also positive).
Now for every $x$ with $0 < \left\mid x - 6 \right\mid < \delta$, we have
$\left\mid f \left(x\right) - \frac{9}{2} \right\mid = \left\mid \left(\frac{x}{4} + 3\right) - \frac{9}{2} \right\mid = \left\mid \frac{x - 6}{4} \right\mid = \frac{\left\mid x - 6 \right\mid}{4} < \frac{\delta}{4}$
[Detail if $\left\mid x - 6 \right\mid < \delta$, we can conclude that $\frac{\left\mid x - 6 \right\mid}{4} < \frac{\delta}{4}$. -- we usually do not mention this, but leave it to the reader. See below.]
And $\frac{\delta}{4} = \frac{4 \epsilon}{4} = \epsilon$
Therefore, with this choice of delta, whenever $0 < \left\mid x - 6 \right\mid < \delta$, we have $\left\mid f \left(x\right) - \frac{9}{2} \right\mid < \epsilon$
So, by the definition of limit, ${\lim}_{x \rightarrow 6} \left(\frac{x}{4} + 3\right) = \frac{9}{2}$.
We can condense a bit
for every $x$ with $0 < \left\mid x - 6 \right\mid < \delta$, we have
$\left\mid f \left(x\right) - \frac{9}{2} \right\mid = \left\mid \left(\frac{x}{4} + 3\right) - \frac{9}{2} \right\mid$
$= \left\mid \frac{x - 6}{4} \right\mid$
$= \frac{\left\mid x - 6 \right\mid}{4}$
$< \frac{\delta}{4} = \frac{4 \epsilon}{4} = \epsilon$.
So, $\left\mid f \left(x\right) - \frac{9}{2} \right\mid < \epsilon$.
##### Impact of this question
2031 views around the world | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 39, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9858829975128174, "perplexity": 372.88643605824035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570913.16/warc/CC-MAIN-20220809064307-20220809094307-00233.warc.gz"} |
https://www.physicsforums.com/threads/motion-in-two-dimensions-question.74029/ | # Motion in two dimensions question
1. May 2, 2005
### jhson114
A car is on a circular oval-shaped track and at point F, the car is at rest. about 1/8th of the way around the oval from point F, there's a point G. Now i dont understand why the following two statements are incorrect.
1. The acceleration at point F is zero. As point G becomes closer and closer to point F, the change in velocity vector becomes smaller and smaller, eventually it becomes zero.
2. The acceleration at point F is perpendicular to the curve.
2 is incorrect because acceleration at point F is zero since the car is at rest. but i dont understand why 1 is incorrect. it seems right to me..
2. May 2, 2005
### StatusX
1 is incorrect and 2 depends on the forces pushing the car around. The acceleration is the instantaneous change in velocity. Since the car is turning, its velocity vector is changing direction, and so there is an acceleration. Now, if the speed of the car is constant, there is never any acceleration parallel to its motion, and so the acceleration is perpendicular at all points, but I don't know if this is what's going on here.
3. May 2, 2005
### OlderDan
The answer to 1 makes sense if you assume the car is moving through point G with some speed. The statement of the problem appears to be incomplete.
4. May 2, 2005
### jhson114
the car starts from rest at point F and speeds up continuously as it moves around an oval. the car is in motion when it passes the point G.
To me, as the point G is moved closer and closer to the F, the velocity vector will indeed get smaller and eventually reach zero when it is at point G. So i dont see what's wrong with statement 1.
for statement 2, since the car is at rest, how can it have a perpendicular acceleration?
5. May 2, 2005
### StatusX
I don't understand what you mean by "as G gets closer to F." I thought you said F was 1/8 of the way around the oval from G. Is the oval shrinking? Or does F move with the car?
If the car is continuously sped up, the acceleration is never 0. How could it be?
6. May 2, 2005
### KingNothing
I think they mean that G is just a point, and it's distance from F can be varied, although it start 1/8 of the track from F. But again, that's just how I interpreted it, we can all agree they stated whatever they meant to say very poorly.
7. May 2, 2005
### juvenal
Velocity and acceleration are two different things, since dv/dt = a. A zero instantaneous velocity tells you nothing about your instantaneous acceleration.
8. May 2, 2005
### jhson114
its like what KingNothing said. its just a poing on the oval. if you move this point closer and closer to F, does the velocity eventually get zero??
9. May 2, 2005
### juvenal
I actually don't understand why (2) is wrong. Is the oval perfectly circular? If not, then there will be components of the acceleration that are parallel to the curve. EDIT: actually you'll need a parallel component even in the circular case to get the car moving.
(1) is just talking about calculating the derivative of v at point F.
acceleration = lim (v(x+delta) - v(x))/delta, as delta approaches zero.
Have you studied calculus before? Think about this: if I drop you out of a helicopter with no horizontal velocity, you will be at rest the moment I drop you, but your instantaneous acceleration will be g.
Last edited: May 2, 2005
10. May 2, 2005
### OlderDan
I think you have #2 figured out. There may be an acceleration even if the car is at rest, but if so it will be in the direction of motion. There can be no perpendicular acceleration with the car at rest.
For number 1, the key point is that the car is moving in all cases. The statement that G gets closer to F is an obscure way of asking "what if the dimensions of the track are decreasing?" If the dimesions are decreasing, then the radius of curvature of any part of the track is decreasing. If you have some velocity as before, and the radius of curvature is approaching zero, you still have the same change in velocity from one side of the track to the other or between any two points near G that have the same velocity direction as on the original track. The change is happening faster, so if you had been asked about the RATE of change of velocity (acceleration) instead of the change itself, that would approach infinity. Besides, as someone else noted, being at rest a point F does not mean there is no acceleration there.
11. May 2, 2005
### juvenal
Why is that? It's not impossible to have a radial force component.
12. May 2, 2005
### StatusX
The component of acceleration parrallel to the velocity changes the particle's speed while the component perpendicular changes its direction. It's true, at the moment the particle begins to move, the speed can only increase, it cannot change direction because it has no direction. The speed is increasing along the direction of the track, and the acceleration will be parallel to the track. Juvenal, if there is a net radial force at the moment the particle is at rest, it will begin to move towards the center of the circle. But since the track is there, it provides an opposing force and this doesn't happen. Once the speed builds up, a radial force is required to keep the object in circular motion.
13. May 2, 2005
### juvenal
I don't think the speed has to build up - it just has to be non-zero, which happens almost immediately. I think it would work if you started with a radial force, but also accelerated forward at time t_0, as I mentioned in my edit above.
14. May 2, 2005
### StatusX
Any radial force beyond the necessary centripetal force will cause the particle to accelerate towards the center of the circle. The centripetal force is mv^2/r, so even a small net radial force applied immediately after the particle begins to move (very small v) will cause this to happen. Keep in mind the centripetal force is supplied by the walls of the track, and changes as the particle moves faster and is pushed more strongly into the wall.
15. May 2, 2005
### juvenal
Right - but I wasn't talking about an extra radial force beyond the centripetal, except exactly at time t=0. Anyhow, a small net radial force applied at ANY time will cause acceleration towards the center.
16. May 2, 2005
### StatusX
I'm sorry, I meant it will cause it to move towards the center, in that the radius will start to shrink. If there was any radial force when the particle is at rest, it will begin to move towards the center. I think your problem is that you know the velocity is increasing, and a radial force will be necessary any time after t=0, and so you think there must be a force at t=0. Its true the radial force will only be zero instantaneously, but if it was some finite number greater than 0, there would be a finite time for which it would be greater than mv^2/r, and then there would be some finite distance by which the radius would shrink. It's not like the force is 0 when the particle is at rest and then switches to some finite number the instant it starts to move.
17. May 2, 2005
### OlderDan
It is a car speeding up from rest on a track. Any piece of the track can be considered to have a radius of curvature perpendicular to the direction of the track. If there is no velocity at any moment in time, there is no acceleration in the direction of that radius by a = mv^2/r. If v = 0 and then a =0
If by radial acceleration you mean toward the center of the oval, then there could be a radial component, but the problem was carefully worded to talk about acceleration perpendicular to the curve of the track, not towards the center of the oval.
Similar Discussions: Motion in two dimensions question | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8409566283226013, "perplexity": 338.94720351105263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105927.27/warc/CC-MAIN-20170819220657-20170820000657-00687.warc.gz"} |
http://paperity.org/p/11631403/fractional-planks | # Fractional Planks
Discrete & Computational Geometry, Jan 2002
In 1950 Bang proposed a conjecture which became known as “the plank conjecture”: Suppose that a convex set S contained in the unit cube of R n and touching all its sides is covered by planks. (A plank is a set of the form {(x 1 , ..., x n ): x j ∈ I} for some j ∈ {1, ...,n} and a measurable subset I of [0, 1]. Its width is defined as |I| .) Then the sum of the widths of the planks is at least 1 . We consider a version of the conjecture in which the planks are fractional. Namely, we look at n -tuples f 1 , ..., f n of nonnegative-valued measurable functions on [0,1] which cover the set S in the sense that ∑ f j (x j ) ≥ 1 for all (x 1 , ..., x n )∈ S . The width of a function f j is defined as ∈t 0 1 f j (x) dx . In particular, we are interested in conditions on a convex subset of the unit cube in R n which ensure that it cannot be covered by fractional planks (functions) whose sum of widths (integrals) is less than 1 . We prove that this (and, a fortiori, the plank conjecture) is true for sets which touch all edges incident with two antipodal points in the cube. For general convex bodies inscribed in the unit cube in R n we prove that the sum of widths must be at least 1/n (the true bound is conjectured to be 2/n).
This is a preview of a remote PDF: https://link.springer.com/content/pdf/10.1007%2Fs00454-001-0088-x.pdf
Aharoni, Holzman, Krivelevich, Meshulam. Fractional Planks, Discrete & Computational Geometry, 2002, 585-602, DOI: 10.1007/s00454-001-0088-x | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9750295877456665, "perplexity": 608.4944318963579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155702.33/warc/CC-MAIN-20180918205149-20180918225149-00506.warc.gz"} |
https://en.m.wikipedia.org/wiki/SNO%2B | # SNO+
A rope basket anchors the acrylic vessel
SNO+ is a physics experiment designed to search for neutrinoless double beta decay, with secondary measurements of proton–electron–proton (pep) solar neutrinos, geoneutrinos from radioactive decays in the Earth, and reactor neutrinos. It is under construction (as of February 2017) using the underground equipment already installed for the former Sudbury Neutrino Observatory (SNO) experiment at SNOLAB. It could also observe supernovae neutrinos if a supernova occurs in our galaxy.
## Physics Goals
The primary goal of the SNO+ detector is the search for neutrinoless double beta decay, specifically with regard to decay of ${\displaystyle ^{130}Te}$ [1], to understand if a neutrino is its own anti-particle (i.e. a majorana fermion). Secondary physics goals include measurement of neutrinos or antineutrinos from:
## Testing and construction
The previous experiment, SNO, used water within the sphere and relied on Cherenkov radiation interaction. The SNO+ experiment will use the sphere filled with linear alkyl benzene to act as a liquid scintillator and target material.[2] The sphere is surrounded with photomultiplier tubes and the assembly is floated in water and the sphere held down against the resulting buoyant forces by ropes. Testing (filled with water) is expected to begin early 2016, with full operation with liquid a few months after that, and Tellurium loading begins in 2017.[1]
A neutrino interaction with this liquid produces several times more light than an interaction in a water Cherenkov experiment such as the original SNO experiment or Super-Kamiokande. The energy threshold for the detection of neutrinos can, therefore, be lower, and proton–electron–proton solar neutrinos (with an energy of 1.44 MeV) can be observed. In addition, a liquid scintillator experiment can detect anti-neutrinos like those created in nuclear fission reactors and the decay of thorium and uranium in the earth.
Many tons of Tellurium-130, a double beta decaying material, will be added to the experiment. This will make SNO+ the largest experiment to study neutrinoless double beta decay.
Earlier proposals placed more emphasis on neutrino observations. The current emphasis on neutrinoless double beta decay is because the interior of the acrylic vessel has been significantly contaminated by radioactive daughter products of the radon gas that is common in the mine air. These could leach into the scintillator, where some would be removed by the filtration system, but the remainder may interfere with low-energy neutrino measurements.[3] The neutrinoless double beta decay observations are not affected by this.[3]
The project received funding for initial construction from NSERC in April 2007. As of early 2013, the cavity had been refurbished and re-sealed to new cleanliness standards, more stringent than for the original SNO due to the new experiment's greater sensitivity.
The main civil engineering challenge is that the current SNO vessel is supported by a series of ropes, to prevent the weight of the heavy water inside from sinking it in the surrounding normal water. The proposed liquid scintillator (linear alkyl benzene) is lighter than water, and must be held down instead, but still without blocking the view of its interior. The existing support rope attachment points, cast into the acrylic sphere's equator, are not suitable for upside-down use.
## Computing
The collaboration is investigating the use of grid resources to deliver the computing power needed by the experiment. This is after the success of the LHC Computing Grid (wLCG) used by the LHC experiments. The SNO+ VO has been using resources provided by GridPP.[4]
## References
1. ^ a b Andringa, S.; et al. (SNO+ Collaboration) (2015). "Current Status and Future Prospects of the SNO+ Experiment". Advances in High Energy Physics. 2016: 1–21. arXiv:1508.05759. Bibcode:2015arXiv150805759S. doi:10.1155/2016/6194250.
2. ^ Lasserre, T.; Fechner, M.; Mention, G.; Reboulleau, R.; Cribier, M.; Letourneau, A.; Lhuillier, D. (2010). "SNIF: A Futuristic Neutrino Probe for Undeclared Nuclear Fission Reactors". arXiv:1011.3850 [nucl-ex].
3. ^ a b Kaspar, Jarek; Biller, Steve (10 September 2013). SNO+ with Tellurium. 13th International Conference on Topics in Astroparticle and Underground Physics. Asilomar, California. p. 21. Retrieved 2015-08-18.
4. ^ "Grid Computing". SNO+. Retrieved 2014-08-05. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8332061767578125, "perplexity": 3781.2713925964104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259327.59/warc/CC-MAIN-20190526165427-20190526191427-00328.warc.gz"} |
http://tex.stackexchange.com/questions/97247/what-is-the-correct-way-to-generate-notation-for-a-finite-set-using-for-loop-in?answertab=active | # What is the correct way to generate notation for a finite set using for loop in Latex?
I am trying to create a command that will take an index n as a parameter, and generate the the expanded form of this {a_1, a_2, a_3,...,a_n}.
For my first attempt, I tried to use the syntax suggested in pgffor: Special treatment for last item in \foreach-list and http://stackoverflow.com/questions/2561791/iteration-in-latex to create code that would output a_1 if the index was 1, and output a "," followed by a_n, if the index was not 1.
\usepackage{pgffor}
\newcommand{\sigi}[1]{\foreach \n [count=\ni] in {1,...#1}{%
\ifnum\ni=1%
\sigma_#1%
\else%
,\sigma_#1%
}}
Unfortunately, I have worked with neither LaTeX for loops nor the pgffor package before, so I am unclear about how to interpret the compile error that occurs when I try to use this command as follows:
$\sigi{1}$
Error message:
! Undefined control sequence.
\pgffor@count@@parse ...mathresult }\pgfmathparse
{int(#3-1)}\let #1=\pgfmat...
l.363 \sigi{1}
-
The idea was ok, but there were several mistakes in your code. Here's a fixed version:
\documentclass{article}
\usepackage{pgffor,pgfmath}
\begin{document}
\newcommand{\sigi}[1]{\foreach \ni in {1,...,#1}{%
\ifnum\ni=1
\sigma_{\ni}%
\else%
,\sigma_{\ni}%
\fi
}}
$\sigi{12}$
\end{document}
The problems were:
• missing pgfmath package
• missing comma after ...
• superfluous % after \ifnum\ni=1
• the subscript of \sigma should be \ni, not #1, and it should be enclosed in braces (important when #1>9)
• missing \fi
• \foreach \n [count=\ni] was ok, but \foreach \ni is simpler
-
The superfluity of the % is somewhat minor, no? :-) (and your last point is quite true, but your corrected version does not take it into account) – Mariano Suárez-Alvarez Feb 7 '13 at 21:00
Regarding %: I was just trying to be thorough. A space (or \relax) after a numeral (which % inhibits) makes sure that TeX stops expanding the number. In the above case, TeX tried to expand \sigma, which was truly not problematic --- but I think that writing space after numerals is still a good habit to develop, because once the lack of it does cause trouble, you'll be debugging for hours ... trust me, I know ;-) – Sašo Živanović Feb 7 '13 at 21:24
Thanks for reminding me of the last point ... I did some asynchronous processing :-) – Sašo Živanović Feb 7 '13 at 21:25
Here's a rather customizable version using LaTeX3.
\documentclass{article}
\usepackage{xparse}
\ExplSyntaxOn
% the user level macro
\NewDocumentCommand{\sigi}{ s O{a} D<>{2} m }
{
\{ % initial delimiter
\IfBooleanTF{#1}
{
\merlin_sigi_nodots:nn { #2 } { #3 }, \dots, #2\sb{#4}
}
{
\merlin_sigi:nnn { #2 } { #3 } { #4 }
}
\} % final delimiter
}
% the inner main function that decides if dots are necessary or not
\cs_new_protected:Npn \merlin_sigi:nnn #1 #2 #3
{
\int_compare:nTF { #3 < #2 }
{\merlin_sigi_nodots:nn { #1 } { #3 } }
{
\int_compare:nTF { #3 <= #2 + 2 }
{ \merlin_sigi_nodots:nn { #1 } { #3 } }
{ \merlin_sigi_dots:nnn { #1 } { #2 } { #3 } }
}
}
% the loop for printing the sequence when no dots are required
\cs_new_protected:Npn \merlin_sigi_nodots:nn #1 #2
{
\int_step_inline:nnnn { 1 } { 1 } { #2 - 1 } { #1\sb{##1}, }
#1\sb{#2}
}
% the loop for printing the sequence when dots are required
\cs_new_protected:Npn \merlin_sigi_dots:nnn #1 #2 #3
{
\int_step_inline:nnnn { 1 } { 1 } { #2 } { #1\sb{##1}, }
\dots,
#1\sb{#3}
}
\ExplSyntaxOff
\begin{document}
$\sigi{3}$
$\sigi{4}$
$\sigi[b]{5}$
$\sigi<3>{10}$
$\sigi[c]<4>{20}$
$\sigi<5>{4}$
$\sigi{1}$
$\sigi{2}$
\bigskip
$\sigi*{n}$
$\sigi*[b]{m}$
$\sigi*[c]<3>{k}$
\end{document}
You can specify both the variable name (default a) and the number of initial indexed elements (default 2). If the number given as argument is less than the default number or one or two bigger, the full list is printed, as a list
{a1,a2,…,a4}
would be rather awkward.
Specification
The \sigi macro has
• One optional argument (in brackets) representing the variable name (default a)
• One optional argument (between < and >) representing the number of elements spelled out at the beginning
• A mandatory argument (in braces, as usual) representing the final number
Non positive integer input in the second optional argument or in the mandatory argument will cause errors.
However the macro admits also a *-variant for a "generic" last argument, as shown in the last three lines of input. The syntax is the same for the optional arguments; the mandatory argument can be anything.
-
Cool! I wrote a rather simpler LaTeX3 version myself ... just for fun (and learning!). A question, however: isn't the D argument specifier supposed to mean "do not use"? :-) – Sašo Živanović Feb 7 '13 at 21:53
@SašoŽivanović It's "do not use" in functions (that is after a colon in the "inner" macros). For \NewDocumentCommand it specifies an optional argument delimited by different characters than brackets. – egreg Feb 7 '13 at 21:56
new data overload ... thanks for explaining! – Sašo Živanović Feb 8 '13 at 2:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9551664590835571, "perplexity": 3640.896738149889}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678693548/warc/CC-MAIN-20140313024453-00072-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://www.cfd-online.com/Forums/openfoam-programming-development/104750-question-about-ueqn-sonicfoam-print.html | CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
- OpenFOAM Programming & Development (http://www.cfd-online.com/Forums/openfoam-programming-development/)
- - A question about UEqn sonicFoam (http://www.cfd-online.com/Forums/openfoam-programming-development/104750-question-about-ueqn-sonicfoam.html)
lfgmarc July 14, 2012 20:25
A question about UEqn sonicFoam
Hi, I have a question about the UEqn implemented into sonicFoam, the code of the UEqn is:
Code:
fvVectorMatrix UEqn ( fvm::ddt(rho, U) + fvm::div(phi, U) + turbulence->divDevRhoReff(U) );
My question is about the term that is computed from the turbulence model, i.e if K-epsilon model is selected, this term results:
Code:
tmp<fvVectorMatrix> kEpsilon::divDevRhoReff(volVectorField& U) const { return ( - fvm::laplacian(muEff(), U) - fvc::div(muEff()*dev2(fvc::grad(U)().T())) ); }
Initially I thought that it corresponded to
but after examining the definition of dev2 in src/OpenFOAM/primitives/Tensor/TensorI.H
I found the following:
is the rate-of-strain tensor given by
but according to the definition of DivDevRhoReff
And manipulating the expression (1) I conclude that the term
Code:
+ turbulence->divDevRhoReff(U)
does not correspond to (1).
If someone can help me with my confusion about this term, I was very thankful
haze_1986 January 24, 2013 13:26
Quote:
Originally Posted by lfgmarc (Post 371474) Hi, I have a question about the UEqn implemented into sonicFoam, the code of the UEqn is: Code: fvVectorMatrix UEqn ( fvm::ddt(rho, U) + fvm::div(phi, U) + turbulence->divDevRhoReff(U) ); My question is about the term that is computed from the turbulence model, i.e if K-epsilon model is selected, this term results: Code: tmp kEpsilon::divDevRhoReff(volVectorField& U) const { return ( - fvm::laplacian(muEff(), U) - fvc::div(muEff()*dev2(fvc::grad(U)().T())) ); } Initially I thought that it corresponded to but after examining the definition of dev2 in src/OpenFOAM/primitives/Tensor/TensorI.H I found the following: is the rate-of-strain tensor given by but according to the definition of DivDevRhoReff And manipulating the expression (1) I conclude that the term Code: + turbulence->divDevRhoReff(U) does not correspond to (1). If someone can help me with my confusion about this term, I was very thankful
Indeed, I was thinking it should be dev instead of dev2 as well if the explanation by poplar in http://www.cfd-online.com/Forums/ope...tml#post388688 is correct.
I hope I can find out what is wrong.
jens_klostermann February 15, 2013 10:07
Hi,
the term
is equal to
But I miss the term
in
Code:
turbulence->divDevRhoReff(U)
maybe someone can comment on this hidden term?
Jens
sasanghomi June 14, 2013 15:36
Hi jens
Did you find this term ? I have a similar problem .
Thanks
Sasan.
jens_klostermann June 17, 2013 09:57
No, I didn't habe time to work further on this problem.
sharonyue December 24, 2014 16:48
Quote:
Originally Posted by jens_klostermann (Post 434460) No, I didn't habe time to work further on this problem.
I found a similar thing on this: http://www.cfd-online.com/Forums/ope...oam-2-2-x.html
pgiannatselis February 1, 2015 17:49
Reading http://www.cfd-online.com/Forums/ope...ivdevreff.html I concluded that this term is zero if the fluid is incompressible and near convergence it doesn't concern us. The term remains for stability reasons when we are still far from the solution. If somebody more experienced confirms my thinking, the problem is solved.
I don't feel able to tell you that my point is 100% correct.
All times are GMT -4. The time now is 03:37. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.804303765296936, "perplexity": 4774.4010778765805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398465089.29/warc/CC-MAIN-20151124205425-00136-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://uh-ir.tdl.org/uh-ir/browse?type=subject&value=bacteria | Now showing items 1-1 of 1
• #### Mathematical modeling the effect of antimicrobials on heterogeneous bacterial populations
(2012-08)
This dissertation comprises of six chapters with chapters 2-4 being individual case studies, each case study corresponding to a project involving use of mathematical modeling to characterize the effect of antimicrobials ... | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8923361301422119, "perplexity": 3858.6891547222936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592650.53/warc/CC-MAIN-20180721164755-20180721184755-00365.warc.gz"} |
http://docplayer.net/21447067-The-mole-6-022-x-10-23.html | # The Mole x 10 23
Size: px
Start display at page:
Transcription
1 The Mole x 10 23
2 Background: atomic masses Look at the atomic masses on the periodic table. What do these represent? E.g. the atomic mass of Carbon is (atomic # is 6) We know there are 6 protons and 6 neutrons in C-12 Protons and neutrons have roughly the same mass of about 1.66 x10-24 grams Set this mass equal to 1 amu (atomic mass units). Carbon-12 thus has a mass of 12 amu. The atomic mass shown on the Periodic Table is a weighted average of masses of all isotopes of an element But more useful to associate atomic mass with a mass in grams.
3 The Mole Scientist set out to develop a basic unit of measurement to convert from atomic mass to grams. Used Carbon-12 to set the standard. Scientists set one mole = the number of atoms of C-12 in 12 grams of C-12. Experiments show that there are x carbon atoms in 12 grams of Carbon x things is a mole of any thing! Also known as Avogadro s number
4 Who wants to be a Mollionaire? Q: how long would it take to spend a mole of \$1 bills if they were being spent at a rate of one billion per second? A: \$ 6.02 x bills x sec/\$1,000,000,000 bills = 6.02 x payments = 6.02 x seconds 6.02 x seconds / 60 = x minutes x minutes / 60 = x hours x hours / 24 = x 10 9 days x 10 9 days / = x 10 7 years A: It would take 19 million years
5 Molar mass The mass of one mole of something is called its molar mass. Since one atom of C-12 = 12 amu and one mole of C-12 = 12 grams, we can use atomic masses to directly convert from amu/atom to grams/mol. For an element, molar mass = Atomic Mass from Periodic Table, but use g/mole rather than amu Example: Lithium s atomic mass = 6.94 amu Thus, 1 mole of Li = 6.94 g Li This is expressed a molar mass of 6.94 g/mol Sometimes referred to as gram formula mass
6 Molar mass What are the following element molar masses? S =32.06 g/mol Ag = g/mol For a compound, molar mass = sum of molar masses of each element times the number of atoms of that element in the compound. CO 2 = g/mol C x 1 = x 1 = O x 2 = x 2 = 12.01g/mol 32.00g/mol g/mol
7 Molar Mass Calculations Determine the mass in grams of: mol of Au mol of Zn Determine the number of moles of: g of Cu g of Na 2.0 mol Au x g/mol = 3.9 x 10 2 g 4.37 mol Zn x 65.4 g/mol = 2.85 x 10 2 g 254g Cu x mol/63.5g = 4.0 mol Cu 12g Na x mol/23.0g = 0.52 mol Na
8 Cu 3 (BO 3 ) 2 Molar mass = g/mol Cu x 3 = x 3 = g/mol B x 2 = x 2 = g/mol O x 6 = x 6 = g/mol g/mol Calculate molar masses (to 2 decimal places) CaCl g/mol (Ca x 1, Cl x 2) (NH 4 ) 2 CO g/mol (N x 2, H x 8, C x 1, O x 3) O g/mol (O x 2) C 6 H 12 O g/mol (C x 6, H x 12, O x 6)
9 Comparing sugar (C 12 H 22 O 11 ) & H 2 O Same volume? mass? # of moles? # of molecules? # of atoms? 1 gram each No, they have different densities. Yes, that s what grams are! No, they have dif. molar masses No, they have dif. molar masses No 1 mol each No, molecules have dif. sizes. No, molecules have dif. masses Yes. Yes (6.02 x in each) No, sugar has more (45:3 ratio)
10 Converting between grams and moles If we are given the # of grams of a compound we can determine the # of moles, & vise-versa In order to convert from one to the other you must first calculate the molar mass (g/mol) Then use dimensional analysis to convert: moles to grams: mol x g/mol = g grams to moles: g x mol/g = mol This can be represented in an equation triangle mol g x g/mol
11 Converting between grams and moles First: Determine the compound s molar mass (g/mol) using the Periodic Table. mol g x g/mol Formula g/mol g HCl mol (n) 0.25 H 2 SO NaCl Cu Equation g= g/mol x mol mol= g x mol/g g= g/mol x mol mol= g x mol/g
12 Empirical and molecular formula Consider NaCl (ionic) vs. H 2 O 2 (covalent) Na Cl Na Cl Cl Na Cl Na Chemical formulas are either simplest (a.k.a. empirical ) or molecular (all bonded atoms). Ionic compounds are always expressed as the simplest ratio of the ions (formula units like NaCl or Li 2 O). Thus ionic formulas are always empirical. Covalent compounds can be shown as either molecular formulas (e.g. H 2 O 2 ) or empirical (e.g. HO)
13 % composition Percent Composition: Identifies the elements present in a compound as a mass percent of the total compound mass. The mass percent is obtained by dividing the mass of each element by the total mass of a compound and converting to percentage. Example problem: CH 2 O 1 mole of CH 2 O = 1 mole C, 2 moles H and 1 mole O Total mass = ( )g/mol = 30.03g/mol Percent composition: %C = 12.01/30.03 x 100% = 39.99% %H = 2.02/30.03 x 100% = 6.73% %O = 16.00/30.03 x 100% = 53.28%
14 Pathway to figure out empirical formula
15
### Element of same atomic number, but different atomic mass o Example: Hydrogen
Atomic mass: p + = protons; e - = electrons; n 0 = neutrons p + + n 0 = atomic mass o For carbon-12, 6p + + 6n 0 = atomic mass of 12.0 o For chlorine-35, 17p + + 18n 0 = atomic mass of 35.0 atomic mass
### How much does a single atom weigh? Different elements weigh different amounts related to what makes them unique.
How much does a single atom weigh? Different elements weigh different amounts related to what makes them unique. What units do we use to define the weight of an atom? amu units of atomic weight. (atomic
### The Mole Notes. There are many ways to or measure things. In Chemistry we also have special ways to count and measure things, one of which is the.
The Mole Notes I. Introduction There are many ways to or measure things. In Chemistry we also have special ways to count and measure things, one of which is the. A. The Mole (mol) Recall that atoms of
### Chemical Calculations: The Mole Concept and Chemical Formulas. AW Atomic weight (mass of the atom of an element) was determined by relative weights.
1 Introduction to Chemistry Atomic Weights (Definitions) Chemical Calculations: The Mole Concept and Chemical Formulas AW Atomic weight (mass of the atom of an element) was determined by relative weights.
### 1. How many hydrogen atoms are in 1.00 g of hydrogen?
MOLES AND CALCULATIONS USING THE MOLE CONCEPT INTRODUCTORY TERMS A. What is an amu? 1.66 x 10-24 g B. We need a conversion to the macroscopic world. 1. How many hydrogen atoms are in 1.00 g of hydrogen?
### Balance the following equation: KClO 3 + C 12 H 22 O 11 KCl + CO 2 + H 2 O
Balance the following equation: KClO 3 + C 12 H 22 O 11 KCl + CO 2 + H 2 O Ans: 8 KClO 3 + C 12 H 22 O 11 8 KCl + 12 CO 2 + 11 H 2 O 3.2 Chemical Symbols at Different levels Chemical symbols represent
### Study Guide For Chapter 7
Name: Class: Date: ID: A Study Guide For Chapter 7 Multiple Choice Identify the choice that best completes the statement or answers the question. 1. The number of atoms in a mole of any pure substance
### 3.3 Moles, 3.4 Molar Mass, and 3.5 Percent Composition
3.3 Moles, 3.4 Molar Mass, and 3.5 Percent Composition Collection Terms A collection term states a specific number of items. 1 dozen donuts = 12 donuts 1 ream of paper = 500 sheets 1 case = 24 cans Copyright
### Chapter 4. Chemical Composition. Chapter 4 Topics H 2 S. 4.1 Mole Quantities. The Mole Scale. Molar Mass The Mass of 1 Mole
Chapter 4 Chemical Composition Chapter 4 Topics 1. Mole Quantities 2. Moles, Masses, and Particles 3. Determining Empirical Formulas 4. Chemical Composition of Solutions Copyright The McGraw-Hill Companies,
### CHAPTER 3 Calculations with Chemical Formulas and Equations. atoms in a FORMULA UNIT
CHAPTER 3 Calculations with Chemical Formulas and Equations MOLECULAR WEIGHT (M. W.) Sum of the Atomic Weights of all atoms in a MOLECULE of a substance. FORMULA WEIGHT (F. W.) Sum of the atomic Weights
### Chapter 3: Stoichiometry
Chapter 3: Stoichiometry Key Skills: Balance chemical equations Predict the products of simple combination, decomposition, and combustion reactions. Calculate formula weights Convert grams to moles and
### Part One: Mass and Moles of Substance. Molecular Mass = sum of the Atomic Masses in a molecule
CHAPTER THREE: CALCULATIONS WITH CHEMICAL FORMULAS AND EQUATIONS Part One: Mass and Moles of Substance A. Molecular Mass and Formula Mass. (Section 3.1) 1. Just as we can talk about mass of one atom of
### Chemical formulae are used as shorthand to indicate how many atoms of one element combine with another element to form a compound.
29 Chemical Formulae Chemical formulae are used as shorthand to indicate how many atoms of one element combine with another element to form a compound. C 2 H 6, 2 atoms of carbon combine with 6 atoms of
### 10 The Mole. Section 10.1 Measuring Matter
Name Date Class The Mole Section.1 Measuring Matter In your textbook, read about counting particles. In Column B, rank the quantities from Column A from smallest to largest. Column A Column B 0.5 mol 1.
### = 16.00 amu. = 39.10 amu
Using Chemical Formulas Objective 1: Calculate the formula mass or molar mass of any given compound. The Formula Mass of any molecule, formula unit, or ion is the sum of the average atomic masses of all
### Woods Chem-1 Lec-02 10-1 Atoms, Ions, Mole (std) Page 1 ATOMIC THEORY, MOLECULES, & IONS
Woods Chem-1 Lec-02 10-1 Atoms, Ions, Mole (std) Page 1 ATOMIC THEORY, MOLECULES, & IONS Proton: A positively charged particle in the nucleus Atomic Number: We differentiate all elements by their number
### The Mole. Chapter 10. Dimensional Analysis. The Mole. How much mass is in one atom of carbon-12? Molar Mass of Atoms 3/1/2015
The Mole Chapter 10 1 Objectives Use the mole and molar mass to make conversions among moles, mass, and number of particles Determine the percent composition of the components of a compound Calculate empirical
### Chemistry 65 Chapter 6 THE MOLE CONCEPT
THE MOLE CONCEPT Chemists find it more convenient to use mass relationships in the laboratory, while chemical reactions depend on the number of atoms present. In order to relate the mass and number of
### The Mole Concept and Atoms
Copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display. Chapter 4 24 September 2013 Calculations and the Chemical Equation The Mole Concept and Atoms Atoms are exceedingly
### Name Date Class CHEMICAL QUANTITIES. SECTION 10.1 THE MOLE: A MEASUREMENT OF MATTER (pages 287 296)
Name Date Class 10 CHEMICAL QUANTITIES SECTION 10.1 THE MOLE: A MEASUREMENT OF MATTER (pages 287 296) This section defines the mole and explains how the mole is used to measure matter. It also teaches
### Name Date Class CHEMICAL QUANTITIES. SECTION 10.1 THE MOLE: A MEASUREMENT OF MATTER (pages 287 296)
10 CHEMICAL QUANTITIES SECTION 10.1 THE MOLE: A MEASUREMENT OF MATTER (pages 287 296) This section defines the mole and explains how the mole is used to measure matter. It also teaches you how to calculate
### The Mole Concept. The Mole. Masses of molecules
The Mole Concept Ron Robertson r2 c:\files\courses\1110-20\2010 final slides for web\mole concept.docx The Mole The mole is a unit of measurement equal to 6.022 x 10 23 things (to 4 sf) just like there
### Chapter 3. Chemical Reactions and Reaction Stoichiometry. Lecture Presentation. James F. Kirby Quinnipiac University Hamden, CT
Lecture Presentation Chapter 3 Chemical Reactions and Reaction James F. Kirby Quinnipiac University Hamden, CT The study of the mass relationships in chemistry Based on the Law of Conservation of Mass
### 2 The Structure of Atoms
CHAPTER 4 2 The Structure of Atoms SECTION Atoms KEY IDEAS As you read this section, keep these questions in mind: What do atoms of the same element have in common? What are isotopes? How is an element
### Chapter 6 Chemical Calculations
Chapter 6 Chemical Calculations 1 Submicroscopic Macroscopic 2 Chapter Outline 1. Formula Masses (Ch 6.1) 2. Percent Composition (supplemental material) 3. The Mole & Avogadro s Number (Ch 6.2) 4. Molar
### MOLAR MASS AND MOLECULAR WEIGHT Themolar mass of a molecule is the sum of the atomic weights of all atoms in the molecule. Molar Mass.
Counting Atoms Mg burns in air (O 2 ) to produce white magnesium oxide, MgO. How can we figure out how much oxide is produced from a given mass of Mg? PROBLEM: If If 0.200 g of Mg is is burned, how much
### Chemistry B11 Chapter 4 Chemical reactions
Chemistry B11 Chapter 4 Chemical reactions Chemical reactions are classified into five groups: A + B AB Synthesis reactions (Combination) H + O H O AB A + B Decomposition reactions (Analysis) NaCl Na +Cl
### Moles. Moles. Moles. Moles. Balancing Eqns. Balancing. Balancing Eqns. Symbols Yields or Produces. Like a recipe:
Like a recipe: Balancing Eqns Reactants Products 2H 2 (g) + O 2 (g) 2H 2 O(l) coefficients subscripts Balancing Eqns Balancing Symbols (s) (l) (aq) (g) or Yields or Produces solid liquid (pure liquid)
### F321 MOLES. Example If 1 atom has a mass of 1.241 x 10-23 g 1 mole of atoms will have a mass of 1.241 x 10-23 g x 6.02 x 10 23 = 7.
Moles 1 MOLES The mole the standard unit of amount of a substance (mol) the number of particles in a mole is known as Avogadro s constant (N A ) Avogadro s constant has a value of 6.02 x 10 23 mol -1.
### AS1 MOLES. oxygen molecules have the formula O 2 the relative mass will be 2 x 16 = 32 so the molar mass will be 32g mol -1
Moles 1 MOLES The mole the standard unit of amount of a substance the number of particles in a mole is known as Avogadro s constant (L) Avogadro s constant has a value of 6.023 x 10 23 mol -1. Example
### Molar Mass Worksheet Answer Key
Molar Mass Worksheet Answer Key Calculate the molar masses of the following chemicals: 1) Cl 2 71 g/mol 2) KOH 56.1 g/mol 3) BeCl 2 80 g/mol 4) FeCl 3 162.3 g/mol 5) BF 3 67.8 g/mol 6) CCl 2 F 2 121 g/mol
### Chapter 3! Stoichiometry: Calculations with Chemical Formulas and Equations. Stoichiometry
Chapter 3! : Calculations with Chemical Formulas and Equations Anatomy of a Chemical Equation CH 4 (g) + 2O 2 (g) CO 2 (g) + 2 H 2 O (g) Anatomy of a Chemical Equation CH 4 (g) + 2 O 2 (g) CO 2 (g) + 2
### SYMBOLS, FORMULAS AND MOLAR MASSES
SYMBOLS, FORMULAS AND MOLAR MASSES OBJECTIVES 1. To correctly write and interpret chemical formulas 2. To calculate molecular weights from chemical formulas 3. To calculate moles from grams using chemical
### Moles. Balanced chemical equations Molar ratios Mass Composition Empirical and Molecular Mass Predicting Quantities Equations
Moles Balanced chemical equations Molar ratios Mass Composition Empirical and Molecular Mass Predicting Quantities Equations Micro World atoms & molecules Macro World grams Atomic mass is the mass of an
### KEY for Unit 1 Your Chemical Toolbox: Scientific Concepts, Fundamentals of Typical Calculations, the Atom and Much More
KEY for Unit 1 Your Chemical Toolbox: Scientific Concepts, Fundamentals of Typical Calculations, the Atom and Much More The Modern Periodic Table The Periodic Law - when elements are arranged according
### Calculations and Chemical Equations. Example: Hydrogen atomic weight = 1.008 amu Carbon atomic weight = 12.001 amu
Calculations and Chemical Equations Atomic mass: Mass of an atom of an element, expressed in atomic mass units Atomic mass unit (amu): 1.661 x 10-24 g Atomic weight: Average mass of all isotopes of a given
### Simple vs. True. Simple vs. True. Calculating Empirical and Molecular Formulas
Calculating Empirical and Molecular Formulas Formula writing is a key component for success in chemistry. How do scientists really know what the true formula for a compound might be? In this lesson we
Chemical Composition Review Mole Calculations Percent Composition Copyright Cengage Learning. All rights reserved. 8 1 QUESTION Suppose you work in a hardware store and a customer wants to purchase 500
### 10 Cl atoms. 10 H2O molecules. 8.3 mol HCN = 8.3 mol N atoms 1 mol HCN. 2 mol H atoms 2.63 mol CH2O = 5.26 mol H atoms 1 mol CH O
Chem 100 Mole conversions and stoichiometry worksheet 1. How many Ag atoms are in.4 mol Ag atoms? 6.0 10 Ag atoms 4.4 mol Ag atoms = 1.46 10 Ag atoms 1 mol Ag atoms. How many Br molecules are in 18. mol
### STOICHIOMETRY UNIT 1 LEARNING OUTCOMES. At the end of this unit students will be expected to:
STOICHIOMETRY LEARNING OUTCOMES At the end of this unit students will be expected to: UNIT 1 THE MOLE AND MOLAR MASS define molar mass and perform mole-mass inter-conversions for pure substances explain
### Lecture 5, The Mole. What is a mole?
Lecture 5, The Mole What is a mole? Moles Atomic mass unit and the mole amu definition: 12 C = 12 amu. The atomic mass unit is defined this way. 1 amu = 1.6605 x 10-24 g How many 12 C atoms weigh 12 g?
### Unit 3 Notepack Chapter 7 Chemical Quantities Qualifier for Test
Unit 3 Notepack Chapter 7 Chemical Quantities Qualifier for Test NAME Section 7.1 The Mole: A Measurement of Matter A. What is a mole? 1. Chemistry is a quantitative science. What does this term mean?
### Matter. Atomic weight, Molecular weight and Mole
Matter Atomic weight, Molecular weight and Mole Atomic Mass Unit Chemists of the nineteenth century realized that, in order to measure the mass of an atomic particle, it was useless to use the standard
### Chemical Composition. Introductory Chemistry: A Foundation FOURTH EDITION. Atomic Masses. Atomic Masses. Atomic Masses. Chapter 8
Introductory Chemistry: A Foundation FOURTH EDITION by Steven S. Zumdahl University of Illinois Chemical Composition Chapter 8 1 2 Atomic Masses Balanced equation tells us the relative numbers of molecules
### We know from the information given that we have an equal mass of each compound, but no real numbers to plug in and find moles. So what can we do?
How do we figure this out? We know that: 1) the number of oxygen atoms can be found by using Avogadro s number, if we know the moles of oxygen atoms; 2) the number of moles of oxygen atoms can be found
### MOLECULAR MASS AND FORMULA MASS
1 MOLECULAR MASS AND FORMULA MASS Molecular mass = sum of the atomic weights of all atoms in the molecule. Formula mass = sum of the atomic weights of all atoms in the formula unit. 2 MOLECULAR MASS AND
### Ch. 10 The Mole I. Molar Conversions
Ch. 10 The Mole I. Molar Conversions I II III IV A. What is the Mole? A counting number (like a dozen) Avogadro s number (N A ) 1 mole = 6.022 10 23 representative particles B. Mole/Particle Conversions
### 602X10 21 602,000,000,000, 000,000,000,000 6.02X10 23. Pre- AP Chemistry Chemical Quan44es: The Mole. Diatomic Elements
Pre- AP Chemistry Chemical Quan44es: The Mole Mole SI unit of measurement that measures the amount of substance. A substance exists as representa9ve par9cles. Representa9ve par9cles can be atoms, molecules,
### Ch. 6 Chemical Composition and Stoichiometry
Ch. 6 Chemical Composition and Stoichiometry The Mole Concept [6.2, 6.3] Conversions between g mol atoms [6.3, 6.4, 6.5] Mass Percent [6.6, 6.7] Empirical and Molecular Formula [6.8, 6.9] Bring your calculators!
### Percent Composition and Molecular Formula Worksheet
Percent Composition and Molecular Formula Worksheet 1. What s the empirical formula of a molecule containing 65.5% carbon, 5.5% hydrogen, and 29.0% 2. If the molar mass of the compound in problem 1 is
### A dozen. Molar Mass. Mass of atoms
A dozen Molar Mass Science 10 is a number of objects. A dozen eggs, a dozen cars, and a dozen people are all 12 objects. But a dozen cars has a much greater mass than a dozen eggs because the mass of each
### Atoms, Elements, and the Periodic Table (Chapter 2)
Atoms, Elements, and the Periodic Table (Chapter 2) Atomic Structure 1. Historical View - Dalton's Atomic Theory Based on empirical observations, formulated as Laws of: Conservation of Mass Definite Proportions
### Mole Notes.notebook. October 29, 2014
1 2 How do chemists count atoms/formula units/molecules? How do we go from the atomic scale to the scale of everyday measurements (macroscopic scale)? The gateway is the mole! But before we get to the
### CHAPTER 8: CHEMICAL COMPOSITION
CHAPTER 8: CHEMICAL COMPOSITION Active Learning: 1-4, 6-8, 12, 18-25; End-of-Chapter Problems: 3-4, 9-82, 84-85, 87-92, 94-104, 107-109, 111, 113, 119, 125-126 8.2 ATOMIC MASSES: COUNTING ATOMS BY WEIGHING
### Chem 31 Fall 2002. Chapter 3. Stoichiometry: Calculations with Chemical Formulas and Equations. Writing and Balancing Chemical Equations
Chem 31 Fall 2002 Chapter 3 Stoichiometry: Calculations with Chemical Formulas and Equations Writing and Balancing Chemical Equations 1. Write Equation in Words -you cannot write an equation unless you
### Formulas, Equations and Moles
Chapter 3 Formulas, Equations and Moles Interpreting Chemical Equations You can interpret a balanced chemical equation in many ways. On a microscopic level, two molecules of H 2 react with one molecule
Page 1 of 14 Amount of Substance Key terms in this chapter are: Element Compound Mixture Atom Molecule Ion Relative Atomic Mass Avogadro constant Mole Isotope Relative Isotopic Mass Relative Molecular
### Moles, Molecules, and Grams Worksheet Answer Key
Moles, Molecules, and Grams Worksheet Answer Key 1) How many are there in 24 grams of FeF 3? 1.28 x 10 23 2) How many are there in 450 grams of Na 2 SO 4? 1.91 x 10 24 3) How many grams are there in 2.3
### Chemical Calculations: Formula Masses, Moles, and Chemical Equations
Chemical Calculations: Formula Masses, Moles, and Chemical Equations Atomic Mass & Formula Mass Recall from Chapter Three that the average mass of an atom of a given element can be found on the periodic
### Chemical Proportions in Compounds
Chapter 6 Chemical Proportions in Compounds Solutions for Practice Problems Student Textbook page 201 1. Problem A sample of a compound is analyzed and found to contain 0.90 g of calcium and 1.60 g of
### Atomic Masses. Chapter 3. Stoichiometry. Chemical Stoichiometry. Mass and Moles of a Substance. Average Atomic Mass
Atomic Masses Chapter 3 Stoichiometry 1 atomic mass unit (amu) = 1/12 of the mass of a 12 C atom so one 12 C atom has a mass of 12 amu (exact number). From mass spectrometry: 13 C/ 12 C = 1.0836129 amu
### Multiple Choice questions (one answer correct)
Mole Concept Multiple Choice questions (one answer correct) (1) Avogadro s number represents the number of atoms in (a) 12g of C 12 (b) 320g of sulphur (c) 32g of oxygen (d) 12.7g of iodine (2) The number
### Calculating Atoms, Ions, or Molecules Using Moles
TEKS REVIEW 8B Calculating Atoms, Ions, or Molecules Using Moles TEKS 8B READINESS Use the mole concept to calculate the number of atoms, ions, or molecules in a sample TEKS_TXT of material. Vocabulary
### Stoichiometry. Web Resources Chem Team Chem Team Stoichiometry. Section 1: Definitions Define the following terms. Average Atomic mass - Molecule -
Web Resources Chem Team Chem Team Section 1: Definitions Define the following terms Average Atomic mass - Molecule - Molecular mass - Moles - Avagadro's Number - Conservation of matter - Percent composition
### TOPIC 7. CHEMICAL CALCULATIONS I - atomic and formula weights.
TOPIC 7. CHEMICAL CALCULATIONS I - atomic and formula weights. Atomic structure revisited. In Topic 2, atoms were described as ranging from the simplest atom, H, containing a single proton and usually
### Other Stoich Calculations A. mole mass (mass mole) calculations. GIVEN mol A x CE mol B. PT g A CE mol A MOLE MASS :
Chem. I Notes Ch. 12, part 2 Using Moles NOTE: Vocabulary terms are in boldface and underlined. Supporting details are in italics. 1 MOLE = 6.02 x 10 23 representative particles (representative particles
### b. N 2 H 4 c. aluminum oxalate d. acetic acid e. arsenic PART 2: MOLAR MASS 2. Determine the molar mass for each of the following. a. ZnI 2 b.
CHEMISTRY DISCOVER UNIT 5 LOTS OF PRACTICE ON USING THE MOLE!!! PART 1: ATOMIC MASS, FORMULA MASS, OR MOLECULAR MASS 1. Determine the atomic mass, formula mass, or molecular mass for each of the following
### Lecture Topics Atomic weight, Mole, Molecular Mass, Derivation of Formulas, Percent Composition
Mole Calculations Chemical Equations and Stoichiometry Lecture Topics Atomic weight, Mole, Molecular Mass, Derivation of Formulas, Percent Composition Chemical Equations and Problems Based on Miscellaneous
### Answers and Solutions to Text Problems
Chapter 7 Answers and Solutions 7 Answers and Solutions to Text Problems 7.1 A mole is the amount of a substance that contains 6.02 x 10 23 items. For example, one mole of water contains 6.02 10 23 molecules
### CHEM 110: CHAPTER 3: STOICHIOMETRY: CALCULATIONS WITH CHEMICAL FORMULAS AND EQUATIONS
1 CHEM 110: CHAPTER 3: STOICHIOMETRY: CALCULATIONS WITH CHEMICAL FORMULAS AND EQUATIONS The Chemical Equation A chemical equation concisely shows the initial (reactants) and final (products) results of
### Honors Chemistry: Unit 6 Test Stoichiometry PRACTICE TEST ANSWER KEY Page 1. A chemical equation. (C-4.4)
Honors Chemistry: Unit 6 Test Stoichiometry PRACTICE TEST ANSWER KEY Page 1 1. 2. 3. 4. 5. 6. Question What is a symbolic representation of a chemical reaction? What 3 things (values) is a mole of a chemical
### Unit 6 The Mole Concept
Chemistry Form 3 Page 62 Ms. R. Buttigieg Unit 6 The Mole Concept See Chemistry for You Chapter 28 pg. 352-363 See GCSE Chemistry Chapter 5 pg. 70-79 6.1 Relative atomic mass. The relative atomic mass
### Chapter 6 Notes. Chemical Composition
Chapter 6 Notes Chemical Composition Section 6.1: Counting By Weighing We can weigh a large number of the objects and find the average mass. Once we know the average mass we can equate that to any number
### Lecture 3: (Lec3A) Atomic Theory
Lecture 3: (Lec3A) Atomic Theory Mass of Atoms Sections (Zumdahl 6 th Edition) 3.1-3.4 The Concept of the Mole Outline: The mass of a mole of atoms and the mass of a mole of molecules The composition of
### Name: Section: Calculating Dozens
Name: Section: The Mole This lesson is an introduction to the concept of the Mole and calculating conversions related to the Mole. The best analogy for understanding a mole is the dozen. A dozen is simply
### Solution. Practice Exercise. Concept Exercise
Example Exercise 9.1 Atomic Mass and Avogadro s Number Refer to the atomic masses in the periodic table inside the front cover of this textbook. State the mass of Avogadro s number of atoms for each of
### Chem 115 POGIL Worksheet - Week 4 Moles & Stoichiometry
Chem 115 POGIL Worksheet - Week 4 Moles & Stoichiometry Why? Chemists are concerned with mass relationships in chemical reactions, usually run on a macroscopic scale (grams, kilograms, etc.). To deal with
### CHEM 101/105 Numbers and mass / Counting and weighing Lect-03
CHEM 101/105 Numbers and mass / Counting and weighing Lect-03 Interpretation of Elemental Chemical Symbols, Chemical Formulas, and Chemical Equations Interpretation of an element's chemical symbol depends
### Chapter 8 How to Do Chemical Calculations
Chapter 8 How to Do Chemical Calculations Chemistry is both a qualitative and a quantitative science. In the laboratory, it is important to be able to measure quantities of chemical substances and, as
### Stoichiometry. What is the atomic mass for carbon? For zinc?
Stoichiometry Atomic Mass (atomic weight) Atoms are so small, it is difficult to discuss how much they weigh in grams We use atomic mass units an atomic mass unit (AMU) is one twelfth the mass of the catbon-12
### Chapter 3. Mass Relationships in Chemical Reactions
Chapter 3 Mass Relationships in Chemical Reactions This chapter uses the concepts of conservation of mass to assist the student in gaining an understanding of chemical changes. Upon completion of Chapter
### Chapter 1: Moles and equations. Learning outcomes. you should be able to:
Chapter 1: Moles and equations 1 Learning outcomes you should be able to: define and use the terms: relative atomic mass, isotopic mass and formula mass based on the 12 C scale perform calculations, including
### 1. When the following equation is balanced, the coefficient of Al is. Al (s) + H 2 O (l)? Al(OH) 3 (s) + H 2 (g)
1. When the following equation is balanced, the coefficient of Al is. Al (s) + H 2 O (l)? Al(OH) (s) + H 2 (g) A) 1 B) 2 C) 4 D) 5 E) Al (s) + H 2 O (l)? Al(OH) (s) + H 2 (g) Al (s) + H 2 O (l)? Al(OH)
### Calculation of Molar Masses. Molar Mass. Solutions. Solutions
Molar Mass Molar mass = Mass in grams of one mole of any element, numerically equal to its atomic weight Molar mass of molecules can be determined from the chemical formula and molar masses of elements
### Formulae, stoichiometry and the mole concept
3 Formulae, stoichiometry and the mole concept Content 3.1 Symbols, Formulae and Chemical equations 3.2 Concept of Relative Mass 3.3 Mole Concept and Stoichiometry Learning Outcomes Candidates should be
### THE MOLE / COUNTING IN CHEMISTRY
1 THE MOLE / COUNTING IN CHEMISTRY ***A mole is 6.0 x 10 items.*** 1 mole = 6.0 x 10 items 1 mole = 60, 00, 000, 000, 000, 000, 000, 000 items Analogy #1 1 dozen = 1 items 18 eggs = 1.5 dz. - to convert
### Tuesday, November 27, 2012 Expectations:
Tuesday, November 27, 2012 Expectations: Sit in assigned seat Get out Folder, Notebook, Periodic Table Have out: Spiral (notes), Learning Target Log (new) No Backpacks on tables Listen/Pay Attention Learning
### CHEMICAL FORMULA COEFFICIENTS AND SUBSCRIPTS. Chapter 3: Molecular analysis 3O 2 2O 3
Chapter 3: Molecular analysis Read: BLB 3.3 3.5 H W : BLB 3:21a, c, e, f, 25, 29, 37,49, 51, 53 Supplemental 3:1 8 CHEMICAL FORMULA Formula that gives the TOTAL number of elements in a molecule or formula
### Atomic mass is the mass of an atom in atomic mass units (amu)
Micro World atoms & molecules Laboratory scale measurements Atomic mass is the mass of an atom in atomic mass units (amu) By definition: 1 atom 12 C weighs 12 amu On this scale 1 H = 1.008 amu 16 O = 16.00
### Chemistry Post-Enrolment Worksheet
Name: Chemistry Post-Enrolment Worksheet The purpose of this worksheet is to get you to recap some of the fundamental concepts that you studied at GCSE and introduce some of the concepts that will be part
### PART I: MULTIPLE CHOICE (30 multiple choice questions. Each multiple choice question is worth 2 points)
CHEMISTRY 123-07 Midterm #1 Answer key October 14, 2010 Statistics: Average: 74 p (74%); Highest: 97 p (95%); Lowest: 33 p (33%) Number of students performing at or above average: 67 (57%) Number of students
### CONSERVATION OF MASS During a chemical reaction, matter is neither created nor destroyed. - i. e. the number of atoms of each element remains constant
1 CHEMICAL REACTINS Example: Hydrogen + xygen Water H + H + + - Note there is not enough hydrogen to react with oxygen - It is necessary to balance equation. reactants products + H + H (balanced equation)
Contents Getting the most from this book...4 About this book....5 Content Guidance Topic 1 Atomic structure and the periodic table...8 Topic 2 Bonding and structure...14 Topic 2A Bonding....14 Topic 2B
### IB Chemistry 1 Mole. One atom of C-12 has a mass of 12 amu. One mole of C-12 has a mass of 12 g. Grams we can use more easily.
The Mole Atomic mass units and atoms are not convenient units to work with. The concept of the mole was invented. This was the number of atoms of carbon-12 that were needed to make 12 g of carbon. 1 mole
### Chapter 5, Calculations and the Chemical Equation
1. How many iron atoms are present in one mole of iron? Ans. 6.02 1023 atoms 2. How many grams of sulfur are found in 0.150 mol of sulfur? [Use atomic weight: S, 32.06 amu] Ans. 4.81 g 3. How many moles
### MASS RELATIONSHIPS IN CHEMICAL REACTIONS
MASS RELATIONSHIPS IN CHEMICAL REACTIONS 1. The mole, Avogadro s number and molar mass of an element. Molecular mass (molecular weight) 3. Percent composition of compounds 4. Empirical and Molecular formulas
### Unit 2: Quantities in Chemistry
Mass, Moles, & Molar Mass Relative quantities of isotopes in a natural occurring element (%) E.g. Carbon has 2 isotopes C-12 and C-13. Of Carbon s two isotopes, there is 98.9% C-12 and 11.1% C-13. Find | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8451365828514099, "perplexity": 3852.8588596392765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103556871.29/warc/CC-MAIN-20220628142305-20220628172305-00685.warc.gz"} |
https://physics.stackexchange.com/questions/625676/interpreting-a-normalized-power-spectral-density-psd/712917 | # Interpreting a normalized Power Spectral Density (PSD)
I am using software to produce power spectral density (PSD) plots of time-series (voltage versus time). Unfortunately, the units of the produced plots are alien to me. I'm used to reading and interpreting PSD's in more common, "tangible" units like dBm/Hz or W/Hz, however these plots are described as:
Returns a PSD in $$dB$$ units that is normalized and divided by frequency bin width (i.e. it is normalized to the time-integral squared amplitude of the time domain and then divided by frequency bin width).
How is a PSD in units of dB to be interpreted, and what is the purpose of "normalizing to the time-integral squared amplitude of the time domain"? No further context is provided.
• What library are you using? Mar 29, 2021 at 22:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8892443180084229, "perplexity": 1391.8469912298845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571056.58/warc/CC-MAIN-20220809155137-20220809185137-00544.warc.gz"} |
http://www.gfdl.noaa.gov/blog/isaac-held/2011/04/16/8-the-recalcitrant-component-of-global-warming/ | # Isaac Held's Blog
## 8. The recalcitrant component of global warming
Evolution of global mean near-surface air temperature in GFDL’s CM2.1 climate model in simulations designed to separate the fast and slow components of the climate response in simulations of future climate change, as described in Held et al, 2010.
Continuing our discussion of transient climate responses, I want to introduce a simple way of probing the relative importance of fast and slow responses in a climate model, by defining the recalcitrant component of global warming, effectively the surface manifestation of changes in the state of the deep ocean.
The black curve in this figure is the evolution of global mean surface air temperature in a simulation of the 1860-2000 period produced by our CM2.1 model, forced primarily by changing the well-mixed greenhouse gases, aerosols, and volcanoes. Everything is an anomaly from a control simulation. (This model does not predict the CO2 or aerosol concentrations from emissions, but simply prescribes these concentrations as a function of time.) The blue curve picks up from this run, using the SRES A1B scenario for the forcing agents until 2100 and then holds these fixed after 2100. In particular, CO2 is assumed to approximately double over the 21st century, and the concentration reached at 2100 (about 720ppm) is held fixed thereafter. The red curves are the result of abruptly returning to pre-industrial (1860) forcing at different times (2000, 2100, 2200, 2300) and then integrating for 100 years. The thin black line connects the temperatures from these four runs averaged over years 10-30 after the abrupt turn-off of the radiative forcing.
One can think of the red lines as simulations of what we might call instantaneous perfect geoenginering, in which one somehow contrives to return the CO2 (and all of the other forcing agents in these simulations) to pre-industrial values. Perfect geoengineering so defined must be clearly distinguished from two other simple hypothetical scenarios discussed in the literature. (Let’s simplify things by just thinking of CO2 as the only relevant forcing agent.) One such scenario consists of just holding the CO2 fixed after a certain time, as in the A1B scenario after 2100 (the blue line) in the figure. The warming that occurs after 2100 as the system approaches its final equilibrium is referred to as the “committed warming” but it might be better to refer to it as the fixed concentration commitment. A second, in many ways more interesting, simple scenario (e.g. Solomon et al, 2009Matthews and Weaver, 2110) consists of abruptly setting the emissions to zero. This is another definition of commitment, which we might call the past emissions commitment, the study of which requires a coupled carbon-climate model. Unlike the fixed concentration commitment, it often results in temperatures that stay roughly unchanged for centuries — the warming due to the reduction in ocean heat uptake is roughly balanced by the ocean uptake of CO2. Perfect geoenginering is much harder than even setting emissions to zero, of course, since one would have to take enough CO2 out of the atmosphere to return to its pre-industrial value. Needless to say, we are not interested in this scenario because of its practical relevance but rather as a convenient probe of climate models.
There are similarities in the evolution after the turnoff of the radiative forcing for the 2100, 2200, and 2300 cases (these all have the same radiative forcing before the turn-off). At first the temperature decays exponentially, with an e-folding time of 3-4 years. An exponential fit yields a cooling in this fast phase of 2.6-2.7K in each case, leaving behind what we refer to as the recalcitrant warming. The spatial structure of the fast response is very similar in these three cases as well, and differs substantially from the spatial structure of the recalcitrant remnant. These are single realizations so some of the slow evolution after the turnoff of radiative forcing could be due to background internal variability. See Held et al (2010) for some further discussion of these simulations. Wu et al (2010) discuss aspects of the response of the hydrological cycle in similar model setups.
In thinking about the recalcitrant warming, it is useful to return once again to our two box model (post #4, ignoring the limitations of this model discussed in post #5)
$c \,dT/dt \, = - \beta T - \gamma (T - T_0) + \mathcal{F}(t)$
$c_0 \, dT_0/dt = \gamma (T - T_0)$
On time scales long compared to the fast relaxation time of the surface box with temperature $T$, we have
$T = (\mathcal{F} + \gamma T_0)/(\beta + \gamma)$.
When the forcing $\mathcal{F}$ is turned off, The solution relaxes on the fast time scale to
$T_\mathcal{R} \equiv \gamma T_0/(\beta + \gamma)$,
so the response is the sum of the recalcitrant part $T_\mathcal{R}$ and fast response proportional to the forcing
$T_\mathcal{F} \equiv \mathcal{F}/(\beta + \gamma)$
An important implication of this plot, taking it at face value, is that the recalcitrant component of surface warming is small at present, implying that the response up to this point can be accurately approximated by the fast component of the response in isolation, which simply consists of rescaling the TCR with the forcing.
Another implication is that acceleration of the warming from the 20th to the 21st century is not primarily due to saturation of the heat uptake (this only accounts for the 0.4K growth of the recalcitrant component), but is primarily just due to acceleration of the growth of the radiative forcing.
It is important to keep in mind the limitations of this idealized picture. There is no reason to expect the slow response to be characterized by one time scale. But more importantly for this line of argument, there is no obvious reason why intermediate time scales, related to sea ice or the relatively shallow circulations that maintain the structure of the main thermocline, could not play more of a role in the transient response of surface temperature, filling in the spectral gap between our fast and slow time scales, and requiring a more elaborate analysis of the linear response in the frequency domain.
[The views expressed on this blog are in no sense official positions of the Geophysical Fluid Dynamics Laboratory, the National Oceanic and Atmospheric Administration, or the Department of Commerce.]
### 8 Responses to “8. The recalcitrant component of global warming”
1. Fred Moolten says:
Isaac – Your simulation assumes a CO2 concentration that remains fixed after reaching 720 ppm. If the concentration rose until a higher level was reached – e.g., 1000 ppm – do you have any idea what the recalcitrant component would be at 2100 or what its ultimate temperature contribution might become? Over the long haul, is CO2 the only GHG we need consider or will there be other long-lived contributions that make total CO2 equivalents exceed the CO2 component alone in driving up the recalcitrant fraction? As fossil fuel combustion subsides, what will be the effect of reductions in negative aerosol forcing?
• Isaac Held says:
At this level of description of the model behavior, you can just think of the global mean radiative forcing as the input, whatever the contribution of $CO_2$ versus other greenhouse gases or aerosols. If $CO_2$ increased to something higher (or lower) than 700 by the end of the century, I think the recalcitrant fraction in 2100 would be about the same — the fast and the slow (recalcitrant) components in this GCM would both roughly scale with the forcing. If the stabilization occurs later, the recalcitrant fraction will be greater at the time of stabilization.
2. HR says:
I hope you don’t mind questions from somebody on a steep learning curve.
1) If i rewrote one of the sentances in the first paragraph by adding two words would it still make sense or am I mis-understanding something? Here goes
“….by defining the recalcitrant component of global warming, effectively the *future* surface manifestation of *present* changes in the state of the deep ocean.”
2) Is the recalcitrant component of global warming simply a movement of energy through the system? Is energy being lost to the deep ocean only to resurface at a later date to add to the surface temperature? Or is it something more complex in how the system has been altered such as changes in ocean currents? I’m just wondering if it is Trenberth’s ‘missing heat’ (hopefully that isn’t too crude).
3) If you left the red line for the turn off at 2100 to run to 2300 would it have the same slope as the blue line, i.e. is the continued warming in the blue line just a manifestation of the recalcitrant component?
Thanks
• HR says:
With regard to 3) I just noticed the black line seems to have the same slope as the blue line, is that just co-incidence or bad eyeballing?
• Isaac Held says:
Your understanding seems fine to me. Your inserts in (1) are fine; (2) I think it is primarily just energy being sequestered but changes in circulation can certainly complicate matters; with regard to the “missing heat”, that is more a question of whether some is sequestered deeper than the layers typically studied in ocean heat content data sets, (3) to the extent that the fast component is proportional to the forcing, the growth of the “committed” warming — the blue line after 2100 — should be the same as the growth of the recalcitrant component as defined by the black line.
3. HR says:
It seems that at least some of the processes that deliver energy to the deep ocean may have undergone multidecadal trends over part of the 20th century.
For example if the PDO has contributed to a trend in overturning for 60-70 years of the 20th century then that would have possible implications for the magnitude of TCR. It looks to me that for at least this part of the 20th century that TCR may have undergone a trend. You seem to describe TCR as a fairly fixed value (e.g. 1.5oC for a doubling of CO2). But it could be possible that this component itself has trended over much of the 20th century. Based on the above paper the implication would be that TCR could have risen over the period with the transition from general La Nina conditions to general El Nino conditions around the mid-1970′s. I guess the variation may be small enough to have little implication on how much energy at any on time is partitioned into the TCR or recalcitrant part of the climate response but it seems like something that should be considered.
Do you consider TCR as a constant over time? Do you consider overturning as a possible source of long term variation in TCR that goes beyond just short term noise? It also seems to have possible implications for attribution of warming over different periods of the 20th century. For example in part the acceleerated warming from the 1970′s onwards could be due to an increase in TCR as overturning slows, less energy is transported to depth and more remains at the surface. It seems to suggest that internal variability (of the oceans) be treated as a sort of forcing on the surface, especially with respect to the TCR component.
4. Marcel Bökstedt says:
Like HR, I am wondering about the possible relation to the “missing heat”. The recalcitrant component would depend on how fast energy is transfered down to the deep ocean. That transfer would presumable be analogous to the “gamma” constant of the slab model. People seem to think that it is hard to estimate the size of this transport directly, but are there ways to constrain it? (sea level measurements, surface temperature change etc?) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 10, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8571142554283142, "perplexity": 1057.3436120390031}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645338295.91/warc/CC-MAIN-20150827031538-00239-ip-10-171-96-226.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/290839/help-with-proof-cl-f-with-only-one-equivalance-class-implies-d-f-is-a-pid | # Help with proof : $Cl_F$ with only one equivalance class implies $D_F$ is a PID
I am self-studying a set of legitimately downloaded notes on algebraic number theory. They are somewhat akin to "Ireland and Rosen," Ch. 12.
I would appreciate help in understanding a proof (in the direction) that if $Cl_F$ consists of one equivalence class, then $D_F$, the ring of integers of $F$, is a PID. $F$ is a number field.
For convenience, I've numbered the four questions I would like help with.
Using the definition of equivalence in this context, if there is only one equivalence class, any two ideals of $D_F$ are equivalent and there exist $\alpha, \beta \in D_F$ such that
$\alpha I = \beta D_F$,
where $I$ is any ideal in $D_F$. We want to show $I$ is a principal ideal. So far so good. (1) Although the notes has $\alpha, \beta \in I$, I think they ought to be in $D_F$. Is this correct?
Thus $\alpha I = (\beta)$.
The proof goes on: Let $\omega = \beta \alpha^{- 1} \in F$.
Now I get stuck:
Then (2) $\omega I = \beta I \subseteq I$. I don't see how to get the equality.
Based on this, we know $\omega \in D_F$. This is clear since this type of assertion, based on the inclusion, was previously proved.
Then I also can't get the final assertion, (3) that this implies $(\omega) = I$.
Lastly, I would appreciate any guidance or references as to (4) in general what the product, e.g., $\gamma J$ means, where $J$ is an ideal.
As always, thanks for your patience and help.
-
Hint $\rm\ \ a I = (b)\:\Rightarrow\: b\in aI\:\Rightarrow\: b = ai\:$ thus $\rm\:aI = a(i)\:\Rightarrow\: I = (i)\:$ by cancelling $\rm\:a\ne 0$.
Principal ideals are invertible, so cancellable, thus we can cancel $\rm\,(a)\ne 0\:$ from $\rm\,(a)I = (a)(i)\,$ above. If you're not familiar with invertible or fractional ideals then you can easily prove this directly: suppose $\rm\:aI = aJ\ne 0.\:$ To show $\rm\:I\subseteq J\:$ note $\rm\:i\in I\:\Rightarrow\:ai\in aI\subseteq aJ,\:$ so $\rm\:ai = aj\:$ for some $\rm\:j\in J.\:$ Cancelling $\rm\:a\ne 0\:$ yields $\rm\:i = j\in J,\:$ so $\rm\:I\subseteq J.\:$ By symmetry, $\rm\:J\subseteq I,\:$ therefore $\rm\:I = J.$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9818302392959595, "perplexity": 196.1690006555395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999654815/warc/CC-MAIN-20140305060734-00058-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://mitchellkember.com/sph4u/single-slit-diffraction.html | Single-slit diffraction
When light passes through a single slit, we still have an interference pattern. We can think of the slit as a set of many point sources. We get a bright antinode in the centre, and it has a width of 2Delta y. All the other antinodes are Delta y wide, and the brightness decreases rapidly as you move away from the centre:
The nodal lines in single-split diffraction occur when the path difference from the top and bottom points on the slit are lambda, 2lambda, 3lambda, etc. The antinodes occur at 0, 3//2lambda, 5//2lambda, etc. This is counterintuitive, because a full wavelength of delay means that the waves arrive in phase and have constructive interference (which would be an antinode). However, when the path difference is lambda, there is only one pair of points in the slit that result in this constructive interference. The others are all closer together, and there are many pairs of points separated by a half wavelength, which is destructive. The antinodes have a mix of constructive and destructive interference; this mix becomes increasingly destructive as you move away form the centre, which is why the brightness goes down. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8526912331581116, "perplexity": 460.9229115133545}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057417.10/warc/CC-MAIN-20210923044248-20210923074248-00517.warc.gz"} |
http://www.lmfdb.org/knowledge/show/rcs.rigor.g2c | show · rcs.rigor.g2c all knowls · up · search:
As described in Section 3.4 of [MR:3540958, arXiv:1602.03715] , the completeness of the genus 2 curve database for curves of absolute discriminant $|\Delta|\le 10^6$ has been tested against other tables of genus 2 curves, including those of [Stoll] , and Merriman and Smart [https//doi.org/10.1017/S030500410007153X] . However, as explained on the completeness page, it is only complete within the boxes that were searched, and it is likely that there are at least a few genus 2 curves of minimal discriminant $|D|\le 10^6$ that are not included (even though no such curves are currently known).
The reliability of specific data associated to genus 2 curves is discussed below.
• In cases where the set of rational points has not been provably determined, this is indicated by the label known rational points. In cases where the set of rational points has been provably determined (via some variant of Chabauty's method implemented in Magma), this is indicated by the label all rational points; this applies to about half the curves in the database.
• The power of 2 in the conductor of the Jacobian (originally bounded analytically) has been rigorously verified by Tim Dokchitser and Christopher Doris [arXiv:1706.06162] using algebraic methods.
• All L-function computations are conditional on the assumption that the L-function lies in the Selberg class (in particular, that it has a meromorphic continuation to $\C$ and satisfies a functional equation). This also applies to the Euler factor at 2 for curves with bad reduction at 2.
• Subject to the assumption that the L-function lies in the Selberg class, the root number has been rigorously computed and the analytic ranks are rigorous upper bounds. For 99 percent of the curves in the database the Mordell-Weil rank of the Jacobian has been rigorously computed using Magma code provided by Michael Stoll; in every case this matches the listed analytic rank.
• The data on the geometric endomorphism ring that was initially computed heuristically has now been rigorously certified by Davide Lombardo [arXiv:1610.09674] and by Edgar Costa, Nicolas Mascot, Jeroen Sijsling, and John Voight [arXiv:1705.09248] , independently, by different methods. This rigorously confirms the Sato-Tate group computations (the fact that these independent computation agree is a consistency check).
• Isogeny class identifications are based on a comparison of Euler factors at good primes up to $2^{20}$. Jacobians that are not identified as members of the same isogeny class are provably non-isogenous, but the identification of membership within a particular isogeny class is heuristic (except for Richelot isogenies, no explicit isogenies have been computed). In principle it could be made rigorous in any particular case via a Faltings-Serre argument, as described in [arXiv:1805.10873] , but this has only been done for a handful of cases such as the isogeny class of conductor 277.
All invariants not specifically mentioned above were computed using rigorous algorithms that do not depend on any unproved hypotheses.
Authors:
Knowl status:
• Review status: reviewed
• Last edited by Jennifer Paulhus on 2019-05-03 17:22:44
Referred to by:
History: Differences | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8322368860244751, "perplexity": 765.1407429046423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256858.44/warc/CC-MAIN-20190522143218-20190522165218-00494.warc.gz"} |
http://science.sciencemag.org/content/351/6270/257 | Report
# ASASSN-15lh: A highly super-luminous supernova
See allHide authors and affiliations
Science 15 Jan 2016:
Vol. 351, Issue 6270, pp. 257-260
DOI: 10.1126/science.aac9613
## The most luminous supernova to date
Supernovae are exploding stars at the end of their lives, providing an input of heavy elements and energy into galaxies. Some types have near-identical peak brightness, but in recent years a new class of superluminous supernovae has been found. Dong et al.y report the discovery of ASASSN-15lh (SN 2015L), the most luminous supernova yet found by some margin. It appears to originate in a large quiescent galaxy, in contrast to most super-luminous supernovae, which typically come from star-forming dwarf galaxies. The discovery will provide constraints on models of superluminous supernovae and how they affect their host galaxies.
Science, this issue p. 257
## Abstract
We report the discovery of ASASSN-15lh (SN 2015L), which we interpret as the most luminous supernova yet found. At redshift z = 0.2326, ASASSN-15lh reached an absolute magnitude of Mu,AB = –23.5 ± 0.1 and bolometric luminosity Lbol = (2.2 ± 0.2) × 1045 ergs s–1, which is more than twice as luminous as any previously known supernova. It has several major features characteristic of the hydrogen-poor super-luminous supernovae (SLSNe-I), whose energy sources and progenitors are currently poorly understood. In contrast to most previously known SLSNe-I that reside in star-forming dwarf galaxies, ASASSN-15lh appears to be hosted by a luminous galaxy (MK ≈ –25.5) with little star formation. In the 4 months since first detection, ASASSN-15lh radiated (1.1 ± 0.2) × 1052 ergs, challenging the magnetar model for its engine.
Only within the past two decades has the most luminous class of supernovae (super-luminous supernovae, SLSNe) been identified (1). Compared with the most commonly discovered SNe (Type Ia), SLSNe are more luminous by over two magnitudes at peak and rarer by at least 3 orders of magnitude (2). Like normal SNe, SLSNe are classified by their spectra as either SLSN-I (hydrogen-poor) or SLSN-II (hydrogen-rich). Yet, the physical characteristics of SLSNe may not be simple extensions from their low-luminosity counterparts (1). In particular, the power source for SLSNe-I is poorly understood (3). Adding to the puzzle, SLSNe tend to explode in low-luminosity, star-forming dwarf galaxies (46). The recent advent of wide-area, untargeted transient surveys has made the systematic discovery and investigation of the SLSNe population possible [(7, 8) and references therein].
The All-Sky Automated Survey for SuperNovae [ASAS-SN; www.astronomy.ohio-state.edu/~assassin (9)] scans the visible sky every two to three nights to depths of V ≈ 16.5 to 17.3 mag using a global network of 14-cm telescopes (9) in an untargeted search for new transients, particularly bright supernovae.
On 14 June 2015 (universal time dates are used throughout this paper), ASAS-SN triggered on a new source located at RA = 22h02m15s.45 Dec = –61°39′34″.6 (J2000), coinciding with a galaxy of then unknown redshift, APMUKS(BJ) B215839.70–615403.9 (10). Upon confirmation with our follow-up telescopes, we designated this new source ASASSN-15lh and published its coordinates (11).
By combining multiple epochs of ASAS-SN images, we extended the detections to fainter fluxes, finding prediscovery images of ASASSN-15lh from 8 May 2015 (V = 17.39 ± 0.23 mag), and the light curve through 19 September 2015 is shown in Fig. 1. The ASAS-SN light curve peaked at V = 16.9 ± 0.1 on approximately tpeak ~ JD2457179 (2015 June 05) based on a parabolic fit to the lightcurve (Fig. 1, dashed line). Follow-up images were taken with the Las Cumbres Observatory Global Telescope Network (LCOGT) 1-m telescopes, and the BV light-curves with the galaxy contribution subtracted are also shown.
We obtained an optical spectrum (3700 to 9200 Å) of ASASSN-15lh on 21 June 2015 with the du Pont 100-inch telescope. The steep spectral slope with relatively high blue flux motivated Swift UltraViolet and Optical Telescope (UVOT)/X-Ray Telescope (XRT) (12) target-of-opportunity observations starting on 24 June 2015. The six-band Swift light curve spanning from the ultraviolet (UV) to the optical (1928 to 5468 Å) is shown in Fig. 1. The Swift spectral energy distribution (SED), peaking in the UV, indicates that the source has a high temperature. We derive a 3σ x-ray flux limit of <1.6 × 10−14 ergs s–1 cm–2 (0.3 to 10 keV) from a total XRT exposure of 81 ks taken between 24 June and 18 September 2015.
The du Pont spectrum is mostly featureless (Fig. 2A, first from the top), except for a deep, broad absorption trough near ~5100 Å (observer frame). SNID (13), a commonly used SN classification software that has a spectral library of most types of supernovae except SLSN, failed to find a good SN match. However, we noticed a resemblance between the trough and a feature attributed to O II absorption near Å (rest frame) in the spectrum of PTF10cwr/SN 2010gx, a SLSN-I at z = 0.230 (3, 14, 15). Assuming that the ASASSN-15lh absorption trough (full width at half maximum of ~104 km s–1) was also due to the same feature indicated a similar redshift of z ~ 0.23. An optical spectrum (3250 to 6150 Å) obtained on the Southern African Large Telescope (SALT) revealed a clear Mg II absorption doublet (λλ2796, 2803) at z = 0.232, confirming the redshift expected from our tentative line identification. Subsequent Magellan/Clay (6 July) and SALT (7 July) spectra refined the redshift to z = 0.2326 (Fig. 2, C and D). The available rest frame spectra show continua with steep spectral slope, relatively high blue fluxes, and several broad absorption features also seen in PTF10cwr/SN 2010gx (Fig. 2A, features “a,” “b,” and “c”) and without hydrogen or helium features, which is consistent with the main spectral features of SLSNe-I (1, 3). The broad absorption feature near 4400 Å (Fig. 2, “d”) seen in PTF10cwr/SN 2010gx is not present in ASASSN-15lh. ASASSN-15lh thus has some distinct spectral characteristics in comparison with PTF10cwr/SN 2010gx and some other SLSNe-I (3).
Using a luminosity distance of 1171 Mpc (standard Planck cosmology at z = 0.2326), Galactic extinction of E(BV) = 0.03 mag (16), assuming no host extinction (thus, the luminosity derived is likely a lower limit), and fitting the Swift and LCOGT flux measurements to a simple blackbody (BB) model, we obtain declining rest-frame temperatures of TBB from 2.1 × 104 to 1.3 × 104 K and bolometric luminosities of Lbol = 2.2 × 1045 to 0.4 × 1045 ergs s–1 at rest-frame phases relative to the peak of trest ~ 15 and ~50 days, respectively (Fig. 3). ASASSN-15lh’s bolometric magnitude declines at a best-fit linear rate of 0.048 mag day–1, which is practically identical to SLSN-I iPTF13ajg (17) at 0.049 mag day–1 during similar phases (~10 to ~50 days). Subsequently, the luminosity and temperature reach a “plateau” phase with slow changes, and a similar trend is also seen for iPTF13ajg though with sparser coverage. Overall, the temperature and luminosity time evolution resemble iPTF13ajg, but ASASSN-15lh has a systematically higher temperature at similar phases. The estimated BB radius of ~5 × 1015 cm near the peak is similar to those derived for other SLSNe-I (3, 17). These similarities in the evolution of key properties support the argument that ASASSN-15lh is a member of the SLSN-I class, but with extreme properties.
The absolute magnitudes (AB) in the rest-frame u-band are shown in Fig. 4. Using either TBB or the spectra, there is little K-correction (18) in converting from B-band to rest-frame uAB with . The solid red points at trest ≳ 10 days include B-band data. Before ~10 days, we lack measurements in blue bands. To estimate Mu,AB at these earlier epochs, we assumed the BV = –0.3 mag color and K-corrections found for the later epochs with Swift photometry. We estimate an integrated bolometric luminosity radiated of ~(1.1 ± 0.2) × 1052 ergs over 108 days in the rest frame. Although our estimates at trest ≲ 10 days should be treated with caution, we can securely conclude that the peak Mu,AB is at or brighter than –23.5 ± 0.1, with a bolometric luminosity at or greater than (2.2 ± 0.2) × 1045 ergs s–1. Both values are without precedent for any supernova recorded in the literature. In Fig. 4, we compare ASASSN-15lh with a sample of SLSNe-I (3, 17). Although its spectra resemble the SLSNe-I subclass, ASASSN-15lh stands out from the luminosity distribution of known SLSNe-I, whose luminosities are narrowly distributed around M ~ –21.7 (2, 19). In table S1, we list the peak luminosities of the five most luminous SNe discovered to date, including both SLSN-I and SLSN-II. The spectral correspondence and similarities in temperature, luminosity, and radius evolutions between ASASSN-15lh and some SLSNe-I lead to the conclusion that ASASSN-15lh is the most luminous supernova yet discovered. Even though we find that SLSN-I is the most plausible classification of ASASSN-15lh, it is important to consider other interpretations given its distinct properties. We discuss alternative physical interpretations of ASASSN-15lh in the supplementary text, and given all the currently available data, we conclude that it is most likely a supernova, albeit an extreme one.
The rate of events with similar luminosities to ASASSN-15lh is uncertain. On the basis of a simple model of transient light curves in ASAS-SN observations tuned to reproduce the magnitude distribution of ASAS-SN Type Ia supernovae (supplementary text), the discovery of one ASASSN-15lh–like event implies a rate of r ≃ 0.6 Gpc–3 yr–1 (90% confidence: 0.21 < r < 2.8). This is at least 2 times and can be as much as 100 times smaller than the overall rate of SLSNe-I, r ≃ 32 Gpc–3 yr–1 (90% confidence: 6 < r < 109) from (2), and suggests a steeply falling luminosity function for such supernovae.
For a redshift of z = 0.2326, the host galaxy of ASASSN-15lh has MK ≈ –25.5, which is much more luminous than the Milky Way. We estimate an effective radius for the galaxy of 2.4 ± 0.3 kpc and a stellar mass of M* ≈ 2 × 1011 M. This is in contrast to the host galaxies of other SLSNe, which tend to have much lower M* (46). However, given the currently available data, we cannot rule out the possibility that the host is a dwarf satellite galaxy seen in projection. The lack of narrow hydrogen and oxygen emission lines from the galaxy superimposed in the supernova spectra implies little star formation (SFR) < 0.3 M yr–1 by applying the conversions in (20). Las Cumbres Observatory Global Telescope (LCOGT) astrometry places ASASSN-15lh within 0″.2 (750 pc) of the center of the nominal host. A detailed discussion of the host properties is provided in the supplementary text.
The power source for ASASSN-15lh is unknown. Traditional mechanisms invoked for normal SNe likely cannot explain SLSNe-I (3). The lack of hydrogen or helium suggests that shock interactions with hydrogen-rich circumstellar material, invoked to interpret some SLSNe, cannot explain SLSNe-I or ASASSN-15lh. SLSN-I post-peak decline rates appear too fast to be explained by the radioactive decay of 56Ni (3)—the energy source for Type Ia supernovae. Both the decline rate of the late-time light curve and the integral method (21) will allow tests of whether ASASSN-15lh is powered by 56Ni, and we estimate that ≳30 M of 56Ni would be required to produce ASASSN-15lh’s peak luminosity. Another possibility is that the spindown of a rapidly rotating, highly magnetic neutron star (a magnetar) powers the extraordinary emission (2224). To match the peak Lbol and time scale of ASASSN-15lh, the light-curve models of (23) imply a magnetar spin period and magnetic field strength of P ≃ 1 ms and B ≃ 1014 G, respectively, assuming that all of the spindown power is thermalized in the stellar envelope. If efficient thermalization continues, this model predicts a Lbolt–2 power-law at late times. The total observed energy radiated so far (1.1 ± 0.2 × 1052 ergs) strains a magnetar interpretation because, for P ≲ 1 ms, gravitational wave radiation should limit the total rotational energy available to ergs (25) and the total radiated energy to a third of , which is ~1052 ergs (23).
The extreme luminosity of ASASSN-15lh opens up the possibility of observing such supernovae in the early universe. An event similar to ASASSN-15lh could be observed with the Hubble Space Telescope out to z ~ 6, and with the James Webb Space Telescope out to z ≳ 10 (19). A well-observed local counterpart will be critical in making sense of future observations of the transient high-redshift universe.
## Supplementary Materials
www.sciencemag.org/content/351/6270/257/suppl/DC1
Materials and Methods
Supplementary Text
Figures S1 to S5
Tables S1 to S6
References (2765)
## References and Notes
Acknowledgments: We acknowledge B. Zhang, L. Ho, A. Gal-Yam, and B. Katz for comments; NSF AST-1515927, OSU CCAPP, Mt. Cuba Astronomical Foundation, TAP, SAO, CAS grant XDB09000000 (S.D.); NASA Hubble Fellowship (B.J.S.); FONDECYT grant 1151445, MAS project IC120009 (J.L.P.); NSF CAREER award AST-0847157 (S.W.J.); U.S. Department of Energy (DOE) DE-FG02-97ER25308 (T.W.-S.H.); NSF PHY-1404311 (J.F.B.); D. Victor for donating equipment (BN); FONDECYT postdoctoral fellowship 3140326 (F.O.E.), and Los Alamos National Laboratory Laboratory Directed Research and Development program (P.R.W). B.J.S. is a Hubble and Carnegie-Princeton Fellow. All data used in this paper are made public, including the photometric data (tables S1 to S6) and spectroscopic data at public repository WISeREP (26) (http://wiserep.weizmann.ac.il).
View Abstract | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8189066648483276, "perplexity": 4781.680660866558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744649.77/warc/CC-MAIN-20181118201101-20181118223101-00495.warc.gz"} |
https://stats.stackexchange.com/questions/209421/compare-mlr-model-to-model-y-i-beta-0-beta-1x-1i-beta-2x-2i-ep | # Compare MLR model to model $Y_i = (\beta_0 + \beta_1x_{1i} + \beta_2x_{2i} + \epsilon_i)^{\beta_3}$?
If I have theoretical reasons to suppose the data might be fit with an unusual equation such as the following:
$$Y_i = (\beta_0 + \beta_1x_{1i} + \beta_2x_{2i} + \epsilon_i)^{\beta_3}$$
Can I use Ordinary Least Squares Multiple Linear Regression after a transformation to estimate parameters $\beta{_0,_1,_2,_3}$? If yes, what transformation?
If not, is there some specialized package in R (and brief reading) that might help me compare the fit and residuals from this model against a more typical MLR model?
Thanks.
Example Code:
## while I can run "nls," I cannot get $\epsilon$ inside parentheses nor
## can I have four BETAs
var1 <- rnorm(50, 100, 1)
var2 <- rnorm(50, 120, 2)
var3 <- rnorm(50, 500, 5)
## make a model without $\beta_1$ and $\beta_2$ and with $\epsilon_i$ on outside
nls(var3 ~ (a + var1 + var2)^b, start = list(a = 0.12345, b = 0.54321))
Nonlinear regression model
model: var3 ~ (a + var1 + var2)^b
data: parent.frame()
a b
475.5234 0.9497
residual sum-of-squares: 1365
Number of iterations to convergence: 6
Achieved convergence tolerance: 8.332e-08
## FAILS with exponent on left-hand side and $\epsilon$ inside parentheses
nls(var3^(1/b) ~ (a + var1 + var2), start = list(a = 0.12345, b = 0.54321))
Error in eval(expr, envir, enclos) : object 'b' not found
## FAILS with all BETAs
nls(var3 ~ (a + b*var1 + c*var2)^d, start = list(a = 4, b = 1, c = 1, d = 1))
Error in numericDeriv(form[[3L]], names(ind), env) :
Missing value or an infinity produced when evaluating the model
• Is this homework or self-study? If so, please add the self-study tag, as we answer such questions differently than, well, non-self-study questions! Apr 26, 2016 at 16:51
• @jbowman: Neither homework or self-study (class or textbook). This is my own invented problem. I am neither familiar with nonlinear regression or having parameter act upon $\epsilon$, hoping others can point in right direction. Thanks.
– jtd
Apr 26, 2016 at 19:34
## No (at least not with nls)
From its documentation, nls fits functions of the form $Y_i| \theta, X_i = f(\theta, X_i) + \epsilon$ (and is the MLE in the case that $\epsilon$ is iid Normal), so your relationship is not in the non-linear least squares class.
Let's see if we can describe the distribution $Y$ might follow. Let $Z_i = \beta_0+\beta_1 x_{1i} + \beta_2 x_{2i} + \epsilon_i$ Given that $\epsilon_i$ is $N(0, 1)$, then $Z_i \sim N(\beta_0+\beta_1 x_{1i} + \beta_2 x_{2i}, 1)$. If $\beta_3 = 2$ then for example, we could have that $Y_i$ is non-central $\chi^2_1$.
## Yes (using Box-cox transformations)
If $Y_i = Z_{i}^{\beta_3}$ is a one-to-one transformation (ie, at a minimum, $\beta_3$ is not even) then you have just rediscovered the box-cox family of transformations: $$Y(\lambda) = \begin{cases} (\lambda Z + 1)^{1/\lambda}, \lambda >0 \\ e^Z, \lambda = 0 \end{cases},$$ which clearly includes the scenario you describe. Classically, $\lambda$ is estimated through the profile likelihood, ie, plugging in different values of $\lambda$ and checking the RSS to the least-squares fit. An Analysis of Transformations Revisited (1981) appears to give a good review of the theory. The function boxcox in the package MASS does such an estimation. If $\beta_3$ is a parameter of interest rather than a nuisance you may need to do something more sophisticated.
I think Andrew M have given a good answer; I just want to make a few related points.
As Andrew M indicates you can't do the model as is directly with nonlinear least squares; however, you can fit this closely related model with nonlinear LS:
$Y_i = (\beta_0 + \beta_1x_{1i} + \beta_2x_{2i})^{\beta_3} + \epsilon_i$
This might not seem much use, but it would have value in obtaining an initial estimate of $\beta_3$ to get a good starting point for optimization of the actual model (whether performed directly, or via Box-Cox).
Note also that if $Y$ is strictly positive, you can consider this transformation:
$\log(Y_i) = \beta_3 \log(\beta_0 + \beta_1x_{1i} + \beta_2x_{2i} + \epsilon_i)$
Again, a slight modification (pulling the error term outside the parentheses) allows nonlinear least squares fitting. You could then reweight using the resulting estimate of $\beta_3$ to improve the estimates. The only difficulty would be if you hit a situation where the fitted value inside the log wasn't strictly positive.
[If you're prepare to consider Weibull regression (that is, where the Y's are Weibull with mean dependent on the X's), you might find that you can do something useful with that. It would change the form of the relationship with the x's however. A related approach would be that given a value for $\beta_3$ you could consider transforming $Y$ ($Y^*=Y^{1/\beta_3}$)and fit an exponential GLM with identity link to $Y^*$ rather than a Gaussian. This would again correspond to a Weibull model for $Y$, but with the parameters entering in the way you suggest). This could be done over a grid of $\beta_3$ values to maximize the likelihood for it.] | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8051626682281494, "perplexity": 1011.5916285485124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663012542.85/warc/CC-MAIN-20220528031224-20220528061224-00386.warc.gz"} |
http://math.stackexchange.com/users/16306/nathan?tab=activity | # Nathan
less info
reputation
5
bio website location United States age 25 member for 2 years, 5 months seen Mar 6 at 17:50 profile views 11
# 24 Actions
Jan9 accepted Average proportion for proportions with different denominators Jan9 accepted Limit of $1/x^2$ - Apostol 3.2, Example 4 Oct29 awarded Tumbleweed Oct22 asked Average proportion for proportions with different denominators Apr23 awarded Scholar Apr23 accepted Proof that $\int_1^x \frac{1}{t} dt$ is $\ln(x)$ Feb15 comment Proof that $\int_1^x \frac{1}{t} dt$ is $\ln(x)$ @BrianM.Scott Yeah, sorry. I looked through and I couldn't find it, but once gingerjin gave a proof I recognized it. I was too impatient, is all. Thanks anyways! Feb15 comment Proof that $\int_1^x \frac{1}{t} dt$ is $\ln(x)$ Thank you. This is a really clear explanation! Feb15 comment Proof that $\int_1^x \frac{1}{t} dt$ is $\ln(x)$ @JonasMeyer I'll see if I can find something about $e^u$. Maybe that will help me understand this. Feb15 comment Proof that $\int_1^x \frac{1}{t} dt$ is $\ln(x)$ @HenningMakholm Apostol's "One Variable Calculus". If there is a proof, I'm missing it. Feb15 revised Proof that $\int_1^x \frac{1}{t} dt$ is $\ln(x)$ edited body Feb15 asked Proof that $\int_1^x \frac{1}{t} dt$ is $\ln(x)$ Nov29 comment Limit of $1/x^2$ - Apostol 3.2, Example 4 Oh! I think I get it. Any neighborhood N(0) will contain points such that 0 A+2. You can't get around that, no matter what δ you choose. What makes me sure that I get it is that now I don't understand why I didn't see that in the first place. Nov29 comment Limit of $1/x^2$ - Apostol 3.2, Example 4 On reflection, I'm not sure either. This is helping, though. Thanks! Nov29 comment Limit of $1/x^2$ - Apostol 3.2, Example 4 Thanks for the formatting. I briefly tried to get it into LaTex, but I gave up after nothing worked. Haha. I've tried to ask questions where I didn't explain as much, and the person I was asking either refused to help or started talking about a part of the problem that I wasn't asking about. In this case, I had already burned through all of my mathy friends, but they are all at least two years out from real analysis, and couldn't help very much. They did help me get this far, though. Nov29 comment Limit of $1/x^2$ - Apostol 3.2, Example 4 I think I get it. Is this like saying that for a given δ1 such that for 0 < x < δ1, if δ1 does not work, than no δ > δ1 will work? Nov29 awarded Student Nov29 asked Limit of $1/x^2$ - Apostol 3.2, Example 4 Oct9 awarded Supporter Sep29 answered Determining the truth value of a statement | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8656005263328552, "perplexity": 1302.495572332501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011240269/warc/CC-MAIN-20140305092040-00042-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://crazyproject.wordpress.com/2011/05/29/the-bases-of-qq%E2%88%9B2/ | ## The bases of QQ(∛2)
Find a basis for $\mathbb{Q}(\sqrt[3]{2})$ over $\mathbb{Q}$. Describe all of the bases for $\mathbb{Q}(\sqrt[3]{2})$ in terms of this basis.
Note that $\sqrt[3]{2}$ is a root of $p(x) = x^3 - 2$, which is irreducible over $\mathbb{Q}$ by Eisenstein’s criterion. In particular, $p(x)$ is the minimal polynomial for $\sqrt[3]{2}$ over $\mathbb{Q}$. As we saw in Theorem 4.6 in TAN, every element of $\mathbb{Q}(\sqrt[3]{2})$ is uniquely of the form $a_0 + a_1\theta + a_2 \theta^2$, where $\theta = \sqrt[3]{2}$. In particular, $\{1,\theta,\theta^2\}$ is a basis for $\mathbb{Q}(\sqrt[3]{2})$ over $\mathbb{Q}$.
Suppose $\beta_1 = b_{1,1} + b_{2,1} \theta + b_{3,1} \theta^2$, $\beta_2 = b_{1,2} + b_{2,2} \theta + b_{3,2} \theta^2$, and $\beta_3 = b_{1,3} + b_{2,3} \theta + b_{3,3} \theta^2$ is a subset of $\mathbb{Q}(\sqrt[3]{2})$, and define $\varphi$ on $\mathbb{Q}(\sqrt[3]{2})$ by $1 \mapsto \beta_1$, $\theta \mapsto \beta_2$, and $\theta^2 \mapsto \beta_3$ and extending linearly. Now $B = \{\beta_1,\beta_2,\beta_3\}$ is a basis of $\mathbb{Q}(\sqrt[3]{2})$ if and only if $\theta$ is a $\mathbb{Q}$-isomorphism. Evidently, the matrix of $\varphi$ with respect to the basis $\{1,\theta,\theta^2\}$ (in both domain and codomain) is $A = [b_{i,j}]$. In turn, $\varphi$ is an isomorphism if and only if $A$ is invertible, which holds if and only if $\mathsf{det}(A)$ is nonzero. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 34, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9904161095619202, "perplexity": 27.436089693785036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00271-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://cs.stackexchange.com/questions/23787/why-use-languages-in-complexity-theory | # Why use languages in Complexity theory
I'm just starting to get into the theory of computation, which studies what can be computed, how quickly, using how much memory and with which computational model.
I have a pretty basic question, but am really hoping some of you guys can help me understand the concept behind it:
Why is everything centered around the notion and definition of LANGUAGES (i.e. regular languages and context free languages)? And how do these relate and describe the complexity of an algorithm and the possible computational models for solving them?
I read these sort of related questions:
but still don't have an answer to my doubts, since they provide a practical justification of why they are important (which I do understand) but don't help me understand why complexity theory is based upon them.
-
## migrated from cstheory.stackexchange.comApr 14 '14 at 16:58
This question came from our site for theoretical computer scientists and researchers in related fields.
Isn't this covered by our reference questions? – Raphael Apr 14 '14 at 18:11
@Raphael - Thanks for pointing me out to that question, it's a great reference! I am reading through it right now, but at the moment I believe this could be an addendum to the question cs.stackexchange.com/questions/13669/…. It doesn't seem to me that it is answered already, please let me know if you thin otherwise – Matteo Apr 14 '14 at 18:19
A language is just a set of finite length strings, which is the same as a function that maps finite strings to 1 or 0. So you are really asking "why is so much of complexity theory about decision problems" and the answer is that it's the simplest (nontrivial) kind of computational tasks, and often more complicated computational tasks can be reduced to decision problems. – Sasho Nikolov Apr 15 '14 at 0:08
It's because languages are the best (only?) way we have of formalizing the concept of a "problem."
An algorithm (Turing Machine) has performance, which we express via big-O complexity. A problem (language) belongs to a complexity class. These are usually defined by existence: if there exists a machine accepting a language $L$ which runs in given performance (space or time), then the language belongs to the corresponding complexity class.
There's a few reasons for this. First is that languages are platform independent. You're not worrying about whether an integer is 32 or 64 bits, or whether floating point operations run in parallel with other operations. These things give performance speedup at the micro-level, but complexity analysis is interested in the macro level. As you scale from 100 to $10^6$ to $10^9$ to $10^{12}$ inputs, how does the algorithm performance change? Does it go from using 1 million tape cells to 1 billion, or from 1 million to more cells than there are atoms in the universe?
The second is that languages are just a nice abstraction for data. You need something you can do proofs about, something you can model formally. Encoding your input and output as a string mean that you're now dealing not with bits in memory, but with mathematical objects with specific properties. You can reason about them and prove proofs about them in a formal, and very simple sense.
Complexity theory tends to be focused on decision problems because they end up being difficult. When the decision version of travelling salesman is NP-complete (i.e. is there a tour shorter than length $k$), then obviously finding the shortest path is harder. There isn't as much focus on function/optimization problems because there's still a lot of open questions and unsolved problems about the simpler decision problems.
I guess here's my challenge to you: find way to mathematically describe problems that isn't languages. I don't know if languages are special, but I think they're the simplest tool we've got, the easiest one to deal with.
-
Languages certainly aren't the only way of formulating problems. For example, you could formalize something like chromatic number as a function from graphs to natural numbers. And, actually, there's quite a lot of work on function and optimization problems. – David Richerby Apr 14 '14 at 17:44
True, but how would you deal with the complexity of calculating the chromatic number without some concept of a language or machine? – jmite Apr 14 '14 at 17:51
Thanks for your answer, I get your point. However I still have 2 questions: 1) won't the fact that we are using languages affect the results about complexity or decidability of a problem? i.e. could a problem be solvable in floating point arithmetic but not in integer arithmetic (i.e. integer programming)? 2) How do we do this mapping from any kind of data to a unique language that describes them all (since we want to evaluate complexity of a problem and abstract from the specific input)? Thanks again! – Matteo Apr 14 '14 at 18:10
@jmite You need a machine, yes, but not necessarily a language. – Raphael Apr 14 '14 at 18:12
@Raphael many complexity classes that are usually defined in terms of running time of machines can be characterized in terms of descriptive complexity. – Sasho Nikolov Apr 15 '14 at 0:04
1. There is more to complexity theory than languages, for example function classes, arithmetic complexity, and the subareas of approximation algorithms and inapproximability.
2. Historical reasons: one of the basic papers in computability theory was discussing Hilbert's Entscheidungsproblem (a form of the halting problem).
Unfortunately I don't know much about the latter, but let me expand on the former.
## Complexity beyond languages
Every computational complexity class comes with an associated function class. For example, the class P of all problems decidable in polynomial time is associated with FP, the class of all functions computable in polynomial time. FP is important since it is used to define NP-hardness: a language $L$ is NP-hard if for every language $M$ in NP there is a function $f_M$ in FP such that $x \in M$ iff $f_M(x) \in L$. Another complexity class of functions, #P, is related to the so-called polynomial hierarchy via Toda's theorem.
Arithmetic circuit complexity (or algebraic complexity theory) deals with the complexity of computing various polynomials. Important complexity classes here are VP and VNP, and geometric complexity theory is an important project attempting to separate VP and VNP (and later P and NP) using algebraic geometry and representation theory.
Another important example of algebraic complexity is fast matrix multiplication. Here the basic question is how fast can we multiply two matrices? Similar questions ask how fast we can multiply integers, how fast can we test integers for primality (this is a decision problem!) and how fast can we factor integers.
Convex optimization deals with optimization problems that can be solved (or almost solved) efficiently. Examples are linear programming and semidefinite programming, both of which have efficient algorithms. Here we are interested both in the optimum and in the optimal solution itself. Since there is often more than one optimal solution, computing an optimal solution is not well represented as a decision problem.
Approximability is the area that studies how good an approximation we can get for an optimization problem in polynomial time. Consider for example the classical problem of Set Cover: given a collection of sets, how many of them do we need to cover the entire universe? Finding the optimal number is NP-hard, but perhaps it is possible to compute an approximation? Approximation algorithms is the subarea studying algorithms for computing approximations, while inapproximability studies limits of approximation algorithms. In the particular case of Set Cover, we have an algorithm giving a $\ln n$ approximation (the greedy algorithm), and it is NP-hard to do any better.
-
Let's look at this question from the perspective of category theory. The decision problems (or languages) would then correspond to the objects of a category, and the allowed reductions between two problems would correspond to the morphisms (arrows) of a category.
Talking about languages has the advantage that equivalence of languages is well defined (namely by extensional equality). Two unrelated problems might lead to the same language, and then we are allowed to consider them as equivalent. If we would want to talk about isomorphic problems instead, we would have to define the allowed morphisms between two problems. But the allowed morphisms depend on the actual complexity class under consideration, which makes this approach less suitable for comparing different complexity classes.
The notion of isomorphic problems will normally be coarser than the notion of equivalent languages, i.e. two problems can be isomorphic, even if their associated languages are not equivalent. What is worse is that there are often different reasonable notions for the allowed morphisms, which only agree with respect to the allowed isomorphisms. Focusing on languages allows to postpone such problems until we feel like talking about some different reasonable notions of reduction (like Karp reduction vs. Cook reduction).
-
This doesn't seem to answer the question. One could still talk about morphisms between problems whatever one uses as the objects in the corresponding category. – David Richerby Apr 15 '14 at 7:43
@DavidRicherby The point I wanted to bring across is that nailing down the appropriate morphisms is more challenging than nailing down the appropriate objects (=languages). (Especially since there is normally more than one appropriate notion of morphisms.) Without morphisms, you can't talk about isomorphic problems (or algorithms). However, languages give you a way to still talk about equivalence of problems. Perhaps I didn't explain this properly, but (for me) this is a good reason for "using languages in complexity theory". – Thomas Klimpel Apr 15 '14 at 8:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8202256560325623, "perplexity": 451.65780902577194}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246655962.81/warc/CC-MAIN-20150417045735-00182-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/67549/given-operatornameimf-subset-operatornamekerx-does-that-imply-if | # Given $\operatorname{Im}(f) \subset \operatorname{ker}(x^*)$ does that imply if $f∈End_A(M)$ is not a right divisor of zero then f is surjective?
This question is a follow up to this post: Question about an endomorphsim of modules $N \subset M$ given $\exists f \neq 0$ an endomorphism such that $f(M) \subset N$
I have been working through a bunch of similar exercises and after I had the proof to the above problem explained to me I thought I was in good shape to attack the problem below but I keep getting stuck on choosing the correct submodule.
Let $A$ be a commutative ring with identity and let $M$ be an $A$ module. Suppose for every submodule $N \neq M$ with $N \subset M$ there exits a linear form $x^{*} \in M^{*}$ which is zero on $N$ and surjective,
How do we show that if $f \in End_A(M)$ is not a right divisor of zero then f is a surjective endomorphism?
I thought the proof would be very similar to the previous problem cited above. But when I consider $N = \operatorname{Im}(f)$ and assume $\operatorname{Im}(f) \neq M$ then I am having trouble getting a contradiction out of the behavior of the linear form on $N$. The next step in the argument I thought would give us a contradiction by considering the fact that $\operatorname{Im}(f) \subset \operatorname{ker}(x^*)$.
-
If the title and the question are related, you might want to explain how. – Did Sep 26 '11 at 1:52
What is $E$ in your third paragraph? What is $R$ in your fourth paragraph? – Mariano Suárez-Alvarez Sep 26 '11 at 3:50
If $f:M\to M$ is not surjective, its image $f(M)$ is a proper submodule of $M$ and, according to your hypothesis, there exists a surjective linear map $\phi:M\to A$ such that $\phi|_{f(M)}=0$. Let $m_0\in M$ be non-zero, and let $g:m\in M\mapsto \phi(m)m_0\in M$. This is a non-zero endomorphism (because there exists $m_1\in M$ such that $\phi(m_1)=1$, so that $g(m_1)=m_0\neq0$) and $gf=0$.
Thanks for your help. This makes perfect sense after seeing your construction of $g$. – user7980 Sep 26 '11 at 5:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9631780385971069, "perplexity": 96.35224333387877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997904391.23/warc/CC-MAIN-20140722025824-00090-ip-10-33-131-23.ec2.internal.warc.gz"} |
https://proxieslive.com/tag/widetildew/ | Why $\widetilde{W}$ is closed?
let’s consider $$\mathscr{U}$$ a free ultrafilter on the natural numbers and consider its corresponding ultrapower \begin{align*} \widetilde{X} = (\ell ^{\infty}(X _{i})/\operatorname{ker}\mathcal{N},\Vert \cdot \Vert), \end{align*} where $$\widetilde{x}$$ is the equivalence class formed by the sequence $$(x _{n})$$ and the norm $$\Vert \cdot \Vert$$ is defined as \begin{align*} \Vert \widetilde{w} \Vert = \lim _{n, \mathscr{U}} \Vert w _{n} \Vert. \end{align*} We define \begin{align*} D(\widetilde{w}) = \lim _{m, \mathscr{U}} \left( \lim _{n, \mathscr{U}} \Vert w _{m} – w _{n} \Vert \right). \end{align*} Let $$T: C \to C$$ a non-expansive function and $$C$$ a set of $$X$$. For the minimal set of \begin{align*} \mathscr{A} = \{ K \subseteq C: \ K \ \mbox{is not empty, weak compact, convex and T -invariant}\} \end{align*} called $$C _{0}$$. We know that $$C _{0}$$ is closed, convex and $$T$$-invariant. We define the set \begin{align*} \widetilde{C _{0}} = \{ \widetilde{w}: w _{n} \in C _{0}, \ \mbox{for all} \ n \in \mathbb{N} \} \in \widetilde{X} \mbox{.} \end{align*} Let \begin{align*} \widetilde{W} = \left\{ \widetilde{w} \in \widetilde{C _{0}}: \ \Vert \widetilde{w} – \widetilde{x} \Vert \leqslant \frac{1}{2} \ \mbox{and} \ D(\widetilde{w}) \leqslant \frac{1}{2} \right\} \mbox{.} \end{align*} Why $$\widetilde{W}$$ is closed?. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 17, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000081062316895, "perplexity": 2093.6231132892854}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588153.7/warc/CC-MAIN-20211027115745-20211027145745-00260.warc.gz"} |
http://mathhelpforum.com/calculus/198865-need-help-differentiating-f-x-x.html | Math Help - Need help differentiating f(x) = a^x
1. Need help differentiating f(x) = a^x
How do I differentiate f(x) = a^x
The answer xa^(x-1) is not sufficient.
I think I'm supposed to substitute with e somehow to figure it out, but I'm not entirely sure how e works.
Can someone help me out? Please explain with detail.
2. Re: Need help differentiating f(x) = a^x
Originally Posted by TWN
How do I differentiate f(x) = a^x
The answer xa^(x-1) is not sufficient. CORRECT
Note that $a>0$ must be the case.
The answer is $f'(x)=a^x\ln(a)$. It is done with logarithmic differentiation.
3. Re: Need help differentiating f(x) = a^x
Can you explain in detail the process you underwent to reach that answer please?
4. Re: Need help differentiating f(x) = a^x
$y = a^x$
$\ln{y} = \ln{a^x}$
$\ln{y} = x \ln{a}$
$\frac{d}{dx} \left[\ln{y} = x \ln{a} \right]$
$\frac{y'}{y} = \ln{a}$
$y' = y \cdot \ln{a}$
$y' = a^x \cdot \ln{a}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9896303415298462, "perplexity": 1473.4467449641493}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500834663.62/warc/CC-MAIN-20140820021354-00312-ip-10-180-136-8.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/291514/markov-chain-transition-intensity-conversion | # Markov Chain Transition Intensity Conversion
I have a question about converting a 3-state discrete state, continuous-time, markov chain to a 2-state.
My 3-state model has states: Well (state 1), Ill (state 2) and Dead (state 3).
$$\begin{bmatrix}-(a12 + a13) & a12 & a13\\0 & -a23 & a23\\ 0 & 0 & 0\end{bmatrix}$$ This 3 state matrix is full.mat in the R code.
I would like to convert it to an Alive/Dead model. I am not sure if I can do it the following way: $$\begin{bmatrix}-(a13 + a23) & (a13+a23)\\0 & 0\end{bmatrix}$$ where I am simply adding the intensity of Well->Dead, and Ill->Dead to compute the intensity of Alive->Dead for a 2-state model? This matrix is small.mat in the R code.
I would expect that the sum of transition probabilities P(1->3) + P(2->3) from the three state model should equal P(alive -> dead) in the 2-state model.
Essentially, I am trying to determine $$\mathrm{Pr}(X(t+h) = 3 | X(t) =1~~OR~~X(t) =2)$$ But the final 2 lines of the R-code show that these values are not equivalent, they are slightly off... Am I doing things incorrectly, or is this just rounding approximation by expm()?
library(expm)
full.mat<- rbind(c(-0.003260632, 0.000514263, 0.002746369),
c(0.000000000, -0.007948859, 0.007948859),
c(0.000000000, 0.000000000, 0.000000000))
small.mat<-matrix(0,2,2)
small.mat[1,2]<-full.mat[1,3]+full.mat[2,3]
small.mat[1,1]<-small.mat[1,2]*-1
exp.full<-expm(full.mat)
exp.small<-expm(small.mat)
# COMPUTE PROBABILITY OF DEATH
exp.small[1,2] # this is probability of death in 2-state model
exp.full[1,3]+exp.full[2,3] # this is probability of death
in 3-state model
-
The first thing to realize is that in this model one needs to know whether one is well-being or ill to know the "chances" one has to become dead.
The exception is when $a_{13}=a_{23}=\alpha$, then the alive/dead process is indeed a Markov process on the state space $\{\mathtt{alive},\mathtt{dead}\}$ with rate transition matrix $\begin{pmatrix}-\alpha &\alpha\\ 0 & 0\end{pmatrix}$. In every other case, the usual Bayes decomposition yields $$\mathbb P(X(t+\mathrm dt)=3\mid X(t)=1\ \text{or}\ 2)=\alpha(t)\mathrm dt,$$ where $$\alpha(t)=\frac{a_{13}p_1(t)+a_{23}p_2(t)}{p_1(t)+p_2(t)},\qquad p_i(t)=\mathbb P(X(t)=i).$$ Note that each $p_i(t)$ depends on $t$ and on the initial distribution $(p_1(0),p_2(0))$. Recall that $$p_1(t)=p_1(0)\mathrm e^{-(a_{12}+a_{13})t},\quad p_2(t)=p_1(0)c(t)+p_2(0)\mathrm e^{-a_{23}t},$$ for some explicit function $c(t)$ you might want to write down.
To sum up, call $Y(t)=\mathtt{dead}$ if $X(t)=3$ and $Y(t)=\mathtt{alive}$ otherwise. Then $(Y(t))_{t\geqslant0}$ is not (in general) a Markov process on the state space $\{\mathtt{alive},\mathtt{dead}\}$ because the distribution of the state $Y(t+\mathrm dt)$ depends on the distribution of the state $Y(t)$ (good), on $t$ itself (medium good), and also on the initial distribution of $X(0)$ (not good).
-
Thank you. So because this isn't a markov process, I can't use the matrix exponential of the transition intensity matrix to compute the transition probability matrix? – MPahuta Feb 3 '13 at 21:34
There is no transition intensity matrix, if you think about it (except in the singular case $a_{13}=a_{23}$). – Did Feb 4 '13 at 12:11
All rates defining the progressive model can be organized in a rate matrix $\mathbf{\mathrm{R}}$. $\mathbf{\mathrm{R}}$ is organized such that rows correspond to the "from" state, and the columns to the "to" state. Disallowed transitions are assigned a rate of $0$, and each row must sum to $0$. \begin{align*} \mathbf{\mathrm{R}}=\left[\begin{array}{c c c} -(\alpha+b \alpha) & b \alpha &\alpha\\ 0 & -g \alpha & g \alpha\\ 0 & 0 & 0\\ \end{array} \right]. \end{align*} We also define the state occupation column vector $\mathbf{\mathrm{P}}(t)$ \begin{align*} \mathbf{\mathrm{P}}(t)=\left[\begin{array}{c} W(t)\\ I(t)\\ D_O(t)\\ \end{array} \right]. \end{align*} The rate matrix is used to compute the state occupation column vector at any arbitrary time $t$ using the relationship\cite{Cox1965} \begin{align} \mathbf{\mathrm{P}}(t) &= \mathbf{\mathrm{P}}(0) e^{\mathrm{\textbf{R}} t} \label{eq:p_rec}. \end{align} By adding two constraints, \begin{align} D_O(t) &= 1 - (W(t) + I(t)),~\mathrm{and}\\ D_O(0) &=0, \end{align} we can derive a general formula for the overall death function: $$D_O(t) = 1 - W(0) e^{- \alpha t (1+b)} - W(0) \frac{b }{g - 1- b }\left(e^{- \alpha t (1+b)} - e^{- \alpha t g}\right)- \left( 1 - W(0)\right)e^{-\alpha t g}$$
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9861627817153931, "perplexity": 767.839217458062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122222204.92/warc/CC-MAIN-20150124175702-00055-ip-10-180-212-252.ec2.internal.warc.gz"} |
https://orbit.dtu.dk/en/publications/characterization-and-optimization-of-four-wave-mixing-wavelength- | # Characterization and Optimization of Four-Wave-Mixing Wavelength Conversion System
*Corresponding author for this work
Research output: Contribution to journalJournal articleResearchpeer-review
## Abstract
In this work, we present a comprehensive experimental and numerical investigation of the impact of system parameters on wavelength converters based on four-wave-mixing, with focus on practical system implementations in addition to the interaction within the nonlinear medium. The input signal power optimization is emphasized according to the trade-off between the linear and the nonlinear impairments, and the origin of the limitations at the optimum is studied. The impact of the input signal quality on the converted idler is discussed, and depending on the dominant noise contribution a varying conversion penalty is demonstrated. The penalty is also shown to scale with increasing number of WDM channels due to additional nonlinear cross-talk between them. Finally, by means of numerical simulations we extend the experimental characterization to high pump powers, showing the impact of parametric noise amplification, and different pump laser linewidths, which lead to increased phase-noise transfer. The experimental characterization employs an integrated AlGaAs-on-insulator waveguide, and the numerical simulations accompany the results to make the analysis general for $\chi^{(3)}$ materials that satisfy the assumptions of the split-step Fourier method.
Original language English Journal of Lightwave Technology 37 21 5628-5636 9 0733-8724 https://doi.org/10.1109/JLT.2019.2933226 Published - 2019
## Keywords
• Four-wave mixing
• Integrated waveguides | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8702967166900635, "perplexity": 1429.4478684789658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058373.45/warc/CC-MAIN-20210927060117-20210927090117-00085.warc.gz"} |
http://mathoverflow.net/questions/69563/blow-up-removes-intersections | # Blow-up removes intersections?
Assume that $\beta:\tilde{X}\to X$ is the blow-up of a nonsinular $\Bbbk$-variety $X$ along a sheaf of ideals $\mathcal{I}$. Let $Y:=Z(\mathcal{I})$. Given nonsingular, closed subvarieties $Z_1,\ldots,Z_r\subseteq X$ such that $\bigcap_i Z_i \subseteq Y$, is it true that $\bigcap_i \tilde{Z}_i=\emptyset$, where $\tilde{Z}_i$ denotes the strict transform of $Z_i$? If not, does this hold if we require $Y$ to be nonsingular and/or the $Z_i$ to intersect transversally?
-
This is certainly false in the generality that you first state it. Think about the subvarieties defined by $y=0$ and $y=x^2$ in the plane. If you blow up at the origin, the strict transforms meet in a point, if I'm not mistaken (I'm thinking about the incidence correspondence style description of the blow-up). Of course, the problem with this example is the lack of transversality. My intuition suggests that it's true if the $Z_i$ meet transversely, but I'm not confident about it :) – Ramsey Jul 5 '11 at 19:51
Yea, that was my feeling as well. Thanks (and +1) for confirming that. – Jesko Hüttenhain Jul 5 '11 at 21:08
As Sasha and Ramsey point out, this isn't true in the generality requested. However, the following is true, see Hartshorne, Chapter II, Exercise 7.12.
Statement: Suppose that $X$ is a Noetherian scheme and let $Y, Z$ be closed subschemes, neither one containing the other. Let $\widetilde{X}$ be the blowing up of $Y \cap Z$ (defined by the sum of the ideal sheaves). Then the strict transforms of $Y$ and $Z$ do not meet.
In other words, you can't choose an arbitrary $Y$, but there always is a subscheme (supported where you want) which you can blow up which will work.
EDIT: With regards to why the sum of all the ideals can't work, consider the three coordinate hyperplanes $$H_1, H_2, H_3 \subseteq \mathbb{A}^3.$$ The sum of the ideals defining the hyperplanes is the ideal defining the origin in $\mathbb{A}^3$. Blowing up the origin cannot possibly separate $H_1$ and $H_2$ because $H_1 \cap H_2$ is a line.
EDIT2: As Jesko pointed out, the previous edit answers the wrong question. He's not interested in the pair-wise intersection, just the total intersection. My example in the above edit doesn't help there. I think his answer below is then correct.
-
So this is because we blow up in $\mathcal{I}_Y+\mathcal{I}_Z$, not its radical, I suppose. Since there is no proof here, I have to ask, does this generalize in the obvious way to more than two closed subschemes? By blowing up $\mathcal{I}_{Z_1}+\cdots+\mathcal{I}_{Z_r}$? – Jesko Hüttenhain Jul 6 '11 at 7:01
Yes, blowing up the radical won't work. It's an easy exercise, if you get stuck I can write down a proof. Also, it won't generalize in that way. You could blow up $(I_{Z_1} + I_{Z_2}) \cdot (I_{Z_1} + I_{Z_3}) \dots (I_{Z_{r-1}} + I_{Z_r})$ though. – Karl Schwede Jul 6 '11 at 7:20
Renitently, I tried to generalize the statement in my answer below, using the sum of the ideals. Would you be so kind to tell me whether it checks out? – Jesko Hüttenhain Jul 6 '11 at 10:47
In general the answer is no. For example if $X$ is a plane, $Y$ is a point and $Z_1,Z_2$ are curves tangent in $Y$, then the strict transforms intersect. If however $Y$ is smooth and normal bundles $N_{Z_i/Y}$ do not intersect in $N_{X/Y}$ then the intersection is empty. Note that transverslity is another condition, the transversality just means that $N_{Z_i/Y} + N_{Z_j/Y} = N_{X/Y}$ which is not the same as emptiness of the intersection.
For example let $X = A^3$, $Z_1$ being the line $x = y = 0$ and $Z_2$ being the hypersurface $f_2 + f_3 = 0$, where $f_i$ are homogeneous polynomials of degree $i$ such that $f_2(0,0,1) = 0$ and $f_3(0,0,1) \ne 0$. Let finally $Y$ be the intersection of $Z_1$ and $Z_2$ (it consists of two points, one of those being $(0,0,0)$). Then the strict transforms of $Z_1$ and $Z_2$ intersect in the exceptional divisor over the point $(0,0,0)$ since $f_2(0,0,1) = 0$.
-
This is sweet, do you have a proof or reference for the fact that the intersection is empty as long as the $N_{Z_i/Y}$ have empty intersection? – Jesko Hüttenhain Jul 6 '11 at 6:54
The intersection of the strict preimage of $Z$ with the exceptional divizor $E$ is $P(N_{Z/Y}) \subset P(N_{X/Y}) = E$. This is more or less by definition of the blowup. – Sasha Jul 6 '11 at 7:06
I have tried to generalize the Exercise referenced by Karl, even though he told me that it shouldn't be possible this way. I think, however, it works:
Edit: I made a mistake concerning $J_i$ - it cannot be equal to $I_i\oplus\bigoplus_{d\ge 1} I_i^dT^d$ because that is not necessarily an ideal - it might not be closed under multiplication by elements from the ring $S$. The version below looks better.
Proposition. Let $Z_0,\ldots,Z_r$ be closed subschemes of a Noetherian scheme $X$ such that $Z_i\not\subset Z_j$ for $i\ne j$. Let $I_i:=I(Z_i)$ and denote by $\tilde{Z}_i$ the respective strict transform of $Z_i$ under the blow-up $\beta:\tilde{X}\to X$ of $X$ along $I:=\sum_{i=0}^rI_i$. Then, $\bigcap_{i=0}^r\tilde{Z}_i=\emptyset$.
Proof. The statement can be checked locally, so we may assume that $X=\mathrm{Spec}(A)$ is affine. Let $f_i:Z_i\hookrightarrow X$ be the respective closed immersion, so $Z_i=\mathrm{Spec}(A/I_i)$ and $f_i^\sharp:A\twoheadrightarrow A/I_i$. Then, the inverse image ideal sheaf of $I$ under $f_i$ is $I\cdot A/I_i$ and hence,
$\displaystyle\tilde{Z}_i=\mathrm{Proj}\left(\bigoplus_{d\ge 0} \left(I\cdot A/I_i\right)^d\cdot T^d\right)$
With $S=\bigoplus_{d\ge 0} I^d\cdot T^d$, the homogeneous ideal defining $\tilde{Z}_i$ inside $\tilde{X}=\mathrm{Proj}(S)$ is equal to
$\displaystyle J_i = \bigoplus_{d\ge 0} (I^d\cap I_i)$
In particular, $J_0+\cdots+J_r\supseteq S_+$, so any point $P\in\tilde{Z}_0\cap\cdots\cap\tilde{Z}_r$ would correspond to a homogeneous prime ideal containing each of the $J_i$ and hence, the irrelevant ideal. There is no such point.
Did I miss something? Or is this correct?
-
I don't think this can be correct. Consider the coordinate hyperplances $H_1, \dots, H_4 \in \mathbb{A}^4$. Now, $I_1 + \dots I_4$ is just the ideal of the maximal ideal of the origin. Blowing up that maximal ideal will certainly not separate the $H_i$ though, since they intersect at points besides only the origin. I'll try to read through the proof now. – Karl Schwede Jul 6 '11 at 14:58
Perhaps I've totally missed the problem, but I don't see why $S_+ \subseteq J_0 + \dots + J_r$. – Karl Schwede Jul 6 '11 at 15:06
Well, shouldn't $S_1=(J_0+\cdots+J_r)_1$ verify this, since everything is generated in degree one? Also, I do not want to remove any pairwise intersection, just their mutual intersecion $\bigcap_i\tilde{Z}_i$ should be empty. – Jesko Hüttenhain Jul 6 '11 at 16:11
Ah, I misunderstood. I'll get back to you. – Karl Schwede Jul 6 '11 at 19:33
I think I confused you with the blunt mistake I made concerning $J_i$, check my edit. This looks quite believable to me now, but any comments are still welcome. I'll also accept your answer because it was extremely helpful. Thanks for all the effort! – Jesko Hüttenhain Jul 7 '11 at 16:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9500771164894104, "perplexity": 233.31494694914917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131309986.40/warc/CC-MAIN-20150323172149-00118-ip-10-168-14-71.ec2.internal.warc.gz"} |
https://questions.examside.com/past-years/jee/question/from-a-solid-sphere-of-mass-m-and-radius-r-a-spheri-2015-marks-4-gh901l7w9tkb4quo.htm | ### JEE Mains Previous Years Questions with Solutions
4.5
star star star star star
1
### JEE Main 2015 (Offline)
From a solid sphere of mass $M$ and radius $R,$ a spherical portion of radius $R/2$ is removed, as shown in the figure. Taking gravitational potential $V=0$ at $r = \infty ,$ the potential at the center of the cavity thus formed is:
($G=gravitational$ $constant$)
A
${{ - 2GM} \over {3R}}$
B
${{ - 2GM} \over R}$
C
${{ - GM} \over {2R}}$
D
${{ - GM} \over R}$
## Explanation
Before removing the spherical portion, potential at point $P$(Center of cavity)
${V_{sphere}}\,\, = {{ - GM} \over {2{R^3}}}\left[ {3{R^2} - {{\left( {{R \over 2}} \right)}^2}} \right]$
$= {{ - GM} \over {2{R^3}}}\left( {{{11{R^2}} \over 4}} \right) = - 11{{GM} \over {8R}}$
Mass of removed part = ${M \over {{4 \over 3}\pi {R^3}}} \times {4 \over 3}\pi {\left( {{R \over 2}} \right)^2}$ = ${M \over 8}$
Due to cavity part potential at point $P$
${V_{cavity}}\,\,$ = $- {{G{M \over 8}} \over {2{{\left( {{R \over 2}} \right)}^3}}}\left[ {3{{\left( {{R \over 2}} \right)}^2} - {0^2}} \right]$
= $- {{3GM} \over {8R}}$
So potential at P due to remaining part,
$= {V_{sphere}}\,\, - \,\,{V_{cavity}}\,\,$
$= \,\, - {{11GM} \over {8R}} - \left( { - {3 \over 8}{{GM} \over R}} \right)$
$= {{ - GM} \over R}$
Note : Potential inside the sphere of radius R and at a distance r from the center,
V = $- {{GM} \over {2{R^3}}}\left[ {3{R^2} - {r^2}} \right]$
Here M = mass of the sphere, r = distance from the center of the sphere
2
### JEE Main 2014 (Offline)
Four particles, each of mass $M$ and equidistant from each other, move along a circle of radius $R$ under the action of their mutual gravitational attraction. The speed of each particle is :
A
$\sqrt {{{GM} \over R}}$
B
$\sqrt {2\sqrt 2 {{GM} \over R}}$
C
$\sqrt {{{GM} \over R}\left( {1 + 2\sqrt 2 } \right)}$
D
${1 \over 2}\sqrt {{{GM} \over R}\left( {1 + 2\sqrt 2 } \right)}$
## Explanation
All those particles are moving due to their mutual gravitational attraction.
The force between each masses are repulsive force.
On mass M at C, due to mass at D the repulsive force is F in the vertical direction.
On mass M at C, due to mass at B the repulsive force is F in the horizontal direction.
On mass M at C, due to mass at A the repulsive force is F'
Net force acting on particle at C,
= $2F\,\cos \,{45^ \circ } + F'$
Where $F = {{G{M^2}} \over {{{\left( {\sqrt 2 R} \right)}^2}}}$ and $F' = {{G{M^2}} \over {4{R^2}}}$
$\Rightarrow {{2 \times G{M^2}} \over {\sqrt 2 {{\left( {R\sqrt 2 } \right)}^2}}} + {{G{M^2}} \over {4{R^2}}}$
$\Rightarrow {{G{M^2}} \over R^2}\left[ {{1 \over 4} + {1 \over {\sqrt 2 }}} \right]$
This net force will balance by the centripetal force Fcp = ${{M{v^2}} \over R}$
$\therefore$ ${{M{v^2}} \over R} =$ ${{G{M^2}} \over R^2}\left[ {{1 \over 4} + {1 \over {\sqrt 2 }}} \right]$
$\Rightarrow$ $v = \sqrt {{{Gm} \over R}\left( {{{\sqrt 2 + 4} \over {4\sqrt 2 }}} \right)}$
$= {1 \over 2}\sqrt {{{Gm} \over R}\left( {1 + 2\sqrt 2 } \right)}$
3
### JEE Main 2013 (Offline)
What is the minimum energy required to launch a satellite of mass $m$ from the surface of a planet of mass $M$ and radius $R$ in a circular orbit at an altitude of $2R$?
A
${{5GmM} \over {6R}}$
B
${{2GmM} \over {3R}}$
C
${{GmM} \over {2R}}$
D
${{GmM} \over {3R}}$
## Explanation
Energy of the satellite on the surface of the planet
Ei = K.E + P.E = 0 + $\left( { - {{GMm} \over R}} \right)$ = ${ - {{GMm} \over R}}$
Energy of the satellite at 2R distance from the surface of the planet while moving with velocity v
Ef = ${1 \over 2}m{v^2}$ + $\left( { - {{GMm} \over {R + 2R}}} \right)$
In the orbital of planet, the centripetal force is provided by the gravitational force
$\therefore$ ${{m{v^2}} \over {R + 2R}} = {{GMm} \over {{{\left( {R + 2R} \right)}^2}}}$
$\Rightarrow {v^2} = {{GM} \over {3R}}$
$\therefore$ Ef = ${1 \over 2}m{v^2}$ + $\left( { - {{GMm} \over {R + 2R}}} \right)$
$= {1 \over 2}m{{GM} \over {3R}} - {{GMm} \over {3R}}$
= $- {{GMm} \over {6R}}$
$\therefore$ Minimum energy required required to launch the satellite
= Ef - Ei
= $- {{GMm} \over {6R}}$ - $\left( { - {{GMm} \over R}} \right)$
= ${{5GMm} \over {6R}}$
4
### AIEEE 2012
The mass of a spaceship is $1000$ $kg.$ It is to be launched from the earth's surface out into free space. The value of $g$ and $R$ (radius of earth ) are $10\,m/{s^2}$ and $6400$ $km$ respectively. The required energy for this work will be:
A
$6.4 \times {10^{11}}\,$ Joules
B
$6.4 \times {10^8}\,$ Joules
C
$6.4 \times {10^9}\,$ Joules
D
$6.4 \times {10^{10}}\,$ Joules
## Explanation
Potential energy at earth surface = $- {{GMm} \over R}$
and at free space potential energy = 0
Work done for this = $0 - \left( { - {{GMm} \over R}} \right)$ = ${{{GMm} \over R}}$
So the required energy for this work is
= ${{GMm} \over R}$
=${{{g{R^2}m} \over R}}$ [ as $g = {{GM} \over {{R^2}}}$ ]
= $mgR$
$= 1000 \times 10 \times 6400 \times {10^3}$
$= 6.4 \times {10^{10}}\,\,$ Joules | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.971379816532135, "perplexity": 709.3533600098733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301863.7/warc/CC-MAIN-20220120130236-20220120160236-00115.warc.gz"} |
https://cuicaihao.com/2017/11/07/starting-my-first-intern-at-melbourne-australia-tomorrow/ | # Starting My First Intern at Melbourne Australia Tomorrow
Dear All,
I am using the song above to thank you all for your help and support in the past. You know that I have spent the last three years (2014-2017) in pursuing my Ph.D. degree in Computer Science and got a plan to be graduated in 2018.
With the support of my supervisor and the AMSIIntern Institute, I just got the internship from Aurecon Group before submitting my final thesis. That means I will spend the next four months (may be extended, if applicable) from this November-2017 to March-2018 as an intern working in the industry project. I will get the opportunity to face the challenges and uncertainties during this experience. It will also be a golden time to test my expertise and knowledge beyond the academia.
Tomorrow (08-Nov-2017) is my first Day, hope everything goes well!
Best regards,
Caihao (Chris) Cui
PS: Music proved by 龙井说唱《感谢》 with Lyrics as follows:
Hu~Hu~hu I wanna the thank you thank you thank you all
I wanna thank you hey..wanna thank you hey…
hey hey just wanna thank you all thank you all…
wanna thank you hey..wanna thank you hey…
hey hey just wanna thank you all thank you all…
I wanna thank you hey..wanna thank you hey…
hey hey just wanna thank you all thank you all…
wanna thank you hey..wanna thank you hey…
hey hey just wanna thank you all thank you all…
I thank you everything
I thank you everyone
I thank you everytime
I thank you all
I wanna thank you hey..wanna thank you hey…
hey hey just wanna thank you all thank you all…
wanna thank you hey..wanna thank you hey…
hey hey just wanna thank you all thank you all…
-End-
## Author: Caihao (Chris) Cui
Data Scientist and Machine Learning Expert: Translating modern machine learning and computer vision techniques into engineering and bringing ideas to life to design a better future. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9548417925834656, "perplexity": 3916.553921280375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671053.31/warc/CC-MAIN-20191121231600-20191122015600-00187.warc.gz"} |
https://www.physicsforums.com/threads/magnetic-field-of-a-point-charge.745888/ | # Magnetic Field of a point charge
1. Mar 29, 2014
### majormaaz
1. The problem statement, all variables and given/known data
A point charge q = -2.9 μC moves along the z-axis with a velocity v = (+7.3 x 105 m/s) k . At the moment it passes the origin, what are the strength and direction of the magnetic field at the following positions? Express each field vector in Cartesian form.
(a) At position r1 = (2.0 cm, 0 cm, 0 cm)
(b) At position r2 = (0 cm, 4.0 cm, 0 cm).
(c) At position r3 = (0 cm, 0 cm 1.5 cm).
(d) At position r4 = (3.5 cm, 1.5 cm, 0 cm).
(e) At position r5 = (3.0 cm, 0 cm, 1.0 cm)
2. Relevant equations
Bpoint charge = [μ0/4pi] * [qv x sin Θ / r2]
3. The attempt at a solution
I saw that this problem has a negative charge, so I'd have to use the RHR and reverse direction to account for the charge being negative. I also got the fact that the magnetic field at a point along the same axis as the charge's velocity is 0 Teslas.
I ended up with
(a) 0 i + _ j + 0 k
(b) _ i + 0 j + 0 k
(c) 0 i + 0 j + 0 k
(d) _ i + _ j + 0 k
(e) 0 i + _ j + 0 k
However, everytime I used the point charge formula I have, I end up with an incorrect answer. For example, in part (a), with a r = 2 cm = 0.02 m, I plugged that into [μ0/4pi] * [|q|v x sin Θ / r2], and then reversing the sign to account for a negative charge, but it didn't work.
i.e. I calculated that for part (a), a value of 2 cm for r would have a magnetic field of -5.29e-4 T in the j direction, but that's apparently not right.
2. Mar 30, 2014
### Simon Bridge
What makes you think that is not right?
$$\vec B = \frac{\mu_0}{4\pi}\frac{q\vec v\times \vec r}{r^3}$$
For part a: $\vec v = (0,0,v)^t,\;\vec r = (x,0,0)^t$
$$\vec B = \frac{\mu_0q}{4\pi x^3}\left|\begin{matrix}\hat\imath & \hat\jmath & \hat k\\ 0 & 0 & v\\ x & 0 & 0\end{matrix}\right| = \frac{\mu_0 qv}{4\pi x^2}\hat\jmath$$... plug the numbers in and show me your working.
ref: http://maxwell.ucdavis.edu/~electro/magnetic_field/pointcharge.html
Know someone interested in this topic? Share this thread via Reddit, Google+, Twitter, or Facebook
Have something to add?
Draft saved Draft deleted
Similar Discussions: Magnetic Field of a point charge | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8490176200866699, "perplexity": 1140.2080269569562}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948522343.41/warc/CC-MAIN-20171213084839-20171213104839-00296.warc.gz"} |
http://doc.rero.ch/record/30797?ln=it | Faculté des sciences économiques et sociales
## Comparing smooth transition and Markov switching autoregressive models of US unemployment
### In: Journal of Applied Econometrics
Logistic smooth transition and Markov switching autoregressive models of a logistic transform of the monthly US unemployment rate are estimated by Markov chain Monte Carlo methods. The Markov switching model is identified by constraining the first autoregression coefficient to differ across regimes. The transition variable in the LSTAR model is the lagged seasonal difference of the unemployment... Di più
Aggiungi alla tua lista
# Esporta come
Summary
Logistic smooth transition and Markov switching autoregressive models of a logistic transform of the monthly US unemployment rate are estimated by Markov chain Monte Carlo methods. The Markov switching model is identified by constraining the first autoregression coefficient to differ across regimes. The transition variable in the LSTAR model is the lagged seasonal difference of the unemployment rate. Out of sample forecasts are obtained from Bayesian predictive densities. Although both models provide very similar descriptions, Bayes factors and predictive efficiency tests (both Bayesian and classical) favor the smooth transition model. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.881284236907959, "perplexity": 2700.445523367567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345760572/warc/CC-MAIN-20131218054920-00036-ip-10-33-133-15.ec2.internal.warc.gz"} |
http://texhacks.blogspot.com/2009/09/numbering-every-paragraph.html | ## Friday, September 25, 2009
### Numbering every paragraph
One occasionally finds cause to number every graf in a document. Well, I never have, but I've seen it done. There's a neat little trick that is mentioned in passing in an exercise in the TeXbook that easily enables this. This relies on a TeX primitive \everypar which is essentially a token register which is to say that it holds a list of tokens that can be used again and again. What exactly a token is is a topic for another post. When TeX starts a new paragraph (or more precisely, when it enters horizontal mode), it does two things. First, it inserts an empty box of width \parindent (this is the indentation that occurs at the start of every graf) and then it will process the tokens defined by \everypar before going on to process the rest of the tokens that make up the graf. The upshot of this is that we can cause TeX to do something at the start of every graf, but after it inserts the indentation glue. The way to use this is to doing something like the following.
```\newcounter{grafcounter}
\setcounter{grafcounter}{0}
```\everypar={\addtocounter{grafcounter}{1}%
```\everypar=\expandafter{\the\everypar | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9940259456634521, "perplexity": 745.5697712902428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863650.42/warc/CC-MAIN-20180620143814-20180620163814-00501.warc.gz"} |
https://www.qalaxia.com/questions/The-terminal-side-of-an-angle-math-thetamath-in-standard | Krishna
0
Step 1: Know about the unit circle
NOTE: Any point on the unit circle will be a distance of one unit from the center
this is the definition of the unit circle.
• The terminal side of an angle \theta in standard position intersects the unit circle at (\cos \theta, \sin \theta).
• Another thing you can see from the unit circle is that the values of sine and cosine will never be more than 1 or less than –1.
Step 2: Use the unit circle properties to find the trigonometric ratio.
NOTE: The terminal side of an angle \theta in standard position intersects the
unit circle at (\cos \theta, \sin \theta).
Since the terminal side of \theta intersects the unit circle
at (\frac{3}{5},\frac{4}{5}) = (\cos \theta, \sin \theta)
Therefore, \cos\theta=\frac{3}{5} | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9544823169708252, "perplexity": 784.7863317599722}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529737.79/warc/CC-MAIN-20190723215340-20190724001340-00360.warc.gz"} |
http://www.chegg.com/homework-help/questions-and-answers/1-difference-speed-velocity--speed-average-quantity-velocity--b-velocity-containsinformati-q635832 | 1. What is the difference between speed and velocity?
a. Speed is an average quantity while velocity is not.
b. Velocity containsinformation about the direction of motion while speed doesnot.
c. Speed is measured in mph, while velocity is measure inm/s.
d. The concept of speed applies only to objects that areneither speeding up or slowing down, while velocity applies toevery kind of motion.
e. Speed is used to measure how fast an object is moving in astraight line, while velocity is used for objects moving alongcurved paths.
2. The quantity 2.67 x 10^3 has how many significantfigures?
a.1
b.2
c.3
d.4
e.5
3. If Libby walks 100 m to the right, then 200 m to the left,her net displacement vector points
a. to the right.
b. to theleft.
c. has zero length.
4. Velocity vectors point
a. in the samedirection as displacement vectors.
b. in the opposite direction as displacement vectors.
c. perpendicular to displacement vectors.
d. in the same direction as acceleration vectors.
e. velocity is not represented by a vector.
Hi, so the answers highlighted in yellow are my "answers" canyou please provide me with the correct solutions to everythingso I can see what I need to work on. Thanks. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.957813560962677, "perplexity": 1487.8490208820256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802773061.155/warc/CC-MAIN-20141217075253-00067-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/does-amp-0-imply-that-a-wave-has-0-speed.230404/ | # Does Amp=0 imply that a wave has 0 speed?
1. Apr 21, 2008
### gerry73191
1. The problem statement, all variables and given/known data
Will a wave with an amplitude of 0 have any velocity?
2. Relevant equations
V=F(lambda) E=kA^2
3. The attempt at a solution
I think the wave dissappears because if it carries no energy then it ceases to be a wave, but Im not sure though.
2. Apr 21, 2008
### Hootenanny
Staff Emeritus
You are correct .
In a little more detail the solution y(x,t)=0, is the trivial solution of the wave-equation and is not usually considered to be physically meaningful.
3. Apr 21, 2008
### gerry73191
thank you.
Similar Discussions: Does Amp=0 imply that a wave has 0 speed? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9677104353904724, "perplexity": 2344.2436408297303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822513.17/warc/CC-MAIN-20171017215800-20171017235800-00095.warc.gz"} |
http://djconnel.blogspot.com/2010/01/transmission-of-road-vibration-through_04.html?showComment=1262721857570 | ## Monday, January 4, 2010
### transmission of road vibration through bike tires (2)
Last time, I analyzed a single bike tire, then assumed that tire suspended half the bicycle mass. This is equivalent to assuming the bike goes over a bump which hits each tire simultaneously, raising and lowering the bike as a unit. The result was an oscillation frequency around 10 Hz (depending on mass and tire pressure).
Well, it turns out this sort of thing has been measured before. Champoux used a bike with accelerometers attached, either riding on a treadmill with a bump attached, or riding outdoors and letting the cracks on the road provide the bumps. He did further experiments with shakers attached either to the front hub, or to the hub and handlebars.
Champoux on his treadmill (from his paper)
When this sort of thing is done, unless damping is extreme, you tend to see oscillations occur at near the normal oscillatory modes of the system. For example, one mode is the one I analyzed: the bike bounces up and down on its two tires together. But Champoux identified a number of modes:
Vibrational modes measured by Champoux. OMA: measured while riding on a treadmill; MIMO: measured with shakers on the front dropout and handlebars.
I'll deal more with this table next time. But the thing to observe here is that the "vertical motion of rider and bike" mode is at 21.9 Hz. Hey! I calculated 10 Hz for this. What's up with that? When I saw this table I was deeply disturbed for awhile. Until I figured out the problem. (Thanks to Kraig Willett for the hint).
The problem is, of course, that when the bike hits a bump both wheels don't respond together. The bump hits first the front wheel, which bounces, then the rear wheel, which bounces. At first I assumed these would respond the same as if both were hit together. But these are not uncoupled oscillators. They're coupled by the bicycle frame.
And when you have coupled oscillators you can have multiple oscillator modes. With two identical oscillators (not a terrible approximation) you can have an even mode (which I analyzed in this case) and an odd mode. The odd mode in this case is where the front wheel goes up by a certain amount, and the rear wheel goes down by the same amount. A real hit will be some combination of the odd and even mode, getting a bit of each. But for the standpoint of reducing the high frequency components, the vibrational mode with the higher natural frequency will tend to be the limiting factor.
bicycle oscillation modes
Any sort of oscillation can be considered as a sharing of energy between two reservoirs. For a sping-mass system, you have potential energy (when the spring is fully stretched or compressed), versus kinetic energy (when the spring is in its neutral position, but the object is moving at maximum speed). Each energy varies with time, but the sum is constant (for an undamped oscillator) or decreases smoothly over time (for a damped oscillator). As long as damping is modest, you can get a nice estimate of the oscillation frequency by neglecting the damping.
So consider two cases: (1) both wheels vibrate between ±h, and (2) the front wheel vibrates between ±h while the rear wheel vibrates between ∓ h (opposite sign). In the first case, the bike's center of mass moves back-and-forth, but there is no rotation. In the second case, assuming the bike is symmetric (not too bad an approximation for a bike with equal sized wheels) the center of mass is stationary, but the bike + rider rotates about the center of mass. In the first case, the kinetic energy is determined by the net mass of the system: that's the inertia term. But in the second case, the relevant inertia term is the moment of inertia, which depends on how far mass is located from the center-of-mass. If all the mass is at the center-of-mass, then the moment of inertia is zero.
So the result is that the two modes will have different oscillation frequencies. Consider a simple case: the mass is uniformly distributed in a line segment between the two axles. Then the oscillation frequency will be 73% higher in the odd mode than in the even mode. This is most of the difference between my estimate and Champoux's measurement. In reality, mass is likely clumped a bit closer to the center of mass than this.
Consider another approximation: the human body is a sphere of mass 56 kg, of the same density of sea water (around 1 kg / liter) and the bike has a wheelbase of 1 meter. The radius of the sphere is then around 23.7 cm. Then I add in the bike, which is the line segment I just mentioned, and weighs 6.9 kg (the UCI limit). If I did the calculation right, a very big if, this would result in the odd mode having a natural frequency 5.2 times larger than the even mode.
Reality is obviously somewhere in between. Given this, the factor of two difference with Champoux seems quite reasonable. And even though he failed to identify his "bouncing on the tires" mode as an odd (versus an even) mode, I've got to believe that's what it was.
So to summarize: around 20 Hz.
Next time I'll consider one of those other modes.
Ron said...
Dan,
Tires are unsprung masses that filter high frequency, low amplitude vibrations. I have read the study you quoted before and even made a mention of Velus' research work on my blog in the past, including some bits on EMA.
Some personal observations :
1) Difference in EMA between no cyclist vs occupied cyclist, as the latter introduces vibration at the handlebars. Still, the numbers behind the modes in EMA (SIMO) are sort of close to what would be seen in a modern sporting motorcycle, you know..for example, with a mass of 190 kg, 1st through 4th modes are 23, 28, 32 and 36 Hz respectively although structural elements and their planes of vibration differ. Your calculation of 10 Hz sounds a little low-ish to me.
2) OMA results in fewer modes than that seen in the EMA in the lab but perhaps this is from the fact that they exaggerated the treadmill bump and hence the excitation made available. I have written about the dynamic behavior of the bicycle, and in Part II of my series, I noted that the state-of-the-art mathematical model designed for the bicycle at Delft has about 24 degrees of freedom (DOF=independant co-ordinates). It is ridiculously complex than previously imagined. This may mean that the bicycle actually shows 20+ different modes of vibration as # of vibration modes should equal the DOF. I don't think it is practical to study all these little tiny modes, there's more sense in concentrating on the dominant ones that affect dynamic comfort.
3) In the past I have rummaged through available literature to find an objective set of absolute limits for vibration. I have found zero for the bicycle but some for the passenger car. SAE's Janeway Report for amplitude vs frequency is a good resource to have and is often quoted in vibration literature but I find that you cannot fix the limits for human comfort objectively. The problem is complicated by variations in individual sensitivity and diversity of test method and by the fact that such "limits" are based on just single sinusoidal frequencies.
4) I have thought a lot about vibration dampening of tires and it is almost a forbidding topic, specifically due to the math involved in handling the degrees of freedom and the modeling parameters. Computer programming and analysis is apt. Experimental measurements are even better.
Ron said...
Aah shoot, forgot to pass this along as well in my previous comment.
Measurement of vibration in Museeuw's flax bike.
djconnel said...
Ron:
Nice work! I'm going to be looking your posts over carefully. I think my calculation for the even mode is good (hard to believe I made a factor of four error in the spring constant of tires). Clearly the odd mode is more relevant to single-wheel events, and the odd mode frequency comes out a bit right. Maybe I'll take a stab at estimating my moment of inertia, which will allow me to estimate the odd mode frequency more precisely than assuming I'm a sphere of sea water. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8337101340293884, "perplexity": 831.4409803897566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507442288.9/warc/CC-MAIN-20141017005722-00049-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://www.clutchprep.com/chemistry/practice-problems/100561/in-the-following-diagram-the-white-spheres-represent-hydrogen-atoms-and-the-blue | # Problem: In the following diagram, the white spheres represent hydrogen atoms and the blue spheres represent nitrogen atoms.The two reactants combine to form a single product, ammonia, NH3, which is not shown. Write a balanced chemical equation for the reaction. Based on the equation and the contents of the left (reactants) box how many molecules should be shown in the right (products) box?
###### FREE Expert Solution
When balancing chemical equations the number of atoms of each element must be the same for both sides (reactants and products) of the chemical equation.
Complete reaction: H2 + N2 → NH3
• add coefficient to balance
79% (110 ratings)
###### Problem Details
In the following diagram, the white spheres represent hydrogen atoms and the blue spheres represent nitrogen atoms.
The two reactants combine to form a single product, ammonia, NH3, which is not shown. Write a balanced chemical equation for the reaction. Based on the equation and the contents of the left (reactants) box how many molecules should be shown in the right (products) box?
Frequently Asked Questions
What scientific concept do you need to know in order to solve this problem?
Our tutors have indicated that to solve this problem you will need to apply the Balancing Chemical Equations concept. You can view video lessons to learn Balancing Chemical Equations. Or if you need more Balancing Chemical Equations practice, you can also practice Balancing Chemical Equations practice problems. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8005337119102478, "perplexity": 1249.7596376030854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703521139.30/warc/CC-MAIN-20210120151257-20210120181257-00383.warc.gz"} |
https://www.meritnation.com/cbse-class-11-humanities/math/rd-sharma-xi-2019/graphs-of-trigonometric-functions/textbook-solutions/69_1_3518_10206_6.5_137697 | Rd Sharma Xi 2019 Solutions for Class 11 Humanities Math Chapter 6 Graphs Of Trigonometric Functions are provided here with simple step-by-step explanations. These solutions for Graphs Of Trigonometric Functions are extremely popular among Class 11 Humanities students for Math Graphs Of Trigonometric Functions Solutions come handy for quickly completing your homework and preparing for exams. All questions and answers from the Rd Sharma Xi 2019 Book of Class 11 Humanities Math Chapter 6 are provided here for you for free. You will also love the ad-free experience on Meritnation’s Rd Sharma Xi 2019 Solutions. All Rd Sharma Xi 2019 Solutions for class Class 11 Humanities Math are prepared by experts and are 100% accurate.
#### Question 1:
Sketch the graphs of the following functions:
f(x) = 2 cosec πx
f(x) = 2 cosec πx
#### Question 2:
Sketch the graphs of the following functions:
f
(x) = 3 sec x
f(x) = 3 sec x
#### Question 3:
Sketch the graphs of the following functions:
f
(x) = cot 2x
f(x) = cot 2x
#### Question 4:
Sketch the graphs of the following functions:
f
(x) = 2 sec πx
f(x) = 2 sec πx
#### Question 5:
Sketch the graphs of the following functions:
f
(x) = tan2 x
f(x) = tan2 x
#### Question 6:
Sketch the graphs of the following functions:
f
(x) = cot2 x
f(x) = cot2 x
#### Question 7:
Sketch the graphs of the following functions:
$f\left(x\right)=\mathrm{cot}\frac{\pi x}{2}$
$f\left(x\right)=\mathrm{cot}\frac{\pi x}{2}$
#### Question 8:
Sketch the graphs of the following functions:
f
(x) = sec2 x
f(x) = sec2 x
#### Question 9:
Sketch the graphs of the following functions:
f(x) = cosec2 x
f(x) = cosec2 x
#### Question 10:
Sketch the graphs of the following functions:
f
(x) = tan 2x
Step I- We find the value of c and a by comparing y = 2 tan 2x with y = c tan ax, i.e. c = 1 and a = 2.
Step II- Then, we draw the graph of y = tan x and mark the point where it crosses the x-axis.
Step III- Divide the x-coordinates of the points where y = tan crosses x-axis by 2(i.e. a = 2) and mark the maximum value (i.e. c = 1) and minimum value (i.e.$-$c = $-$1).
Then , we obtain the following graph:
#### Question 1:
Sketch the graphs of the following functions:
(i) f(x) = 2 sin x, 0 ≤ x ≤ π
(ii)
(iii)
(iv)
(v)
(vi)
(vii)
(viii) f(x) = 2 sin πx, 0 ≤ x ≤ 2
The graphs of the following functions are:
(i) f(x) = 2 sin x, 0 ≤ x ≤ π
x 0 π f(x) = 2 sin x 0 0
(ii)
x $\frac{\mathrm{\pi }}{4}$ $\frac{5\mathrm{\pi }}{4}$ 0 0
(iii)
x 0 $\frac{\mathrm{\pi }}{3}$ $\frac{2\mathrm{\pi }}{3}$ 0 0 0
(iv)
x $\frac{\mathrm{\pi }}{6}$ $\frac{4\mathrm{\pi }}{6}$ 0 0
(v)
x $\frac{\mathrm{\pi }}{4}$ $\frac{7\mathrm{\pi }}{12}$ 0 0
(vi)
x $\frac{\mathrm{\pi }}{2}$ $\frac{5\mathrm{\pi }}{2}$ 0 0
(vii)
x 0 $\mathrm{\pi }$ 0 0
x 0 $\mathrm{\pi }$ 0 0
(viii) f(x) = 2 sin πx, 0 ≤ x ≤ 2
x 0 1 f(x) = 2 sin πx 0 0
#### Question 2:
Sketch the graphs of the following pairs of functions on the same axes:
(i)
(ii) f(x) = sin x, g(x) = sin 2x
(iii) f(x) = sin 2x, g(x) = 2 sin x
(iv)
(i)
Clearly, sin x and is a periodic function with period 2π.
The graphs of on different axes are shown below:
If these two graphs are drawn on the same axes, then the graph is shown below.
(ii) f(x) = sin x, g(x) = sin 2x
Clearly, sin x and sin 2x is a periodic function with period 2π and π, respectively.
The graphs of f(x) = sin x and g(x) = sin 2x on different axes are shown below:
If these two graphs are drawn on the same axes, then the graph is shown below.
(iii) f(x) = sin 2x, g(x) = 2 sin x
Clearly, sin 2x and 2 sin x is a periodic function with period π and 2π, respectively.
The graphs of f(x) = sin 2x and g(x) = 2 sin x on different axes are shown below:
If these two graphs are drawn on the same axes, then the graph is shown below.
(iv)
Clearly, sin $\frac{x}{2}$ and sin x is a periodic function with period 4π and 2π, respectively.
The graphs of f(x) = sin $\frac{x}{2}$ and g(x) = sin x on different axes are shown below:
If these two graphs are drawn on the same axes, then the graph is shown below.
#### Question 1:
Sketch the graphs of the following trigonometric functions:
(i) $f\left(x\right)=\mathrm{cos}\left(x-\frac{\pi }{4}\right)$
(ii) $g\left(x\right)=\mathrm{cos}\left(x+\frac{\pi }{4}\right)$
(iii) h(x) = cos2 2x
(iv)
(v) ψ(x) = cos 3x
(vi) $u\left(x\right)={\mathrm{cos}}^{2}\frac{x}{2}$
(vii) f(x) = cos π x
(viii) g(x) = cos 2π x
(i)
Then, we obtain the following graph:
(ii)
Then, we obtain the following graph:
(iii)
The following graph is:
(iv)
Then, we obtain the following graph:
(v)
The following graph is:
(vi)
The following graph is:
(vii)
The following graph is:
(viii)
The following graph is:
#### Question 2:
Sketch the graphs of the following curves on the same scale and the same axes:
(i)
(ii)
(iii)
(iv)
(i)
First, we draw the graph of y = cos x.
Let us now draw the graph of $y=\mathrm{cos}\left(x-\frac{\mathrm{\pi }}{4}\right)$.
Then, we will obtain the following graph:
(ii)
First, we draw the graph of y = cos 2x.
Let us now draw the graph of $y=\mathrm{cos}2\left(x-\frac{\mathrm{\pi }}{4}\right)$.
Then, we will obtain the following graph:
(iii)
First, we draw the graph of y = cos x.
Let us now draw the graph of $y=\mathrm{cos}\left(\frac{x}{2}\right)$.
Then, we will obtain the following graph:
(iv)
First, we draw the graph of y = cos2 x.
Let us now draw the graph of y = cos x.
Then, we will obtain the following graph:
View NCERT Solutions for all chapters of Class 14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 24, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8579419255256653, "perplexity": 1497.5321880275042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385534.85/warc/CC-MAIN-20210308235748-20210309025748-00389.warc.gz"} |
https://www.lessonplanet.com/teachers/volume-of-a-cube-cuboid | Volume of a Cube/Cuboid
In this math worksheet, students find the volume of six cuboids with given measurements. The second set of shapes requires the students to first convert the measurements to common units before using the formula. Students find the four possible measurements of boxes with the same volume.
Resource Details | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.917177140712738, "perplexity": 1121.8331874345301}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540529745.80/warc/CC-MAIN-20191211021635-20191211045635-00314.warc.gz"} |
https://www.econometrics.blog/post/the-wilson-confidence-interval-for-a-proportion/ | # The Wilson Confidence Interval for a Proportion
This is the second in a series of posts about how to construct a confidence interval for a proportion. (Simple problems sometimes turn out to be surprisingly complicated in practice!) In the first part, I discussed the serious problems with the “textbook” approach, and outlined a simple hack that works amazingly well in practice: the Agresti-Coull confidence interval.
Somewhat unsatisfyingly, my earlier post gave no indication of where the Agresti-Coull interval comes from, how to construct it when you want a confidence level other than 95%, and why it works. In this post I’ll fill in some of the gaps by discussing yet another confidence interval for a proportion: the Wilson interval, so-called because it first appeared in Wilson (1927). While it’s not usually taught in introductory courses, it easily could be. Not only does the Wilson interval perform extremely well in practice, it packs a powerful pedagogical punch by illustrating the idea of “inverting a hypothesis test.” Spoiler alert: the Agresti-Coull interval is a rough-and-ready approximation to the Wilson interval.
To understand the Wilson interval, we first need to remember a key fact about statistical inference: hypothesis testing and confidence intervals are two sides of the same coin. We can use a test to create a confidence interval, and vice-versa. In case you’re feeling a bit rusty on this point, let me begin by refreshing your memory with the simplest possible example. If this is old hat to you, skip ahead to the next section.
# Tests and CIs – Two Sides of the Same Coin
Suppose that we observe a random sample $$X_1, \dots, X_n$$ from a normal population with unknown mean $$\mu$$ and known variance $$\sigma^2$$. Under these assumptions, the sample mean $$\bar{X}_n \equiv \left(\frac{1}{n} \sum_{i=1}^n X_i\right)$$ follows a $$N(\mu, \sigma^2/n)$$ distribution. Centering and standardizing, $\frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}} \sim N(0,1).$ Now, suppose we want to test $$H_0\colon \mu = \mu_0$$ against the two-sided alternative $$H_1\colon \mu = \mu_0$$ at the 5% significance level. If $$\mu = \mu_0$$, then the test statistic $T_n \equiv \frac{\bar{X}_n - \mu_0}{\sigma/\sqrt{n}}$ follows a standard normal distribution. If $$\mu \neq \mu_0$$, then $$T_n$$ does not follow a standard normal distribution. To carry out the test, we reject $$H_0$$ if $$|T_n|$$ is greater than $$1.96$$, the $$(1 - \alpha/2)$$ quantile of a standard normal distribution for $$\alpha = 0.05$$. To put it another way, we fail to reject $$H_0$$ if $$|T_n| \leq 1.96$$. So for what values of $$\mu_0$$ will we fail to reject? By the definition of absolute value and the definition of $$T_n$$ from above, $$|T_n| \leq 1.96$$ is equivalent to $- 1.96 \leq \frac{\bar{X}_n - \mu_0}{\sigma/\sqrt{n}} \leq 1.96.$ Re-arranging, this in turn is equivalent to $\bar{X}_n - 1.96 \times \frac{\sigma}{\sqrt{n}} \leq \mu_0 \leq \bar{X}_n + 1.96 \times \frac{\sigma}{\sqrt{n}}.$ This tells us that the values of $$\mu_0$$ we will fail to reject are precisely those that lie in the interval $$\bar{X} \pm 1.96 \times \sigma/\sqrt{n}$$. Does this look familiar? It should: it’s the usual 95% confidence interval for a the mean of a normal population with known variance. The 95% confidence interval corresponds exactly to the set of values $$\mu_0$$ that we fail to reject at the 5% level.
This example is a special case a more general result. If you give me a $$(1 - \alpha)\times 100\%$$ confidence interval for a parameter $$\theta$$, I can use it to test $$H_0\colon \theta = \theta_0$$ against $$H_0 \colon \theta \neq \theta_0$$. All I have to do is check whether $$\theta_0$$ lies inside the confidence interval, in which case I fail to reject, or outside, in which case I reject. Conversely, if you give me a two-sided test of $$H_0\colon \theta = \theta_0$$ with significance level $$\alpha$$, I can use it to construct a $$(1 - \alpha) \times 100\%$$ confidence interval for $$\theta$$. All I have to do is collect the values of $$\theta_0$$ that are not rejected. This procedure is called inverting a test.
# How to Confuse Your Introductory Statistics Students
Around the same time as we teach students the duality between testing and confidence intervals–you can use a confidence interval to carry out a test or a test to construct a confidence interval–we throw a wrench into the works. The most commonly-presented test for a population proportion $$p$$ does not coincide with the most commonly-presented confidence interval for $$p$$. To quote from page 355 of Kosuke Imai’s fantastic textbook Quantitative Social Science: An Introduction
the standard error used for confidence intervals is different from the standard error used for hypothesis testing. This is because the latter standard error is derived under the null hypothesis … whereas the standard error for confidence intervals is computed using the estimated proportion.
Let’s translate this into mathematics. Suppose that $$X_1, ..., X_n \sim \text{iid Bernoulli}(p)$$ and let $$\widehat{p} \equiv (\frac{1}{n} \sum_{i=1}^n X_i)$$. The two standard errors that Imai describes are $\text{SE}_0 \equiv \sqrt{\frac{p_0(1 - p_0)}{n}} \quad \text{versus} \quad \widehat{\text{SE}} \equiv \sqrt{\frac{\widehat{p}(1 - \widehat{p})}{n}}.$ Following the advice of our introductory textbook, we test $$H_0\colon p = p_0$$ against $$H_1\colon p \neq p_0$$ at the $$5\%$$ level by checking whether $$|(\widehat{p} - p_0) / \text{SE}_0|$$ exceeds $$1.96$$. This is called the score test for a proportion. Again following the advice of our introductory textbook, we report $$\widehat{p} \pm 1.96 \times \widehat{\text{SE}}$$ as our 95% confidence interval for $$p$$. As you may recall from my earlier post, this is the so-called Wald confidence interval for $$p$$. Because the two standard error formulas in general disagree, the relationship between tests and confidence intervals breaks down.
To make this more concrete, let’s plug in some numbers. Suppose that $$n = 25$$ and our observed sample contains 5 ones and 20 zeros. Then $$\widehat{p} = 0.2$$ and we can calculate $$\widehat{\text{SE}}$$ and the Wald confidence interval as follows
n <- 25
n1 <- 5
p_hat <- n1 / n
alpha <- 0.05
SE_hat <- sqrt(p_hat * (1 - p_hat) / n)
p_hat + c(-1, 1) * qnorm(1 - alpha / 2) * SE_hat
## [1] 0.04320288 0.35679712
The value 0.07 is well within this interval. This suggests that we should fail to reject $$H_0\colon p = 0.07$$ against the two-sided alternative. But when we compute the score test statistic we obtain a value well above 1.96, so that $$H_0\colon p = 0.07$$ is soundly rejected:
p0 <- 0.07
SE0 <- sqrt(p0 * (1 - p0) / n)
abs((p_hat - p0) / SE0)
## [1] 2.547551
The test says reject $$H_0\colon p = 0.07$$ and the confidence interval says don’t. Upon encountering this example, your students decide that statistics is a tangled mess of contradictions, despair of ever making sense of it, and resign themselves to simply memorizing the requisite formulas for the exam.
# Should we teach the Wald test instead?
How can we dig our way out of this mess? One idea is to use a different test, one that agrees with the Wald confidence interval. If we had used $$\widehat{\text{SE}}$$ rather than $$\text{SE}_0$$ to test $$H_0\colon p = 0.07$$ above, our test statistic would have been
abs((p_hat - p0) / SE_hat)
## [1] 1.625
which is clearly less than 1.96. Thus we would fail to reject $$H_0\colon p = 0.7$$ exactly as the Wald confidence interval instructed us above. This procedure is called the Wald test for a proportion. Its main benefit is that it agrees with the Wald interval, unlike the score test, restoring the link between tests and confidence intervals that we teach our students. Unfortunately the Wald confidence interval is terrible and you should never use it. Because the Wald test is equivalent to checking whether $$p_0$$ lies inside the Wald confidence interval, it inherits all of the latter’s defects.
Indeed, compared to the score test, the Wald test is a disaster, as I’ll now show. Suppose we carry out a 5% test. If the null is true, we should reject it 5% of the time. Because the Wald and Score tests are both based on an approximation provided by the central limit theorem, we should allow a bit of leeway here: the actual rejection rates may be slightly different from 5%. Nevertheless, we’d expect them to at least be fairly close to the nominal value of 5%. The following plot shows the actual type I error rates of the score and Wald tests, over a range of values for the true population proportion $$p$$ with sample sizes of 25, 50, and 100. In each case the nominal size of each test, shown as a dashed red line, is 5%.1
The score test isn’t perfect: if $$p$$ is extremely close to zero or one, its actual type I error rate can be appreciably higher than its nominal type I error rate: as much as 10% compared to 5% when $$n = 25$$. But in general, its performance is good. In contrast, the Wald test is absolutely terrible: its nominal type I error rate is systematically higher than 5% even when $$n$$ is not especially small and $$p$$ is not especially close to zero or one.
Granted, teaching the Wald test alongside the Wald interval would reduce confusion in introductory statistics courses. But it would also equip students with lousy tools for real-world inference. There is a better way: rather than teaching the test that corresponds to the Wald interval, we could teach the confidence interval that corresponds to the score test.
# Inverting the Score Test
Suppose we collect all values $$p_0$$ that the score test does not reject at the 5% level. If the score test is working well–if its nominal type I error rate is close to 5%–the resulting set of values $$p_0$$ will be an approximate $$(1 - \alpha) \times 100\%$$ confidence interval for $$p$$. Why is this so? Suppose that $$p_0$$ is the true population proportion. Then an interval constructed in this way will cover $$p_0$$ precisely when the score test does not reject $$H_0\colon p = p_0$$. This occurs with probability $$(1 - \alpha)$$. Because the score test is much more accurate than the Wald test, the confidence interval that we obtain by inverting it way will be much more accurate than the Wald interval. This interval is called the score interval or the Wilson interval.
So let’s do it: let’s invert the score test. Our goal is to find all values $$p_0$$ such that $$|(\widehat{p} - p_0)/\text{SE}_0|\leq c$$ where $$c$$ is the normal critical value for a two-sided test with significance level $$\alpha$$. Squaring both sides of the inequality and substituting the definition of $$\text{SE}_0$$ from above gives $(\widehat{p} - p_0)^2 \leq c^2 \left[ \frac{p_0(1 - p_0)}{n}\right].$ Multiplying both sides of the inequality by $$n$$, expanding, and re-arranging leaves us with a quadratic inequality in $$p_0$$, namely $(n + c^2) p_0^2 - (2n\widehat{p} + c^2) p_0 + n\widehat{p}^2 \leq 0.$ Remember: we are trying to find the values of $$p_0$$ that satisfy the inequality. The terms $$(n + c^2)$$ along with $$(2n\widehat{p})$$ and $$n\widehat{p}^2$$ are constants. Once we choose $$\alpha$$, the critical value $$c$$ is known. Once we observe the data, $$n$$ and $$\widehat{p}$$ are known. Since $$(n + c^2) > 0$$, the left-hand side of the inequality is a parabola in $$p_0$$ that opens upwards. This means that the values of $$p_0$$ that satisfy the inequality must lie between the roots of the quadratic equation $(n + c^2) p_0^2 - (2n\widehat{p} + c^2) p_0 + n\widehat{p}^2 = 0.$ By the quadratic formula, these roots are $p_0 = \frac{(2 n\widehat{p} + c^2) \pm \sqrt{4 c^2 n \widehat{p}(1 - \widehat{p}) + c^4}}{2(n + c^2)}.$ Factoring $$2n$$ out of the numerator and denominator of the right-hand side and simplifying, we can re-write this as \begin{align*} p_0 &= \frac{1}{2\left(n + \frac{n c^2}{n}\right)}\left\{\left(2n\widehat{p} + \frac{2n c^2}{2n}\right) \pm \sqrt{4 n^2c^2 \left[\frac{\widehat{p}(1 - \widehat{p})}{n}\right] + 4n^2c^2\left[\frac{c^2}{4n^2}\right] }\right\} \\ \\ p_0 &= \frac{1}{2n\left(1 + \frac{ c^2}{n}\right)}\left\{2n\left(\widehat{p} + \frac{c^2}{2n}\right) \pm 2nc\sqrt{ \frac{\widehat{p}(1 - \widehat{p})}{n} + \frac{c^2}{4n^2}} \right\} \\ \\ p_0 &= \left( \frac{n}{n + c^2}\right)\left\{\left(\widehat{p} + \frac{c^2}{2n}\right) \pm c\sqrt{ \widehat{\text{SE}}^2 + \frac{c^2}{4n^2} }\right\}\\ \\ \end{align*} using our definition of $$\widehat{\text{SE}}$$ from above. And there you have it: the right-hand side of the final equality is the $$(1 - \alpha)\times 100\%$$ Wilson confidence interval for a proportion, where $$c = \texttt{qnorm}(1 - \alpha/2)$$ is the normal critical value for a two-sided test with significance level $$\alpha$$, and $$\widehat{\text{SE}}^2 = \widehat{p}(1 - \widehat{p})/n$$.
Compared to the Wald interval, $$\widehat{p} \pm c \times \widehat{\text{SE}}$$, the Wilson interval is certainly more complicated. But it is constructed from exactly the same information: the sample proportion $$\widehat{p}$$, two-sided critical value $$c$$ and sample size $$n$$. Computing it by hand is tedious, but programming it in R is a snap:
get_wilson_CI <- function(x, alpha = 0.05) {
#-----------------------------------------------------------------------------
# Compute the Wilson (aka Score) confidence interval for a popn. proportion
#-----------------------------------------------------------------------------
# x vector of data (zeros and ones)
# alpha 1 - (confidence level)
#-----------------------------------------------------------------------------
n <- length(x)
p_hat <- mean(x)
SE_hat_sq <- p_hat * (1 - p_hat) / n
crit <- qnorm(1 - alpha / 2)
omega <- n / (n + crit^2)
A <- p_hat + crit^2 / (2 * n)
B <- crit * sqrt(SE_hat_sq + crit^2 / (4 * n^2))
CI <- c('lower' = omega * (A - B),
'upper' = omega * (A + B))
return(CI)
}
Notice that this is only slightly more complicated to implement than the Wald confidence interval:
get_wald_CI <- function(x, alpha = 0.05) {
#-----------------------------------------------------------------------------
# Compute the Wald confidence interval for a popn. proportion
#-----------------------------------------------------------------------------
# x vector of data (zeros and ones)
# alpha 1 - (confidence level)
#-----------------------------------------------------------------------------
n <- length(x)
p_hat <- mean(x)
SE_hat <- sqrt(p_hat * (1 - p_hat) / n)
ME <- qnorm(1 - alpha / 2) * SE_hat
CI <- c('lower' = p_hat - ME,
'upper' = p_hat + ME)
return(CI)
}
With a computer rather than pen and paper there’s very little cost using the more accurate interval. Indeed, the built-in R function prop.test() reports the Wilson confidence interval rather than the Wald interval:
set.seed(1234)
x <- rbinom(20, 1, 0.5)
prop.test(sum(x), length(x), correct = FALSE) # no continuity correction
##
## 1-sample proportions test without continuity correction
##
## data: sum(x) out of length(x), null probability 0.5
## X-squared = 0.2, df = 1, p-value = 0.6547
## alternative hypothesis: true p is not equal to 0.5
## 95 percent confidence interval:
## 0.3420853 0.7418021
## sample estimates:
## p
## 0.55
get_wilson_CI(x)
## lower upper
## 0.3420853 0.7418021
# Understanding the Wilson Interval
You could stop reading here and simply use the code from above to construct the Wilson interval. But computing is only half the battle: we want to understand our measures of uncertainty. While the Wilson interval may look somewhat strange, there’s actually some very simple intuition behind it. It amounts to a compromise between the sample proportion $$\widehat{p}$$ and $$1/2$$.
The Wald estimator is centered around $$\widehat{p}$$, but the Wilson interval is not. Manipulating our expression from the previous section, we find that the midpoint of the Wilson interval is \begin{align*} \widetilde{p} &\equiv \left(\frac{n}{n + c^2} \right)\left(\widehat{p} + \frac{c^2}{2n}\right) = \frac{n \widehat{p} + c^2/2}{n + c^2} \\ &= \left( \frac{n}{n + c^2}\right)\widehat{p} + \left( \frac{c^2}{n + c^2}\right) \frac{1}{2}\\ &= \omega \widehat{p} + (1 - \omega) \frac{1}{2} \end{align*} where the weight $$\omega \equiv n / (n + c^2)$$ is always strictly between zero and one. In other words, the center of the Wilson interval lies between $$\widehat{p}$$ and $$1/2$$. In effect, $$\widetilde{p}$$ pulls us away from extreme values of $$p$$ and towards the middle of the range of possible values for a population proportion. For a fixed confidence level, the smaller the sample size, the more that we are pulled towards $$1/2$$. For a fixed sample size, the higher the confidence level, the more that we are pulled towards $$1/2$$.
Continuing to use the shorthand $$\omega \equiv n /(n + c^2)$$ and $$\widetilde{p} \equiv \omega \widehat{p} + (1 - \omega)/2$$, we can write the Wilson interval as $\widetilde{p} \pm c \times \widetilde{\text{SE}}, \quad \widetilde{\text{SE}} \equiv \omega \sqrt{\widehat{\text{SE}}^2 + \frac{c^2}{4n^2}}.$ So what can we say about $$\widetilde{\text{SE}}$$? It turns out that the value $$1/2$$ is lurking behind the scenes here as well. The easiest way to see this is by squaring $$\widehat{\text{SE}}$$ to obtain \begin{align*} \widetilde{\text{SE}}^2 &= \omega^2\left(\widehat{\text{SE}}^2 + \frac{c^2}{4n^2} \right) = \left(\frac{n}{n + c^2}\right)^2 \left[\frac{\widehat{p}(1 - \widehat{p})}{n} + \frac{c^2}{4n^2}\right]\\ &= \frac{1}{n + c^2} \left[\frac{n}{n + c^2} \cdot \widehat{p}(1 - \widehat{p}) + \frac{c^2}{n + c^2}\cdot \frac{1}{4}\right]\\ &= \frac{1}{\widetilde{n}} \left[\omega \widehat{p}(1 - \widehat{p}) + (1 - \omega) \frac{1}{2} \cdot \frac{1}{2}\right] \end{align*} defining $$\widetilde{n} = n + c^2$$. To make sense of this result, recall that $$\widehat{\text{SE}}^2$$, the quantity that is used to construct the Wald interval, is a ratio of two terms: $$\widehat{p}(1 - \widehat{p})$$ is the usual estimate of the population variance based on iid samples from a Bernoulli distribution and $$n$$ is the sample size. Similarly, $$\widetilde{\text{SE}}^2$$ is a ratio of two terms. The first is a weighted average of the population variance estimator and $$1/4$$, the population variance under the assumption that $$p = 1/2$$. Once again, the Wilson interval “pulls” away from extremes. In this case it pulls away from extreme estimates of the population variance towards the largest possible population variance: $$1/4$$.2 We divide this by the sample size augmented by $$c^2$$, a strictly positive quantity that depends on the confidence level.3
To make this more concrete, Consider the case of a 95% Wilson interval. In this case $$c^2 \approx 4$$ so that $$\omega \approx n / (n + 4)$$ and $$(1 - \omega) \approx 4/(n+4)$$.4 Using this approximation we find that $\widetilde{p} \approx \frac{n}{n + 4} \cdot \widehat{p} + \frac{4}{n + 4} \cdot \frac{1}{2} = \frac{n \widehat{p} + 2}{n + 4}$ which is precisely the midpoint of the Agresti-Coul confidence interval. And while $\widetilde{\text{SE}}^2 \approx \frac{1}{n + 4} \left[\frac{n}{n + 4}\cdot \widehat{p}(1 - \widehat{p}) +\frac{4}{n + 4} \cdot \frac{1}{2} \cdot \frac{1}{2}\right]$ is slightly different from the quantity that appears in the Agresti-Coul interval, $$\widetilde{p}(1 - \widetilde{p})/\widetilde{n}$$, the two expressions give very similar results in practice. The Agresti-Coul interval is nothing more than a rough-and-ready approximation to the 95% Wilson interval. This not only provides some intuition for the Wilson interval, it shows us how to construct an Agresti-Coul interval with a confidence level that differs from 95%: just construct the Wilson interval!
# Comparing the Wald and Wilson Intervals
Another way of understanding the Wilson interval is to ask how it will differ from the Wald interval when computed from the same dataset. In large samples, these two intervals will be quite similar. This is because $$\omega \rightarrow 1$$ as $$n \rightarrow \infty$$. Using the expressions from the preceding section, this implies that $$\widehat{p} \approx \widetilde{p}$$ and $$\widehat{\text{SE}} \approx \widetilde{\text{SE}}$$ for very large sample sizes. For smaller values of $$n$$, however, the two intervals can differ markedly. To make a long story short, the Wilson interval gives a much more reasonable description of our uncertainty about $$p$$ for any sample size. Wilson, unlike Wald, is always an interval; it cannot collapse to a single point. Moreover, unlike the Wald interval, the Wilson interval is always bounded below by zero and above by one.
## Wald Can Collapse to a Single Point; Wilson Can’t
A strange property of the Wald interval is that its width can be zero. Suppose that $$\widehat{p} = 0$$, i.e. that we observe zero successes. In this case, regardless of sample size and regardless of confidence level, the Wald interval only contains a single point: zero $\widehat{p} \pm c \sqrt{\widehat{p}(1 - \widehat{p})/n} = 0 \pm c \times \sqrt{0(1 - 0)/n} = \{0 \}.$ This is clearly insane. If we observe zero successes in a sample of ten observations, it is reasonable to suspect that $$p$$ is small, but ridiculous to conclude that it must be zero. We encounter a similarly absurd conclusion if $$\widehat{p} = 1$$. In contrast, the Wilson interval can never collapse to a single point. Using the expression from the preceding section, we see that its width is given by $2c \left(\frac{n}{n + c^2}\right) \times \sqrt{\frac{\widehat{p}(1 - \widehat{p})}{n} + \frac{c^2}{4n^2}}$ The first factor in this product is strictly positive. And even when $$\widehat{p}$$ equals zero or one, the second factor is also positive: the additive term $$c^2/(4n^2)$$ inside the square root ensures this. For $$\widehat{p}$$ equal to zero or one, the width of the Wilson interval becomes $2c \left(\frac{n}{n + c^2}\right) \times \sqrt{\frac{c^2}{4n^2}} = \left(\frac{c^2}{n + c^2}\right) = (1 - \omega).$ Compared to the Wald interval, this is quite reasonable. A sample proportion of zero (or one) conveys much more information when $$n$$ is large than when $$n$$ is small. Accordingly, the Wilson interval is shorter for large values of $$n$$. Similarly, higher confidence levels should demand wider intervals at a fixed sample size. The Wilson interval, unlike the Wald, retains this property even when $$\widehat{p}$$ equals zero or one.
## Wald Can Include Impossible Values; Wilson Can’t
A population proportion necessarily lies in the interval $$[0,1]$$, so it would make sense that any confidence interval for $$p$$ should as well. An awkward fact about the Wald interval is that it can extend beyond zero or one. In contrast, the Wilson interval always lies within $$[0,1]$$. For example, suppose that we observe two successes in a sample of size 10. Then the 95% Wald confidence interval is approximately [-0.05, 0.45] while the corresponding Wilson interval is [0.06, 0.51]. Similarly, if we observe eight successes in ten trials, the 95% Wald interval is approximately [0.55, 1.05] while the Wilson interval is [0.49, 0.94].
With a bit of algebra we can show that the Wald interval will include negative values whenever $$\widehat{p}$$ is less than $$(1 - \omega) \equiv c^2/(n + c^2)$$. Why is this so? The lower confidence limit of the Wald interval is negative if and only if $$\widehat{p} < c \times \widehat{\text{SE}}$$. Substituting the definition of $$\widehat{\text{SE}}$$ and re-arranging, this is equivalent to \begin{align} \widehat{p} &< c \sqrt{\widehat{p}(1 - \widehat{p})/n}\\ n\widehat{p}^2 &< c^2(\widehat{p} - \widehat{p}^2)\\ 0 &> \widehat{p}\left[(n + c^2)\widehat{p} - c^2\right] \end{align} The right-hand side of the preceding inequality is a quadratic function of $$\widehat{p}$$ that opens upwards. Its roots are $$\widehat{p} = 0$$ and $$\widehat{p} = c^2/(n + c^2) = (1 - \omega)$$. Thus, whenever $$\widehat{p} < (1 - \omega)$$, the Wald interval will include negative values of $$p$$. A nearly identical argument, exploiting symmetry, shows that the upper confidence limit of the Wald interval will extend beyond one whenever $$\widehat{p} > \omega \equiv n/(n + c^2)$$. Putting these two results together, the Wald interval lies within $$[0,1]$$ if and only if $$(1 - \omega) < \widehat{p} < \omega$$. This is equivalent to \begin{align} n(1 - \omega) &< \sum_{i=1}^n X_i < n \omega\\ \left\lceil n\left(\frac{c^2}{n + c^2} \right)\right\rceil &\leq \sum_{i=1}^n X_i \leq \left\lfloor n \left( \frac{n}{n + c^2}\right) \right\rfloor \end{align} where $$\lceil \cdot \rceil$$ is the ceiling function and $$\lfloor \cdot \rfloor$$ is the floor function.5 Using this inequality, we can calculate the minimum and maximum number of successes in $$n$$ trials for which a 95% Wald interval will lie inside the range $$[0,1]$$ as follows:
n <- 10:20
omega <- n / (n + qnorm(0.975)^2)
cbind("n" = n,
"min_success" = ceiling(n * (1 - omega)),
"max_success" = floor(n * omega))
## n min_success max_success
## [1,] 10 3 7
## [2,] 11 3 8
## [3,] 12 3 9
## [4,] 13 3 10
## [5,] 14 4 10
## [6,] 15 4 11
## [7,] 16 4 12
## [8,] 17 4 13
## [9,] 18 4 14
## [10,] 19 4 15
## [11,] 20 4 16
This agrees with our calculations for $$n = 10$$ from above. With a sample size of ten, any number of successes outside the range $$\{3, ..., 7\}$$ will lead to a 95% Wald interval that extends beyond zero or one. With a sample size of twenty, this range becomes $$\{4, ..., 16\}$$.
Finally, we’ll show that the Wilson interval can never extend beyond zero or one. There’s nothing more than algebra to follow, but there’s a fair bit of it. If you feel that we’ve factorized too many quadratic equations already, you have my express permission to skip ahead. Suppose by way of contradiction that the lower confidence limit of the Wilson confidence interval were negative. The only way this could occur is if $$\widetilde{p} - \widetilde{\text{SE}} < 0$$, i.e. if $\omega\left\{\left(\widehat{p} + \frac{c^2}{2n}\right) - c\sqrt{ \widehat{\text{SE}}^2 + \frac{c^2}{4n^2}} \,\,\right\} < 0.$ But since $$\omega$$ is between zero and one, this is equivalent to $\left(\widehat{p} + \frac{c^2}{2n}\right) < c\sqrt{ \widehat{\text{SE}}^2 + \frac{c^2}{4n^2}}.$ We will show that this leads to a contradiction, proving that lower confidence limit of the Wilson interval cannot be negative. To begin, factorize each side as follows $\frac{1}{2n}\left(2n\widehat{p} + c^2\right) < \frac{c}{2n}\sqrt{ 4n^2\widehat{\text{SE}}^2 + c^2}.$ Cancelling the common factor of $$1/(2n)$$ from both sides and squaring, we obtain $\left(2n\widehat{p} + c^2\right)^2 < c^2\left(4n^2\widehat{\text{SE}}^2 + c^2\right).$ Expanding, subtracting $$c^4$$ from both sides, and dividing through by $$4n$$ gives $n\widehat{p}^2 + \widehat{p}c^2 < nc^2\widehat{\text{SE}}^2 = c^2 \widehat{p}(1 - \widehat{p}) = \widehat{p}c^2 - c^2 \widehat{p}^2$ by the definition of $$\widehat{\text{SE}}$$. Subtracting $$\widehat{p}c^2$$ from both sides and rearranging, this is equivalent to $$\widehat{p}^2(n + c^2) < 0$$. Since the left-hand side cannot be negative, we have a contradiction.
A similar argument shows that the upper confidence limit of the Wilson interval cannot exceed one. Suppose by way of contradiction that it did. This can only occur if $$\widetilde{p} + \widetilde{SE} > 1$$, i.e. if $\left(\widehat{p} + \frac{c^2}{2n}\right) - \frac{1}{\omega} > c \sqrt{\widehat{\text{SE}}^2 + \frac{c^2}{4n^2}}.$ By the definition of $$\omega$$ from above, the left-hand side of this inequality simplifies to $-\frac{1}{2n} \left[2n(1 - \widehat{p}) + c^2\right]$ so the original inequality is equivalent to $\frac{1}{2n} \left[2n(1 - \widehat{p}) + c^2\right] < c \sqrt{\widehat{\text{SE}}^2 + \frac{c^2}{4n^2}}.$ Now, if we introduce the change of variables $$\widehat{q} \equiv 1 - \widehat{p}$$, we obtain exactly the same inequality as we did above when studying the lower confidence limit, only with $$\widehat{q}$$ in place of $$\widehat{p}$$. This is because $$\widehat{\text{SE}}^2$$ is symmetric in $$\widehat{p}$$ and $$(1 - \widehat{p})$$. Since we’ve reduced our problem to one we’ve already solved, we’re done!
# More to Come on Inference for a Proportion!
This has been a post of epic proportions, pun very much intended. Amazingly, we have yet to fully exhaust this seemingly trivial problem. In a future post I will explore yet another approach to inference: the likelihood ratio test and its corresponding confidence interval. This will complete the classical “trinity” of tests for maximum likelihood estimation: Wald, Score (Lagrange Multiplier), and Likelihood Ratio. In yet another future post, I will revisit this problem from a Bayesian perspective, uncovering many unexpected connections along the way. Until then, be sure to maintain a sense of proportion in all your inferences and never use the Wald confidence interval for a proportion.
# Appendix: R Code
get_test_size <- function(p_true, n, test, alpha = 0.05) {
# Compute the size of a hypothesis test for a population proportion
# p_true true population proportion
# n sample size
# test function of p_hat, n, and p_0 that computes test stat
# alpha nominal size of the test
x <- 0:n
p_x <- dbinom(x, n, p_true)
test_stats <- test(p_hat = x / n, sample_size = n, p0 = p_true)
reject <- abs(test_stats) > qnorm(1 - alpha / 2)
sum(reject * p_x)
}
get_score_test_stat <- function(p_hat, sample_size, p0) {
SE_0 <- sqrt(p0 * (1 - p0) / sample_size)
return((p_hat - p0) / SE_0)
}
get_wald_test_stat <- function(p_hat, sample_size, p0) {
SE_hat <- sqrt(p_hat * (1 - p_hat) / sample_size)
return((p_hat - p0) / SE_hat)
}
plot_size <- function(n, test, nominal = 0.05, title = '') {
p_seq <- seq(from = 0.01, to = 0.99, by = 0.001)
size <- sapply(p_seq, function(p) get_test_size(p, n, test, nominal))
plot(p_seq, size, type = 'l', xlab = 'p',
ylab = 'Type I Error Rate',
main = title)
text(0.5, 0.98 * max(size), bquote(n == .(n)))
abline(h = nominal, lty = 2, col = 'red', lwd = 2)
}
plot_size_comparison <- function(n, nominal = 0.05) {
par(mfrow = c(1, 2))
plot_size(n, get_score_test_stat, nominal, title = 'Score Test')
plot_size(n, get_wald_test_stat, nominal, title = 'Wald Test')
par(mfrow = c(1, 1))
}
1. For the R code used to generate these plots, see the Appendix at the end of this post.↩︎
2. The value of $$p$$ that maximizes $$p(1-p)$$ is $$p=1/2$$ and $$(1/2)^2 = 1/4$$.↩︎
3. If you know anything about Bayesian statistics, you may be suspicious that there’s a connection to be made here. Indeed this whole exercise looks very much like a dummy observation prior in which we artificially augment the sample with “fake data.” There is a Bayesian connection here, but the details will have to wait for a future post.↩︎
4. As far as I’m concerned, 1.96 is effectively 2. If you disagree, please replace all instances of “95%” with “95.45%\$.↩︎
5. The final inequality follows because $$\sum_{i}^n X_i$$ can only take on a value in $$\{0, 1, ..., n\}$$ while $$n\omega$$ and $$n(1 - \omega)$$ may not be integers, depending on the values of $$n$$ and $$c^2$$.↩︎ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9514889121055603, "perplexity": 443.77950869909733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529658.48/warc/CC-MAIN-20220519172853-20220519202853-00319.warc.gz"} |
http://www.impan.pl/cgi-bin/dict?downward(s) | ## downward(s)
We adopt the convention that the first coordinate $i$ increases as one goes downwards, and the second coordinate $j$ increases as one goes from left to right.
a direction pointing downward with respect to $\tau$
[Note that downward and downwards can be used without distinction as adverbs, but the standard form of the adjective is downward (e.g., in a downward direction).]
Go to the list of words starting with: a b c d e f g h i j k l m n o p q r s t u v w y z | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9325532913208008, "perplexity": 147.1777643215253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802767247.82/warc/CC-MAIN-20141217075247-00093-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://planetmath.org/RegularRepresentation | # regular representation
Given a group $G$, the of $G$ over a field $K$ is the representation $\rho:G\longrightarrow\operatorname{GL}(K^{G})$ whose underlying vector space $K^{G}$ is the $K$–vector space of formal linear combinations of elements of $G$, defined by
$\rho(g)\left(\sum_{i=1}^{n}k_{i}g_{i}\right):=\sum_{i=1}^{n}k_{i}(gg_{i})$
for $k_{i}\in K$, $g,g_{i}\in G$.
Equivalently, the regular representation is the induced representation on $G$ of the trivial representation on the subgroup $\{1\}$ of $G$.
Title regular representation RegularRepresentation 2013-03-22 12:17:40 2013-03-22 12:17:40 djao (24) djao (24) 5 djao (24) Definition msc 20C99 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 13, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9598279595375061, "perplexity": 646.8109810972843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514571360.41/warc/CC-MAIN-20190915114318-20190915140318-00192.warc.gz"} |
https://mathvault.ca/hub/higher-math/math-symbols/calculus-analysis-symbols/ | Calculus and Analysis Symbols
A comprehensive collection of the most notable symbols in calculus and analysis, categorized by topic and function into charts and tables along each symbol's meaning and example.
In mathematics, calculus formalizes the study of continuous change, while analysis provides it with a rigorous foundation in logic. The following list documents some of the most notable symbols and notations in calculus and analysis, along with each symbol’s usage and meaning.
For readability purpose, these symbols are categorized by topic and function into tables. Other comprehensive lists of math symbols — as categorized by subject and type — can be also found in the relevant pages below (or in the navigational panel).
Get the master summary of mathematical symbols in eBook form — along with each symbol’s usage and LaTeX code.
Constants and Variables
In calculus and analysis, constants and variables are often reserved for key mathematical numbers and arbitrarily small quantities. The following table documents some of the most notable symbols in these categories — along with each symbol’s example and meaning.
Sequence, Series and Limit
The concepts of sequence, series and limit form the foundation of calculus (and by extension real and complex analysis). The following table features some of the most common symbols related to these topics — along with each symbol’s usage and meaning.
Derivative and Integral
The field of calculus (e.g., multivariate/vector calculus, differential equations) is often said to revolve around two opposing but complementary concepts: derivative and integral. The following tables document the most notable symbols related to these — along with each symbol’s usage and meaning.
(For a review on function and related operators, see function-related operators.)
Asymptotic Analysis
In calculus and analysis, the need for comparing the rates of growth of different functions leads to the study of asymptotic analysis. The following table documents some of the most notable symbols related to this topic — along with each symbol’s usage and meaning.
The following table features some of the most common functions arranged according to their asymptotic hierarchy — where each function is asymptotically dominated by what follows it:
Key Functions and Transforms
In calculus and analysis, one often makes reference to a wide range of key functions and transforms. The following table documents the most notable of these — along with each symbol’s usage and meaning.
(For a review on elementary functions, see key functions in algebra.)
Key Transforms
For the master list of symbols, see mathematical symbols. For lists of symbols categorized by type and subject, refer to the relevant pages below for more. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8727349042892456, "perplexity": 1091.6646232917055}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347439213.69/warc/CC-MAIN-20200604063532-20200604093532-00107.warc.gz"} |
http://www.stripersonline.com/t/555542/does-anyone-know-how-turn-a-picture-in-a-word-document-into-a-jpeg-or-similar-format | StripersOnline › SurfTalk › Community Forums › The Town Tavern › Does anyone know how turn a picture in a word document into a jpeg or similar format
New Posts All Forums:Forum Nav:
# Does anyone know how turn a picture in a word document into a jpeg or similar format
Thanks
copy and paste it into MS Paint and save as a jpeg
to get to MS Paint:
START>ALL PROGRAMS>ACCESSORIES>MS PAINT
I tried that before I posted, but Paint program says it can not open the word document. It is the whole document that my wife wants in a jpeg format. Thanks for the suggestion though.
or another stupid question, doe anyone know how to turn a word document into wallpaper for the computer screen.
Right click the image within the Word Document, choose Copy. Then launch MS Paint, and choose Edit, Paste. You should see your image within Paint now.
Quote:
Originally Posted by fishsticking It is the whole document that my wife wants in a jpeg format. Thanks for the suggestion though. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.821201741695404, "perplexity": 2602.722329024144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921872.11/warc/CC-MAIN-20140909035528-00290-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://socratic.org/questions/what-is-the-difference-between-var-s-function-and-var-p-function-on-microsoft-ex | Statistics
Topics
# What is the difference between VAR.S function and VAR.P function on Microsoft Excel?
## Recently, I answered a question posted by @Yiu A. https://socratic.org/questions/how-to-do-this-regression-question In the question, I tried to calculate the variance of the following $15$ samples. $42 , 34 , 25 , 35 , 37 , 38 , 31 , 33 , 19 , 29 , 38 , 28 , 29 , 36 , 18$ The result obtained by Microsoft Excel was VAR.P($42 , 34 , 25 , 35 , 37 , 38 , 31 , 33 , 19 , 29 , 38 , 28 , 29 , 36 , 18$)=44.78222 and VAR.S($42 , 34 , 25 , 35 , 37 , 38 , 31 , 33 , 19 , 29 , 38 , 28 , 29 , 36 , 18$)=47.98095. I thought that the function VAR.P gives us the population variance and VAR.S gives us the sample variance. If so, the population variance should be larger than the sample variance, but the result is quite the opposite. I would like to know how to interpret the result. Thank you in advance.
Dec 5, 2017
VAR.S > VAR.P
#### Explanation:
VAR.S calculates the variance assuming given data is a sample.
VAR.P calculates the variance assuming that given data is a population.
VAR.S $= \setminus \frac{\setminus \sum {\left(x - \setminus \overline{x}\right)}^{2}}{n - 1}$
VAR.P $= \setminus \frac{\setminus \sum {\left(x - \setminus \overline{x}\right)}^{2}}{N}$
Since you are using the same data for both, VAR.S will give a value higher than VAR.P, always.
But you should use VAR.S because the given data is in fact sample data.
Edit: Why the two formulas differ? Check out Bessel's Correction.
##### Impact of this question
24241 views around the world
You can reuse this answer | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.987451434135437, "perplexity": 1168.6205132011642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585768.3/warc/CC-MAIN-20211023193319-20211023223319-00285.warc.gz"} |
http://mathoverflow.net/questions/109800/structure-of-units-in-a-maximal-order?sort=oldest | # Structure of units in a maximal order
Hello,
my question is simple: do we have a "Dirichlet's unit theorem" for the group of units of a maximal order of a central division algebra ?
In other words: let $k$ be a number field, let $D$ be a central division $k$-algebra (i.e. a skew field with center $k$), and let $\Lambda$ be a maximal order over $\mathcal{O}_k$.
Is $\Lambda^\times$ a finitely generated group ? what is known about its group structure ?
I browsed the web and looked at Reiner's "Maximal orders" but didn't find anything.
I'm happy to assume that $D$ satisfies Eichler's condition if necessary.
In fact, my original question is even more precise: $k/\mathbb{Q}$ is quadratic imaginary, $D$ carries a unitary involution $\tau$ (which therefore restricts to complex conjugation on $k$), and I am interested in the structure of UNITARY units in a maximal order $\Lambda$.
If anyone knows any results/references, I would be happy to know them.
Greg
-
I think it's true that $\Lambda^\times$ is finitely generated for any maximal order $\Lambda$ of a finite dimensional division algebra over $\mathbf{Q}$. See Eichler, Über die Einheiten der Divisionsalgebren, Math. Ann. 114 (1937), n°1, 635-654. I don't know about your particular setting. – François Brunault Oct 16 '12 at 11:59
I feel like the place to look would be Weil's Basic Number Theory. The modern way to prove both Dirichlet and the finiteness of the class number is via a Fujisaki's lemma argument: compactness of a norm one idele group. This sort of analysis can be repeated in the situation of a $k$-central division algebra, and this was the aim of Weil's basic number theory. – stankewicz Oct 16 '12 at 12:15
Let $G$ be the finite type affine $O_k$-group scheme representing the functor $A \mapsto (\Lambda\otimes A)^{\times}$ on commutative $O_k$-algebras. The Weil restriction $\mathcal{G} = {\rm{R}}_{O_k/\mathbf{Z}}(G)$ is a finite type affine $\mathbf{Z}$-group scheme (perhaps not $\mathbf{Z}$-flat), and $\mathcal{G}(\mathbf{Z}) = \Lambda^{\times}$. Thus, $\Lambda^{\times}$ is an arithmetic subgroup of the rational points of the reductive generic fiber of $\mathcal{G}$. Arithmetic groups are finitely generated (see Borel's beautiful thin book on arithmetic groups), even finitely presented. – user27056 Oct 16 '12 at 12:25
thanks for your answers. I will try to look in the direction of arithmetic groups, then. – GreginGre Oct 16 '12 at 12:47
The discussion in section 4.5 of my joint paper with Werner Bley may be relevant: arxiv.org/pdf/1006.4381v2.pdf – Henri Johnston Oct 18 '12 at 19:47
Hello Greg!
I'm not a specialist on the subject (and might have misunderstood something), but I did search on this issue while ago so here are my impressions.
I think that the answer to the first general question is negative (at least in strong sense). Dirichlet's theorem describes the unit group algebraically almost completely in terms of signature. In most cases (and in particular in the case you are interested in) the unit group of a maximal order of a division algebra is a very complicated object and as far as I know there is no general theorem that gives a good idea of the algebraic structure of this group.
An example of troubles one encounters in division algebras: "Presentations of the unit group of an order in a non-split quaternion algebra" Capi Corrales,a, Eric Jespers, Guilherme Leal, and Angel del Riod, Advances in Mathematics 186 (2004) 498–524.
Probably the best overall sources on the subject are Ernst Kleinert's book (Units in skew fields) and a survey article(Units of classical orders, Enseigment mathematique, 1994). One of his central themes is consideration of what should an analogue of Dirichlet unit theorem look like in a division algebra. So these references are probably quite a good answer to your general question.
While the algebraic side of Dirichlet's theorem seem to be quite hard to generalize, there is also the geometric side, which describes how "dense" the unit group is geometrically if we consider the ring of algebraic integers as a lattice through the usual Minkowski embedding.
In the case you are interested the unit group has a subgroup of finite index (the norm 1 group), which is a co-compact subgroup in $SL_n(C)$. The "density" of this norm 1 group is decided by algebraic invariants of the division algebra. So in this sense we can generalize the geometric side of Dirichlet's theorem. Here the key word is point counting in Lie groups. There is a recent book on the subject: The Ergodic Theory of Lattice Subgroups, A. Gorodnik and A. Nevo, Princeton University Press, 2010.
If you are interested in for example of the number of unitary units (if I understood what you are asking it is indeed a finite number) I don't think this approach helps much. Actually even the original Dirichlet's theorem does not directly tell much about the roots of unity part, except that it exist and is generated by a single element.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8979700803756714, "perplexity": 215.3096791573319}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989234.2/warc/CC-MAIN-20150728002309-00219-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/limit-problem.45496/ | # Limit problem
1. Sep 30, 2004
### lilman7769
i did this problem but i just wanted some confirmation if im right.....
lim (3+h)^-1 - (3)^-1 / h
h->0
i worked it out...but still got 0....maybe some confirmation and if u could show ur work that would be nice too......thx in advance
ps my first post!!!
2. Sep 30, 2004
### T@P
if im not mistaken thats a derivative in one of it (many) forms.
basically, it x' at the point 3... which would be 1 if i understood your notation right. (is the h under the whole fractoin or just the end?)
hope im not confusing you too much.
3. Oct 1, 2004
### Townsend
Based on what you are saying the function in question is a constant function. Meaning the function is of the form y=1/3, which is just a boring horizontal line. Now consider what a derivative is suppose to be. A derivative gives information about a functions slope at values of x in the domain of the function. Now consider your horizontal line, what can you tell me about its slope?
Best Regards
4. Oct 1, 2004
### HallsofIvy
Staff Emeritus
How did you get that interpretation?
Assuming the orginal post was lim ((3+h)-1- 3-1)/h
(notice the additional parentheses) then this is the derivative of x-1 evaluated at x= 3. Assuming that the purpose of this is to actually calculate that derivative (so that you can't just use the derivative itself to get the limit!) then the best way to do it is to combine the fractions:
$$\frac{1}{3+h}- \frac{1}{3}= \frac{3- (3+h)}{3(3+h)}=\frac{-h}{3(3+h)}[/itex] so the "difference quotient" becomes [tex]\frac{-h}{3h(3+h)}[/itex]. As long as h is not 0 that is the same as [tex]\frac{-1}{3(3+h)}[/itex] and it should be easy to find the limit as h goes to 0. 5. Oct 1, 2004 ### JasonRox I got zero on the work as well. Find the limit. [tex]\frac{\frac{-1}{3(3+h)}}{h}$$
Simplify and use direct substitution.
I got zero.
I could be wrong though.
$$\frac{\frac{-1}{3(3+h)}}{h}$$
to
$$\frac{-h}{3(3+h)}$$
Now, substitute h~0.
Note: I excluded lim in my work for simplicities sake.
Last edited: Oct 1, 2004
6. Oct 1, 2004
### JasonRox
Also, I'm a Brock University student. ;)
7. Oct 1, 2004
### Tide
Jason,
You dropped a factor of h in the numerator. Halls' analysis is correct.
8. Oct 1, 2004
### JasonRox
We aren't looking for the rate of change at x=3.
We are looking for the limit.
The limit of 1/x is 0.
9. Oct 2, 2004
### HallsofIvy
Staff Emeritus
Brock University may get upset at you for using their name!
In the first place, "the limit of 1/x is 0" is meaningless- you have to say "limit of 1/x" as x goes to some specific value4. The only number that would give a limit of 0 for 1/x is infinity and infinity has nothing to do with the original problem.
"We aren't looking for the rate of change at x=3."
Perhaps you aren't but anyone who is trying to answer the original question is!
10. Oct 2, 2004
### JasonRox
He/she wasn't looking for that. I am in the same program because we have the same assignment. The two questions she asked came from the same school.
About the limit mistake, there's an even bigger one. I thought about it last night while going to bed, and thought about what I said. First, it didn't make any sense, like you explained. Second, and I can't believe you didn't spot this, the limit doesn't really exist because the left hand limit doesn't equal the right hand limit.
Note: I have every right to say I'm a Brock student.
11. Oct 2, 2004
### JasonRox
Just so you know the assignments have been handed in.
12. Oct 2, 2004
### Tide
Well then be sure to report back to us when you find the "right" answer!
13. Oct 2, 2004
### T@P
whoa
why doesnt the original poster just re-state the question so there's no more ambiguity?
I personally am not sure wether the last h refers to the whole limit or not...
14. Oct 3, 2004
### Townsend
I have no idea why I thought what I did but I am glad you were able to correct it. Sorry about that. Next time I try to help I will be a lot more careful before posting.
Regards | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8436818718910217, "perplexity": 1365.4577907105777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00068-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://link.springer.com/article/10.1007/s11671-007-9090-4 | , Volume 2, Issue 10, pp 504-508,
Open Access
Date: 12 Sep 2007
# Surface Morphology Evolution of GaAs by Low Energy Ion Sputtering
## Abstract
Low energy Ar+ ion sputtering, typically below 1,200 eV, of GaAs at normal beam incident angle is investigated. Surface morphology development with respect to varying energy is analyzed and discussed. Dot-like patterns in the nanometer scale are obtained above 600 eV. As the energy approaches upper eV range regular dots have evolved. The energy dependent dot evolution is evaluated based on solutions of the isotropic Kuramoto-Sivashinsky equation. The results are in agreement with the theoretical model which describes a power law dependency of the characteristic wavelength on ion energy in the ion-induced diffusion regime. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9663457870483398, "perplexity": 2642.3457236594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802774894.154/warc/CC-MAIN-20141217075254-00009-ip-10-231-17-201.ec2.internal.warc.gz"} |
http://de.vroniplag.wikia.com/wiki/Analyse:Jem/Fragment_047_08 | # Analyse:Jem/Fragment 047 08
31.199Seiten in
diesem Wiki
Typus Verschleierung Bearbeiter Graf Isolan Gesichtet
Untersuchte Arbeit:
Seite: 47, Zeilen: 8-21
Quelle: Kollar et al 2005
Seite(n): 73-74, Zeilen: 73:3-4.11-18.22-24 - 74:1-2
3.3.2 Outcome of Collision
The binary droplet collision phenomenon is discussed in this section. The outcome of collisions can be described by three non-dimensional parameters: the collisional Weber number, the impact parameter, and the droplet size ratio (Orme, 1997; Post and Abraham, 2002; Ko and Ryou, 2005b).
The collisional Weber number is defined as
$We_{coll}= \frac {\rho_l U_{rel}^2 d_2}{\sigma}$ (3.21)
where Urel is the relative velocity of the interacting droplets and d2 is the diameter of the smaller droplet.
The dimensional impact parameter b is defined as the distance from the center of one droplet to the relative velocity vector, $\vec{u}_{rel},$ placed on the center of the other droplet. This definition is illustrated in Figure 3.6. The non-dimensional impact parameter is calculated as
[$B=\frac{2b}{d_1+d_2}= \sin \theta$(3.22)
where d1 is the diameter of the larger droplet and θ is the angle between the line of centers of the droplets at the moment of impact and the relative velocity vector.]
Ko, G.H. and H.S. Ryou (2005b). Modeling of droplet collision-induced breakup process. Int. J. Multiphase Flow 31, pp. 723-738.
Orme, M. (1997). Experiments on droplet collisions, bounce, coalescence and disruption. Prog. Energy Combust Sci. 23, pp. 65-79.
Post, S.L. and J. Abraham (2002). Modeling the outcome of drop-drop collisions in Diesel sprays. Int. J. Multiphase Flow 28, pp. 997-1019.
[page 73]
3. Droplet collision
The binary droplet collision phenomenon is discussed in this section. The phenomenon of droplet collision is mainly controlled by the following physical parameters: droplet velocities, droplet diameters, dimensional impact parameter, surface tension of the liquid, and the densities and viscosity coefficients of the liquid and the surrounding gas, but further components may also be important, such as the pressure, the molecular weight and the molecular structure of the gas. From these physical parameters several dimensionless quantities can be formed, namely, the Weber number, the Reynolds number, impact parameter, droplet size ratio, the ratio of densities, and the ratio of viscosity coefficients. Thus, for a fixed liquid-gas system, the outcome of collisions can be described by three non-dimensional parameters: either the Weber number or the Reynolds number, the impact parameter, and the droplet size ratio.
(i) The Weber number is the ratio of the inertial force to the surface force and is defined as follows:
$We = \frac{\rho_d U_r^2 D_S}{\sigma},$(2)
where ρd is the droplet density, Ur is the relative velocity of the interacting droplets, DS is the diameter of the smaller droplet, and σ is the surface tension. [...]
(ii) The dimensional impact parameter b is defined as the distance from the center of one droplet to the relative velocity vector placed on the center of the other droplet. This definition is illustrated in Fig. 2. The non-dimensional impact parameter is calculated as follows:
[Page 74]
$B=\frac{2b}{D_L+D_S}$(3)
where DL is the diameter of the larger droplet.
Orme, M., 1997. Experiments on droplet collisions, bounce, coalescence and disruption. Progress. Energy Combust. Sci. 23, 65–79.
Post, S.L., Abraham, J., 2002. Modeling the outcome of drop-drop collisions in Diesel sprays. Int. J. Multiphase Flow 28, 997–1019.
Anmerkungen Shortened severely but otherwise left intact. Again, significant details necessary for understanding the formulas have been left out. Nothing has been marked as a citation. No reference to the original author is given. Sichter (Graf Isolan)
Fakten zu „Jem/Fragment 047 08RDF-Feed
Bearbeiter Graf Isolan FragmentStatus ZuSichten + Kuerzel Jem + Quelle Kollar et al 2005 + SeiteArbeit 47 + SeiteQuelle 73-74 + Sichter (Graf Isolan) Typus Verschleierung + ZeileArbeit 8-21 + ZeileQuelle 73:3-4.11-18.22-24 - 74:1-2 + | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9856998920440674, "perplexity": 2794.04677813807}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719027.25/warc/CC-MAIN-20161020183839-00546-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://www.physicsforums.com/showthread.php?p=4201106 | # Combinatorics, permutations of letters
by Hannisch
Tags: combinatorics, letters, permutations
P: 116 1. The problem statement, all variables and given/known data How many "words" with 5 letters can be created from the letters in the word ALGEBRA? Each letter can be used only once. 2. Relevant equations 3. The attempt at a solution I know the answer to this (1320) and I know how I got to it (which I'll describe in a minute), but I know that while my way of getting it is right, it's. not the easiest, and I can't seem to figure out how on earth they want me to do it. I know that choosing 5 things from a set with 7 elements can be done in $\frac{7!}{(7-5)!}$ ways. I also know that if I were to use all the letters of ALGEBRA the answer would be $\frac{7!}{2!}$ since one letter is used twice. In this case these happen to be exactly same, but I can't seem to put these principles together to get a correct answer. Help, please? The way I solved it was more of a brute force solution. The correct answer is 11 * 5!, and I got to that conclusion by reasoning that 5! of the words are made by the letters LGEBR, which have no recurring letters. Then I "exchanged" one of the letters from that to an A, one at a time, and then to both As. This can be done in 11 ways: LGEBR AGEBR LAEBR LGABR LGEAR LGEBA AAEBR LAABR LGAAR LGEAA AGEBA And all of them can be chosen in 5! different combinations. I know this is the correct answer (it's an online based homework, I've inputted this and it says I'm right).
Sci Advisor HW Helper Thanks P: 26,167 Hi Hannisch! That certainly works. But quicker would be to split the problem into three … count separately the number of words with no As, with one A, and with 2As.
Homework
HW Helper
Thanks ∞
P: 9,162
Quote by tiny-tim But quicker would be to split the problem into three …count separately the number of words with no As, with one A, and with 2As.
Slightly quicker still, just separate the cases "at most one A" (just as with all the other letters) and "2 As" | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8540072441101074, "perplexity": 426.47002253568456}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://blog.sciencemusings.com/2006/03/tse2.html | ## Friday, March 24, 2006
### TSE2
On the ground in Istanbul, and yes the hotel (on a narrow little street in the old city) has free WiFi. A couple of days here seeing the sights, then to the south coast for the eclipse on Wednesday.
Most people know why a solar eclipse happens: The Moon blocks the Sun's light. What is less well understood is why total solar eclipses are so rare. Think of the Earth as a grapefruit. On this scale, the Moon would be a grape about 20 feet away. Both Earth and Moon cast conical shadows pointing away from the Sun (Shelley's "pyramid of night/ which points into the heaven."). To have a total eclipse of the Sun, the Moon's conical shadow must touch some part of the Earth's surface.
And here's the kicker: The Moon's shadow is almost exactly as long as the average distance of the Moon from the Earth (think of the grape with its 20-foot-long shadow tapering to a point near the grapefruit Earth). Because of the tilt of the Moon's orbit to the plane of the ecliptic, most months the tip of the Moon's shadow passes above or below the Earth, but about twice a year, when the Moon is near the ecliptic plane (the plane of the Earth's orbit), a solar eclipse is possible.
But it's not a sure thing. The Moon moves around the Earth in an almost perfectly circular orbit, but the Earth is not quite at the center of the circle. Sometimes the Moon is a bit further from the Earth, and sometimes a bit closer. When the Moon is near apogee -- its greatest distance from the Earth -- the tip of its shadow does not quite reach to the Earth's surface and a total solar eclipse cannot occur. When the Moon is at perigee -- its nearest distance to Earth -- the rapierlike tip of its shadow just reaches Earth, or even extends a bit beyond.
To see a total solar eclipse, one must be in the narrow path where the rapier's tip slices the surface of the Earth -- like a fencer's rapier scoring the cheek of his opponent. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8225796818733215, "perplexity": 680.1803637992364}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768197.70/warc/CC-MAIN-20141217075248-00115-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://getrevising.co.uk/revision-cards/unit-4-section-4-electromagnetic-induction | # Unit 4 Section 4 Electromagnetic Induction
HideShow resource information
## Electromagnetic Induction
An e.m.f. is induced if a conductor is moved through a magnetic field. The conductor can move and the magnetic field can stay still or the other way around - you get an e.m.f. either way.
Flux cutting (i.e. Moving a conductor through a magnetic field) always induces an e.m.f. but will only induce a current if the circuit is complete.
Flux linking is when an e.m.f. is induced by the changing magnitude or direction of the magnetic flux
A change in flux of one weber per second will induce an electromotive force of 1 volt in a loop of wire.
1 of 6
## Magnetic Flux
Magnetic Flux: The magnetic flux (in Wb) passing through an area is given by the magnetic flux density multiplied by area. It can also be thought of as the number of magnetic field lines passing through an area.
Magnetic flux(OI) = B x A
OI = magnetic flux in Wb (webers)
B = magnetic flux density in T
A = area in m^2
2 of 6
## Flux Linkage
Flux linkage: The magnetic flux in a coil multiplied by the number of turns on the coil.
Flux linkage (in weber turns) = N x magnetic flux (Wb) = B x A x N
N = number of turns on the coil cutting the flux.
B = magnetic flux density density in Tesla (T)
A = area of the coil in m^2
3 of 6
## Flux Linkage Example
E.g.
The flux linkage of a coil with a cross-sectional area of 0.33m^2 normal to a magnetic field of flux density 0.15 T is 4.0 Wb turns. How many turns are in the coil?
Just rearrange the equation for for flux linkage to make N the subject:
1. (N x magnetic flux) = B x A x N
2. N = (N x magnetic flux) / B x A
3. N = 4.0 / (0.15 x 0.33)
4. N = 81 turns
4 of 6
## Flux Linkage at an Angle
• Magnetic flux, Omega (Wb) = B x A x cos(theta)
• Flux linkage, N Omega in Weber turns = B x A N x cos(theta)
B = magnetic flux density in Tesla (T)
A = area of the coil in m^2
N = number of turns on the coil cutting the flux
Theta = angle between the normal to the plane of the coil and the magnetic field (degrees)
5 of 6
## Flux Linkage at an Angle Example
E.g.
A rectangular coil of wire with 200 turns and sides of length 5.00cm and 6.51cm is rotating in a magnetic field with B = 8.56 x 10^-3 T. Find the flux linkage of the coil when it's at 29.5° to the magnetic field.
1. Find the area of the coil:
Area = (5.00 x 10^-2) x (6.51 x 10^-2) = 3.255 x 19^-3 m^2
2. Then just put the numbers into the equation:
N x magnetic flux = BANcos(theta)
= (8.56 x 10^-3) x (3.255 x 10^-3) x 200 x cos(29.5°)
= 4.85 x 10^-3 Wb turns
6 of 6
## Comments
No comments have yet been made
## Similar Physics resources:
See all Physics resources »See all Magnetic fields resources » | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9138837456703186, "perplexity": 1510.4579612284153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00451-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://www.geocomputation.org/1999/064/gc_064.htm | # Error-constrained change detection
J. Mark Ware and Christopher B. Jones
School of Computing, University of Glamorgan, CF37 1DL United Kingdom
E-mail: [email protected]
David R. Miller
Macaulay Land Use Research Institute, Aberdeen, AB9 2QJ United Kingdom
## 1. Introduction
In this paper we consider the problem of detecting land-cover change that occurs over a period of time. We achieve this by comparing pairs of vector-defined polygon coverages, C1 and C2. C1 represents land-cover for a particular region at the start of the time-period (t1) while C2 represents land-cover for the same region at the end of the time-period (t2). The standard technique for detecting change in this situation is to intersect C1 and C2 so as to produce a third coverage Ci. Each intersection-polygon in Ci is associated with two land-cover classification codes; one describes land-cover at time t1 while the other describes land-cover at time t2. If the classification codes differ, then the polygon is labeled as corresponding to an area that has undergone land-cover change; however, a problem with adopting this approach is that it assumes that the source coverages are free of error. This will not usually be the case; it is likely that C1 and C2 are subject to both classification error (i.e., some polygons are assigned to a wrong class) and positional error (i.e., some polygon boundaries are in the wrong location).
In the work presented here we consider locational error only. The problem we address concerns the fact that if C1 and C2 have been generated independently then, because of loacational error in both coverages, equivalent boundaries will not match exactly with regards to their geometry. The term equivalent boundaries are used here to describe a pair of boundaries that are intended to be representing the same real-world phenomena. The result is that in situations where boundaries are approximately equivalent, sliver polygons can be expected to occur; the difficulty we face is distinguishing between slivers and polygons that may represent genuine change. It should be noted that this and similar problems have been addressed widely in the literature. The interested reader is referred to Dougenik 1979, Chrisman 1987 and 1991, Zhang and Tulip 1990, Pullar 1991, Harvey and Vauglin 1996, and Edwards and Lowell 1996, for related work.
In an attempt to overcome the problem described, we propose a method that seeks to ensure that equivalent boundaries are represented by precisely the same geometry. This is achieved by aligning C1 and C2 prior to intersection. The method is based on the principle of conflation (Saalfield 1988) in which equivalent elements from the pair of coverages are identified and merged in a manner that, where appropriate, preserves the better quality representation. The alignment process makes use of a boundary matching procedure that takes locational error into consideration. Various metrics are evaluated to provide evidence for the equivalence of boundary pairs. Multivariate Bayesian methods (Johnson and Wichern, 1998) are used to exploit this evidence to provide a probabilistic measure of equivalence. In adopting a Bayesian approach it is assumed that training data can be acquired based on expert assertions of the existence of change, or no change, involving movements of boundaries. Having established equivalencies, matching boundary pairs B1 and B2 are replaced by an updated boundary, Bi, within their respective coverages. Bi is derived by calculating a weighted average of the two original features.
## 2. Coverage alignment procedure
Coverage alignment involves the matching and merging of equivalent boundary pairs. Boundary matching is achieved using Bayesian multivariate classification, in which candidate matching pairs are classed as being actual matches or not. Matching boundary pairs can be thought of as belonging to a population P1 while non-matching pairs can be thought of as belonging to a population P2.
### 2.1 Conditional probability density function
To begin, conditional probability density functions are calculated using geometric signatures obtained from a training set of manually classified boundary pairs. Three signatures are used, namely, length ratio L, Hausdorff distance D and average distance between boundary sections A. These signatures are derived for each boundary pair in a particular population Pj, and are subsequently used to determine a mean column vector MPj and covariance matrix XPj. Provided that the individual items of evidence are approximately normally distributed (or transformed to a normal distribution), the conditional probability density function for the items of evidence E supporting the hypothesis HPj is then represented by-
where E = (L, D, A)T, HPj is the hypothesis that a boundary pair belong to the population Pj, and n equals 3 (the number of items of evidence used).
### 2.2 Finding candidate matches
When searching for matching boundaries we only consider boundary pairs that are likely to represent a match. These candidate matching boundary pairs are found using a basic feature matching procedure based on buffering. In essence, this procedure deems two boundaries (or parts of boundaries), B1 and B2, to be a candidate match if: (i) all of B1 lies within a pre-defined distance of B2; (ii) all of B2 lies within a pre-defined distance of B1; and (iii) the difference between B1 and B2 in terms of their orientation is less than some pre-defined angle. The values used for the distance and angle thresholds are determined by analysis of the manually classified training data.
### 2.3 Classifying candidate matches
Each candidate matching pair (B1,B2) is then assigned a posteriori probability of it being an actual match. This probability is calculated as-
where E represents the geometric signatures derived from (B1,B2), p(HP1) is the prior probability of a candidate boundary pair being a match, and p(E) is the prior probability of the evidence. The prior probabilities are estimated from all previously manually analysed data. Having been calculated, the a posteriori probabilities of all candidate matching boundary pairs are examined and compared against a pre-defined probability threshold value, and a list of matching boundary pairs is produced.
### 2.4 Boundary alignment
Each matching boundary pair is aligned using a boundary merging procedure. This procedure makes use of weighted interpolation, thus allowing for a range of results (i.e., B1 replaces B2 in C2, or B2 replaces B1 in C1, or B1 and B2 are both replaced by a weighted average). There are three steps involved in the alignment process. The first step involves adding vertices to B1 and B2 such that each boundary has the same number of vertices and these vertices appear at proportionally equivalent distances along the boundaries. Next, the aligned boundary Bi is found by calculating a weighted average of corresponding vertices in B1 and B2. Finally, the sharp breaks that occur between the merged boundaries and adjacent unmerged boundaries are spanned by inserting smooth connecting boundaries. More details may be found in Ware and Jones (1998).
## 3. Experimental results
The coverage alignment procedure has been implemented as a set of C functions. These functions have been made callable from the Avenue scripting language of the ArcView GIS. The functions can all be called from a modified version of the default ArcView user interface. The procedure has been tested using data supplied by the Macaulay Land Use Research Institute (MLURI), who also provided expert advice during the training stage. Three data sets, representing land-cover in the Glen Feshie region of the Cairngorms (Scotland) in 1946, 1964 and 1988, have been used. Figures 1 and 2 show the 1946 and 1988 data.
Figure 1. 1946 Data Set (5km x 6km).
Figure 2. 1988 Data Set (5km x 6km).
In experiments, 84% of the selected boundary matches with a posteriori probability exceeding 90% were found to correspond to expert assertions of boundary equivalencies. While this represents a reasonable result, it is hoped that larger training sets and the use of additional signatures will lead to greater success. Figure 3 shows part of the 1946/1988 change map using the standard intersection technique, while Figure 4 shows the corresponding change map produced subsequent to coverage alignment.
Figure 3. Part of 46/88 change map without coverage alignment (2km x 2km. Shaded polygons correspond to regions that have undergone change.
Figure 4. The same part of 46/88 change map, this time with coverage alignment. Notice that sliver polygons have been removed.
## 4. Conclusions
This paper has proposed a new approach to evaluating and improving automated environmental change detection when comparing vector-defined land-cover coverages. In particular we have considered the problem of locational error. A method for aligning coverages prior to intersection has been described. Our particular aim was to ensure that boundaries in separate coverages that are intended to be representing the same real-world phenomena are represented by the same geometry. Preliminary results and training data from land cover maps of Scotland found a reasonably close match between boundary matches found by our procedure and those found manually by experts.
There is much scope for future research. This might include the evaluation of alternative geometric metrics. There also is the opportunity to develop more versatile methods to combine conventional quantitative multivariate statistics with qualitative sources of evidence. Also, it is important to point out that even if the locational errors of map intersections could be reliably addressed, the result of change detection based on the comparison of the respective source-interpreted land cover classes is still subject to potentially large error, which is a function of the accuracy of the individual interpretations. The authors are currently carrying out work aimed at addressing this issue and hope to report findings in the near future.
## Acknowledgments
Acknowledgment is due to the Scottish Office Agriculture, Environment and Fisheries Department for use of the historical and 1988 land cover data. Thanks also to John Bell for his assistance in interpreting aerial photographs. JMW was supported by NERC Grant GR3/10569.
## References
Chrisman N.R., 1987. 'The accuracy of map overlays: A reassessment'. Landscape and Urban Planning 14, pp. 427-439.
Chrisman N.R., 1991. 'A diagnostic test for error in categorical maps'. Proc Auto-Carto 10, pp. 330-348.
Dougenik J.A., 1979. 'WHIRLPOOL: A geometric processor for polygon coverage data'. Proc Auto-Carto 4, pp. 304-311.
Edwards G. and K.E. Lowell, 1996. 'Modelling uncertainty in photointerpreted boundaries'. Photogrammetric Engineering and Remote Sensing 62(4), pp. 337-391.
Zhang G. and J. Tulip, 1990. 'An algorithm for the avoidance of sliver polygons and clusters of points in spatial overlay'. Proc 4th International Symposium on Spatial Data Handling 1, pp. 141-150. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8020833730697632, "perplexity": 1402.9161903446347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948589512.63/warc/CC-MAIN-20171216220904-20171217002904-00587.warc.gz"} |
http://math.stackexchange.com/questions/148589/divisibility-remainders-and-greatest-common-divisors?answertab=active | # Divisibility: Remainders and Greatest Common Divisors [duplicate]
Possible Duplicate:
Why is $\gcd(a,b)=\gcd(b,r)$ when $a = qb + r$?
Any idea how to prove that if $a,b \in \Bbb Z$ with $b = aq + r$, then $\gcd(a,b) = \gcd(a,r)$?
-
## marked as duplicate by Alex Becker♦, Arturo Magidin, mixedmath♦, Chris Eagle, Zev ChonolesJun 8 '12 at 15:36
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
HINT Note that any common divisor of $a$ and $b$ divides $r$. (Why?) Similarly, any common divisor of $a$ and $r$ divides $b$. (Why?) – user17762 May 23 '12 at 5:23
Prove $\gcd(a,b)=\gcd(a,b-a)$, then invoke induction. – anon May 23 '12 at 5:23
## 1 Answer
Note that $k$ divides both $a$ and $b$ if and only if it divides both $a$ and $b+ta$ for any $t$. (Prove that if it divides $a$ and $b$ then it divides $a$ and $b+ta$. The converse follows by applying the argument again to $a$ and $b+ta$ to get $a$ and $(b+ta)+(-t)a$.
Now, if $b=aq+r$ then we have: \begin{align*} k|a,b &\iff k|a,b+a(-q)\\ &\iff k|a,r \end{align*} So the set of common divisors of $a$ and $b$ and the set of common divisors of $a$ and $r$ coincides.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9238284826278687, "perplexity": 308.21928767969587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657137056.60/warc/CC-MAIN-20140914011217-00111-ip-10-234-18-248.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/groups-normalizer-abstract-algebra-dihedral-groups-help.262641/ | # Groups, Normalizer, Abstract Algebra, Dihedral Groups help?
1. Oct 8, 2008
### nobody56
1. Let G be a Group, and let H be a subgroup of G. Define the normalizer of H in G to be the set NG(H)= the set of g in G such that gHg-1=H.
a) Prove Ng(H) is a subgroup of G
b) In each of the part (i) to (ii) show that the specified group G and subgroup H of G, CG(H)=H, and NG(H)=G
(i) G = D4 and H = {1, s, r2, sr2}
(ii) G = D5 and H = {1, r, r2, r3, r4}
2. Relevant equations
Notice that if g is an element of CG(H), then ghg-1 = h for all h elements of H so, CG(H) is a sub group of NG(H).
3. The attempt at a solution
a) Using the 1 step subgroup test.
If a and b are elements in NG(H) then show ab-1 is an element
Let a and b be elements in NG(H), further let a = g= b meaning gHg-1=H=gHg-1. So (gHg-1)(gHg-1)-1=(gHg-1)(g-1H-1g), which by associativity and definition of inverses and closed under inverses, = e which is an element in NG(H), therefor ab-1 is an element of NG(H) for every a,b elements of NG(H) and by the one step subgroup test, NG(H) is a subgroup of G.
b) I'm not sure where to begin, or if part a is even right...
2. Oct 8, 2008
### Dick
The proof is not very good. You can't assume a=b! You take a and b in NG(H). So aHa^(-1)=H and bHb^(-1)=H. You want to show c=ab^(-1) is in NG(H). Which means cHc^(-1)=H. First off, what is c^(-1)?
3. Oct 8, 2008
### nobody56
Would c^(-1)=(ab^(-1))^(-1)=a^(-1)b?
4. Oct 8, 2008
### nobody56
could i say let a, b be elements of NG(H), and let a = aHa^-1 and b = bHb^(-1), and since a and b are elements in NG(H) aHa^(-1)=bHb^(-1), then use the division algorithm for right cancellation to say aHa^(-1)b=bHb^(-1)b which goes to aHc^(-1)=bHe and then similarly by the division algorithm for left cancellation, a^(-1)aHc^(-1)=a^(-1)bHe, which simplifies to eHc^(-1)=c^(-1)He....would that lets us say c^(-1) is and element in NG(H), getting us to c^(-1)H(c^(-1))=c^(-1)Hc....but can i claim commutativity and say cHc^(-1)?
5. Oct 8, 2008
### Dick
Mind the ordering. (ab^(-1))^(-1)=ba^(-1). To prove it multiply that by ab^(-1). Do you see how that works?
6. Oct 8, 2008
### Dick
You cannot say a=aHa^-1 for a start. H=aHa^-1. Not a. The rest of it is sort of ok, If you can get to c^(-1)Hc=H that's fine, you don't need any commutativity to turn that into cHc^(-1)=H. Do you see why?
7. Oct 8, 2008
### nobody56
yeah, i forgot the ordering....as for c^(-1)Hc=(c^(-1)Hc)^(-1)=cHc^(-1) by the same ordering property right?
8. Oct 8, 2008
### Dick
Ok, since H^(-1)=H.
9. Oct 8, 2008
### nobody56
right, so for b could I take the definition of a Centralizer CG(H)={g element of G, such that ga=ag} and just plug in the elements to show CG(H)=H? and do the same for the normalizer?
10. Oct 8, 2008
### Dick
Yes, now you just have to do some calculations in the dihedral groups to try and figure out what the normalizer and centralizer are.
11. Oct 10, 2008
### nobody56
So then would i just be able to say that since CD4(r2)={1, r2, s, sr2},and CD4(s)={1, r2, s, sr2}, and CD4(sr2)={1, r2, s, sr2}, then CD4(1, s, r2, sr2)={1, r2, s, sr2}...or should i try and show each part, as in 1*r=r*1, which seems kinda tidious
12. Oct 10, 2008
### Dick
I would think you could just state the answers without showing every calculation you did. But that's just my opinion.
13. Oct 10, 2008
### nobody56
i decided to play it safe, and just wrote it all out, thank you for your time and help!
Similar Discussions: Groups, Normalizer, Abstract Algebra, Dihedral Groups help? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9137185215950012, "perplexity": 2641.9709552117474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686465.34/warc/CC-MAIN-20170920052220-20170920072220-00197.warc.gz"} |
https://electronics.stackexchange.com/questions/532355/obtaining-frequency-of-550-hz-from-simple-triangle-and-square-waveform-generator | # Obtaining frequency of 550 Hz from Simple Triangle and Square Waveform Generator Circuit
First time asking a question so sorry if I miss some things. I'll try to be thorough and straightforward.
I'm trying to design a square wave from a triangle wave-square wave generator circuit.
The output wave should be running at 550 Hz with a 2 Vpk-pk, and an offset of 1 V, which is why I added a diode network at the output of the second op-amp.
The thing is originally I was having trouble getting the right frequency of the signal. I solved for R1 using the following relationships, setting C1 to 0.1uF:
$$t = R1C1 = \frac{1}{f}$$
Which gave me:
$$R1 = 18.18 k\Omega$$
Also, finding the time using frequency gives me:
$$\tau = 1.82ms$$ My problem is I'm not too sure how R2 and R3 affect the frequency. Basically, I 've been switching out resistor values and found changing R2 affects the frequency the most. 3.42 k, gives me the right tau, but I'm not sure why though.
I know that the second op amp acts as a comparator, right? So I'm guessing the voltage between R3 and R2 is the threshold. But I'm still a little lost on how that relates to the frequency of the signal.
• You may find it more useful to take Waveform A from the output of the 2nd op amp, rather than the point you have chosen. – WhatRoughBeast Nov 14 '20 at 20:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9523630142211914, "perplexity": 472.4737765455423}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038879305.68/warc/CC-MAIN-20210419080654-20210419110654-00111.warc.gz"} |
http://mathhelpforum.com/calculus/153854-integrate-completing-square.html | # Thread: Integrate by completing the square.
1. ## Integrate by completing the square.
2. Hello, roza!
By completing the square, show that:
. . $\displaystyle \int^{\frac{1}{2}}_0 \frac{dx}{x^2-x+1} \;=\;\frac{\pi}{3\sqrt{3}}$
Complete the square: . $(x^2-x+\frac{1}{4}) + 1 - \frac{1}{4}$
. . . . . . $=\;(x-\frac{1}{2})^2 + \frac{3}{4} \;=\; (x - \frac{1}{2})^2 + \left(\frac{\sqrt{3}}{2}\right)^2$
We have: . $\displaystyle \int^{\frac{1}{2}}_0 \frac{dx}{\left(x-\frac{1}{2}\right)^2 + \left(\frac{\sqrt{3}}{2}\right)^2}$
Let: . $x - \frac{1}{2} \:=\:\frac{\sqrt{3}}{2}\tan\theta \quad\Rightarrow\quad dx \:=\:\frac{\sqrt{3}}{2}\sec^2\!\theta\,d\theta$
. . and: . $\left(x-\frac{1}{2}\right)^2 + \left(\frac{\sqrt{3}}{2}\right)^2 \;=\; \frac{3}{4}\sec^2\!\theta$
Substitute: . $\displaystyle \int\dfrac{\frac{\sqrt{3}}{2}\sec^2\!\theta\,d\the ta}{\frac{3}{4}\sec^2\!\theta} \;=\; \frac{2}{\sqrt{3}}\!\int d\theta \;=\;\frac{2}{\sqrt{3}}\,\theta + C$
Back-substitute: . $\tan\theta \:=\:\dfrac{x-\frac{1}{2}}{\frac{\sqrt{3}}{2}} \;=\;\dfrac{2x-1}{\sqrt{3}}$
We have: . $\dfrac{2}{\sqrt{3}}\,\arctan\left(\dfrac{2x-1}{\sqrt{3}}\right)\,\bigg]^{\frac{1}{2}}_0$
. . . . $=\; \dfrac{2}{\sqrt{3}}\,\bigg[\arctan 0 - \arctan\left(\text{-}\dfrac{1}{\sqrt{3}}\right)\bigg]$
. . . . $=\;\dfrac{2}{\sqrt{3}}\,\bigg[0 - \left(\text{-}\dfrac{\pi}{6}\right)\bigg]$
. . . . $=\;\dfrac{\pi}{3\sqrt{3}}$
3. $\int{0}{1\2}\frac{dx}{(x^2 - x + 1)}$
Write $x^2 - x + 1$ as $x^2 - x + 1/4 + 3/4 = (x-\frac{1}{2})^2 + (\frac{\sqrt{3}}{2})^2$
This reduces the given integration to a standard form. Now solve it. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9973271489143372, "perplexity": 1585.2837758303776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661915.89/warc/CC-MAIN-20160924173741-00082-ip-10-143-35-109.ec2.internal.warc.gz"} |
http://link.springer.com/article/10.1007%2Fs10950-015-9489-9 | Journal of Seismology
, Volume 19, Issue 3, pp 721–739
# Modeling earthquake dynamics
Original Article
DOI: 10.1007/s10950-015-9489-9
Charpentier, A. & Durand, M. J Seismol (2015) 19: 721. doi:10.1007/s10950-015-9489-9
## Abstract
In this paper, we investigate questions arising in Parsons and Geist (Bull Seismol Soc Am 102:1–11, 2012). Pseudo causal models connecting magnitudes and waiting times are considered, through generalized regression. We do use conditional model (magnitude given previous waiting time, and conversely) as an extension to joint distribution model described in Nikoloulopoulos and Karlis (Environmetrics 19: 251–269, 2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of waiting time following previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where parameters are functions of the magnitude of the previous earthquake. We use those two models, alternatively, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year or a decade.
### Keywords
Duration Earthquakes Generalized linear models Seismic gap hypothesis | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9198242425918579, "perplexity": 2589.350100033795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189092.35/warc/CC-MAIN-20170322212949-00565-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://en.wikipedia.org/wiki/Levi-Civita_connection | # Levi-Civita connection
In Riemannian geometry, the Levi-Civita connection is a specific connection on the tangent bundle of a manifold. More specifically, it is the torsion-free metric connection, i.e., the torsion-free connection on the tangent bundle (an affine connection) preserving a given (pseudo-)Riemannian metric.
The fundamental theorem of Riemannian geometry states that there is a unique connection which satisfies these properties.
In the theory of Riemannian and pseudo-Riemannian manifolds the term covariant derivative is often used for the Levi-Civita connection. The components of this connection with respect to a system of local coordinates are called Christoffel symbols.
## History
The Levi-Civita connection is named after Tullio Levi-Civita, although originally "discovered" by Elwin Bruno Christoffel. Levi-Civita,[1] along with Gregorio Ricci-Curbastro, used Christoffel's symbols[2] to define the notion of parallel transport and explore the relationship of parallel transport with the curvature, thus developing the modern notion of holonomy.[3]
The Levi-Civita notions of intrinsic derivative and parallel displacement of a vector along a curve make sense on an abstract Riemannian manifold, even though the original motivation relied on a specific embedding
${\displaystyle M^{n}\subset \mathbf {R} ^{\frac {n(n+1)}{2}},}$
since the definition of the Christoffel symbols make sense in any Riemannian manifold. In 1869, Christoffel discovered that the components of the intrinsic derivative of a vector transform as the components of a contravariant vector. This discovery was the real beginning of tensor analysis. It was not until 1917 that Levi-Civita interpreted the intrinsic derivative in the case of an embedded surface as the tangential component of the usual derivative in the ambient affine space.
## Notation
The metric g can take up to two vectors or vector fields X, Y as arguments. In the former case the output is a number, the (pseudo-)inner product of X and Y. In the latter case, the inner product of Xp, Yp is taken at all points p on the manifold so that g(X, Y) defines a smooth function on M. Vector fields act as differential operators on smooth functions. In a basis, the action reads
${\displaystyle Xf=X^{i}{\frac {\partial }{\partial x^{i}}}f=X^{i}\partial _{i}f}$,
where Einstein's summation convention is used.
## Formal definition
An affine connection is called a Levi-Civita connection if
1. it preserves the metric, i.e., g = 0.
2. it is torsion-free, i.e., for any vector fields X and Y we have XY − ∇YX = [X,Y], where [X,Y] is the Lie bracket of the vector fields X and Y.
Condition 1 above is sometimes referred to as compatibility with the metric, and condition 2 is sometimes called symmetry, cf. DoCarmo's text.
Assuming a Levi-Civita connection exists it is uniquely determined. Using conditions 1 and the symmetry of the metric tensor g we find:
${\displaystyle X(g(Y,Z))+Y(g(Z,X))-Z(g(Y,X))=g(\nabla _{X}Y+\nabla _{Y}X,Z)+g(\nabla _{X}Z-\nabla _{Z}X,Y)+g(\nabla _{Y}Z-\nabla _{Z}Y,X).}$
By condition 2 the right hand side is equal to
${\displaystyle 2g(\nabla _{X}Y,Z)-g([X,Y],Z)+g([X,Z],Y)+g([Y,Z],X)}$
so we find
${\displaystyle g(\nabla _{X}Y,Z)={\frac {1}{2}}\{X(g(Y,Z))+Y(g(Z,X))-Z(g(X,Y))+g([X,Y],Z)-g([Y,Z],X)-g([X,Z],Y)\}.}$
Since Z is arbitrary, this uniquely determines XY. Conversely, using the last line as a definition one shows that the expression so defined is a connection compatible with the metric, i.e. is a Levi-Civita connection.
## Christoffel symbols
Let ∇ be the connection of the Riemannian metric. Choose local coordinates ${\displaystyle x^{1}\ldots x^{n}}$ and let ${\displaystyle \Gamma ^{l}{}_{jk}}$ be the Christoffel symbols with respect to these coordinates. The torsion freeness condition 2 is then equivalent to the symmetry
${\displaystyle \Gamma ^{l}{}_{jk}=\Gamma ^{l}{}_{kj}.}$
The definition of the Levi-Civita connection derived above is equivalent to a definition of the Christoffel symbols in terms of the metric as
${\displaystyle \Gamma ^{l}{}_{jk}={\tfrac {1}{2}}g^{lr}\left\{\partial _{k}g_{rj}+\partial _{j}g_{rk}-\partial _{r}g_{jk}\right\}}$
where as usual ${\displaystyle g^{ij}}$ are the coefficients of the dual metric tensor, i.e. the entries of the inverse of the matrix ${\displaystyle (g_{kl})}$.
## Derivative along curve
The Levi-Civita connection (like any affine connection) also defines a derivative along curves, sometimes denoted by D.
Given a smooth curve γ on (M,g) and a vector field V along γ its derivative is defined by
${\displaystyle D_{t}V=\nabla _{{\dot {\gamma }}(t)}V.}$
Formally, D is the pullback connection ${\displaystyle \gamma ^{*}\nabla }$ on the pullback bundle γ*TM.
In particular, ${\displaystyle {\dot {\gamma }}(t)}$ is a vector field along the curve γ itself. If ${\displaystyle \nabla _{{\dot {\gamma }}(t)}{\dot {\gamma }}(t)}$ vanishes, the curve is called a geodesic of the covariant derivative. Formally, the condition can be restated as the vanishing of the pullback connection applied to ${\displaystyle {\dot {\gamma }}}$ :
${\displaystyle (\gamma ^{*}\nabla ){\dot {\gamma }}\equiv 0.}$
If the covariant derivative is the Levi-Civita connection of a certain metric, then the geodesics for the connection are precisely those geodesics of the metric that are parametrised proportionally to their arc length.
## Parallel transport
In general, parallel transport along a curve with respect to a connection defines isomorphisms between the tangent spaces at the points of the curve. If the connection is a Levi-Civita connection, then these isomorphisms are orthogonal – that is, they preserve the inner products on the various tangent spaces.
The images below show parallel transport of the Levi-Civita connection associated to two different Riemannian metrics on the plane, expressed in polar coordinates. The metric of left image corresponds to the standard Euclidean metric, while the right metric has standard form in polar coordinates, and thus preserves the vector ${\displaystyle \partial /\partial _{\theta }}$ tangent to the circle. This metric has a singularity at the origin in Euclidean coordinates.
Parallel Transports Under Levi-Civita Connections
This transport is given by the metric ds2 = dr2 + r2dθ2.
This transport is given by the metric ds2 = dr2 + dθ2.
## Example: The unit sphere in R3
Let ${\displaystyle \langle \cdot ,\cdot \rangle }$ be the usual scalar product on R3. Let S2 be the unit sphere in R3. The tangent space to S2 at a point m is naturally identified with the vector sub-space of R3 consisting of all vectors orthogonal to m. It follows that a vector field Y on S2 can be seen as a map Y: S2R3, which satisfies
${\displaystyle \langle Y(m),m\rangle =0,\qquad \forall m\in \mathbf {S} ^{2}.}$
Denote by dmY(X) the covariant derivative of the map Y in the direction of the vector X. Then we have:
Lemma: The formula
${\displaystyle \left(\nabla _{X}Y\right)(m)=d_{m}Y(X)+\langle X(m),Y(m)\rangle m}$
defines an affine connection on S2 with vanishing torsion.
Proof: It is straightforward to prove that ∇ satisfies the Leibniz identity and is C(S2) linear in the first variable. It is also a straightforward computation to show that this connection is torsion free. So all that needs to be proved here is that the formula above does indeed define a vector field. That is, we need to prove that for all m in S2
${\displaystyle \langle \left(\nabla _{X}Y\right)(m),m\rangle =0\qquad (1).}$
Consider the map f that sends every m in S2 to <Y(m), m>, which is always 0. The map f is constant, hence its differential vanishes. In particular
${\displaystyle d_{m}f(X)=\langle d_{m}Y(X),m\rangle +\langle Y(m),X(m)\rangle =0.}$
The equation (1) above follows.${\displaystyle \Box }$
In fact, this connection is the Levi-Civita connection for the metric on S2 inherited from R3. Indeed, one can check that this connection preserves the metric.
## Notes
1. ^ See Levi-Civita (1917)
2. ^ See Christoffel (1869)
3. ^ See Spivak (1999) Volume II, page 238
## References
### Primary historical references
• Christoffel, Elwin Bruno (1869), "Über die Transformation der homogenen Differentialausdrücke zweiten Grades", J. für die Reine und Angew. Math., 70: 46–70
• Levi-Civita, Tullio (1917), "Nozione di parallelismo in una varietà qualunque e consequente specificazione geometrica della curvatura Riemanniana", Rend. Circ. Mat. Palermo, 42: 73–205, doi:10.1007/bf03014898
### Secondary references
• Boothby, William M. (1986). An introduction to differentiable manifolds and Riemannian geometry. Academic Press. ISBN 0-12-116052-1.
• Kobayashi, S.; Nomizu, K. (1963). Foundations of differential geometry. John Wiley & Sons. ISBN 0-470-49647-9. See Volume I pag. 158
• Spivak, Michael (1999). A Comprehensive introduction to differential geometry (Volume II). Publish or Perish Press. ISBN 0-914098-71-3. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 24, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9776324033737183, "perplexity": 460.0823641994987}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00474-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://zackmdavis.net/blog/tag/notation/ | # Iff as Conditional Chain
I'm not sure I like how when we want to prove that two statements are equivalent, we typically say "A if and only if B" and we prove it by separately proving "both directions" AB and BA, but when we want to prove three or more statements are equivalent, we typically say "The following are equivalent" and prove a "circular chain" of conditionals (1) ⇒ (2) ⇒ [...] ⇒ (n) ⇒ (1), as if these were different proof strategies. Because really, the "both directions" business is just a special case of the chain-of-conditionals idea: (1) ⇒ (2) ⇒ (1). At the very least, one of my books ought to have mentioned this.
# Subscripting as Function Composition
Dear reader, don't laugh: I had thought I already understood subsequences, but then it turned out that I was mistaken. I should have noticed the vague, unverbalized discomfort I felt about the subscripted-subscript notation, (ank). But really it shouldn't be confusing at all: as Bernd S. W. Schröder points out in his Mathematical Analysis: A Concise Introduction, it's just a function composition. If it helps (it helped me), say that (an) is mere syntactic sugar for a(n): ℕ → ℝ, a function from the naturals to the reals. And (ank) is just the composition a(n(k)), with n(k): ℕ → ℕ being a strictly increasing function from the naturals to the naturals.
# Colon-Equals
Sometimes I think it's sad that the most popular programming languages use "=" for assignment rather than ":=" (like Pascal). Equality is a symmetrical relationship: "a equals b" means that a and b are the same thing or have the same value, and this is clearly the same as saying that "b equals a". Assignment isn't like that: putting the value b in a box named a isn't the same as putting the value a in a box named b!—surely an asymmetrical operation deserves an asymmetrical notation? Okay, so it is an extra character, but any decent editor can be configured to save you the keystroke.
I'd like to see the colon-equals assignment symbol more often in math, too. For example, shouldn't we be writing lower indices of summation like this?—
—the rationale being that the text under the sigma isn't asserting that j equals zero, but rather that j is assigned zero as the initial index value of what is, in fact, a for loop:
sum = 0;
for (int j=0; j<=n; j++)
{
sum += f(j);
}
return sum; | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8581699132919312, "perplexity": 907.6254929163972}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887065.16/warc/CC-MAIN-20180118032119-20180118052119-00149.warc.gz"} |
https://en.wikipedia.org/wiki/History_of_the_function_concept | # History of the function concept
The mathematical concept of a function emerged in the 17th century in connection with the development of the calculus; for example, the slope ${\displaystyle \operatorname {d} \!y/\operatorname {d} \!x}$ of a graph at a point was regarded as a function of the x-coordinate of the point. Functions were not explicitly considered in antiquity, but some precursors of the concept can perhaps be seen in the work of medieval philosophers and mathematicians such as Oresme.
Mathematicians of the 18th century typically regarded a function as being defined by an analytic expression. In the 19th century, the demands of the rigorous development of analysis by Weierstrass and others, the reformulation of geometry in terms of analysis, and the invention of set theory by Cantor, eventually led to the much more general modern concept of a function as a single-valued mapping from one set to another.
## Functions before the 17th century
According to Dieudonné [1] and Ponte,[2] the concept of a function emerged in the 17th century as a result of the development of analytic geometry and the infinitesimal calculus. Nevertheless, Medvedev suggests that the implicit concept of a function is one with an ancient lineage.[3] Ponte also sees more explicit approaches to the concept in the Middle Ages:
Historically, some mathematicians can be regarded as having foreseen and come close to a modern formulation of the concept of function. Among them is Oresme (1323–1382) . . . In his theory, some general ideas about independent and dependent variable quantities seem to be present.[4]
The development of analytical geometry around 1640 allowed mathematicians to go between geometric problems about curves and algebraic relations between "variable coordinates x and y."[5] Calculus was developed using the notion of variables, with their associated geometric meaning, which persisted well into the eighteenth century.[6] However, the terminology of "function" came to be used in interactions between Leibniz and Bernoulli towards the end of the 17th century.[7]
## The notion of "function" in analysis
The term "function" was introduced by Gottfried Leibniz, in a 1673 letter, to describe a quantity related to a curve, such as a curve's slope at a specific point.[8][not in citation given] Johann Bernoulli started calling expressions made of a single variable "functions." In 1698, he agreed with Leibniz that any quantity formed "in an algebraic and transcendental manner" may be called a function of x.[9] By 1718, he came to regard as a function "any expression made up of a variable and some constants."[10] Alexis Claude Clairaut (in approximately 1734) and Leonhard Euler introduced the familiar notation ${\displaystyle {f(x)}}$ for the value of a function.[11]
The functions considered in those times are called today differentiable functions. For this type of function, one can talk about limits and derivatives; both are measurements of the output or the change in the output as it depends on the input or the change in the input. Such functions are the basis of calculus.
### Euler
In the first volume of his fundamental text Introductio in Analysin Infinitorum, published in 1748, Euler gave essentially the same definition of a function as his teacher Bernoulli, as an expression or formula involving variables and constants e.g., ${\displaystyle {x^{2}+3x+2}}$.[12] Euler's own definition reads:
A function of a variable quantity is an analytic expression composed in any way whatsoever of the variable quantity and numbers or constant quantities.[13]
Euler also allowed multi-valued functions whose values are determined by an implicit equation.
In 1755, however, in his Institutiones Calculi Differentialis, Euler gave a more general concept of a function:
When certain quantities depend on others in such a way that they undergo a change when the latter change, then the first are called functions of the second. This name has an extremely broad character; it encompasses all the ways in which one quantity can be determined in terms of others.[14]
Medvedev[15] considers that "In essence this is the definition that became known as Dirichlet's definition." Edwards[16] also credits Euler with a general concept of a function and says further that
The relations among these quantities are not thought of as being given by formulas, but on the other hand they are surely not thought of as being the sort of general set-theoretic, anything-goes subsets of product spaces that modern mathematicians mean when they use the word "function".
### Fourier
In his Théorie Analytique de la Chaleur,[17] Fourier claimed that an arbitrary function could be represented by a Fourier series.[18] Fourier had a general conception of a function, which included functions that were neither continuous nor defined by an analytical expression.[19] Related questions on the nature and representation of functions, arising from the solution of the wave equation for a vibrating string, had already been the subject of dispute between d'Alembert and Euler, and they had a significant impact in generalizing the notion of a function. Luzin observes that:
The modern understanding of function and its definition, which seems correct to us, could arise only after Fourier's discovery. His discovery showed clearly that most of the misunderstandings that arose in the debate about the vibrating string were the result of confusing two seemingly identical but actually vastly different concepts, namely that of function and that of its analytic representation. Indeed, prior to Fourier's discovery no distinction was drawn between the concepts of "function" and of "analytic representation," and it was this discovery that brought about their disconnection.[20]
### Cauchy
During the 19th century, mathematicians started to formalize all the different branches of mathematics. One of the first to do so was Cauchy; his somewhat imprecise results were later made completely rigorous by Weierstrass, who advocated building calculus on arithmetic rather than on geometry, which favoured Euler's definition over Leibniz's (see arithmetization of analysis). According to Smithies, Cauchy thought of functions as being defined by equations involving real or complex numbers, and tacitly assumed they were continuous:
Cauchy makes some general remarks about functions in Chapter I, Section 1 of his Analyse algébrique (1821). From what he says there, it is clear that he normally regards a function as being defined by an analytic expression (if it is explicit) or by an equation or a system of equations (if it is implicit); where he differs from his predecessors is that he is prepared to consider the possibility that a function may be defined only for a restricted range of the independent variable.[21]
### Lobachevsky and Dirichlet
Nikolai Lobachevsky[22] and Peter Gustav Lejeune Dirichlet[23] are traditionally credited with independently giving the modern "formal" definition of a function as a relation in which every first element has a unique second element.
Lobachevsky (1834) writes that
The general concept of a function requires that a function of x be defined as a number given for each x and varying gradually with x. The value of the function can be given either by an analytic expression, or by a condition that provides a means of examining all numbers and choosing one of them; or finally the dependence may exist but remain unknown.[24]
while Dirichlet (1837) writes
If now a unique finite y corresponding to each x, and moreover in such a way that when x ranges continuously over the interval from a to b, ${\displaystyle {y=f(x)}}$ also varies continuously, then y is called a continuous function of x for this interval. It is not at all necessary here that y be given in terms of x by one and the same law throughout the entire interval, and it is not necessary that it be regarded as a dependence expressed using mathematical operations.[25]
Eves asserts that "the student of mathematics usually meets the Dirichlet definition of function in his introductory course in calculus,.[26]
Dirichlet's claim to this formalization has been disputed by Imre Lakatos:
There is no such definition in Dirichlet's works at all. But there is ample evidence that he had no idea of this concept. In his [1837] paper for instance, when he discusses piecewise continuous functions, he says that at points of discontinuity the function has two values: ...[27]
However, Gardiner says "...it seems to me that Lakatos goes too far, for example, when he asserts that 'there is ample evidence that [Dirichlet] had no idea of [the modern function] concept'."[28] Moreover, as noted above, Dirichlet's paper does appear to include a definition along the lines of what is usually ascribed to him, even though (like Lobachevsky) he states it only for continuous functions of a real variable.
Similarly, Lavine observes that:
It is a matter of some dispute how much credit Dirichlet deserves for the modern definition of a function, in part because he restricted his definition to continuous functions....I believe Dirichlet defined the notion of continuous function to make it clear that no rule or law is required even in the case of continuous functions, not just in general. This would have deserved special emphasis because of Euler's definition of a continuous function as one given by single expression-or law. But I also doubt there is sufficient evidence to settle the dispute.[29]
Because Lobachevsky and Dirichlet have been credited as among the first to introduce the notion of an arbitrary correspondence, this notion is sometimes referred to as the Dirichlet or Lobachevsky-Dirichlet definition of a function.[30] A general version of this definition was later used by Bourbaki (1939), and some in the education community refer to it as the "Dirichlet–Bourbaki" definition of a function.
### Dedekind
Dieudonné, who was one of the founding members of the Bourbaki group, credits a precise and general modern definition of a function to Dedekind in his work Was sind und was sollen die Zahlen,[31] which appeared in 1888 but had already been drafted in 1878. Dieudonné observes that instead of confining himself, as in previous conceptions, to real (or complex) functions, Dedekind defines a function as a single-valued mapping between any two sets:
What was new and what was to be essential for the whole of mathematics was the entirely general conception of a function.[32]
### Hardy
Hardy 1908, pp. 26–28 defined a function as a relation between two variables x and y such that "to some values of x at any rate correspond values of y." He neither required the function to be defined for all values of x nor to associate each value of x to a single value of y. This broad definition of a function encompasses more relations than are ordinarily considered functions in contemporary mathematics. For example, Hardy's definition includes multivalued functions and what in computability theory are called partial functions.
## The logician's "function" prior to 1850
Logicians of this time were primarily involved with analyzing syllogisms (the 2000-year-old Aristotelian forms and otherwise), or as Augustus De Morgan (1847) stated it: "the examination of that part of reasoning which depends upon the manner in which inferences are formed, and the investigation of general maxims and rules for constructing arguments".[33] At this time the notion of (logical) "function" is not explicit, but at least in the work of De Morgan and George Boole it is implied: we see abstraction of the argument forms, the introduction of variables, the introduction of a symbolic algebra with respect to these variables, and some of the notions of set theory.
De Morgan's 1847 "FORMAL LOGIC OR, The Calculus of Inference, Necessary and Probable" observes that "[a] logical truth depends upon the structure of the statement, and not upon the particular matters spoken of"; he wastes no time (preface page i) abstracting: "In the form of the proposition, the copula is made as abstract as the terms". He immediately (p. 1) casts what he calls "the proposition" (present-day propositional function or relation) into a form such as "X is Y", where the symbols X, "is", and Y represent, respectively, the subject, copula, and predicate. While the word "function" does not appear, the notion of "abstraction" is there, "variables" are there, the notion of inclusion in his symbolism "all of the Δ is in the О" (p. 9) is there, and lastly a new symbolism for logical analysis of the notion of "relation" (he uses the word with respect to this example " X)Y " (p. 75) ) is there:
" A1 X)Y To take an X it is necessary to take a Y" [or To be an X it is necessary to be a Y]
" A1 Y)X To take a Y it is sufficient to take a X" [or To be a Y it is sufficient to be an X], etc.
In his 1848 The Nature of Logic Boole asserts that "logic . . . is in a more especial sense the science of reasoning by signs", and he briefly discusses the notions of "belonging to" and "class": "An individual may possess a great variety of attributes and thus belonging to a great variety of different classes" .[34] Like De Morgan he uses the notion of "variable" drawn from analysis; he gives an example of "represent[ing] the class oxen by x and that of horses by y and the conjunction and by the sign + . . . we might represent the aggregate class oxen and horses by x + y".[35]
In the context of "the Differential Calculus" Boole defined (circa 1849) the notion of a function as follows:
"That quantity whose variation is uniform . . . is called the independent variable. That quantity whose variation is referred to the variation of the former is said to be a function of it. The Differential calculus enables us in every case to pass from the function to the limit. This it does by a certain Operation. But in the very Idea of an Operation is . . . the idea of an inverse operation. To effect that inverse operation in the present instance is the business of the Int[egral] Calculus."[36]
## The logicians' "function" 1850–1950
Eves observes "that logicians have endeavored to push down further the starting level of the definitional development of mathematics and to derive the theory of sets, or classes, from a foundation in the logic of propositions and propositional functions".[37] But by the late 19th century the logicians' research into the foundations of mathematics was undergoing a major split. The direction of the first group, the Logicists, can probably be summed up best by Bertrand Russell 1903 – "to fulfil two objects, first, to show that all mathematics follows from symbolic logic, and secondly to discover, as far as possible, what are the principles of symbolic logic itself."
The second group of logicians, the set-theorists, emerged with Georg Cantor's "set theory" (1870–1890) but were driven forward partly as a result of Russell's discovery of a paradox that could be derived from Frege's conception of "function", but also as a reaction against Russell's proposed solution.[38] Zermelo's set-theoretic response was his 1908 Investigations in the foundations of set theory I – the first axiomatic set theory; here too the notion of "propositional function" plays a role.
### George Boole's The Laws of Thought 1854; John Venn's Symbolic Logic 1881
In his An Investigation into the laws of thought Boole now defined a function in terms of a symbol x as follows:
"8. Definition. – Any algebraic expression involving symbol x is termed a function of x, and may be represented by the abbreviated form f(x)"[39]
Boole then used algebraic expressions to define both algebraic and logical notions, e.g., 1−x is logical NOT(x), xy is the logical AND(x,y), x + y is the logical OR(x, y), x(x+y) is xx+xy, and "the special law" xx = x2 = x.[40]
In his 1881 Symbolic Logic Venn was using the words "logical function" and the contemporary symbolism ( x = f(y), y = f−1(x), cf page xxi) plus the circle-diagrams historically associated with Venn to describe "class relations",[41] the notions "'quantifying' our predicate", "propositions in respect of their extension", "the relation of inclusion and exclusion of two classes to one another", and "propositional function" (all on p. 10), the bar over a variable to indicate not-x (page 43), etc. Indeed he equated unequivocally the notion of "logical function" with "class" [modern "set"]: "... on the view adopted in this book, f(x) never stands for anything but a logical class. It may be a compound class aggregated of many simple classes; it may be a class indicated by certain inverse logical operations, it may be composed of two groups of classes equal to one another, or what is the same thing, their difference declared equal to zero, that is, a logical equation. But however composed or derived, f(x) with us will never be anything else than a general expression for such logical classes of things as may fairly find a place in ordinary Logic".[42]
### Frege's Begriffsschrift 1879
Gottlob Frege's Begriffsschrift (1879) preceded Giuseppe Peano (1889), but Peano had no knowledge of Frege 1879 until after he had published his 1889.[43] Both writers strongly influenced Russell (1903). Russell in turn influenced much of 20th-century mathematics and logic through his Principia Mathematica (1913) jointly authored with Alfred North Whitehead.
At the outset Frege abandons the traditional "concepts subject and predicate", replacing them with argument and function respectively, which he believes "will stand the test of time. It is easy to see how regarding a content as a function of an argument leads to the formation of concepts. Furthermore, the demonstration of the connection between the meanings of the words if, and, not, or, there is, some, all, and so forth, deserves attention".[44]
Frege begins his discussion of "function" with an example: Begin with the expression[45] "Hydrogen is lighter than carbon dioxide". Now remove the sign for hydrogen (i.e., the word "hydrogen") and replace it with the sign for oxygen (i.e., the word "oxygen"); this makes a second statement. Do this again (using either statement) and substitute the sign for nitrogen (i.e., the word "nitrogen") and note that "This changes the meaning in such a way that "oxygen" or "nitrogen" enters into the relations in which "hydrogen" stood before".[46] There are three statements:
• "Hydrogen is lighter than carbon dioxide."
• "Oxygen is lighter than carbon dioxide."
• "Nitrogen is lighter than carbon dioxide."
Now observe in all three a "stable component, representing the totality of [the] relations";[47] call this the function, i.e.,
"... is lighter than carbon dioxide", is the function.
Frege calls the argument of the function "[t]he sign [e.g., hydrogen, oxygen, or nitrogen], regarded as replaceable by others that denotes the object standing in these relations".[48] He notes that we could have derived the function as "Hydrogen is lighter than . . .." as well, with an argument position on the right; the exact observation is made by Peano (see more below). Finally, Frege allows for the case of two (or more arguments). For example, remove "carbon dioxide" to yield the invariant part (the function) as:
• "... is lighter than ... "
The one-argument function Frege generalizes into the form Φ(A) where A is the argument and Φ( ) represents the function, whereas the two-argument function he symbolizes as Ψ(A, B) with A and B the arguments and Ψ( , ) the function and cautions that "in general Ψ(A, B) differs from Ψ(B, A)". Using his unique symbolism he translates for the reader the following symbolism:
"We can read |--- Φ(A) as "A has the property Φ. |--- Ψ(A, B) can be translated by "B stands in the relation Ψ to A" or "B is a result of an application of the procedure Ψ to the object A".[49]
### Peano's The Principles of Arithmetic 1889
Peano defined the notion of "function" in a manner somewhat similar to Frege, but without the precision.[50] First Peano defines the sign "K means class, or aggregate of objects",[51] the objects of which satisfy three simple equality-conditions,[52] a = a, (a = b) = (b = a), IF ((a = b) AND (b = c)) THEN (a = c). He then introduces φ, "a sign or an aggregate of signs such that if x is an object of the class s, the expression φx denotes a new object". Peano adds two conditions on these new objects: First, that the three equality-conditions hold for the objects φx; secondly, that "if x and y are objects of class s and if x = y, we assume it is possible to deduce φx = φy".[53] Given all these conditions are met, φ is a "function presign". Likewise he identifies a "function postsign". For example if φ is the function presign a+, then φx yields a+x, or if φ is the function postsign +a then yields x+a.[52]
### Bertrand Russell's The Principles of Mathematics 1903
While the influence of Cantor and Peano was paramount,[54] in Appendix A "The Logical and Arithmetical Doctrines of Frege" of The Principles of Mathematics, Russell arrives at a discussion of Frege's notion of function, "...a point in which Frege's work is very important, and requires careful examination".[55] In response to his 1902 exchange of letters with Frege about the contradiction he discovered in Frege's Begriffsschrift Russell tacked this section on at the last moment.
For Russell the bedeviling notion is that of "variable": "6. Mathematical propositions are not only characterized by the fact that they assert implications, but also by the fact that they contain variables. The notion of the variable is one of the most difficult with which logic has to deal. For the present, I openly wish to make it plain that there are variables in all mathematical propositions, even where at first sight they might seem to be absent. . . . We shall find always, in all mathematical propositions, that the words any or some occur; and these words are the marks of a variable and a formal implication".[56]
As expressed by Russell "the process of transforming constants in a proposition into variables leads to what is called generalization, and gives us, as it were, the formal essence of a proposition ... So long as any term in our proposition can be turned into a variable, our proposition can be generalized; and so long as this is possible, it is the business of mathematics to do it";[57] these generalizations Russell named propositional functions".[58] Indeed he cites and quotes from Frege's Begriffsschrift and presents a vivid example from Frege's 1891 Function und Begriff: That "the essence of the arithmetical function 2x3 + x is what is left when the x is taken away, i.e., in the above instance 2( )3 + ( ). The argument x does not belong to the function but the two taken together make the whole".[55] Russell agreed with Frege's notion of "function" in one sense: "He regards functions – and in this I agree with him – as more fundamental than predicates and relations" but Russell rejected Frege's "theory of subject and assertion", in particular "he thinks that, if a term a occurs in a proposition, the proposition can always be analysed into a and an assertion about a".[55]
### Evolution of Russell's notion of "function" 1908–1913
Russell would carry his ideas forward in his 1908 Mathematical logical as based on the theory of types and into his and Whitehead's 1910–1913 Principia Mathematica. By the time of Principia Mathematica Russell, like Frege, considered the propositional function fundamental: "Propositional functions are the fundamental kind from which the more usual kinds of function, such as "sin x" or log x or "the father of x" are derived. These derivative functions . . . are called "descriptive functions". The functions of propositions . . . are a particular case of propositional functions".[59]
Propositional functions: Because his terminology is different from the contemporary, the reader may be confused by Russell's "propositional function". An example may help. Russell writes a propositional function in its raw form, e.g., as φŷ: "ŷ is hurt". (Observe the circumflex or "hat" over the variable y). For our example, we will assign just 4 values to the variable ŷ: "Bob", "This bird", "Emily the rabbit", and "y". Substitution of one of these values for variable ŷ yields a proposition; this proposition is called a "value" of the propositional function. In our example there are four values of the propositional function, e.g., "Bob is hurt", "This bird is hurt", "Emily the rabbit is hurt" and "y is hurt." A proposition, if it is significant—i.e., if its truth is determinate—has a truth-value of truth or falsity. If a proposition's truth value is "truth" then the variable's value is said to satisfy the propositional function. Finally, per Russell's definition, "a class [set] is all objects satisfying some propositional function" (p. 23). Note the word "all'" – this is how the contemporary notions of "For all ∀" and "there exists at least one instance ∃" enter the treatment (p. 15).
To continue the example: Suppose (from outside the mathematics/logic) one determines that the propositions "Bob is hurt" has a truth value of "falsity", "This bird is hurt" has a truth value of "truth", "Emily the rabbit is hurt" has an indeterminate truth value because "Emily the rabbit" doesn't exist, and "y is hurt" is ambiguous as to its truth value because the argument y itself is ambiguous. While the two propositions "Bob is hurt" and "This bird is hurt" are significant (both have truth values), only the value "This bird" of the variable ŷ satisfies the propositional function φŷ: "ŷ is hurt". When one goes to form the class α: φŷ: "ŷ is hurt", only "This bird" is included, given the four values "Bob", "This bird", "Emily the rabbit" and "y" for variable ŷ and their respective truth-values: falsity, truth, indeterminate, ambiguous.
Russell defines functions of propositions with arguments, and truth-functions f(p).[60] For example, suppose one were to form the "function of propositions with arguments" p1: "NOT(p) AND q" and assign its variables the values of p: "Bob is hurt" and q: "This bird is hurt". (We are restricted to the logical linkages NOT, AND, OR and IMPLIES, and we can only assign "significant" propositions to the variables p and q). Then the "function of propositions with arguments" is p1: NOT("Bob is hurt") AND "This bird is hurt". To determine the truth value of this "function of propositions with arguments" we submit it to a "truth function", e.g., f(p1): f( NOT("Bob is hurt") AND "This bird is hurt" ), which yields a truth value of "truth".
The notion of a "many-one" functional relation": Russell first discusses the notion of "identity", then defines a descriptive function (pages 30ff) as the unique value ιx that satisfies the (2-variable) propositional function (i.e., "relation") φŷ.
N.B. The reader should be warned here that the order of the variables are reversed! y is the independent variable and x is the dependent variable, e.g., x = sin(y).[61]
Russell symbolizes the descriptive function as "the object standing in relation to y": R'y =DEF (ιx)(x R y). Russell repeats that "R'y is a function of y, but not a propositional function [sic]; we shall call it a descriptive function. All the ordinary functions of mathematics are of this kind. Thus in our notation "sin y" would be written " sin 'y ", and "sin" would stand for the relation sin 'y has to y".[62]
## The formalist's "function": David Hilbert's axiomatization of mathematics (1904–1927)
David Hilbert set himself the goal of "formalizing" classical mathematics "as a formal axiomatic theory, and this theory shall be proved to be consistent, i.e., free from contradiction" .[63] In Hilbert 1927 The Foundations of Mathematics he frames the notion of function in terms of the existence of an "object":
13. A(a) --> A(ε(A)) Here ε(A) stands for an object of which the proposition A(a) certainly holds if it holds of any object at all; let us call ε the logical ε-function".[64] [The arrow indicates "implies".]
Hilbert then illustrates the three ways how the ε-function is to be used, firstly as the "for all" and "there exists" notions, secondly to represent the "object of which [a proposition] holds", and lastly how to cast it into the choice function.
Recursion theory and computability: But the unexpected outcome of Hilbert's and his student Bernays's effort was failure; see Gödel's incompleteness theorems of 1931. At about the same time, in an effort to solve Hilbert's Entscheidungsproblem, mathematicians set about to define what was meant by an "effectively calculable function" (Alonzo Church 1936), i.e., "effective method" or "algorithm", that is, an explicit, step-by-step procedure that would succeed in computing a function. Various models for algorithms appeared, in rapid succession, including Church's lambda calculus (1936), Stephen Kleene's μ-recursive functions(1936) and Alan Turing's (1936–7) notion of replacing human "computers" with utterly-mechanical "computing machines" (see Turing machines). It was shown that all of these models could compute the same class of computable functions. Church's thesis holds that this class of functions exhausts all the number-theoretic functions that can be calculated by an algorithm. The outcomes of these efforts were vivid demonstrations that, in Turing's words, "there can be no general process for determining whether a given formula U of the functional calculus K [Principia Mathematica] is provable";[65] see more at Independence (mathematical logic) and Computability theory.
## Development of the set-theoretic definition of "function"
Set theory began with the work of the logicians with the notion of "class" (modern "set") for example De Morgan (1847), Jevons (1880), Venn (1881), Frege (1879) and Peano (1889). It was given a push by Georg Cantor's attempt to define the infinite in set-theoretic treatment (1870–1890) and a subsequent discovery of an antinomy (contradiction, paradox) in this treatment (Cantor's paradox), by Russell's discovery (1902) of an antinomy in Frege's 1879 (Russell's paradox), by the discovery of more antinomies in the early 20th century (e.g., the 1897 Burali-Forti paradox and the 1905 Richard paradox), and by resistance to Russell's complex treatment of logic[66] and dislike of his axiom of reducibility[67] (1908, 1910–1913) that he proposed as a means to evade the antinomies.
In 1902 Russell sent a letter to Frege pointing out that Frege's 1879 Begriffsschrift allowed a function to be an argument of itself: "On the other hand, it may also be that the argument is determinate and the function indeterminate . . .."[68] From this unconstrained situation Russell was able to form a paradox:
"You state ... that a function, too, can act as the indeterminate element. This I formerly believed, but now this view seems doubtful to me because of the following contradiction. Let w be the predicate: to be a predicate that cannot be predicated of itself. Can w be predicated of itself?"[69]
Frege responded promptly that "Your discovery of the contradiction caused me the greatest surprise and, I would almost say, consternation, since it has shaken the basis on which I intended to build arithmetic".[70]
From this point forward development of the foundations of mathematics became an exercise in how to dodge "Russell's paradox", framed as it was in "the bare [set-theoretic] notions of set and element".[71]
### Zermelo's set theory (1908) modified by Skolem (1922)
The notion of "function" appears as Zermelo's axiom III—the Axiom of Separation (Axiom der Aussonderung). This axiom constrains us to use a propositional function Φ(x) to "separate" a subset MΦ from a previously formed set M:
"AXIOM III. (Axiom of separation). Whenever the propositional function Φ(x) is definite for all elements of a set M, M possesses a subset MΦ containing as elements precisely those elements x of M for which Φ(x) is true".[72]
As there is no universal set—sets originate by way of Axiom II from elements of (non-set) domain B – "...this disposes of the Russell antinomy so far as we are concerned".[73] But Zermelo's "definite criterion" is imprecise, and is fixed by Weyl, Fraenkel, Skolem, and von Neumann.[74]
In fact Skolem in his 1922 referred to this "definite criterion" or "property" as a "definite proposition":
"... a finite expression constructed from elementary propositions of the form a ε b or a = b by means of the five operations [logical conjunction, disjunction, negation, universal quantification, and existential quantification].[75]
van Heijenoort summarizes:
"A property is definite in Skolem's sense if it is expressed . . . by a well-formed formula in the simple predicate calculus of first order in which the sole predicate constants are ε and possibly, =. ... Today an axiomatization of set theory is usually embedded in a logical calculus, and it is Weyl's and Skolem's approach to the formulation of the axiom of separation that is generally adopted.[76]
In this quote the reader may observe a shift in terminology: nowhere is mentioned the notion of "propositional function", but rather one sees the words "formula", "predicate calculus", "predicate", and "logical calculus." This shift in terminology is discussed more in the section that covers "function" in contemporary set theory.
### The Wiener–Hausdorff–Kuratowski "ordered pair" definition 1914–1921
The history of the notion of "ordered pair" is not clear. As noted above, Frege (1879) proposed an intuitive ordering in his definition of a two-argument function Ψ(A, B). Norbert Wiener in his 1914 (see below) observes that his own treatment essentially "revert(s) to Schröder's treatment of a relation as a class of ordered couples".[77] Russell (1903) considered the definition of a relation (such as Ψ(A, B)) as a "class of couples" but rejected it:
"There is a temptation to regard a relation as definable in extension as a class of couples. This is the formal advantage that it avoids the necessity for the primitive proposition asserting that every couple has a relation holding between no other pairs of terms. But it is necessary to give sense to the couple, to distinguish the referent [domain] from the relatum [converse domain]: thus a couple becomes essentially distinct from a class of two terms, and must itself be introduced as a primitive idea. . . . It seems therefore more correct to take an intensional view of relations, and to identify them rather with class-concepts than with classes."[78]
By 1910–1913 and Principia Mathematica Russell had given up on the requirement for an intensional definition of a relation, stating that "mathematics is always concerned with extensions rather than intensions" and "Relations, like classes, are to be taken in extension".[79] To demonstrate the notion of a relation in extension Russell now embraced the notion of ordered couple: "We may regard a relation ... as a class of couples ... the relation determined by φ(x, y) is the class of couples (x, y) for which φ(x, y) is true". [80] In a footnote he clarified his notion and arrived at this definition:
"Such a couple has a sense, i.e., the couple (x, y) is different from the couple (y, x) unless x = y. We shall call it a "couple with sense," ... it may also be called an ordered couple. [80]
But he goes on to say that he would not introduce the ordered couples further into his "symbolic treatment"; he proposes his "matrix" and his unpopular axiom of reducibility in their place.
An attempt to solve the problem of the antinomies led Russell to propose his "doctrine of types" in an appendix B of his 1903 The Principles of Mathematics.[81] In a few years he would refine this notion and propose in his 1908 The Theory of Types two axioms of reducibility, the purpose of which were to reduce (single-variable) propositional functions and (dual-variable) relations to a "lower" form (and ultimately into a completely extensional form); he and Alfred North Whitehead would carry this treatment over to Principia Mathematica 1910–1913 with a further refinement called "a matrix".[82] The first axiom is *12.1; the second is *12.11. To quote Wiener the second axiom *12.11 "is involved only in the theory of relations".[83] Both axioms, however, were met with skepticism and resistance; see more at Axiom of reducibility. By 1914 Norbert Wiener, using Whitehead and Russell's symbolism, eliminated axiom *12.11 (the "two-variable" (relational) version of the axiom of reducibility) by expressing a relation as an ordered pair "using the null set. At approximately the same time, Hausdorff (1914, p. 32) gave the definition of the ordered pair (a, b) as { {a,1}, {b, 2} }. A few years later Kuratowski (1921) offered a definition that has been widely used ever since, namely { {a, b}, {a} }".[84] As noted by Suppes (1960) "This definition . . . was historically important in reducing the theory of relations to the theory of sets.[85]
Observe that while Wiener "reduced" the relational *12.11 form of the axiom of reducibility he did not reduce nor otherwise change the propositional-function form *12.1; indeed he declared this "essential to the treatment of identity, descriptions, classes and relations".[86]
### Schönfinkel's notion of "function" as a many-one "correspondence" 1924
Where exactly the general notion of "function" as a many-one correspondence derives from is unclear. Russell in his 1920 Introduction to Mathematical Philosophy states that "It should be observed that all mathematical functions result form one-many [sic – contemporary usage is many-one] relations . . . Functions in this sense are descriptive functions".[87] A reasonable possibility is the Principia Mathematica notion of "descriptive function" – R 'y =DEFx)(x R y): "the singular object that has a relation R to y". Whatever the case, by 1924, Moses Schonfinkel expressed the notion, claiming it to be "well known":
"As is well known, by function we mean in the simplest case a correspondence between the elements of some domain of quantities, the argument domain, and those of a domain of function values ... such that to each argument value there corresponds at most one function value".[88]
According to Willard Quine, Schönfinkel 1924 "provide[s] for ... the whole sweep of abstract set theory. The crux of the matter is that Schönfinkel lets functions stand as arguments. For Schönfinkel, substantially as for Frege, classes are special sorts of functions. They are propositional functions, functions whose values are truth values. All functions, propositional and otherwise, are for Schönfinkel one-place functions".[89] Remarkably, Schönfinkel reduces all mathematics to an extremely compact functional calculus consisting of only three functions: Constancy, fusion (i.e., composition), and mutual exclusivity. Quine notes that Haskell Curry (1958) carried this work forward "under the head of combinatory logic".[90]
### Von Neumann's set theory 1925
By 1925 Abraham Fraenkel (1922) and Thoralf Skolem (1922) had amended Zermelo's set theory of 1908. But von Neumann was not convinced that this axiomatization could not lead to the antinomies.[91] So he proposed his own theory, his 1925 An axiomatization of set theory.[92] It explicitly contains a "contemporary", set-theoretic version of the notion of "function":
"[Unlike Zermelo's set theory] [w]e prefer, however, to axiomatize not "set" but "function". The latter notion certainly includes the former. (More precisely, the two notions are completely equivalent, since a function can be regarded as a set of pairs, and a set as a function that can take two values.)".[93]
At the outset he begins with I-objects and II-objects, two objects A and B that are I-objects (first axiom), and two types of "operations" that assume ordering as a structural property[94] obtained of the resulting objects [x, y] and (x, y). The two "domains of objects" are called "arguments" (I-objects) and "functions" (II-objects); where they overlap are the "argument functions" (he calls them I-II objects). He introduces two "universal two-variable operations" – (i) the operation [x, y]: ". . . read 'the value of the function x for the argument y . . . it itself is a type I object", and (ii) the operation (x, y): ". . . (read 'the ordered pair x, y') whose variables x and y must both be arguments and that itself produces an argument (x, y). Its most important property is that x1 = x2 and y1 = y2 follow from (x1 = y2) = (x2 = y2)". To clarify the function pair he notes that "Instead of f(x) we write [f,x] to indicate that f, just like x, is to be regarded as a variable in this procedure". To avoid the "antinomies of naive set theory, in Russell's first of all . . . we must forgo treating certain functions as arguments".[95] He adopts a notion from Zermelo to restrict these "certain functions".[96]
Suppes[97] observes that von Neumann's axiomatization was modified by Bernays "in order to remain nearer to the original Zermelo system . . . He introduced two membership relations: one between sets, and one between sets and classes". Then Gödel [1940][98] further modified the theory: "his primitive notions are those of set, class and membership (although membership alone is sufficient)".[99] This axiomatization is now known as von Neumann-Bernays-Gödel set theory.
### Bourbaki 1939
In 1939, Bourbaki, in addition to giving the well-known ordered pair definition of a function as a certain subset of the cartesian product E x F, gave the following:
"Let E and F be two sets, which may or may not be distinct. A relation between a variable element x of E and a variable element y of F is called a functional relation in y if, for all x ∈ E, there exists a unique y ∈ F which is in the given relation with x. "We give the name of function to the operation which in this way associates with every element x ∈ E the element y ∈ F which is in the given relation with x, and the function is said to be determined by the given functional relation. Two equivalent functional relations determine the same function."
## Since 1950
### Notion of "function" in contemporary set theory
Both axiomatic and naive forms of Zermelo's set theory as modified by Fraenkel (1922) and Skolem (1922) define "function" as a relation, define a relation as a set of ordered pairs, and define an ordered pair as a set of two "dissymetric" sets.
While the reader of Suppes (1960) Axiomatic Set Theory or Halmos (1970) Naive Set Theory observes the use of function-symbolism in the axiom of separation, e.g., φ(x) (in Suppes) and S(x) (in Halmos), they will see no mention of "proposition" or even "first order predicate calculus". In their place are "expressions of the object language", "atomic formulae", "primitive formulae", and "atomic sentences".
Kleene (1952) defines the words as follows: "In word languages, a proposition is expressed by a sentence. Then a 'predicate' is expressed by an incomplete sentence or sentence skeleton containing an open place. For example, "___ is a man" expresses a predicate ... The predicate is a propositional function of one variable. Predicates are often called 'properties' ... The predicate calculus will treat of the logic of predicates in this general sense of 'predicate', i.e., as propositional function".[100]
In 1954, Bourbaki, on p. 76 in Chapitre II of Theorie des Ensembles (theory of sets), gave a definition of a function as a triple f = (F, A, B).[101] Here F is a functional graph, meaning a set of pairs where no two pairs have the same first member. On p. 77 (op. cit.) Bourbaki states (literal translation): "Often we shall use, in the remainder of this Treatise, the word function instead of functional graph."
Suppes (1960) in Axiomatic Set Theory, formally defines a relation (p. 57) as a set of pairs, and a function (p. 86) as a relation where no two pairs have the same first member.
### Relational form of a function
The reason for the disappearance of the words "propositional function" e.g., in Suppes (1960), and Halmos (1970), is explained by Tarski (1946) together with further explanation of the terminology:
"An expression such as x is an integer, which contains variables and, on replacement of these variables by constants becomes a sentence, is called a SENTENTIAL [i.e., propositional cf his index] FUNCTION. But mathematicians, by the way, are not very fond of this expression, because they use the term "function" with a different meaning. ... sentential functions and sentences composed entirely of mathematical symbols (and not words of everyday language), such as: x + y = 5 are usually referred to by mathematicians as FORMULAE. In place of "sentential function" we shall sometimes simply say "sentence" – but only in cases where there is no danger of any misunderstanding".[102]
For his part Tarski calls the relational form of function a "FUNCTIONAL RELATION or simply a FUNCTION".[103] After a discussion of this "functional relation" he asserts that:
"The concept of a function which we are considering now differs essentially from the concepts of a sentential [propositional] and of a designatory function .... Strictly speaking ... [these] do not belong to the domain of logic or mathematics; they denote certain categories of expressions which serve to compose logical and mathematical statements, but they do not denote things treated of in those statements... . The term "function" in its new sense, on the other hand, is an expression of a purely logical character; it designates a certain type of things dealt with in logic and mathematics."[104]
See more about "truth under an interpretation" at Alfred Tarski.
## Notes
1. ^ Dieudonné 1992, p. 55.
2. ^ "The emergence of a notion of function as an individualized mathematical entity can be traced to the beginnings of infinitesimal calculus". (Ponte 1992)
3. ^ "...although we do not find in [the mathematicians of Ancient Greece] the idea of functional dependence distinguished in explicit form as a comparatively independent object of study, nevertheless one cannot help noticing the large stock of functional correspondences they studied." (Medvedev 1991, pp. 29–30)
4. ^
5. ^ Gardiner 1982, p. 255.
6. ^ Gardiner 1982, p. 256.
7. ^ Kleiner, Israel (2009). "Evolution of the Function Concept: A Brief Survey". In Marlow Anderson; Victor Katz; Robin Wilson. Who Gave You the Epsilon?: And Other Tales of Mathematical History. MAA. pp. 14–26. ISBN 978-0-88385-569-0.
8. ^ Eves dates Leibniz's first use to the year 1694 and also similarly relates the usage to "as a term to denote any quantity connected with a curve, such as the coordinates of a point on the curve, the slope of the curve, and so on" (Eves 1990, p. 234).
9. ^ N. Bourbaki (18 September 2003). Elements of Mathematics Functions of a Real Variable: Elementary Theory. Springer Science & Business Media. pp. 154–. ISBN 978-3-540-65340-0.
10. ^ Eves 1990, p. 234.
11. ^ Eves 1990, p. 235.
12. ^ Eves 1990, p. 235
13. ^ Euler 1988, p. 3.
14. ^ Euler 2000, p. VI.
15. ^ Medvedev 1991, p. 47.
16. ^ Edwards 2007, p. 47.
17. ^
18. ^ Contemporary mathematicians, with much broader and more precise conceptions of functions, integration, and different notions of convergence than was possible in Fourier's time (including examples of functions that were regarded as pathological and referred to as "monsters" until as late as the turn of the 20th century), would not agree with Fourier that a completely arbitrary function can be expanded in Fourier series, even if its Fourier coefficients are well-defined. For example, Kolmogorov (1922) constructed a Lebesgue integrable function whose Fourier series diverges pointwise almost everywhere. Nevertheless, a very wide class of functions can be expanded in Fourier series, especially if one allows weaker forms of convergence, such as convergence in the sense of distributions. Thus, Fourier's claim was a reasonable one in the context of his time.
19. ^ For example: "A general function f(x) is a sequence of values or ordinates, each of which is arbitrary...It is by no means assumed that these ordinates are subject to any general law; they may follow one another in a completely arbitrary manner, and each of them is defined as if it were a unique quantity." (Fourier 1822, p. 552)
20. ^ Luzin 1998, p. 263. Translation by Abe Shenitzer of an article by Luzin that appeared (in the 1930s) in the first edition of The Great Soviet Encyclopedia
21. ^ Smithies 1997, p. 187.
22. ^ "On the vanishing of trigonometric series," 1834 (Lobachevsky 1951, pp. 31–80).
23. ^ Über die Darstellung ganz willkürlicher Funktionen durch Sinus- und Cosinusreihen," 1837 (Dirichlet 1889, pp. 135–160).
24. ^ Lobachevsky 1951, p. 43 as quoted in Medvedev 1991, p. 58.
25. ^ Dirichlet 1889, p. 135 as quoted in Medvedev 1991, pp. 60–61.
26. ^ Eves asserts that Dirichlet "arrived at the following formulation: "[The notion of] a variable is a symbol that represents any one of a set of numbers; if two variables x and y are so related that whenever a value is assigned to x there is automatically assigned, by some rule or correspondence, a value to y, then we say y is a (single-valued) function of x. The variable x . . . is called the independent variable and the variable y is called the dependent variable. The permissible values that x may assume constitute the domain of definition of the function, and the values taken on by y constitute the range of values of the function . . . it stresses the basic idea of a relationship between two sets of numbers" Eves 1990, p. 235
27. ^ Lakatos, Imre (1976). Worrall, John; Zahar, Elie, eds. Proofs and Refutations. Cambridge: Cambridge University Press. p. 151. ISBN 0-521-29038-4. Published posthumously.
28. ^ Gardiner, A. (1982). Understanding infinity,the mathematics of infinite processes. Courier Dover Publications. p. 275. ISBN 0-486-42538-X.
29. ^ Lavine 1994, p. 34.
30. ^ See Medvedev 1991, pp. 55–70 for further discussion.
31. ^ "By a mapping φ of a set S we understand a law that assigns to each element s of S a uniquely determined object called the image of s, denoted as φ(s). Dedekind 1995, p. 9
32. ^ Dieudonné 1992, p. 135.
33. ^ De Morgan 1847, p. 1.
34. ^ Boole 1848 in Grattan-Guinness & Bornet 1997, pp. 1, 2
35. ^ Boole 1848 in Grattan-Guinness & Bornet 1997, p. 6
36. ^ Boole circa 1849 Elementary Treatise on Logic not mathematical including philosophy of mathematical reasoning in Grattan-Guinness & Bornet 1997, p. 40
37. ^ Eves 1990, p. 222.
38. ^ Some of this criticism is intense: see the introduction by Willard Quine preceding Russell 1908a Mathematical logic as based on the theory of types in van Heijenoort 1967, p. 151. See also in von Neumann 1925 the introduction to his Axiomatization of Set Theory in van Heijenoort 1967, p. 395
39. ^ Boole 1854, p. 86.
40. ^ cf Boole 1854, pp. 31–34. Boole discusses this "special law" with its two algebraic roots x = 0 or 1, on page 37.
41. ^ Although he gives others credit, cf Venn 1881, p. 6
42. ^ Venn 1881, pp. 86–87.
43. ^ cf van Heijenoort's introduction to Peano 1889 in van Heijenoort 1967. For most of his logical symbolism and notions of propositions Peano credits "many writers, especially Boole". In footnote 1 he credits Boole 1847, 1848, 1854, Schröder 1877, Peirce 1880, Jevons 1883, MacColl 1877, 1878, 1878a, 1880; cf van Heijenoort 1967, p. 86).
44. ^ Frege 1879 in van Heijenoort 1967, p. 7
45. ^ Frege's exact words are "expressed in our formula language" and "expression", cf Frege 1879 in van Heijenoort 1967, pp. 21–22.
46. ^ This example is from Frege 1879 in van Heijenoort 1967, pp. 21–22
47. ^ Frege 1879 in van Heijenoort 1967, pp. 21–22
48. ^ Frege cautions that the function will have "argument places" where the argument should be placed as distinct from other places where the same sign might appear. But he does not go deeper into how to signify these positions and Russell 1903 observes this.
49. ^ Frege 1879 in van Heijenoort 1967, pp. 21–24
50. ^ "...Peano intends to cover much more ground than Frege does in his Begriffsschrift and his subsequent works, but he does not till that ground to any depth comparable to what Frege does in his self-allotted field", van Heijenoort 1967, p. 85
51. ^ van Heijenoort 1967, p. 89.
52. ^ a b van Heijenoort 1967, p. 91.
53. ^ All symbols used here are from Peano 1889 in van Heijenoort 1967, p. 91).
54. ^ "In Mathematics, my chief obligations, as is indeed evident, are to Georg Cantor and Professor Peano. If I had become acquainted sooner with the work of Professor Frege, I should have owed a great deal to him, but as it is I arrived independently at many results which he had already established", Russell 1903, p. viii. He also highlights Boole's 1854 Laws of Thought and Ernst Schröder's three volumes of "non-Peanesque methods" 1890, 1891, and 1895 cf Russell 1903, p. 10
55. ^ a b c Russell 1903, p. 505.
56. ^ Russell 1903, pp. 5–6.
57. ^ Russell 1903, p. 7.
58. ^ Russell 1903, p. 19.
59. ^ Russell 1910–1913:15
60. ^ Whitehead and Russell 1910–1913:6, 8 respectively
61. ^ Something similar appears in Tarski 1946. Tarski refers to a "relational function" as a "ONE-MANY [sic!] or FUNCTIONAL RELATION or simply a FUNCTION". Tarski comments about this reversal of variables on page 99.
62. ^ Whitehead and Russell 1910–1913:31. This paper is important enough that van Heijenoort reprinted it as Whitehead & Russell 1910 Incomplete symbols: Descriptions with commentary by W. V. Quine in van Heijenoort 1967, pp. 216–223
63. ^ Kleene 1952, p. 53.
64. ^ Hilbert in van Heijenoort 1967, p. 466
65. ^ Turing 1936–7 in Davis, Martin (1965). The undecidable: basic papers on undecidable propositions, unsolvable problems and computable functions. Courier Dover Publications. p. 145. ISBN 978-0-486-43228-1.
66. ^ Kleene 1952, p. 45.
67. ^ "The nonprimitive and arbitrary character of this axiom drew forth severe criticism, and much of subsequent refinement of the logistic program lies in attempts to devise some method of avoiding the disliked axiom of reducibility" Eves 1990, p. 268.
68. ^ Frege 1879 in van Heijenoort 1967, p. 23
69. ^ Russell (1902) Letter to Frege in van Heijenoort 1967, p. 124
70. ^ Frege (1902) Letter to Russell in van Heijenoort 1967, p. 127
71. ^ van Heijenoort's commentary to Russell's Letter to Frege in van Heijenoort 1967, p. 124
72. ^ The original uses an Old High German symbol in place of Φ cf Zermelo 1908a in van Heijenoort 1967, p. 202
73. ^ Zermelo 1908a in van Heijenoort 1967, p. 203
74. ^ cf van Heijenoort's commentary before Zermelo 1908 Investigations in the foundations of set theory I in van Heijenoort 1967, p. 199
75. ^ Skolem 1922 in van Heijenoort 1967, pp. 292–293
76. ^ van Heijenoort's introduction to Abraham Fraenkel's The notion "definite" and the independence of the axiom of choice in van Heijenoort 1967, p. 285.
77. ^ But Wiener offers no date or reference cf Wiener 1914 in van Heijenoort 1967, p. 226
78. ^ Russell 1903, p. 99.
79. ^ both quotes from Whitehead & Russell 1913, p. 26
80. ^ a b Whitehead & Russell 1913, p. 26.
81. ^ Russell 1903, pp. 523–529.
82. ^ "*12 The Hierarchy of Types and the axiom of Reducibility". Principia Mathematica. 1913. p. 161.
83. ^ Wiener 1914 in van Heijenoort 1967, p. 224
84. ^ commentary by van Heijenoort preceding Wiener 1914 A simplification of the logic of relations in van Heijenoort 1967, p. 224.
85. ^ Suppes 1960, p. 32. This same point appears in van Heijenoort's commentary before Wiener (1914) in van Heijenoort 1967, p. 224.
86. ^ Wiener 1914 in van Heijenoort 1967, p. 224
87. ^ Russell 1920, p. 46.
88. ^ Schönfinkel (1924) On the building blocks of mathematical logic in van Heijenoort 1967, p. 359
89. ^ commentary by W. V. Quine preceding Schönfinkel (1924) On the building blocks of mathematical logic in van Heijenoort 1967, p. 356.
90. ^ cf Curry and Feys 1958; Quine in van Heijenoort 1967, p. 357.
91. ^ von Neumann's critique of the history observes the split between the logicists (e.g., Russell et. al.) and the set-theorists (e.g., Zermelo et. al.) and the formalists (e.g., Hilbert), cf von Neumann 1925 in van Heijenoort 1967, pp. 394–396.
92. ^ In addition to the 1925 appearance in van Heijenoort, Suppes 1970:12 cites two more: 1928a and 1929.
93. ^ von Neumann 1925 in van Heijenoort 1967, p. 396
94. ^ In his 1930-1931 The Philosophy of Mathematics and Hilbert's Proof Theory Bernays asserts (in the context of rebutting Logicism's construction of the numbers from logical axioms) that "the Number concept turns out to be an elementary structural concept". This paper appears on page 243 in Paolo Mancosu 1998 From Brouwer to Hilbert, Oxford University Press, NY, ISBN 0-19-509632-0.
95. ^ All quotes from von Neumann 1925 in van Heijenoort 1967, pp. 396–398
96. ^ This notion is not easy to summarize; see more at van Heijenoort 1967, p. 397.
97. ^ See also van Heijenoort's introduction to von Neumann's paper on pages 393-394.
98. ^ cf in particular p. 35 where Gödel declares his primitive notions to be class, set, and "the diadic relation ε between class and class, class and set, set and class, or set and set". Gödel 1940 The consistency of the axiom of choice and of the generalized continuum hypothesis with the axioms of set theory appearing on pages 33ff in Volume II of Kurt Godel Collected Works, Oxford University Press, NY, ISBN 0-19-514721-9 (v.2, pbk).
99. ^ All quotes from Suppes 1960, p. 12 footnote. He also references "a paper by R. M. Robinson [1937] [that] provides a simplified system close to von Neumann's original one".
100. ^ Kleene 1952, pp. 143–145.
101. ^ N.Bourbaki (1954). Elements de Mathematique,Theorie des Ensembles. Hermann & cie. p. 76.
102. ^ Tarski 1946, p. 5.
103. ^ Tarski 1946, p. 98.
104. ^ Tarski 1946, p. 102. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8507050275802612, "perplexity": 1476.7615383858163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00344-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://blog.easychipamp.com/2020/11/audio-amplifier-feedback-basics-part-1.html | ## Friday, November 13, 2020
### Audio Amplifier Feedback - Basics
This post is a part of the series on audio amplifier feedback. The contents of the series can be found here.
To make sure everyone is on the same page, here is a super simplified feedback loop that can be found in an audio amplifier:
In the diagram:
• $A$ is the gain of the amplifier with no feedback applied (its open loop gain)
• $B$ is the gain of the feedback network (typically, the feedback network attenuates the signal, so $|B|<1$)
• $x$ is the input signal
• $\epsilon$ is the error (noise, distortion) that the amplifier adds to the signal
• $y$ is the resulting output signal
That is, the amplifier receives the input signal $x$, amplifies it by $A$, adds some noise and distortion $\epsilon$, resulting in the output signal $y$. A portion $B$ of the output signal $y$ is fed back to the input by adding it to the input signal (hence 'feedback').
Working from the right side of the diagram to the left, we can write:
$$y=\epsilon+A(x+y*B)$$
Solving for $y$:
$$y={A \over (1-A*B)}*x+{1 \over (1-A*B)}*\epsilon$$
The output signal $y$ has two components:
• Input signal $x$ amplified by ${A \over (1-A*B)}$
• Distortion $\epsilon$ amplified by ${1 \over (1-A*B)}$
Let us call the input signal amplification factor ${A \over (1-A*B)}$ the Signal Transfer Function, or $STF$, and distortion amplification factor ${1 \over (1-A*B)}$, the Error Transfer Function, or $ETF$:
$$STF={A \over (1-A*B)}$$ $$ETF={1 \over (1-A*B)}$$
The promise of feedback is that, as the open loop gain $A$ increases, the $ETF$ approaches zero, while the $STF$ approaches $-{1 \over B}$. In other words, the contribution of noise and distortion in the output signal becomes small, and the closed loop gain of the amplifier becomes independent of the amplifier's open loop gain $A$.
The sum of the input signal $x$ and the feedback $y*B$ that the amplifier sees at its input is normally small and gets smaller as $A$ increases: $$x+y*B={1 \over (1-A*B)}*x+{B \over (1-A*B)}*\epsilon$$
The quantity $A*B$ is called loop gain, and is the correct term for the "amount of feedback". We can rewrite $STF$ and $ETF$ with the loop gain $LG$:
$$STF={LG \over (1-LG)}*{1 \over B}$$ $$ETF={1 \over (1-LG)}$$
In the next post, I will show how the above math applies to an opamp. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8833431005477905, "perplexity": 609.6249190264633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991693.14/warc/CC-MAIN-20210512004850-20210512034850-00529.warc.gz"} |
https://encyclopediaofmath.org/wiki/Fibration | # Fibre space
(Redirected from Fibration)
An object $(X,\pi,B)$, where $\pi: X \to B$ is a continuous surjective mapping of a topological space $X$ onto a topological space $B$ (i.e., a fibration). Note that $X$, $B$ and $\pi$ are also called the total space, the base space and the projection of the fibre space, respectively, and ${\pi^{\leftarrow}}[\{ b \}]$ is called the fibre above $b$. A fibre space can be regarded as the union of the fibres ${\pi^{\leftarrow}}[\{ b \}]$, parametrized by the base space $B$ and glued by the topology of $X$. For example, there is the product $\pi: B \times F \to B$, where $\pi$ is the projection onto the first factor; the fibration-base $\pi: B \to B$, where $\pi = \operatorname{id}$ and $X$ is identified with $B$; and the fibre space over a point, where $X$ is identified with a (unique) space $F$.
A section of a fibration (fibre space) is a continuous mapping $s: B \to X$ such that $\pi \circ s = \operatorname{id}_{B}$.
The restriction of a fibration (fibre space) $\pi: X \to B$ to a subset $A \subseteq B$ is the fibration $\pi': X' \to A$, where $X' \stackrel{\text{df}}{=} {\pi^{\leftarrow}}[A]$ and $\pi' \stackrel{\text{df}}{=} \pi|_{X'}$. A generalization of the operation of restriction is the construction of an induced fibre bundle.
A mapping $F: X \to X_{1}$ is called a morphism of a fibre space $\pi: X \to B$ into a fibre space $\pi_{1}: X_{1} \to B_{1}$ if and only if it maps fibres into fibres, i.e., if for each point $b \in B$, there exists a point $b_{1} \in B_{1}$ such that $F[{\pi^{\leftarrow}}[\{ b \}]] \subseteq {\pi^{\leftarrow}}[\{ b_{1} \}]$. Such an $F$ determines a mapping $f: B \to B_{1}$, given by $f(b) \stackrel{\text{df}}{=} (\pi \circ F)[{\pi^{\leftarrow}}[\{ b \}]]$. Note that $F$ is a covering of $f$ and that $\pi_{1} \circ F = f \circ \pi$; the restrictions $F_{b}: {\pi^{\leftarrow}}[\{ b \}] \to {\pi_{1}^{\leftarrow}}[\{ b_{1} \}]$ are mappings of fibres. If $B = B_{1}$ and $f = \operatorname{id}$, then $F$ is called a $B$-morphism. Fibre spaces with their morphisms form a category — one that contains fibre spaces over $B$ with their $B$-morphisms as a subcategory.
Any section of a fibration $\pi: X \to B$ is a fibre-space $B$-morphism $s: B \to X$ from $(B,\operatorname{id},B)$ into $(X,\pi,B)$. If $A \subseteq B$, then the canonical imbedding $i: {\pi^{\leftarrow}}[A] \to B$ is a fibre-space morphism from $\pi|_{A}$ to $\pi$.
When $F$ is a homeomorphism, it is called a fibre-space isomorphism. A fibre space isomorphic to a product is called a trivial fibre space. An isomorphism $\theta: X \to B \times F$ is called a trivialization of $\pi$.
If each fibre ${\pi^{\leftarrow}}[\{ b \}]$ is homeomorphic to a space $F$, then $\pi$ is called a fibration with fibre $F$. For example, in any locally trivial fibre space over a connected base space $B$, all the fibres ${\pi^{\leftarrow}}[\{ b \}]$ are homeomorphic to one another, and one can take $F$ to be any ${\pi^{\leftarrow}}[\{ b_{0} \}]$; this determines homeomorphisms $\phi_{b}: F \to {\pi^{\leftarrow}}[\{ b \}]$.
Both the notations $\pi: X \to B$ and $(X,\pi,B)$ are used to denote a fibration, a fibre space or a fibre bundle.
In the West, a mapping $\pi: X \to B$ would only be called a fibration if it satisfied some suitable condition, for example, the homotopy lifting property for cubes (such a fibration is known as a Serre fibration; see Covering homotopy for the homotopy lifting property ([a3])). A mapping $F: X \to X_{1}$ would be called a morphism (respectively, an isomorphism) only if the induced function $f: B \to B_{1}$ were continuous (respectively, a homeomorphism). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9917269349098206, "perplexity": 128.88909464125337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708010.98/warc/CC-MAIN-20221126144448-20221126174448-00653.warc.gz"} |
https://earthscience.stackexchange.com/questions/4982/whitecapping-in-ocean-surface-waves/4984 | # Whitecapping in ocean surface waves
Even though the physics of wave breaking for ocean surface waves may not be well understood, what wave breaking is and what it looks like is no mystery to the average beach-goer. However, I am confused to as what "whitecapping" is, in the context of "whitecapping dissipation" of waves that appears in literature.
1. What is whitecapping? Is it the white "stuff" that is created when a wave break?
2. If so, what causes the white "stuff"? Is it the same as "foam", a term that is found in the literature as well?
3. Are the terms "whitecapping" and "wave breaking" synonymously used? Are they in fact one and the same?
Relevant references would also be appreciated. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9194595813751221, "perplexity": 1201.5895389051286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989756.81/warc/CC-MAIN-20210518063944-20210518093944-00052.warc.gz"} |
http://mathhelpforum.com/algebra/33831-system.html | 1. ## system
I go to the first class of lyceum and I have to solve the systems in algebra:
x+y=-1
2x+2y=-2
and another one:
x+2y=13
3x+y=4
Thanks a lot.
Pigaino stin proti lukeiou kai prepei na luso ta sustimata stin algebra:
x+y=-1
2x+2y=-2
kai akomi ena:
x+2y=13
3x+y=4
Parakalo boithiste me sta mathimatika.
Euxaristo polu.
2. Hello,
x+y=-1
2x+2y=-2
There is an infinity of solutions for this one as one is the double of the other...
x+2y=13
3x+y=4
from the first one, we can say that x=13-2y
thus 3(13-2y)+y=4
39-6y+y=4
-5y=-35
y=7
x=13-2*7=-1
(-1,7) is solution
3. Thanks.
And what about the other system?
x+y=-1
2x+2y=-2
4. Originally Posted by gdespina
Thanks.
And what about the other system?
x+y=-1
2x+2y=-2
Originally Posted by Moo
Hello,
x+y=-1
2x+2y=-2
There is an infinity of solutions for this one as one is the double of the other...
-Dan
5. There is an infinity of solutions for this one as one is the double of the other...
x=-1-y
And it works for any couple that verify it.
For example :
(1,-2)
(2,-3)
etc...
6. Yes..
x+y=-1
2x+2y=-2
the first -> x=-1-y
and the second:
-> 2(-1-y)+2y=-2
-2-2y+2y=-2
-2y+2y=-2+2
0=0
is this correct?
how could this system solve?
7. ## Same line
If you divide your second equation by 2, it turns out to be the same as the first equation....thus, the same line. If two lines coincide with each other (one on top of the other), then there are an infinite number of solutions.
Remember to check the slopes of the lines in a system. If the slopes are equal, but the y-intercepts are different, then there are no solutions. The lines are parallel.
Dale | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.892544150352478, "perplexity": 1389.1004954225737}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00539-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://brilliant.org/courses/special-relativity/ | Back
# Special Relativity
## Get up to (light) speed on Einstein's theory of relativity.
Beginning with famous thought experiments and a few intuitive principles from Newtonian mechanics, this course closely follows ... Read more
Beginning with famous thought experiments and a few intuitive principles from Newtonian mechanics, this course closely follows Einstein's arguments leading to the epiphany that time and space are a single entity, spacetime, where physical processes unfold.
After gaining experience with the mathematical machinery of relativistic physics, you will open the door to high-energy phenomena and Einstein's famous relationship, E = mc². In the end this course will boost you to the cusp of the most elegant of physical theories: general relativity.
Show less
× | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8189179301261902, "perplexity": 1869.4380459593856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160853.60/warc/CC-MAIN-20180925004528-20180925024928-00221.warc.gz"} |
http://stacks.math.columbia.edu/tag/01EO | # The Stacks Project
## Tag 01EO
### 20.12. Čech cohomology and cohomology
Lemma 20.12.1. Let $X$ be a ringed space. Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be a covering. Let $\mathcal{I}$ be an injective $\mathcal{O}_X$-module. Then $$\check{H}^p(\mathcal{U}, \mathcal{I}) = \left\{ \begin{matrix} \mathcal{I}(U) & \text{if} & p = 0 \\ 0 & \text{if} & p > 0 \end{matrix} \right.$$
Proof. An injective $\mathcal{O}_X$-module is also injective as an object in the category $\textit{PMod}(\mathcal{O}_X)$ (for example since sheafification is an exact left adjoint to the inclusion functor, using Homology, Lemma 12.26.1). Hence we can apply Lemma 20.11.5 (or its proof) to see the result. $\square$
Lemma 20.12.2. Let $X$ be a ringed space. Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be a covering. There is a transformation $$\check{\mathcal{C}}^\bullet(\mathcal{U}, -) \longrightarrow R\Gamma(U, -)$$ of functors $\textit{Mod}(\mathcal{O}_X) \to D^{+}(\mathcal{O}_X(U))$. In particular this provides canonical maps $\check{H}^p(\mathcal{U}, \mathcal{F}) \to H^p(U, \mathcal{F})$ for $\mathcal{F}$ ranging over $\textit{Mod}(\mathcal{O}_X)$.
Proof. Let $\mathcal{F}$ be an $\mathcal{O}_X$-module. Choose an injective resolution $\mathcal{F} \to \mathcal{I}^\bullet$. Consider the double complex $\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet)$ with terms $\check{\mathcal{C}}^p(\mathcal{U}, \mathcal{I}^q)$. There is a map of complexes $$\alpha : \Gamma(U, \mathcal{I}^\bullet) \longrightarrow \text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet))$$ coming from the maps $\mathcal{I}^q(U) \to \check{H}^0(\mathcal{U}, \mathcal{I}^q)$ and a map of complexes $$\beta : \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}) \longrightarrow \text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet))$$ coming from the map $\mathcal{F} \to \mathcal{I}^0$. We can apply Homology, Lemma 12.22.7 to see that $\alpha$ is a quasi-isomorphism. Namely, Lemma 20.12.1 implies that the $q$th row of the double complex $\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet)$ is a resolution of $\Gamma(U, \mathcal{I}^q)$. Hence $\alpha$ becomes invertible in $D^{+}(\mathcal{O}_X(U))$ and the transformation of the lemma is the composition of $\beta$ followed by the inverse of $\alpha$. We omit the verification that this is functorial. $\square$
Lemma 20.12.3. Let $X$ be a topological space. Let $\mathcal{H}$ be an abelian sheaf on $X$. Let $\mathcal{U} : X = \bigcup_{i \in I} U_i$ be an open covering. The map $$\check{H}^1(\mathcal{U}, \mathcal{H}) \longrightarrow H^1(X, \mathcal{H})$$ is injective and identifies $\check{H}^1(\mathcal{U}, \mathcal{H})$ via the bijection of Lemma 20.5.3 with the set of isomorphism classes of $\mathcal{H}$-torsors which restrict to trivial torsors over each $U_i$.
Proof. To see this we construct an inverse map. Namely, let $\mathcal{F}$ be a $\mathcal{H}$-torsor whose restriction to $U_i$ is trivial. By Lemma 20.5.2 this means there exists a section $s_i \in \mathcal{F}(U_i)$. On $U_{i_0} \cap U_{i_1}$ there is a unique section $s_{i_0i_1}$ of $\mathcal{H}$ such that $s_{i_0i_1} \cdot s_{i_0}|_{U_{i_0} \cap U_{i_1}} = s_{i_1}|_{U_{i_0} \cap U_{i_1}}$. A computation shows that $s_{i_0i_1}$ is a Čech cocycle and that its class is well defined (i.e., does not depend on the choice of the sections $s_i$). The inverse maps the isomorphism class of $\mathcal{F}$ to the cohomology class of the cocycle $(s_{i_0i_1})$. We omit the verification that this map is indeed an inverse. $\square$
Lemma 20.12.4. Let $X$ be a ringed space. Consider the functor $i : \textit{Mod}(\mathcal{O}_X) \to \textit{PMod}(\mathcal{O}_X)$. It is a left exact functor with right derived functors given by $$R^pi(\mathcal{F}) = \underline{H}^p(\mathcal{F}) : U \longmapsto H^p(U, \mathcal{F})$$ see discussion in Section 20.8.
Proof. It is clear that $i$ is left exact. Choose an injective resolution $\mathcal{F} \to \mathcal{I}^\bullet$. By definition $R^pi$ is the $p$th cohomology presheaf of the complex $\mathcal{I}^\bullet$. In other words, the sections of $R^pi(\mathcal{F})$ over an open $U$ are given by $$\frac{\mathop{\rm Ker}(\mathcal{I}^n(U) \to \mathcal{I}^{n + 1}(U))} {\mathop{\rm Im}(\mathcal{I}^{n - 1}(U) \to \mathcal{I}^n(U))}.$$ which is the definition of $H^p(U, \mathcal{F})$. $\square$
Lemma 20.12.5. Let $X$ be a ringed space. Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be a covering. For any sheaf of $\mathcal{O}_X$-modules $\mathcal{F}$ there is a spectral sequence $(E_r, d_r)_{r \geq 0}$ with $$E_2^{p, q} = \check{H}^p(\mathcal{U}, \underline{H}^q(\mathcal{F}))$$ converging to $H^{p + q}(U, \mathcal{F})$. This spectral sequence is functorial in $\mathcal{F}$.
Proof. This is a Grothendieck spectral sequence (see Derived Categories, Lemma 13.22.2) for the functors $$i : \textit{Mod}(\mathcal{O}_X) \to \textit{PMod}(\mathcal{O}_X) \quad\text{and}\quad \check{H}^0(\mathcal{U}, - ) : \textit{PMod}(\mathcal{O}_X) \to \text{Mod}_{\mathcal{O}_X(U)}.$$ Namely, we have $\check{H}^0(\mathcal{U}, i(\mathcal{F})) = \mathcal{F}(U)$ by Lemma 20.10.2. We have that $i(\mathcal{I})$ is Čech acyclic by Lemma 20.12.1. And we have that $\check{H}^p(\mathcal{U}, -) = R^p\check{H}^0(\mathcal{U}, -)$ as functors on $\textit{PMod}(\mathcal{O}_X)$ by Lemma 20.11.5. Putting everything together gives the lemma. $\square$
Lemma 20.12.6. Let $X$ be a ringed space. Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be a covering. Let $\mathcal{F}$ be an $\mathcal{O}_X$-module. Assume that $H^i(U_{i_0 \ldots i_p}, \mathcal{F}) = 0$ for all $i > 0$, all $p \geq 0$ and all $i_0, \ldots, i_p \in I$. Then $\check{H}^p(\mathcal{U}, \mathcal{F}) = H^p(U, \mathcal{F})$ as $\mathcal{O}_X(U)$-modules.
Proof. We will use the spectral sequence of Lemma 20.12.5. The assumptions mean that $E_2^{p, q} = 0$ for all $(p, q)$ with $q \not = 0$. Hence the spectral sequence degenerates at $E_2$ and the result follows. $\square$
Lemma 20.12.7. Let $X$ be a ringed space. Let $$0 \to \mathcal{F} \to \mathcal{G} \to \mathcal{H} \to 0$$ be a short exact sequence of $\mathcal{O}_X$-modules. Let $U \subset X$ be an open subset. If there exists a cofinal system of open coverings $\mathcal{U}$ of $U$ such that $\check{H}^1(\mathcal{U}, \mathcal{F}) = 0$, then the map $\mathcal{G}(U) \to \mathcal{H}(U)$ is surjective.
Proof. Take an element $s \in \mathcal{H}(U)$. Choose an open covering $\mathcal{U} : U = \bigcup_{i \in I} U_i$ such that (a) $\check{H}^1(\mathcal{U}, \mathcal{F}) = 0$ and (b) $s|_{U_i}$ is the image of a section $s_i \in \mathcal{G}(U_i)$. Since we can certainly find a covering such that (b) holds it follows from the assumptions of the lemma that we can find a covering such that (a) and (b) both hold. Consider the sections $$s_{i_0i_1} = s_{i_1}|_{U_{i_0i_1}} - s_{i_0}|_{U_{i_0i_1}}.$$ Since $s_i$ lifts $s$ we see that $s_{i_0i_1} \in \mathcal{F}(U_{i_0i_1})$. By the vanishing of $\check{H}^1(\mathcal{U}, \mathcal{F})$ we can find sections $t_i \in \mathcal{F}(U_i)$ such that $$s_{i_0i_1} = t_{i_1}|_{U_{i_0i_1}} - t_{i_0}|_{U_{i_0i_1}}.$$ Then clearly the sections $s_i - t_i$ satisfy the sheaf condition and glue to a section of $\mathcal{G}$ over $U$ which maps to $s$. Hence we win. $\square$
Lemma 20.12.8. Let $X$ be a ringed space. Let $\mathcal{F}$ be an $\mathcal{O}_X$-module such that $$\check{H}^p(\mathcal{U}, \mathcal{F}) = 0$$ for all $p > 0$ and any open covering $\mathcal{U} : U = \bigcup_{i \in I} U_i$ of an open of $X$. Then $H^p(U, \mathcal{F}) = 0$ for all $p > 0$ and any open $U \subset X$.
Proof. Let $\mathcal{F}$ be a sheaf satisfying the assumption of the lemma. We will indicate this by saying ''$\mathcal{F}$ has vanishing higher Čech cohomology for any open covering''. Choose an embedding $\mathcal{F} \to \mathcal{I}$ into an injective $\mathcal{O}_X$-module. By Lemma 20.12.1 $\mathcal{I}$ has vanishing higher Čech cohomology for any open covering. Let $\mathcal{Q} = \mathcal{I}/\mathcal{F}$ so that we have a short exact sequence $$0 \to \mathcal{F} \to \mathcal{I} \to \mathcal{Q} \to 0.$$ By Lemma 20.12.7 and our assumptions this sequence is actually exact as a sequence of presheaves! In particular we have a long exact sequence of Čech cohomology groups for any open covering $\mathcal{U}$, see Lemma 20.11.2 for example. This implies that $\mathcal{Q}$ is also an $\mathcal{O}_X$-module with vanishing higher Čech cohomology for all open coverings.
Next, we look at the long exact cohomology sequence $$\xymatrix{ 0 \ar[r] & H^0(U, \mathcal{F}) \ar[r] & H^0(U, \mathcal{I}) \ar[r] & H^0(U, \mathcal{Q}) \ar[lld] \\ & H^1(U, \mathcal{F}) \ar[r] & H^1(U, \mathcal{I}) \ar[r] & H^1(U, \mathcal{Q}) \ar[lld] \\ & \ldots & \ldots & \ldots \\ }$$ for any open $U \subset X$. Since $\mathcal{I}$ is injective we have $H^n(U, \mathcal{I}) = 0$ for $n > 0$ (see Derived Categories, Lemma 13.20.4). By the above we see that $H^0(U, \mathcal{I}) \to H^0(U, \mathcal{Q})$ is surjective and hence $H^1(U, \mathcal{F}) = 0$. Since $\mathcal{F}$ was an arbitrary $\mathcal{O}_X$-module with vanishing higher Čech cohomology we conclude that also $H^1(U, \mathcal{Q}) = 0$ since $\mathcal{Q}$ is another of these sheaves (see above). By the long exact sequence this in turn implies that $H^2(U, \mathcal{F}) = 0$. And so on and so forth. $\square$
Lemma 20.12.9. (Variant of Lemma 20.12.8.) Let $X$ be a ringed space. Let $\mathcal{B}$ be a basis for the topology on $X$. Let $\mathcal{F}$ be an $\mathcal{O}_X$-module. Assume there exists a set of open coverings $\text{Cov}$ with the following properties:
1. For every $\mathcal{U} \in \text{Cov}$ with $\mathcal{U} : U = \bigcup_{i \in I} U_i$ we have $U, U_i \in \mathcal{B}$ and every $U_{i_0 \ldots i_p} \in \mathcal{B}$.
2. For every $U \in \mathcal{B}$ the open coverings of $U$ occurring in $\text{Cov}$ is a cofinal system of open coverings of $U$.
3. For every $\mathcal{U} \in \text{Cov}$ we have $\check{H}^p(\mathcal{U}, \mathcal{F}) = 0$ for all $p > 0$.
Then $H^p(U, \mathcal{F}) = 0$ for all $p > 0$ and any $U \in \mathcal{B}$.
Proof. Let $\mathcal{F}$ and $\text{Cov}$ be as in the lemma. We will indicate this by saying ''$\mathcal{F}$ has vanishing higher Čech cohomology for any $\mathcal{U} \in \text{Cov}$''. Choose an embedding $\mathcal{F} \to \mathcal{I}$ into an injective $\mathcal{O}_X$-module. By Lemma 20.12.1 $\mathcal{I}$ has vanishing higher Čech cohomology for any $\mathcal{U} \in \text{Cov}$. Let $\mathcal{Q} = \mathcal{I}/\mathcal{F}$ so that we have a short exact sequence $$0 \to \mathcal{F} \to \mathcal{I} \to \mathcal{Q} \to 0.$$ By Lemma 20.12.7 and our assumption (2) this sequence gives rise to an exact sequence $$0 \to \mathcal{F}(U) \to \mathcal{I}(U) \to \mathcal{Q}(U) \to 0.$$ for every $U \in \mathcal{B}$. Hence for any $\mathcal{U} \in \text{Cov}$ we get a short exact sequence of Čech complexes $$0 \to \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}) \to \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}) \to \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{Q}) \to 0$$ since each term in the Čech complex is made up out of a product of values over elements of $\mathcal{B}$ by assumption (1). In particular we have a long exact sequence of Čech cohomology groups for any open covering $\mathcal{U} \in \text{Cov}$. This implies that $\mathcal{Q}$ is also an $\mathcal{O}_X$-module with vanishing higher Čech cohomology for all $\mathcal{U} \in \text{Cov}$.
Next, we look at the long exact cohomology sequence $$\xymatrix{ 0 \ar[r] & H^0(U, \mathcal{F}) \ar[r] & H^0(U, \mathcal{I}) \ar[r] & H^0(U, \mathcal{Q}) \ar[lld] \\ & H^1(U, \mathcal{F}) \ar[r] & H^1(U, \mathcal{I}) \ar[r] & H^1(U, \mathcal{Q}) \ar[lld] \\ & \ldots & \ldots & \ldots \\ }$$ for any $U \in \mathcal{B}$. Since $\mathcal{I}$ is injective we have $H^n(U, \mathcal{I}) = 0$ for $n > 0$ (see Derived Categories, Lemma 13.20.4). By the above we see that $H^0(U, \mathcal{I}) \to H^0(U, \mathcal{Q})$ is surjective and hence $H^1(U, \mathcal{F}) = 0$. Since $\mathcal{F}$ was an arbitrary $\mathcal{O}_X$-module with vanishing higher Čech cohomology for all $\mathcal{U} \in \text{Cov}$ we conclude that also $H^1(U, \mathcal{Q}) = 0$ since $\mathcal{Q}$ is another of these sheaves (see above). By the long exact sequence this in turn implies that $H^2(U, \mathcal{F}) = 0$. And so on and so forth. $\square$
Lemma 20.12.10. Let $f : X \to Y$ be a morphism of ringed spaces. Let $\mathcal{I}$ be an injective $\mathcal{O}_X$-module. Then
1. $\check{H}^p(\mathcal{V}, f_*\mathcal{I}) = 0$ for all $p > 0$ and any open covering $\mathcal{V} : V = \bigcup_{j \in J} V_j$ of $Y$.
2. $H^p(V, f_*\mathcal{I}) = 0$ for all $p > 0$ and every open $V \subset Y$.
In other words, $f_*\mathcal{I}$ is right acyclic for $\Gamma(V, -)$ (see Derived Categories, Definition 13.16.3) for any $V \subset Y$ open.
Proof. Set $\mathcal{U} : f^{-1}(V) = \bigcup_{j \in J} f^{-1}(V_j)$. It is an open covering of $X$ and $$\check{\mathcal{C}}^\bullet(\mathcal{V}, f_*\mathcal{I}) = \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}).$$ This is true because $$f_*\mathcal{I}(V_{j_0 \ldots j_p}) = \mathcal{I}(f^{-1}(V_{j_0 \ldots j_p})) = \mathcal{I}(f^{-1}(V_{j_0}) \cap \ldots \cap f^{-1}(V_{j_p})) = \mathcal{I}(U_{j_0 \ldots j_p}).$$ Thus the first statement of the lemma follows from Lemma 20.12.1. The second statement follows from the first and Lemma 20.12.8. $\square$
The following lemma implies in particular that $f_* : \textit{Ab}(X) \to \textit{Ab}(Y)$ transforms injective abelian sheaves into injective abelian sheaves.
Lemma 20.12.11. Let $f : X \to Y$ be a morphism of ringed spaces. Assume $f$ is flat. Then $f_*\mathcal{I}$ is an injective $\mathcal{O}_Y$-module for any injective $\mathcal{O}_X$-module $\mathcal{I}$.
Proof. In this case the functor $f^*$ transforms injections into injections (Modules, Lemma 17.18.2). Hence the result follows from Homology, Lemma 12.26.1. $\square$
Lemma 20.12.12. Let $(X, \mathcal{O}_X)$ be a ringed space. Let $I$ be a set. For $i \in I$ let $\mathcal{F}_i$ be an $\mathcal{O}_X$-module. Let $U \subset X$ be open. The canonical map $$H^p(U, \prod\nolimits_{i \in I} \mathcal{F}_i) \longrightarrow \prod\nolimits_{i \in I} H^p(U, \mathcal{F}_i)$$ is an isomorphism for $p = 0$ and injective for $p = 1$.
Proof. The statement for $p = 0$ is true because the product of sheaves is equal to the product of the underlying presheaves, see Sheaves, Section 6.29. Proof for $p = 1$. Set $\mathcal{F} = \prod \mathcal{F}_i$. Let $\xi \in H^1(U, \mathcal{F})$ map to zero in $\prod H^1(U, \mathcal{F}_i)$. By locality of cohomology, see Lemma 20.8.2, there exists an open covering $\mathcal{U} : U = \bigcup U_j$ such that $\xi|_{U_j} = 0$ for all $j$. By Lemma 20.12.3 this means $\xi$ comes from an element $\check \xi \in \check H^1(\mathcal{U}, \mathcal{F})$. Since the maps $\check H^1(\mathcal{U}, \mathcal{F}_i) \to H^1(U, \mathcal{F}_i)$ are injective for all $i$ (by Lemma 20.12.3), and since the image of $\xi$ is zero in $\prod H^1(U, \mathcal{F}_i)$ we see that the image $\check \xi_i = 0$ in $\check H^1(\mathcal{U}, \mathcal{F}_i)$. However, since $\mathcal{F} = \prod \mathcal{F}_i$ we see that $\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})$ is the product of the complexes $\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}_i)$, hence by Homology, Lemma 12.29.1 we conclude that $\check \xi = 0$ as desired. $\square$
The code snippet corresponding to this tag is a part of the file cohomology.tex and is located in lines 1348–1824 (see updates for more information).
\section{{\v C}ech cohomology and cohomology}
\label{section-cech-cohomology-cohomology}
\begin{lemma}
\label{lemma-injective-trivial-cech}
Let $X$ be a ringed space.
Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be a covering.
Let $\mathcal{I}$ be an injective $\mathcal{O}_X$-module.
Then
$$\check{H}^p(\mathcal{U}, \mathcal{I}) = \left\{ \begin{matrix} \mathcal{I}(U) & \text{if} & p = 0 \\ 0 & \text{if} & p > 0 \end{matrix} \right.$$
\end{lemma}
\begin{proof}
An injective $\mathcal{O}_X$-module is also injective as an object in
the category $\textit{PMod}(\mathcal{O}_X)$ (for example since
sheafification is an exact left adjoint to the inclusion functor,
Hence we can apply Lemma \ref{lemma-cech-cohomology-derived-presheaves}
(or its proof) to see the result.
\end{proof}
\begin{lemma}
\label{lemma-cech-cohomology}
Let $X$ be a ringed space.
Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be a covering.
There is a transformation
$$\check{\mathcal{C}}^\bullet(\mathcal{U}, -) \longrightarrow R\Gamma(U, -)$$
of functors
$\textit{Mod}(\mathcal{O}_X) \to D^{+}(\mathcal{O}_X(U))$.
In particular this provides canonical maps
$\check{H}^p(\mathcal{U}, \mathcal{F}) \to H^p(U, \mathcal{F})$ for
$\mathcal{F}$ ranging over $\textit{Mod}(\mathcal{O}_X)$.
\end{lemma}
\begin{proof}
Let $\mathcal{F}$ be an $\mathcal{O}_X$-module. Choose an injective resolution
$\mathcal{F} \to \mathcal{I}^\bullet$. Consider the double complex
$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet)$ with terms
$\check{\mathcal{C}}^p(\mathcal{U}, \mathcal{I}^q)$.
There is a map of complexes
$$\alpha : \Gamma(U, \mathcal{I}^\bullet) \longrightarrow \text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet))$$
coming from the maps
$\mathcal{I}^q(U) \to \check{H}^0(\mathcal{U}, \mathcal{I}^q)$
and a map of complexes
$$\beta : \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}) \longrightarrow \text{Tot}(\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet))$$
coming from the map $\mathcal{F} \to \mathcal{I}^0$.
We can apply
Homology, Lemma \ref{homology-lemma-double-complex-gives-resolution}
to see that $\alpha$ is a quasi-isomorphism.
Namely, Lemma \ref{lemma-injective-trivial-cech} implies that
the $q$th row of the double complex
$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}^\bullet)$ is a
resolution of $\Gamma(U, \mathcal{I}^q)$.
Hence $\alpha$ becomes invertible in $D^{+}(\mathcal{O}_X(U))$ and
the transformation of the lemma is the composition of $\beta$
followed by the inverse of $\alpha$. We omit the verification
that this is functorial.
\end{proof}
\begin{lemma}
\label{lemma-cech-h1}
Let $X$ be a topological space. Let $\mathcal{H}$ be an abelian sheaf
on $X$. Let $\mathcal{U} : X = \bigcup_{i \in I} U_i$ be an open covering.
The map
$$\check{H}^1(\mathcal{U}, \mathcal{H}) \longrightarrow H^1(X, \mathcal{H})$$
is injective and identifies $\check{H}^1(\mathcal{U}, \mathcal{H})$ via
the bijection of Lemma \ref{lemma-torsors-h1}
with the set of isomorphism classes of $\mathcal{H}$-torsors
which restrict to trivial torsors over each $U_i$.
\end{lemma}
\begin{proof}
To see this we construct an inverse map. Namely, let $\mathcal{F}$ be a
$\mathcal{H}$-torsor whose restriction to $U_i$ is trivial. By
Lemma \ref{lemma-trivial-torsor} this means there
exists a section $s_i \in \mathcal{F}(U_i)$. On $U_{i_0} \cap U_{i_1}$
there is a unique section $s_{i_0i_1}$ of $\mathcal{H}$ such that
$s_{i_0i_1} \cdot s_{i_0}|_{U_{i_0} \cap U_{i_1}} = s_{i_1}|_{U_{i_0} \cap U_{i_1}}$. A computation shows
that $s_{i_0i_1}$ is a {\v C}ech cocycle and that its class is well
defined (i.e., does not depend on the choice of the sections $s_i$).
The inverse maps the isomorphism class of $\mathcal{F}$ to the cohomology
class of the cocycle $(s_{i_0i_1})$.
We omit the verification that this map is indeed an inverse.
\end{proof}
\begin{lemma}
\label{lemma-include}
Let $X$ be a ringed space.
Consider the functor
$i : \textit{Mod}(\mathcal{O}_X) \to \textit{PMod}(\mathcal{O}_X)$.
It is a left exact functor with right derived functors given by
$$R^pi(\mathcal{F}) = \underline{H}^p(\mathcal{F}) : U \longmapsto H^p(U, \mathcal{F})$$
see discussion in Section \ref{section-locality}.
\end{lemma}
\begin{proof}
It is clear that $i$ is left exact.
Choose an injective resolution $\mathcal{F} \to \mathcal{I}^\bullet$.
By definition $R^pi$ is the $p$th cohomology {\it presheaf}
of the complex $\mathcal{I}^\bullet$. In other words, the
sections of $R^pi(\mathcal{F})$ over an open $U$ are given by
$$\frac{\Ker(\mathcal{I}^n(U) \to \mathcal{I}^{n + 1}(U))} {\Im(\mathcal{I}^{n - 1}(U) \to \mathcal{I}^n(U))}.$$
which is the definition of $H^p(U, \mathcal{F})$.
\end{proof}
\begin{lemma}
\label{lemma-cech-spectral-sequence}
Let $X$ be a ringed space.
Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be a covering.
For any sheaf of $\mathcal{O}_X$-modules $\mathcal{F}$ there
is a spectral sequence $(E_r, d_r)_{r \geq 0}$ with
$$E_2^{p, q} = \check{H}^p(\mathcal{U}, \underline{H}^q(\mathcal{F}))$$
converging to $H^{p + q}(U, \mathcal{F})$.
This spectral sequence is functorial in $\mathcal{F}$.
\end{lemma}
\begin{proof}
This is a Grothendieck spectral sequence
(see
Derived Categories, Lemma \ref{derived-lemma-grothendieck-spectral-sequence})
for the functors
$$i : \textit{Mod}(\mathcal{O}_X) \to \textit{PMod}(\mathcal{O}_X) \quad\text{and}\quad \check{H}^0(\mathcal{U}, - ) : \textit{PMod}(\mathcal{O}_X) \to \text{Mod}_{\mathcal{O}_X(U)}.$$
Namely, we have $\check{H}^0(\mathcal{U}, i(\mathcal{F})) = \mathcal{F}(U)$
by Lemma \ref{lemma-cech-h0}. We have that $i(\mathcal{I})$ is
{\v C}ech acyclic by Lemma \ref{lemma-injective-trivial-cech}. And we
have that $\check{H}^p(\mathcal{U}, -) = R^p\check{H}^0(\mathcal{U}, -)$
as functors on $\textit{PMod}(\mathcal{O}_X)$
by Lemma \ref{lemma-cech-cohomology-derived-presheaves}.
Putting everything together gives the lemma.
\end{proof}
\begin{lemma}
\label{lemma-cech-spectral-sequence-application}
Let $X$ be a ringed space.
Let $\mathcal{U} : U = \bigcup_{i \in I} U_i$ be a covering.
Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
Assume that $H^i(U_{i_0 \ldots i_p}, \mathcal{F}) = 0$
for all $i > 0$, all $p \geq 0$ and all $i_0, \ldots, i_p \in I$.
Then $\check{H}^p(\mathcal{U}, \mathcal{F}) = H^p(U, \mathcal{F})$
as $\mathcal{O}_X(U)$-modules.
\end{lemma}
\begin{proof}
We will use the spectral sequence of
Lemma \ref{lemma-cech-spectral-sequence}.
The assumptions mean that $E_2^{p, q} = 0$ for all $(p, q)$ with
$q \not = 0$. Hence the spectral sequence degenerates at $E_2$
and the result follows.
\end{proof}
\begin{lemma}
\label{lemma-ses-cech-h1}
Let $X$ be a ringed space.
Let
$$0 \to \mathcal{F} \to \mathcal{G} \to \mathcal{H} \to 0$$
be a short exact sequence of $\mathcal{O}_X$-modules.
Let $U \subset X$ be an open subset.
If there exists a cofinal system of open coverings $\mathcal{U}$
of $U$ such that $\check{H}^1(\mathcal{U}, \mathcal{F}) = 0$,
then the map $\mathcal{G}(U) \to \mathcal{H}(U)$ is
surjective.
\end{lemma}
\begin{proof}
Take an element $s \in \mathcal{H}(U)$. Choose an open covering
$\mathcal{U} : U = \bigcup_{i \in I} U_i$ such that
(a) $\check{H}^1(\mathcal{U}, \mathcal{F}) = 0$ and (b)
$s|_{U_i}$ is the image of a section $s_i \in \mathcal{G}(U_i)$.
Since we can certainly find a covering such that (b) holds
it follows from the assumptions of the lemma that we can find
a covering such that (a) and (b) both hold.
Consider the sections
$$s_{i_0i_1} = s_{i_1}|_{U_{i_0i_1}} - s_{i_0}|_{U_{i_0i_1}}.$$
Since $s_i$ lifts $s$ we see that $s_{i_0i_1} \in \mathcal{F}(U_{i_0i_1})$.
By the vanishing of $\check{H}^1(\mathcal{U}, \mathcal{F})$ we can
find sections $t_i \in \mathcal{F}(U_i)$ such that
$$s_{i_0i_1} = t_{i_1}|_{U_{i_0i_1}} - t_{i_0}|_{U_{i_0i_1}}.$$
Then clearly the sections $s_i - t_i$ satisfy the sheaf condition
and glue to a section of $\mathcal{G}$ over $U$ which maps to $s$.
Hence we win.
\end{proof}
\begin{lemma}
\label{lemma-cech-vanish}
\begin{slogan}
If higher {\v C}ech cohomology of an abelian sheaf vanishes for all open covers,
then higher cohomology vanishes.
\end{slogan}
Let $X$ be a ringed space.
Let $\mathcal{F}$ be an $\mathcal{O}_X$-module such that
$$\check{H}^p(\mathcal{U}, \mathcal{F}) = 0$$
for all $p > 0$ and any open covering $\mathcal{U} : U = \bigcup_{i \in I} U_i$
of an open of $X$. Then $H^p(U, \mathcal{F}) = 0$ for all $p > 0$
and any open $U \subset X$.
\end{lemma}
\begin{proof}
Let $\mathcal{F}$ be a sheaf satisfying the assumption of the lemma.
We will indicate this by saying $\mathcal{F}$ has vanishing higher
{\v C}ech cohomology for any open covering''.
Choose an embedding $\mathcal{F} \to \mathcal{I}$ into an
injective $\mathcal{O}_X$-module.
By Lemma \ref{lemma-injective-trivial-cech} $\mathcal{I}$ has vanishing higher
{\v C}ech cohomology for any open covering.
Let $\mathcal{Q} = \mathcal{I}/\mathcal{F}$
so that we have a short exact sequence
$$0 \to \mathcal{F} \to \mathcal{I} \to \mathcal{Q} \to 0.$$
By Lemma \ref{lemma-ses-cech-h1} and our assumptions
this sequence is actually exact as a sequence of presheaves!
In particular we have a long exact sequence of {\v C}ech cohomology
groups for any open covering $\mathcal{U}$, see
Lemma \ref{lemma-cech-cohomology-delta-functor-presheaves}
for example. This implies that $\mathcal{Q}$ is also an $\mathcal{O}_X$-module
with vanishing higher {\v C}ech cohomology for all open coverings.
\medskip\noindent
Next, we look at the long exact cohomology sequence
$$\xymatrix{ 0 \ar[r] & H^0(U, \mathcal{F}) \ar[r] & H^0(U, \mathcal{I}) \ar[r] & H^0(U, \mathcal{Q}) \ar[lld] \\ & H^1(U, \mathcal{F}) \ar[r] & H^1(U, \mathcal{I}) \ar[r] & H^1(U, \mathcal{Q}) \ar[lld] \\ & \ldots & \ldots & \ldots \\ }$$
for any open $U \subset X$. Since $\mathcal{I}$ is injective we
have $H^n(U, \mathcal{I}) = 0$ for $n > 0$ (see
Derived Categories, Lemma \ref{derived-lemma-higher-derived-functors}).
By the above we see that $H^0(U, \mathcal{I}) \to H^0(U, \mathcal{Q})$
is surjective and hence $H^1(U, \mathcal{F}) = 0$.
Since $\mathcal{F}$ was an arbitrary $\mathcal{O}_X$-module with
vanishing higher {\v C}ech cohomology we conclude that also
$H^1(U, \mathcal{Q}) = 0$ since $\mathcal{Q}$ is another of these
sheaves (see above). By the long exact sequence this in turn implies
that $H^2(U, \mathcal{F}) = 0$. And so on and so forth.
\end{proof}
\begin{lemma}
\label{lemma-cech-vanish-basis}
(Variant of Lemma \ref{lemma-cech-vanish}.)
Let $X$ be a ringed space.
Let $\mathcal{B}$ be a basis for the topology on $X$.
Let $\mathcal{F}$ be an $\mathcal{O}_X$-module.
Assume there exists a set of open coverings $\text{Cov}$
with the following properties:
\begin{enumerate}
\item For every $\mathcal{U} \in \text{Cov}$
with $\mathcal{U} : U = \bigcup_{i \in I} U_i$ we have
$U, U_i \in \mathcal{B}$ and every $U_{i_0 \ldots i_p} \in \mathcal{B}$.
\item For every $U \in \mathcal{B}$ the open coverings of $U$
occurring in $\text{Cov}$ is a cofinal system of open coverings
of $U$.
\item For every $\mathcal{U} \in \text{Cov}$ we have
$\check{H}^p(\mathcal{U}, \mathcal{F}) = 0$ for all $p > 0$.
\end{enumerate}
Then $H^p(U, \mathcal{F}) = 0$ for all $p > 0$ and any $U \in \mathcal{B}$.
\end{lemma}
\begin{proof}
Let $\mathcal{F}$ and $\text{Cov}$ be as in the lemma.
We will indicate this by saying $\mathcal{F}$ has vanishing higher
{\v C}ech cohomology for any $\mathcal{U} \in \text{Cov}$''.
Choose an embedding $\mathcal{F} \to \mathcal{I}$ into an
injective $\mathcal{O}_X$-module.
By Lemma \ref{lemma-injective-trivial-cech} $\mathcal{I}$
has vanishing higher {\v C}ech cohomology for any $\mathcal{U} \in \text{Cov}$.
Let $\mathcal{Q} = \mathcal{I}/\mathcal{F}$
so that we have a short exact sequence
$$0 \to \mathcal{F} \to \mathcal{I} \to \mathcal{Q} \to 0.$$
By Lemma \ref{lemma-ses-cech-h1} and our assumption (2)
this sequence gives rise to an exact sequence
$$0 \to \mathcal{F}(U) \to \mathcal{I}(U) \to \mathcal{Q}(U) \to 0.$$
for every $U \in \mathcal{B}$. Hence for any $\mathcal{U} \in \text{Cov}$
we get a short exact sequence of {\v C}ech complexes
$$0 \to \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}) \to \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}) \to \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{Q}) \to 0$$
since each term in the {\v C}ech complex is made up out of a product of
values over elements of $\mathcal{B}$ by assumption (1).
In particular we have a long exact sequence of {\v C}ech cohomology
groups for any open covering $\mathcal{U} \in \text{Cov}$.
This implies that $\mathcal{Q}$ is also an $\mathcal{O}_X$-module
with vanishing higher {\v C}ech cohomology for all
$\mathcal{U} \in \text{Cov}$.
\medskip\noindent
Next, we look at the long exact cohomology sequence
$$\xymatrix{ 0 \ar[r] & H^0(U, \mathcal{F}) \ar[r] & H^0(U, \mathcal{I}) \ar[r] & H^0(U, \mathcal{Q}) \ar[lld] \\ & H^1(U, \mathcal{F}) \ar[r] & H^1(U, \mathcal{I}) \ar[r] & H^1(U, \mathcal{Q}) \ar[lld] \\ & \ldots & \ldots & \ldots \\ }$$
for any $U \in \mathcal{B}$. Since $\mathcal{I}$ is injective we
have $H^n(U, \mathcal{I}) = 0$ for $n > 0$ (see
Derived Categories, Lemma \ref{derived-lemma-higher-derived-functors}).
By the above we see that $H^0(U, \mathcal{I}) \to H^0(U, \mathcal{Q})$
is surjective and hence $H^1(U, \mathcal{F}) = 0$.
Since $\mathcal{F}$ was an arbitrary $\mathcal{O}_X$-module with
vanishing higher {\v C}ech cohomology for all $\mathcal{U} \in \text{Cov}$
we conclude that also $H^1(U, \mathcal{Q}) = 0$ since $\mathcal{Q}$ is
another of these sheaves (see above). By the long exact sequence this in
turn implies that $H^2(U, \mathcal{F}) = 0$. And so on and so forth.
\end{proof}
\begin{lemma}
\label{lemma-pushforward-injective}
Let $f : X \to Y$ be a morphism of ringed spaces.
Let $\mathcal{I}$ be an injective $\mathcal{O}_X$-module.
Then
\begin{enumerate}
\item $\check{H}^p(\mathcal{V}, f_*\mathcal{I}) = 0$
for all $p > 0$ and any open covering
$\mathcal{V} : V = \bigcup_{j \in J} V_j$ of $Y$.
\item $H^p(V, f_*\mathcal{I}) = 0$ for all $p > 0$ and
every open $V \subset Y$.
\end{enumerate}
In other words, $f_*\mathcal{I}$ is right acyclic for $\Gamma(V, -)$
(see
Derived Categories, Definition \ref{derived-definition-derived-functor})
for any $V \subset Y$ open.
\end{lemma}
\begin{proof}
Set $\mathcal{U} : f^{-1}(V) = \bigcup_{j \in J} f^{-1}(V_j)$.
It is an open covering of $X$ and
$$\check{\mathcal{C}}^\bullet(\mathcal{V}, f_*\mathcal{I}) = \check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{I}).$$
This is true because
$$f_*\mathcal{I}(V_{j_0 \ldots j_p}) = \mathcal{I}(f^{-1}(V_{j_0 \ldots j_p})) = \mathcal{I}(f^{-1}(V_{j_0}) \cap \ldots \cap f^{-1}(V_{j_p})) = \mathcal{I}(U_{j_0 \ldots j_p}).$$
Thus the first statement of the lemma follows from
Lemma \ref{lemma-injective-trivial-cech}. The second statement
follows from the first and Lemma \ref{lemma-cech-vanish}.
\end{proof}
\noindent
The following lemma implies in particular that
$f_* : \textit{Ab}(X) \to \textit{Ab}(Y)$ transforms injective
abelian sheaves into injective abelian sheaves.
\begin{lemma}
\label{lemma-pushforward-injective-flat}
Let $f : X \to Y$ be a morphism of ringed spaces.
Assume $f$ is flat.
Then $f_*\mathcal{I}$ is an injective $\mathcal{O}_Y$-module
for any injective $\mathcal{O}_X$-module $\mathcal{I}$.
\end{lemma}
\begin{proof}
In this case the functor $f^*$ transforms injections into injections
(Modules, Lemma \ref{modules-lemma-pullback-flat}).
Hence the result follows from
\end{proof}
\begin{lemma}
\label{lemma-cohomology-products}
Let $(X, \mathcal{O}_X)$ be a ringed space. Let $I$ be a set.
For $i \in I$ let $\mathcal{F}_i$ be an $\mathcal{O}_X$-module.
Let $U \subset X$ be open. The canonical map
$$H^p(U, \prod\nolimits_{i \in I} \mathcal{F}_i) \longrightarrow \prod\nolimits_{i \in I} H^p(U, \mathcal{F}_i)$$
is an isomorphism for $p = 0$ and injective for $p = 1$.
\end{lemma}
\begin{proof}
The statement for $p = 0$ is true because the product of sheaves
is equal to the product of the underlying presheaves, see
Sheaves, Section \ref{sheaves-section-limits-sheaves}.
Proof for $p = 1$. Set $\mathcal{F} = \prod \mathcal{F}_i$.
Let $\xi \in H^1(U, \mathcal{F})$ map to zero in
$\prod H^1(U, \mathcal{F}_i)$. By locality of cohomology, see
Lemma \ref{lemma-kill-cohomology-class-on-covering},
there exists an open covering $\mathcal{U} : U = \bigcup U_j$ such that
$\xi|_{U_j} = 0$ for all $j$. By Lemma \ref{lemma-cech-h1} this means
$\xi$ comes from an element
$\check \xi \in \check H^1(\mathcal{U}, \mathcal{F})$.
Since the maps
$\check H^1(\mathcal{U}, \mathcal{F}_i) \to H^1(U, \mathcal{F}_i)$
are injective for all $i$ (by Lemma \ref{lemma-cech-h1}), and since
the image of $\xi$ is zero in $\prod H^1(U, \mathcal{F}_i)$ we see
that the image
$\check \xi_i = 0$ in $\check H^1(\mathcal{U}, \mathcal{F}_i)$.
However, since $\mathcal{F} = \prod \mathcal{F}_i$ we see that
$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F})$ is the
product of the complexes
$\check{\mathcal{C}}^\bullet(\mathcal{U}, \mathcal{F}_i)$,
hence by
Homology, Lemma \ref{homology-lemma-product-abelian-groups-exact}
we conclude that $\check \xi = 0$ as desired.
\end{proof}
There are no comments yet for this tag.
## Add a comment on tag 01EO
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the lower-right corner). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9989446997642517, "perplexity": 171.40658899491936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424148.78/warc/CC-MAIN-20170722202635-20170722222635-00353.warc.gz"} |
https://brilliant.org/problems/combinatorics-5/ | # Combinatorics #5
Find the sum of all the $$5$$-digit integers which are not multiples of $$11$$ and whose digits are $$1,3,4,7,9$$, with each of these digits appearing exactly once. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8668621778488159, "perplexity": 176.71906675534063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00171-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://newton.cam.ac.uk/seminar/20170330133014301 | # Homotopy theory with C*-categories
Presented by:
Ulrich Bunke Universität Regensburg
Date:
Thursday 30th March 2017 - 13:30 to 14:30
Venue:
INI Seminar Room 1
Abstract:
In this talk I propose a presentable infinity category of C*-categories. It is modeled by a simplicial combinatorial model category structure on the category of C*-categories. This allows to set up a theory of presheaves with values in C*-categories on the orbit category of a group together with various induction and coinduction functors. As an application we provide a simple construction of equivariant K-theory spectra (first constructed by Davis-Lück). We discuss further applications to equivariant coarse homology theories. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8602102994918823, "perplexity": 893.1771190487083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948541253.29/warc/CC-MAIN-20171214055056-20171214075056-00410.warc.gz"} |
http://math.stackexchange.com/questions/592467/how-to-solve-y-frac1x-1-frac-1x-5-for-x | # How to solve $y = \frac{1}{x-1} +\frac {1}{x-5}$ for $x$
I'm stuck on this one (too long ago for me I guess).
I expanded the fractions coming to $y = \frac{2x-6}{x^2-6x+5}$ and even tried to apply a polynom division (translation?) but this came to nothing.
What's the proper approach on this one?
-
Try to write the what you've got in the form of quadratic equation with parameter $y$ and then solve this quadratic equatiom for $x$. – SnowAngel6147 Dec 4 '13 at 14:27
## 2 Answers
You have $y (x^2 - 6 x + 5) = 2x - 6$, or $y x^2 - (6y + 2)x + (5y + 6) = 0$, so applying the quadratic formula yields $$x = \frac{(6y + 2) \pm \sqrt{(6y +2)^2 - 4 y (5y + 6)}}{2y}.$$
This simplifies to $$x = \frac{(3y + 1) \pm \sqrt{4y^2 + 1}}{y}.$$
-
Right. Thanks a lot, now it seems too simple ;-) (will accept as soon as possible). – Alfe Dec 4 '13 at 14:31
sometimes it is instructive to explore an approach which makes use of some specific features of a problem. in this case suppose we make the substitution:
$$x = 3 + 2t$$ where $t=tan\theta$
after a little algebra you will see that $$-2y = \frac{2t}{1-t^2} = tan 2\theta$$ hence $$x = 3 + 2tan(\frac12\; arctan(-2y))$$ in the general case this will give both values of $x$ because $tan \alpha = tan (\alpha+\pi)$
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9638364911079407, "perplexity": 575.5297616846537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768636.90/warc/CC-MAIN-20141217075248-00072-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://minhyongkim.wordpress.com/2009/05/ | ## Monthly Archives: May 2009
### Special office hour (ANT)
I will be in the fifth floor common room tomorrow from 4-5 PM to answer last minute questions.
### Film
Far be it from my intention to clutter the blog with film reviews, but maybe this is a good time, just to give all of you a bit of relief from studying. In any case, it will only be a short remark aboutThe Class (Entre les murs),’ describing a year’s work for a teacher at a school in northeastern Paris. I do try to keep up with films about education and this one has been highly acclaimed (Palme d’Or and all that), so I was glad to catch it on the plane last week. Reasonably pleasant to watch, sure enough, but the show left me with a question, which perhaps some of you can help me with if you’ve seen the film as well: Are such innocent and impressionable students as depicted there really supposed to be problematic in some way? Somehow, I couldn’t at all comprehend that the situation could be perceived as a difficult one. There were minor obstructions here and there, but everyone in class seemed quite communicative and engaged.
After asking around a bit, I thought to pose the question here for some feedback. So let me know.
### A subtle question about principality
Dear Professor Kim,
I just have a few small questions. In lectures, we calculated the class group of $Q(\sqrt{10})$, which has ring of algebraic integers $Z[\sqrt{10}]$. We then found that that (maximal ideal) $P_2 = (2,\sqrt{10})$, $N(P_2) = 2.$
After a bunch of calculations we had to see whether $P_2$ was principal or not. Using the result,
—————————————————————-
$I$ (non-zero) is principal iff there exists $\alpha \in I$ s.t
$|N(\alpha)| = N(I)$
—————————————————————–
We had to consider if there was $n,m \in Z$ s.t
$|N(n+m\sqrt{10})| = 2$
Since the general element of the ideal $P_2$ is $2n+m\sqrt{10}$, is it ‘more correct’ to consider if there was $n,m$ s.t
$|N(2n+m\sqrt{10})| = 2$
I know this doesn’t make a whole lot of difference, it’s just one of those things.
So
$n^2 - 10m^2 = \pm2$
which was equivalent to the statement
$n^2 = 2 \mod 5$ or $n^2 = 3 \mod 5$
In general, do we consider any ‘modulo n’ so that the statement is simplified?
Many Thanks
———————————————————
You are absolutely right about the principality question. That is, when we have an ideal $I$ in the ring of integers $O_K$ of an algebraic number field $K$ such that $N(I)=n$, then we are led to consider solutions to the equation
$N(z)=\pm n$
for various $z$. As we’ve seen in many examples, once $z$ is expressed in terms of an integral basis, this becomes an equation in $d=[K:Q]$ variables with integer coefficients to which we can consider solutions. Now, if there are no solutions with $z\in O_K$, then, a fortiori, there are no solutions $z\in I$ and we can conclude that $I$ is not principal. However, although I’m too lazy now to cook up an example, there are situations where there *is* a solution $z\in O_K$, but no solution $z\in I$. In the example you mention, the equation corresponding to $z\in I$ is easily found to be
$4n^2-10m^2=\pm 1$
which is obviously more resrictive than
$n^2-10m^2=\pm 1.$
In our case, the latter already has no solution, so we don’t need to consider the more refined equation.
Regarding your second question, firstly, the equation is not *equivalent* to the congruence equation but just implies it. So if the congruence equation has no solution, neither does the original, which is how we used it. Now, I don’t quite understand your final question, but perhaps I should remark that considering congruences is a standard way of investigating solutions to quadratic equations. In fact, it is useful for *any* Diophantine equation. However, a rather deep theorem says that for quadratic equations, sufficiently many congruence equations completely determine whether or not the original equation has rational solutions.
### Primitive elements
Dear Dr Kim,
There is a post on your blog regarding finding primitive elements. Your advice was to look at the Primitive elements: an example document. Can I assume this would be an acceptable answer in the exam and, would just stating that the method is the same as the one used for proving the Primitive Element Theorem be sufficient justification or do we need to provide further explanation?
———————————————————-
Of course the method is acceptable, but I don’t understand what you mean by sufficient justification.’
Let’s remind ourselves what method we are speaking of:
To find a primitive element in $Q(\alpha, \beta)$, we need to locate a linear combination $\alpha + c\beta$ with $c\in Q$ satisfying certain conditions spelled out in the proof of the Primitive Element Theorem. It might be $\alpha+\beta$, $\alpha-\beta$, $\alpha+(1/2)\beta$, etc. depending on the situation, even though the result tends to be rather simple in the examples that have come up. To use the method of the theorem would mean checking that the conditions are satified for some specific $c$. If you did this, yes it would be sufficient justification.
### Norms, class groups, more, …
Sorry to bombard you with my problems professor, but i was attempting problem sheet 6 in a bid to understand how to calculate class groups properly, and have no real problem with it up until the point where we start to deduce which of the prime ideals are principal. When i say prime ideals i hope i’m right in calling the curly p with subscript of a prime number that. If the norm of a general element is a prime we say that the prime ideal is maximal right? I was attempting question one for root 11 and the one step i seem to have difficulty with is when we calculate the other norms of $(3+(11)^1/2)$ and above, i have a vague understanding of what we do where we assign the norms to the prime ideals depending on what they are. But this step in general seems to allude me whenever i attempt these questions. So if you could shed some light on this step or could just guide me to the theorems or lemmas that would help with this area that would be extremely helpful. Thanks
———————————————————–
I’m sorry to say this so close to the exam, but some of your questions are a bit worrisome. For example, the question If the norm of a general element is a prime we say that the prime ideal is maximal right?’. It’s hard to make out what you mean. A correct statement is that inside the ring $O_K$ of integers inside an algebraic number field $K$, all non-zero prime ideals are maximal. This fact is actually a bit tricky: it follows from the fact that $O_K/I$ is finite for any non-zero ideal $I$ and that any finite integral domain is a field. I hope you’re not confused about the *definition* of prime and maximal ideals, which just come from general algebra.
Let me guess a bit at what the confusion might be. When factorizing an ideal $I$, very relevant are the prime factors of $N(I)$. This is because if
$I=P_1P_2\cdots P_k$
then
$N(I)=N(P_1)N(P_2)\cdots N(P_k).$
We know in fact that each $N(P_i)$ will be a prime power factor of $N(I)$. This allows us to look for prime ideal factors of $I$.
I hope you’ve already thoroughly reviewed the online notes. Chapter 4 is the relevant part for this material.
### Rings of integers
Hi professor,
I was just wandering in the 2007 paper when it says from first principles to determine the ring of algebraic integers in $Q[(103)^{1/2}]$, what this actually means, hope it doesn’t sound like a dumb question. Do we just bear in mind the definition of an algebraic integer and produce a basis for $Q[(103)^{1/2}]$ and show that for any element $a + b[(103)^{1/2}]$ that $a$ and $b$ are integers?
As regards with the previous question i think the $nZ^d$ was more like $n(Z^d)$.
Thanks
—————————————————————————-
In essence, yes. A general element of $Q[(103)^{1/2}]$ is of the form $a+b(103)^{1/2}$ for $a,b\in Q$. In that problem, you are expected to show that $a+b(103)^{1/2}$ is an algebraic integer if and only if $a,b\in Z$, using just the definition. The context of the problem might help you to understand what is expected from the solution: I noticed at some point that there were students who knew the (important!) formula for the ring of integers in general $Q[\sqrt{d}]$ and could justify it, but then got awfully confused when presented with the same problem for specific $d$.
By the way, to defend my notation $nZ^d$, note that there are two different ways to insert brackets, both leading to the same subgroups of $Z^d$. Hence, it’s OK to omit them :-).
### Integers, class groups, multiplying ideals…
Many thanks, that wasn’t meant to sound like quite such a leading questions, I think I was in the midst of exam panic when I sent it! Sorry to fire off another list of questions, I’m fully aware that you must be inundated with emails at this time of year, so thank you again for being so prompt and clear in your responses!
1) In the 2008 exam qu1 part d: Is it possible to just calculate the minimal polynomial and see if the degree is the same as the extension?
2) In A Few Past Exam questions:
The discriminant in the second part is given as $3^5.17.19$, no matter how many different methods I use to calculate this I don’t get the right answer! I was using the assumption that as a cubic we can use $-4a^3-27b^2$ but this gives me $3^5.-13$?
3) How do we multiply maximal ideals explicitly? For example in ‘A few past exam questions’ I can see how this is true for $P_2P_5$ but do we then just deduce $P_2P_7$ or is their some way of calculating this? I can also see that an alternative factorization could be as
$(2)=P_2^2 (5+\alpha)=P_5P_7$
but again don’t see how to explicitly calculate the second result.
4) In Integral Bases and Translations: The discriminant of B is given as $4^4.(-p)^3$, is this correct? My calculations gave me either $4^2$ or equivalently $2^4.$
5) In Few Class Groups:
How do we know the ring of algebraic integers is $Z[\alpha]$? When calculating I get a possible algebraic integer with prime 2 (which is eliminated using Eisenstein) but am still left with prime 3 giving the possibility of algebraic integer
$1/3(a_0+a_1x+a_2x^2)?$
Kindest regards,
—————————————————————
(1) Yes, it’s fine to do this. Another way is to use the proof of the primitive element theorem, as explained in Primitive elements: an example.’
(2) You are right! That was a silly error on my part. Thankfully, it doesn’t affect the rest of the argument at all, so it went unnoticed.
(3) Multiplying explicitly isn’t too hard by just multiplying the generators. For example, in the case of $P_2=(2, \alpha)$ and $P_7=(7, \alpha-2)$, we would get
$P_2P_7=(14, 2\alpha-4, 7\alpha, \alpha^2-2\alpha).$
But that isn’t how I obtained the formula you mention. It would have been a bit tricky to guess the generator $(\alpha-2)$ just from the presentation above. What I actually did was factorize $(\alpha-2)$. Since $N(\alpha-2)=14$, then only possibilities are
$(\alpha-2)=P_2P_7$ or $(\alpha-2)=P_2P_7'.$
But it has to be the former since $(\alpha-2)\subset P_7$, so that $P_7|(\alpha-2)$.
(4) In this case, I think I’m right (surprise, surprise). Just follow the computation in that article using $N(\alpha)=-p$.
(5) I’m supposing you mean the problem where $\alpha=2^{1/3}$. The point is that for the translation $\beta=\alpha+1$, we get the irreducible polynomial
$(x-1)^3-2=x^3-3x^2+3x-3,$
which is Eisenstein for the prime 3. Now follow the reasoning in `Integral bases and translations.
### Norms of elements and principality
Dear Professor Kim,
Sorry to bombard you with these questions. I have come across a problem on your note ‘some principle ideals’. When we factorize $m(x)=x^3 + x - 1$ modulo 3 we get $(x+1)(x^2-x-1)$ we then associate these factors with the ideals $P_3$ and $P_9$ respectively. When we compute the norm of $x^2-x-1$ we do so by calculating the determinant of the matrix $L_{a^2-a-1}$ and find that the norm is in fact 9, so $P_9$ is a principle ideal. However, we could just have easily used $x^2+2x-1$ or $x^2+2x+2$ and in each case I get a different answer for the determinant. Have I made an error or is there a canonical form of sort that I should be aware of?
———————————————-
First of all, I presume your $x^2-x-1$ etc. are $a^2-a-1$ etc. All the elements you mention do indeed belong to the ideal and can be used as generators *when used together with the element 3*. Indeed they are all all evaluations at $a$ of polynomials that are congruent to $x^2-x-1$ mod $3$. However, this does not mean they are generators on their own. Of course different elements in an ideal $I$ will have different norms in general. However, an element $b\in I$ is a generator *by itself* (making $I$ into a principal ideal), exactly when $|N(b)|=N(I)$. Of course such a $b$ need not exist. I haven’t calculated the norms of the elements you mention, but if their norms come out larger than 9, it merely says they are not generators (again, by themselves), while $a^2-a-1$ is.
A thorny point that comes out of this discussion is that if you had initially presented the ideal as $(3, a^2+2a-1)$, for example, then it might have been harder to see that it is principal.
### More norms of ideals
Dear Professor Kim,
I am unsure of how to calculate the norm of $(2,2\sqrt{15})$ in the ring $Q(\sqrt{15})$, which is on Sheet 5 Question 4a.
I can see that this ideal can be written as $(2)$ so it will have norm 4. Also, in the ring $Z[\sqrt{15}]$ a general element looks like $n + m\sqrt{15},$ where $n,m$ belong to $latex Z$. So if we calculate the norm using the principle
$|(2)| = |Z[x]/(2)| = |Z[x]/(2(n + m\sqrt{15}))| = 4.$
From what I understand this is the reasoning you give in ‘Some remarks on factorization’.
However, if we use the method which given further down that sheet I get:
$Z[\sqrt{15}]/(2) = Z[x]/((x^2 - 15),2) = F_2[x]/(x^2 - 15)$
$= F_2[x]/(x^2 - 1) = F_2[x]/(x+1)(x-1) = F_2/(2)$
$|F_2/(2)| = 2$
Please tell me where I’m going wrong.
Many Thanks!
—————————————————-
first of all, I hope you can see that the line
$|(2)| = |Z[x]/(2)| = |Z[x]/(2(n + m\sqrt{15}))| = 4$
above doesn’t make too much sense. The second displayed equation is almost right, except an error occurs when computing
$|F_2[x]/(x+1)(x-1)|.$
Because the coefficients are in $F_2$, we have $x-1=x+1$. So
$F_2[x]/(x+1)(x-1)=F_2[x]/(x-1)^2\simeq F_2[t]/(t^2),$
from the isomorphism $F_2[x]\simeq F_2[t]$ that takes $x$ to $t+1$. It is easy to see that the $F_2$-vector space $F_2[t]/(t^2)$ has dimension 2 with basis $1, t$. Hence,
$|F_2[x]/(x+1)(x-1)|=|F_2[t]/(t^2) | =4.$
### Long email
I received a long last minute email. I’ll just copy it together with my answers without proper formatting or corrections because I’m also a bit tired.
MK
—————————————————————
Because of lack of time, I will be brief with the answers:
1. Everything is OK except for the statement about the JCF and
the linearly independent eigenvector. A basis for the eigenspace
can be taken as 1. There is just one Jordan block. This should tell you
what the JCF is. (Obviously can’#t be zero).
2. You are right.
3. Right again.
4. The definition as given is correct. For example, might have
f=m. There must have been a misprint in the mark scheme.
5. Yes, these are the same. More precisely, Ker(A)=Ker(-A) for
obvious reasons.
6. One way of thinking about C is as R^2 with some multiplication.
Anyways, the dimension is 2 for the reason you say as well. The dimension
of any field F as a vector space over itself is one. The dimension of
R as a Q-vector space is infinite.
7. See what you can do with L^{98}v.
8. Yes to both questions.
9. This is somewhat tricky. Look at example 125 on the 3704 lecture notes for the general idea.
10. You could, but it might be quicker to take a general symmetric
matrix and show that it can’t work.
11. This is a bit complicated to explain by email in a short amount of
time.
> Dear Prof Kim,
>
> Sorry for the late email. I have quite a few questions on 2201 course
> material. I will be very grateful if you could clear my queries.
>
>
> COURSE SUMMARY AND LECTURE NOTES:
> 1) Summary sheet on course mentions ‘vector space of polynomials with
> complex coefficients of degree at most 5’ Would this be {i, ix, i(x^2),
> i(x^3), i(x^4), i(x^5)} So on D(differentiation):VtoV we get min
> polynomial=ch polynomial=x^5 and so JCF is the zero matrix? And so D has
> only 1 linearly independent eigenvector (i.e. 0)?
>
> 2) Lecture notes page 43: In the proof for diagonalisation theorem, 3
> lines before the end of proof, ‘Hence by the inductive hypothesis, there
> is an orthonormal basis’ shouldn’t this be orthogonal basis?
>
> 3) Page 47 proof of sylvesters law of inertia: In the new basis ci =
> bi/sqrt(q(bi)) and not bi/sqrt(q(bi,bi))?
>
> Also further in the proof ‘q(u)=x1^2+…+xr^2>0’ shouldn’t this be >=0 as
> only then further on U intersection W = {0}
>
> 4) Lecture notes page 25: Definition of minimal polynomial second
> condition on f(T)=0 and f=/0 then deg(f)>=deg(m). Shouldnt deg(f) be
> strictly > deg(m)as suggested in the 2008 mark scheme.
>
> 5) Definition of t-th generalised eigenspace is V(t)(la)=ker((la.Id-T)^t)
> (page27) but in primary decomposition theorem (page28) we use this def as
> V(t)(la)=ker((T-la.Id)^t). Would this not change signs? Are they the same
> because for e.g. kernel implies T(v)=0 so T(-v)=0 too.
>
> 6) What is the dimension of C as an R-vector space? Is this 2 because
> basis is {1, i}?
> What is the basis of C as a C-vector space?
> What is the basis of Q as a Q-vector space?
> What is the basis of R as a Q-vector space?
>
> 7) Suppose there exists a vector v such
> that (L)^100(v)=0,(L)^99(v)=/0. Prove that there exists a
> vector w such that (L)^2(w)=0 and (L)(w)=/0.
> What would be the steps for this proof?
>
>
> HOMEWORK SHEETS:
> 8) Sheet 2 qu7 – Do we only do one long division to get ans?
> Sheet 2 qu9 – Not on syllabus?
>
> 9) How to do sheet 3 qu7?
>
> 10) Sheet 6 qu4 – Can we do this by finding all sym bilinear forms on
> field 2 (8 matrices) and showing none of their quadratic forms are xy?
>
> 11) Sheet 7 qu4 – How do I do part a? Part b is simple. For part c, do we
> explicitly show this for the 9 different possibilities. Most of these
> already in required form (i.e. q00, q01, q10, q11, etc..)
>
>
> Sorry for so many questions. I don’t live near university. Otherwise I
> would have come in to see you. If its easier for you to reply over a
> telephone conversation, please call me on 07971530295.
>
> Thank you very much for your time.
>
> | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 136, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8621158003807068, "perplexity": 272.43957082792593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257649095.35/warc/CC-MAIN-20180323220107-20180324000107-00793.warc.gz"} |
https://www.jobilize.com/trigonometry/course/6-5-logarithmic-properties-exponential-and-logarithmic-by-openstax | 6.5 Logarithmic properties
Page 1 / 10
In this section, you will:
• Use the product rule for logarithms.
• Use the quotient rule for logarithms.
• Use the power rule for logarithms.
• Expand logarithmic expressions.
• Condense logarithmic expressions.
• Use the change-of-base formula for logarithms.
In chemistry, pH is used as a measure of the acidity or alkalinity of a substance. The pH scale runs from 0 to 14. Substances with a pH less than 7 are considered acidic, and substances with a pH greater than 7 are said to be alkaline. Our bodies, for instance, must maintain a pH close to 7.35 in order for enzymes to work properly. To get a feel for what is acidic and what is alkaline, consider the following pH levels of some common substances:
• Battery acid: 0.8
• Stomach acid: 2.7
• Orange juice: 3.3
• Pure water: 7 (at 25° C)
• Human blood: 7.35
• Fresh coconut: 7.8
• Sodium hydroxide (lye): 14
To determine whether a solution is acidic or alkaline, we find its pH, which is a measure of the number of active positive hydrogen ions in the solution. The pH is defined by the following formula, where $\text{\hspace{0.17em}}a\text{\hspace{0.17em}}$ is the concentration of hydrogen ion in the solution
The equivalence of $\text{\hspace{0.17em}}-\mathrm{log}\left(\left[{H}^{+}\right]\right)\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\mathrm{log}\left(\frac{1}{\left[{H}^{+}\right]}\right)\text{\hspace{0.17em}}$ is one of the logarithm properties we will examine in this section.
Using the product rule for logarithms
Recall that the logarithmic and exponential functions “undo” each other. This means that logarithms have similar properties to exponents. Some important properties of logarithms are given here. First, the following properties are easy to prove.
$\begin{array}{l}{\mathrm{log}}_{b}1=0\\ {\mathrm{log}}_{b}b=1\end{array}$
For example, $\text{\hspace{0.17em}}{\mathrm{log}}_{5}1=0\text{\hspace{0.17em}}$ since $\text{\hspace{0.17em}}{5}^{0}=1.\text{\hspace{0.17em}}$ And $\text{\hspace{0.17em}}{\mathrm{log}}_{5}5=1\text{\hspace{0.17em}}$ since $\text{\hspace{0.17em}}{5}^{1}=5.$
Next, we have the inverse property.
For example, to evaluate $\text{\hspace{0.17em}}\mathrm{log}\left(100\right),$ we can rewrite the logarithm as $\text{\hspace{0.17em}}{\mathrm{log}}_{10}\left({10}^{2}\right),$ and then apply the inverse property $\text{\hspace{0.17em}}{\mathrm{log}}_{b}\left({b}^{x}\right)=x\text{\hspace{0.17em}}$ to get $\text{\hspace{0.17em}}{\mathrm{log}}_{10}\left({10}^{2}\right)=2.$
To evaluate $\text{\hspace{0.17em}}{e}^{\mathrm{ln}\left(7\right)},$ we can rewrite the logarithm as $\text{\hspace{0.17em}}{e}^{{\mathrm{log}}_{e}7},$ and then apply the inverse property $\text{\hspace{0.17em}}{b}^{{\mathrm{log}}_{b}x}=x\text{\hspace{0.17em}}$ to get $\text{\hspace{0.17em}}{e}^{{\mathrm{log}}_{e}7}=7.$
Finally, we have the one-to-one property.
We can use the one-to-one property to solve the equation $\text{\hspace{0.17em}}{\mathrm{log}}_{3}\left(3x\right)={\mathrm{log}}_{3}\left(2x+5\right)\text{\hspace{0.17em}}$ for $\text{\hspace{0.17em}}x.\text{\hspace{0.17em}}$ Since the bases are the same, we can apply the one-to-one property by setting the arguments equal and solving for $\text{\hspace{0.17em}}x:$
But what about the equation $\text{\hspace{0.17em}}{\mathrm{log}}_{3}\left(3x\right)+{\mathrm{log}}_{3}\left(2x+5\right)=2?\text{\hspace{0.17em}}$ The one-to-one property does not help us in this instance. Before we can solve an equation like this, we need a method for combining terms on the left side of the equation.
Recall that we use the product rule of exponents to combine the product of exponents by adding: $\text{\hspace{0.17em}}{x}^{a}{x}^{b}={x}^{a+b}.\text{\hspace{0.17em}}$ We have a similar property for logarithms, called the product rule for logarithms , which says that the logarithm of a product is equal to a sum of logarithms. Because logs are exponents, and we multiply like bases, we can add the exponents. We will use the inverse property to derive the product rule below.
Given any real number $\text{\hspace{0.17em}}x\text{\hspace{0.17em}}$ and positive real numbers and $\text{\hspace{0.17em}}b,$ where $\text{\hspace{0.17em}}b\ne 1,$ we will show
find general solution of the Tanx=-1/root3,secx=2/root3
find general solution of the following equation
Nani
the value of 2 sin square 60 Cos 60
0.75
Lynne
0.75
Inkoom
when can I use sin, cos tan in a giving question
depending on the question
Nicholas
I am a carpenter and I have to cut and assemble a conventional roof line for a new home. The dimensions are: width 30'6" length 40'6". I want a 6 and 12 pitch. The roof is a full hip construction. Give me the L,W and height of rafters for the hip, hip jacks also the length of common jacks.
John
I want to learn the calculations
where can I get indices
I need matrices
Nasasira
hi
Raihany
Hi
Solomon
need help
Raihany
maybe provide us videos
Nasasira
Raihany
Hello
Cromwell
a
Amie
What do you mean by a
Cromwell
nothing. I accidentally press it
Amie
you guys know any app with matrices?
Khay
Ok
Cromwell
Solve the x? x=18+(24-3)=72
x-39=72 x=111
Suraj
Solve the formula for the indicated variable P=b+4a+2c, for b
Need help with this question please
b=-4ac-2c+P
Denisse
b=p-4a-2c
Suddhen
b= p - 4a - 2c
Snr
p=2(2a+C)+b
Suraj
b=p-2(2a+c)
Tapiwa
P=4a+b+2C
COLEMAN
b=P-4a-2c
COLEMAN
like Deadra, show me the step by step order of operation to alive for b
John
A laser rangefinder is locked on a comet approaching Earth. The distance g(x), in kilometers, of the comet after x days, for x in the interval 0 to 30 days, is given by g(x)=250,000csc(π30x). Graph g(x) on the interval [0, 35]. Evaluate g(5) and interpret the information. What is the minimum distance between the comet and Earth? When does this occur? To which constant in the equation does this correspond? Find and discuss the meaning of any vertical asymptotes.
The sequence is {1,-1,1-1.....} has
how can we solve this problem
Sin(A+B) = sinBcosA+cosBsinA
Prove it
Eseka
Eseka
hi
Joel
yah
immy
June needs 45 gallons of punch. 2 different coolers. Bigger cooler is 5 times as large as smaller cooler. How many gallons in each cooler?
7.5 and 37.5
Nando
how would this look as an equation?
Hayden
5x+x=45
Khay
find the sum of 28th term of the AP 3+10+17+---------
I think you should say "28 terms" instead of "28th term"
Vedant
the 28th term is 175
Nando
192
Kenneth
if sequence sn is a such that sn>0 for all n and lim sn=0than prove that lim (s1 s2............ sn) ke hole power n =n | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 24, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8527196049690247, "perplexity": 935.4914394258132}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514571651.9/warc/CC-MAIN-20190915155225-20190915181225-00404.warc.gz"} |
https://socratic.org/questions/58117c89b72cff3c33e03c5e | Chemistry
Topics
# Question #03c5e
Nov 12, 2016
$- \text{3267.4 kJ/mol}$
The heat of formation of oxygen is $0$ so I didn't take it. Sorry, I forgot to write units. The units are $\text{kJ/mol}$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.898173987865448, "perplexity": 957.0302515477146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576122.89/warc/CC-MAIN-20190923064347-20190923090347-00356.warc.gz"} |
http://mathhelpforum.com/calculus/76842-integrating-fraction-u.html | Thread: integrating a fraction of u
1. integrating a fraction of u
Im having an issue with:
(u^(3/2))/2
i know that once i evaluate it i am supposed to add +1 to r (which is 3/2 in this case) and that would give me u^(5/2) but I thought i was also supposed to multiply 5/2 times the denominator which would give me 10/2 or 5.......
However I am confused because the answer is supposed to be:
(1/10)(u)^5/2 and when i do it my way i get 5(u)^5/2
What am I doing wrong? Any help would be great, thanks!
2. Is the original expression $\frac{u^{\frac{3}{2}}}{2}$?
You are right you add 1, or 2/2, to the exponent and divide by the new exponent. If you do that you get $\frac{u^{\frac{5}{2}}}{2*\frac{5}{2}}$
So I find that both you and your other answer are wrong.
3. Originally Posted by Jameson
Is the original expression $\frac{u^{\frac{3}{2}}}{2}$?
You are right you add 1, or 2/2, to the exponent and divide by the new exponent. If you do that you get $\frac{u^{\frac{5}{2}}}{2*\frac{5}{2}}$
So I find that both you and your other answer are wrong.
Haha your right i meant from what I have up there i get (1/5)(u)^5/2 but the solutions is supposed to be (1/10)(u)^5/2, so im guessing i messed up somewhere else in the problem because this is only part of it the original problem which was:
Find the following indefinite integral:
(x)(square root of 2x-1)dx
sorry i dont know how to make it look prettier
(1/10)(2x-1)^(5/2) + (1/6)(2x-1)^(3/2) +c
4. If you mean $\int x\sqrt{2x-1}dx$, then I again find your solution wrong. At least Mathmatica says so.
5. Originally Posted by Jameson
If you mean $\int x\sqrt{2x-1}dx$, then I again find your solution wrong. At least Mathmatica says so.
Yes that was the equation, was I correct in putting 1/5 instead of 1/10 and i got 1/3 instead of 1/6?
6. Originally Posted by ckylek
Yes that was the equation, was I correct in putting 1/5 instead of 1/10 and i got 1/3 instead of 1/6?
I think I know why you're off from the solution by 1/2. When you let u=2x-1, you must notice that du=2dx, thus dx = (1/2)du. So if you forgot to factor in this constant from your change in variables, you would be off by 1/2.
7. Originally Posted by Jameson
I think I know why you're off from the solution by 1/2. When you let u=2x-1, you must notice that du=2dx, thus dx = (1/2)du. So if you forgot to factor in this constant from your change in variables, you would be off by 1/2.
yes sir that was it, i was suspecting a hidden 2 or something, thanks again for all your help. I really appreciate it. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9439095854759216, "perplexity": 581.8770893572199}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982290765.41/warc/CC-MAIN-20160823195810-00258-ip-10-153-172-175.ec2.internal.warc.gz"} |
http://mathhelpforum.com/trigonometry/10603-graphing-sine-need-help-print.html | Graphing sine, need help
Printable View
• Jan 24th 2007, 07:22 PM
zbrownleep7
Graphing sine, need help
Alright, my assingment is to use trig graph paper (which was given to us) and graph 0 to 360 degree's going by the special angles (30,45,60,90,120..etc).
I already know what the picture is going to look like:
http://img444.imageshack.us/img444/5810/sine1yb.th.png
We have to use the graph paper and mark exactly where 30 degree's would be and all the other special angles.
Can anyone help me understand what I'm graphing or how to graph it?
Appreciate any help before 7:00 am January 25th. :)
• Jan 24th 2007, 08:01 PM
ThePerfectHacker
Quote:
Originally Posted by zbrownleep7
Alright, my assingment is to use trig graph paper (which was given to us) and graph 0 to 360 degree's going by the special angles (30,45,60,90,120..etc).
I already know what the picture is going to look like:
http://img444.imageshack.us/img444/5810/sine1yb.th.png
We have to use the graph paper and mark exactly where 30 degree's would be and all the other special angles.
Can anyone help me understand what I'm graphing or how to graph it?
Appreciate any help before 7:00 am January 25th. :)
Take graph paper and mark of (make your own scale up) for the number of degrees.
Then you need to find $y=\sin x$.
You pick a point (one of the special ones) and find the sine of that.
For example, the important angles are: 0,30,45,60
Then, sin(0)=0, sin(30)=.5, sin(45)=.707, sin(60)=.866
And you create the point on the graph,
(0,0),(30,.5),(45,.707),(60,.866) and thus one.
And connect the dots. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8340886831283569, "perplexity": 2233.938957962635}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541134.5/warc/CC-MAIN-20161202170901-00326-ip-10-31-129-80.ec2.internal.warc.gz"} |
http://thousandfold.net/cz/2013/11/12/a-useful-trick-for-computing-gradients-w-r-t-matrix-arguments-with-some-examples/ | # A useful trick for computing gradients w.r.t. matrix arguments, with some examples
I’ve spent hours this week and last week computing, recomputing, and checking expressions for matrix gradients of functions. It turns out that except in the simplest of cases, the most painfree method for finding such gradients is to use the Frechet derivative (this is one of the few concrete benefits I derived from the differential geometry course I took back in grad school).
Remember that the Frechet derivative of a function $$f : X \rightarrow \mathbb{R}$$ at a point $$x$$ is defined as the unique linear operator $$d$$ that is tangent to $$f$$ at $$x$$, i.e. that satisfies
$f(x+h) = f(x) + d(h) + o(\|h\|).$
This definition of differentiability makes sense whenever $$X$$ is a normed linear space. If $$f$$ has a gradient, then the Frechet derivative exists and the gradient satisfies the relation $$d(h) = \langle \nabla f(x), h \rangle.$$
### Simple application
As an example application, lets compute the gradient of the function
$f(X) = \langle A, XX^T \rangle := \mathrm{trace}(A^T XX^T) = \sum_{ij} A_{ij} (XX^T)_{ij}$
over the linear space of $$m$$ by $$n$$ real-valued matrices equipped with the Frobenius norm. First we can expand out $$f(X+H)$$ as
$f(X + H) = \langle A, (X+H)(X+H)^T \rangle = \langle A, XX^T + XH^T + HX^T + HH^T \rangle$
Now we observe that the terms which involve more than one power of $$H$$ are $$O(\|H\|^2) = o(\|H\|)$$ as $$H \rightarrow 0$$, so
$f(X + H) = f(X) + \langle A, XH^T + HX^T \rangle + o(\|H\|).$
It follows that
$d(H) = \langle A, XH^T + HX^T \rangle = \mathrm{trace}(A^TXH^T) + \mathrm{trace}(A^THX^T),$
which is clearly a linear function of $$H$$ as desired. To write this in a way that exposes the gradient, we use the
cyclicity properties of the trace, and exploit its invariance under transposes to see that
\begin{align}
d(H) & = \mathrm{trace}(HX^TA) + \mathrm{trace}(X^TA^T H) \\
& = \mathrm{trace}(X^TAH) + \mathrm{trace}(X^TA^T H) \\
& = \langle AX, H \rangle + \langle A^TX, H \rangle \\
& = \langle (A + A^T)X, H \rangle.
\end{align}
The gradient of $$f$$ at $$X$$ is evidently $$(A + A^T)X$$.
### More complicated application
If you have the patience to work through a lot of algebra, you could probably calculate the above gradient component by component using the standard rules of differential calculus, then back out the simple matrix expression $$(A + A^T)X$$. But what if we partitioned $$X$$ into $$X = [\begin{matrix}X_1^T & X_2^T \end{matrix}]^T$$ and desired the derivative of
$f(X_1, X_2) = \mathrm{trace}\left(A \left[\begin{matrix} X_1 \\ X_2 \end{matrix}\right] \left[\begin{matrix}X_1 \\ X_2 \end{matrix} \right]^T\right)$
with respect to $$X_2$$? Then the bookkeeping necessary becomes even more tedious if you want to compute component by component derivatives (I imagine, not having attempted it). On the other hand, the Frechet derivative route is not significantly more complicated.
Some basic manipulations allow us to claim
\begin{align}
f(X_1, X_2 + H) & = \mathrm{trace}\left(A \left[\begin{matrix} X_1 \\ X_2 + H \end{matrix}\right] \left[\begin{matrix}X_1 \\ X_2 + H \end{matrix} \right]^T\right) \\
& = f(X_1, X_2) + \mathrm{trace}\left(A \left[\begin{matrix} 0 & X_1 H^T \\
H X_2^T & H X_2^T + X_2 H^T + H H^T \end{matrix} \right]\right)
\end{align}
Once again we drop the $$o(\|H\|)$$ terms to see that
$d(H) = \mathrm{trace}\left(A \left[\begin{matrix} 0 & X_1 H^T \\ H X_2^T & H X_2^T + X_2 H^T \end{matrix} \right]\right).$
To find a simple expression for the gradient, we partition $$A$$ (conformally with our partitioning of $$X$$ into $$X_1$$ and $$X_2$$) as
$A = \left[\begin{matrix} A_1 & A_2 \\ A_3 & A_4 \end{matrix} \right].$
Given this partitioning,
\begin{align}
d(H) & = \mathrm{trace}\left(\left[\begin{matrix}
A_2 H X_1^T & \\
& A_3 X_1 H^T + A_4 H X_2^T + A_4 X_2 H^T
\end{matrix}\right] \right) \\
& = \langle A_2^TX_1, H \rangle + \langle A_3X_1, H \rangle + \langle A_4^T X_2, H \rangle + \langle A_4X_2, H \rangle \\
& = \langle (A_2^T + A_3)X_1 + (A_4^T + A_4)X_2, H \rangle.
\end{align}
The first equality comes from noting that the trace of a block matrix is simply the trace of its diagonal parts, and the second comes from manipulating the traces using their cyclicity and invariance to transposes.
Thus $$\nabla_{X_2} f(X_1, X_2) = (A_2^T + A_3)X_1 + (A_4^T + A_4)X_2.$$
### A masterclass application
Maybe you didn’t find the last example convincing. Here’s a function I needed to compute the matrix gradient for— a task which I defy you to accomplish using standard calculus operations—:
$f(V) = \langle 1^T K^T, \log(1^T \mathrm{e}^{VV^T}) \rangle = \log(1^T \mathrm{e}^{VV^T})K1.$
Here, $$K$$ is an $$n \times n$$ matrix (nonsymmetric in general), $$V$$ is an $$n \times d$$ matrix, and $$1$$ is a column vector of ones of length $$n$$. The exponential $$\mathrm{e}^{VV^T}$$ is computed entrywise, as is the $$\log$$.
To motivate why you might want to take the gradient of this function, consider the situation that $$K_{ij}$$ measures how similar items $$i$$ and $$j$$ are in a nonsymmetric manner, and the rows of $$V$$ are coordinates for representations of the items in Euclidean space. Then $$(1^T K)_j$$ measures how similar item $$j$$ is to all the items, and
$(1^T \mathrm{e}^{VV^T})_j = \sum_{\ell=1}^n \mathrm{e}^{v_\ell^T v_j}$
is a measure of how similar the embedding $$v_j$$ is to the embeddings of all the items. Thus, if we constrain all the embeddings to have norm 1, maximizing $$f(V)$$ with respect to $$V$$ ensures that the embeddings capture the item similarities in some sense. (Why do you care about this particular sense? That’s another story altogether.)
Ignoring the constraints (you could use a projected gradient method for the optimization problem), we’re now interested in finding the gradient of $$f$$. In the following, I use the notation $$A \odot B$$ to indicate the pointwise product of two matrices.
\begin{align}
f(V + H) & = \langle 1^T K, \log(1^T \mathrm{e}^{(V+H)(V+H)^T} \rangle \\
& = \langle 1^T K, \log(1^T [\mathrm{e}^{VV^T} \odot \mathrm{e}^{VH^T} \odot \mathrm{e}^{HV^T} \odot \mathrm{e}^{HH^T} ]) \rangle
\end{align}
One can use the series expansion of the exponential to see that
\begin{align}
\mathrm{e}^{VH^T} & = 11^T + VH^T + o(\|H\|), \\
\mathrm{e}^{HV^T} & = 11^T + HV^T + o(\|H\|), \text{ and}\\
\mathrm{e}^{HH^T} & = 11^T + o(\|H\|).
\end{align}
It follows that
\begin{multline}
f(V + H) = \langle 1^T K, \log(1^T [\mathrm{e}^{VV^T} \odot (11^T + VH^T + o(\|H\|)) \\
\odot (11^T + HV^T + o(\|H\|)) \odot (11^T + o(\|H\|)) ]) \rangle.
\end{multline}
\begin{align}
f(V + H) & = \langle 1^T K, \log(1^T [\mathrm{e}^{VV^T} \odot(11^T + VH^T + HV^T + o(\|H\|) )]) \rangle \\
& = \langle 1^T K, \log(1^T [\mathrm{e}^{VV^T} + e^{VV^T} \odot (VH^T + HV^T) + o(\|H\|) )]) \rangle
\end{align}
Now recall the linear approximation of $$\log$$:
$\log(x) = \log(x_0) + \frac{1}{x_0} (x-x_0) + o(|x- x_0|^2).$
Apply this approximation pointwise to conclude that
\begin{multline}
f(V + H) = \langle 1^T K, \log(1^T \mathrm{e}^{VV^T}) + \\
\{1^T \mathrm{e}^{VV^T}\}^{-1}\odot (1^T [\mathrm{e}^{VV^T} \odot (VH^T + HV^T) + o(\|H\|)]) \rangle,
\end{multline}
where $$\{x\}^{-1}$$ denotes the pointwise inverse of a vector.
Take $$D$$ to be the diagonal matrix with diagonal entries given by $$1^T \mathrm{e}^{VV^T}$$. We have shown that
$f(V + H) = f(V) + \langle K^T1, D^{-1} [\mathrm{e}^{VV^T} \odot (VH^T + HV^T)]1 \rangle + o(\|H\|),$
so
\begin{align}
d(H) & = \langle K^T1, D^{-1} [\mathrm{e}^{VV^T} \odot (VH^T + HV^T)]1 \rangle \\
& = \langle D^{-1}K^T 11^T, \mathrm{e}^{VV^T} \odot (VH^T + HV^T) \rangle \\
& = \langle \mathrm{e}^{VV^T} \odot D^{-1}K^T 11^T, (VH^T + HV^T) \rangle.
\end{align}
The second inequality follows from the standard properties of inner products and the third from the observation that
$\langle A, B\odot C \rangle = \sum_{ij} A_{ij}*B_{ij}*C_{ij} = \langle B \odot A, C \rangle.$
Finally, manipulations in the vein of the two preceding examples allow us to claim that
$\nabla_V f(V) = [\mathrm{e}^{VV^T} \odot (11^T K D^{-1} + D^{-1} K^T 11^T)] V.$
As a caveat, note that if instead $$f(V) = \log(1^T \mathrm{e}^{VV^T} ) K^T 1$$, then one should substitute $$K$$ for $$K^T$$ in the last expression.
• http://thousandfold.net/cz Alex Gittens
I realize that the second example is a trivial application of the first. Lets pretend that I intended it to be that way …
• Stephen
Very nice. This comes up for me a lot too and I’ve spent a lot of time with this. I’m now using these notes: http://research.microsoft.com/en-us/um/people/minka/papers/matrix/minka-matrix.pdf They go over constraints too. For example, if f(X) = tr( A^T X ) then usually grad f = A. But suppose we require X to be symmetric (but maybe A isn’t) as part of the definition of f. Then grad f = (A+A^T) – diag(A), which is not obvious.
• http://thousandfold.net/cz Alex Gittens
That’s a nice compendium of results. Thanks for the link.
• glynnec2008 .
The Frobenius product A:B has some nice properties. One of which, as you noted, is that it commutes with the Hadamard product, i.e. A : B @ C = A @ B : C
Another is its behavior with respect to skew/sym decompositions
sym(A) : skew(B) = 0
A : sym(B) = sym(A) : B
Finally, for a function G applied entrywise to a matrix argument Y, we have the nice result
dG(Y) = g(Y) @ dY (where g is the derivative of G)
Your masterclass is an example of this pattern of problem
Z = V.V’
Y = B.H(Z)
f = A : G(Y)
Specifically
G(x)=log(x), g(x)=1/x,
H(x)=exp(x), h(x)=exp(x),
A=1′.K’, B=1′
Hopefully the following isn’t too cryptic.
df = A:dG = A:g@dY = A@g:dY = A@g:(B.dH) = B’.A@g:dH
= B’.A@g:(h@dZ) = (B’.A@g)@h:dZ
= (B’.A@g)@h:(dV.V’ + V.dV’) = (B’.A@g)@h : 2 sym(dV.V’)
= 2 sym[(B'.A@g)@h] : dV.V’
= 2 sym[(B'.A@g)@h].V : dV
df/dV = 2 sym[(B'.A@g)@h].V | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9737944006919861, "perplexity": 1139.8357901724437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657125488.38/warc/CC-MAIN-20140914011205-00264-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
https://wumbo.net/figure/cosine-function/ | # Cosine Function Plot
## Description
This figure illustrates the graph of cosine function from to (tau) radians. The cosine function returns the horizontal component of the point formed by the angle (theta) on the unit circle. The geometric constant (tau) has the approximate value of .
Figure | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.988818347454071, "perplexity": 1336.7675921008497}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107881551.11/warc/CC-MAIN-20201023234043-20201024024043-00362.warc.gz"} |
https://jp.b-ok.org/book/3373273/8244ad | メイン Error Control Coding: Fundamentals and Applications
# Error Control Coding: Fundamentals and Applications
,
Using a minimum of mathematics, this volume covers the fundamentals of coding and the applications of codes to the design of real error control systems
1983
エディション:
1ed.
Prentice-Hall
english
ページ数:
603 / 625
ISBN 13:
9780132837965
ISBN:
013283796X
Series:
Prentice-Hall Computer Applications in Electrical Engineerin
File:
DJVU, 9.30 MB
You can write a book review and share your experiences. Other readers will always be interested in your opinion of the books you've read. Whether you've loved the book or not, if you give your honest and detailed thoughts then people will find new books that are right for them.
1
1999
english
File:
DJVU, 14.95 MB
2
### Neural Networks and Statistical Learning
2014
english
File:
PDF, 7.82 MB
```SHO LIN I DANIEL J. CO\$TELLO,Jr. unda entals an* p«lications MKNTfCr-fMU SffffES IN COMFUTH APPLICATIONS IN tUCTXtCAL GINEERING Fronklin F. Kuo, Editor
ERROR CONTROL CODING Fundamentals and Applications
PRENTICE-HALL COMPUTER APPLICATIONS IN ELECTRICAL ENGINEERING SERIES FRANKLIN F. KUO, editor Abramson and Kuo, Computer-Communication Networks Bowers and Sedore, Sceptre: A Computer Program for Circuit and Systems Anal: Cadzow, Discrete Time Systems: An Introduction with Interdisciplinary Applications Cadzow and Martens, Discrete-Time and Computer Control Systems Davis, Computer Data Displays Friedman and Menon, Fault Detection in Digital Circuits Huelsman, Basic Circuit Theory Jensen and Lieberman, IBM Circuit Analysis Program: Techniques and Applications Jensen and Watkins, Network Analysis: Theory and Computer Methods Kline, Digital Computer Design Kochenburger, Computer Simulation of Dynamic Systems Kuo, (ed.) Protocols and Techniques for Data Communication Networks Kuo and Magnuson, Computer Oriented Circuit Design Lin, An Introduction to Error-Correcting Codes Lin and Costello, Error Control Coding: Fundamentals and Applications Nagle, Carroll, and Irwin, An Introduction to Computer Logic Rhyne, Fundamentals of Digitals Systems Design Sifferlen and Vartanian, Digital Electronics with Engineering Applications Staudhammer, Circuit Analysis by Digital Computer Stoutemyer, PL/1 Programming for Engineering and Science
ERROR CONTROL CODING Fundamentals and Applications SHU LIN University of Hawaii Texas A&M University DANIEL J. COSTELLO, JR. Illinois Institute of Technology Prentice-Hall, Inc. Englewood Cliffs, New Jersey 07632
Library of Congress Cataloging in Publication Data Lin, Shu. Error control coding. (Prentice-Hall computer applications in electrical engineering series) Includes bibliographical references and index. 1. Error-correcting codes (Information theory) 1. Costello, Daniel J. . II. Title. III. Series. QA268.L55 001.53'9 82-5255 ISBN 0-13-283796-X AACR2 Editorial/production supervisi; on and interior design by Anne Simpson Cover design by Marvin Warshaw Manufacturing buyer: Joyce Levatino © 1983 by Prentice-Hall, Inc., Englewood Cliffs, N.J. 07632 All rights reserved. No part of this book may be reproduced in any form or by any means without permission in writing from the publisher. Printed in the United States of America 10 9 8 7 ISBN O-ia-SflBT^b-X PRENTICE-HALL INTERNATIONAL, INC., London PRENTICE-HALL OF AUSTRALIA PTY. LIMITED, Sydney EDITORA PRENTICE-HALL DO BRAZIL, LTDA, Rio de Janeiro PRENTICE-HALL CANADA INC., Toronto PRENTICE-HALL OF INDIA PRIVATE LIMITED, New Delhi PRENTICE-HALL OF JAPAN, INC., Tokyo PRENTICE-HALL OF SOUTHEAST ASIA PTE. LTD., Singapore WHITEHALL BOOKS LIMITED, Wellington, New Zealand
With Love and Affection for Ivy, Julian, Patrick, and Michelle Lin and Lucretia, Kevin, Nick, Daniel, and Anthony Costello
Contents PREFACE xiii CHAPTER I CODING FOR REUABLE DIGITAL TRANSMISSION AND STORAGE 1 1.1 Introduction 7 1.2 Types of Codes 3 1.3 Modulation and Demodulation 5 1.4 Maximum Likelihood Decoding 8 1.5 Types of Errors 77 1.6 Error Control Strategies 12 References 74 CHAPTER 2 INTRODUCTION TO ALGEBRA 15 2.1 Groups 75 2.2 Fields 19 2.3 Binary Field Arithmetic 24 2.4 Construction of Galois Field GF(2m) 29 2.5 Basic Properties of Galois Field GF(2,n) 34 2.6 Computations Using Galois Field GF(2m) Arithmetic 39 2.7 Vector Spaces 40 2.8 Matrices 46 Problems 48 References 50 vii
CHAPTEH 3 LINEAR BLOCK CODES 57 3.1 Introduction to Linear Block Codes 57 3.2 Syndrome and Error Detection 58 3.3 Minimum Distance of a Block Code 63 3.4 Error-Detecting and Error-Correcting Capabilities of a Block Code 65 3.5 Standard Array and Syndrome Decoding 68 3.6 Probability of an Undetected Error for Linear Codes over a BSC 76 3.7 Hamming Codes 79 Problems 82 References 84 CHAPTER 4 CYCUC CODES 85 4.1 Description of Cyclic Codes 85 4.2 Generator and Parity-Check Matrices of Cyclic Codes 92 4.3 Encoding of Cyclic Codes 95 4.4 Syndrome Computation and Error Detection 98 4.5 Decoding of Cyclic Codes 703 4.6 Cyclic Hamming Codes 777 4.7 Shortened Cyclic Codes 776 Problems 727 References 723 CHAPTER 5 ERROR-TRAPPING DECODING FOR CYCUC CODES 725 5.1 Error-Trapping Decoding 725 5.2 Improved Error-Trapping Decoding 737 5.3 The Golay Code 734 Problems 739 References 739 CHAPTER 6 BCH CODES 141 6.1 Description of the Codes 742 6.2 Decoding of the BCH Codes 757 6.3 Implementation of Galois Field Arithmetic 767 6.4 Implementation of Error Correction 767 6.5 Nonbinary BCH Codes and Reed-Solomon Codes 770 6.6 Weight Distribution and Error Detection of Binary BCH Codes 777 Problems 780 References 782 viii Contents
CHAPTER 7 MAJORITY-LOGIC DECODING FOR CYCLIC CODES 184 7.1 One-Step Majority-Logic Decoding 184 7.2 Class of One-Step Majority-Logic Decodable Codes 7.3 Other One-Step Majority-Logic Decodable Codes 7.4 Multiple-Step Majority-Logic Decoding 209 Problems 219 References 227 194 201 CHAPTER 8 FINITE GEOMETRY CODES 223 8.1 Euclidean Geometry 223 8.2 Majority-Logic Decodable Cyclic Codes Based on Euclidean Geometry 227 8.3 Projective Geometry and Projective Geometry Codes 8.4 Modifications of Majority-Logic Decoding 245 Problems 253 References 255 240 CHAPTER 9 BURST-ERROR-CORRECTING CODES 257 9.1 Introduction 257 9.2 Decoding of Single-Burst-Error-Correcting Cyclic Codes 9.3 Single-Burst-Error-Correcting Codes 267 9.4 Interleaved Codes 277 9.5 Phased-Burst-Error-Correcting Codes 272 9.6 Burst-and-Random-Error-Correcting Codes 274 9.7 Modified Fire Codes for Simultaneous Correction of Burst and Random Errors 280 Problems 282 References 284 259 CHAPTER 10 CONVOLUTIONAL CODES 287 10.1 Encoding of Convolutional Codes 288 10.2 Structural Properties of Convolutional Codes 10.3 Distance Properties of Convolutional Codes Problems 372 References 373 295 308 CHAPTER II MAXIMUM UKEUHOOD DECODING OF CONVOLUTIONAL CODES 315 11.1 The Viterbi Algorithm 375 11.2 Performance Bounds for Convolutional Codes 322 Contents IX
11.3 Construction of Good Convolutional Codes 329 11.4 Implementation of the Viterbi Algorithm 337 11.5 Modifications of the Viterbi Algorithm 345 Problems 346 References 348 CHAPTER 12 SEQUENTIAL DECODING OF CONVOLUTIONAL CODES 350 12.1 The Stack Algorithm 357 12.2 The Fano Algorithm 360 12.3 Performance Characteristics of Sequential Decoding 364 12.4 Code Construction for Sequential Decoding 374 12.5 Other Approaches to Sequential Decoding 380 Problems 384 References 386 CHAPTER 13 MAJORITY-LOGIC DECODING OF CONVOLUTIONAL CODES 388 13.1 Feedback Decoding 389 13.2 Error Propagation and Definite Decoding 406 13.3 Distance Properties and Code Performance 408 13.4 Code Construction for Majority-Logic Decoding 414 13.5 Comparison with Probabilistic Decoding 424 Problems 426 References 428 CHAPTER 14 BURST-ERROR-CORRECTING CONVOLUTIONAL CODES 429 14.1 Bounds on Burst-Error-Correcting Capability 430 14.2 Burst-Error-Correcting Convolutional Codes 430 14.3 Interleaved Convolutional Codes 441 14.4 Burst-and-Random-Error-Correcting Convolutional Codes 442 Problems 455 References 456 CHAPTER 15 AUTOMATIC-REPEAT-REQUEST STRATEGIES 458 15.1 Basic ARQ Schemes 459 15.2 Selective-Repeat ARQ System with Finite Receiver Buffer 465 15.3 ARQ Schemes with Mixed Modes of Retransmission 474 15.4 Hybrid ARQ Schemes 477 15.5 Class of Half-Rate Invertible Codes 481 x Contents
15.6 Type II Hybrid Selective-Repeat ARQ with Finite Receiver Buffer 483 Problems 494 References 495 CHAPTER 16 APPUCATIONS OF BLOCK CODES FOR ERROR CONTROL IN DATA STORAGE SYSTEMS 498 16.1 Error Control for Computer Main Processor and Control Storages 498 16.2 Error Control for Magnetic Tapes 503 16.3 Error Control in IBM 3850 Mass Storage System 576 16.4 Error Control for Magnetic Disks 525 16.5 Error Control in Other Data Storage Systems 537 Problems 532 References 532 CHAPTER 17 PRACTICAL APPUCATIONS OF CONVOLUTIONAL CODES 533 17.1 Applications of Viterbi Decoding 533 17.2 Applications of Sequential Decoding 539 17.3 Applications of Majority-Logic Decoding 543 17.4 Applications to Burst-Error Correction 547 17.5 Applications of Convolutional Codes in ARQ Systems 557 Problems 556 References 557 Appendix A Tables of Galois Fields 561 Appendix B Minimal Polynomials of Elements in GF(2m) 579 Appendix C Generator Polynomials of Binary Primitive BCH Codes of Length up to 210 — 1 583 INDEX 599 Contents XI
Preface This book owes its beginnings to the pioneering work of Claude Shannon in 1948 on achieving reliable communication over a noisy transmission channel. Shannon's central theme was that if the signaling rate of the system is less than the channel capacity, reliable communication can be achieved if one chooses proper encoding and decoding techniques. The design of good codes and of efficient decoding methods, initiated by Hamming, Slepian, and others in the early 1950s, has occupied the energies of many researchers since then. Much of this work is highly mathematical in nature, and requires an extensive background in modern algebra and probability theory to understand. This has acted as an impediment to many practicing engineers and computer scientists, who are interested in applying these techniques to real systems. One of the purposes of this book is to present the essentials of this highly complex material in such a manner that it can be understood and applied with only a minimum of mathematical background. Work on coding in the 1950s and 1960s was devoted primarily to developing the theory of efficient encoders and decoders. In 1970, the first author published a book entitled An Introduction to Error-Correcting Codes, which presented the fundamentals of the previous two decades of work covering both block and convolutional codes. The approach was to explain the material in an easily understood manner, with a minimum of mathematical rigor. The present book takes the same approach to covering the fundamentals of coding. However, the entire manuscript has been rewritten and much new material has been added. In particular, during the 1970s the emphasis in coding research shifted from theory to practical applications. Consequently, three completely new chapters on the applications of coding to digital transmission and storage systems have been added. Other major additions include a comprehensive treatment of the error-detecting capabilities of block codes, and an emphasis on probabilistic decoding methods for convolutional codes. A brief description of each chapter follows. Chapter 1 presents an overview of coding for error control in data transmission XIII
and storage systems. A brief discussion of modulation and demodulation serves to place coding in the context of a complete system. Chapter 2 develops those concepts from modern algebra that are necessary to an understanding of the material in later chapters. The presentation is at a level that can be understood by students in the senior year as well as by practicing engineers and computer scientists. Chapters 3 through 8 cover in detail block codes for random-error correction. The fundamentals of linear codes are presented in Chapter 3. Also included is an extensive section on error detection with linear codes, an important topic which is discussed only briefly in most other books on coding. Most linear codes used in practice are cyclic codes. The basic structure and properties of cyclic codes are presented in Chapter 4. A simple way of decoding some cyclic codes, known as error- trapping decoding, is covered in Chapter 5. The important class of BCH codes for multiple-error correction is presented in detail in Chapter 6. A discussion of hardware and software implementation of BCH decoders is included, as well as the use of BCH codes for error detection. Chapters 7 and 8 provide detailed coverage of majority- logic decoding and majority-logic decodable codes. The material on fundamentals of block codes concludes with Chapter 9 on burst-error correction. This discussion includes codes for correcting a combination of burst and random errors. Chapters 10 through 14 are devoted to the presentation of the fundamentals of convolutional codes. Convolutional codes are introduced in Chapter 10, with the encoder state diagram serving as the basis for studying code structure and distance properties. The Viterbi decoding algorithm for both hard and soft demodulator decisions is covered in Chapter 11. A detailed performance analysis based on code distance properties is also included. Chapter 12 presents the basics of sequential decoding using both the stack and Fano algorithms. The difficult problem of the computational performance of sequential decoding is discussed without including detailed proofs. Chapter 13 covers majority-logic decoding of convolutional codes. The chapter concludes with a comparison of the three primary decoding methods for convolutional codes. Burst-error-correcting convolutional codes are presented in Chapter 14. A section is included on convolutional codes that correct a combination of burst and random errors. Burst-trapping codes, which embed a block code in a convolutional code, are also covered here. Chapters 15 through 17 cover a variety of applications of coding to modern day data communication and storage systems. Although they are not intended to be comprehensive, they are representative of the many different ways in which coding is used as a method of error control. This emphasis on practical applications makes the book unique in the coding literature. Chapter 15 is devoted to automatic-repeat- request (ARQ) error control schemes used for data communications. Both pure ARQ (error detection with retransmission) and hybrid ARQ (a combination of error correction and error detection with retransmission) are discussed. Chapter 16 covers the application of block codes for error control in data storage systems. Coding techniques for computer memories, magnetic tape, magnetic disk, and optical storage systems are included. Finally, Chapter 17 presents a wide range of applications of convolutional codes to digital communication systems. Codes actually used on many space and satellite systems are included, as well as a section on using convolutional codes in a hybrid ARQ system. Preface
Several additional features are included to make the book useful both as a classroom text and as a comprehensive reference for engineers and computer scientists involved in the design of error control systems. Three appendices are given which include details of algebraic structure used in the construction of block codes. Many tables of the best known codes for a given decoding structure are presented throughout the book. These should prove valuable to designers looking for the best code for a particular application. A set of problems is given at the end of each chapter. Most of the problems are relatively straightforward applications of material covered in the text, although some more advanced problems are also included. There are a total of over 250 problems. A solutions manual will be made available to instructors using the text. Over 300 references are also included. Although no attempt was made to compile a complete bibliography on coding, the references listed serve to provide additional detail on topics covered in the book. The book can be used as a text for an introductory course on error-correcting codes and their applications at the senior or beginning graduate level. It can also be used as a self-study guide for engineers and computer scientists in industry who want to learn the fundamentals of coding and how they can be applied to the design of error control systems. As a text, the book can be used as the basis for a two-semester sequence in coding theory and applications, with Chapters 1 through 9 on block codes covered in one semester and Chapters 10 through 17 on convolutional codes and applications in a second semester. Alternatively, portions of the book can be covered in a one- semester course. One possibility is to cover Chapters 1 through 6 and 10 through 12, which include the basic fundamentals of both block and convolutional codes. A course on block codes and applications can be comprised of Chapters 1 through 6, 9, 15, and 16, whereas Chapters 1 through 3, 10 through 14, and 17 include convolutional codes and applications as well as the rudiments of block codes. Preliminary versions of the notes on which the book is based have been classroom tested by both authors for university courses and for short courses in industry, with very gratifying results. It is difficult to identify the many individuals who have influenced this work over the years. Naturally, we both owe a great deal of thanks to our thesis advisors, Professors Paul E. PfeifTer and James L. Massey. Without their stimulating our interest in this exciting field and their constant encouragement and guidance through the early years of our research, this book would not have been possible. Much of the material in the first half of the book on block codes owes a great deal to Professors W. Wesley Peterson and Tadao Kasami. Their pioneering work in algebraic coding and their valuable discussions and suggestions had a major impact on the writing of this material. The second half of the book on convolutional codes was greatly influenced by Professor James L. Massey. His style of clarifying the basic elements in highly complex subject matter was instrumental throughout the preparation of this material. In particular, most of Chapter 14 was based on a set of notes that he prepared. We are grateful to the National Science Foundation, and to Mr. Elias Schutz- man, for their continuing support of our research in the coding field. Without this assistance, our interest in coding could never have developed to the point of writing Preface xv
this book. We thank the University of Hawaii and Illinois Institute of Technology for their support of our efforts in writing this book and for providing facilities. We also owe thanks to Professor Franklin F. Kuo for suggesting that we write this book, and for his constant encouragement and guidance during the preparation of the manuscript. Another major source of stimulation for this effort came from our graduate students, who have provided a continuing stream of new ideas and insights. Those who have made contributions directly reflected in this book include Drs. Pierre Chevillat, Farhad Hemmati, Alexander Drukarev, and Michael J. Miller. We would like to express our special appreciation to Professors Tadao Kasami, Michael J. Miller, and Yu-ming Wang, who read the first draft very carefully and made numerous corrections and suggestions for improvements. We also wish to thank our secretaries for their dedication and patience in typing this manuscript. Deborah Waddy and Michelle Masumoto deserve much credit for their perseverence in preparing drafts and redrafts of this work. Finally, we would like to give special thanks to our parents, wives, and children for their continuing love and affection throughout this project. Shu Lin Daniel J. Costello, Jr. xvi Preface
ERROR CONTROL CODING Fundamentals and Applications
7 Coding for Reliable Digital Transmission and Storage 1.1 INTRODUCTION In recent years, there has been an increasing demand for efficient and reliable digital data transmission and storage systems. This demand has been accelerated by the emergence of large-scale, high-speed data networks for the exchange, processing, and storage of digital information in the military, governmental, and private spheres. A merging of communications and computer technology is required in the design of these systems. A major concern of the designer is the control of errors so that reliable reproduction of data can be obtained. In 1948, Shannon [1] demonstrated in a landmark paper that, by proper encoding of the information, errors induced by a noisy channel or storage medium can be reduced to any desired level without sacrificing the rate of information transmission or storage. Since Shannon's work, a great deal of effort has been expended on the problem of devising efficient encoding and decoding methods for error control in a noisy environment. Recent developments have contributed toward achieving the reliability required by today's high-speed digital systems, and the use of coding for error control has become an integral part in the design of modern communication systems and digital computers. The transmission and storage of digital information have much in common. They both transfer data from an information source to a destination (or user). A typical transmission (or storage) system may be represented by the block diagram shown in Figure 1.1. The information source can be either a person or a machine (e.g., a digital computer). The source output, which is to be communicated to the destination, can be either a continuous waveform or a sequence of discrete symbols. 1
Information source Source encoder Destination Source decoder u A 11 Channel encoder Noise — Channel decoder V r Modulator (writing unit) y Channel (storage medium) Y Demodulator (reading unit) Figure 1.1 Block diagram of a typical data transmission or storage system. The source encoder transforms the source output into a sequence of binary digits (bits) called the information sequence u. In the case of a continuous source, this involves analog-to-digital (A/D) conversion. The source encoder is ideally designed so that (1) the number of bits per unit time required to represent the source output is minimized, and (2) the source output can be reconstructed from the information sequence u without ambiguity. The subject of source coding is not discussed in this book. For a thorough treatment of this important topic, see References 2 and 3. The channel encoder transforms the information sequence u into a discrete encoded sequence v called a code word. In most instances v is also a binary sequence, although in some applications nonbinary codes have been used. The design and implementation of channel encoders to combat the noisy environment in which code words must be transmitted or stored is one of the major topics of this book. Discrete symbols are not suitable for transmission over a physical channel or recording on a digital storage medium. The modulator (or writing unit) transforms each output symbol of the channel encoder into a waveform of duration T seconds which is suitable for transmission (or recording). This waveform enters the channel (or storage medium) and is corrupted by noise. Typical transmission channels include telephone lines, high-frequency radio links, telemetry links, microwave links, satellite links, and so on. Typical storage media include core and semiconductor memories, magnetic tapes, drums, disk files, optical memory units, and so on. Each of these examples is subject to various types of noise disturbances. On a telephone line, the disturbance may come from switching impulse noise, thermal noise, crosstalk from other lines, or lightning. On magnetic tape, surface defects are regarded as a noise disturbance. The demodulator (or reading unit) processes each received waveform of duration T and produces an output that may be discrete (quantized) or continuous (unquantized). The sequence of demodulator outputs corresponding to the encoded sequence v is called the received sequence r. The channel decoder transforms the received sequence r into a binary sequence u called the estimated sequence. The decoding strategy is based on the rules of channel encoding and the noise characteristics of the channel (or storage medium). Ideally, u 2 Coding for Reliable Digital Transmission and Storage Chap. 1
will be a replica of the information sequence u, although the noise may cause some decoding errors. Another major topic of this book is the design and implementation of channel decoders that minimize the probability of decoding error. The source decoder transforms the estimated sequence ft into an estimate of the source output and delivers this estimate to the destination. When the source is continuous, this involves digital-to-analog (D/A) conversion. In a well-designed system, the estimate will be a faithful reproduction of the source output except when the channel (or storage medium) is very noisy. To focus attention on the channel encoder and channel decoder, (1) the information source and source encoder are combined into a digital source with output u; (2) the modulator (or writing unit), the channel (or storage medium), and the demodulator (or reading unit) are combined into a coding channel with input v and output r; and (3) the source decoder and destination are combined into a digital sink with input u. A simplified block diagram is shown in Figure 1.2. Digital source Digital sink u u Encoder Nois Decoder V r i r Coding channel Figure 1.2 Simplified model of a coded system. The major engineering problem that is addressed in this book is to design and implement the channel encoder/decoder pair such that (1) information can be transmitted (or recorded) in a noisy environment as fast as possible, (2) reliable reproduction of the information can be obtained at the output of the channel decoder, and (3) the cost of implementing the encoder and decoder falls within acceptable limits. 1.2 TYPES OF CODES There are two different types of codes in common use today, block codes and convolu- tional codes. The encoder for a block code divides the information sequence into message blocks of k information bits each. A message block is represented by the binary /:-tuple u = (w,, w2,. .. , uk) called a message. (In block coding, the symbol u is used to denote a fc-bit message rather than the entire information sequence.) There are a total of 2k different possible messages. The encoder transforms each message u independently into an «-tuple v = (vu v2, • • • , vn) of discrete symbols called a code word. (In block coding, the symbol v is used to denote an w-symbol block rather than the entire encoded sequence.) Therefore, corresponding to the 2k different possible messages, there are 2k different possible code words at the encoder Sec. 1.2 Types of Codes 3
output. This set of 2k code words of length n is called an («, k) block code. The ratio R = k/n is called the code rate, and can be interpreted as the number of information bits entering the encoder per transmitted symbol. Since the /z-symbol output code word depends only on the corresponding k-bit input message, the encoder is memory- less, and can be implemented with a combinational logic circuit. In a binary code, each code word v is also binary. Hence, for a binary code to be useful (i.e., to have a different code word assigned to each message), k < n or R < 1. When k < n, n — k redundant bits can be added to each message to form a code word. These redundant bits provide the code with the capability of combating the channel noise. For a fixed code rate R, more redundant bits can be added by increasing the block length n of the code while holding the ratio k/n constant. How to choose these redundant bits to achieve reliable transmission over a noisy channel is the major problem in designing the encoder. An example of a binary block code with k = 4 and n = 7 is shown in Table 1.1. Chapters 3 through 9 are devoted to the analysis and design of block codes for controlling errors in a noisy environment. TABLE 1.1 BINARY BLOCK CODE WITH * = 4 AND /? = 7 Messages Code words (0 0 0 0) (0000000) (10 0 0) (110 10 0 0) (0 10 0) (0110100) (110 0) (10 1110 0) (0 0 10) (1110 0 10) (10 10) (0011010) (0 110) (10 0 0 110) (1110) (0101110) (0 0 0 1) (10 10 0 0 1) (10 0 1) (0111001) (0 10 1) (110 0 10 1) (110 1) (0001101) (0 0 11) (0100011) (10 11) (10 0 10 11) (0 111) (0010111) (1111) (1111111) The encoder for a convolutional code also accepts k-bit blocks of the information sequence u and produces an encoded sequence (code word) v of w-symbol blocks. (In convolutional coding, the symbols u and v are used to denote sequences of blocks rather than a single block.) However, each encoded block depends not only on the corresponding Ar-bit message block at the same time unit, but also on m previous message blocks. Hence, the encoder has a memory order of m. The set of encoded sequences produced by a &-input, rt-output encoder of memory order m is called an (n, k, m) convolutional code. The ratio R = k/n is called the code rate. Since the encoder contains memory, it must be implemented with a sequential logic circuit. In a binary convolutional code, redundant bits for combating the channel noise 4 Coding for Reliable Digital Transmission and Storage Chap. 1
can be added to the information sequence when k < n or R < 1. Typically, k and n are small integers and more redundancy is added by increasing the memory order m of the code while holding k and ny and hence the code rate R, fixed. How to use the memory to achieve reliable transmission over a noisy channel is the major problem in designing the encoder. An example of a binary convolutional encoder with k = 1, n = 2, and m = 2 is shown in Figure 1.3. As an illustration of how code words are generated, consider the information sequence u = (1 10 10 0 0...), where the leftmost bit is assumed to enter the encoder first. Using the rules of exclusive-or addition, and assuming that the multiplexer takes the first encoded bit from the top output, it is easy to see that the encoded sequence is v = (1 1, I 0, 1 0, 0 0, 0 1, 1 1, 0 0, 0 0, 0 0, . . .). Chapters 10 through 14 are devoted to the analysis and design of convolutional codes for controlling errors in a noisy environment. u \ A * 1 J -*HD Shift register stage © EXCLUSIVE-OR gate Multiplexer Figure 1.3 Binary convultional encoder with k — 1, n = 2, and m = 2. 1.3 MODULATION AND DEMODULATION The modulator in a communication system must select a waveform of duration T seconds, which is suitable for transmission, for each encoder output symbol. In the case of a binary code, the modulator must generate one of two signals, sQ(t) for an encoded "0" or ^(f) for an encoded "I." For a wideband channel, the optimum choice of signals is *o(0 - V¥ Sin {2nfot + f)' ° ~ ' - T *i«=V¥sin (27r/o/ - ■ t) ' ° -' - r> (i.i) Sec. 1.3 Modulation and Demodulation 5
where/0 is a multiple of 1/Jand E is the energy of each signal. This is called binary- phase-shift-keyed (BPSK) modulation, since the transmitted signal is a sine-wave pulse whose phase is either +njl or —n/2, depending on the encoder output. The BPSK modulated waveform corresponding to the code word v = (1 1 0 1 0 0 0) in the code of Table 1.1 is shown in Figure 1.4. s(t) I I ' I I ' I I I I I I I lll|0ll|0|0l0l • ! " ! ! ' ! i i i i i i i Figure 1.4 BPSK modulated waveform corresponding to the code word v = (110 10 0 0). A common form of noise disturbance present in any communication system is additive white Gaussian noise (AWGN). If the transmitted signal is s(t) [= s0(t) or Si(t)]9 the received signal is r(t) = s(t) + n(t), (1.2) where n(t) is a Gaussian random process with one-sided power spectral density (PSD) N0. Other forms of noise are also present in many systems. For example, in a communication system subject to multipath transmission, the received signal is observed to fade (lose strength) during certain time intervals. This fading can be modeled as a multiplicative noise component on the signal s(t). The demodulator must produce an output corresponding to the received signal in each T-second interval. This output may be a real number or one of a discrete set of preselected symbols, depending on the demodulator design. An optimum demodulator always includes a matched filter or correlation detector followed by a sampling switch. For BPSK modulation with coherent detection the sampled output is a real number, P = [ *i)Jy- sin {lnfQt + fj dt. (1.3) The sequence of unquantized demodulator outputs can be passed on directly to the channel decoder for processing. In this case, the channel decoder must be capable of handling analog inputs; that is, it must be an analog decoder. A much more common approach to decoding is to quantize the continuous detector output p into one of a finite number Q of discrete output symbols. In this case, the channel decoder has discrete inputs; that is, it must be a digital decoder. Almost all coded communication systems use some form of digital decoding. If the detector output in a given interval depends only on the transmitted signal 'IE f2E I 6 Coding for Reliable Digital Transmission and Storage Chap. 1
in that interval, and not on any previous transmission, the channel is said to be memoryless. In this case, the combination of an A/-ary input modulator, the physical channel, and a <2-ary output demodulator can be modeled as a discrete memoryless channel (DMC). A DMC is completely described by a set of transition probabilities P(j \i),0<i< M — 1, 0<j<Q— 1, where / represents a modulator input symbol, j represents a demodulator output symbol, and P(j\i) is the probability of receivings given that / was transmitted. As an example, consider a communication system in which (1) binary modulation is used (M = 2), (2) the amplitude distribution of the noise is symmetric, and (3) the demodulator output is quantized to Q = 2 levels. In this case a particularly simple and practically important channel model, called the binary symmetric channel (BSC), results. The transition probability diagram for a BSC is shown in Figure 1.5(a). Note that the transition probability p completely describes the channel. (a) (b) Figure 1.5 Transition probability diagrams: (a) binary symmetric channel; (b) binary-input, @-ary-output discrete memoryless channel. The transition probability/? can be calculated from a knowledge of the signals used, the probability distribution of the noise, and the output quantization threshold of the demodulator. When BPSK modulation is used on an AWGN channel with optimum coherent detection and binary output quantization, the BSC transition probability is just the BPSK bit error probability for equally likely signals given by where Q(x) A (l/^ln) e~yV1 dy is the complementary error function of Gaussian statistics. An upper bound on Q(x) which will be used later in evaluating the error performance of codes on a BSC is e(x)<ie-*V2, x>0. (1.5) When binary coding is used, the modulator has only binary inputs (M = 2). Similarly, when binary demodulator output quantization is used (Q = 2), the decoder has only binary inputs. In this case, the demodulator is said to make hard decisions. Most coded digital communication systems, whether block or convolutional, use binary coding with hard-decision decoding, owing to the resulting simplicity of Sec. 1.3 Modulation and Demodulation 7
implementation compared to nonbinary systems. However, some binary coded systems do not use hard decisions at the demodulator output. When Q > 2 (or the output is left unquantized) the demodulator is said to make soft decisions. In this case the decoder must accept multilevel (or analog) inputs. Although this makes the decoder more difficult to implement, soft-decision decoding offers significant performance improvement over hard-decision decoding, as discussed in Chapter 11. A transition probability diagram for a soft-decision DMC with M = 2 and Q > 2 is shown in Figure 1.5(b). This is the appropriate model for a binary-input AWGN channel with finite output quantization. The transition probabilities can be calculated from a knowledge of the signals used, the probability distribution of the noise, and the output quantization thresholds of the demodulator in a manner similar to the calculation of the BSC transition probability/?. For a more thorough treatment of the calculation of DMC transition probabilities, see References 4 and 5. If the detector output in a given interval depends on the transmitted signal in previous intervals as well as the transmitted signal in the present interval, the channel is said to have memory. A fading channel is a good example of a channel with memory, since the multipath transmission destroys the independence from interval to interval. Appropriate models for channels with memory are difficult to construct, and coding for these channels is normally done on an ad hoc basis. Two important and related parameters in any digital communication system are the speed of information transmission and the bandwidth of the channel. Since one encoded symbol is transmitted every T seconds, the symbol transmission rate (baud rate) is l/T. In a coded system, if the code rate is R = k/n, k information bits correspond to the transmission of n symbols, and the information transmission rate (data rate) is R/Tb'\ls per second (bps). In addition to signal modification due to the effects of noise, all communication channels are subject to signal distortion due to bandwidth limitations. To minimize the effect of this distortion on the detection process, the channel should have a bandwidth W of roughly \/2T hertz (Hz).1 In an uncoded system (R = 1), the data rate is l/T = 2W, and is limited by the channel bandwidth. In a binary-coded system, with a code rate R < 1, the data rate is R/T = 2RW, and is reduced by the factor R compared to an uncoded system. Hence, to maintain the same data rate as the uncoded system, the coded system requires a bandwidth expansion by a factor of \/R. This is characteristic of binary-coded systems: they require some bandwidth expansion to maintain a constant data rate. If no additional bandwidth is available without undergoing severe signal distortion, binary coding is not feasible, and other means of reliable communication must be sought.2 1.4 MAXIMUM LIKELIHOOD DECODING A block diagram of a coded system on an AWGN channel with finite output quantization is shown in Figure 1.6. In a block-coded system, the source output u represents a fc-bit message, the encoder output v represents an ^-symbol code word, the demodu- JThe exact bandwidth required depends on the shape of the signal waveform, the acceptable limits of distortion, and the definition of bandwidth. 2This does not preclude the use of coding, but requires only that a larger set of signals be found. See References 4 to 6. 8 Coding for Reliable Digital Transmission and Storage Chap. 1
Discrete memoryless channel Digital source Encoder Digital sink A u Decoder I I _ r | 1 0-level quantizer Modulator sit) n(t) ► AWGN channel Demodulator r(t) •v Matched filter detector Figure 1.6 Coded system on an additive white Gaussian noise channel. lator output r represents the corresponding Q-ary received H-tuple, and the decoder output u represents thefc-bit estimate of the encoded message. In a convolutional coded system, u represents a sequence of kL information bits and v represents a code word containing N A.nL + nm = n(L + m) symbols, where kL is the length of the information sequence and N is the length of the code word. The additional nm encoded symbols are produced after the last block of information bits has entered the encoder. This is due to the m time unit memory of the encoder, and is discussed more fully in Chapter 10. The demodulator output r is a g-ary received A^-tuple, and the decoder output u is a £L-bit estimate of the information sequence. The decoder must produce an estimate ti of the information sequence u based on the received sequence r. Equivalently, since there is a one-to-one correspondence between the information sequence u and the code word v, the decoder can produce an estimate v of the code word v. Clearly, u = u if and only if v = v. A decoding rule is a strategy for choosing an estimated code word v for each possible received sequence r. If the code word v was transmitted, a decoding error occurs if and only if 9 ^= v. Given that r is received, the conditional error probability of the decoder is defined as P(£|r)AP(v^v|r). (1.6) The error probability of the decoder is then given by P(£)=SP(£|r)P(r). (1.7) r P(r) is independent of the decoding rule used since r is produced prior to decoding. Hence, an optimum decoding rule [i.e., one that minimizes P(E)] must minimize P(E\ r) = P(\ ^ v | r) for all r. Since minimizing P(\$ ^ v | r) is equivalent to maximiz- Sec. 1.4 Maximum Likelihood Decoding 9
ing P(\ = v|r), P(E\r) is minimized for a given r by choosing \$ as the code word v which maximizes P(T|r) = ^lgp. (1.8) that is, v is chosen as the most likely code word given that r is received. If all information sequences, and hence all code words, are equally likely [i.e., P(\) is the same for all v], maximizing (1.8) is equivalent to maximizing P(r\\). For a DMC MLD>- Ar|v) = n^-k-)> (1.9) since for a memoryless channel each received symbol depends only on the corresponding transmitted symbol. A decoder that chooses its estimate to maximize (1.9) is called a maximum likelihood decoder (MLD). Since log x is a monotone increasing function of x, maximizing (1.9) is equivalent to maximizing the log-likelihood function 2L^log/>(r|v)=Elog/>(rf|t;,). (1.10) i An MLD for a DMC then chooses 9 as the code word v that maximizes the sum in (1.10). If the code words are not equally likely, an MLD is not necessarily optimum, since the conditional probabilities P(r|v) must be weighted by the code word probabilities ^(v) to determine which code word maximizes P(y\r). However, in many systems, the code word probabilities are not known exactly at the receiver, making optimum decoding impossible, and an MLD then becomes the best feasible decoding rule. Now consider specializing the MLD decoding rule to the BSC. In this case r is a binary sequence which may differ from the transmitted code word v in some positions because of the channel noise. When rt ^ vt, P(rt. [ v,) = p, and when rt = vt9 P{rt \ vt) = 1 — p. Let d(r, v), be the distance between r and v (i.e., the number of positions in which r and v differ). For a block code of length /*, (1.10) becomes f2<r~ ;>_ ,. log P(r | v) = d(j, v) log/? + [n - d(r, v)] log (1 - p) ^~ n (l.H) = d(r, v) log y£— + n log (1 - p). [For a convolutional code, n in (1.11) is replaced by N.] Since log [p/(\ — /?)]< 0 for p < \ and n log (1 — p) is a constant for all v, the MLD decoding rule for the BSC chooses ? as the code wordy which minimizes the distance d(r, v) between r and\; that is, it chooses the code word that differs from the received sequence in the fewest number of positions. Hence, an MLD for the BSC is sometimes called a minimum distance decoder. The capability of a noisy channel to transmit information reliably was determined by Shannon [1] in his original work. This result, called the noisy channel coding theorem, states that every channel has a channel capacity C, and that for any rate R < C, there exists codes of rate R which, with maximum likelihood decoding, have an arbitrarily small decoding error probability P{E). In particular, for any R<C, there exists block codes of length n such that P(E)<2'nEM, (1.12) and there exists convolutional codes of memory order m such that -1>(E) < 2-{m+1)nE<{R) = 2-"aEAR\ (1.13) 10 Coding for Reliable Digital Transmission and Storage Chap. 1
where nA A (m + \)n is called the code constraint lengths Eb(R) and £c(#)are positive functions of R for R < Cand are completely determined by the channel characteristics. The bound of (1.12) implies that arbitrarily small error probabilities are achievable with block coding for any fixed R < C by increasing the block length n while holding the ratio kjn constant. The bound of (1.13) implies that arbitrarily small error probabilities are achievable with convolutional coding for any fixed R < C by increasing the constraint length nA (i.e., by increasing the memory order m while holding k and n constant). The noisy channel coding theorem is based on an argument called random coding. The bound obtained is actually on the average error probability of the ensemble of all codes. Since some codes must perform better than the average, the noisy channel coding theorem guarantees the existence of codes satisfying (1.12) and (1.13), but does not indicate how to construct these codes. Furthermore, to achieve very low error probabilities for block codes of fixed rate R < C, long block lengths are needed. This requires that the number of code words 2k = 2nR must be very large. Since a MLD must compute logP(r|v) for each code word, and then choose the code word that gives the maximum, the number of computations that must be performed by a MLD becomes excessively large. For convolutional codes, low error probabilities require a large memory order m. As will be seen in Chapter 11, a MLD for convolutional codes requires approximately 2km computations to decode each block of A: information bits. This, too, becomes excessively large as m increases. Hence, it is impractical to achieve very low error probabilities with maximum likelihood decoding. Therefore, two major problems are encountered when designing a coded system to achieve low error probabilities: (1) to construct good long codes whose performance with maximum likelihood decoding would satisfy (1.12) and (1.13), and (2) to find easily implement- able methods of encoding and decoding these codes such that their actual performance is close to what could be achieved with maximum likelihood decoding. The remainder of this book is devoted to finding solutions to these two problems. 1.5 TYPES OF ERRORS On memoryless channels, the noise affects each transmitted symbol independently. As an example, consider the BSC whose transition diagram is shown in Figure 1.5(a). Each transmitted bit has a probability/? of being received incorrectly and a probability 1 — p of being received correctly, independently of other transmitted bits. Hence transmission errors occur randomly in the received sequence, and memoryless channels are called random-error channels. Good examples of random-error channels are the deep-space channel and many satellite channels. Most line-of-sight transmission facilities, as well, are affected primarily by random errors. The codes devised for correcting random errors are called random-error-correcting codes. Most of the codes presented in this book are random-error-correcting codes. In particular, Chapters 3 through 8 and 10 through 13 are devoted to codes of this type. On channels with memory, the noise is not independent from transmission to transmission. A simplified model of a channel with memory is shown in Figure 1.7. This model contains two states, a "good state," in which transmission errors occur infrequently, px *& 0, and a "bad state," in which transmission errors are highly Sec. 1.5 Types of Errors 11
1-*1 1 ~q2 Figure 1.7 Simplified model of a channel with memory. probable, p2 ^ 0.5. The channel is in the good state most of the time, but on occasion shifts to the bad state due to a change in the transmission characteristic of the channel (e.g., a "deep fade" caused by multipath transmission). As a consequence, transmission errors occur in clusters or bursts because of the high transition probability in the bad state, and channels with memory are called burst-error channels. Examples of burst-error channels are radio channels, where the error bursts are caused by signal fading due to multipath transmission, wire and cable transmission, which is affected by impulsive switching noise and crosstalk, and magnetic recording, which is subject to tape dropouts due to surface defects and dust particles. The codes devised for correcting burst errors are called burst-error-correcting codes. Sections 9.1 to 9.5 and 14.1 to 14.3 are devoted to codes of this type. Finally, some channels contain a combination of both random and burst errors. These are called compound channels, and codes devised for correcting errors on these channels are called burst-and-random-error-correcting codes. Sections 9.6, 9.7, and 14.4 are devoted to codes of this type. 1.6 ERROR CONTROL STRATEGIES The block diagram shown in Figure 1.1 represents a one-way system. The transmission (or recording) is strictly in one direction, from transmitter to receiver. Error control for a one-way system must be accomplished using forward error correction (FEC), that is, by employing error-correcting codes that automatically correct errors detected at the receiver. Examples are magnetic tape storage systems, in which the 12 Coding for Reliable Digital Transmission and Storage Chap. 1
the potential for improving throughput in two-way systems subject to a high channel error rate. Various types of ARQ and hybrid ARQ schemes are discussed in Chapter 15 and Section 17.5. REFERENCES 1. C. E. Shannon, "A Mathematical Theory of Communication,*' Bell Syst. Tech. J., 27, pp. 379-423 (Part 1), 623-656 (Part II), July 1948. 2. T. Berger, Rate Distortion Theory, Prentice-Hall, Englewood Cliffs, N.J., 1971. 3. L. Davisson and R. Gray, eds., Data Compression, Dowden, Hutchinson, & Ross, Stroudsburg, Pa., 1976. 4. J. M. Wozencraft and I. M. Jacobs, Principles of Communication Engineering, Wiley, New York, 1965. 5. A. J. Viterbi and J. K. Omura, Principles of Digital Communication and Coding, McGraw- Hill, New York, 1979. 6. R. G. Gallager, Information Theory and Reliable Communication, Wiley, New York, 1968. 14 Coding for Reliable Digital Transmission and Storage Chap. 1
2 Introduction to Algebra The purpose of this chapter is to provide the reader with an elementary knowledge of algebra that will aid in the understanding of the material in the following chapters. The treatment is basically descriptive and no attempt is made to be mathematically rigorous. There are many good textbooks on algebra. The reader who is interested in more advance algebraic coding theory is referred to the textbooks listed at the end of the chapter. BirkhofF and MacLane [2] is probably the most easily understood text on modern algebra. Fraleigh [4] is also a good and fairly simple text. 2.1 GROUPS Let G be a set of elements. A binary operation * on G is a rule that assigns to each pair of elements a and b a uniquely defined third element c = a * b in G. When such a binary operation * is defined on G, we say that G is closed under *. For example, let G be the set of all integers and let the binary operation on G be real addition +. We all know that, for any two integers / and j in G, / + j is a uniquely denned integer in G. Hence, the set of integers is closed under real addition. A binary operation * on G is said to be associative if, for any a, b, and c in G, a * (b * c) — (a * b) * c. Now, we introduce a useful algebraic system called a group. Definition 2.1. A set G on which a binary operation * is defined is called a group if the following conditions are satisfied: (i) The binary operation * is associative. / 15
m addition. We shall call this group an additive group. For m = 2, we obtain the binary group given in Example 2.1. The additive group under modulo-5 addition is given by Table 2.1. TABLE 2.1 MODULO-5 ADDITION ffl 0 1 2 3 4 0 0 1 2 3 4 1 1 2 3 4 0 2 2 3 4 0 1 3 3 4 0 1 2 4 4 0 1 2 3 Finite groups with a binary operation similar to real multiplication can also be constructed. Example 2.3 Let p be a prime (e.g., /? = 2, 3, 5, 7, 11,...). Consider the set of integers, G = {1, 2, 3,..., p — 1}. Let • denote real multiplication. Define a binary operation Q] on G as follows: For i and/ in G, ' □; = '*, where /* is the remainder resulting from dividing i*j by p. First we note that i-j is not divisible by p. Hence, 0 < r < p and r is an element in G. Therefore, the set G is closed under the binary operation El, which is referred to as modulo-p multiplication. The set G = {1, 2,.. ., p — 1} is a group under modulo-p multiplication. We can easily check that modulo-/? multiplication is commutative and associative. The identity element is 1. The only thing left to be proved is that every element in G has an inverse. Let / be an element in G. Since p is a prime and / < /?, / and p must be relatively prime (i.e., i and/? do not have any common factor greater than 1). It is well known that there exist two integers a and b such that a.i + b-p = \ (2.3) and a and p are relatively prime (Euclid's theorem). Rearranging (2.3), we have „./= -£./> +1. (2.4) This says that when a*i is divided by /?, the remainder is 1. If 0 < a < /?, a is in G and it follows from (2.4) and the definition of modulo-/? multiplication that flHi = iQfl = 1. Therefore, a is the inverse of /. However, if a is not in G, we divide a by /?, a = q>p + r. (2.5) Since a and p are relatively prime, the remainder r cannot be 0 and it must be between 1 and/? — 1. Therefore, r is in G. Now, combining (2.4) and (2.5), we obtain r./ = -(/> +qi)p + 1. Therefore, r □ i = i □ r — 1 and r is the inverse of /. Hence, any element i in G has an inverse with respect to modulo-/? multiplication. The group G = {1, 2,...,/? — 1} 18 Introduction to Algebra Chap. 2
under modulo-/? multiplication is called a multiplicative group. For p = 2, we obtain a group G = {1} with only one element under modulo-2 multiplication. If p is /to/ a prime, the set <7 = [1, 2,..., p — 1} is not a group under modulo-/? multiplication (see Problem 2.3). Table 2.2 illustrates the group G = {1, 2, 3, 4} under modulo-5 multiplication. TABLE 2.2 MODULO-5 MULTIPLICATION El l 2 3 4 1 1 2 3 4 2 2 4 1 3 3 3 1 4 2 4 4 3 2 1 Let H be a nonempty subset of G. The subset H is said to be a subgroup of G if//is closed under the group operation of G and satisfies all the conditions of a group. For example, the set of all rational numbers is a group under real addition. The set of all integers is a subgroup of the group of rational numbers under real addition. 2.2 FIELDS Now, we use the group concepts to introduce another algebraic system, called a field. Roughly speaking, a field is a set of elements in which we can do addition, subtraction, multiplication, and division without leaving the set. Addition and multiplication must satisfy the commutative, associative, and distributive laws. A formal definition of a field is given below. Definition 2.2. Let F be a set of elements on which two binary operations, called addition "+" and multiplication "•," are defined. The set /"together with the two binary operations + and • is a field if the following conditions are satisfied: (i) Fisa, commutative group under addition +. The identity element with respect to addition is called the zero element or the additive identity of F and is denoted byO. (ii) The set of nonzero elements in F is a commutative group under multiplication •. The identity element with respect to multiplication is called the unit element or the multiplicative identity of F and is denoted by 1. (iii) Multiplication is distributive over addition; that is, for any three elements a, b, and c in F, a»(b + c) = a*b + a*c. It follows from the definition that a field consists of at least two elements, the additive identity and the multiplicative identity. Later, we will show that a field of two elements does exist. The number of elements in a field is called the order of the Sec. 2.2 Fields 19
field. A field with finite number of elements is called a, finite field. In a field, the additive inverse of an element a is denoted by — a, and the multiplicative inverse of a is denoted by a"1, provided that a ^ 0. Subtracting a field element b from another field element a is defined as adding the additive inverse — b of b to a [i.e., a — b A a + (—b)]. If b is a nonzero element, dividing a by £ is defined as multiplying a by the multiplicative inverse b~l of b [i.e., a — b A. a-b'1]. A number of basic properties of fields can be derived from the definition of a field. Property I. For every element a in a field, a-0 = 0-a = 0. Proof. First we note that a = a-1 = a-(l + 0) = a + a-0. Adding —a to both sides of the equality above, we have —a + a = —a + a + a-0 0 = 0 + a-0 0 = a-0. Similarly, we can show that 0-a = 0. Therefore, we obtain a-0 = 0-a = 0. Q.E.D. Property II. For any two nonzero elements a and b in a field, a-b ^0. Proof. This is a direct consequence of the fact that the nonzero elements of a field are closed under multiplication. Q.E.D. Property III. a-b = 0 and a ^ 0 imply that 6 = 0. Proo/. This is a direct consequence of Property II. Q.E.D. Property IV. For any two elements a and b in a field, -(a-b) = (-a)-b = a-(-b). Proof 0 = 0-b = (a + (-a))-6 = a-6 + (-a)-6. Therefore, (-a)-ft must be the additive inverse of a-b and —(a-b) = (—a)-b. Similarly, we can prove that -(a-b) = a-(-b). Q.E.D. Property V. For a ^ 0, a-b = a-c implies that b = c. Proo/. Since a is a nonzero element in the field, it has a multiplicative inverse a-1. Multiplying both side of a-b = a-c by a"1, we obtain a~!-(a-ft) = a_I-(a-c) (a_1-a)-6 = (a~l-a)-c \-b = \-c. Thus, 6 = c. Q.E.D. We can verify readily that the set of real numbers is a field under real number addition and multiplication. This field has an infinite number of elements. Fields with 20 Introduction to Algebra Chap. 2
TABLE 2.5 MODULO-7 ADDITION TABLE 2.6 MODULO-7 MULTIPLICATION • 0 1 2 3 4 5 6 0 0 0 0 0 0 0 0 1 0 1 2 3 4 5 6 2 0 2 4 6 1 3 5 3 0 3 6 2 5 1 4 4 0 4 1 5 2 6 3 5 0 5 3 1 6 4 2 6 0 6 5 4 3 2 1 + 0 1 2 3 4 5 6 0 0 1 2 3 4 5 6 1 1 2 3 4 5 6 0 2 2 3 4 5 6 0 1 3 3 4 5 6 0 1 2 4 4 5 6 0 1 2 3 5 5 6 0 1 2 3 4 6 6 0 1 2 3 4 5 of p elements. In fact, for any positive integer mt it is possible to extend the prime field GF(p) to a field ofpm elements which is called an extension field of GF(p)and is denoted by GF(/?m). Furthermore, it has been proved that the order of any finite field is a power of a prime. Finite fields are also called Galois fields, in honor of their discoverer. A large portion of algebraic coding theory, code construction, and decoding is built around finite fields. In the rest of this section and in the next two sections we examine some basic structures of finite fields, their arithmetic, and the construction of extension fields from prime fields. Our presentation will be mainly descriptive and no attempt is made to be mathematically rigorous. Since finite-field arithmetic is very similar to ordinary arithmetic, most of the rules of ordinary arithmetic apply to finite-field arithmetic. Therefore, it is possible to utilize most of the techniques of algebra in the computations over finite fields. Consider a finite field of q elements, GF(q). Let us form the following sequence of sums of the unit element 1 in GF(^): £1 = 1, £1 = 1 + 1, £1 = 1 + 1 + 1, ..., i=l /=1 i=l £1 = 1 + 1 + --- + 1(* times), ... i = l Since the field is closed under addition, these sums must be elements in the field. Since the field has finite number of elements, these sums cannot be all distinct. Therefore, at some point of the sequence of sums, there must be a repetition; that is, there must exist two positive integers m and n such that m < n and m n »=i i=l This implies that £"rr 1 = 0. Therefore, there must exist a smallest positive integer X such that £f=1 1 = 0. This integer X is called the characteristic of the field GF(q). The characteristic of the binary field GF(2) is 2, since 1 + 1=0. The characteristic of the prime field GF(p) is/?, since £?., 1 = k ^ 0 for 1 < k < p and £f=1 1 = 0. Theorem 2.3. The characteristic X of a finite field is prime. Proof. Suppose that X is not a prime and is equal to the product of two smaller integers k and m (i.e., X = km). Since the field is closed under multiplication, 22 Introduction to Algebra Chap. 2
(S')-(S') is also a field element. It follows from the distributive law that (k \ / m \ km SO-(£')-£'• Since 2?A 1 = 0> tnen either 2?=i 1 = 0 or 2J1, 1 = 0. However, this contradicts the definition that A is the smallest positive integer such that 2£=i 1 = 0. Therefore, we conclude that A is prime. Q.E.D. It follows from the definition of the characteristic of a finite field that for any two distinct positive integers k and m less than A, k m i= l f= l Suppose that 2?=i 1 = 2l"=i 1- Then we have m-k S 1 = 0 (assuming that w > k). However, this is impossible since m — k < A. Therefore, the sums i-ii, £i, £i, ..., Si, £i =0 1=1 1=1 1=1 i=l 1=1 are A distinct elements in GF(q). In fact, this set of sums itself is a field of A elements, GF(A), under the addition and multiplication of GF(q) (see Problem 2.6). Since GF(A) is a subset of GF(^), GF(A) is called a subfield of GF(#). Therefore, any finite field GF(q) of characteristic A contains a subfield of A elements. It can be proved that if q ^ A, then q is a power of A. Now let a be a nonzero element in GF(<?). Since the set of nonzero elements of GF{q) is closed under multiplication, the following powers of a, a1 = a, a1 = a-a, a3 = a-a-a, . . . must also be nonzero elements in GF(q). Since GF(q) has only a finite number of elements, the powers of a given above cannot all be distinct. Therefore, at some point of the sequence of powers of a, there must be a repetition; that is, there must exist two positive integers k and m such that m > k and ak = am. Let a'1 be the multiplicative inverse of a. Then (a~*)k = a~k is the multiplicative inverse of ak. Multiplying both sides of ak = am by a~k, we obtain 1 = am-k. This implies that there must exist a smallest positive integer n such that an = 1. This integer n is called the order of the field element a. Therefore, the sequence a1, a2, a3,.. . repeats itself after an = 1. Also, the powers a1, a2,. . . , tf1"1, a" = 1 are all distinct. In fact, they form a group under the multiplication of GF(q). First we see that they contain the unit element 1. Consider d-a*. If / +j < n, al-aJ = at+J. Sec. 2.2 Fields 23
If i +j > n, we have / + j = n + r, where 0 < r < n. Hence, a*-aJ = ai+j = a"-ar = ar. Therefore, the powers a1, a2,..., a"'1, cf = 1 are closed under the multiplication of G¥(q). For \ <i <n, a"'1 is the multiplicative inverse of d. Since the powers of a are nonzero elements in GF(#), they satisfy the associative and commutative laws. Therefore, we conclude that a" = 1, a1, a2,..., a""1 form a group under the multiplication of GF(q). A group is said to be cyclic if there exists an element in the group whose powers constitute the whole group. Theorem 2.4. Let a be a nonzero element of a finite field GF(#). Then a9"l = 1. Proof. Let b19 b2,..., bq_x be the ^ — 1 nonzero elements of GF(q). Clearly, the q — 1 elements, a*bua'b2,. . . ,a'bg_l9 are nonzero and distinct. Thus, {a-bi)-{a-b2) • • • {a-bq_x) = bx*b2 • • • bq_x a'-1'(brb2 — bg_l) = brb2 ■•• bq_u Since a ^ 0 and (b1-b2 • — ^_j) ^ 0, we must have a?~l = \. Q.E.D. Theorem 2.5. Let a be a nonzeio element in a finite field GF(#). Let n be the order of a. Then n divides q — 1. Proof. Suppose that n does not divide q — 1. Dividing # — 1 by n, we obtain <7 — 1 = kn + r, where 0 < r < n. Then a*-i = akn+r = akn-ar = (an)k-a\ Since a*-1 = 1 and a" = 1, we must have ar — 1. This is impossible since 0 < r < n and h is the smallest integer such that a" = 1. Therefore, n must divide # — 1. Q.E.D. In a finite field GF(g), a nonzero element a is said to be primitive if the order of a is q — 1. Therefore, the powers of a primitive element generate all the nonzero elements of GF(g). Every finite field has a primitive element (see Problem 2.7). Consider the prime field GF(7) illustrated by Tables 2.5 and 2.6. The characteristic of this field is 7. If we take the powers of the integer 3 in GF(7) using the multiplication table, we obtain 31 = 3, 32 = 3-3 = 2, 33 = 3-32 = 6, 3< = 3.33 = 4, 35 = 3-34 = 5, 36 = 3-35 = 1. Therefore, the order of the integer 3 is 6 and the integer 3 is a primitive element of GF(7). The powers of the integer 4 in GF(7) are 41 = 4, 42 = 4-4 = 2, 43 = 4-42=l. Clearly, the order of the integer 4 is 3, which is a factor of 6. 2.3 BINARY FIELD ARITHMETIC In general, we can construct code with symbols from any Galois field GF(#), where q is either a prime p or a power of p. However, codes with symbols from the binary field GF(2) or its extension GF(2m) are most widely used in digital data transmission 24 Introduction to Algebra Chap. 2
and storage systems because information in these systems is universally coded in binary form for practical reasons. In this book we are concerned only with binary codes and codes with symbols from the field GF(2m). Most of the results presented in this book can be generalized to codes with symbols from any finite field GF(q) with q ^ 2 or 2m. In this section we discuss arithmetic over the binary field GF(2), which will be used in the rest of this book. In binary arithmetic we use modulo-2 addition and multiplication, which are defined by Tables 2.3 and 2.4, respectively. This arithmetic is actually equivalent to ordinary arithmetic, except that we consider 2 to be equal to 0 (i.e., 1 + 1=2 = 0). Note that since 1 + 1 = 0, 1 = — 1. Hence, in binary arithmetic, subtraction is the same as addition. To illustrate how the ideas of ordinary algebra can be used with the binary arithmetic, we consider the following sets of equations: X+Y = 1 x + z = o X + Y + Z= 1. These can be solved by adding the first equation to the third, giving Z = 0. Then from the second equation, since Z = d and X + Z = 0, we obtain X = 0. From the first equation, since X = 0 and X + Y = 1, we have Y = 1. We can substitute these solutions back into the original set of equations and verify that they are correct. Since we were able to solve the equations shown above, they must be linearly independent, and the determinant of the coefficients on the left side must be nonzero. If the determinant is nonzero, it must be 1. This can be verified as follows: 1 1 0 1 0 1 1 1 1 = 1 • 0 1 1 1 - ] . 1 1 1 1 + 0- 1 0 1 1 = 1-1 - 1-0 + 0-1 = 1. We could have solved the equations by Cramer's rule: X = 1 0 1 11 1 1 1 0 1 1 0 1 0 1 1 0 1 1 0 " 1 o, y = 11 1 1 0 ll 1 1 1 1 0 1 1 01 1 1 01 1 1 =+=!> 1 1 1 0 M 1 11 1 1 0 1 1 1 0 ll ol 1 1 *-* Sec. 2.3 Binary Field Arithmetic 25
Next we consider computations with polynomials whose coefficients are from the binary field GF(2). A polynomial f(X) with one variable A" and with coefficients from GF(2) is of the following form: AW =/o +fix+fi*2 +■■■ +/.X-, where/, = 0 or 1 for 0 < / < n. The degree of a polynomial is the largest power of X with a nonzero coefficient. For the polynomial above, if/„ = \,f{X) is a polynomial of degree n; if/„ = 0, f(X) is a polynomial of degree less than n. The degree of f(X) = /0 is zero. In the following we use the phrase "a polynomial over GF(2)" to mean "a polynomial with coefficients from GF(2)." There are two polynomials over GF(2) with degree 1: X and 1 + X. There are four polynomials over GF(2) with degree 2: X2, 1 + X2, X + X2, and 1 + X+ X2. In_general, there are_2" polyno- mials over GF(2) with degree^. Polynomials over GF(2) can be added (or subtracted), multiplied, and divided in the usual way. Let g(X) = go+glX + g2X2 + • •. + gmX« be another polynomial over GF(2). To add/(A") and g(X), we simply add the coefficients of the same power of X'mf(X) and g(X) as follows (assuming that m <n): AX) + g(x) = (/, + go) + (/, + gl)x +■■■ + (/„ + g m)Xm +/m+1^m+l + • • • +f„X", where/ + g( is carried out in modulo-2 addition. For example, adding a(X) = 1 + X+ X3 + X5 and b(X) = 1 + X2 + X2 + X* + X\ we obtain the following sum: a(X) + b(X) = (1 + 1) + X + X2 + (1 + \)X* + X* + X* + X7 = X+ X2 + X* + Xs + X\ When we multiply f(X) and g(X), we obtain the following product: f(X).g(X) = c0 + ClX+c2X2 + • • • + cH+mX"+", where ^0 == 'ogo Ci =fogi +figo Ci = fogi +figi +figo : (2.6) Ci =fogi +figt-i +figi-i + • • • +ftgo ^n + m Jngm* (Multiplication and addition of coefficients are modulo-2.) It is clear from (2.6) that ifg(X) = 0y then /(JT)-0 = 0. (2.7) 26 Introduction to Algebra Chap. 2
We can readily verify that the polynomials over GF(2) satisfy the following conditions: (i) Commutative: a(X) + b(X) - b(X) f a{X) a{X)-b{X) = b{X)-a{X). (ii) Associative: a(X) + [b(X) + c(X)] = [a(X) + b(X)} + c(X) a(X).[b(X).c(X)] ■-, [a(X)-b(X)]-c(X). (iii) Distributive: a(X) ■ [b(X) + c(X)] = [a(X) • b(X)} + [a(X) • c(X)]. (2.8) Suppose that the degree of g(X) is not zero. When f(X) is divided byg(X), we obtain a unique pair of polynomials over GF(2)—q(X), called the quotient, and r(X), called the remainder—such that f(X) = q(X)g(X) + r(X) and the degree of r(X) is less than that of g(X). This is known as Euclid's division algorithm. As an example, we divide f(X) --= 1 + X + X4 + X5 + X6 by g(X) = 1 + X + X3. Using the long-division technique, we have X2 + X2 (quotient) Xz + X + l)JT6 + JT5 + X* +X+\ X6 + *4 + X3 X5 + X3 +X+\ Xs f Jf3 + X2 X2 f X + 1 (remainder). We can easily verify that z6 + x5 + a-4 + jr-f l = (x3 -j- jr2)(*3 + x-\- i) + a^2 + x + i. When /(Z) is divided by gW, if the remainder r(X) is identical to zero [r(X) = 0], we say that/(A') is divisible by g(X) and g(A') is a factor of f(X). For real numbers, if a is a root of a polynomial f(X) [i.e., /(a) = 0], f(X) is divisible by x — a. (This fact follows from Euclid's division algorithm.) This is still true (ovf(X) over GF(2). For example, let f(X) = 1 + X2 + X3 + X\ Substituting X = 1, we obtain /(I) = i + i* + p + 14 = ! + ! + j + ! = 0> Thus, f(X) has 1 as a root and it should be divisible by X + 1. JT3 + *+ 1 X+\)X* + X3 + X2 +1 *4 + jr3 x2 +1 JT2 + X X+ 1 *+l 0 Sec. 2.3 Binary Field Arithmetic 27
For a polynomial f(X) over GF{2), if-it has an even number of terms, it isjiivisible by X'+ 1. A polynomials^) over GF(2) of degree m is said to be irreducible over GF(2^ ifpQQ l^of^iyfsible tiy any polynomial over GF(2j of degree less than m but grejter, tlian ze^q. Among the four polynomials of degree 2, X2, X2 + 1 and X2 + X are not irreducible since they are either divisible by X or X + 1. However, X2 + Xs +1 <Jg£\$ not have either "0" or "1" as a root and so is not divisible fey any polynomial of degree 1. Therefore, X2 + ^X+l is an irreducible polynomial of degree 2. The polynomial X3 + X + 1 is an irreducible polynomial of degree 3. First we note that X3 + X + 1 does not have either 0 or 1 as a root. Therefore, X3 + X + 1 is not divisible by Xor X + 1. Since it is not divisible by any polynomial of degree 1, it cannot be divisible by a polynomial of degree 2. Consequently, X3 + X + 1 is irreducible over GF(2). We may verify that X4 + X + 1 is an irreducible polynomial of degree 4. It hasten grpved that, for any m > 1,.there exists an irreducible polynomial of degree m. An important theorem regarding irreducible polynomials over GF(2) is given below without a proof. Theorem 2.6. Any irreducible polynomial over GF(2) of degree m divides Xim~l + 1. "" As an example of Theorem 2.6, we can check that X3 + X + 1 divides X2i~l + 1 - X1 + 1: X4 + X2 + X + I X' + X+tiX1 +1 x7 • + x5 + x4 X5 + X4 +1 Xs + X3 + X2 X4 + X3 + X2 +1 X4 + X2 + X X3 +X+1 X3 +X+1 0. An irreducible polynomial p(X) of degree m is said to be primitive if the smallest positive integer n for which p(X) divides X" + 1 is n = 2m — 1. We may check "that p(X) = X4 + X + 1 divides X15 + 1 but does not divide any Xn + 1 for 1 < n < 15. Hence, X4 + X + 1 is a primitive polynomial. The polynomial Z4 + A^3 + X2 + A" + 1 is irreducible but it is not primitive, since it divides Xs + 1. It is not easy to reepgnize a primitive polynomial. However, there are tables of irreducible polynomials in which primitive polynomials are indicated [6,7]. For a given m, there may be more than one primitive polynomial of degree m. A list of primitive polynomials is given in Table 2.7. For each degree wvwe list only a primitive polynomial with the smallest number of terms. Before leaving this section, we derive another useful property of polynomials over GF(2). Consider 28 Introduction to Algebra Chap. 2
TABLE 2.7 LIST OF PRIMITIVE POLYNOMIALS m 3 4 5 6 7 8 9 10 11 12 13 1 +*+ A-3 1 4- X + A"* 1 + X2 + *s 1 + X+ X* 1 + X* + A'7 i + *2 + *3 + jr« 1 + X* + ** i + ;p + jt10 1 + X2 + A-n + jr« 1 + JT+ JT4 + X6 + A'12 i + x+ x* + jr* + JP3 m 14 15 16 17 18 19 20 21 22 23 24 1 + X+ X6 + XlQ + X^ 1 + X+X^ 1 + JT + X* + A'12 + A'1* 1 + X* + JT17 1 + Xi + JT18 1 + A-+ *2 + X* + JH* 1 + A-3 + A^o 1 -f X2 + A-21 1 + X+ X22 1 + X5 + X2* \ + X + X2 + XT + X2* P(x) = (fa+LX+ ••• +fnx*y = [/„ + (fxX+fiX1 + • • • +/.*")]* =fl + (ftx+f1x>+---+fnX"y. Expanding the equation above repeatedly, we eventually obtain f\X) = fl + (/l*)2 + (/2^2)2 + • ' • + ifnXn)\ Since/, = 0 or l,/,2 =/,. Hence, we have f\x) =/0 +/t jr* +/,(*»)» + ... +/„(*>)" -/(AT2). It follows from (2.9) that, for any / > 0, [f(X)r=f(Xn. 2.4 CONSTRUCTION OF GALOIS FIELD GF(2m) In this section we present a method for constructing the Galois field of 2m elements (m > 1) from the binary field GF(2). We begin with the two elements 0 and 1, from GF(2) and a new symbol a. Then we define a multiplication " •" to introduce a sequence of powers of a as follows: 0-0 = 0-1 = 1-1 = 0-a = !•« = 0, 1- 1, a- a- 0 0 1 •■•+LX"Y (2.9) (2.10) Sec. 2.4 Construction of Galois Field GF(2m) 29
a2 = a-a, a3 = a-a-a, (2.11) ajr = a-a a (y times), It follows from the definition of multiplication above that 0-a' = a'-0 = 0, l.a' = a'-l = a', (2.12) a'-a' = a'-a' = a'+y. Now, we have the following set of elements on which a multiplication operation "•" is defined: F={0,l9a,a\...9aJ,...}. The element 1 is sometimes denoted a0. Next we put a condition on the element a so that the set F contains only 2m elements and is closed under the multiplication "•" defined by (2.11). Let p(X) be a primitive polynomial of degree m over GF(2). We assume that/?(a) = 0. Since p(JQ divides X2"1'1 + 1 (Theorem 2.6), we have ~ X^ + 1 - g(X)p(X). (2.13) If we replace A" by a in (2.13), we obtain a2""1 + 1 - *(a)/>(a). Since /?(a) = 0, we have a2"1"1 + 1 - ?(a)-0. If we regard q(a) as a polynomial of a over GF(2), it follows from (2.7) that q(a)'0 = 0. As a result, we obtain the following equality: ai--i + i=o. Adding 1 to both sides of a2"1-1 + 1=0 (use modulo-2 addition) results in the following equality: a2"1"1 - 1. (2.14) Therefore, under the condition that p(a) = 0, the set F becomes finite and contains the following elements: GP(i' ) 'c F* = {0, l,a,a2, ...,a2m"2}. The nonzero elements of F* are closed under the multiplication operation "•" defined by (2.11). To see this, let / andy be two integers such that 0 < /, y < 2m — 1. If / + j < 2m — 1, then a'-a7' = a'+y, which is obviously a nonzero element in F*. If i + y > 2m — 1, we can express/ + j as follows: / +j= (2m — 1) + r, where 0 < r < 2m — 1. 30 Introduction to Algebra Chap. 2
Then a'-a' = a'+' = a(2m-1)+r = a2"-1-a' = l-af = a', which is also a nonzero element in F*. Hence, we conclude that the nonzero elements of F* are closed under the multiplication "•" defined by (2.11). In fact, these nonzero elements form a commutative group under "•". First, we see that the element 1 is the unit element. From (2.11) and (2.12) we see readily that the multiplication operation "•" is commutative and associative. For 0 < / < 2m — 1, a2*""'"1 is the multiplicative inverse of a! since a2--/-i.a/ = a2--i = lm (Note that a0 = a2"1-1 = 1.) It will be clear in what follows that 1, a, a2,... , a2"1-2 represent 2m — 1 distinct elements. Therefore, the nonzero elements of F* form a group of order 2m — 1 under the multiplication operation "•" defined by (2.11). Our next step is to define an addition operation "+" on F* so that F* forms a commutative group under "+." For 0 < / < 2m — 1, we divide the polynomial X1 by p(X) and obtain the following: X'= qi(X)p(X) + at(X)9 (2.15) where ql(X)a,nd ^I(Ar)are the quotient and the remainder, respectively. The remainder at(X) is a polynomial of degree m — 1 or less over GF(2) and is of the following form: ai(X) = aiQ+anX+ai2X> + ••• +aUm.xX^. Since A"and p(X) are relatively prime (i.e., they do not have any common factor except 1), X* is not divisible by p(X). Therefore, for any / > 0, a((X)^0. (2.16) For 0 < z, j < 2m — 1, and / ^j, we can also show that ai(X)^aJ(X). (2.17) Suppose that at(X) = a}(X). Then it follows from (2.15) that *' + *j = MX) + qj(X)]p(X) + a£X) + dj{X) = [qt(X) + qj(X)]p(X). This implies that p(X) divides X1 + XJ = X'(\ + Xj'1) (assuming that j > /). Since X1 and p(X) are relatively prime, p(X) must divide Xj~( + 1. However, this is impossible since j — / < 2m — 1 andp(A') is a primitive polynomial of degree m which does not divide Xn + 1 for n < 2m — 1. Therefore, our hypothesis that a{(X) = aj(X) is invalid. As a result, for 0<i9j<2m— 1 and / ^y, we must have at(X) ^ aj(X). Hence, for / = 0, 1, 2,. . ., 2m — 2, we obtain 2m — 1 distinct nonzero polynomials at(X) of degree m — 1 or less. Now, replacing A' by a in (2.15) and using the equality that qi((x)'0 = 0 [see (2.7)], we obtain the following polynomial expression for a': a' =-- tff(a) = tf.o + ana + ai2a2 + • • • + «/-m_1am-1. (2.18) From (2.16), (2.17), and (2.18), we see that the 2m — 1 nonzero elements, a0, a1,..., a2"1-2 in i7*, are represented by 2m — 1 distinct nonzero polynomials of a over GF(2) with degree m — 1 or less. The zero element 0 in F* may be represented by the zero Sec. 2.4 Construction of Galois Field GF(2m) 31 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8035526275634766, "perplexity": 789.5689666462681}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347409171.27/warc/CC-MAIN-20200530102741-20200530132741-00110.warc.gz"} |
https://www.aanda.org/articles/aa/full_html/2010/09/aa13908-09/aa13908-09.html | Free Access
Issue A&A Volume 517, July 2010 A20 7 Interstellar and circumstellar matter https://doi.org/10.1051/0004-6361/200913908 27 July 2010
A&A 517, A20 (2010)
## A variable jet model for the H emission of HH 444
A. C. Raga1 - A. Riera2 - D. I. González-Gómez3
1 - Instituto de Ciencias Nucleares, UNAM, Ap. 70-543, 04510 D. F. México, México
2 - Departament de Física i Enginyeria Nuclear, EUETIB, Universitat Politecnica de Catalunya, Compte d'Urgell 187, 08036 Barcelona, Spain
3 - Instituto de Geofísica, UNAM, 04510 D. F. México, México
Received 18 December 2009 / Accepted 7 March 2010
Abstract
Context. HH 444 is one of the first Herbig-Haro (HH) jets discovered within a photoionized region.
Aims. We re-analyze the Hand red [S II] HST images of HH 444, and calculate the width of the jet as a function of distance from the source. We compare the H image with predictions from variable ejection velocity jet models.
Methods. The determination of the jet's width is done with a non-parametric, wavelet analysis technique. The axisymmetric, photoionized jet simulations are used to predict H maps that can be directly compared with the observations.
Results. Starting with a thin jet (unresolved at the resolution of the observations), we are able to produce knots with widths and morphologies that generally agree with the H knots of HH 444. This agreement is only obtained if the jet axis is at a relatively large, angle with respect to the sky. This agrees with previous spectroscopic observations of the HH 444 bow shock, which imply a relatively large jet axis/plane of the sky angle.
Conclusions. We conclude that the general morphology of the chain of knots close to V510 Ori (the HH 444 source) can be explained with a variable ejection velocity jet model. For explaining the present positions of the HH 444 knots, however, it is necessary to invoke a more complex ejection velocity history than a single-mode, periodic variability.
Key words: circumstellar matter - ISM: jets and outflows - Herbig-Haro objects - ISM: individual objects: HH 444 - stars: formation
## 1 Introduction
HH 444, a Herbig-Haro (HH) object in the vicinity of Orionis, is one of the first jets detected within a photoionized region (Reipurth et al. 1998). The NE outflow lobe has a chain of aligned knots extending away from the source (V510 Ori) and a bow shock structure away from V510 Ori.
Reipurth et al. (1998) presented images and low dispersion spectra of this object. López-Martín et al. (2001) presented two long-slit spectra (of the base of the HH 444 jet, and of the bow shock) and compared the observations with a numerical simulation of an externally photoionized, variable ejection velocity jet. Finally, Andrews et al. (2004) presented a long-slit spectrum of the jet/counterjet system within from V510 Ori as well as an H and a [S II] 6716+30 HST image of the outflow (not including the HH 444 bow shock at from the source, see above).
Since the discovery of HH 444 (Reipurth et al. 1998), a considerable number of HH jets within photoionized regions have been found. For example, Bally & Reipurth (2001) report the discovery of several HH jets within the outskirts of M 42 and in NGC 1333 (also see Bally et al. 2001). Many of these jets show remarkable, curved structures, which have been interpreted as the interaction between the HH outflows and a streaming external medium (which could result e.g. from the expansion of the H II region). This type of curved morphology has been modeled in some detail both analytically (Cantó & Raga 1995) and numerically (Lim & Raga 1998; Masciadri & Raga 2001; Ciardi et al. 2008).
Both its less complex structure and the detailed available observations render HH 444 a candidate for studying whether or not a variable ejection jet model can reproduce the observed knot structures. A similar comparison was previously done e.g. for the DG Tauri microjet (Raga et al. 2001), HH 34 (Raga & Noriega-Crespo 1998), HH 111 (Masciadri et al. 2002) and HH 32 (Raga et al. 2004).
The only externally photoionized jet that was modeled in this way is HH 444. López-Martín et al. (2001) computed 3D, variable jet models from which they obtained predictions of position-velocity diagrams (which they then compare with the observed long-slit spectra of the HH 444 jet base). They studied the effects of having a non-zero initial opening angle for the jet, and of a non-top hat initial cross section.
We first re-analyze the HST images of Andrews et al. (2004). We we use a wavelet analysis technique (Riera et al. 2003) to determine the angular sizes across the outflow axis of the knots in the two outflow lobes (Sect. 2). We then compute a grid of photoionized, single-mode variable ejection velocity, axisymmetric jet models (Sect. 3) from which we obtain Hmaps that can be directly compared with the H HST image of HH 444 (Sect. 4). We discuss the time evolution predicted for the H maps and the effects of having different orientations between the outflow and the plane of the sky. Finally, we discuss a two-mode variable ejection velocity jet model (Sect. 5).
Figure 1: HH 444 H image ( right, see Sect. 2.1) and characteristic width as a function of position along the jet axis ( left), obtained from the wavelet analysis (see Sect. 2.2). Two successive contours correspond to a factor of 2. Open with DEXTER
Figure 2: HH 444 [S II] 6716+30 image ( right, see Sect. 2.1) and characteristic width as a function of position along the jet axis ( left), obtained from the wavelet analysis (see Sect. 2.2). Two successive contours correspond to a factor of 2. Open with DEXTER
## 2 The H and [S II] images of HH 444
### 2.1 HST images
The F656N H and F673N [S II] Wide Field Planetary Camera 2 (WFPC2) images of HHH 444-445 were retrieved from the HLA archive. The images were obtained on 2000 January 25 with the WFPC2 on HST through the F656N and F673N filters. HH 444 was placed on the WF4 CCD which has a plate scale of pixel-1. Four exposures were obtained with each filter giving a total exposure time of 5100 s (for H) and 5200 s (for [S II]). These images were originally part of the Cycle 8 proposal 8323 (P.I.: B. Reipurth). For details of the observations see Andrews et al. (2004). The retrieved data were processed with the HST pipeline used at the Canadian Astronomy Data Centre (CADC).
### 2.2 Morphological analysis
In Figs. 1 and 2 we show the H and [S II] images of HH 444, where we can see the structure of the jet/counterjet close to the outflow source. The knots were named following the nomenclature used by Reipurth et al. (1998).
As previously reported by Andrews et al. (2004), the [S II] emission displays a chain of compact knots emanating from the outflow source up to 12'' from the stellar centroid. The inner region of the jet (i.e., knot A) is dominated by [S II]. The H emission of the jet extends up to from the stellar centroid (i.e., to larger distances than the [S II] emission). The inner part of the H jet is more diffuse than its [S II] counterpart.
In order to assess the width of the HH 444 jet as a function of distance from the central source, we have applied a wavelet transform analysis. This method for deriving the width of a jet is mathematically more complex than the standard'' method of fitting a function (e.g., a Gaussian profile) to the cross section of the jet and then using the characteristic width of this function as an estimate of the jet width (this approach dates back to the papers of Raga & Mateo 1988; Bührke et al. 1988).
We first attempted to fit Gaussians to the cross section of the HH 444 jet in the HST images. We find that this does not produce satisfactory results for two reasons:
• the signal-to-noise of the images is quite poor (because of this, Andrews et al. 2004 actually show spatially smoothed images),
• the substraction of the background emission is not straightforward, particularly in the region close to the jet source (in which a strong reflection nebula with a complex morphology is present).
A wavelet analysis technique is more appropriate in this case, since it is based on convolutions with spatially extended functions (which mitigates the signal to noise problem), and automatically separates the emission in different spatial scales (so that no special treatment is necessary for separating the jet from the background emission). We therefore adopt the procedure developed by Riera et al. (2003), who analyzed images of the HH 110 jet with a wavelet technique.
We rotated the H and [S II] emission maps so that the outflow axis is parallel to the ordinate. On these rotated images, we then carried out a decomposition in a basis of anisotropic wavelets. Following Riera et al. (2003) we used a basis of Mexican hat'' wavelets:
g(r) = C(2 - r2)e-r/2 , (1)
where , , and ax, ay are the scale lengths of the wavelet along the x- and y-axis, respectively. We then chose a range for ax and ay (which are taken to have integer values, from 1 to 30 pixels) and then computed the transform maps Tax,ay(x,y) for the H and [S II] images.
On the observed intensity map we fixed the position of y and found the value of , where the intensity map has a local maximum close to the outflow axis. For the positions , where the intensity has a maximum (along the y-axis), we plotted the 2D spectrum . In each of these spectra we found the peak in the spectral, (ax,ay)-plane, which we denoted . This peak gave us the characteristic size across ( ) and along ( ) the outflow axis of the knot structures present at the position . The width ( ) as a function of position y along the jet obtained in this way are shown in Figs. 1 and 2 (for H and [S II], respectively).
We first describe the characteristic widths (sizes across the jet axis) of the jet in the H image. Figure 1 shows the values as a function of position y (where y increases with distance from the outflow source). Along knot A (i.e., the innermost region) the width of the jet increases more or less monotonically from a value of (basically unresolved) up to (for ). At the inner edge of knot B, we see that suddenly grows (adopting values of up to ). Along knots B and C ( ) the width grows from a value of to . Knot D shows the highest values of , with values in the range from to . The width of the counterjet remains unresolved at the present spatial resolution.
In the red [S II] image (Fig. 2), we obtain widths that are approximately smaller than the H widths for knots A, B and C. The width of the counterjet is again unresolved.
Table 1: Grid of models.
Figure 3: HH 444 H image ( left) and H images predicted from models M1-M6 (labeled on the bottom left of each frame) for a t=375 yr time-integration. The images were computed assuming a orientation angle between the outflow axis and the plane of the sky. The HH 444 image is depicted with the logarithmic scale given at the top of the left frame (in erg s-1sterad-1) and the predicted images are depicted with the scale given at the top of the right frame (in the same units). Both scales cover a dynamic range of 3 orders of magnitude. The displayed domain has an axial (vertical) extent of , corresponding to cm at 450 pc. Open with DEXTER
Figure 4: HH 444 H image ( left) and a time-series of H images predicted from model M4 (the corresponding integration times are given at the bottom left of each frame). The images were computed assuming a orientation angle between the outflow axis and the plane of the sky. The HH 444 image is depicted with the logarithmic scale given at the top of the left frame (in erg s-1sterad-1) and the predicted images are depicted with the scale given at the top of the right frame (in the same units). The displayed domain has an axial (vertical) extent of , corresponding to cm at 450 pc. Open with DEXTER
## 3 The model parameters
López-Martín et al. (2001) found that to model the long-slit spectrum of the HH 444 jet base, a variable ejection velocity jet model with a sinusoidal variability with a mean velocity km s-1, a half-amplitude km s-1and a period yr was appropriate. These authors also deduced an angle between the outflow axis and the plane of the sky (from the maximum and minimum radial velocities observed in the HH 444 bow shock).
In the present work, we study a grid of models with a sinusoidal ejection velocity variability:
(2)
with mean velocity v0=180 km s-1 and all the combinations of half-amplitudes , 60 km s-1 and periods , 100, 200 yr. The six resulting models are tabulated in Table 1.
Figure 5: HH 444 H image ( left) and H images predicted from model M4 (for a t=425 yr time-integration) assuming different orientation angles between the outflow axis and the plane of the sky. The values of for each image are given at the bottom left of each frame). The HH 444 image is depicted with the logarithmic scale given at the top of the left frame (in erg s-1sterad-1) and the predicted images are depicted with the scale given at the top of the right frame (in the same units). The displayed domain has an axial (vertical) extent of , corresponding to cm at 450 pc. Open with DEXTER
Figure 6: HH 444 H image ( left) and a time-series of H images predicted from the two-mode ejection velocity variability model described in Sect. 5 (the corresponding integration times are given at the bottom left of each frame). The images were computed assuming a orientation angle between the outflow axis and the plane of the sky. The HH 444 image is depicted with the logarithmic scale given at the top of the left frame (in erg s-1sterad-1) and the predicted images are depicted with the scale given at the top of the right frame (in the same units). The displayed domain has an axial (vertical) extent of , corresponding to cm at 450 pc. Open with DEXTER
The six jet models have top-hat initial cross sections with a radius cm (corresponding to at a distance of 450 pc), number density cm-3 and temperature K. These parameters give a mean mass-loss rate yr-1for the jet. The jet moves into a uniform environment with density cm-3 and temperature K.
The time-integrations are computed with a cylindrically symmetric version of the yguazú-a'' code in a 4-level, binary adaptive grid with a maximum resolution of cm along the two axes. A detailed description of the yguazú-a'' code is given by Raga et al. (2000a).
We consider that hydrogen is fully ionized throughout the computational grid, and we impose a minimum temperature of 104 K (for higher temperatures, the parametrized cooling function of Raga et al. 2000b is included). This is an approximate way of simulating a fully photoinized jet.
We note that Raga et al. (2000c) estimated that the HH 444 flow would become fully photoionized by the external UV radiation field at a distance of cm from V510 Ori. Therefore, the approximation of a fully photoionized flow is incorrect for distances smaller than cm from the outflow source.
The simulations were carried out in a cm (axial radial) cylindrical grid, with reflection conditions on the symmetry axis and on the jet/counterjet symmetry plane and trasmission conditions in the other grid boundaries. We carried out 500 yr time integrations, in which the leading working surface of the jet leaves the computational domain. At the later integration times we see the emission from the knots close to the outflow source (formed by the ejection velocity variability) without the contribution from the jet's head and its extended bow shock wings.
In the rest of the paper we assume a distance D=450 pc to the HH 444 outflow. This distance is used to scale the model predictions so that they can be directly compared with the observations.
## 4 The H maps
### 4.1 Maps obtained from all models
We now assume that the jet axis lies at a angle with respect to the plane of the sky (consistent with the angle determined for HH 444 by López-Martín et al. 2001), and compute H maps from the t=375 yr flow stratifications obtained from models M1-M6 (see Table 1). The maps are obtained by integrating the H emission coefficient (obtained from the H recombination cascade) along lines of sight.
The resulting maps are shown, together with the H image of HH 444, in Fig. 3. From this figure, it is evident that the models produce a well collimated, narrow H emitting region close to the source and broad knots at larger distances, qualitatively resembling the emission from the HH 444 jet base and from the knots HH 444B and C.
Models M1 and M2 (with a yr variability period, see Table 1) reproduce the separation between knots B and C. However, a well-defined H knot is seen at from the source, which does not exist in HH 444. Models M3 and M4 (with yr) have two knots, which have a separation a factor of larger than the separation beween HH 444B and C. Finally, models M5 and M6 (with yr, see Table 1) have a single knot at the position of HH 444C.
It is clear that while all models have qualitative similarities to the observations, it appears that the H emission structure of HH 444 cannot be reproduced by a model with a single-mode, periodic ejection variability. In order to obtain the correct knot separations it will be necessary to consider at least a two-mode ejection variability model (like the one explored by Raga & Noriega-Crespo 1998), or possibly a non-periodic ejection variability (like the one recently explored by Yirak et al. 2009). A two-mode ejection variability model is presented in Sect. 5.
It is also clear that in the region close to the source the relative H emission from the jet base predicted from all models is much stronger than that observed in HH 444. In order to reconcile the observations and model predictions it is therefore necessary to invoke a relatively strong circumstellar extinction close to V510 Ori.
### 4.2 The time-evolution of the H maps
We now focus on model M4 (with yr and km s-1, see Table 1) and compute a time-series of H maps covering a full ejection variability period. The resulting maps are shown in Fig. 4.
In this time-series, we see the formation of a knot at from the source (in the t=375 yr frame of Fig. 4). This knot travels away from the source and grows in angular size, and in the last two time-frames (t=450, 475 yr) reaches the position of the HH 444B and C knots.
The t=475 yr frame corresponds to a full ejection variability period after the t=375 yr frame (the last and first frames of Fig. 4, respectively). These two Hmaps are very similar, with the exception that the knot at from the source is fainter in the t=475 yr frame. This is because the cocoon gas is progressively evacuated from the computational domain, resulting in lower pre-shock densities for the successive bow shocks travelling away from the source.
### 4.3 The orientation with respect to the plane of the sky
We now consider the t=375 yr frame of model M4 (see Fig. 4), and compute H maps for different angles between the outflow axis and the plane of the sky. For we have a knot at from the source. This knot has a flat, bow-shaped emission structure, which does not resemble the round morphology of the knots HH 444B and C.
As we go to higher values of , the simulated knot approaches the source and develops a rounder morphology. For the morphology of the simulated knot resembles the structures of the HH 444B and C knots.
From this we conclude that the morphologies observed for the HH 444B and C knots are consistent with the morphologies found for the emission from internal working surfaces when the outflow axis is at an angle with respect to the plane of the sky. This result is consistent with the angle (between the outflow axis and the plane of the sky) determined by López-Martín et al. (2001) for the HH 444 outflow.
## 5 A two-mode ejection velocity variability model
### 5.1 The H maps
The available observations of HH 444 are not sufficient to constrain a two- or three-mode ejection variability. This was possible in the past for objects in which more extensive kinematic information (i.e., of a spatially more extended region along the outflow axis) as well as proper motions are available. Examples of this are HH 34 and HH 444 (see Raga et al. 2002) and HH 30 (Anglada et al. 2007; Esquivel et al. 2007).
Figure 7: H images (for a orientation angle between the outflow axis and the plane of the sky) predicted from the two-mode ejection velocity variability model described in Sect. 5 for a t=400 yr integration time. The three maps correspond to three simulations with maximum resolutions cm (the low'' resolution model), cm (medium'' resolution) and cm (high'' resolution) along the two axes. The images are depicted with the scale given at the top of the right frame (in erg cm-2 s-1 sterad-1). The displayed domain has an axial (vertical) extent of cm. Open with DEXTER
For this reason, we only present one two-mode ejection velocity variability model to illustrate that it is indeed possible to produce knot structures that resemble the HH 444 jet. We choose a model that has a velocity variability with two sinusoidal modes with half-amplitudes km s-1 and km s-1and corresponding periods yr and km s-1. The mean velocity v0=180 km s-1, and the remaining parameters of the models are identical to those of models M1-M6 (see Sect. 3). The computation is done (as in models M1-M6, see Sect. 3) in a 4-level, binary adaptive grid with a maximum resolution cm (along the two axes).
In Fig. 6 we present a comparison between the H image of HH 444 and a time-series of H maps computed from the two-mode ejection velocity variability jet model. It is clear that a number of time-frames (e.g., the maps obtained for t=350, 375, 425, and 600 yr) have knot distributions that qualitatively resembles the HH 444 knot structure.
### 5.2 Convergence study
We use this two-mode jet model to illustrate the numerical convergence of our simulations. All results presented above were obtained using a 4-level, binary adaptive grid with a maximum resolution cm (along the two axes). This implies that the initial jet radius ( cm, see Sect. 3) is only resolved with three grid points. While the resolution of the jet beam improves at larger distances from the source (due to the lateral expansion of the beam), this is indeed a rather low resolution, and one might suspect that the results will change for higher resolutions.
In Fig. 7 we show the t=400 yr H map obtained from our two-mode jet model computed with three different maximum resolutions: , and cm (computed in binary grids with 4, 5 and 6 levels, respectively). From this figure, it is clear that while the general morphology of the predicted H maps does not change with increasing resolution of the simulation, the fluxes of the knots do change.
Figure 8: Peak H intensity of the three knots seen in the images predicted from the 2-mode, variable ejection jet model (see Fig. 7) as a function of the maximum resolution of the simulation. The crosses give the H intensity of the knot closest to the source, the triangles correspond to the second knot, and the circles to the third knot. Open with DEXTER
Figure 9: H map ( right) predicted from the two-mode ejection variability model (see Sect. 5) for a t=400 yr time-integration. The map was computed assuming a orientation between the jet axis and the plane of the sky. The left plot shows the jet width vs. position dependence recovered from the predicted Hmap using the wavelet analysis technique (described in Sect. 2.2). Two successive contours correspond to a factor of 2. Open with DEXTER
This change in H intensity as a function of resolution is shown in Fig. 8, in which we plot the peak Hintensities of the three knots (seen in the t=400 yr H maps, see Fig. 7) as a function of maximum resolution of the simulations. If we look at the brightest knot, we see that the peak Hintensity drops by a factor of when going from the to the cm resolution. Its peak H intensity again drops when going from the to the cm resolution, but only by a factor of . Similar results are found for the other two knots.
These results indicate that at least a partial numerical convergence is obtained for the H intensity maps when we reach our highest, cm resolution. We then use this higher resolution simulation to compute the jet width as a function of position, with relative confidence that the results are quantitatively meaningful. This is described in the following section.
### 5.3 Width vs. position
Let us now explore whether or not our two-mode jet model results in a width vs. position distribution which resembles the one observed in the HH 444 jet (see Sect. 2.2). For this analysis we consider the t=400 yr H map computed from our higher resolution simulation (with minimum cell size cm, see Sect. 5.2).
When applying the wavelet analysis to this synthetic image, we obtain the width vs. position shown in Fig. 9. From this figure we see that we obtain a region extending to from the source in which the jet width is , basically unresolved at the resolution of the HST images. The first knot (at a distance of from the source) has a width of , and the second knot (at a distance of from the source) has a width of .
We note the interesting effect seen at from the outflow source (see Fig. 9). In this inter-knot region, the width determined for the jet beam (from the wavelet analysis) blows up, attaining values of several arcseconds. This is probably because in the inter-knot regions the spatial scale of the emission is dominated by the distance to the neighbouring knots. These broadenings in the faint inter-knot regions are generally obtained in width determinations based on wavelet analyses (see Figs. 1 and 2), and a similar effect is also obtained when fitting Gaussian functions to the jet cross-section, provided that data with a high enough signal-to-noise ratio are used (see Raga et al. 1991).
Comparing these results with those obtained from the H map of HH 444 (see Sect. 2.2 and Fig. 1), we see that though the positions of the knots in the simulated jet do not coincide with those of the HH 444 knots, a general agreement is obtained between the observed and predicted width vs. position. Both show an unresolved jet-width region close to the source, and widths of for the knots.
## 6 Conclusions
We re-analyzed the HST H and red [S II] images of HH 444 obtained by Andrews et al. (2004). We applied the non-parametric wavelet analysis technique of Riera et al. (2003) to calculate the width vs. position along the HH 444 jet. From this analysis we found that the jet width is basically unresolved (in both H and [S II]) close to the source, and grows to widths of in the well-defined knots B and C.
We computed a grid of jet models with a single-mode, sinusoidal variability for the ejection velocity, with a range of values for the periods and amplitudes that appears to be appropriate for the knots along HH 444 (Sect. 3). H maps computed from all models (assuming a angle between the outflow axis and the plane of the sky, see Fig. 3) produce knots which qualitatively resemble the HH 444 B and C knots. We studied the effect of changing the angle (between the outflow axis and the plane of the sky, see Fig. 5) and found that the predicted H knots resemble the HH 444 knots only for . This result is consistent with the orientation of the HH 444 outflow estimated by López-Martín et al. (2001).
A systematic difference between the model predictions and the observations is that the models show brighter H emission close to the outflow source (in a region within from the source, see Figs. 3-5). This result might be consistent with the fact that the region around Orionis shows substantial circumstellar emission (possibly including a proplyd tail, see Andrews et al. 2004), indicating the presence of a dense, circumstellar envelope which may be producing a substantial extinction of the jet emission.
However, we find that the single sinusoidal mode variability models cannot explain the knot spacings observed in HH 444. This problem can be solved by proposing a model with a two-mode sinusoidal ejection velocity variability. We illustrate this possibility by computing a two-mode jet model (Sect. 5).
We chose an H map predicted from this two-mode model for computing the jet width vs. position with the wavelet analysis technique that we used for analyzing the HH 444 images. We find that the jet has an unresolved region close to the source and that the jet width grows as a function of increasing distance from the source. A comparison between the predictions (Fig. 9) and the HH 444 H observations (Fig. 1) shows a qualitatively good agreement between the predicted and observed width vs. position.
To summarize, we showed that the H HST image of HH 444 has knots with morphologies that agree with the predictions from a variable ejection velocity jet (if one considers an appropriate orientation angle between the jet axis and the plane of the sky). The knot spacings observed in HH 444, however, require at least a two-mode ejection velocity variability.
The two-mode time-variability that we explored is not well constrained by the present observations, and in principle a more complex variability is probably needed. An indication of the necessity of a more complex variability are the knots at larger distances from the HH 444 source: knots G and H, at distances of 114'' and 154'' (respectively) from the source (Reipurth et al. 1998). These knots can in principle be modeled through the introduction of an extra ejection variability mode (a similar morphology in the HH 34 jet was modeled in this way by Raga & Noriega-Crespo 1998). Instead of a multi-mode variability, a non-periodic variability (see Yirak et al. 2009; Bonito et al. 2010) might be present, but the richness of inter-knot spatial scales that is to be expected from a well-sampled random variability (Raga 1992) does not seem to be present in the HH 444 jet.
Acknowledgements
This work was supported by the CONACyT grants 61547, 101356 and 101975. The work of A.Ri. was supported by the MICINN grant AYA2008-06189-C03 and AYA2008-04211-C02-01 (co-funded with FEDER funds). We acknowledge the support of E. Palacios from the ICN-UNAM computing staff. We thank John Bally (the referee) for several helpful suggestions.
## Footnotes
... HLA
Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA).
## All Tables
Table 1: Grid of models.
## All Figures
Figure 1: HH 444 H image ( right, see Sect. 2.1) and characteristic width as a function of position along the jet axis ( left), obtained from the wavelet analysis (see Sect. 2.2). Two successive contours correspond to a factor of 2. Open with DEXTER In the text
Figure 2: HH 444 [S II] 6716+30 image ( right, see Sect. 2.1) and characteristic width as a function of position along the jet axis ( left), obtained from the wavelet analysis (see Sect. 2.2). Two successive contours correspond to a factor of 2. Open with DEXTER In the text
Figure 3: HH 444 H image ( left) and H images predicted from models M1-M6 (labeled on the bottom left of each frame) for a t=375 yr time-integration. The images were computed assuming a orientation angle between the outflow axis and the plane of the sky. The HH 444 image is depicted with the logarithmic scale given at the top of the left frame (in erg s-1sterad-1) and the predicted images are depicted with the scale given at the top of the right frame (in the same units). Both scales cover a dynamic range of 3 orders of magnitude. The displayed domain has an axial (vertical) extent of , corresponding to cm at 450 pc. Open with DEXTER In the text
Figure 4: HH 444 H image ( left) and a time-series of H images predicted from model M4 (the corresponding integration times are given at the bottom left of each frame). The images were computed assuming a orientation angle between the outflow axis and the plane of the sky. The HH 444 image is depicted with the logarithmic scale given at the top of the left frame (in erg s-1sterad-1) and the predicted images are depicted with the scale given at the top of the right frame (in the same units). The displayed domain has an axial (vertical) extent of , corresponding to cm at 450 pc. Open with DEXTER In the text
Figure 5: HH 444 H image ( left) and H images predicted from model M4 (for a t=425 yr time-integration) assuming different orientation angles between the outflow axis and the plane of the sky. The values of for each image are given at the bottom left of each frame). The HH 444 image is depicted with the logarithmic scale given at the top of the left frame (in erg s-1sterad-1) and the predicted images are depicted with the scale given at the top of the right frame (in the same units). The displayed domain has an axial (vertical) extent of , corresponding to cm at 450 pc. Open with DEXTER In the text
Figure 6: HH 444 H image ( left) and a time-series of H images predicted from the two-mode ejection velocity variability model described in Sect. 5 (the corresponding integration times are given at the bottom left of each frame). The images were computed assuming a orientation angle between the outflow axis and the plane of the sky. The HH 444 image is depicted with the logarithmic scale given at the top of the left frame (in erg s-1sterad-1) and the predicted images are depicted with the scale given at the top of the right frame (in the same units). The displayed domain has an axial (vertical) extent of , corresponding to cm at 450 pc. Open with DEXTER In the text
Figure 7: H images (for a orientation angle between the outflow axis and the plane of the sky) predicted from the two-mode ejection velocity variability model described in Sect. 5 for a t=400 yr integration time. The three maps correspond to three simulations with maximum resolutions cm (the low'' resolution model), cm (medium'' resolution) and cm (high'' resolution) along the two axes. The images are depicted with the scale given at the top of the right frame (in erg cm-2 s-1 sterad-1). The displayed domain has an axial (vertical) extent of cm. Open with DEXTER In the text
Figure 8: Peak H intensity of the three knots seen in the images predicted from the 2-mode, variable ejection jet model (see Fig. 7) as a function of the maximum resolution of the simulation. The crosses give the H intensity of the knot closest to the source, the triangles correspond to the second knot, and the circles to the third knot. Open with DEXTER In the text
Figure 9: H map ( right) predicted from the two-mode ejection variability model (see Sect. 5) for a t=400 yr time-integration. The map was computed assuming a orientation between the jet axis and the plane of the sky. The left plot shows the jet width vs. position dependence recovered from the predicted Hmap using the wavelet analysis technique (described in Sect. 2.2). Two successive contours correspond to a factor of 2. Open with DEXTER In the text | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.892219603061676, "perplexity": 1484.628689449388}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948544124.40/warc/CC-MAIN-20171214124830-20171214144830-00612.warc.gz"} |
http://www.maa.org/publications/periodicals/convergence/investigating-eulers-polyhedral-formula-using-original-sources-definitions-and-examples?device=desktop | # Investigating Euler's Polyhedral Formula Using Original Sources - Definitions and Examples
Author(s):
Lee Stemkoski (Adelphi University)
Now let us examine the content of the paper.
Euler spends the first six paragraphs motivating his topic; he wishes to put solid geometry on the same foundation as plane geometry. This section of Euler's paper is possibly the most difficult to translate in terms of vocabulary and grammar, and thus we will skip ahead to the mathematics.
This paper is full of results and examples, each of which is divided into a digestible length: paragraphs of only a few sentences. Once the reader is able to capture the mathematical content in a given paragraph (usually one fact per paragraph), the next paragraph may be explored. Here, we will refer to the paragraphs by number and highlight some of those which are especially suitable for use in the classroom.
In paragraph 7 (Figure 1), Euler mentions that there are three parts of the solid that will be considered: points, lines, and surfaces. The names he gives them, respectively, are:
• Anguli solidi (solid angles), the number of which Euler denotes by S; in today's notation, we call them vertices and denote this quantity by V.
• Acies (sharp edges), the number of which Euler denotes by A; we call them edges and use E.
• Hedrae (as in polyhedrae), the number of which Euler denotes by H; we call them faces and use F.
Figure 1: V, E, F defined
Euler also repeatedly refers to two additional quantities. The first of these is anguli planius, the plane angles of the polygons that constitute the faces of a given polyhedron. For example, a square consists of four plane angles; at each vertex of a cube, three squares meet. Therefore, since a cube has eight vertices, it contains a total of of twenty-four plane angles. The second quantity that appears is laternum, or sides of polygons. Using the example of a cube again: a cube has six faces that are squares, each of which has four sides. Therefore we would say that a cube has twenty-four laternum; this is different from the number of edges, which is twelve. As the reader has possibly discovered, every edge is the intersection of two polygonal faces along a single side, and thus in general, the number of edges is half the total number of sides of all faces. This fact is actually Euler's first Proposition, and will be discussed in detail later in our paper. Euler never assigns letters to represent the number of anguli planius or number of laternum, so to facilitate classroom discussion, you may want to declare that these quantities be denoted by P and L. Another early observation of Euler is that P = L, a fact that he will use repeatedly in his subsequent propositions.
Since these terms are new to Euler's audience (as well as our students), the pedagogically sound next step is to look at an example. Therefore, in paragraph 8, Euler discusses a cuneiforme, or wedge-shape. (The word cuneiforme also refers to the wedge-shaped style of writing used by the ancient Sumerians and Babylonians.) Euler includes a picture of a wedge, labels the vertices, and explicitly lists all six vertices, nine edges, and five faces.
Figure 2: Euler's Wedge-Shaped Example
In paragraph 9, Euler mentions the five regular polyhedra, which we commonly call the Platonic solids today. (Note that the "hexahedron" refers to the cube.)
Figure 3: Regular Polyhedra are Mentioned
In paragraph 10, Euler proposes a general naming system for polyhedra based on the number of vertices and faces, not dissimilar from the genus-species system used to classify living organisms today. True to his style, after discussing generalities, Euler proceeds to give specific examples of the names of some common solids under his new system in paragraph 11. In particular, the wedge-shape previously considered shall be called a pentaedrum hexagonum, as it has five faces and six vertices. The other examples of solids mentioned by Euler are a triangular pyramid, a triangular prism, and a parallelepiped. It might aid student comprehension to review the Greek names for the numbers from one to twenty at this point.
These three sections naturally lend themselves to a classroom discussion. What would Euler call a pentagonal prism? What would a hexagonum octaedrum be called today? Students should be given some time to become acquainted with the geometric terms before moving on; Euler's presentation style is very conducive to this. Depending on the depth to which a teacher wishes to cover this topic, there are deeper questions that can be explored at this junction. What are the advantages and disadvantages to Euler's naming system? Is it possible that two different solids would have the same name under this system? What do we mean by "different" in this sense? For example, should we consider a cube and a truncated square pyramid as "different"?
Lee Stemkoski (Adelphi University), "Investigating Euler's Polyhedral Formula Using Original Sources - Definitions and Examples," Loci (April 2010), DOI:10.4169/loci003297 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8320429921150208, "perplexity": 987.9948999639942}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463475.57/warc/CC-MAIN-20150226074103-00167-ip-10-28-5-156.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/176274/chebyshev-inequality-for-martingales | # Chebyshev Inequality for Martingales
Suppose $\{X_n\}_{n \geq 1}$ is a square-integrable martingale with $E(X_1)=0$. Then for $c>0$:
$$P\left(\max_{i=1, \ldots, n} X_i \geq c\right) \leq \frac{\textrm{Var}(X_n)}{\textrm{Var}(X_n) + c^2}.$$
I imagine Doob's martingale inequality will come into play, but the details elude me.
-
Got something from the answer below? – Did Aug 11 '12 at 9:22
One can start from Doob's martingale inequality, which states that for every submartingale $(Y_n)_{n\geqslant0}$ and every $y\gt0$, $$\mathrm P\left(\max\limits_{0\leqslant k\leqslant n}Y_k\geqslant y\right)\leqslant\frac{\mathrm E(Y_n^+)}y\leqslant\frac{\mathrm E(|Y_n|)}y.$$ Applying this to $Y_n=(X_n+z)^2$ for some $z\gt0$ and to $y=(x+z)^2$ for some $x\gt0$, one gets $$\mathrm P\left(\max\limits_{0\leqslant k\leqslant n}X_k\geqslant x\right)\leqslant\mathrm P\left(\max\limits_{0\leqslant k\leqslant n}Y_k\geqslant y\right)\leqslant C_n(z),$$ with $$C_n(z)=\frac{\mathrm E(|Y_n|)}{y}=\frac{\mathrm E(X_n^2)+z^2}{(x+z)^2}.$$ Finally, for $z=\dfrac{\mathrm E(X_n^2)}{x}$, $C_n(z)=\dfrac{\mathrm E(X_n^2)}{\mathrm E(X_n^2)+x^2}$ hence the proof is complete. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9653446078300476, "perplexity": 214.61709518574344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829421.59/warc/CC-MAIN-20140820021349-00447-ip-10-180-136-8.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/191228/image-of-matrix-exponential-map | # Image of Matrix Exponential Map
It is known that every $A$ belongs to $GL(n,\mathbb C)$ equals to $\exp(B)$ for some $n \times n$ matrix $B$. How to show the following is true? Show that a matrix $M$ belonging to $GL_n(\mathbb R)$ is the exponential of a real matrix if, and only if, it is the square of another real matrix.
-
The only if is easy. Suppose $M=e^{A}$. Then $M=(e^{A/2})^2$. – Alex Becker Sep 5 '12 at 2:31
The other direction does not seem true to me. $M$ can be singular AND a square of another real matrix. – Tunococ Sep 5 '12 at 2:43
@Tunococ I believe OP is asking about invertible matrices only (the word "belongs" in the last sentence should be "belonging") – user29743 Sep 5 '12 at 3:00
If that's the case, Alex's proof works both ways :) – Tunococ Sep 5 '12 at 4:42
@Tunococ: Certainly Alex's proof does not show "if"; the other real matrix need not be in the image of $\exp$. – Marc van Leeuwen Sep 5 '12 at 7:10
Let $A = B^2$ be an invertible real matrix, where $B$ is also real. We must show that $A$ is an exponential of a real matrix. (The converse is clear, as in the comments above.)
The first step is to conjugate $B$ by a real matrix into real Jordan form. In real Jordan form, a matrix is made up of real Jordan blocks, which have the form
$$\left(\begin{matrix} C & 1 & \\ &C & 1 & \\ & & \ldots \\ & & & C & 1 \\ & & & & C \end{matrix}\right)$$
Here $C$ can either be a 1 by 1 real scalar, or a 2 by 2 block of the form $$\left(\begin{matrix} a & b\\-b& a \end{matrix}\right).$$ (In the latter case, the 1s in the matrix must be interpreted as 2 by 2 identity matrices. The latter case corresponds to eigenvalues $a \pm bi$.)
Since both exponential and squaring commute with both conjugation and "blocking", it is sufficient to prove the result for real Jordan blocks, i.e., if $B$ is a real Jordan block, then $B^2$ is an exponential of a real matrix.
Now it is a matter of checking the various cases.
If the Jordan block $B$ is a 1 by 1 scalar matrix, then its square is a positive real number, and the result just says that positive real numbers are exponentials, which is true.
If the Jordan block $B$ is $n$ by $n$ consisting of a single nonzero real scalar on the diagonal (i.e. has $n$ equal nonzero real eigenvalues), then $A = B^2$ is an upper triangular matrix with a single positive scalar $c$ repeated on the diagonal. We must check that any such matrix is an exponential.
A candidate matrix should have form $(\log c)I + J$ where $J$ is strictly upper triangular (0's on the main diagonal). Since $\exp((\log c)I + J) = c \exp(J)$, it is sufficient to consider the case $c=1$.
Thus, we are reduced to showing that $\exp$ is surjective from the set of strictly upper triangular matrices (which I will denote $T_0$) , to upper triangular matrices with 1 on the diagonal (denoted $T_1$). This isn't too hard to do by brute force (by calculating explicitly the exponential of an element of $T_0$ ; remember that all terms of the exponential series starting with the $n$th vanish on $T_0$). Alternatively, one can use the Baker-Campbell-Hausdorff formula applied to the Lie algebra $T_0$. Because this Lie algebra is nilpotent, repeated Lie brackets eventually vanish, so the right hand side of the BCH formula becomes finite and therefore converges for all pairs of elements in $T_0$. Hence the image of $T_0$ under $\exp$ is closed under multiplication and inversion, and is hence a subgroup of $T_1$. Because $T_1$ is connected, and the image of $\exp$ contains a neighborhood of the identity (true for any Lie group), it follows that the image must be all of $T_1$.
The remaining cases are where the Jordan blocks of $B$ consist of 2 by 2 blocks, with eigenvalues $a \pm bi$. If it's a single 2 by 2 block, the result follows from the surjectivity of $\exp$ in $\mathbb{C}$ (essentially, because $(a \pm bi)^2$ has a complex logarithm). If the Jordan block is bigger, then we use a similar argument as above: first, reduce to the case where the eigenvalue is 1, and then use the Lie algebra argument as above.
-
This is not a complete proof, however, it may help you to come up with complete proof. I assume you have some knowledge of Lie groups and exponentials, if there is anything I mention that is not clear, say so and I'll try to clarify.
$GL_n(\mathbb{R})$ has two connected components, this fact is suggested (though not proven) by the determinant map
$\det:GL_n(\mathbb{R})\rightarrow \mathbb{R}^{\times}$
These connected components correspond to the matrices with positive and negative determinant. Call them $GL_n(\mathbb{R})^-$ and $GL_n(\mathbb{R})^+$
Then suppose that $M=A^2$ for some real matrix $A$. Then $\det(M)=\det(A)^2>0$, so $M\in GL_n(\mathbb{R})^+$
Two other important facts about lie groups and exponentials:
$(1)$ If $G$ is a lie group, then any open neighborhood of the identity element generates the connected component of the identity, usually called $G^0$
$(2)$ the exponential map is a local homeomorphism (about the origin), so there is an open neighborhood of the zero matrix in $M_n(\mathbb{R})$ that maps homeomorphically to an open neighborhood of the identity in $GL_n(\mathbb{R})$
This means that there is an open set $U$ in $GL_n(\mathbb{R})$ such that for any $A\in U$, $A=e^{B}$ for some $B$ in $M_n(\mathbb{R})$. Then $U$ generates the connected component of the identity, so any matrix in $GL_n(\mathbb{R})^+$ is a product of exponentials. Thus $M=e^{X_1}\ldots e^{X_n}$ for some real matrices $X_1,\ldots,X_n$.
This is not quite answer to the original post, because it is not the image of one matrix under the exponential. But hopefully it helps!
-
And note that $\begin{pmatrix}-1&0\\0&-2\end{pmatrix}$ is not in the image of $\exp$, in spite of it having positive determinant. – Marc van Leeuwen Sep 5 '12 at 11:38
@ Marc van Leeuwen Yes exactly, and $\left(\begin{matrix} -1 & 0\\0&-2 \end{matrix}\right)$ is the product of $\left(\begin{matrix} 1 & 0\\0&2 \end{matrix}\right)$ and $\left(\begin{matrix} -1 & 0\\0&-1 \end{matrix}\right)$. The first is certainly in the image, and so is the second. In fact any matrix of form $\left(\begin{matrix} \cos{t} & -\sin{t}\\ \sin{t}& \cos{t} \end{matrix}\right)$ is in the image. An element that maps to it is $\left(\begin{matrix} 0& -t\\t&0 \end{matrix}\right)$ – Moss Sep 5 '12 at 11:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9573389291763306, "perplexity": 117.16537932838868}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824345.69/warc/CC-MAIN-20160723071024-00192-ip-10-185-27-174.ec2.internal.warc.gz"} |
http://www.itl.nist.gov/div898/handbook/mpc/section3/mpc3672.htm | 2. Measurement Process Characterization
2.3. Calibration
2.3.6. Instrument calibration over a regime
2.3.6.7. Uncertainties of calibrated values
## Uncertainty for linear calibration using check standards
Check standards provide a mechanism for calculating uncertainties The easiest method for calculating type A uncertainties for calibrated values from a calibration curve requires periodic measurements on check standards. The check standards, in this case, are artifacts at the lower, mid-point and upper ends of the calibration curve. The measurements on the check standard are made in a way that randomly samples the output of the calibration procedure.
Calculation of check standard values The check standard values are the raw measurements on the artifacts corrected by the calibration curve. The standard deviation of these values should estimate the uncertainty associated with calibrated values. The success of this method of estimating the uncertainties depends on adequate sampling of the measurement process.
Measurements corrected by a linear calibration curve As an example, consider measurements of linewidths on photomask standards, made with an optical imaging system and corrected by a linear calibration curve. The three control measurements were made on reference standards with values at the lower, mid-point, and upper end of the calibration interval.
Compute the calibration standard deviation For the linewidth data, the regression equation from the calibration experiment is $$Y = a + bX + \epsilon$$ and the estimated regression coefficients are the following. $$\hat{a} = 0.2357$$ $$\hat{b} = 0.9870$$ Next, we calculate the difference between the "predicted" $$X$$ from the regression fit and the observed $$X$$. $$W_i = \frac{(Y_i - \hat{a})}{\hat{b}} - X_i$$ Finally, we find the calibration standard deviation by calculating the standard deviation of the computed differences. $$S = \sqrt{\frac{\sum \left( W_i - \overline{W} \right)^2}{n-1}}$$ The calibration standard deviation for the linewidth data is 0.119 µm.
The calculations in this section can be completed using Dataplot code and R code.
Comparison with propagation of error The standard deviation, 0.119 µm, can be compared with a propagation of error analysis.
Other sources of uncertainty In addition to the type A uncertainty, there may be other contributors to the uncertainty such as the uncertainties of the values of the reference materials from which the calibration curve was derived. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9150006771087646, "perplexity": 931.939908325317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272256.16/warc/CC-MAIN-20140728011752-00493-ip-10-146-231-18.ec2.internal.warc.gz"} |
http://mathoverflow.net/questions/5982/intuition-for-nagatas-altitude-formula?sort=votes | # Intuition for Nagata's altitude formula?
This is theorem 14.C on p.84 of Matsumura's commutative algebra.
Let $A$ be a noetherian domain, and let $B$ be a finitely generated overdomain of $A$. Let $P \in Spec(B)$ and $p = P \cap A$. Then we have $ht(P) \leq ht(p) + tr.d._{A} B - tr.d._{K(p)} K(P)$ with equality holds when $A$ is universally catenary or if $B$ is a polynomial ring over $A$.
Question: How should one understand this formula? I'm hazarding a guess that this factor, $tr.d._{A} B - tr.d._{K(p)}K(P)$, can somehow measure how primes of $B$ will be identified when they are restricted back to $A$. But this sounds woefully wrong and I just want to know how I should view this result or whether there is any (geometric) intuition behind the result.
Thanks!
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9606241583824158, "perplexity": 177.44413608470464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645191214.61/warc/CC-MAIN-20150827031311-00210-ip-10-171-96-226.ec2.internal.warc.gz"} |
http://motls.blogspot.com/2005/10/little-hierarchy-problem.html | Wednesday, October 19, 2005 ... /////
Little hierarchy problem
When you compute quantum, loop corrections to the Higgs mass, you obtain quadratically divergent graphs. Therefore, the exact value is quadratically sensitive on the cutoff scale and it is naturally predicted to be huge - unless we fine-tune the bare mass of the Higgs. On the other hand, reality forces us to believe that the Higgs is about as heavy as two W bosons. Otherwise, its quartic coupling is far too large and the effective quantum field theory breaks down.
This is called the (big) hierarchy problem. It is big because usually we want to assume that the effective theories should be valid at the GUT scale or even the Planck scale.
Some people may say that they don't care about these high scales, and they're perfectly happy with completely new - and perhaps non-field-theoretical - physics kicking in already at a few TeV. Even these people have a problem. It is the little hierarchy problem.
According to the precise measurements, the Standard Model is incredibly successful. It seems more successful than just a theory of physics below the 100 GeV scale. If you imagine that there is new physics at "M = 3 TeV" or so, it will generate new non-renormalizable terms (operators whose dimensions exceed four) in the low-energy effective action, suppressed by powers of "1/M", whose coefficients will be of order one. You can estimate the effect of these small corrections on the measured data. Indeed, you will find no effects whatsoever and the precision we have today implies that "M" must be greater than 3 TeV or something like that.
This also holds for new physics that is supposed to stabilize the Higgs mass - such as supersymmetry but not necessarily just supersymmetry. The observation about the higher dimension operators should therefore mean that the mass of the Higgs should be around 3 TeV, too. Of course, theoretical considerations show that it should be somewhere in the 115-200 GeV range - and perhaps up to 700 GeV in the non-supersymmetric case. You see a certain discrepancy between 150 GeV and 3 TeV - a factor of order 20 or so - which is called the little hierarchy problem.
I personally don't call it a real problem. There may be cancellations that drive the Higgs mass to 5% of its "natural" value. The coefficients of order one are never exactly one and 0.05 is an example of a number of order one. What a big deal. We have more serious problems. However, if you're a low-energy phenomenologist, this detail may be one of a very small number of problems that you still have :-) and therefore you study it most of the time.
Giacomo's talk
Yesterday, Giacomo Cacciapaglia from Cornell - yes, the Italians are taking over phenomenology - was presenting their model for the Higgs. The electroweak SU(2) is enhanced to an SU(3); you still need another independent U(1) to generate the hypercharge with the correct Weinberg angle. Such a construction creates extra SU(2) doublets inside the adjoint of the weak SU(3). These new fields would transform as vectors under the Lorentz group. But if you imagine that the space is five-dimensional, there will also be the fifth component of the gauge field and it will behave as a four-dimensional scalar. You play for a little while, trying to reproduce the Standard Model.
In their particular construction, it requires some amount of work to guarantee that there will be light fermions. At the end, however, it is more important to get the heavy top quark because its loop effects are responsible for obtaining the correct Higgs mass, including the sign. Their particular construction achieves this goal by introducing new large representations of the weak SU(3) group, namely the rank-four tensor "15".
Such new objects increase the couplings they needed to increase but they also lower the cutoff below which the theory is usable. The calculated cutoff will be just a few times higher than the compactification scale "1/R". It means that the terms that violate the five-dimensional Lorentz invariance may be generated with relatively large coefficients; and it also means that you only have a few Kaluza-Klein modes that can be trusted. Consequently, the set of rules that you find makes this class of the models equivalent to deconstruction and the little Higgs models where the fifth dimension is discretized and replaced by a couple of nodes in a quiver diagram. And in this context, the five-dimensional Lorentz invariance does not really constrain you and you may invent many justifications why the terms violating this invariance may be freely added to your Lagrangian whenever you find them useful.
Democracy between solutions of the little hierarchy problem
This means that the moral content of all known solutions to the little hierarchy problem is isomorphic; moreover, the factor of 20 is just moved to some other unnatural features of your theory that must be adjusted. For example, adding an otherwise unjustified large representation whose dimension is D - where D turns out to be at least 15 - is about as bad as fine-tuning a continuous parameter with the 1/20 accuracy, I think. Consequently, you may ask whether the problems and unnaturalness that you added exceed the problems that you solved.
Supersymmetry only solves the big hierarchy problem (the little hierarchy problem remains because we know that superpartners are absent below 200 GeV or so), but it does so in a very satisfactory way. It allows us to believe that quantum field theory will be valid up to very high scales, which I guess will ultimately be the conclusion of any experiments that the people will ever construct. On the other hand, it allows you to exactly cancel the loop corrections to the Higgs mass. The nonzero contributions that remain are governed by the supersymmetry breaking scale.
I am too conservative to abandon the notion of naturalness. On the other hand, it is obvious to all of us that a sharp and well-defined definition of naturalness can only occur once we have a complete enough theory.
Natural estimates of the size of a quantity are nothing else than an incomplete approximative calculation based on a theory that is pretty close to the full theory, and it should eventually be replaced by an exact analytical calculation of such a quantity. It has been the case of atomic physics and many other contexts and it is the only interpretation I can imagine that makes the question "which model is more satisfactory" relatively well-defined. A more satisfactory model is, of course, a model that is closer to the exact full theory of everything whose existence must be assumed. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8701876401901245, "perplexity": 407.60477010481713}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705305291/warc/CC-MAIN-20130516115505-00027-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/851327/show-that-it-is-possible-that-the-limit-displaystyle-lim-x-rightarrow-inf | # Show that it is possible that the limit $\displaystyle{\lim_{x \rightarrow +\infty} f'(x)}$ does not exist.
Let $f: \mathbb{R} \rightarrow \mathbb{R}$ a differentiable function with continuous derivative and the limit $\displaystyle{\lim_{x \rightarrow +\infty} f(x) }$ exists. Show with an example that it is possible that the limit $\displaystyle{\lim_{x \rightarrow +\infty} f'(x)}$ does not exist.
My attempt:
$$f(x)=\int_{-\infty}^{x} e^{t^2}dt$$
$$\lim_{x \to +\infty} f(x)=\lim_{x \to +\infty} \int_{-\infty}^{+\infty} e^{t^2}dt=\frac{\sqrt{\pi}}{2}$$
$$\lim_{x \to +\infty} f'(x) =\lim_{x \to +\infty} e^{x^2}= +\infty \notin \mathbb{R}$$
Is my attempt right?
-
You've mistaken finding the derivative: for your function $f'(x)=e^{-x^2}$, so limit does exist. – CuriousGuest Jun 29 '14 at 15:18
I edited my post. – user159870 Jun 29 '14 at 15:19
Well, now the integral diverges on both ends! – Ted Shifrin Jun 29 '14 at 15:20
Allow me to tell a little story. When I was much younger, I read to my horror in a high-school math text that a differentiable function $f(x)$ has an asymptote for $x \to \infty$ iff $\lim_{x\to\infty} f'(x)$ exists. Now the examples here show that one implication is not true. The other is disproved by $f(x) = \sin(1/x)$. – Andreas Caranti Jun 29 '14 at 17:29
Another example, which used to be the standard one in my time, is $$f(x) = \frac{1}{x} \sin( x^{2} ),$$ where $$f'(x) = -\frac{1}{x^{2}} \sin(x^{2} ) + \frac{1}{x} 2 x \cos(x^{2}) = -\frac{1}{x^{2}} \sin(x^{2} ) + 2 \cos(x^{2}).$$
-
will $\frac{1}{x} \sin x$ also work ? – Rene Schipperus Jun 29 '14 at 17:17
Nope, $(\sin(x)/x)' = -\sin(x)/x^{2} + \cos(x)/x$ goes to zero as $x \to \infty$. – Andreas Caranti Jun 29 '14 at 17:21
Nice, thanks... – Rene Schipperus Jun 29 '14 at 17:22
@ReneSchipperus, you're welcome! – Andreas Caranti Jun 29 '14 at 17:22
Almost right: try $$f(x)=\int_0^x \sin(t^2)\,dt.$$
-
So,is my attempt wrong? Why? – user159870 Jun 29 '14 at 15:25
As Ted Shifrin said in the comment to your question, integral of $e^{t^2}$ diverges. – CuriousGuest Jun 29 '14 at 15:26
To elementarily prove that $\lim_{x\to \infty} f(x)$ exists for the above function (without the residue theorem), we can break the integral up into an alternating sum and apply the alternating series test. – whosleon Jun 29 '14 at 15:30
Without using integrals, you can consider
• $\displaystyle \frac{\sin(x^2)}{x}$
• $\displaystyle \frac{\sin(x^3)}{x}$
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9369449615478516, "perplexity": 818.9213220985273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928715.47/warc/CC-MAIN-20150521113208-00205-ip-10-180-206-219.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/125161/ways-to-solve-these-types-of-indeterminations | # Ways to solve these types of indeterminations?
I'm looking for this type of indeterminations and what I have to do to solve these:
$\lim\limits_{x\to\infty}{f(x)=\dfrac{\infty}{\infty}}$
$\lim\limits_{x\to\infty}{f(x)=\infty-\infty}$
$\lim\limits_{x\to\infty}{f(x)=\dfrac{0}{0}}$
$\lim\limits_{x\to\infty}{f(x)=\dfrac{n}{0}},n\not=0$
EDIT: Could anyone post any examples of these indeterminations?
-
Just examples? (1) $f(x) = \frac{x^2+1}{x^3+1}$, $f(x)=\frac{x^3+1}{x^2+1}$, $f(x)=\frac{2x^2-1}{3x^2+x}$; they each have different limits. (2) $f(x) = \sqrt{x^2+1}-\sqrt{x^2-1}$. (3) $f(x) = \frac{\arctan(x)-(\pi/2)}{e^{-x}}$; (4) $f(x) = 3/e^{-x}$. – Arturo Magidin Mar 27 '12 at 19:30
Could anyone post any examples of these indeterminations?
It's too long to evaluate in detail indeterminations of all the 4 types, with and without using L'Hôpital's rule. Just one
Example of $\infty -\infty$ indeterminations.
Sometimes these indeterminations can be evaluated without using l'Hôpital's rule. That's the case of rational fractions in $x$, i.e $\frac{P(x)}{Q(x)}$, with $P(x)$ and $Q(x)$ polynomials in $x$. As an example let $f(x)=\frac{x^{2}}{x+2}$ and $g(x)=\frac{x^{3}}{x+3}$. We have
$$\begin{eqnarray*} \lim_{x\rightarrow \infty }f(x) &=&\lim_{x\rightarrow \infty }\frac{x^{2}}{x+2}=\infty \\ \lim_{x\rightarrow \infty }g(x) &=&\lim_{x\rightarrow \infty }\frac{x^{3}}{x+3}=\infty. \end{eqnarray*}$$ Hence $\lim_{x\rightarrow \infty }f(x)-g(x)$ is indeterminate. Since we can rewrite $f(x)-g(x)$ as $$\begin{equation*} \frac{x^{2}}{x+2}-\frac{x^{3}}{x+3}=\frac{-x^{4}-x^{3}+3x^{2}}{x^{2}+5x+6}:= \frac{P(x)}{Q(x)}, \end{equation*}$$ where $P(x)=-x^{4}-x^{3}+3x^{2}$ and $Q(x)=x^{2}+5x+6$, we have $$\begin{eqnarray*} \lim_{x\rightarrow \infty }\frac{x^{2}}{x+2}-\frac{x^{3}}{x+3} &=&\lim_{x\rightarrow \infty }\frac{P(x)}{Q(x)} \\ &=&\lim_{x\rightarrow \infty }\frac{-x^{4}-x^{3}+3x^{2}}{x^{2}+5x+6} \\ &=&\lim_{x\rightarrow \infty }\frac{-x^{2}-x+3}{1+5/x+6/x^{2}} \\ &=&\frac{\lim_{x\rightarrow \infty }-x^{2}-x+3}{\lim_{x\rightarrow \infty }1+5/x+6/x^{2}} \\ &=&\frac{-\infty }{1+0+0}=-\infty. \end{eqnarray*}$$ The polynomials $P(x)$ and $Q(x)$ are differentiable. We can thus apply l'Hôpital's rule to the fraction $$\begin{equation*} \frac{P(x)}{Q(x)}=\frac{-x^{4}-x^{3}+3x^{2}}{x^{2}+5x+6} \end{equation*}$$ as follows
$$\begin{eqnarray*} \lim_{x\rightarrow \infty }\frac{P(x)}{Q(x)} &=&\lim_{x\rightarrow \infty } \frac{P^{\prime }(x)}{Q^{\prime }(x)}\\ &=&\frac{\lim_{x\rightarrow \infty }-4x^{3}-3x^{2}+6x}{\lim_{x\rightarrow \infty }2x+5} \\ &=&\frac{\lim_{x\rightarrow \infty }-12x^{2}-6x+6}{\lim_{x\rightarrow \infty }2}=-\infty . \end{eqnarray*}$$
The final results are, of course, the same. The evaluation of
$$\displaystyle \lim_{x \to 4} \; \frac{x-4}{5-\sqrt{x^2+9}}$$
is done in this question. There are many other examples in this site.
Exercise. Try to evaluate the similar indetermination in the limit of
$$\begin{equation*} \frac{1}{x-3}+\frac{5}{\left( x+2\right) \left( 3-x\right) } \end{equation*}$$
as $x$ tends to $3$.
-
The answer will generally depend on the particular $f(x)$ (though for cases 1 and 3 there is a general tool, L'Hopital's Rule if you already know it, that will often work, and for case 4 you can always say the limit does not exist).
The first limit type may be done by L'Hopital's rule (if both numerator and denominator are differentiable), or by algebraic manipulations dependent on the particular $f(x)$.
The second type of limit will require some algebraic manipulations (usually particular to the $f(x)$ in question) to bring it to some manageable form.
The third type may be done by L'Hopital's rule if both numerator and denominator are given by differentiable functions; or again by algebraic manipulations that depend on the particular form of $f$.
In the fourth case, the limit does not exist. If $n\gt 0$ and the denominator is always positive for large enough $x$, then the limit will be $\infty$; same if $n\lt 0$ and the denominator is negative for all large enough $x$. If $n\gt 0$ and the denominator is negative for all large enough $x$, or $n\lt 0$ and the denominator is always positive for large enough $x$, then the limit will be $-\infty$. Otherwise, the limit will simply not exist and not diverge to either $\infty$ or $-\infty$.
-
l'Hopital's rule is a standard result that handles SOME problems of this type. It is usually taught in a Calc 1 or Calc 2 class in the United States.
If your function is in the form $\frac{\infty}{\infty}$ or $\frac{0}{0}$, you can try to apply it right away. If your function is in the form $\infty - \infty$, you need to try to rewrite it in some way to get it in one of the other two forms.
The last form $\frac{n}{0}$ is NOT an indeterminate form.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9828066825866699, "perplexity": 142.92213241541904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097512.42/warc/CC-MAIN-20150627031817-00079-ip-10-179-60-89.ec2.internal.warc.gz"} |
https://community.ptc.com/t5/PTC-Mathcad/subscripts-or-superscripts-in-text-mode-If-so-how-Thank-you/m-p/488668/highlight/true | cancel
Showing results for
Did you mean:
cancel
Showing results for
Did you mean:
Regular Member
subscripts or superscripts in text mode ? If so how? Thank you
Is it possible to produce subscripts or superscripts in text mode with prime 3.0 or 3.1 ? if so how ?
6 REPLIES 6
Re: subscripts or superscripts in text mode ? If so how? Thank you
Hi Dixie. Haven't seen you around here in a while
Sorry, but that still one of the missing features in Prime. Hopefully we will get it in Prime 4.0, although I think that's still quite a long way away.
Re: subscripts or superscripts in text mode ? If so how? Thank you
Now I use in Prime texts with subscripts or superscripts as a picture.
The bad solution is a solution too!
Re: subscripts or superscripts in text mode ? If so how? Thank you
You can add a math region inside a text block:
Re: subscripts or superscripts in text mode ? If so how? Thank you
Not easily, but you can use the Windows Character Map to get numeric super and sub-scripts.
1. Find and start Character Map under Start | Accessories | System Tools
2. Select the Mathcad UniMath Prime font in the top Font: selection box
3. In the bottom of the panel select Group by: Unicode Subrange and pick Super/Subscript as the subrange
4. Now pick any of the numeric super-scripts or sub-scripts by double clicking on it.
5. Select the Copy button and then paste the character into your text in Prime
You can keep the Character Map Utility open on your screen to grab more symbols and paste them in. There are some nice fractions (Number Forms), Arrows, and Mathematical Operators.
Caution: If you later reformat your paragraph say to all Times Roman, other fonts don't support these unique symbols.
Re: subscripts or superscripts in text mode ? If so how? Thank you
I'd want the Mathcad file converter to do this automatically rather that just providing a red line in the text box and leaving me to hunting for the substitution.
Highlighted
Re: subscripts or superscripts in text mode ? If so how? Thank you
This is my preferred solution, and makes for a richer-looking math typeset in your MathCAD documents, especially with the sub/superscripts and combining diacritical marks. In fact, the MathCAD UniMath Prime font is fairly rich in just about every unicode character.
There are so many unicode characters that what you want may be difficult to find. A little experience with the search feature, and unicode subrange grouping filter will ease the finding (although charmap seems to have troubles displaying all available characters sometimes).
In fact, this is a great workable solution to providing Plot titles and legends.
However, it sure would be nice to have the palatte directly available within Prime. I have a worksheet of often-used characters for these things... | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.872176468372345, "perplexity": 3061.9072304563865}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670255.18/warc/CC-MAIN-20191119195450-20191119223450-00002.warc.gz"} |
https://www.physicsforums.com/threads/castiglianios-theorem.171299/ | # Castiglianios theorem
1. May 22, 2007
### umarfarooq
1. The problem statement, all variables and given/known data
a) State Castiglianos theorem for translational and rotational displacements of an elastic body, stating precisely the meanings of the terms.
b) A swing in a childrens play area is constrcuted from a steel tube bent into a quarter circle of radius R. One end is rigidly fixed to the ground with the tangent to the circle vertical, and the swing attached to the other end. Assuming that the beam has a section constant EI, derive experssions for the vertical and horizontal displacements of the swing when a downwards load P is applied to it.
2. Relevant equations
dV=dU/dL=d(int(M^2/(2*E*I)*r*dTheta,Pi->0)/dP
3. The attempt at a solution
sorry but im completely baffled
2. May 22, 2007
### Pyrrhus
Where are you stuck??
You need to show your work, may i recommend to use your moment equation in function of the angle the radius makes with the vertical.
3. May 22, 2007
### umarfarooq
okay, i think my answer is wrong but this is what ive got.
the moment is M(theta) is PRCos(theta) + F(R-RSin(theta)) where f is ficticious i know so do i disregard that.
Therefore M^2(theta) = R^2(P^2Cos^2(theta) + F^2 - F^2Sin^2(Theta).
Therefore i use that in the formula d(int(M^2/(2*E*I)*r*dTheta,Pi->0)/dP
I use the trig identities for Cos^2(theta) and Sin^2(theta) and integrate. If i ignore the ficticous force F the value of the integral is P^2(Pi/2)
This gives me a deflection of (P^2*Pi*R^3)/(4*E*I)
Is this correct
Would appreciate it alot, Thanks
4. May 22, 2007
### Pyrrhus
You need to read castigliano's theorem again, in this case the fictitious force is equal to the applied load, unless the applied load is not at the free end, if that's the case you must specify where it is, so we can actually help out!
Last edited: May 22, 2007
Know someone interested in this topic? Share this thread via Reddit, Google+, Twitter, or Facebook
Similar Discussions: Castiglianios theorem | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9579350352287292, "perplexity": 1761.2291228972122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825473.61/warc/CC-MAIN-20171022222745-20171023002745-00559.warc.gz"} |
http://mathhelpforum.com/calculus/160352-integration-problem-do-they-want-me-use-partial-fractions.html | # Math Help - Integration problem. Do they want me to use partial fractions?
1. ## Integration problem. Do they want me to use partial fractions?
I am working on an assignment. The first task is:
Find the indefinite integral of (x+2)/(x^2)+x
I factored out an x of the denominator, and split to partial fractions that were easy to integrate. My result was: 2*ln|x| - ln|x+1|
My questions are:
Have i done it right?
Even if i have, is there another (simpler, or more clever) way to do this?
2. I assume you are actually trying to write
$\displaystyle{\int{\frac{x+2}{x^2+x}\,dx}}$
in which case, partial fractions is the way to go.
3. If by partial fractions you mean partial fraction decomposition, you don't even have to do that. You just have to see that
$\displaystyle \frac{x+2}{x^2 + 2} = \frac{x}{x^2+2}+\frac{2}{x^2+2}$
You'll then end up with two separate integrals you'll have to bust some sweet moves on. One should be straight up u-substitution and the other well...here's a BIG hint (it's not a bad idea to have this memorized)
$\displaystyle \int\frac{dx}{a^2+x^2} = \frac{1}{a}\tan^{-1}\left(\frac{x}{a}\right) + C$
4. Great, thanks!
5. Originally Posted by pirateboy
If by partial fractions you mean partial fraction decomposition, you don't even have to do that. You just have to see that
$\displaystyle \frac{x+2}{x^2 + 2} = \frac{x}{x^2+2}+\frac{2}{x^2+2}$
You'll then end up with two separate integrals you'll have to bust some sweet moves on. One should be straight up u-substitution and the other well...here's a BIG hint (it's not a bad idea to have this memorized)
$\displaystyle \int\frac{dx}{a^2+x^2} = \frac{1}{a}\tan^{-1}\left(\frac{x}{a}\right) + C$
Except it's not $\displaystyle{\frac{x + 2}{x^2 + 2}}$, it's $\displaystyle{\frac{x+2}{x^2 + x}}$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.863777220249176, "perplexity": 600.9006564400977}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042981969.11/warc/CC-MAIN-20150728002301-00026-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/finding-location-of-an-object-as-a-function-of-time.530562/ | # Finding location of an object as a function of time
1. Sep 15, 2011
### NCampbell
1. The problem statement, all variables and given/known data
a) Starting From Newtons second law, find the location of the rock as a function of time in horizontal and vertical Cartesian coordinates
b) If r(t) is the distance the rock is from its starting point, what is the maximum value of θ for which r will continually increase as the rock flies through the air?
2. Relevant equations
F=ma
3. The attempt at a solution
a) So as the question asks, I start with newtons second law, that being, f=ma so Fnet=dp/dt (where p=mv), Knowing this am I just to integrate m*dv/dt in vector form?
So I would get something like r(t) = v(t)-1/2a(t)^2 i + v(t)-1/2a(t)^2 j?
b) haven't got that far
2. Sep 15, 2011
### cepheid
Staff Emeritus
If a is not constant, then x(t) (or y(t)) is NOT equal to v(t) - 1/2a(t)^2, you actually have to integrate to figure out what the position function is.
BTW it's confusing because you use parentheses around t to indicate function notation in some places, and seemingly to indicate multiplication in other places. So, I'm not entirely sure what exactly you've written.
3. Sep 15, 2011
### NCampbell
I am sorry I am not grasping the concept of integrating vectors could you provide some direction?
4. Sep 15, 2011
### cepheid
Staff Emeritus
Basically, the components evolve with time independently so that:
r(t) = x(t)i + y(t)j
and,
x(t) = vx(t)dt
y(t) = vy(t)dt
Similarly:
vx(t) = ax(t)dt
vy(t) = ay(t)dt
From Newton's second law:
ax(t) = Fx(t) / m
ay(t) = Fy(t) / m
In all of the above, boldface quantities are vectors, subscripted quantities are vector components, and parentheses indicate function notation.
In case it wasn't clear from the above, the velocity, acceleration, and force vectors are of course given by:
v(t) = vx(t)i + vy(t)j = dr/dt
a(t) = ax(t)i + ay(t)j = dv/dt
F(t) = Fx(t)i + Fy(t)j
So, the summary is that to integrate or differentiate vectors, you can just integrate or differentiate component-wise.
I'm assuming you've been given F(t), otherwise I don't know how you'd solve the problem.
5. Sep 15, 2011
### NCampbell
Thank You Cepheid! that cleared things up for me I was lost after my lecture on vector calculus today amongst the different notations and how each of the components are affected differently, and no I wasn't given F(t) just that a rock was through off a cliff at angle (theta) from the horizontal
6. Sep 15, 2011
### cepheid
Staff Emeritus
Oh okay. So F(t) = 0i - mgj
(gravity: constant with time, and only vertical).
The angle gives you initial conditions on the velocity (i.e. you know both components of v at t = 0). Two initial conditions (initial velocity v(0) and initial position r(0)) are necessary in order to solve the problem, because otherwise when you integrate, you get a whole family of functions as solutions (represented by the arbitrary constants "C" that you get when you integrate each time). The initial conditions help you to find what the value of "C" is in this situation when you integrate.
7. Sep 17, 2011
### Vector Boson
I have this exact same problem (we are probably in the same class and I am stuck on part b). I would greatly appreciate any insight into solving this. Here is the full question:
A rock is thrown from a cliff at an angle θ to the horizontal. Ignore air resistance.
a) Starting From Newton's second law, find the location of the rock as a function of time in horizontal and vertical Cartesian coordinates.
b) If r(t) is the distance the rock is from its starting point, what is the maximum value of θ for which r will continually increase as the rock flies through the air? (Suggestion: write out an expression for r2, using the results of part a), and consider how it changes with time).
My expressions for part a) are:
x(t) = v0cosθt +x0
y(t) = v0sinθt - 4.9t2 + y0
$\vec{r}$(t) = $\hat{x}$x(t) + $\hat{y}$y(t) = $\hat{x}$(v0cosθt +x0) + $\hat{y}$(v0sinθt - 4.9t2 + y0)
So for part b) I have:
r2 = x2 + y2 = (v0cosθt +x0)2 + (v0sinθt - 4.9t2 + y0)2
I am just unsure what I am supposed to do with an expression for r2 and how that relates to finding θ.
Last edited: Sep 17, 2011
8. Sep 17, 2011
### ohms law
This isn't actually an answer (sorry), but...
r represents distance from the starting point. It's a function of x2 and y2. Just thinking about this problem, if you simply hold the rock and let it fall, than θ would be 0 and so would r. If you threw it exactly horizontal, than θ would be 90 degrees, and I'd empirically expect that r2 would reach it's maxima.
Of course, proving that is the question here. My thinking off the cuff here is to use fairly standard algebra to resolve the problem. Manipulate the equation so that you have v0cosθt by itself on the right (or left, whatever...) side, you know? I may be completely off here, but that's my thinking after having just read this now.
9. Sep 17, 2011
### Vector Boson
Actually if you dropped the rock straight off of the cliff then θ = -90 degrees (relative to the horizon as it states in the question) and r is a positive number $\leq$ to the height of the cliff that is always increasing with time as the rock falls. The answer definitely is not throwing it exactly horizontal because that means θ = 0 and we are looking for when θ is at the max angle above the horizontal where the distance to the rock is always increasing.
To get a better sense of the question, imagine throwing the rock at about 85 degrees above the horizon. The rock will go nearly straight up at first (i.e. r is increasing with time) but as soon as it reaches its peak height, it will begin to fall again nearly straight down and r will start decreasing with time. We are looking for the max angle where r will always be increasing over time.
That being said, I still don't know how to solve for θ but I do know that it will be somewhere in the range of ~45 degrees or so.
Last edited: Sep 17, 2011
10. Sep 18, 2011
### ohms law
Right, because... well, because. That makes intuitive sense to me, as well. As a matter of fact, I'm fairly sure that 45 degrees will end up being the answer.
You could try brute forcing the problem for the moment, just to make sure we're on the correct path. Stick the formula into a spreadsheet. I'm fairly convinced that this comes down to standard algebra (in that you simple need to manipulate the formula and solve for theta).
11. Sep 18, 2011
### NCampbell
If we know r (and using polar coordinates) that x=rcos$\theta$ and y=rsin$\theta$ then isn't $\theta$=arctan(y/x)? I am having a difficult time with this question so I am not entirely sure on what to do either.
12. Sep 19, 2011
i have a question, if i got vector r= (something)x-hat+(something)y-hat, how would i counvert it into polar form? i was thinking that if we find the first derivative of the polar form of r, the distance will be contineously increasing if the first derivative is greater than 0?
13. Sep 19, 2011
### NCampbell
To convert to polar form we use r=sqrt(x^2+y^2) and theta= arctan(y/x) where x and y are your x and y components
14. Sep 19, 2011
then how to start part b?
15. Sep 19, 2011
### NCampbell
I am not sure? I don't see the connection between r^2 and how to find theta that way, are you also in this class?
16. Sep 19, 2011
### cepheid
Staff Emeritus
The distance r = |r| =(x² + y²)½ is the distance of the object from the origin. If r is to be always increasing with time, then this suggests that the derivative of r with respect to time must always be positive (for all t), does it not? I think that is the key idea you need in order to get part (b). The problem may have instructed you to work with r² instead because it makes the math easier. I think that if r is always increasing then so is r².
The theta you are referring to here is the theta-coordinate, in a polar coordinate system. In general, this depends on time (since x and y depend on time). However, the problem want you to find the theta at launch, i.e. the initial value of theta at t = 0. This is just one particular value of theta at one particular time, and can be regarded as a constant in the equations. Vector Boson had the right idea.
17. Sep 19, 2011 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9039926528930664, "perplexity": 647.180091490125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583763839.28/warc/CC-MAIN-20190121070334-20190121092334-00407.warc.gz"} |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.