http://motls.blogspot.com/2007/08/magic-dispersion-of-gamma-rays.html

## Thursday, August 23, 2007
### MAGIC: dispersion of gamma rays?
Sociology around the MAGIC experiment: update
The MAGIC collaboration (100+ people), together with five theorists, has posted their preprint about a measured delay of high-energy gamma rays during a flare in an active galaxy.
Figure 1: MAGIC's telescope #1 according to its webcam
Setup
Markarian 501 is an active galaxy at redshift z=0.034 which, I believe, corresponds to a distance of something like 700 million light years from Earth.
Once upon a time, there was an exploding source of gamma rays over there that has sent us some gamma-ray signals. It is believed that the source is around 3 light seconds in size. The flare takes around 2 minutes or so. The gamma-ray photons have energy between 100 GeV and 10 TeV or so: you could call it the accelerator energy range.
The collaboration has measured, during at least one flare, a delay of up to four minutes for photons near the upper end of the interval (10 TeV) relative to the gamma rays with lower energies. I think that they should have written a fair paper with the text above and many additional numbers, graphs, and interpolations. For example, I can't understand why they only offer three simple graphs and not the full profiles of the gamma-ray intensity in their four energy bins as a function of time, among other things.
In fact, they have already submitted a similar preprint in February 2007 where the delay is attributed to gradually accelerating electrons that emit the gamma rays. I feel that the new twist included in the August paper is a matter of marketing.
Guess
While some estimates based on the extension of the magnetic field imply that the source is 3 light seconds in size, I would bet that the radius of a source that is able to produce such a dramatic, fast signal, if we include the clump of matter that significantly influences the gamma rays, will be comparable to the length of the signal - up to 2 light minutes.
Now, do you know why the sky is blue? It is because of Rayleigh scattering, an effect whose rate increases with the fourth power of the frequency. Consequently, it is much more likely for high-frequency blue light to reach our eyes from different directions than directly from the Sun, which is why the sky is blue.
Rayleigh scattering occurs when there are particles much smaller than the wavelength of the radiation. That can still be satisfied as long as the source contains gas of TeV-scale dark matter or stuff with an analogous influence. If something like that is true, it could mean that the high-energy gamma rays mostly arrive from boundaries of the source and thus reach us after the lower energy gamma rays that are emitted from the center. I think that this interpretation is consistent with the fact that during one of the flares, the high-energy gamma rays didn't occur at all: the interactions of the high-energy gamma rays with the "halo" were just too strong and the gamma rays were effectively absorbed.
Even if the scattering inside this source or the previously reported gradual acceleration of the emitting electrons were not the reason, it is still possible for the high-energy rays to be delayed because of some dispersion along the path caused by more ordinary types of matter. The Universe is not a vacuum, after all.
Quantum gravity
However, all these comments would be rather boring. So they offer an interpretation based on "quantum gravity" - this term even appears in the title even though I believe that it is extremely unlikely that such an experiment has anything to do with quantum gravity. Given the fact that most of these theoretical Lorentz-violating papers they refer to were co-authored by the theorists in the recent paper, it seems that my opinion won't be that unusual. ;-)
What do they mean by quantum gravity? They mean phenomenological equations that violate the laws of gravity - general relativity - such as local Lorentz invariance. According to these not-quite-justified models, the speed of a photon of energy "E" equals
• v(E) = 1 - (E/M)^n
in units where the low-energy speed of light is one, and where the exponent "n" is taken to be "1" or "2" (for heuristic theoretical reasons, though no one can really falsify either, except for people like me who can rule out both). The speed is always smaller than the speed of light, as you can see, while "M" is the characteristic "quantum gravity" scale. The higher energy you deal with, the more you deviate from the normal speed of light. The gravitational terminology is just about words: "M" is simply a scale that determines these modified dispersion relations.
For their data, an optimal fit gives you "M" near the usual Planck scale for "n=1" while it is around the intermediate scale "10^{11} GeV" for "n=2". The collaboration is not able to say anything about "n", indicating that any interpretation based on a particular simple value of "n" is probably premature.
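To leading order, such a dispersion relation delays a photon of energy E by roughly the light-travel time multiplied by (E/M)^n. The back-of-the-envelope sketch below illustrates the orders of magnitude; the distance and the two scales used are rough illustrative assumptions, not the collaboration's fitted values.

```python
# Leading-order photon delay from the modified dispersion v(E) = 1 - (E/M)^n.
# All numbers are rough assumptions, for orientation only.

SECONDS_PER_YEAR = 3.156e7
# Light-travel time from Markarian 501, assuming ~700 million light years:
travel_time = 700e6 * SECONDS_PER_YEAR  # seconds


def delay(energy_gev, scale_gev, n):
    """Extra arrival delay (in seconds) of a photon of the given energy."""
    return travel_time * (energy_gev / scale_gev) ** n


# A 10 TeV photon, n = 1, M at the Planck scale (~1.22e19 GeV):
print(delay(1e4, 1.22e19, n=1))  # roughly 20 seconds
# The same photon, n = 2, M at an intermediate scale of 1e11 GeV:
print(delay(1e4, 1e11, n=2))     # roughly 220 seconds, i.e. a few minutes
```

With these inputs, both cases land within an order of magnitude of the few-minute delay quoted above; nothing here should be read as a fit.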
Related: Lorentz violation and doubly special relativity (DSR). You should realize that despite their marketing, DSR is neither among the first scenarios to suggest Lorentz violation nor the most convincing one. And DSR doesn't follow from loop quantum gravity!
I feel that the experimenters should have written what they have actually measured instead of selling one particular "sexiest" interpretation, in my opinion a very unlikely one, throughout the paper and in the very title. I think that quantum gravity (and string theory) imply an exact local Lorentz symmetry, and there are many ways to parameterize and describe the effect that they have seen.
More generally, it sounds strange to attribute a delay that is not much longer than the length of the flare to something other than the nature of the source itself. Of course, I would change my mind if they observed a delay that would be way greater than the width of the signals. That would indicate that the delay has something to do with the propagation of the radiation. The current data don't indicate such a conclusion too strongly.
It is probably fair to admit that my "conventional" interpretation of the events predicts that the high-energy gamma rays should be not only delayed but also spread over a longer time interval. As mentioned above, the paper doesn't seem to offer any information that would verify or reject this prediction. Or am I wrong?
If they believe that the effect is due to dispersion in vacuum, it shouldn't be hard for them to double or triple the delay/length-of-flare ratio in further observations of more distant sources, should it? In other words, the slope of the delay they determined is 0.030 ± 0.012 seconds per GeV. At 2.5 sigma, I can still believe that it is zero and that even the local dispersion effects don't exist. I am surely not the only one who finds 2.5-sigma signals unconvincing. If the effect is real, they should easily double the accuracy and get to 5 sigma, right?
Quite generally, I don't think that it is normal for big discoveries to appear in the form of small signals. Why? Because in particle physics or astrophysics, quantities are spread along huge intervals that span many (sometimes dozens) of orders of magnitude. If you pick a random size of the noise or a random size of a signal, their ratio is likely to be much smaller than one or much greater than one.
So you will either see nothing at all or you will see clear evidence of the effect. Seeing weak evidence of an effect is thus unlikely. So far, weak signals have only been a route to discoveries that were fully predicted, cases where the experimental apparatus simply wasn't yet powerful enough and its power was being raised, as before e.g. the top quark was discovered. That's not the case here. For example, I don't see any other good reason why the delay should be almost exactly equal to the length of the flare except for saying that these two timescales are related because the delay has something to do with the source.
Needless to say, the main reason why I am skeptical about these quantum-gravity explanations is that I am convinced that quantum gravity and string theory prohibit any kind of Lorentz violation of this kind. But I might be wrong and Dimitri Nanopoulos et al. could be right, of course. If they really believe that they are right, they must surely be excited by every 2-sigma signal. ;-)
George Musser from Scientific American was the first blogger who responded to the new article. See also comments by Chris Lee at ArsTechnica.
Off-topic: at least 46 papers by Turkish graduate students, usually related to the gr-qc archive, turned out to be plagiarized. Many of them were accepted for publication in dozens of printed journals. The journals and arXiv are clearly flooded with papers that no one cares about, which is why this thing can happen. Via N.E.W.
Incidentally, the nasty crackpot behind N.E.W. is again inventing dirty tricks to abuse every sentiment, every statement of every person, and every piece of data. He doesn't care about the truth at all. I despise trash like him. What I care about are the correct answers to various questions, and I don't think that violations of local Lorentz symmetry are justified rationally, not even in string theory, and I don't think that the non-stringy alternatives deserve a serious discussion. These opinions of mine of course shape my expectations about the correct interpretation of these results. I won't allow scum like P.W. to intimidate me.
https://mycbseguide.com/blog/ncert-solutions-class-10-maths-exercise-5-4/

# NCERT Solutions for Class 10 Maths Exercise 5.4
## NCERT Solutions for Class 10 Maths Arithmetic Progressions
###### 1. Which term of the AP: 121, 117, 113, ….. is its first negative term?
Ans. Given AP: 121, 117, 113, …
Here a = 121 and d = 117 - 121 = -4.
Now, the nth term is
a_n = a + (n - 1)d = 121 + (n - 1)(-4) = 125 - 4n
For the first negative term, a_n < 0:
125 - 4n < 0, i.e. n > 125/4 = 31.25
Since n is an integer, the smallest such n is 32.
Hence, the first negative term is the 32nd term.
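The result can be checked by brute force, walking along the AP until a term goes negative; a minimal sketch:

```python
# Brute-force check: scan the AP 121, 117, 113, ... for the first
# negative term.

def ap_term(a, d, n):
    """n-th term (n >= 1) of an AP with first term a and common difference d."""
    return a + (n - 1) * d

n = 1
while ap_term(121, -4, n) >= 0:
    n += 1
print(n, ap_term(121, -4, n))  # → 32 -3
```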
###### 2. The sum of the third and the seventh terms of an AP is 6 and their product is 8. Find the sum of sixteen terms of the AP.
Ans. Let the AP have first term a and common difference d.
Then a_3 + a_7 = 6 gives (a + 2d) + (a + 6d) = 6, so that
a + 4d = 3 ……….(i)
Also a_3 × a_7 = 8 gives (a + 2d)(a + 6d) = 8. Using (i), this becomes (3 - 2d)(3 + 2d) = 8, i.e. 9 - 4d^2 = 8, so d = ±1/2.
Taking d = 1/2, a = 3 - 4(1/2) = 1:
S_16 = (16/2)[2a + 15d]
= 8[2 + 15/2] = 8 × 19/2 = 76
Taking d = -1/2, a = 3 - 4(-1/2) = 5:
S_16 = 8[10 - 15/2] = 8 × 5/2 = 20
Hence the sum of the first sixteen terms is 20 or 76.
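Both branches of the solution can be verified numerically with the standard AP sum formula; a small sketch:

```python
# Verify both cases of question 2: a + 4d = 3 with d = ±1/2, then the
# sum of the first sixteen terms in each case.

def ap_sum(a, d, n):
    """Sum of the first n terms of an AP."""
    return n * (2 * a + (n - 1) * d) / 2

for d in (0.5, -0.5):
    a = 3 - 4 * d
    assert (a + 2 * d) + (a + 6 * d) == 6  # sum of 3rd and 7th terms
    assert (a + 2 * d) * (a + 6 * d) == 8  # their product
    print(ap_sum(a, d, 16))  # → 76.0 then 20.0
```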
###### 3. A ladder has rungs 25 cm apart (see figure). The rungs decrease uniformly in length from 45 cm at the bottom to 25 cm at the top. If the top and the bottom rungs are 2½ m apart, what is the length of the wood required for the rungs?
Ans. The top and bottom rungs are 2½ m = 250 cm apart, with consecutive rungs 25 cm apart, so
Number of rungs = 250/25 + 1 = 11
The rung lengths decrease uniformly from 45 cm to 25 cm, so they form an AP with first term 45, last term 25, and 11 terms.
Length of wood required for the rungs = (11/2)(45 + 25) = 11 × 35 = 385 cm
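A quick numerical check; note the fencepost count, since a 250 cm span with rungs every 25 cm holds 250/25 + 1 = 11 rungs, not 10:

```python
# Rungs every 25 cm over a 250 cm span, lengths an AP from 45 cm to 25 cm.
num_rungs = 250 // 25 + 1
total_length = num_rungs * (45 + 25) // 2  # AP sum: n/2 * (first + last)
print(num_rungs, total_length)  # → 11 385
```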
###### 4. The houses of a row are numbered consecutively from 1 to 49. Show that there is a value of x such that the sum of the numbers of the houses preceding the house numbered x is equal to the sum of the numbers of the houses following it. Find this value of x.
Ans. The house numbers 1, 2, 3, …, 49 form an AP with a = 1 and d = 1.
Sum of the numbers preceding house x:
S_{x-1} = (x - 1)x/2
Sum of the numbers following house x:
S_49 - S_x = (49 × 50)/2 - x(x + 1)/2 = 1225 - x(x + 1)/2
According to the question,
(x - 1)x/2 = 1225 - x(x + 1)/2
x^2 - x + x^2 + x = 2450
x^2 = 1225 = 49 × 25
x = ±35
Since x is a counting number, the negative value is neglected. Hence x = 35.
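The balancing house number can also be found by brute force over all 49 houses:

```python
# Find every x in 1..49 with the sum of house numbers before x equal
# to the sum of house numbers after it.
matches = [x for x in range(1, 50)
           if sum(range(1, x)) == sum(range(x + 1, 50))]
print(matches)  # → [35]
```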
###### 5. A small terrace at a football ground comprises 15 steps, each of which is 50 m long and built of solid concrete.
Each step has a rise of 1/4 m and a tread of 1/2 m (see figure). Calculate the total volume of concrete required to build the terrace.
Ans. Each step is 50 m long and 1/2 m deep. The first step is 1/4 m high, the second 2/4 m, the third 3/4 m, and so on up to the fifteenth.
Volume of the kth step = 50 × 1/2 × k/4 = 25k/4 m³
Total volume of concrete required = (25/4)(1 + 2 + … + 15)
= (25/4) × (15 × 16)/2
= (25/4) × 120
= 750 m³
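A one-line check of the arithmetic, taking the rise as 1/4 m and the tread as 1/2 m from the figure:

```python
# Each step is a cuboid: 50 m long, 1/2 m deep (tread), k/4 m high (rise).
volume = sum(50 * 0.5 * (k / 4) for k in range(1, 16))
print(volume)  # → 750.0
```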
https://kerodon.net/tag/03PV

# Kerodon
### 5.4.1 Ordinals and Well-Orderings
In this section, we review some standard facts about ordinals and well-ordered sets.
Definition 5.4.1.1. Let $(S, \leq )$ be a partially ordered set. We say that $(S,\leq )$ is well-founded if every nonempty subset $S_0 \subseteq S$ contains a minimal element: that is, an element $s \in S_0$ for which the set $\{ t \in S_0: t < s \}$ is empty.
Exercise 5.4.1.2. Let $(S, \leq )$ be a partially ordered set. Show that the following conditions are equivalent:
$(1)$
The partial order $\leq$ is well-founded: that is, every nonempty subset of $S$ contains a minimal element.
$(2)$
The set $S$ does not contain an infinite descending sequence $s_0 > s_1 > s_2 > \cdots$.
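For a finite partially ordered set, condition $(1)$ can be checked mechanically by examining every nonempty subset. A small illustrative sketch; the representation of the poset by an element list and a strict-order predicate is an assumption of this example:

```python
# A finite poset is given here by its elements and a strict-order
# predicate lt(s, t) meaning "s < t" (representation chosen for this example).
from itertools import chain, combinations

def has_minimal_element(subset, lt):
    """True if some s in subset has no t in subset with t < s."""
    return any(all(not lt(t, s) for t in subset) for s in subset)

def is_well_founded(elements, lt):
    """Check that every nonempty subset contains a minimal element."""
    subsets = chain.from_iterable(
        combinations(elements, r) for r in range(1, len(elements) + 1))
    return all(has_minimal_element(sub, lt) for sub in subsets)

# Proper divisibility on {1, 2, 3, 6} is a finite partial order, hence
# well-founded (compare Example 5.4.1.3):
print(is_well_founded([1, 2, 3, 6], lambda s, t: s != t and t % s == 0))  # → True
```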
Example 5.4.1.3. Every finite partially ordered set $(S, \leq )$ is well-founded.
Example 5.4.1.4. Let $S$ be any set, and let $\leq$ be the discrete partial ordering of $S$: that is, we have $s \leq t$ if and only if $s = t$. Then $(S, \leq )$ is well-founded.
Remark 5.4.1.5. Let $(S, \leq )$ be a well-founded partially ordered set. Then every subset $S_0 \subseteq S$ is also well-founded (when endowed with the partial order given by the restriction of $\leq$).
Definition 5.4.1.6. Let $(S, \leq )$ be a linearly ordered set. We say that $(S, \leq )$ is well-ordered if it is well-founded when regarded as a partially ordered set: that is, if every nonempty subset $S_0 \subseteq S$ contains a smallest element. In this case, we will refer to the relation $\leq$ as a well-ordering of the set $S$.
Definition 5.4.1.7 (Ordinals). An ordinal is an isomorphism class of well-ordered sets. If $(S, \leq )$ is a well-ordered set, then its isomorphism class is an ordinal which we will refer to as the order type of $S$.
Notation 5.4.1.8. We will typically use lower-case Greek letters to denote ordinals.
Example 5.4.1.9 (Finite Ordinals). Let $n$ be a nonnegative integer. Up to isomorphism, there is a unique linearly ordered set $S$ having exactly $n$ elements, which we can identify with the set $\{ 0 < 1 < \cdots < n-1 \}$. We will abuse notation by identifying $n$ with the order type of the linearly ordered set $S$. By means of this convention, we can view every nonnegative integer as an ordinal. We say that an ordinal $\alpha$ is finite if it arises in this way (that is, if it is the order type of a finite linearly ordered set), and infinite if it does not.
Example 5.4.1.10. The set of nonnegative integers $\operatorname{\mathbf{Z}}_{\geq 0} = \{ 0 < 1 < 2 < \cdots \}$ is well-ordered (with respect to its usual ordering). Its order type is an infinite ordinal, which we denote by $\omega$.
By definition, well-ordered sets $(S, \leq )$ and $(T, \leq )$ have the same order type if there is an order-preserving bijection $f: S \xrightarrow {\sim } T$. We will show in a moment that in this case, the bijection $f$ is uniquely determined (Corollary 5.4.1.16). First, let us introduce a bit of additional terminology.
Definition 5.4.1.11. Let $(S, \leq )$ be a linearly ordered set. We say that a subset $S_0 \subseteq S$ is an initial segment if it is closed downwards: that is, for every pair of elements $s \leq s'$ of $S$, if $s'$ is contained in $S_0$, then $s$ is also contained in $S_0$. If $(T, \leq )$ is another linearly ordered set, we say that a function $f: S \hookrightarrow T$ is an initial segment embedding if it is an isomorphism (of linearly ordered sets) from $S$ to an initial segment of $T$.
Example 5.4.1.12. Let $(S, \leq )$ be a linearly ordered set. Then the identity morphism $\operatorname{id}_{S}: S \xrightarrow {\sim } S$ is an initial segment embedding.
Remark 5.4.1.13 (Transitivity). Let $(R, \leq )$, $(S, \leq )$, and $(T, \leq )$ be linearly ordered sets. Suppose that $f: R \hookrightarrow S$ and $g: S \hookrightarrow T$ are initial segment embeddings. Then the composition $(g \circ f): R \hookrightarrow T$ is also an initial segment embedding.
Proposition 5.4.1.14. Let $(S, \leq )$ and $(T, \leq )$ be linearly ordered sets, and let $f,f': S \hookrightarrow T$ be strictly increasing functions. Suppose that $S$ is well-ordered and that $f$ is an initial segment embedding. Then, for each $s \in S$, we have $f(s) \leq f'(s)$.
Proof. Set $S_0 = \{ s \in S: f'(s) < f(s) \}$. We wish to show that $S_0$ is empty. Assume otherwise. Since $S$ is well-ordered, there is a least element $s \in S_0$. Since $f$ is an initial segment embedding, the inequality $f'(s) < f(s)$ implies that we can write $f'(s) = f(t)$ for some $t < s$. Then $t \notin S_0$, so we must have $f(t) \leq f'(t)$. It follows that $f'(s) \leq f'(t)$, contradicting our assumption that the function $f'$ is strictly increasing. $\square$
Corollary 5.4.1.15 (Rigidity). Let $(S, \leq )$ and $(T, \leq )$ be linearly ordered sets, and let $f,f': S \hookrightarrow T$ be initial segment embeddings. If $S$ is well-ordered, then $f = f'$.
Corollary 5.4.1.16. Let $(S, \leq )$ and $(T, \leq )$ be well-ordered sets. If there exists an order-preserving bijection $f: S \xrightarrow {\sim } T$, then $f$ is unique.
Corollary 5.4.1.17. Let $(S,\leq )$ and $(T, \leq )$ be well-ordered sets. Then one of the following conditions is satisfied:
$(1)$
There exists an initial segment embedding $f: S \hookrightarrow T$.
$(2)$
There exists an initial segment embedding $g: T \hookrightarrow S$.
Proof. For each element $s \in S$, let $S_{\leq s}$ denote the initial segment $\{ s' \in S: s' \leq s \}$. Let $S_0 \subseteq S$ denote the collection of elements $s \in S$ for which there exists an initial segment embedding $f_{\leq s}: S_{\leq s} \hookrightarrow T$. Note that, if this condition is satisfied, then the morphism $f_{\leq s}$ is uniquely determined (Corollary 5.4.1.15). Moreover, if $s' \leq s$, then the composite map $S_{\leq s'} \subseteq S_{\leq s} \xrightarrow { f_{\leq s} } T$ is also an initial segment embedding; it follows that $s'$ belongs to $S_0$, and $f_{\leq s'}$ is the restriction of $f_{\leq s}$ to $S_{\leq s'}$. Consequently, the construction $s \mapsto f_{\leq s}(s)$ determines a function $f: S_0 \rightarrow T$, which is an isomorphism of $S_0$ with an initial segment $T_0 \subseteq T$. If $S_0 = S$, then $f$ is an initial segment embedding from $S$ to $T$. If $T_0 = T$, then $g = f^{-1}$ is an initial segment embedding from $T$ to $S$. Assume that neither of these conditions is satisfied: that is, the sets $S \setminus S_0$ and $T \setminus T_0$ are both nonempty. Let $s$ be a least element of $S \setminus S_0$, and let $t$ be a least element of $T \setminus T_0$. Then $f$ extends uniquely to an initial segment embedding
$f_{\leq s}: S_{\leq s} = S_0 \cup \{ s\} \xrightarrow {\sim } T_0 \cup \{ t\} \subseteq T \quad \quad s \mapsto t.$
The existence of $f_{\leq s}$ shows that $s$ belongs to $S_0$, which is a contradiction. $\square$
Remark 5.4.1.18. In the situation of Corollary 5.4.1.17, suppose that conditions $(1)$ and $(2)$ are both satisfied: that is, there exist initial segment embeddings $f: S \hookrightarrow T$ and $g: T \hookrightarrow S$. Then $g \circ f$ is an initial segment embedding of $S$ into itself, and therefore coincides with $\operatorname{id}_{S}$ (Corollary 5.4.1.16). The same argument shows that $f \circ g = \operatorname{id}_{T}$, so that $f$ and $g$ are mutually inverse bijections. In particular, $S$ and $T$ have the same order type.
Definition 5.4.1.19. Let $\alpha$ and $\beta$ be ordinals, given by the order types of well-ordered sets $(S, \leq )$ and $(T, \leq )$. We write $\alpha \leq \beta$ if there exists an initial segment embedding from $(S, \leq )$ to $(T, \leq )$ (note that this condition depends only on the order types of $S$ and $T$).
Proposition 5.4.1.20. The relation $\leq$ of Definition 5.4.1.19 is a linear ordering on the collection of ordinals.
Proof. The reflexivity of the relation $\leq$ follows from Example 5.4.1.12, and the transitivity follows from Remark 5.4.1.13. Let $\alpha$ and $\beta$ be ordinals, which we identify with the order types of well-ordered sets $(S, \leq )$ and $(T, \leq )$, respectively. Invoking Corollary 5.4.1.17, we deduce that $\alpha \leq \beta$ or $\beta \leq \alpha$. Moreover, if both conditions are satisfied, then Remark 5.4.1.18 shows that $\alpha = \beta$. $\square$
Remark 5.4.1.21. Let $(S,\leq )$ and $(T, \leq )$ be well-ordered sets. The following conditions are equivalent:
$(1)$
There exists an initial segment embedding $f: S \hookrightarrow T$.
$(2)$
There exists a strictly increasing function $f: S \hookrightarrow T$.
The implication $(1) \Rightarrow (2)$ is immediate from the definitions. To prove the converse, let $f: S \hookrightarrow T$ be a strictly increasing function, and suppose that there is no initial segment embedding from $S$ to $T$. Invoking Corollary 5.4.1.17, we deduce that there is an initial segment embedding $g: T \hookrightarrow S$. The composition $(g \circ f): S \hookrightarrow S$ is strictly increasing, and therefore satisfies $(g \circ f)(s) \geq s$ for each $s \in S$ (Proposition 5.4.1.14). Since the image of $g$ is an initial segment $S_0 \subseteq S$, it follows that $S_0 = S$, so that $g^{-1}: S \xrightarrow {\sim } T$ is an isomorphism of linearly ordered sets.
We now show that, for every ordinal $\alpha$, there is a preferred candidate for a well-ordered set of order type $\alpha$: namely, the collection $\mathrm{Ord}_{< \alpha }$ of ordinals smaller than $\alpha$.
Proposition 5.4.1.22. Let $(S, \leq )$ be a well-ordered set, and let $\alpha$ denote its order type. Then there is a unique order-preserving bijection $S \rightarrow \mathrm{Ord}_{< \alpha }$, which carries each element $s \in S$ to the order type of the well-ordered set $S_{< s} = \{ s' \in S: s' < s \}$.
Proof. We will prove existence; uniqueness then follows from Corollary 5.4.1.16. For each $s \in S$, let $\alpha _{s}$ denote the order type of the set $S_{< s}$ (which is well-ordered, by virtue of Remark 5.4.1.5). Note that, since there is an initial segment embedding $S_{< s} \hookrightarrow S$ which is not bijective, we must have $\alpha _{s} < \alpha$ (Remark 5.4.1.18). Consequently, the construction $s \mapsto \alpha _{s}$ determines a function $S \rightarrow \mathrm{Ord}_{< \alpha }$. If $s < t$ in $S$, then there is an initial segment embedding from $S_{< s}$ to $S_{< t}$ which is not bijective, so that $\alpha _{s} < \alpha _{t}$ (again by Remark 5.4.1.18). To complete the proof, it will suffice to show that the function $s \mapsto \alpha _{s}$ is surjective. Let $\beta$ be an ordinal which is strictly smaller than $\alpha$. Then $\beta$ is the order type of some initial segment $S_0 \subsetneq S$. Since $S$ is well-ordered, the set $S \setminus S_0$ has a smallest element $s$. It follows that $S_0 = S_{< s}$, so that $\beta = \alpha _{s}$. $\square$
Corollary 5.4.1.23. For every ordinal $\alpha$, $\mathrm{Ord}_{< \alpha }$ is a well-ordered set of order type $\alpha$.
Corollary 5.4.1.24. Let $S$ be any nonempty collection of ordinals. Then $S$ has a least element.
Proof. Choose an ordinal $\alpha \in S$. If $\alpha$ is a least element of $S$, then we are done. Otherwise, we can replace $S$ by the nonempty subset $S_{< \alpha } = \{ \beta \in S: \beta < \alpha \}$. Note that $S_{< \alpha }$ is a nonempty subset $\mathrm{Ord}_{< \alpha }$, and therefore has a smallest element by virtue of Corollary 5.4.1.23. $\square$
Warning 5.4.1.25 (The Burali-Forti Paradox). One can informally summarize Corollary 5.4.1.24 by saying that the collection $\mathrm{Ord}$ of all ordinals is well-ordered (with respect to the order relation of Definition 5.4.1.19). Beware that one must treat this statement with some care to avoid paradoxes. The proof of Proposition 5.4.1.22 shows that the order type of $\mathrm{Ord}$ is strictly larger than $\alpha$, for each ordinal $\alpha \in \mathrm{Ord}$. This paradox has a standard remedy: we regard the collection $\mathrm{Ord}$ as “too large” to form a set (so that its order type is not regarded as an ordinal).
Definition 5.4.1.26. Let $(S, \leq )$ and $(T, \leq )$ be linearly ordered sets. We say that a function $f: S \rightarrow T$ is cofinal if it is nondecreasing and, for every element $t \in T$, there exists an element $s \in S$ satisfying $f(s) \geq t$.
Proposition 5.4.1.27. Let $(T, \leq )$ be a linearly ordered set. There exists a well-ordered subset $S \subseteq T$ for which the inclusion map $S \hookrightarrow T$ is cofinal.
Proof. Let $\{ S_ q \} _{q \in Q}$ be the collection of all well-ordered subsets of $T$. We regard $Q$ as a partially ordered set, where $q \leq q'$ if the set $S_{q}$ is an initial segment of $S_{q'}$. This partial ordering satisfies the hypotheses of Zorn's lemma, and therefore contains a maximal element $S_{\mathrm{max}}$. To complete the proof, it will suffice to show that the inclusion $S_{ \mathrm{max}} \hookrightarrow T$ is cofinal. Assume otherwise: then there exists an element $t \in T$ satisfying $s < t$ for each $s \in S_{\mathrm{max}}$. Then $S_{\mathrm{max}}$ is an initial segment of the well-ordered subset $S_{\mathrm{max}} \cup \{ t \} \subseteq T$, contradicting the maximality of $S_{\mathrm{max}}$. $\square$
Definition 5.4.1.28 (Cofinality). Let $(T, \leq )$ be a linearly ordered set. We let $\mathrm{cf}(T)$ denote the smallest ordinal $\alpha$ for which there exists a well-ordered set $(S, \leq )$ of order type $\alpha$ and a cofinal function $f: S \rightarrow T$. We refer to $\mathrm{cf}(T)$ as the cofinality of the linearly ordered set $T$.
If $\beta$ is an ordinal, let $\mathrm{cf}(\beta )$ denote the cofinality $\mathrm{cf}(T)$, where $(T, \leq )$ is any well-ordered set of order type $\beta$. We refer to $\mathrm{cf}(\beta )$ as the cofinality of $\beta$.
Remark 5.4.1.29. For any linearly ordered set $(T, \leq )$, the identity map $\operatorname{id}: T \rightarrow T$ is cofinal. Consequently, if $T$ is a well-ordered set of order type $\alpha$, then we have $\mathrm{cf}(\alpha ) = \mathrm{cf}(T) \leq \alpha$. Beware that the inequality is often strict.
Example 5.4.1.30. Let $(T, \leq )$ be a linearly ordered set. Then $\mathrm{cf}(T) = 0$ if and only if $T$ is empty.
Example 5.4.1.31. Let $(T, \leq )$ be a nonempty linearly ordered set. The following conditions are equivalent:
• The cofinality $\mathrm{cf}(T)$ is a positive integer.
• The cofinality $\mathrm{cf}(T)$ is equal to $1$.
• The linearly ordered set $T$ contains a largest element.
Example 5.4.1.32. Let $(T, \leq )$ be a linearly ordered set. Then the cofinality $\mathrm{cf}(T)$ is equal to $\omega$ if and only if $T$ contains an unbounded increasing sequence $\{ t_0 < t_1 < t_2 < \cdots \}$.
Proposition 5.4.1.33. Let $(T, \leq )$ be a linearly ordered set. Then the cofinality $\mathrm{cf}(T)$ is the smallest ordinal with the following property:
$(\ast )$
There exists a well-ordered set $(S, \leq )$ of order type $\alpha$ and a function $f: S \rightarrow T$ which is unbounded (that is, every element $t \in T$ satisfies $t \leq f(s)$ for some $s \in S$). Here we do not require $f$ to be nondecreasing.
Proof. It is clear that the cofinality $\mathrm{cf}(T)$ satisfies condition $(\ast )$. For the converse, assume that $(S, \leq )$ is a well-ordered set of order type $\alpha$ and that $f: S \rightarrow T$ is an unbounded function. Let us say that an element $s \in S$ is good if, for every element $s' < s$ of $S$, we have $f(s') < f(s)$. Let $S_0$ be the collection of good elements of $S$, and set $f_0 = f|_{ S_0 }$. By construction, the function $f_0$ is strictly increasing. Moreover, the order type of $S_0$ is $\leq \alpha$ (Remark 5.4.1.21). To complete the proof, it will suffice to show that $f_0: S_0 \hookrightarrow T$ is cofinal. Fix an element $t \in T$, and set $S_{\geq t} = \{ s \in S: t \leq f(s) \}$. We wish to show that the intersection $S_{\geq t} \cap S_0$ is nonempty. We first observe that $S_{\geq t}$ is nonempty (by virtue of our assumption that $f$ is unbounded). Since $(S, \leq )$ is well-ordered, the set $S_{\geq t}$ contains a least element $s$. We claim that $s$ belongs to $S_0$. Assume otherwise: then there exists some $s' < s$ satisfying $f(s') \geq f(s)$. It follows that $s'$ belongs to $S_{\geq t}$, contradicting the minimality of $s$. $\square$
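The "good" elements of the proof are simply the running records of $f$: those $s$ whose value strictly exceeds $f(s')$ for every earlier $s'$. A minimal sketch of this construction for a function indexed by a well-ordered set of order type at most $\omega$, i.e. a sequence:

```python
# Restricting f to its running records gives the strictly increasing
# map f_0 used in the proof.

def good_elements(values):
    """Indices s where values[s] is strictly larger than every earlier value."""
    records, best = [], None
    for s, v in enumerate(values):
        if best is None or v > best:
            records.append(s)
            best = v
    return records

f = [3, 1, 4, 1, 5, 9, 2, 6]
print(good_elements(f))                  # → [0, 2, 4, 5]
print([f[s] for s in good_elements(f)])  # → [3, 4, 5, 9]
```

The restricted values are strictly increasing and reach at least as high as any value of the original sequence, mirroring the cofinality argument above.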
We conclude this section by observing that well-orderings exist in abundance.
Theorem 5.4.1.34. Let $S$ be a set. Then there exists a well-ordering of $S$.
By virtue of Example 5.4.1.4, Theorem 5.4.1.34 is a special case of the following more refined result:
Proposition 5.4.1.35. Let $(S, \preceq )$ be a well-founded partially ordered set. Then there exists a well-ordering $\leq$ on $S$ which refines $\preceq$ in the following sense: for every pair of elements $s,t \in S$ satisfying $s \preceq t$, we also have $s \leq t$.
Proof. Let $Q$ denote the set of ordered pairs $(T, \leq _{T})$, where $T$ is a subset of $S$ which is closed downward with respect to $\preceq$ and $\leq _{T}$ is a well-ordering of $T$ which refines $\preceq$. We regard $Q$ as a partially ordered set, where $(T, \leq _{T} ) \leq (T', \leq _{T'} )$ if $T$ is an initial segment of $T'$ (with respect to the ordering $\leq _{T'}$), and the ordering $\leq _{T}$ coincides with the restriction of $\leq _{T'}$. The partially ordered set $Q$ satisfies the hypotheses of Zorn's lemma, and therefore contains a maximal element $( T_{\mathrm{max}}, \leq _{ T_{\mathrm{max}} } )$. To complete the proof, it will suffice to show that $T_{\mathrm{max}} = S$. Suppose otherwise. Then the set $S \setminus T_{\mathrm{max}}$ is nonempty, and therefore contains an element $s$ which is minimal with respect to the ordering $\preceq$. Set $T' = T_{\mathrm{max}} \cup \{ s\}$, and extend $\leq _{ T_{\mathrm{max}} }$ to a linear ordering $\leq _{T'}$ of $T'$ by declaring $s$ to be a largest element. Then $(T', \leq _{T'} )$ is an element of $Q$, contradicting the maximality of the pair $( T_{\mathrm{max}}, \leq _{ T_{\mathrm{max}} } )$. 
$\square$
https://www.physicsforums.com/threads/spin-polarization-and-momentum-in-particle-decay.216488/

# Spin Polarization and Momentum in Particle Decay
1. Feb 19, 2008
### FortranMan
In an elementary particle decay, such as the decay of a positive pion into a positive muon and a muon neutrino, are the spin polarizations of either product always parallel (or anti-parallel) to their momentum? If so why?
2. Feb 19, 2008
### pam
The muon neutrino is so close to massless that its spin must be aligned with its momentum (helicity=+1/2). Since the pi is spinless, the muon must also have +1/2 helicity. For other decays, the polarization may not be 100%.
The momentum direction makes a good z axis to describe polarization because then L_z=0.
3. Feb 20, 2008
### Barmecides
Hi,
I think neutrinos are Left-handed so helicity = -1/2 ?
4. Feb 20, 2008
### pam
You are right about neutrinos, but a positive muon is an anti-lepton, so the "neutrino" in this case is really a right-handed anti-neutrino.
5. Feb 20, 2008
### kdv
I thought that if the muon emitted is an antilepton, the muon neutrino had to be a neutrino, not an antineutrino. no?
6. Feb 21, 2008
### pam
I am greatly embarrassed by making the same silly mistake twice, and apologize to all.
I must have been sleepthinking. Thank you Barmecides and kdv for your corrections.
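With the thread's corrections in place (the particle accompanying the $\mu^+$ is a genuine left-handed $\nu_\mu$), the polarization follows from angular-momentum conservation in the pion rest frame; a sketch of the bookkeeping (our notation, taking the neutrino momentum along $+z$):

```latex
% pi+ -> mu+ + nu_mu: the pion has spin 0, the products fly back to back.
\begin{aligned}
S_z^{(\nu_\mu)} + S_z^{(\mu^+)} &= 0
  && \text{(total $J_z = 0$ along the decay axis)} \\
h_{\nu_\mu} = -\tfrac{1}{2},\ \hat p_{\nu} = +\hat z
  \;&\Rightarrow\; S_z^{(\nu_\mu)} = -\tfrac{1}{2},\quad
     S_z^{(\mu^+)} = +\tfrac{1}{2} \\
\hat p_{\mu} = -\hat z
  \;&\Rightarrow\; h_{\mu^+} = S_z^{(\mu^+)}\,(\hat p_\mu \cdot \hat z)
     = -\tfrac{1}{2}.
\end{aligned}
```

So in the pion rest frame both decay products have helicity $-\tfrac12$: each spin is antiparallel to its momentum, which is the sense in which the polarizations are locked to the momenta.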
https://www.physicsforums.com/threads/another-measurement-of-speed-of-light.244168/

# Another Measurement of Speed of Light
1. Jul 9, 2008
### clairez93
1. The problem statement, all variables and given/known data
In an experiment to measure the speed of light using the apparatus of Fizeau, the distance between light source and mirror was 11.45 km and the wheel had 720 notches. The experimentally determined value of c was 2.889 x 10^8 m/s. Calculate the minimum angular speed of the wheel for this experiment.
2. Relevant equations
c = 2d/t
t = theta/w
3. The attempt at a solution
2.998 x 10^8 m/s = 2(11450m) / t
t = 7.638 x 10^-5 s
7.368 x 10^-5 = (1/440 rev) / w
w = 9.091 rev/s
9.091 * 2pi = 57.12 rad/s
Book answer: 114 rad/s
2. Jul 9, 2008
### clairez93
Sorry about the earlier empty message; I hit enter too early. I am not sure what mistake I have made in my calculations, and if someone could look it over and help me, it would be greatly appreciated.
3. Jul 9, 2008
### Tom Mattson
Staff Emeritus
Why is $\theta$ equal to 1/1440 rev? (I think that's what you meant.) If there are 720 notches, then $\theta=1/720$ rev. That will give you the right answer.
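The corrected arithmetic can be sketched numerically (variable names ours):

```python
import math

# Numeric check of the correction above: with 720 notches, the wheel must
# advance 1/720 of a revolution (notch to notch) during the light's round trip.
c = 2.998e8      # speed of light used in the attempt above, m/s
d = 11_450.0     # source-to-mirror distance, m
notches = 720

t = 2 * d / c                     # round-trip light travel time, s
theta = 1.0 / notches             # revolutions turned during the round trip
omega = (theta / t) * 2 * math.pi # minimum angular speed, rad/s
print(round(omega))               # 114, matching the book answer
```

Using 1/1440 rev instead halves this, reproducing the 57.12 rad/s obtained in the attempt.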
https://pennstate.pure.elsevier.com/en/publications/the-edge-cover-probability-polynomial-of-a-graph-and-optimal-netw

# The Edge Cover Probability Polynomial of a Graph and Optimal Network Construction
Research output: Contribution to journal › Article
### Abstract
Given a uniform probability $\rho$, $0 < \rho < 1$, of selecting edges independently from a graph $G$, we define the edge cover probability polynomial $Ep(G, \rho)$ of $G$ to be the probability of randomly selecting an edge cover of $G$. We provide general, and in some cases specific, formulas for obtaining $Ep(G, \rho)$. We then demonstrate the existence of graphs which have either the largest or the smallest $Ep(G, \rho)$ within their class for all $\rho$. The classes we consider are trees, unicyclic graphs, and connected graphs having one more edge than the number of vertices. Thus we determine the optimal constructions with respect to edge covers within the context of these classes.
Original language: English (US)
Journal: IEEE Transactions on Network Science and Engineering
DOI: https://doi.org/10.1109/TNSE.2018.2820062
State: Accepted/In press - Mar 26, 2018
### All Science Journal Classification (ASJC) codes
• Control and Systems Engineering
• Computer Science Applications
• Computer Networks and Communications
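The definition lends itself to a brute-force check on tiny graphs; a sketch (names and the $K_3$ example are ours, not from the paper), exponential in the number of edges:

```python
from itertools import combinations

def edge_cover_probability(vertices, edges, rho):
    """Brute-force Ep(G, rho): the probability that selecting each edge
    independently with probability rho yields an edge cover of G
    (every vertex incident to at least one selected edge).
    Exponential in len(edges); suitable for tiny graphs only."""
    total = 0.0
    m = len(edges)
    for k in range(m + 1):
        for chosen in combinations(edges, k):
            covered = {v for e in chosen for v in e}
            if covered == set(vertices):
                total += rho**k * (1 - rho)**(m - k)
    return total

# Triangle K3: the edge covers are any 2 of the 3 edges, or all 3,
# so Ep(K3, rho) = 3 rho^2 (1 - rho) + rho^3.
p = edge_cover_probability([1, 2, 3], [(1, 2), (2, 3), (1, 3)], 0.5)
print(p)   # 0.5
```

For $K_3$ at $\rho = \tfrac12$ this gives $3 \cdot \tfrac14 \cdot \tfrac12 + \tfrac18 = \tfrac12$, matching the closed form.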
https://brilliant.org/problems/first-find-point-of-contact/

First find point of contact!
Calculus Level 5
A curve is parametrically represented as
$\begin{cases} x = \cos t+\ln \left(\tan\frac{t}{2}\right) \\ y = \sin t, \end{cases}$
where $t$ is a parameter.
Find the length of tangent to the curve at the point where its $x$-coordinate is equal to its $y$-coordinate.
The length of tangent is defined as the distance between the point of contact with the curve and the point where the tangent meets the $x$-axis.
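A quick numerical check (central-difference slope; the sample parameter values are arbitrary) shows the tangent length is constant along the curve, so the answer is the same at the point whose coordinates are equal:

```python
import math

def tangent_length(t, h=1e-6):
    """Length of the tangent segment from the point at parameter t to the
    x-axis, for x = cos t + ln(tan(t/2)), y = sin t (0 < t < pi, t != pi/2)."""
    x = lambda u: math.cos(u) + math.log(math.tan(u / 2))
    y0 = math.sin(t)
    # dy/dx as a ratio of central differences in the parameter
    slope = (math.sin(t + h) - math.sin(t - h)) / (x(t + h) - x(t - h))
    # The tangent meets y = 0 at a horizontal offset of y0/slope from the point.
    return math.hypot(y0 / slope, y0)

for t in (0.5, 1.0, 2.0):
    print(round(tangent_length(t), 6))   # 1.0 every time: the curve is a tractrix
```

Analytically, $dy/dx = \tan t$ and $y = \sin t$, so the length is $\sin t \cdot \sqrt{1 + \cot^2 t} = 1$ for every $t$.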
http://math.stackexchange.com/questions/243325/a-group-with-three-maximal-abelian-subgroups

# A group with three maximal abelian subgroups
I am looking for a group that has exactly three maximal abelian subgroups.
I thought about the quaternion group. $G=Q_8 = \langle x,y \mid x^4=1, x^2=y^2, yxy^{-1}=x^{-1}\rangle$.
$Z(G) = \mathbb{Z}/2$ and one of the abelian subgroups is $\langle x\rangle = \{e,x,x^2,x^3\}$.
But I have problems to find the other ones. I don't really know how to construct them.
Why have you thought about the quaternion group? Do you have some reason to believe it will work? – Chris Eagle Nov 23 '12 at 19:07
Well first I thought it might be some nice non-abelian small group, like $D_{4}$ or $Q_{8}$, and then, to be honest, I looked on the internet for what is said about these groups, and I found an article where they said $Q_{8}$ had 3 abelian subgroups. So I am looking for them ;) – Kathrin Nov 23 '12 at 19:24
Well there must be a maximal subgroup containing $y$? Can you think of one. And there must be one containing $xy$. – Derek Holt Nov 23 '12 at 20:09
It might be easier to think of the quaternion group as $Q = \langle 1,i,j,k \mid i^2=j^2=k^2=-1,\ ijk=-1\rangle$. It is not very hard to find all proper subgroups of $Q$. Note that any subset containing two elements out of $\{i,j,k\}$ immediately generates the entire group, since multiplication of these elements gives the third element and the square of either element gives $-1$, which commutes with all elements. Hence this subset generates $Q$. That leaves that any proper subgroup can contain either no elements or one element from $\{i,j,k\}$. If it contains no elements, the only subgroups are the trivial subgroup and $\{1,-1\}$. The other possibility gives you the subgroups generated by one element from the set, i.e. $\langle i\rangle$, $\langle j\rangle$ and $\langle k\rangle$. Check that these are abelian, and note that the subgroup $\{1,-1\}$ is contained in each of them, but none of them is contained in either of the other two. Hence they are maximal abelian subgroups and the only ones contained in $Q$.
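The case analysis above is small enough to verify exhaustively; a brute-force sketch (representation and names our own) enumerating the subgroups of $Q_8$ and keeping the inclusion-maximal abelian ones:

```python
from itertools import combinations

# Quaternion units as (sign, basis) with basis in "1ijk".
MUL = {
    ("1","1"):(1,"1"), ("1","i"):(1,"i"), ("1","j"):(1,"j"), ("1","k"):(1,"k"),
    ("i","1"):(1,"i"), ("i","i"):(-1,"1"), ("i","j"):(1,"k"), ("i","k"):(-1,"j"),
    ("j","1"):(1,"j"), ("j","i"):(-1,"k"), ("j","j"):(-1,"1"), ("j","k"):(1,"i"),
    ("k","1"):(1,"k"), ("k","i"):(1,"j"), ("k","j"):(-1,"i"), ("k","k"):(-1,"1"),
}

def mul(x, y):
    s, b = MUL[(x[1], y[1])]
    return (x[0] * y[0] * s, b)

Q8 = [(s, b) for s in (1, -1) for b in "1ijk"]

def generated(gens):
    """Subgroup generated by gens: close under multiplication."""
    elems = {(1, "1")} | set(gens)
    while True:
        new = {mul(a, b) for a in elems for b in elems} - elems
        if not new:
            return frozenset(elems)
        elems |= new

def is_abelian(H):
    return all(mul(a, b) == mul(b, a) for a in H for b in H)

# Every subgroup of Q8 is generated by at most two elements, so closing the
# one- and two-element generating sets finds them all.
subgroups = {generated(gs) for r in (1, 2) for gs in combinations(Q8, r)}
abelian = [H for H in subgroups if is_abelian(H)]
maximal = [H for H in abelian if not any(H < K for K in abelian)]

print(len(maximal))                       # 3
print(sorted(len(H) for H in maximal))    # [4, 4, 4]
```

The three survivors are exactly $\langle i\rangle$, $\langle j\rangle$, $\langle k\rangle$, each of order 4, as argued above.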
https://chemistry.stackexchange.com/questions/30745/is-there-more-than-one-equation-for-delta-treaction

# Is there more than one equation for Delta T(reaction)?
So, I have to find the $\Delta T_{\mathrm{reaction}}$ of a solution for my lab report. Earlier in the lab, it gave the equation as
$\Delta T_{\mathrm{reaction}}=T_{\mathrm{mixture}}+\frac{1}{2}(T_{\mathrm{substance~1}}+T_{\mathrm{substance~2}})$
and all temperatures were taken at the exact time of mixing, found from a linear fit of temperature points on a graph. Now I have another mixture, water plus a substance, and I am supposed to find the $\Delta T_{\mathrm{reaction}}$.
Here's the thing, we don't have the temperature of the substance, only the temp of the water and the solution it made, so how could I go about doing it? I thought about using room temperature since it was a solid substance but it doesn't say anything about that.
The substance is solid $\ce{NaOH}$.
• Please avoid Latex in titles due to searching issues. – bon May 3 '15 at 17:55
• The solid likely was at room temperature, but it also probably had lower mass and almost certainly had lower heat capacity than the solution to which it was added. I suspect that the equation you gave is for mixing two dilute solutions of equal volume. If I'm not mistaken it should be Tmixture - 1/2... rather than + 1/2. – Jason Patterson Jun 3 '15 at 6:38
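If one adopts the sign correction suggested in the last comment, the computation is a one-liner; a sketch with invented readings (the temperatures below are illustrative, not lab data):

```python
# Sign-corrected formula suggested in the comments:
#   ΔT_reaction = T_mixture - (1/2)(T_substance1 + T_substance2)
def delta_t_reaction(t_mixture, t_substance1, t_substance2):
    return t_mixture - 0.5 * (t_substance1 + t_substance2)

# Hypothetical readings in °C: two solutions at 21.0 and 23.0 mixed, with the
# mixture temperature extrapolated to 28.5 at the moment of mixing.
print(delta_t_reaction(28.5, 21.0, 23.0))   # 6.5
```

This reads as "mixture temperature minus the average initial temperature," which is why the plus sign in the original handout looks like a typo.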
https://ictp.acad.ro/radial-solutions-for-some-classes-of-elliptic-boundary-value-problems/

# Radial solutions for some classes of elliptic boundary value problems
## Abstract
The aim of this paper is to present some existence and localization results of radial solutions for elliptic equations and systems on an annulus $$\Omega$$ of $$\mathbb{R}^{N}$$ $$\left( N\geq1\right)$$ of radii $$a$$ and $$b$$ with $$0<a<b$$. The main tool is Schauder's fixed point theorem.
## Authors
Toufik Moussaoui
Department of Mathematics, E.N.S., P.O. Box 92, 16050 Kouba, Algiers, Algeria

Radu Precup
Department of Mathematics, Babes-Bolyai University, Cluj-Napoca, Romania
## Keywords
Radial solutions; elliptic boundary value problem; elliptic systems; Schauder’s fixed point theorem.
## Paper coordinates
T. Moussaoui, R. Precup, Radial solutions for some classes of elliptic boundary value problems, Studia Univ. Babes-Bolyai Math. 53 (2008), no.1, 35-42.
## Journal

Studia Univ. "Babeș-Bolyai" Mathematica
ISSN: 0370-8659
https://slideplayer.com/slide/6257818/

Solving Quadratic Equations Section 1.3
What is a Quadratic Equation?
A quadratic equation in x is an equation that can be written in the standard form ax² + bx + c = 0, where a, b, and c are real numbers and a ≠ 0.
Solving a Quadratic Equation by Factoring.
The factoring method applies the zero product property: Words: If a product is zero, then at least one of its factors has to be zero. Math: If (B)(C)=0, then B=0 or C=0 or both.
Recap of steps for how to solve by Factoring
1. Set the equation equal to 0 (keep the squared term positive).
2. Factor.
3. Set each factor equal to 0.
4. Solve each equation (be careful when determining solutions; some may be imaginary numbers).
Example 1 Solve x² - 12x + 35 = 0 by factoring.
Factor: (x – 7)(x – 5) = 0. Set each factor equal to zero by the zero product property: x – 7 = 0 or x – 5 = 0. Solve each equation to find the solutions: x = 7 or x = 5. The solution set is { 5, 7 }.
Example 2 Solve 3t² + 10t + 6 = -2 by factoring.
Check the equation to make sure it is in standard form before solving. Is it? It is not, so set the equation equal to zero first: 3t² + 10t + 8 = 0. Now factor and solve: (3t + 4)(t + 2) = 0, so 3t + 4 = 0 or t + 2 = 0, giving t = −4/3 or t = −2.
Solve by factoring.
Solve by the Square Root Method.
If the quadratic has the form ax² + c = 0, where a ≠ 0, then we could use the square root method to solve. Words: If an expression squared is equal to a constant, then that expression is equal to the positive or negative square root of the constant. Math: If x² = c, then x = ±√c. Note: The variable squared must be isolated first (coefficient equal to 1).
Example 1: Solve by the Square Root Method:
x² = 16, so x = ±4.
Example 2: Solve by the Square Root Method.
x² = −1, so x = ±i.
Example 3: Solve by the Square Root Method.
(x – 3)² = 25, so x – 3 = 5 or x – 3 = −5, giving x = 8 or x = −2.
Solve by the Square Root Method
Solve by Completing the Square.
Words:
1. Express the quadratic equation in the form x² + bx = c.
2. Divide b by 2 and square the result, then add the square to both sides.
3. Write the left side of the equation as a perfect square.
4. Solve by using the square root method.

Math:
x² + bx = c
x² + bx + (b/2)² = c + (b/2)²
(x + b/2)² = c + (b/2)²
Example 1: Solve by Completing the Square.
x² + 8x – 3 = 0
Add three to both sides: x² + 8x = 3.
Add (b/2)², which is (4)² = 16, to both sides: x² + 8x + 16 = 3 + 16.
Write the left side as a perfect square and simplify the right side: (x + 4)² = 19.
Apply the square root method: x + 4 = ±√19.
Subtract 4 from both sides to get the two solutions: x = −4 ± √19.
Example 2: Solve by Completing the Square when the Leading Coefficient is not equal to 1.
2x² − 4x + 3 = 0
Divide by the leading coefficient: x² − 2x + 3/2 = 0, so x² − 2x = −3/2.
Complete the square: x² − 2x + 1 = −3/2 + 1, so (x − 1)² = −1/2.
Apply the square root method: x − 1 = ±√(−1/2), and simplify the radical: x = 1 ± (√2/2)i.
Quadratic Formula. If a quadratic can't be factored, you must use the quadratic formula. If ax² + bx + c = 0, then the solution is:

x = (−b ± √(b² − 4ac)) / (2a)
Solve x² − 4x − 1 = 0. Here a = 1, b = −4, c = −1, so x = (4 ± √20)/2 = 2 ± √5.
Discriminant. The term inside the radical, b² − 4ac, is called the discriminant. The discriminant gives important information about the corresponding solutions of ax² + bx + c = 0, where a, b, and c are real numbers:
b² − 4ac > 0: two distinct real solutions
b² − 4ac = 0: one repeated real solution
b² − 4ac < 0: two complex conjugate (non-real) solutions
Tell what kind of solution to expect.
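The factoring, square-root, completing-the-square, and formula methods all reduce to the same casework on the discriminant; a small sketch (the function name is ours):

```python
import cmath
import math

def solve_quadratic(a, b, c):
    """Solve ax^2 + bx + c = 0, classifying the roots via the discriminant."""
    if a == 0:
        raise ValueError("not quadratic: a must be nonzero")
    disc = b * b - 4 * a * c
    if disc > 0:
        r = math.sqrt(disc)
        return "two real", ((-b + r) / (2 * a), (-b - r) / (2 * a))
    if disc == 0:
        return "one repeated real", (-b / (2 * a),)
    r = cmath.sqrt(disc)  # negative discriminant: complex conjugate pair
    return "two complex conjugate", ((-b + r) / (2 * a), (-b - r) / (2 * a))

print(solve_quadratic(1, -12, 35))   # ('two real', (7.0, 5.0))
print(solve_quadratic(2, -4, 3))     # the complex pair from the example above
```

The second call reproduces 1 ± (√2/2)i, matching the completing-the-square example.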
http://nrich.maths.org/4741/index?nomenu=1

$ABCD$ is a rectangle. $P$ is the midpoint of $AD$; the length of $BQ$ is one third the length of $BC$. What fraction of the area of the rectangle is the area of the shaded quadrilateral $ABQP$?
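One way to check the answer: coordinatize the rectangle and apply the shoelace formula (the coordinate placement below is our own convention):

```python
from fractions import Fraction

def shoelace(pts):
    """Polygon area via the shoelace formula (vertices listed in order)."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

# Unit rectangle (the ratio is scale-invariant).
w, h = Fraction(1), Fraction(1)
A, B, C, D = (0, 0), (w, 0), (w, h), (0, h)
P = (0, h / 2)    # midpoint of AD
Q = (w, h / 3)    # BQ is one third of BC, measured from B
print(shoelace([A, B, Q, P]) / (w * h))   # 5/12
```

$ABQP$ is a trapezoid with parallel sides $AP = h/2$ and $BQ = h/3$, so its area is $\tfrac{w}{2}(h/2 + h/3) = \tfrac{5}{12}wh$, agreeing with the shoelace computation.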
https://cringproject.wordpress.com/tag/puiseux-theorem/

## Posts Tagged 'Puiseux theorem'
### Puiseux’s theorem
March 22, 2011
In Local Fields, Serre uses ramification groups to prove that the algebraic closure of the power series field $K((T))$ over an algebraically closed field $K$ of characteristic zero is the colimit of $K((T^{1/n}))$ for $n > 0$; this is, as he calls it, a formal analog of the usual Puiseux theorem. If I am not being silly, there is an easier way to see this. For any finite extension, the unique valuation on the power series field extends uniquely (since these are all complete); the residue class field extension is trivial, since that of the power series field is already $K$. Thus the extension is tamely ramified, and now by general facts, any totally and tamely ramified extension of a local field is obtained by adjoining a root of a uniformizer. (Cf. Lang’s Algebraic Number Theory; this follows from Hensel’s lemma.)
I added this to the chapter on completions. Also, as promised in the comments to this post, there is now some material on finite presentation in a chapter currently loosely marked various. It is far from complete, though.
https://en.wikipedia.org/wiki/Antimuon

# Muon
[Figure: The Moon's cosmic ray shadow, as seen in secondary muons generated by cosmic rays in the atmosphere, and detected 700 meters below ground, at the Soudan 2 detector.]

- Composition: Elementary particle
- Statistics: Fermionic
- Family: Lepton
- Generation: Second
- Interactions: Gravity, electromagnetic, weak
- Symbol: μ−
- Antiparticle: Antimuon (μ+)
- Discovered: Carl D. Anderson, Seth Neddermeyer (1936)
- Mass: 1.883531627(42)×10⁻²⁸ kg,[1] 0.1134289259(25) Da,[2] 105.6583755(23) MeV/c²[3]
- Mean lifetime: 2.1969811(22)×10⁻⁶ s[4][5]
- Decays into: e−, ν̄e, νμ (most common)[5]
- Electric charge: −1 e
- Color charge: None
- Spin: 1/2
- Weak isospin: LH: −1/2, RH: 0
- Weak hypercharge: LH: −1, RH: −2
A muon (/ˈmjuːɒn/ MYOO-on; from the Greek letter mu (μ) used to represent it) is an elementary particle similar to the electron, with an electric charge of −1 e and a spin of 1/2, but with a much greater mass. It is classified as a lepton. As with other leptons, the muon is not thought to be composed of any simpler particles; that is, it is a fundamental particle.
The muon is an unstable subatomic particle with a mean lifetime of 2.2 μs, much longer than many other subatomic particles. As with the decay of the non-elementary neutron (with a lifetime around 15 minutes), muon decay is slow (by subatomic standards) because the decay is mediated only by the weak interaction (rather than the more powerful strong interaction or electromagnetic interaction), and because the mass difference between the muon and the set of its decay products is small, providing few kinetic degrees of freedom for decay. Muon decay almost always produces at least three particles, which must include an electron of the same charge as the muon and two types of neutrinos.
Like all elementary particles, the muon has a corresponding antiparticle of opposite charge (+1 e) but equal mass and spin: the antimuon (also called a positive muon). Muons are denoted by μ− and antimuons by μ+. Formerly, muons were called mu mesons, but are not classified as mesons by modern particle physicists (see § History), and that name is no longer used by the physics community.
Muons have a mass of 105.66 MeV/c², which is approximately 207 times that of the electron, me; more precisely, it is 206.7682830(46) me.[6] There is also a third lepton, the tau, approximately 17 times heavier than the muon.
Due to their greater mass, muons accelerate more slowly than electrons in electromagnetic fields, and emit less bremsstrahlung (deceleration radiation). This allows muons of a given energy to penetrate far deeper into matter because the deceleration of electrons and muons is primarily due to energy loss by the bremsstrahlung mechanism. For example, so-called secondary muons, created by cosmic rays hitting the atmosphere, can penetrate the atmosphere and reach Earth's land surface and even into deep mines.
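The penetration argument is often paired with relativistic time dilation of the 2.2 μs lifetime quoted above; a rough sketch (the 2 GeV energy and 10 km path are illustrative assumptions, and energy loss in matter is ignored):

```python
import math

C = 2.998e8            # speed of light, m/s
TAU = 2.1969811e-6     # muon mean lifetime, s (value from the infobox above)
M_MU = 105.6583755     # muon mass, MeV/c^2

def survival_fraction(distance_m, energy_mev):
    """Fraction of muons surviving a straight path of the given length,
    with relativistic time dilation but ignoring energy loss in matter."""
    gamma = energy_mev / M_MU
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    proper_time = distance_m / (beta * C * gamma)   # time in the muon's frame
    return math.exp(-proper_time / TAU)

# A hypothetical 2 GeV muon crossing ~10 km of atmosphere: a large fraction
# survives, whereas without time dilation the trip would last ~15 lifetimes.
print(survival_fraction(10_000, 2000.0))
```

At lower energies the survival fraction collapses, which is consistent with the observation that mostly energetic secondary muons reach the surface and deep mines.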
Because muons have a greater mass and energy than the decay energy of radioactivity, they are not produced by radioactive decay. However they are produced in great amounts in high-energy interactions in normal matter, in certain particle accelerator experiments with hadrons, and in cosmic ray interactions with matter. These interactions usually produce pi mesons initially, which almost always decay to muons.
As with the other charged leptons, the muon has an associated muon neutrino, denoted by νμ, which differs from the electron neutrino and participates in different nuclear reactions.
## History
Muons were discovered by Carl D. Anderson and Seth Neddermeyer at Caltech in 1936, while studying cosmic radiation. Anderson noticed particles that curved differently from electrons and other known particles when passed through a magnetic field. They were negatively charged but curved less sharply than electrons, but more sharply than protons, for particles of the same velocity. It was assumed that the magnitude of their negative electric charge was equal to that of the electron, and so to account for the difference in curvature, it was supposed that their mass was greater than an electron but smaller than a proton. Thus Anderson initially called the new particle a mesotron, adopting the prefix meso- from the Greek word for "mid-". The existence of the muon was confirmed in 1937 by J. C. Street and E. C. Stevenson's cloud chamber experiment.[7]
A particle with a mass in the meson range had been predicted before the discovery of any mesons, by theorist Hideki Yukawa:[8]
It seems natural to modify the theory of Heisenberg and Fermi in the following way. The transition of a heavy particle from neutron state to proton state is not always accompanied by the emission of light particles. The transition is sometimes taken up by another heavy particle.
Because of its mass, the mu meson was initially thought to be Yukawa's particle and some scientists, including Niels Bohr, originally named it the yukon. Yukawa's predicted particle, the pi meson, was finally identified in 1947 (again from cosmic ray interactions), and was shown to differ from the mu meson by having the properties of a particle that mediated the nuclear force.
With two particles now known with the intermediate mass, the more general term meson was adopted to refer to any such particle within the correct mass range between electrons and nucleons. Further, in order to differentiate between the two different types of mesons after the second meson was discovered, the initial mesotron particle was renamed the mu meson (the Greek letter μ [mu] corresponds to m), and the new 1947 meson (Yukawa's particle) was named the pi meson.
As more types of mesons were discovered in accelerator experiments later, it was eventually found that the mu meson significantly differed not only from the pi meson (of about the same mass), but also from all other types of mesons. The difference, in part, was that mu mesons did not interact with the nuclear force, as pi mesons did (and were required to do, in Yukawa's theory). Newer mesons also showed evidence of behaving like the pi meson in nuclear interactions, but not like the mu meson. Also, the mu meson's decay products included both a neutrino and an antineutrino, rather than just one or the other, as was observed in the decay of other charged mesons.
In the eventual Standard Model of particle physics codified in the 1970s, all mesons other than the mu meson were understood to be hadrons – that is, particles made of quarks – and thus subject to the nuclear force. In the quark model, a meson was no longer defined by mass (for some had been discovered that were very massive – more than nucleons), but instead were particles composed of exactly two quarks (a quark and antiquark), unlike the baryons, which are defined as particles composed of three quarks (protons and neutrons were the lightest baryons). Mu mesons, however, had shown themselves to be fundamental particles (leptons) like electrons, with no quark structure. Thus, mu "mesons" were not mesons at all, in the new sense and use of the term meson used with the quark model of particle structure.
With this change in definition, the term mu meson was abandoned, and replaced whenever possible with the modern term muon, making the term "mu meson" only a historical footnote. In the new quark model, other types of mesons sometimes continued to be referred to in shorter terminology (e.g., pion for pi meson), but in the case of the muon, it retained the shorter name and was never again properly referred to by older "mu meson" terminology.
The eventual recognition of the muon as a simple "heavy electron", with no role at all in the nuclear interaction, seemed so incongruous and surprising at the time, that Nobel laureate I. I. Rabi famously quipped, "Who ordered that?"[9]
In the Rossi–Hall experiment (1941), muons were used to observe the time dilation (or, alternatively, length contraction) predicted by special relativity, for the first time.[10]
## Muon sources
Cosmic ray muon passing through lead in cloud chamber
Muons arriving on the Earth's surface are created indirectly as decay products of collisions of cosmic rays with particles of the Earth's atmosphere.[11]
About 10,000 muons reach every square meter of the earth's surface a minute; these charged particles form as by-products of cosmic rays colliding with molecules in the upper atmosphere. Traveling at relativistic speeds, muons can penetrate tens of meters into rocks and other matter before attenuating as a result of absorption or deflection by other atoms.[12]
When a cosmic ray proton impacts atomic nuclei in the upper atmosphere, pions are created. These decay within a relatively short distance (meters) into muons (their preferred decay product), and muon neutrinos. The muons from these high-energy cosmic rays generally continue in about the same direction as the original proton, at a velocity near the speed of light. Although their lifetime without relativistic effects would allow a half-survival distance of only about 456 meters ( 2.197 µs × ln(2) × 0.9997 × c ) at most (as seen from Earth) the time dilation effect of special relativity (from the viewpoint of the Earth) allows cosmic ray secondary muons to survive the flight to the Earth's surface, since in the Earth frame the muons have a longer half-life due to their velocity. From the viewpoint (inertial frame) of the muon, on the other hand, it is the length contraction effect of special relativity which allows this penetration, since in the muon frame its lifetime is unaffected, but the length contraction causes distances through the atmosphere and Earth to be far shorter than these distances in the Earth rest-frame. Both effects are equally valid ways of explaining the fast muon's unusual survival over distances.
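The 456-metre figure quoted above, and the effect of time dilation on it, can be reproduced in a few lines. This is an illustrative sketch (not part of the article); β = 0.9997 is the velocity used in the text, and at that speed the Lorentz factor is about 41.

```python
import math

C = 299_792_458.0    # speed of light, m/s
TAU = 2.1969811e-6   # muon mean lifetime, s
BETA = 0.9997        # velocity fraction used in the article's estimate

# Half-survival distance ignoring time dilation: the distance over which
# half of a muon population decays if its clock ran at the lab rate.
d_naive = TAU * math.log(2) * BETA * C
print(f"naive half-survival distance: {d_naive:.0f} m")  # ~456 m

# With time dilation, the lab-frame mean lifetime is gamma * TAU.
gamma = 1.0 / math.sqrt(1.0 - BETA**2)
d_dilated = gamma * TAU * math.log(2) * BETA * C
print(f"dilated half-survival distance: {d_dilated/1000:.1f} km")
```

The dilated half-survival distance of roughly 19 km is why a substantial fraction of muons produced at ~15 km altitude reach the ground.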
Since muons are unusually penetrative of ordinary matter, like neutrinos, they are also detectable deep underground (700 meters at the Soudan 2 detector) and underwater, where they form a major part of the natural background ionizing radiation. Like cosmic rays, as noted, this secondary muon radiation is also directional.
The same nuclear reaction described above (i.e. hadron-hadron impacts to produce pion beams, which then quickly decay to muon beams over short distances) is used by particle physicists to produce muon beams, such as the beam used for the muon g−2 experiment.[13]
## Muon decay
The most common decay of the muon
Muons are unstable elementary particles and are heavier than electrons and neutrinos but lighter than all other matter particles. They decay via the weak interaction. Because leptonic family numbers are conserved in the absence of an extremely unlikely immediate neutrino oscillation, one of the product neutrinos of muon decay must be a muon-type neutrino and the other an electron-type antineutrino (antimuon decay produces the corresponding antiparticles, as detailed below).
Because charge must be conserved, one of the products of muon decay is always an electron of the same charge as the muon (a positron if it is a positive muon). Thus all muons decay to at least an electron and two neutrinos. Sometimes, besides these necessary products, additional particles with no net charge and zero spin (e.g., a pair of photons, or an electron–positron pair) are produced.
The dominant muon decay mode (sometimes called the Michel decay after Louis Michel) is the simplest possible: the muon decays to an electron, an electron antineutrino, and a muon neutrino. Antimuons, in mirror fashion, most often decay to the corresponding antiparticles: a positron, an electron neutrino, and a muon antineutrino. In formulaic terms, these two decays are:
μ− → e− + ν̄e + νμ

μ+ → e+ + νe + ν̄μ
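The charge- and lepton-family-number bookkeeping behind these decay rules can be checked mechanically. The sketch below is illustrative (the particle table and `conserved` helper are ad-hoc labels, not a standard library): each particle carries a triple (electric charge, electron-lepton number, muon-lepton number), and a decay is allowed only if all three sums balance.

```python
# Each particle: (electric charge, electron-lepton number, muon-lepton number).
particles = {
    "mu-":        (-1,  0, +1),
    "mu+":        (+1,  0, -1),
    "e-":         (-1, +1,  0),
    "e+":         (+1, -1,  0),
    "nu_e":       ( 0, +1,  0),
    "anti_nu_e":  ( 0, -1,  0),
    "nu_mu":      ( 0,  0, +1),
    "anti_nu_mu": ( 0,  0, -1),
}

def conserved(initial, final):
    """True if charge and both lepton family numbers balance."""
    totals = lambda names: [sum(particles[n][i] for n in names) for i in range(3)]
    return totals(initial) == totals(final)

# The Michel decay and its charge conjugate are allowed:
assert conserved(["mu-"], ["e-", "anti_nu_e", "nu_mu"])
assert conserved(["mu+"], ["e+", "nu_e", "anti_nu_mu"])
# mu- -> e- (+ photon) violates lepton flavour (the photon carries no lepton number):
assert not conserved(["mu-"], ["e-"])
```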
The mean lifetime, τ = ħ/Γ, of the (positive) muon is 2.1969811±0.0000022 μs.[4] The equality of the muon and antimuon lifetimes has been established to better than one part in 10⁴.[14]
### Prohibited decays
Certain neutrino-less decay modes are kinematically allowed but are, for all practical purposes, forbidden in the Standard Model, even given that neutrinos have mass and oscillate. Examples forbidden by lepton flavour conservation are:
μ− → e− + γ   and   μ− → e− + e+ + e−.
To be precise: in the Standard Model with neutrino mass, a decay like μ− → e− + γ is technically possible, for example by neutrino oscillation of a virtual muon neutrino into an electron neutrino, but such a decay is astronomically unlikely and therefore should be experimentally unobservable: less than one in 10⁵⁰ muon decays should produce such a decay.
Observation of such decay modes would constitute clear evidence for theories beyond the Standard Model. Upper limits for the branching fractions of such decay modes were measured in many experiments starting more than 50 years ago. The current upper limit for the μ+ → e+ + γ branching fraction was measured 2009–2013 in the MEG experiment and is 4.2×10⁻¹³.[15]
### Theoretical decay rate
The muon decay width, which follows from Fermi's golden rule, has dimension of energy and must be proportional to the square of the amplitude, and thus to the square of Fermi's coupling constant (${\displaystyle G_{\text{F}}}$), with overall dimension of inverse fourth power of energy. By dimensional analysis, this leads to Sargent's rule of fifth-power dependence on mμ,[16][17]
${\displaystyle \Gamma ={\frac {G_{\text{F}}^{2}m_{\mu }^{5}}{192\pi ^{3}}}~I\left({\frac {m_{\text{e}}^{2}}{m_{\mu }^{2}}}\right),}$
where ${\displaystyle I(x)=1-8x-12x^{2}\ln x+8x^{3}-x^{4}}$ is evaluated at the squared mass ratio ${\displaystyle x={\frac {m_{\text{e}}^{2}}{m_{\mu }^{2}}}}$.[17] In the decay distributions below, by contrast, ${\displaystyle x={\frac {2\,E_{\text{e}}}{m_{\mu }\,c^{2}}}}$ denotes the fraction of the maximum energy transmitted to the electron.
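As a quick numerical check (illustrative, not from the article; constants are standard CODATA/PDG values in natural units), the tree-level width formula above already reproduces the measured lifetime of 2.197 µs to within about half a percent; radiative corrections account for the small remainder.

```python
import math

G_F  = 1.1663787e-5     # Fermi coupling constant, GeV^-2 (natural units)
M_MU = 0.1056583745     # muon mass, GeV
M_E  = 0.00051099895    # electron mass, GeV
HBAR = 6.582119569e-25  # reduced Planck constant, GeV*s

def I(x):
    """Phase-space factor from the decay-width formula."""
    return 1 - 8*x - 12*x**2 * math.log(x) + 8*x**3 - x**4

x = (M_E / M_MU) ** 2
width = G_F**2 * M_MU**5 / (192 * math.pi**3) * I(x)  # decay width Gamma, GeV
tau = HBAR / width                                    # mean lifetime, s

print(f"width ~ {width:.3e} GeV")
print(f"tau   ~ {tau * 1e6:.3f} us (measured: 2.197 us)")
```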
The decay distributions of the electron in muon decays have been parameterised using the so-called Michel parameters. The values of these four parameters are predicted unambiguously in the Standard Model of particle physics, thus muon decays represent a good test of the spacetime structure of the weak interaction. No deviation from the Standard Model predictions has yet been found.
For the decay of the muon, the expected decay distribution for the Standard Model values of Michel parameters is
${\displaystyle {\frac {\operatorname {d} ^{2}\Gamma }{\operatorname {d} x\,\operatorname {d} \cos \theta }}\sim x^{2}[(3-2x)+P_{\mu }\cos \theta \,(1-2x)]}$
where ${\displaystyle \theta }$ is the angle between the muon's polarization vector ${\displaystyle \mathbf {P} _{\mu }}$ and the decay-electron momentum vector, and ${\displaystyle P_{\mu }=|\mathbf {P} _{\mu }|}$ is the fraction of muons that are forward-polarized. Integrating this expression over electron energy gives the angular distribution of the daughter electrons:
${\displaystyle {\frac {\operatorname {d} \Gamma }{\operatorname {d} \cos \theta }}\sim 1-{\frac {1}{3}}P_{\mu }\cos \theta .}$
The electron energy distribution integrated over the polar angle (valid for ${\displaystyle x<1}$) is
${\displaystyle {\frac {\operatorname {d} \Gamma }{\operatorname {d} x}}\sim (3x^{2}-2x^{3}).}$
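A short numerical sketch (illustrative, not part of the article): integrating the unnormalised spectrum 3x² − 2x³ confirms the analytic normalisation of 1/2 and a mean energy fraction ⟨x⟩ = 7/10, i.e. a mean decay-electron energy of about 37 MeV out of a maximum of roughly 52.8 MeV (electron mass neglected).

```python
M_MU_C2 = 105.6583745   # muon rest energy, MeV
E_MAX = M_MU_C2 / 2.0   # maximum electron energy, ~52.8 MeV

# dGamma/dx ~ 3x^2 - 2x^3 on x in [0, 1]; integrate with the midpoint rule.
N = 100_000
dx = 1.0 / N
norm = mean_x = 0.0
for i in range(N):
    x = (i + 0.5) * dx
    w = 3 * x**2 - 2 * x**3
    norm += w * dx
    mean_x += x * w * dx
mean_x /= norm

print(f"normalisation: {norm:.4f}")                    # analytic value: 1/2
print(f"<x> = {mean_x:.4f}")                           # analytic value: 7/10
print(f"mean electron energy ~ {mean_x * E_MAX:.1f} MeV")
```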
Because the direction the electron is emitted in (a polar vector) is preferentially aligned opposite the muon spin (an axial vector), the decay is an example of non-conservation of parity by the weak interaction. This is essentially the same experimental signature as was used in the original demonstration of parity violation. More generally, in the Standard Model all charged leptons decay via the weak interaction and likewise violate parity symmetry.
## Muonic atoms
The muon was the first elementary particle discovered that does not appear in ordinary atoms.
### Negative muon atoms
Negative muons can form muonic atoms (previously called mu-mesic atoms), by replacing an electron in ordinary atoms. Muonic hydrogen atoms are much smaller than typical hydrogen atoms because the much larger mass of the muon gives it a much more localized ground-state wavefunction than is observed for the electron. In multi-electron atoms, when only one of the electrons is replaced by a muon, the size of the atom continues to be determined by the other electrons, and the atomic size is nearly unchanged. However, in such cases the orbital of the muon continues to be smaller and far closer to the nucleus than the atomic orbitals of the electrons.
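The size reduction in muonic hydrogen follows from the Bohr-model scaling of the radius with the inverse of the orbiting particle's reduced mass. The sketch below is a rough illustration (standard PDG/CODATA constants; it ignores relativistic and finite-nuclear-size effects):

```python
M_E  = 0.51099895    # electron mass, MeV/c^2
M_MU = 105.6583745   # muon mass, MeV/c^2
M_P  = 938.2720813   # proton mass, MeV/c^2
A0   = 0.529177e-10  # Bohr radius for an infinitely heavy nucleus, m

def reduced(m1, m2):
    """Reduced mass of a two-body system."""
    return m1 * m2 / (m1 + m2)

# Bohr radius ~ 1 / (reduced mass of the orbiting particle).
shrink = reduced(M_MU, M_P) / reduced(M_E, M_P)
r_muonic = A0 / shrink
print(f"muonic hydrogen is ~{shrink:.0f}x smaller than ordinary hydrogen")
print(f"ground-state radius ~ {r_muonic * 1e15:.0f} fm")
```

The resulting ~186-fold shrinkage (to roughly 285 fm) is what makes the muonic-hydrogen spectrum so sensitive to the proton's finite size.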
Spectroscopic measurements in muonic hydrogen have been used to produce a precise estimate of the proton radius.[18] The results of these measurements diverged from the then-accepted value, giving rise to the so-called proton radius puzzle. The puzzle was later resolved when new, improved measurements of the proton radius in electronic hydrogen became available.[19]
Muonic helium is created by substituting a muon for one of the electrons in helium-4. The muon orbits much closer to the nucleus, so muonic helium can be regarded as an isotope of helium whose nucleus consists of two neutrons, two protons and a muon, with a single electron outside. Colloquially, it could be called "helium 4.1", since the mass of the muon is slightly greater than 0.1 dalton. Chemically, muonic helium, possessing an unpaired valence electron, can bond with other atoms, and behaves more like a hydrogen atom than an inert helium atom.[20][21][22]
Muonic heavy hydrogen atoms with a negative muon may undergo nuclear fusion in the process of muon-catalyzed fusion; after the fusion the muon may leave the new atom and induce fusion in another hydrogen molecule. This process continues until the negative muon is captured by a helium nucleus, where it remains until it decays.
Negative muons bound to conventional atoms can be captured (muon capture) through the weak force by protons in nuclei, in a sort of electron-capture-like process. When this happens, nuclear transmutation results: The proton becomes a neutron and a muon neutrino is emitted.
### Positive muon atoms
A positive muon, when stopped in ordinary matter, cannot be captured by a proton, since the two positive charges repel. The positive muon is also not attracted to the nuclei of atoms. Instead, it binds a random electron and with this electron forms an exotic atom known as a muonium (Mu) atom. In this atom, the muon acts as the nucleus. The positive muon, in this context, can be considered a pseudo-isotope of hydrogen with one ninth of the mass of the proton. Because the mass of the electron is much smaller than the mass of both the proton and the muon, the reduced mass of muonium, and hence its Bohr radius, is very close to that of hydrogen. Therefore this bound muon–electron pair can be treated to a first approximation as a short-lived "atom" that behaves chemically like the isotopes of hydrogen (protium, deuterium and tritium).
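The near-equality of the muonium and hydrogen Bohr radii follows from the same reduced-mass scaling: in muonium the electron orbits a μ+ instead of a proton. A quick Bohr-model-level check (illustrative; standard PDG constants):

```python
M_E  = 0.51099895    # electron mass, MeV/c^2
M_MU = 105.6583745   # muon mass, MeV/c^2
M_P  = 938.2720813   # proton mass, MeV/c^2

def reduced(m1, m2):
    """Reduced mass of a two-body system."""
    return m1 * m2 / (m1 + m2)

# Bohr radius ~ 1/(reduced mass): compare the electron bound to a mu+
# (muonium) with the electron bound to a proton (hydrogen).
ratio = reduced(M_E, M_P) / reduced(M_E, M_MU)
print(f"muonium Bohr radius / hydrogen Bohr radius = {ratio:.5f}")
```

The ratio comes out to about 1.004, i.e. the muonium "atom" is only ~0.4% larger than hydrogen, which is why its chemistry is hydrogen-like.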
Both positive and negative muons can be part of a short-lived pi-mu atom consisting of a muon and an oppositely charged pion. These atoms were observed in the 1970s in experiments at Brookhaven and Fermilab.[23][24]
## Anomalous magnetic dipole moment
The anomalous magnetic dipole moment is the difference between the experimentally observed value of the magnetic dipole moment and the theoretical value predicted by the Dirac equation. The measurement and prediction of this value is very important in the precision tests of QED. The E821 experiment[25] at Brookhaven National Laboratory (BNL) and the Muon g-2 experiment at Fermilab studied the precession of the muon spin in a constant external magnetic field as the muons circulated in a confining storage ring. The Muon g-2 collaboration reported [26] in 2021:
${\displaystyle a={\frac {g-2}{2}}=0.00116592061(41)}$.
The prediction for the value of the muon anomalous magnetic moment includes three parts:
${\displaystyle a_{\mu }^{\text{SM}}=a_{\mu }^{\text{QED}}+a_{\mu }^{\text{EW}}+a_{\mu }^{\text{had}}.}$
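For orientation (an illustrative check, not from the article): the bulk of the anomaly comes from the leading one-loop QED term, Schwinger's 1948 result a = α/(2π); higher-order QED, electroweak and hadronic contributions make up the remaining ~0.4%.

```python
import math

ALPHA = 7.2973525693e-3     # fine-structure constant (CODATA)
A_MEASURED = 0.00116592061  # Muon g-2 collaboration value quoted above

# Leading one-loop QED contribution (Schwinger term): a = alpha / (2*pi).
a_schwinger = ALPHA / (2 * math.pi)
print(f"alpha/(2*pi) = {a_schwinger:.8f}")
print(f"measured     = {A_MEASURED:.8f}")
print(f"one-loop term covers {100 * a_schwinger / A_MEASURED:.2f}% of the total")
```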
The difference between the g-factors of the muon and the electron is due to their difference in mass. Because of the muon's larger mass, contributions to the theoretical calculation of its anomalous magnetic dipole moment from Standard Model weak interactions and from contributions involving hadrons are important at the current level of precision, whereas these effects are not important for the electron. The muon's anomalous magnetic dipole moment is also sensitive to contributions from new physics beyond the Standard Model, such as supersymmetry. For this reason, the muon's anomalous magnetic moment is normally used as a probe for new physics beyond the Standard Model rather than as a test of QED.[27] Muon g−2, a new experiment at Fermilab using the E821 magnet, improved the precision of this measurement.[28]
In 2020 an international team of 170 physicists calculated the most accurate prediction for the theoretical value of the muon's anomalous magnetic moment.[29][30]
In 2021, the Fermilab National Accelerator Laboratory (FNAL) Muon g−2 Experiment presented their first results of a new experimental average which increased the difference between experiment and theory to 4.2 standard deviations.[31]
## Electric dipole moment
The current experimental limit on the muon electric dipole moment, |dμ| < 1.9 × 10−19 e·cm set by the E821 experiment at the Brookhaven Laboratory, is orders of magnitude above the Standard Model prediction. The observation of a non-zero muon electric dipole moment would provide an additional source of CP violation. An improvement in sensitivity by two orders of magnitude over the Brookhaven limit is expected from the experiments at Fermilab.
## Muon radiography and tomography

Since muons are much more deeply penetrating than X-rays or gamma rays, muon imaging can be used with much thicker material or, with cosmic ray sources, larger objects. One example is commercial muon tomography used to image entire cargo containers to detect shielded nuclear material, as well as explosives or other contraband.[32]
The technique of muon transmission radiography based on cosmic ray sources was first used in the 1950s to measure the depth of the overburden of a tunnel in Australia[33] and in the 1960s to search for possible hidden chambers in the Pyramid of Chephren in Giza.[34] In 2017, the discovery of a large void (with a length of 30 metres minimum) by observation of cosmic-ray muons was reported.[35]
In 2003, the scientists at Los Alamos National Laboratory developed a new imaging technique: muon scattering tomography. With muon scattering tomography, both incoming and outgoing trajectories for each particle are reconstructed, such as with sealed aluminum drift tubes.[36] Since the development of this technique, several companies have started to use it.
In August 2014, Decision Sciences International Corporation announced it had been awarded a contract by Toshiba for use of its muon tracking detectors in reclaiming the Fukushima nuclear complex.[37] The Fukushima Daiichi Tracker (FDT) was proposed to make a few months of muon measurements to show the distribution of the reactor cores. In December 2014, Tepco reported that they would be using two different muon imaging techniques at Fukushima, "muon scanning method" on Unit 1 (the most badly damaged, where the fuel may have left the reactor vessel) and "muon scattering method" on Unit 2.[38] The International Research Institute for Nuclear Decommissioning IRID in Japan and the High Energy Accelerator Research Organization KEK call the method they developed for Unit 1 the "muon permeation method"; 1,200 optical fibers for wavelength conversion light up when muons come into contact with them.[39] After a month of data collection, it is hoped to reveal the location and amount of fuel debris still inside the reactor. The measurements began in February 2015.[40]
## References
1. ^ "2018 CODATA Value: muon mass". The NIST Reference on Constants, Units, and Uncertainty. NIST. 20 May 2019. Retrieved 20 May 2019.
2. ^ "2018 CODATA Value: muon mass in u". The NIST Reference on Constants, Units, and Uncertainty. NIST. 20 May 2019. Retrieved 14 September 2019.
3. ^ "2018 CODATA Value: muon mass energy equivalent in MeV". The NIST Reference on Constants, Units, and Uncertainty. NIST. 20 May 2019. Retrieved 14 September 2019.
4. ^ a b Beringer, J.; et al. (Particle Data Group) (2012). "Leptons (e, mu, tau, ... neutrinos ...)" (PDF). PDGLive Particle Summary. Particle Data Group. Retrieved 12 January 2013.
5. ^ a b Patrignani, C.; et al. (Particle Data Group) (2016). "Review of Particle Physics" (PDF). Chinese Physics C. 40 (10): 100001. Bibcode:2016ChPhC..40j0001P. doi:10.1088/1674-1137/40/10/100001. hdl:1983/989104d6-b9b4-412b-bed9-75d962c2e000. S2CID 125766528.
6. ^ "2018 CODATA Value: muon-electron mass ratio". The NIST Reference on Constants, Units, and Uncertainty. NIST. 20 May 2019. Retrieved 24 May 2019.
7. ^ Street, J.; Stevenson, E. (1937). "New evidence for the existence of a particle of mass intermediate between the proton and electron". Physical Review. 52 (9): 1003. Bibcode:1937PhRv...52.1003S. doi:10.1103/PhysRev.52.1003. S2CID 1378839.
8. ^ Yukawa, Hideki (1935). "On the interaction of elementary particles" (PDF). Proceedings of the Physico-Mathematical Society of Japan. 17 (48): 139–148.
9. ^ Bartusiak, Marcia (27 September 1987). "Who ordered the muon?". Science & Technology. The New York Times. Retrieved 30 August 2016.
10. ^ Self, Sydney (2018). "APPLICATION OF GENERAL SEMANTICS TO THE NATURE OF TIME HISTORY". A Review of General Semantics. 75 (1–2): 162–166.
11. ^ Demtröder, Wolfgang (2006). Experimentalphysik. Vol. 1 (4 ed.). Springer. p. 101. ISBN 978-3-540-26034-9.
12. ^ Wolverton, Mark (September 2007). "Muons for peace: New way to spot hidden nukes gets ready to debut". Scientific American. 297 (3): 26–28. Bibcode:2007SciAm.297c..26W. doi:10.1038/scientificamerican0907-26. PMID 17784615.
13. ^ "Physicists announce latest muon g-2 measurement" (Press release). Brookhaven National Laboratory. 30 July 2002. Archived from the original on 8 April 2007. Retrieved 14 November 2009.
14. ^ Bardin, G.; Duclos, J.; Magnon, A.; Martino, J.; Zavattini, E. (1984). "A New Measurement of the Positive Muon Lifetime". Phys Lett B. 137 (1–2): 135–140. Bibcode:1984PhLB..137..135B. doi:10.1016/0370-2693(84)91121-3.
15. ^ Baldini, A.M.; et al. (MEG collaboration) (May 2016). "Search for the lepton flavour violating decay μ+ → e+γ with the full dataset of the MEG experiment". arXiv:1605.05081 [hep-ex].
16. ^ Kabbashi, Mahgoub Abbaker (August 2015). Muon Decay Width and Lifetime in the Standard Model (PDF) (MSc). Sudan University of Science and Technology, Khartoum. Retrieved 21 May 2021.
17. ^ a b Klasen, M.; Frekers, D.; Kovařík, K.; Scior, P.; Schmiemann, S. (2017). "Einführung in das Standardmodell der Teilchenphysik - Sheet 10" (PDF). Retrieved 21 May 2021.
18. ^ Antognini, A.; Nez, F.; Schuhmann, K.; Amaro, F. D.; Biraben, F.; Cardoso, J. M. R.; et al. (2013). "Proton Structure from the Measurement of 2S-2P Transition Frequencies of Muonic Hydrogen" (PDF). Science. 339 (6118): 417–420. Bibcode:2013Sci...339..417A. doi:10.1126/science.1230016. hdl:10316/79993. PMID 23349284. S2CID 346658.
19. ^ Karr, Jean-Philippe; Marchand, Dominique (2019). "Progress on the proton-radius puzzle". Nature. 575 (7781): 61–62. Bibcode:2019Natur.575...61K. doi:10.1038/d41586-019-03364-z. ISSN 0028-0836. PMID 31695215.
20. ^ Fleming, D. G.; Arseneau, D. J.; Sukhorukov, O.; Brewer, J. H.; Mielke, S. L.; Schatz, G. C.; Garrett, B. C.; Peterson, K. A.; Truhlar, D. G. (28 January 2011). "Kinetic Isotope Effects for the Reactions of Muonic Helium and Muonium with H2". Science. 331 (6016): 448–450. Bibcode:2011Sci...331..448F. doi:10.1126/science.1199421. PMID 21273484. S2CID 206530683.
21. ^ Moncada, F.; Cruz, D.; Reyes, A (2012). "Muonic alchemy: Transmuting elements with the inclusion of negative muons". Chemical Physics Letters. 539: 209–221. Bibcode:2012CPL...539..209M. doi:10.1016/j.cplett.2012.04.062.
22. ^ Moncada, F.; Cruz, D.; Reyes, A. (10 May 2013). "Electronic properties of atoms and molecules containing one and two negative muons". Chemical Physics Letters. 570: 16–21. Bibcode:2013CPL...570...16M. doi:10.1016/j.cplett.2013.03.004.
23. ^ Coombes, R.; Flexer, R.; Hall, A.; Kennelly, R.; Kirkby, J.; Piccioni, R.; et al. (2 August 1976). "Detection of π−μ coulomb bound states". Physical Review Letters. American Physical Society (APS). 37 (5): 249–252. doi:10.1103/physrevlett.37.249. ISSN 0031-9007.
24. ^ Aronson, S. H.; Bernstein, R. H.; Bock, G. J.; Cousins, R. D.; Greenhalgh, J. F.; Hedin, D.; et al. (19 April 1982). "Measurement of the rate of formation of pi-mu atoms in ${\displaystyle K_{L}^{0}}$ decay". Physical Review Letters. American Physical Society (APS). 48 (16): 1078–1081. Bibcode:1982PhRvL..48.1078A. doi:10.1103/physrevlett.48.1078. ISSN 0031-9007.
25. ^ "The Muon g-2 Experiment Home Page". G-2.bnl.gov. 8 January 2004. Retrieved 6 January 2012.
26. ^ Abi, B.; Albahri, T.; Al-Kilani, S.; Allspach, D.; Alonzi, L.P.; Anastasi, A.; et al. (2021). "Measurement of the Positive Muon Magnetic Moment to 0.46 ppm". Phys Rev Lett. 126 (14): 141801. arXiv:2104.03281. Bibcode:2021PhRvL.126n1801A. doi:10.1103/PhysRevLett.126.141801. PMID 33891447.
27. ^ Hagiwara, K; Martin, A; Nomura, D; Teubner, T (2007). "Improved predictions for g−2 of the muon and αQED(MZ2)". Physics Letters B. 649 (2–3): 173–179. arXiv:hep-ph/0611102. Bibcode:2007PhLB..649..173H. doi:10.1016/j.physletb.2007.04.012. S2CID 118565052.
28. ^ "Revolutionary muon experiment to begin with 3,200 mile move of 50 foot-wide particle storage ring" (Press release). 8 May 2013. Retrieved 16 March 2015.
29. ^ Pinson, Jerald (11 June 2020). "Physicists publish worldwide consensus of muon magnetic moment calculation". Fermilab News. Retrieved 13 February 2022.
30. ^ Aoyama, T.; et al. (December 2020). "The anomalous magnetic moment of the muon in the Standard Model". Physics Reports. 887: 1–166. arXiv:2006.04822. Bibcode:2020PhR...887....1A. doi:10.1016/j.physrep.2020.07.006. S2CID 219559166.
31. ^ Abi, B.; et al. (Muon g−2 Collaboration) (7 April 2021). "Measurement of the Positive Muon Anomalous Magnetic Moment to 0.46 ppm". Physical Review Letters. 126 (14): 141801. arXiv:2104.03281. Bibcode:2021PhRvL.126n1801A. doi:10.1103/PhysRevLett.126.141801. PMID 33891447. S2CID 233169085.
32. ^ "Decision Sciences Corp". Archived from the original on 19 October 2014. Retrieved 10 February 2015.[failed verification]
33. ^ George, E.P. (1 July 1955). "Cosmic rays measure overburden of tunnel". Commonwealth Engineer: 455.
34. ^ Alvarez, L.W. (1970). "Search for hidden chambers in the pyramids using cosmic rays". Science. 167 (3919): 832–839. Bibcode:1970Sci...167..832A. doi:10.1126/science.167.3919.832. PMID 17742609.
35. ^ Morishima, Kunihiro; Kuno, Mitsuaki; Nishio, Akira; Kitagawa, Nobuko; Manabe, Yuta (2017). "Discovery of a big void in Khufu's Pyramid by observation of cosmic-ray muons". Nature. 552 (7685): 386–390. arXiv:1711.01576. Bibcode:2017Natur.552..386M. doi:10.1038/nature24647. PMID 29160306. S2CID 4459597.
36. ^ Borozdin, Konstantin N.; Hogan, Gary E.; Morris, Christopher; Priedhorsky, William C.; Saunders, Alexander; Schultz, Larry J.; Teasdale, Margaret E. (2003). "Radiographic imaging with cosmic-ray muons". Nature. 422 (6929): 277. Bibcode:2003Natur.422..277B. doi:10.1038/422277a. PMID 12646911. S2CID 47248176.
37. ^ "Decision Sciences awarded Toshiba contract for Fukushima Daiichi Nuclear Complex project" (Press release). Decision Sciences. 8 August 2014. Archived from the original on 10 February 2015. Retrieved 10 February 2015.
38. ^ "Tepco to start "scanning" inside of Reactor 1 in early February by using muons". Fukushima Diary. January 2015.
39. ^
40. ^ "Muon scans begin at Fukushima Daiichi". SimplyInfo. 3 February 2015. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 13, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.970652163028717, "perplexity": 2749.5002637364423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710719.4/warc/CC-MAIN-20221130024541-20221130054541-00081.warc.gz"} |
http://mathoverflow.net/questions/37570/technique-to-prove-basepoint-freeness | # Technique to prove basepoint-freeness
Let $X$ be a smooth projective variety over $\mathbb{C}$, and let $L$ be a big and nef line bundle on $X$. I want to prove that $L$ is semi-ample (i.e., $L^m$ is basepoint-free for some $m > 0$).
The only way I know is using Kawamata basepoint-free theorem:
Theorem. Let $(X, \Delta)$ be a proper klt pair with $\Delta$ effective. Let $D$ be a nef Cartier divisor such that $aD-K_X-\Delta$ is nef and big for some $a > 0$. Then $|bD|$ has no basepoints for all $b >> 0$.
Question. What other kinds of techniques to prove semi-ampleness or basepoint-freeness of given line bundle are?
Addition: In my situation, $X$ is the moduli space $\overline{M}_{0,n}$. In this case, the Kodaira dimension is $-\infty$. More generally, I also want to consider genus 0 Kontsevich moduli spaces of stable maps to projective space. $L$ is given by a linear combination of boundary divisors. It is well-known that the boundary divisors are normal crossing, and we know many curves on the space for which we can compute intersection numbers with boundary divisors explicitly.
-
Are there any other special circumstances around your situation? Can you say anything more specifically about $X$, or $L$ (where they come from, etc.) – Karl Schwede Sep 3 '10 at 16:50
I edited the question, thanks. – Moon Sep 4 '10 at 6:43
I don't think that your assertion is true; for example, Lazarsfeld gives an example (PAG, 2.3.3) of a big and nef divisor on a surface such that its graded algebra is not finitely generated, so that the divisor can't be semiample.
But there are some closely related results for nef and big divisors, or even for good divisors (those whose Kodaira dimension equals the numerical dimension), as Mourougane and Russo showed. For example, Wilson's theorem asserts that for any nef and big divisor $D$ on an irreducible projective variety, there exist $m_0\in \mathbb N$ and an effective divisor $N$ such that for all $m\geq m_0$, the linear system $|mD-N|$ has no base points. (PAG, 2.3.9)
-
You are right, nefness + bigness does not guarantee semi-ampleness. My question is the following: Is there any sufficient condition that yields semi-ampleness? Under which conditions do we get the semi-ample property? – Moon Sep 4 '10 at 10:46
Anyway, thank you for answer. By the way, what is the result of Mourougane and Russo? Do you mean the theorem 2.3.9 in PAG? – Moon Sep 4 '10 at 10:49
For example, as I mentioned, a nef and big divisor $D$ is semiample if and only if its graded ring of sections $R(X,D)=\bigoplus_{m\in \mathbb N} H^0(X,mD)$ is finitely generated. – Henri Sep 4 '10 at 10:51
Yes, you could have a look at it here : www.math.jussieu.fr/~mourouga/note_abondant.pdf (its both in french and in english, don't worry!) – Henri Sep 4 '10 at 10:52
Numerical criteria for base-point freeness are known only in specific cases, such as the Kawamata basepoint-free theorem and Reider's theorem (for $\dim X=2$).
In the case you mention, $X=\overline{M}_{0,n}$, the problem of classifying semi-ample divisors is an important one. It is slightly easier in positive characteristic, thanks to a theorem of Keel which says that a nef line bundle $L$ is semi-ample if and only if the restriction $L|_E$ is semi-ample, where $E$ is the exceptional locus, i.e. the union of subvarieties $Z$ such that $L^{\dim Z}.Z=0$. If $f:X\to Y$ is a morphism with exceptional locus $E$, then $L$ is semi-ample if and only if $L^r$ is the pullback of an ample line bundle on $Y$ for some $r>0$. For the precise statements, you might want to take a look at
According to G. Farkas' article, there are currently no known examples of nef divisors which are not semi-ample.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9120107293128967, "perplexity": 242.84756079430278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507442420.22/warc/CC-MAIN-20141017005722-00086-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://www.emis.de/classics/Erdos/cit/98010616.htm

## Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 980.10616
Author: Erdös, Paul; Sarkozy, Gabor N.
Title: On cycles in the coprime graph of integers. (In English)
Source: Electron. J. Comb. 4, No.2, Research paper R8, 11 p. (1997).
Review: In this paper we study cycles in the coprime graph of integers. We denote by $f(n,k)$ the number of positive integers $m \leq n$ with a prime factor among the first $k$ primes. We show that there exists a constant $c$ such that if $A \subset \{1,2,\dots,n\}$ with $|A| > f(n,2)$ (if $6 \mid n$ then $f(n,2) = \frac{2}{3}n$), then the coprime graph induced by $A$ not only contains a triangle, but also a cycle of length $2l+1$ for every positive integer $l \leq cn$.
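As a quick numerical sanity check of the quoted value $f(n,2)=\frac23 n$ for $6\mid n$ (an illustrative sketch; the helper function and its name are mine, not from the review):

```python
# f(n, k): number of positive integers m <= n with a prime factor among
# the first k primes. For k = 2 and 6 | n, inclusion-exclusion over the
# primes 2 and 3 gives n/2 + n/3 - n/6 = 2n/3, as quoted in the review.
FIRST_PRIMES = [2, 3, 5, 7, 11, 13]

def f(n, k):
    primes = FIRST_PRIMES[:k]
    return sum(1 for m in range(1, n + 1) if any(m % p == 0 for p in primes))

for n in (6, 60, 600):
    assert f(n, 2) == 2 * n // 3
```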
Classif.: * 11B75 Combinatorial number theory
05C38 Paths and cycles
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
https://www.arxiv-vanity.com/papers/1209.3567/

# New solutions of charged regular black holes and their stability
Nami Uchikata, Shijun Yoshida, and Toshifumi Futamase
Astronomical Institute, Tohoku University, Aramaki-Aoba, Aoba-ku, Sendai 980-8578, Japan
February 12, 2021
###### Abstract
We construct new regular black hole solutions by matching the de Sitter solution and the Reissner-Nordström solution with a timelike thin shell. The thin shell is assumed to have mass but no pressure and obeys an equation of motion derived from Israel's junction conditions. By investigating the equation of motion for the shell, we obtain stationary solutions of charged regular black holes and examine stability of the solutions. Stationary solutions are found in limited ranges of the parameters, and they are stable against small radial displacement of the shell with the remaining parameters held fixed. Here $L$ denotes the de Sitter horizon radius, $m$ the black hole mass, $M$ the proper mass of the shell, and $Q$ the black hole charge. All the solutions obtained are highly charged, with the charge-to-mass ratio $Q/m$ close to its extremal value. By taking the massless limit of the shell in the present regular black hole solutions, we obtain the charged regular black hole with a massless shell obtained by Lemos and Zanchin, and we investigate stability of the solutions. It is found that Lemos and Zanchin's regular black hole solutions given by the massless limit of the present regular black hole solutions permit stable solutions, which are obtained in the limit $M \to 0$.
04.70.-s
## I Introduction
One of the most interesting questions in general relativity concerns the inner structure of black holes. However, it is hard to give a definite answer to this question because of the existence of a spacetime singularity, where the curvature diverges indefinitely and general relativity breaks down. Penrose and Hawking proved that gravitational collapse with physically reasonable initial conditions inevitably leads to the formation of a singularity, a result known as the singularity theorems pen ; haw ; he . Such singularities are supposed to be concealed by the event horizon and thus isolated from the domain of predictability, but their existence means that we cannot describe the entire spacetime by the present physics. However, there may be situations where some of the assumptions of the singularity theorems do not apply, such as the existence of a cosmological constant somewhere in the spacetime region. Thus, it is of interest whether we can construct models of black holes without spacetime singularities.
Black holes having regular centers are called regular black holes or nonsingular black holes. Existing regular black hole solutions may be divided into two classes. Solutions belonging to one class are characterized by the property that the black hole spacetime is sufficiently smooth everywhere. Bardeen gave this type of a solution for the first time bar . The metric of his solution asymptotically approaches the de Sitter and the Reissner-Nordström solutions in the limits $r \to 0$ and $r \to \infty$, respectively, where $r$ is a Schwarzschild-type radial coordinate. If one chooses appropriate parameters for the Bardeen solution, its spacetime has two horizons and looks like a Reissner-Nordström black hole with a regular center. Although it was thought that the Bardeen solution cannot be an exact solution of Einstein equations, Ayón-Beato and Garcia ab2 showed that the Bardeen solution is given as a gravitational field coupled to a nonlinear magnetic monopole. So far, there have been many investigations based on Bardeen's work for uncharged cases dym ; dym2 ; an and for charged cases ab ; ab2 ; ab3 ; more . The other class of regular black hole solutions is composed of the solutions constructed by matching two distinct spacetimes with a thin transition layer or surface. Typical solutions of this class are composed of a single regular de Sitter core and exterior black hole spacetime between which a single thin shell exists fmm ; lake ; lemos . The layer, which must be located within the event horizon because we consider the regular black hole, can be either a spacelike fmm ; bal , timelike lemos or null hypersurface lemos . The regular uncharged spherically symmetric black holes fmm are motivated by the assumption that the spacetime curvature has an upper limit which is of the order of the Planck scale and the quantum effects become dominant so that the formation of the singularity is avoided.
The collapsing matter will turn into a de Sitter phase when the curvature approaches a critical value. This idea was first suggested for the cosmological context by Sakharov sa and Gliner gl .
As mentioned before, studies on the regular black holes are closely related to matching problems of two different spacetimes and motion of the thin shell. Israel derived convenient matching conditions of spacetimes for non-null transition layers israel . Barrabés and Israel generalized Israel’s junction conditions to a unified description including null hypersurface cases ba . (For the spherically symmetric cases, see Ref. fay .) As for motion of the shell, gravitational collapse of a charged shell has been studied in Refs. chase ; dlc ; boul ; ku . (For the cases of higher dimensional spacetimes and other gravitational theory, see, e.g., Refs. gao ; dgl . ) In those studies, the interior spacetime of the shell is assumed to be flat boul ; ku or the Reissner-Nordström solution with different mass and charge from outside one chase . As argued in those studies, the shell can be stationary and stable only if the shell has pressure. In cases of no pressure, the shell keeps on collapsing or expanding, or the shell collapsing (expanding) at the beginning will turn to expand (collapse).
Stability of regular black holes is also important because unstable solutions cannot occur in nature. For the regular black holes with shells, instability of the stationary shell immediately implies instability of the regular black hole. Balbinot and Poisson bal have analyzed stability of spacelike shells of the uncharged spherically symmetric regular black holes considered by Frolov et al. fmm . They showed that for a certain parameter, the shell can be stationary and stable. In this study, we will apply Balbinot and Poisson’s method to the cases of charged regular black holes; the regular black holes consist of a timelike massive charged shell which separates the de Sitter and the Reissner-Nordström spacetime inside the inner horizon of the Reissner-Nordström solution. Lemos and Zanchin have considered this type of charged regular black holes and obtained exact solutions assuming the shell is massless and pressureless lemos .
Our aim in this study is threefold; to find new regular black hole solutions having a massive thin shell, to examine their stability, and to examine stability of Lemos and Zanchin’s regular black holes. For simplicity, we assume the shell is constructed of dust, i.e., the shell has mass but no pressure. Although the shell is pressureless in this study, we consider the de Sitter spacetime inside the shell, i.e., there exists matter that corresponds to a cosmological constant inside the shell. Thus, the pressureless shell can be in stationary states. Balbinot and Poisson bal considered that the de Sitter horizon is of the order of the Planck scale and it is much smaller than the event horizon. However, we do not a priori make any assumptions for physical scales of the parameters regarding the regular black hole in this study.
The plan of this paper is the following. In Sec. II, we briefly describe formalism for a thin shell using the 3+1 decomposition of Einstein equations and derive equations of motion for a thin dust shell. In Sec. III, we show results of new regular black hole solutions and their stability. Stability of Lemos and Zanchin's regular black holes is argued in Sec. IV. Then, the conclusion is in the last section. Throughout this paper, we use the units of $c = G = 1$, where $c$ and $G$ are the speed of light and the gravitational constant, respectively.
## II Formulation
### II.1 Preliminary
As mentioned before, we consider solutions of Einstein equations in which two different exact solutions, the de Sitter and Reissner-Nordström solutions, are matched by a massive thin shell. Following Ref. bal , in this subsection, we concisely describe the formalism treating motion of a thin shell sandwiched between two arbitrary solutions.
Let the two four-dimensional spacetimes on either side of the shell have metrics $g_{\alpha\beta}$ and systems of coordinates $x^\alpha$. Let $\Sigma$ be a hypersurface described by intrinsic coordinates $\xi^a$ and located at the common boundary of the two spacetimes. Here and henceforth, we use the greek and the roman lowercase letters to describe indices of the four-dimensional spacetime and of the three-dimensional hypersurface, respectively. Let $n^\alpha$ be a unit normal vector to the hypersurface $\Sigma$. Thus, $n^\alpha$ has to satisfy

$$n_\alpha n^\alpha=\epsilon,\qquad e^\alpha_a n_\alpha=0,\qquad (1)$$
where $e^\alpha_a$ is the basis vector on $\Sigma$, defined by

$$e^\alpha_a=\frac{\partial x^\alpha}{\partial \xi^a}.\qquad (2)$$
Here, $\epsilon=1$ ($\epsilon=-1$) when the hypersurface is timelike (spacelike). The induced metric $h_{ab}$ and the extrinsic curvature $K_{ab}$ associated with $\Sigma$ are, respectively, defined by

$$h_{ab}\equiv g_{\alpha\beta}e^\alpha_a e^\beta_b,\qquad K_{ab}\equiv -n_{\alpha|\beta}e^\alpha_a e^\beta_b.\qquad (3)$$
Here and henceforth, we denote the covariant differentiation associated with $g_{\alpha\beta}$ and $h_{ab}$ by the stroke ($|$) and the semicolon ($;$), respectively. To describe motion of the three-dimensional hypersurface, it is useful to rewrite the basic equations in the three-dimensional form. These equations can be derived by contracting the four-dimensional quantities by $n^\alpha$ and/or $e^\alpha_a$. By using the Einstein tensor contracted by $n^\alpha$ and/or $e^\alpha_a$, and the Gauss-Codazzi equations, we obtain

$$-2\epsilon G_{\alpha\beta}n^\alpha n^\beta={}^3\!R+\epsilon\left(K_{ab}K^{ab}-K^2\right),\qquad G_{\alpha\beta}e^\alpha_a n^\beta=K_{;a}-K^b_{\ a;b},\qquad (4)$$
where ${}^3R$ is the three-dimensional Ricci scalar associated with $h_{ab}$, and $K\equiv h^{ab}K_{ab}$. The energy-momentum tensor on the hypersurface, $S_{ab}$, is given by a jump of the extrinsic curvature on $\Sigma$ (see, e.g., Ref. israel ),

$$8\pi S_{ab}=\epsilon\left([K_{ab}]-h_{ab}[K]\right),\qquad (5)$$
where $[X]\equiv X^+-X^-$, and $X^\pm$ is evaluated on $\Sigma$ by taking the limit from the corresponding side of the shell. Then, the energy-momentum conservation equation on the hypersurface will yield

$$S^a_{\ b;a}+\epsilon\left[T_{\alpha\beta}e^\alpha_b n^\beta\right]=0.\qquad (6)$$
So far, we have shown the energy-momentum conservation equation in the general form. We next show how these equations are given for the dust shell. If the shell is composed of dust, the stress-energy tensor of the shell is given by
$$S_{ab}=\sigma u_a u_b,\qquad (7)$$
where $\sigma$ is the surface energy density of the shell and $u^a$ is the matter velocity on $\Sigma$ if the shell is a timelike hypersurface. Thus, the energy-momentum conservation (6) leads to

$$(\sigma u^a)_{;a}=\left[T_{\alpha\beta}u^\alpha n^\beta\right],\qquad (8)$$
where $u^\alpha$ is the four-velocity of the shell, given by $u^\alpha=u^a e^\alpha_a$. The transverse acceleration of the shell is expressed by

$$a^\alpha\equiv u^\alpha_{\ |\beta}u^\beta=u^a_{\ ;b}u^b e^\alpha_a+\epsilon\,u^a u^b K_{ab}\,n^\alpha.\qquad (9)$$
We are interested in the normal component of $a^\alpha$, which describes the motion of the shell. It is straightforward to see that $n_\alpha a^\alpha=u^a u^b K_{ab}$. Equation (5) is equivalent to

$$[K_{ab}]=8\pi\epsilon\left(S_{ab}-\frac{S}{2}h_{ab}\right),\quad\text{with }S=h^{ab}S_{ab}.\qquad (10)$$
Then, we have an equation of motion of the shell,
$$n_\alpha a^\alpha\big|_+-n_\alpha a^\alpha\big|_-=4\pi\epsilon\sigma.\qquad (11)$$
We may also construct an equation from the arithmetic means of the extrinsic curvatures. However, it is not a useful equation for the present situation.
### II.2 Equation of motion of the shell
In order to have a regular center by matching the de Sitter and the black hole spacetimes with a massive thin shell, at least two horizons including extremal cases, in which two horizons coincide, are required bro . For uncharged spherically symmetric cases, since the Schwarzschild black hole, which is the outer solution for this situation, has a single horizon (event horizon), the second horizon must be the de Sitter one. This implies that the shell has to be located between the outer event horizon and the inner de Sitter horizon and that it necessarily has to be spacelike. For charged and/or rotating cases, the black hole solution has double horizons, the event and Cauchy (inner) horizons. We may, therefore, choose any type of shell—timelike, spacelike, or null. Since it is physically natural to assume that the shell is a timelike hypersurface, we choose the shell to be located inside the inner horizon of the black hole solution in these cases.
Assuming the exterior and the interior of the shell to be the Reissner-Nordström and de Sitter spacetimes, respectively, we apply the formalism given in the previous subsection to the present situation. We derive the equation of motion for the shell, given by $R=R(\tau)$, from Eqs. (8) and (11), with $\tau$ being the proper time of the shell. In this study, as mentioned, we assume the shell to be a timelike hypersurface. Thus, the shell radius is assumed to satisfy $R<r_-$ and $R<L$, where $r_-$ and $L$ denote the radii of the inner horizon of the Reissner-Nordström solution and the de Sitter horizon, respectively. The spherically symmetric metric that expresses the inside and outside of the shell is written by

$$ds^2=-f(r)\,dt^2+\frac{1}{f(r)}\,dr^2+r^2\left(d\theta^2+\sin^2\theta\,d\phi^2\right),\qquad (12)$$
and the function $f(r)$ is given by

$$f(r)=\begin{cases} f_{\rm dS}(r)\equiv 1-\dfrac{r^2}{L^2}, & \text{for } r<R(\tau),\\[6pt] f_{\rm RN}(r)\equiv 1-\dfrac{2m}{r}+\dfrac{Q^2}{r^2}, & \text{for } r>R(\tau), \end{cases}\qquad (13)$$
where $m$ and $Q$ are the mass and the charge of the black hole, respectively. Here and henceforth, the subscripts dS and RN mean functions evaluated with the de Sitter and the Reissner-Nordström solutions on $\Sigma$, respectively. Due to the assumptions $R<L$ and $R<r_-$, $f(R)$ is always positive, and $f_{\rm RN}$ has two roots (one root) for $m>Q$ ($m=Q$). The electric potential is $A_t=Q/r$, with the other components of the vector potential vanishing. The four-velocity of the shell is $u^\alpha=(\dot t,\dot R,0,0)$, where the dot denotes differentiation with respect to $\tau$. Because of $u_\alpha u^\alpha=-1$, $\dot t$ may be written as

$$\dot t=\frac{\sqrt{\dot R^2+f(r)}}{f(r)}\equiv\frac{\beta}{f(r)},\qquad (14)$$
where we have set $t$ so that it increases with $\tau$. The normal vector to the shell is, from $e^\alpha_a n_\alpha=0$ and $n_\alpha n^\alpha=1$, given by

$$n^\alpha=\left(\frac{\dot R}{f(r)},\,\beta,\,0,\,0\right),\qquad (15)$$
where we have considered the normal vector pointing from the de Sitter spacetime to the Reissner-Nordström spacetime. The induced metric on the shell is
$$(ds^2)_\Sigma=-d\tau^2+R^2(\tau)\left(d\theta^2+\sin^2\theta\,d\phi^2\right).\qquad (16)$$
Since the four-velocity and the normal vector do not have $\theta$- and $\phi$-components, we only need to consider the $t$- and $r$-components of the energy-momentum tensors $T_{\alpha\beta}$, which leads to

$$(T_{\alpha\beta})_{\rm dS}=-\frac{3g_{\alpha\beta}}{8\pi L^2},\qquad (T_{\alpha\beta})_{\rm RN}=-\frac{g_{\alpha\beta}}{8\pi}\left(\partial_r A_t\right)^2,\qquad\text{for }\alpha,\beta=0,1.\qquad (17)$$
Then, the energy conservation equation,
$$(\sigma u^a)_{;a}=\left[T_{\alpha\beta}u^\alpha n^\beta\right]=0,\qquad (18)$$
leads to
$$\frac{d}{d\tau}\left(R^2\sigma\right)=0.\qquad (19)$$
Note that the right-hand side of Eq. (18) vanishes, since both stress tensors in Eq. (17) are proportional to $g_{\alpha\beta}$ and $u_\alpha n^\alpha=0$. If we define the proper mass of the shell by $M\equiv 4\pi R^2\sigma$, the above equation means that $M$ is independent of $\tau$. Nonvanishing components of the extrinsic curvature are given by

$$n_\alpha a^\alpha=K^\tau_{\ \tau}=\frac{\dot\beta}{\dot R},\qquad K^\theta_{\ \theta}=K^\phi_{\ \phi}=-\frac{\beta}{R}.\qquad (20)$$
From Eq.(11), thus, we have
$$\dot\beta_{\rm RN}-\dot\beta_{\rm dS}=4\pi\dot R\,\sigma=-4\pi\frac{d}{d\tau}\left(R\sigma\right),\qquad (21)$$
where we have used Eq. (19). Integrating Eq. (21), we obtain

$$\sqrt{\dot R^2+1-\frac{R^2}{L^2}}-\sqrt{\dot R^2+1-\frac{2m}{R}+\frac{Q^2}{R^2}}=\frac{M}{R}+C,\qquad (22)$$
where $C$ is an integration constant. From Eqs. (5) and (20), we have $\beta_{\rm dS}-\beta_{\rm RN}=4\pi R\sigma=M/R$. Thus, the integration constant $C$ appearing in Eq. (22) has to be zero. Rather than using Eq. (22), we employ a more convenient form,

$$\dot R^2+V(R)=-1,\qquad (23)$$
where
$$V(R)=-\left(\frac{R^3/L^2+Q^2/R-2m}{2M}-\frac{M}{2R}\right)^2-\frac{R^2}{L^2}.\qquad (24)$$
Equation (23) is a kind of energy conservation law because one may interpret $V(R)$ as an effective potential. This equation is nothing but an equation of motion for the massive thin shell located on the surface of the de Sitter sphere. The stationary solutions, $R=R_0={\rm const.}$, can be obtained by solving $V(R_0)=-1$ and $dV/dR|_{R_0}=0$ simultaneously. The stability of the stationary solutions can be checked by the sign of $d^2V/dR^2$ at the stationary point, i.e., whether or not the stationary shell is at a local minimum of the effective potential $V(R)$. From Eq. (24), we see $V(R)<0$. To obtain the values of the remaining quantities, we use Eq. (22) after obtaining solutions of the regular black hole. Since, in this study, we are concerned with the regular black hole solution, we assume that no naked singularity occurs, i.e., $m\ge Q$.
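For completeness, Eq. (24) can be obtained from Eq. (22) with $C=0$ by isolating one of the square roots; a sketch of the algebra (my own reconstruction of a step the paper leaves implicit):

```latex
% Abbreviate a = \dot{R}^2 + 1 - R^2/L^2 and b = \dot{R}^2 + 1 - 2m/R + Q^2/R^2,
% so that Eq. (22) with C = 0 reads \sqrt{a} - \sqrt{b} = M/R. Then
\begin{align*}
\sqrt{a}+\sqrt{b} &= \frac{R}{M}\,(a-b)
  = \frac{R}{M}\left(\frac{2m}{R}-\frac{Q^2}{R^2}-\frac{R^2}{L^2}\right),\\
2\sqrt{a} &= \frac{M}{R}-\frac{1}{M}\left(\frac{R^3}{L^2}+\frac{Q^2}{R}-2m\right),\\
\dot{R}^2+1-\frac{R^2}{L^2}
  &= \left(\frac{M}{2R}-\frac{R^3/L^2+Q^2/R-2m}{2M}\right)^2,
\end{align*}
% which rearranges to \dot{R}^2 + V(R) = -1 with V(R) as in Eq. (24).
```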
## III Results
In order to show numerical results, we employ units normalized by the de Sitter horizon radius, e.g., $R/L$, $m/L$, $Q/L$, and $M/L$. Let us describe a method to obtain the equilibrium states of the regular black hole. To obtain stationary solutions numerically, we solve $V(R_0)=-1$ and $dV/dR|_{R_0}=0$ simultaneously with a Newton-Raphson-like iterative scheme. During the iteration procedure, the values of two of the parameters are kept constant. Thus, the two algebraic equations contain only two unknown parameters. Then, we can obtain a regular black hole solution if the iteration procedure successfully ends. After obtaining solutions, we obtain the remaining quantities from Eq. (22) and check the sign of $d^2V/dR^2$ to see their stability. If $d^2V/dR^2>0$ ($<0$), then the solution is stable (unstable). Since, as mentioned before, we assume that there is no naked singularity, and the shell is inside the inner horizon of the black hole solution, we are concerned with the solutions satisfying the conditions $m\ge Q$, $R_0<r_-$, and $R_0<L$. Otherwise, we do not admit the solutions as those of the regular black hole model that we consider in this study. Although the solutions with $M<0$ are not physically acceptable in normal situations, they are allowed in Eq. (23) and might be useful in some exotic situations. Thus, we show the results of the $M<0$ case as well in this study (see, e.g., Ref. boul ).
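The iterative scheme described above can be sketched as follows (my own reconstruction, not the collaboration's code; the choice of which parameters are held fixed, here $Q$ and $M$ with $L=1$, and the form of $B$ in the seed are assumptions):

```python
import math

# Solve V(R) = -1 and dV/dR = 0 for the two unknowns (R, m), holding
# (Q, M) fixed, with units L = 1. V is the effective potential of Eq. (24).

def V(R, m, Q, M):
    g = R**3 + Q**2 / R - 2.0 * m
    return -(g / (2.0 * M) - M / (2.0 * R))**2 - R**2

def residuals(R, m, Q, M, h=1e-7):
    # The two stationarity conditions of Eq. (23): (V + 1, dV/dR).
    dV = (V(R + h, m, Q, M) - V(R - h, m, Q, M)) / (2.0 * h)
    return V(R, m, Q, M) + 1.0, dV

def solve_stationary(R, m, Q, M, tol=1e-8, h=1e-7, itmax=50):
    # Newton-Raphson with a finite-difference 2x2 Jacobian (Cramer's rule).
    for _ in range(itmax):
        f1, f2 = residuals(R, m, Q, M)
        if abs(f1) < tol and abs(f2) < tol:
            break
        g1, g2 = residuals(R + h, m, Q, M)
        k1, k2 = residuals(R, m + h, Q, M)
        j11, j21 = (g1 - f1) / h, (g2 - f2) / h   # d(f1,f2)/dR
        j12, j22 = (k1 - f1) / h, (k2 - f2) / h   # d(f1,f2)/dm
        det = j11 * j22 - j12 * j21
        R -= (f1 * j22 - f2 * j12) / det
        m -= (f2 * j11 - f1 * j21) / det
    return R, m

# Seed from the massless-limit solution, Eqs. (29)-(36) (B as I reconstruct
# Eq. (36)), at R0 = 0.9 and a small shell mass M = 1e-3:
R0, M = 0.9, 1.0e-3
A = R0 / math.sqrt(1.0 - R0**2)
B = (1.0 - A**2) * R0 / A
Q = math.sqrt(3.0 * R0**4 - 2.0 * A * R0**2 * M)
R, m = solve_stationary(R0, 2.0 * R0**3 + B * M, Q, M)
```

Starting close to the analytic massless-limit configuration, the iteration converges in a few steps to a stationary shell radius near $R_0$.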
It is helpful to examine properties of the effective potential in order to understand how the regular black hole solution is obtained. In Figs. 1 and 2 and Figs. 3 and 4, we show typical behaviors of the potential as functions of $R$ for stable and unstable stationary solutions, respectively. Figures 2 and 4 are magnified figures of Figs. 1 and 3 around the extremal points, respectively. The potentials of the stable (unstable) configurations, given in Figs. 1 and 2 (Figs. 3 and 4), are characterized by a fixed set of parameters. A local minimum (maximum) of the potential, shown in Figs. 1 and 2 (Figs. 3 and 4), is at the stationary radius $R_0$. As can be seen from Eq. (24), the effective potential diverges to minus infinity as $R\to 0$ and as $R\to\infty$, which means that $V(R)$ has at least one maximum. This property may partly be confirmed in Figs. 1-4. It is also observed in Figs. 2 and 4 that $V=-1$ at the extremal points. Thus, one sees that these potentials permit the stationary solutions.
To investigate basic properties of the regular black hole solutions, we calculate many sequences of stationary solutions. Each sequence of stationary solutions is specified by one fixed parameter and is obtained by increasing a second parameter from its minimum value (see also the first paragraph of this section). In other words, the sequences are given as sets of functions of the varied parameter. The sequences of stationary regular black hole solutions are shown in Figs. 5 and 6. As shown in these figures, we obtain the regular black hole solutions for positive and negative values of $M$. One of the interesting findings in this study is that all the positive-$M$ (negative-$M$) solutions are stable (unstable), and it seems that these stable and unstable solutions belong to continuous sequences of the stationary solutions with a fixed parameter, as can be seen in Figs. 5 and 6. Along the sequences of the solutions obtained in this study, some quantities are increasing functions (see Fig. 5), while the charge-to-mass ratios $(Q/m)$ are decreasing functions (see Fig. 6). For some sequences of the solutions, the values approach unity (see Fig. 5). It is important to point out that all the solutions obtained in this study are highly charged, and that some solutions are nearly extreme, in the sense $Q/m\to 1$, at one end of the sequences of the solution (see Fig. 6).
As shown in our numerical results argued so far, the regular black hole solutions are only found in some restricted parameter regions. Thus, it is useful to show clearly in which parameter regions the regular black hole solutions with a timelike thin shell occur. In Figs. 7 and 8, we show the two-dimensional parameter regions where regular black hole solutions are found in this study. Stable and unstable solutions are found for positive $M$ and for negative $M$, respectively (see Fig. 7). As can be observed in Fig. 7, the maximum value for the solutions obtained in this study is achieved by the solution with the minimum value of the other parameter. From Fig. 8, it is observed that the radius of the shell lies in a limited range for the solutions obtained. In the present study, we find no upper limit for existing unstable regular black hole solutions. Note that we show results of unstable solutions only for a finite range in Fig. 7, but this does not mean that an upper limit exists. For some cases, we cannot determine the minimum value for unstable solutions because of numerical difficulties.
## IV Discussions
As mentioned in the Introduction, one of our aims in this study is to analyze the stability of the charged regular black holes with a massless thin shell derived by Lemos and Zanchin lemos . In order to investigate regular black holes with a massless thin shell, we may consider the limit $M\to 0$ in our formalism. We focus on the case $M>0$ in this section, since the case $M<0$ is not physically acceptable in usual situations. Besides, all the solutions with $M<0$ are unstable, so they are not feasible as the regular black hole models. Thus, we may exclude the case $M<0$ for our physical interests. In the limit, the effective potential and its first derivative are approximated by

$$V\approx-\frac{\left(Q^2-2mR+R^4/L^2\right)^2}{4M^2R^2},\qquad (25)$$
$$\frac{dV}{dR}\approx\frac{\left(Q^2-3R^4/L^2\right)\left(Q^2-2mR+R^4/L^2\right)}{2M^2R^3}.\qquad (26)$$
Thus, we may obtain stationary solutions in the limit $M\to 0$ if the following conditions are satisfied:

$$Q^2-2mR+\frac{R^4}{L^2}=O(M),\qquad (27)$$
$$Q^2-\frac{3R^4}{L^2}=O(M).\qquad (28)$$
We then assume that charged regular black hole solutions in the massless limit may be expanded in terms of $M$ as follows,

$$R^2=\frac{QL}{\sqrt{3}}+\frac{1}{3}ALM+O(M^2),\qquad (29)$$
$$m=\frac{2R^3}{L^2}+BM+O(M^2),\qquad (30)$$
where $A$ and $B$ are functions independent of $M$. Substituting Eqs. (29) and (30) into $V$ and $dV/dR$, we obtain

$$V=-\frac{R^2}{L^2}-\frac{(2AR+2BL)^2}{4L^2}+O(M),\qquad (31)$$
$$\frac{dV}{dR}=\frac{2ABL-2R+2A^2R}{L^2}+O(M).\qquad (32)$$
Then, stationary solutions in the limit $M\to 0$ are, in terms of $R_0$ and $L$, given by

$$Q_0=\frac{\sqrt{3}\,R_0^2}{L},\qquad (33)$$
$$m_0=\frac{2R_0^3}{L^2},\qquad (34)$$
$$A=\frac{R_0}{\sqrt{L^2-R_0^2}},\qquad (35)$$
$$B=\frac{(1-A^2)\,R_0}{AL},\qquad (36)$$
where quantities indicated by the subscript 0 correspond to stationary solutions in the limit $M\to 0$. From the conditions for the regular black hole with a timelike thin shell, $m_0\ge Q_0$, $R_0<r_-$, and $R_0<L$, we obtain a constraint for $R_0$, given by $\sqrt{3}L/2\le R_0<L$. These massless limit solutions, given by Eqs. (33) and (34), are exactly the same as those given by Lemos and Zanchin, although our notation is different from theirs. [Lemos and Zanchin also derived the corresponding inequalities for $m_0$ and $Q_0$. (These inequalities follow from Eqs. (33) and (34) and the constraint on $R_0$.)] For these massless limit solutions, the second derivative of the effective potential is approximated by

$$\frac{d^2V}{dR^2}\approx\frac{12R_0\sqrt{L^2-R_0^2}}{ML^3},\qquad (37)$$
where the conditions for the stationary solution, Eqs. (29), (30), and (33)-(36), have been used. For the stationary solution, we therefore see that $d^2V/dR^2\to+\infty$ as $M\to 0$. This means that in the massless limit $M\to 0$, we have stable solutions. Note that, as can be seen from Eqs. (35) and (36), the massless limit solutions break down when $R_0\to L$. This is because as $R_0\to L$, the shell becomes lightlike, for which the present formalism for the timelike shell is not applicable.
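The divergence of $d^2V/dR^2$ in Eq. (37) is easy to probe numerically (an illustrative sketch of my own, with $L=1$; the expression used for $B$ follows my reading of Eq. (36)):

```python
# Check that the finite-difference second derivative of the effective
# potential V of Eq. (24), evaluated at the massless-limit stationary
# solution of Eqs. (33)-(36), follows Eq. (37):
#   d2V/dR2 ~ 12 * R0 * sqrt(L^2 - R0^2) / (M * L^3),  here with L = 1.

def V(R, m, Q, M):
    g = R**3 + Q**2 / R - 2.0 * m
    return -(g / (2.0 * M) - M / (2.0 * R))**2 - R**2

R0 = 0.9
A = R0 / (1.0 - R0**2)**0.5                 # Eq. (35)
B = (1.0 - A**2) * R0 / A                   # Eq. (36), as reconstructed
predicted = 12.0 * R0 * (1.0 - R0**2)**0.5  # Eq. (37) multiplied by M

for M in (1.0e-3, 1.0e-4):
    Q = (3.0 * R0**4 - 2.0 * A * R0**2 * M)**0.5   # from Eq. (29)
    m = 2.0 * R0**3 + B * M                        # Eq. (30)
    h = 1.0e-4
    d2V = (V(R0 + h, m, Q, M) - 2.0 * V(R0, m, Q, M)
           + V(R0 - h, m, Q, M)) / h**2
    assert abs(d2V * M / predicted - 1.0) < 0.05   # agrees to a few percent
```

The product $M\,d^2V/dR^2$ stays (approximately) constant as $M$ shrinks, confirming the $1/M$ divergence and hence the stability of the massless-limit solutions.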
The above results show that in the limit $M\to 0$, the solutions exist only in the restricted ranges of $R_0$, $m_0$, and $Q_0$ implied by the above constraint. Our numerical solutions with sufficiently small $M$ satisfy the massless condition with good accuracy. Thus, we may regard these solutions as good approximations for exactly massless solutions. Some of those solutions are displayed in Table I. In this table, we may confirm that the infinitesimal quantities and the divergent quantity $d^2V/dR^2$ indeed depend on $M$ as given by Eqs. (29), (30), and (37). We see that the displayed sets of parameters correspond to the lower and the upper endpoints, respectively, of the relation for the Lemos and Zanchin solutions.
These analyses show that Lemos and Zanchin's regular black hole solutions, given by the massless limit of the present regular black hole solutions, permit stable solutions. This conclusion, however, is not a final one, because a properly massless thin-shell case is excluded in the present analysis and because we only consider an example of regular black hole solutions that coincide with Lemos and Zanchin's solution in a massless thin-shell limit. Thus, further analyses are required to reach a definite answer as to whether Lemos and Zanchin's regular black hole solutions are stable or not.
Comparing these analytic results discussed so far to the numerical results given in the last section, we may guess the lower and upper limits of the physical quantities required for the regular black hole model to exist in the massless limit. The corresponding limits for the stable regular black hole solutions may then be conjectured from Figs. 7 and 8.
Finally, let us consider a physical scale of the stable regular black hole solutions we obtain in this study, which has not been specified so far. For the stability analysis of Schwarzschild-type regular black holes in Refs. fmm ; bal , the de Sitter horizon radius is assumed to be of the order of the Planck scale and much smaller than the event horizon radius. Thus, $L\sim l_p$, where $l_p$ denotes the Planck length. On the other hand, if we take the above assumption, i.e., $L\sim l_p$, our analysis, based on a classical approach, fails, since the present solutions satisfy $m\sim L$. Since the curvature invariant of the de Sitter spacetime is $\mathcal{R}=12/L^2$, if there exists an upper bound of the curvature and our analysis is valid, the upper bound of the curvature has to be smaller than that of the Planck scale. However, our results show that if we assume the de Sitter horizon radius is of the order of the Planck scale, the present stable charged regular black hole solution is restricted to quantum size. Even if the de Sitter horizon radius is instead assumed to be at other physical scales of the vacuum phase transition, such as the grand unified theory (GUT) scale, the present stable black holes are likewise restricted to microscopic size.
## V Conclusion
We have constructed new regular black hole solutions by matching the de Sitter solution and the Reissner-Nordström solution with a timelike thin shell. The thin shell is assumed to have mass but no pressure and obeys an equation of motion derived from Israel's junction conditions. By investigating this equation of motion for the shell, we obtain stationary solutions of charged regular black holes and examine stability of the solutions. Stationary solutions are found in limited ranges of the parameters, and they are stable against small radial displacement of the shell with the remaining parameters held fixed. All the solutions obtained are highly charged, with the charge-to-mass ratio $Q/m$ close to its extremal value. By taking the massless limit of the shell in the present regular black hole solutions, we obtain the charged regular black hole with a massless shell obtained by Lemos and Zanchin lemos and investigate stability of the solutions. It is found that Lemos and Zanchin's regular black hole solutions permit stable solutions.
## Acknowledgements
N.U. is supported by the GCOE Program “Weaving Science Web beyond Particle-matter Hierarchy” at Tohoku University. This work is supported by a Grants-in-Aid for Scientific Research from JSPS (No. 23540282 and No. 24540245 for T. F. and S. Y., respectively.)
## References
• (1) R. Penrose, Phys. Rev. Lett. 14, 57 (1965).
• (2) S. Hawking and R. Penrose, Proc. Roy. Soc. London A 314, 529 (1970).
• (3) S. Hawking and G. F. R. Ellis, The Large Scale Structure of Space-Time (Cambridge University Press, Cambridge, England, 1973).
• (4) J. M. Bardeen, in Proceedings of GR5 (Tbilisi, URSS, 1968 ).
• (5) I. Dymnikova, Gen. Relativ. Gravit. 24, 235 (1992).
• (6) I. Dymnikova and E. Galaktionov, Class. Quantum Grav. 22, 2331 (2005).
• (7) S. Ansoldi, in BH2, Dynamics and Thermodynamics of Blackholes and Naked Singularities, Milan, Italy, 2007 (to be published).
• (8) E. Ayón-Beato and A. Garcia, Phys. Rev. Lett. 80, 5056 (1998) ; E. Ayón-Beato and A. Garcia, Gen. Relativ. Gravi. 31, 629 (1999) ;E. Ayón-Beato and A. Garcia, Phys. Lett. B 464, 25 (1999).
• (9) E. Ayón-Beato and A. Garcia, Phys. Lett. B 493, 149 (2000).
• (10) E. Ayón-Beato and A. Garcia, Gen. Relativ. Gravit. 37, 635 (2005).
• (11) C. Moreno and O. Sarbach, Phys. Rev. D 67, 024028 (2003).
• (12) V. P. Frolov, M. A. Markov, and V. F. Mukhanov, Phys. Lett. B 216, 272 (1989); V. P. Frolov, M. A. Markov, and V. F. Mukhanov, Phys. Rev. D 41, 383 (1990).
• (13) K. Lake and T. Zannias, Phys. Lett. A 140, 291 (1989).
• (14) J. P. S. Lemos and V. T. Zanchin, Phys. Rev. D 83, 124005 (2011); J. P. S. Lemos and V. T. Zanchin, AIP Conf. Proc. 1360, 145 (2011).
• (15) R. Balbinot and E. Poisson, Phys. Rev. D 41, 395 (1990).
• (16) A. D. Sakharov, Sov. Phys. JETP 22, 241 (1966).
• (17) É. B. Gliner, Sov. Phys. JETP 22, 378 (1966).
• (18) W. Israel, Nuovo Cimento 44B, 1 (1966).
• (19) C. Barrabés and W. Israel, Phys. Rev. D 43, 1129 (1991).
• (20) F. Fayos, J. M. M. Senovilla, and R. Torres, Phys. Rev. D 54, 4862 (1996).
• (21) V. de la Cruz and W. Israel, Nuovo Cimento 51A, 774 (1967).
• (22) K. Kuchar̆, Czech. J. Phys. B 18, 435 (1968).
• (23) J. E. Chase, Nuovo Cimento 67B, 136 (1970).
• (24) D. G. Boulware, Phys. Rev. D 8, 2363 (1973).
• (25) S. Gao and J. P. S. Lemos, Int. J. Mod. Phys. A 23, 2943 (2008).
• (26) G. A. S. Dias, S. Gao, and J. P. S. Lemos, Phys. Rev. D 75, 024030 (2007).
• (27) K. A. Bronnikov, H. Dehnen, and V. N. Melnikov, Gen. Relativ. Gravit. 39, 973 (2007).
http://www.physicsforums.com/showthread.php?t=403750

# Electrical Voltage, an appliance problem
by helpphysics
Tags: appliance, electrical, voltage
P: 11

1. The problem statement, all variables and given/known data

Anyway, can you please help me out with the following? A 240 V, 50 Hz electrical appliance is rated at 2 kW. It has a lagging power factor of 0.7.

(a) What is the appliance's power factor when it is used on a 60 Hz supply?

(b) What is the supply voltage required to maintain the appliance at its rated power when operated off a 60 Hz supply?

2. The attempt at a solution

No idea! Can you suggest some relevant equations and guidance? Thanks in advance.
P: 241 First find the load impedance using the relation P = V^2·cos(theta)/|Z| (P is the rated real power, so the power factor enters). Then using the given power factor you can find the reactance X = |Z|·sin(theta). Let this be X1. Now find the new X (X2) for 60 Hz. Remember X is directly proportional to frequency for a lagging circuit since it is inductive. From X2 you can calculate the parameters you want to know.
P: 11 Hello, can you be a bit more explanatory? I am not getting anything!
P: 241
## Electrical Voltage, an appliance problem
You are asked to find the pf at 60 Hz. You may remember that the impedance of the load is Z = R + jX, where X is positive for inductive loads, i.e., lagging pf, and only X is dependent on frequency. Also tan(theta) = X/R, with pf = cos(theta). Now if you change the supply frequency, X changes proportionately. You know X = 2*pi*f*L for an inductive load. So for a change in frequency from f1 = 50 Hz to f2 = 60 Hz, we have X2/X1 = f2/f1. The first thing you have to do is find X1, i.e., X at 50 Hz, using the relations I mentioned earlier (note pf = cos(theta), so you can find theta). Then find X2, and from this, since R is unchanged, you can find the new pf.
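The hints above can be turned into numbers. A short sketch of my own (assuming the simple series R–L load model the hints imply; the rounded figures of about 0.63 and about 266 V are my arithmetic, not from the thread):

```python
import math

V1, f1, P, pf1 = 240.0, 50.0, 2000.0, 0.7

# Real power P = V^2 * cos(theta) / |Z|  =>  impedance magnitude at 50 Hz
Z1 = V1 ** 2 * pf1 / P                 # 20.16 ohm
R = Z1 * pf1                           # resistance, frequency-independent
X1 = Z1 * math.sin(math.acos(pf1))     # inductive reactance at 50 Hz

X2 = X1 * 60.0 / f1                    # reactance scales with frequency
Z2 = math.hypot(R, X2)                 # impedance magnitude at 60 Hz

pf2 = R / Z2                           # (a) new power factor, ~0.63
V2 = math.sqrt(P * Z2 / pf2)           # (b) supply voltage for rated 2 kW, ~266 V
```

The same relation P = V^2·cos(theta)/|Z| is used in both directions: first to extract R and X1 from the 50 Hz rating, then to solve for the voltage that restores 2 kW at 60 Hz.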
https://export.arxiv.org/abs/2101.06473 | math.DS
# Title: Spatial Temporal Differentiations
Abstract: Let $(X, \mathcal{B}, \mu, T)$ be a dynamical system where $X$ is a compact metric space with Borel $\sigma$-algebra $\mathcal{B}$, and $\mu$ is a probability measure that's ergodic with respect to the homeomorphism $T : X \to X$. We study the following differentiation problem: Given $f \in C(X)$ and $F_k \in \mathcal{B}$, where $\mu(F_k) > 0$ and $\mu(F_k) \to 0$, when can we say that $$\lim_{k \to \infty} \frac{\int_{F_k} \left( \frac{1}{k} \sum_{i = 0}^{k - 1} T^i f \right) \mathrm{d} \mu}{\mu(F_k)} = \int f \mathrm{d} \mu ?$$
Comments: Some key typos corrected in the introduction
Subjects: Dynamical Systems (math.DS)
MSC classes: 37A30, 37A35
Cite as: arXiv:2101.06473 [math.DS] (or arXiv:2101.06473v2 [math.DS] for this version)
## Submission history
From: Idris Assani
[v1] Sat, 16 Jan 2021 16:37:35 GMT (32kb)
[v2] Sat, 30 Jan 2021 18:52:12 GMT (32kb)
https://web2.0calc.com/questions/basic-probability
# Basic Probability
A number is selected at random from 1 through 100, inclusive. What is the probability that the number is a divisor of 50? Express your answer as a common fraction.
Apr 24, 2019
#1
There are 100 integers between 1 and 100, inclusive. Since 50 is $$2\cdot5^2$$, it has $$(1+1)(2+1)=(2)(3)=6$$ divisors. Thus, the answer is $$\frac{6}{100}=\boxed{\frac{3}{50}}.$$
Apr 24, 2019
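A quick brute-force check of the answer above (my own addition, not part of the original thread):

```python
from fractions import Fraction

# Enumerate the divisors of 50 that lie in 1..100
divisors = [d for d in range(1, 101) if 50 % d == 0]
print(divisors)                          # [1, 2, 5, 10, 25, 50]

# Probability of picking one of them out of 100 equally likely numbers
print(Fraction(len(divisors), 100))      # 3/50
```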
#2
Nice, tertre!!!!!
CPhill Apr 25, 2019 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9237452745437622, "perplexity": 1010.1163220609293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525829.33/warc/CC-MAIN-20190718211312-20190718233312-00253.warc.gz"} |
http://physics.stackexchange.com/questions/65378/historical-aspect-of-wave-theory-of-light | # Historical aspect of wave theory of light
Huygens thought of light as a wave. A wave is a propagation of a physical disturbance. We now know that light is an electromagnetic field: electric and magnetic fields fluctuate. What did Huygens really think? Light as a fluctuation of what?
http://www.maths.ox.ac.uk/node/10909 | # How many edges are needed to force an $H$-minor?
3 December 2013
14:30
Bruce Reed
Abstract
We consider the parameter $a(H)$, which is the smallest $a$ such that if $|E(G)| \geq a|V(G)|/2$ then $G$ has an $H$-minor. We are especially interested in sparse $H$ and in bounding $a(H)$ as a function of $|E(H)|$ and $|V(H)|$. This is joint work with David Wood.
• Combinatorial Theory Seminar | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9011815786361694, "perplexity": 395.0343671302135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814105.6/warc/CC-MAIN-20180222120939-20180222140939-00221.warc.gz"} |
http://math.stackexchange.com/questions/177226/to-show-the-set-is-dense-in-mathbbr | # To show the set is dense in $\mathbb{R}$ [duplicate]
Possible Duplicate:
how to show this is a dense set?
I want to show that
Given any irrational number $\alpha\in \mathbb{R}$, the set $\displaystyle S=\{ m+n\alpha : m,n\in Z \}$ is dense in $\mathbb{R}$.
## marked as duplicate by Chris Eagle, Davide Giraudo, David Mitra, Matt N., Ilya Jul 31 '12 at 16:15
Kronecker's Theorem gives us that the set $\{ma\}_{m \in \mathbb{Z}}$ is dense in $(0,1)$ for irrational $a$, and hence, that the set $\{n+ma\}_{m,n \in \mathbb{Z}}$ is dense in $\mathbb{R}$.
With $a=\pi$, I don't find a single integer $m$ so that $0<ma<1$. That's quite the opposite of "dense in $(0,1)$". – celtschk Jul 31 '12 at 16:15
@celtschk The notation $\{ma\}$ is the decimal part of $ma$. – David Mitra Jul 31 '12 at 16:17
OK, maybe when using $\{\}$ in any other way than to denote sets, it would be a good idea to explicitly say that :-) Especially since your second use obviously is using it just to denote a set (the set of fractional parts of $n+ma$ definitely is not dense in $\mathbb{R}$). – celtschk Jul 31 '12 at 16:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9451092481613159, "perplexity": 273.33162358869413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510267865.20/warc/CC-MAIN-20140728011747-00024-ip-10-146-231-18.ec2.internal.warc.gz"} |
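Not a proof, but the density claim is easy to probe numerically. A small search of my own (using $\sqrt2$ as the irrational and an arbitrary target; both choices are illustrative) finds $m, n$ with $m+n\alpha$ very close to the target:

```python
import math

alpha = math.sqrt(2)        # any irrational alpha works the same way
t = 0.7317                  # arbitrary target real number

# Find n whose fractional part of n*alpha is closest to frac(t),
# then pick the integer m that lines the two values up.
n = min(range(1, 100_000), key=lambda k: abs((k * alpha) % 1.0 - t % 1.0))
m = round(t - n * alpha)

print(abs(m + n * alpha - t) < 1e-3)    # True: m + n*alpha approximates t
```

By equidistribution of $\{n\alpha\}$, enlarging the search range makes the error as small as desired.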
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-common-core/chapter-1-expressions-equations-and-inequalities-1-2-properties-of-real-numbers-practice-and-problem-solving-exercises-page-16/62 | ## Algebra 2 Common Core
The reciprocal of any integer $x$ is $\frac{1}{x}$, so usually $\frac{1}{x}$ is not an integer. But if $x=1$, then $\frac{1}{1}=1$ is an integer. The same holds for $x=-1$.
https://crypto.stackexchange.com/questions/19493/is-there-a-string-thats-hash-is-equal-to-itself | # Is there a string whose hash is equal to itself?
I was wondering if there's any string that has a hash equal to itself, so that – when using any (non-specific) hash function – the hash would be equal to that string?
so that:
hash(x) = x
Note that this is not an assignment or anything. I’m just curious and couldn't find any specific answer or reference. And I’m not sure how to go and prove/disprove that to myself!
• It sounds like you're asking about fixed points, where $Hash(x) = x$. For a general hash function, the answer is yes with probability ~63%. See crypto.stackexchange.com/questions/68674/…. Jun 22 '20 at 0:46
I restrict to hash functions $$H$$ with an output of some fixed size $$n\ge1$$ bit(s), accepting as input some strings, including all $$n$$-bit strings; MD5 (resp. SHA-1, SHA-256) is an example of such function for $$n=128$$ (resp. $$n=160$$, $$n=256$$).
Whether there exists a solution to $$H(x)=x$$ depends on the particular hash function. If $$H$$ is a random function (as MD5, SHA-1, and SHA-256 aim to be), the answer is YES with odds close to $$63.2\%$$ for practical values of $$n$$.
More precisely: $$H(x)=x$$ can hold only if $$x$$ has exactly $$n$$ bits. There are $$2^n$$ values of $$x$$ that satisfy the latter condition, and restricting $$H$$ to such $$x$$, there are $$(2^n)^{(2^n)}$$ different $$H$$ functions, of which $$(2^n-1)^{(2^n)}$$ are such that $$H(x)=x$$ has no solution. Therefore, if we choose one $$H$$ uniformly at random, the odds are exactly $$1-{(2^n-1)^{(2^n)}\over(2^n)^{(2^n)}}=1-(1-2^{-n})^{(2^n)}$$ that we picked $$H$$ such that $$H(x)=x$$ has a solution. As $$n$$ increases, this converges very fast to $$1-1/e\approx0.632$$ (where $$e\approx2.718$$ is the base of the natural logarithm).
This does not tell us whether MD5 has the property that there exists a solution to $$\operatorname{MD5}(x)=x$$ (which would be a 128-bit bitstring $$x$$). The best we can say is that it likely holds, with odds of about 63%, but determining whether the assertion is true or false is beyond our current computing power (the best method we have is exhaustive search; if the answer is no, it would require $$2^{128}$$ hashes, and otherwise it is still likely to require over $$2^{126}$$ hashes, which is beyond reach).
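The 63.2% figure is easy to reproduce empirically. A Monte Carlo sketch of my own, using a uniformly random function on 8-bit values as a stand-in for an ideal $$n$$-bit hash:

```python
import random

random.seed(0)
N, trials = 256, 5000          # 2^8 inputs; f(x) is drawn uniformly per input
hits = sum(
    any(random.randrange(N) == x for x in range(N))   # does f have a fixed point?
    for _ in range(trials)
)
print(hits / trials)           # close to 1 - 1/e ~ 0.632
```

Since each of the $$N$$ outputs misses its own input with probability $$1-1/N$$, the fraction of trials with a fixed point concentrates around $$1-(1-1/N)^N\approx1-1/e$$.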
PHP specific: if md5($string) === $string had some solution, that would be a 32-character string of hexadecimal lowercase characters; we are not hashing the same $$2^{128}$$ candidates as above, so the question is not equivalent, but the reasoning can be adapted, and again the best we can say is that it is likely there's a solution, with odds about 63%.
Further, the original question asked if there is a string such that md5($string) == $string. To answer this, we must take into account how the == operator works in PHP due to type juggling (it holds that "0042" == "42", and "20e2" == " +002000"). It is overwhelmingly likely that there is a solution (just consider that among the $$2^{200}$$ strings consisting of 200 spaces or tabs and an additional final 0, we expect about $$31\cdot2^{72}$$ to hash to one of "00000000000000000000000000000000", "000000000000000000000000000000e0" .. "0e000000000000000000000000000000"); however, we can't exhibit one.
It is easy to define a hash function $$H$$ such that $$H(x)=x$$ has no solution: for example, define $$H(x)=\begin{cases}x\oplus1&\text{if }\operatorname{MD5}(x)=x\\\operatorname{MD5}(x)&\text{otherwise}\end{cases}$$
It is also easy to define a hash function $$H$$ such that $$H(x)=x$$ has at least one solution: for example, choose some arbitrary 128-bit constant like $$k=\text{af5d2bc6c9181f76f3161f43f41f6aeb}$$, and define $$H(x)=\begin{cases}k&\text{if }x=k\\\operatorname{MD5}(x)&\text{otherwise}\end{cases}$$
There can be no $$x$$ such that for all possible hash functions $$H$$, $$H(x)=x$$.
Proof by contraposition: assume there is such $$x$$, a function $$H$$ with $$x$$ having that property, and consider the function $$\tilde H$$ defined by $$\tilde H(x)=H(x)\oplus1$$.
Yes, you can create many such functions.
For instance, let's build such a function based on SHA512. Generate some random value $$m_0$$ and generate a hash of it. It is important because there is no guarantee that every 512-bit number has a pre-image.
So, let $$h_0 = \operatorname{SHA512}(m_0)$$. After hash generated, throw $$m_0$$ away. Technically you can do that as follows. Create a program that generates $$m_0$$ as a stream. It should generate a 64-byte block of random values, proceed with hashing, override these 64-bytes with a new block, proceed with hashing, etc. Thus, the program will at no time have the full value of $$m_0$$.
Now calculate its hash: $$h_1 = \operatorname{SHA512}(h_0)$$. With very high probability $$h_1$$ will differ from $$h_0$$.
Now define a new hash function as follows: \begin{align} \operatorname{hash}(x) &= (\operatorname{SHA512}(x) + h_0 - h_1 + 2^{512}) \bmod 2^{512}\end{align}
Calculate this hash function for $$h_0$$:
\begin{align} \operatorname{hash}(h_0) &= (\operatorname{SHA512}(h_0) + h_0 - h_1 + 2^{512}) \bmod 2^{512} \\ &= (h_1 + h_0 - h_1 + 2^{512}) \bmod 2^{512} \\ &= (h_0 + 2^{512}) \bmod 2^{512} \\ &= h_0 \end{align}
Thus \begin{align} \operatorname{hash}(h_0) = h_0 \end{align}
Our transformation is actually a rotation of a 512-bit number. This operation does not change the cryptographic properties. This means that cryptographic properties of our hash function are the same as of SHA512:
• Pre-image resistance: Except for the single value $$h_0$$, which is a pre-image of itself, finding a pre-image of any other value is as hard as for SHA512
• 2nd pre-image resistance: For any input and hash, the complexity of finding another input that gives the same hash is as hard as for SHA512. This holds also for $$h_0$$, because we don't know what $$m_0$$ was.
• Collision resistance: For all other hashes, the complexity of finding collisions is the same as for SHA512. This holds also for $$h_0$$, because we don't know what $$m_0$$ was.
In this manner you can create other hash functions with other fixed points.
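The construction can be verified directly. A sketch of my own (restricting inputs to 64-byte blocks encoded as 512-bit integers, and using an arbitrary seed value in place of the discarded $$m_0$$):

```python
import hashlib

M = 1 << 512

def sha512_int(n):
    """SHA-512 of a 512-bit integer (as a 64-byte big-endian block), as an integer."""
    return int.from_bytes(hashlib.sha512(n.to_bytes(64, "big")).digest(), "big")

h0 = sha512_int(12345)      # plays the role of SHA512(m0); m0 itself is then forgotten
h1 = sha512_int(h0)

def shifted_hash(n):
    # hash(x) = (SHA512(x) + h0 - h1) mod 2^512, a fixed shift of SHA512
    return (sha512_int(n) + h0 - h1) % M

print(shifted_hash(h0) == h0)   # True: h0 is a fixed point by construction
```

The shift cancels exactly at $$h_0$$: $$\operatorname{SHA512}(h_0)=h_1$$, so the output is $$h_1+h_0-h_1=h_0$$.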
• I think this is actually much simpler. You don't need $h_0$ to have a preimage. You don't need the $m_0$ or the stream. Just start with any 512-bit number $h_0$. Let's say you wish for it to be a fixed-point of SHA512. But maybe it isn't, you have $h_1 = \mathrm{SHA512}(h_0)$. Too bad, it's just off by $h_0 - h_1$. So as you do you define a function $f(x) = \mathrm{SHA512}(x) + h_0 - h_1$. All computation is mod $2^{512}$ of course. Now $f(h_0) = h_0$, and $f$ being just a rotation of SHA512 has the same cryptographic properties. Jun 22 '20 at 13:29
• I would not call these cryptographic hash functions, since one cannot explain the final constant modular addition to the community. Jun 22 '20 at 14:01
• The pre-image definition is not correct. In pre-image attack, given a hash value $h$, the attacker tries to find a pre-image $x$ such that $\operatorname{Hash}(x)=h$. The $x$ can be the original input value or not. Jun 22 '20 at 15:59
• @kelalaka: "I would not call these as a cryptographyic hash function since one cannot explain the final constant modulo addition to the community" - I thought the definition of "cryptographical hash function" was that it satisfied a series of security properties, not that it had the approval of any specific group of people... Jun 22 '20 at 17:14
• Well, practically, yes, so that one can design one with the required property. Let's publish a hash function $h=SHA512(x\mathbin\|0xF7\ldots 9A)$. The first question will be why one adds $0xF7\ldots 9A$, and the answer is... Also, none of the MD and Sponge algorithms are proven to satisfy these. The reverse is true since the MD5 and SHA-1 collision attacks. Jun 22 '20 at 17:38
This won’t be possible, because the MD5 algorithm has left bit rotation by $s$ places, where $s$ varies for each operation. So, the MD5 of a string can never be the same string.
Refer to the Algorithm section of Wikipedia's MD5 article.
Every hashing algorithm has its basis in block ciphers and there is byte-shifting involved as a core part. So, it won’t be possible that a plaintext $m$ is equal to ciphertext $c$.
• Sure, what about any other algorithms ?
– Mostafa Torbjørn Berg
Sep 12 '14 at 10:24
• The use of bit rotation doesn't guarantee this property. For example, if an algorithm used only bit rotation, even by a varying number of bits, a string of all zeros or all ones would hash to itself. There's something missing in this answer.
– guest
Sep 15 '14 at 19:44
• I think that the argument about left bit rotation making a solution impossible for MD5 is plain wrong.
– fgrieu
Oct 6 '14 at 14:17
• Keeping it short: rotation itself does not guarantee anything! Most of the time, it merely helps hashing functions to gain and/or enhance their avalanche effect. But in the unlikely case you’re aware of any papers which provide proof that MD5 and/or the SHA families use bitwise rotations to specifically prevent something like that from happening, I would surely be interested in getting a heads-up… because it would be the first time I would see a security proof that’s based on “bitwise rotations” only. Oct 6 '14 at 16:39
• A rotation itself doesn't prevent things mapping to itself, e.g. (on a byte-level) rotating 11111111 by any amount still gives 11111111. Oct 6 '14 at 20:13
With any hash function as you ask:
No.
If you write a hash function which calculates the hash value in some way, and then append a t to the result (because you like the letter), then no matter what your input string is, the hash result will be different from your input.
For specific hash functions:
Sure, it could be; especially with a "bad" hash-function like "first 3 letters of the String".
Since a clarification (see comment) pointed out that the question only targets mature hash functions, please refer to user2351586's answer for MD5 specifics.
• I'm not sure I'm following what you mean by As your hash-function can really be ANY, so you could just append a "t" at the end of your result because you like that letter. If you append a t, the string will have changed.. Hash functions do not modify the input string. yeah i know a bad implementation would do that, but what I had in mind is mature ones that we use daily
– Mostafa Torbjørn Berg
Sep 12 '14 at 9:31
• I edited my answer due to the question. Thanks for clarifying the question!
– Layna
Sep 12 '14 at 9:41
• Appending a "t" to the end of the result of any function doesn't guarantee this property. For a trivial function that always returns "a" for the hash of any input, the hash of "at" would be "at" for this hash algorithm.
– guest
Sep 15 '14 at 19:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 67, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8285661339759827, "perplexity": 587.938279588977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304928.27/warc/CC-MAIN-20220126071320-20220126101320-00446.warc.gz"} |
http://blog.sciencenet.cn/blog-2071524-1213884.html | # Heat, Caloric, and Entropy
1244 reads · 2020-1-12 12:13 | Personal category: Thermal physics | System category: Teaching notes
2020-01-12
Wang Anliang
Heat: Informal term for entropy. Equivalent to caloric. (Commonly the energy exchanged in heating is called heat; this usage is not followed in this text.)
Caloric: Used as an alternative term for heat. The caloric theory of heat can be rendered formal and correct in a modern sense if it is accepted that caloric is not conserved (that it can be produced). In this case it turns out to be equivalent to the entropy of a body.
Entropy: Formal for a quantity of heat or caloric. Entropy is the fluidlike quantity of thermal processes and thus obeys a law of balance. It can be stored (see heat function), it can flow (entropy current), and it can be created (see production).
The first law of thermodynamics describes a basic rule that spontaneous processes in the universe must obey, namely conservation and symmetry. The second law of thermodynamics likewise describes a basic rule that spontaneous processes in the universe must obey, namely directionality and dissipation.

One of the most central questions is: what is the relationship between heat and light? In other words, is thermal radiation just light radiation? Similar questions include:

What is the relationship between universal gravitation and entropy?

In addition, here are more of Fuchs's terms:
Heat function: the formal expression of the assumption that a body contains a certain amount of heat, where the heat stored is a function of the independent variables describing the properties of the body. This heat function turns out to be equivalent to the entropy of the body.
Entropy current: Measure of the transfer of entropy across the surface of a system.
Entropy production: The process of the production of entropy as the result of an irreversible process.
References

[1] Hans U. Fuchs, The Dynamics of Heat, Springer, 2010 (Chinese edition: World Publishing Corporation)
https://mathoverflow.net/questions/204185/classifying-space-of-a-colimit-of-topological-categories | # Classifying space of a colimit of topological categories
Say I have a diagram $D:I\rightarrow\text{Cat}(\text{Top})$ of categories internal to compactly generated topological spaces. This induces a diagram $BD:I\rightarrow \text{Top}$ of classifying spaces. I would like to know when this induces a homotopy equivalence $$B(\text{colim}\, D)\stackrel{\sim}{\rightarrow}\text{hocolim}\, BD$$ Or more generally when there are known methods of computing the homotopy type of $B(\text{colim}\, D)$ from $BD$.
In the example I have in mind, $I$ is the poset of natural numbers, every space in sight is compactly generated, and every functor $D(n)\rightarrow D(m)$ is a cofibration on both objects and morphisms. Furthermore, the identity map in each $D(n)$ is a cofibration and the source and target maps are fibrations. In particular, in this case the above homotopy colimit is the ordinary colimit. However, I think the more general question is also of interest.
References to the literature are also welcome!
• What do the colimits in $Cat(Top)$ look like? Are you using the fat realization? – archipelago Apr 28 '15 at 18:07
• I completely overlooked this technical obstacle! That's embarrassing! In the case I am interested in, the colimit is given by taking the colimits on the object and morphism spaces separately, i.e. the colimit commutes with taking composable pairs of morphisms. In taking the realization, I take the singular simplicial set, getting a bisimplicial set, and take the diagonal realization. – Espen Nielsen Apr 28 '15 at 19:04
Espen, I would disagree with your description of the classifying space functor. Your question starts with a diagram in Cat(Top). The standard classifying space functor is the composite of the nerve functor $N$ from there to simplicial spaces and geometric realization. Here $N$ is defined in what should be an obvious way in terms of the space of objects and the spaces (defined by source target pullbacks) of composable morphisms. Geometric realization is generally understood in the usual, not the fat, sense. Since geometric realization commutes with colimits (it is a left adjoint), it is not a problem here. The problem is the nerve functor $N$. In your special case when I is the natural numbers, I see no problem: $N$ will take your cofibrations to levelwise cofibrations and will take unions to unions, those being the colimits in that special case.
However, the classifying space functor behaves quite badly with respect to colimits of general diagrams in Cat and therefore, more generally, with respect to general colimits in Cat(Top). Pushouts give a simple and central example of diagrams that behave badly: $N$ usually fails to preserve them. The key point of Thomason's paper in which he gives a model structure on Cat that is Quillen equivalent to the standard model structure on simplicial sets is to identify a class of maps, which he calls Dwyer maps, such that $N$ preserves pushouts in which one leg is a Dwyer map. Cisinski observed that a retract of a Dwyer map need not be a Dwyer map and identified an alternative notion of pseudo Dwyer maps that is closed under retracts. However, that is in Cat and I don't think that anyone has developed a theory of Dwyer maps in Cat(Top) that might help answer your general question. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9305640459060669, "perplexity": 255.9073118198168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822098.86/warc/CC-MAIN-20191022132135-20191022155635-00453.warc.gz"} |
https://groupprops.subwiki.org/w/index.php?title=First-order_subgroup_property&mobileaction=toggle_view_mobile | # First-order subgroup property
## Definition
### Symbol-free definition
A subgroup property is said to be a first-order subgroup property if it can be expressed using a first-order formula, viz a formula that allows:
• Logical operations (conjunction, disjunction, negation, and conditionals)
• Equality testing
• Quantification over elements of the group and subgroup (this in particular allows one to test membership of an element of the group, in the subgroup)
• Group operations (multiplication, inversion and the identity element)
Things that are not allowed are quantification over other subgroups, quantification over automorphisms, and quantification over supergroups.
## Importance
First-order language is severely constricted, at least when it comes to subgroup properties. Hence, not only are there very few first-order subgroup properties of interest, but also very few of the subgroup property operators preserve the first-order nature.
## Examples
### Normality
Normality is a first-order subgroup property as can be seen from the following definition: a subgroup $N$ of a group $G$ is termed normal if the following holds:
$\forall g \in G,h \in N, ghg^{-1} \in N$
The formula is universal of quantifier rank 1.
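As a concrete illustration (a sketch added here, not part of the wiki article), the universal formula can be checked by brute force on a small finite group. Permutations of $\{0,1,2\}$ are represented as tuples of images; `compose`, `inverse`, and `is_normal` are ad hoc helper names:

```python
from itertools import permutations

def compose(p, q):
    # permutation composition: (p o q)(i) = p(q(i))
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, img in enumerate(p):
        inv[img] = i
    return tuple(inv)

def is_normal(G, N):
    # the first-order formula: forall g in G, h in N: g h g^{-1} in N
    return all(compose(g, compose(h, inverse(g))) in N for g in G for h in N)

G = set(permutations(range(3)))             # S_3
A3 = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}      # the rotations: normal (index 2)
H = {(0, 1, 2), (1, 0, 2)}                  # generated by a transposition: not normal

print(is_normal(G, A3), is_normal(G, H))    # True False
```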
### Centrality
A subgroup is a central subgroup if it lies inside the center, or equivalently, if every element in the subgroup commutes with every element in the group.
Clearly, the property of being a central subgroup is first-order.
The formula is universal of quantifier rank 1.
### Central factor
A subgroup is a central factor if every element in the group can be expressed as a product of an element in the subgroup and an element in the centralizer. This can naturally be expressed as a first-order formula of quantifier rank 3 with the outermost layer being universal.
$\forall g \in G \ \big(\exists h \in H, \ k \in G \ \big(g = hk \ \wedge \ \forall m \in H, \ km = mk\big)\big)$
## Relation with formalisms
### Function restriction formalism
The general question of interest: given a subgroup property with a function restriction expression $a \to b$, can we use the expression to give a first-order definition for the subgroup property? It turns out that the following suffice:
• $a$ should be a first-order enumerable function property (this condition is much stronger than just being a first-order function property because we are not allowed to directly quantify over functions).
• $b$ should be a first-order function property in the sense that given any function, it must be possible to give a first-order formula that outputs whether or not the function satisfies $b$.
The primary example of a first-order enumerable function property is the property of being an inner automorphism. Most function properties that we commonly encounter are first-order (that is, they can be tested/verified using first-order formulae).
# Statistical mechanics/Thermodynamics two spin-1/2 subsystem
• #1
## Homework Statement
Consider two spin-1/2 subsystems with identical magnetic moments (μ) in equal fields (B). The first subsystem has a total of NA spins with initially "a" having magnetic moments pointing against the field and (NA - a) pointing along the field, so that its initial energy is UiA = μB(a - (NA - a)) = μB(2a - NA). The second subsystem has a total of NB spins with "b" having moments initially pointing against the field so that its initial energy is UiB = μB(2b - NB). Now suppose that the two subsystems are brought together so that they can exchange energy. Assume that B = constant and a, b, NA, NB >> 1. Show that in equilibrium, a0/NA = b0/NB, and this implies that the two subsystems will have the same "magnetization," i.e. total magnetic moment/spin.
## Homework Equations
I'm not really sure what equations are useful in this case because I'm having trouble understanding what I need to be doing. I think I need to use multiplicity so
Ω(N,n) = N!/(n!(N-n)!)
## The Attempt at a Solution
I think that I have to first figure out the most probable macrostate because that is where the systems would be in equilibrium(?). So do I go along the lines of solving Ω(NA, a) and Ω(NB, b)? I don't even know if that makes sense or what a0 and b0 represent in this question.
• #2
DrClaude (Mentor)
You're going in the right direction. Have you seen entropy yet? If so, can you express the conditions for equilibrium in terms of S and U?
My guess is that a0 and b0 are the number of spins pointing against the field at equilibrium.
• #3
BruceW (Homework Helper)
> I think that I have to first figure out the most probable macrostate because that is where the systems would be in equilibrium(?). So do I go along the lines of solving Ω(NA, a) and Ω(NB, b)? I don't even know if that makes sense or what a0 and b0 represent in this question.
exactly. But also, don't forget to constrain the total energy! And yeah, as DrClaude says, the a0 and b0 seem to simply be a and b once equilibrium is achieved.
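The equilibrium result the problem asks for can also be checked numerically (a sketch added here, not part of the thread). Since both subsystems sit in the same field, exchanging energy conserves a + b, so one can maximize the joint multiplicity Omega_A(a) * Omega_B(b) under that constraint, using exact log-binomials via `math.lgamma`:

```python
from math import lgamma

def log_multiplicity(N, n):
    # log of binomial(N, n) = log Omega(N, n), exact via log-gamma
    return lgamma(N + 1) - lgamma(n + 1) - lgamma(N - n + 1)

NA, NB = 600, 400
total = 300   # a + b is conserved: each flip costs the same energy 2*mu*B

a0 = max(range(max(0, total - NB), min(NA, total) + 1),
         key=lambda a: log_multiplicity(NA, a) + log_multiplicity(NB, total - a))
b0 = total - a0

print(a0 / NA, b0 / NB)    # 0.3 0.3 -- equal "magnetization" per spin
```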
# Geometric interpretation of trace
This afternoon I was speaking with some graduate students in the department and we came to the following quandary;
Is there a geometric interpretation of the trace of a matrix?
This question should make fair sense because trace is coordinate independent.
A few other comments. We were hoping for something like:
"determinant is the volume of the parallelepiped spanned by column vectors."
This is nice because it captures the geometry simply, and it holds for any old set of vectors over $$\mathbb{R}^n$$.
The divergence application of trace is somewhat interesting, but again, not really what we are looking for.
Also, after looking at the wiki entry, I don't get it. This then requires a matrix function, and I still don't really see the relationship.
One last thing that we came up with: the trace of a matrix is the same as the sum of the eigenvalues. Since eigenvalues can be seen as the eccentricity of an ellipse, trace may correspond geometrically to this. But we could not make sense of this.
• Related question: Take the $p$-dimensional vector space over $\mathbb{F}_p$ and take the identity transformation on this space. Then the trace is $0$. What is the "geometric" meaning of this, if any? Jan 31 '10 at 2:12
• Nice comment Anweshi! That is a very interesting question also. This is the 3rd time this week that your comments have really impressed me! Jan 31 '10 at 2:31
• Your geometric description defines the determinant of a matrix just in terms of the (signed) collection of vectors that make up the rows. One reason you'll never find a totally analogous description of the trace is that it really is not a function of a collection of $n$ vectors: any reordering, and your trace is different. Jan 31 '10 at 8:18
• Theo's comment highlights the fact that the sense in which trace is "coordinate independent" is not always the same as the sense in which the determinant is -- so perhaps underlying the original question is a more basic question about what kind of invariance property, let alone geometric property, is desired. Jan 31 '10 at 8:33
• @Anweshi the geometric meaning is that in characteristic $p$ the barycentre of an affine multiset of $p$ points is at infinity. [Equivalent projective configurations exist: Fano for $p=2$, ...]. Using the geometric interpretation of the trace of a symmetric matrix (defining a quadric) of order $p$ as $p$ times the expected value of its eigenvalues (mean length of the principal axes) requires the characteristic not being $p$. Feb 15 '14 at 15:58
## 30 Answers
If your matrix is geometrically projection (algebraically $A^2=A$) then the trace is the dimension of the space that is being projected onto. This is quite important in representation theory.
• It is also important in statistics! Dec 24 '12 at 0:29
• This is important everywhere in math. Mar 29 '13 at 17:18
• This property, together with linearity, determines the trace uniquely, and so one can view the trace as the linearised version of the dimension-counting operator. (This is basically the "noncommutative probability" way of thinking about the trace.) Aug 13 '14 at 16:27
• This answer is false in characteristic $p \gt 0$ since the trace belongs to $\mathbb F_p$ and might be zero, while the dimension is a natural number in $\mathbb N$ which is positive for a non-zero vector space. For example the identity of $\mathbb F_p^p$ has dimension $p\neq 0 \in \mathbb N$ and trace $0\in \mathbb F_p$. Apr 19 '21 at 8:33
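A minimal NumPy sketch of this dimension-counting property (an illustration, not from the answer): build the orthogonal projection onto a random 2-dimensional subspace of $\mathbb{R}^5$ and read off the trace.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((5, 2))              # basis of a 2-dim subspace of R^5
P = V @ np.linalg.inv(V.T @ V) @ V.T         # orthogonal projection onto col(V)

assert np.allclose(P @ P, P)                 # P is idempotent, i.e. a projection
print(int(round(np.trace(P))))               # 2 -- the dimension projected onto
```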
Let's use $$\det(\exp(tA)) = 1 + t\operatorname{Tr}(A) + O(t^2)$$, and think about the vector ODE $$\vec y' = A \vec y$$, solved by $$\vec y(t) = \exp(tA) \vec y(0)$$. If we take a unit parallelepiped worth of $$\vec y(0)$$, flow for short time $$t$$ under $$\vec y' = A\vec y$$, and see how its volume changes, the change will thus be $$t\operatorname{Tr}(A)$$ to first order.
Ah, Yemon Choi beat me to part of that.
• It took me a while to understand what you meant by "a unit parallelepiped worth of $\vec{y}(0)$". To clarify for future readers, you are starting with the unit parallelepiped and flowing each of its $n$ orthonormal unit vector sides $\hat{e}_{(i)}$ independently under the vector ODE $\vec{y}' = A \vec{y}$, so that the parallelepiped's time evolution is given by the $n$ different solutions of that ODE for the initial conditions $\vec{y}(0) = \hat{e}_{(i)}$. Feb 5 '18 at 3:26
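This first-order expansion is easy to verify numerically. The sketch below (not from the answer) uses a truncated Taylor series for the matrix exponential to stay dependency-free; in practice one would reach for `scipy.linalg.expm`.

```python
import numpy as np

def expm(M, terms=30):
    # truncated Taylor series for exp(M); adequate for small matrices and norms
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.3, -1.2],
              [0.7,  0.5]])

for t in (1e-1, 1e-2, 1e-3):
    err = np.linalg.det(expm(t * A)) - (1 + t * np.trace(A))
    print(t, err)        # the error shrinks like t**2

# the exact, integrated form of the same statement:
assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))
```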
V. I. Arnold sums it up very well in Section 16.3, page 113 of "Ordinary Differential Equations" (Springer Edition).
"Suppose small changes are made in the edges of a parallelepiped. Then the main contribution to the change in volume of the parallelepiped is due to the change of each edge in its own direction, changes in the direction of the other edges making only a second-order contribution to the change in volume."
I'm surprised nobody has mentioned this yet, but the trace defines a Hermitian inner product on the space of linear operators from $$\mathbb{C}^n$$ to $$\mathbb{C}^m$$: $$\langle A, B\rangle = \operatorname{Tr} A^\dagger B.$$ And every multiplicative operator on $$M_{n}(\mathbb{C})$$ which preserves the involution $$\dagger$$, must preserve this inner product. You can't get much more geometric than that.
• D'oh! Yes, this is a good observation. This also crops up when one looks at (complex) representations of compact groups (cf. Schur orthogonality) Jan 31 '10 at 2:48
• As always with inner products, though, you need to check first whether you're a physicist or a mathematician so you know whether to use the formula Jon wrote or $\langle A,B\rangle = \mathrm{Tr} A B^*$. Feb 1 '10 at 14:40
• You need an inner product in order to define $\dagger$, so the trace only lets you lift this inner product from vectors to matrices. It doesn't define a totally new one. Sep 25 '14 at 20:03
• On the other hand, if $n = m$ then you can also define a non-degenerate symmetric bilinear form using the same formula but without the $\dagger$, and this does not need an inner product. (I'm not so clear on the significance of this.) Jun 18 '15 at 2:29
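A quick NumPy check (an aside, not from the answer) that $\langle A, B\rangle = \operatorname{Tr} A^\dagger B$ behaves like a Hermitian inner product, and that it is simply the entrywise inner product of matrices viewed as vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (3, 3)
A = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
B = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

ip = lambda X, Y: np.trace(X.conj().T @ Y)

assert np.isclose(ip(A, B), np.conj(ip(B, A)))             # Hermitian symmetry
assert ip(A, A).real > 0 and np.isclose(ip(A, A).imag, 0)  # positive definiteness
assert np.isclose(ip(A, B), np.sum(A.conj() * B))          # entrywise dot product
```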
If you are just working in a finite-dimensional Euclidean space, then by using the fact that we can calculate the trace of $$A$$ as $$\sum_{j=1}^n \langle Ae_j, e_j\rangle$$ for any choice of orthonormal basis $$e_1,\dots, e_n$$, one obtains $$\operatorname{Tr}(A) = n\int_{x\in B} \langle Ax, x\rangle \,dm(x)$$ where $$B$$ is the Euclidean unit sphere, and $$m$$ is the uniform measure on $$B$$ normalised to have total mass $$1$$. This is perhaps not quite as geometric as you want, but perhaps seems less dependent on a choice of coordinates.
Also, the wikipedia page refers to the trace as being (related to) the derivative of the determinant — does that not seem ‘geometric’?
• It should be emphasised that the trace really is a property of an operator between vector spaces, not a property of the matrix used to represent them. Again, this is not quite "geometric" -- it is really more "spectral" -- but it does I think make the trace seem more natural. Jan 31 '10 at 2:50
• This is the interpretation of trace you want to think about when proving the mean value property of a harmonic function, for example. i.e. this is saying a quadratic polynomial is harmonic if and only if it satisfies the mean value property. Jan 31 '10 at 8:12
• This interpretation is also what one uses to understand Ricci curvature and scalar curvature: very important geometrically indeed. Jul 4 '13 at 6:57
• I apologize for being pedantic (in particular, 9 years after the original post), but I think the total mass of the measure $m$ should be $n$ rather than $1$ since the trace of the identity matrix is equal to $n$. Mar 20 '19 at 18:06
• @JochenGlueck Thanks :) Mar 20 '19 at 19:59
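The sphere-average formula can be tested by Monte Carlo (a sketch, not from the answer), sampling uniform points on the unit sphere by normalizing Gaussian vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))

x = rng.standard_normal((200_000, n))
x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform samples on the unit sphere

# n times the average of <Ax, x> over the sphere
estimate = n * np.mean(np.einsum('ij,jk,ik->i', x, A, x))
print(estimate, np.trace(A))                    # agree up to Monte Carlo error
```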
I've pondered this question quite a bit, because I love the geometric definition of the determinant.^ My current feeling is that, although the trace has a beautiful geometric meaning (the one given by Allen Knutson), its raison d'être is fundamentally algebraic:
Let $V$ be a finite-dimensional vector space over the field $F$, and let $L(V)$ be the set of linear maps from $V$ to itself. The trace is the unique (up to normalization) linear map from $L(V)$ to $F$ such that $\text{tr}(AB) = \text{tr}(BA)$ for all $A, B \in L(V)$.
This is my favorite definition to date, but I suspect that the trace has a deeper meaning: it's what you get when a linear map eats itself. I can't explain exactly what I mean by that, but here's some evidence in favor of it:
• Because $V$ is finite-dimensional, you can think of a linear map from $V$ to itself as an element of $V^* \otimes V$. If $A = \omega_1 \otimes v_1 + \ldots + \omega_k \otimes v_k$, then $\text{tr}(A) = \omega_1(v_1) + \ldots + \omega_k(v_k)$.
• In the abstract index notation used in general relativity (See Robert Wald's book for a great introduction), a vector $v$ would be written $v^a$, a linear map $A$ would be written ${A^a}_b$, and the vector $Av$ would be written ${A^a}_b v^b$. The indices show you that $v$ is being plugged into the input slot of $A$, and another vector is coming out the output slot. The trace of $A$ would be written ${A^a}_a$, which seems to represent the output of $A$ being plugged back into the input!
If someone could explain to me how the geometric, algebraic, and "self-eating" (autophagic?) meanings of the trace were related to each other, I would be very happy!
^ In fact, I love it so much that I'll repeat my favorite statement of it here! Let $V$ be a $n$-dimensional vector space over the field $F$. A signed-volume form on $V$ is a map from $V^n$ to $F$ with the following properties:
1. It gets multiplied by $\lambda$ if you multiply one of its arguments by $\lambda$.
2. It doesn't change if you add one of its arguments to another of its arguments.
The determinant of a linear map $A \colon V \to V$ is the scalar $\det(A)$ such that $D(A v_1, \ldots, A v_n) = \det(A) D(v_1, \ldots, v_n)$ for any vectors $v_1, \ldots, v_n$ and any signed-volume form $D$.
A single number can satisfy this equation for all signed-volume forms because the signed-volume form on $V$ is unique up to normalization.
• To make tr and det even more similar, any Lie algebra map from $gl(n)$ to a commutative Lie algebra factors through trace (this is the cyclicity property you mention), whereas any multiplicative map from $gl(n)$ to a commutative monoid factors through determinant. Jan 31 '10 at 8:16
• Slight quibble: when you say that we can regard any linear map from V to itself as an element of $V^*\otimes V$, this is assuming that V is finite-dimensional. The analogous statement is very much false in infinite dimensions Jan 31 '10 at 8:30
• I think there is a more important algebraic reason for trace to exist. Namely: the trace is the coefficient of $x^{n-1}$ in the characteristic polynomial of an $n \times n$ matrix. (Although, the properties you mention are also interesting & important.) Feb 1 '10 at 3:36
• By the way, I'd also be very interested in understanding the "self-eating" interpretation of the trace - it's extremely important for tensors, but I never found an explanation of how to think of it, and why it works so well. Feb 1 '10 at 3:37
• Keywords for the "self-eating" interpretation are the graphical calculus for tensor categories and string diagrams. Pointers are an article by Kate Ponto and Mike Shulman (see also accompanying slides) and a blog post by sigfpe. Feb 15 '15 at 21:26
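In abstract index notation this "self-eating" contraction ${A^a}_a$ is exactly what `numpy.einsum` computes (an aside, not from the answer):

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)
# feed the output index of A^a_b back into its input slot: A^a_a
assert np.einsum('aa->', A) == np.trace(A)
print(np.einsum('aa->', A))   # 12.0
```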
Is it bad form to answer a question twice? In my defense, I'm different now from who I was when I answered the first time...
In their answers, Allen Knutson and Jafar give a geometric characterization of the trace:
The trace is the derivative of the determinant map $\operatorname{GL}(V) \to \mathbb{R}^\times$ at the identity.
In a comment, Theo Johnson-Freyd gives an algebraic characterization:
The trace is the unique Lie algebra homomorphism $\mathfrak{gl}(V) \to \mathbb{R}$, up to scale.
These characterizations are equivalent in a very pretty way.
The determinant of a transformation in $\operatorname{GL}(V)$ is the factor by which it expands volumes. When you compose two transformations, their volume expansion factors compose as well, so the determinant is a Lie group homomorphism $\operatorname{GL}(V) \to \mathbb{R}^\times$. Therefore, its derivative at the identity is a Lie algebra homomorphism $\mathfrak{gl}(V) \to \mathbb{R}$, so it must be the trace, up to scale.
To pin down the scale, think of $\operatorname{id}_V$ as an element of $\mathfrak{gl}(V)$, and observe that $\exp(t \operatorname{id}_V) \in \operatorname{GL}(V)$ is scaling by $\exp(t)$. Therefore, $\exp(t \operatorname{id}_V)$ expands volumes by a factor of $\exp(tn)$, where $n$ is the dimension of $V$. In other words, the determinant map $\operatorname{GL} \to \mathbb{R}^\times$ sends $\exp(t \operatorname{id}_V)$ to $\exp(tn)$. Its derivative therefore sends $\operatorname{id}_V$ to $n$. The trace does the same thing, so it matches the derivative of the determinant not only up to scale, but on the nose.
• I've always really liked Theo Johnson-Freyd's interpretation the best, but the unification of the two in your answer is fantastic: it's a really interesting way to understand $e^{\mathrm{tr}(X)} = \det(e^X)$, a formula whose proof I have always up until now found pretty boring. Apr 3 '15 at 8:56
You can think of the trace as the expected value (times the dimension of the vector space) of the eigenvalues of matrices. The notion of eigenvalue is, as you know, a geometric one because it is the ratio of distortion of length. On the other hand 'expected value' is borrowed from probability theory, but given how the trace is extensively used in the modern branches of that field, you could spare that ;-) This point of view makes it obvious that the trace is invariant under conjugation by any invertible matrix.
• This comment finds a wide extension in the notion of numerical measure of a matrix, which is supported by the numerical range. See Th. Gallay & D. S. Comm. Pure Appl. Math. 65 (2012), pp 287-336. Mar 29 '13 at 15:04
• However, this answer is somehow duplicate of that by Yemon Choi. Mar 29 '13 at 15:07
• Nice, but not obvious to me that expected value of eigenvalues is invariant under conjugation: if you assign different weights to values of random variable and shuffle them, then reassign what you are looking at, then kinda sorta undo that, why should the expectation remain the same? Yes, I know, trace is invariant under conjugation. No, I don't see independently of that why the average of eigenvalues should. Jun 26 '17 at 21:01
• @Michael, re, couldn't one appeal to the fact that conjugating a matrix just changes which vectors correspond to which eigenvalues, so that the eigenvalues themselves are unchanged? Apr 18 '21 at 23:23
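A NumPy check of this reading of the trace (an illustration, not from the answer): the trace equals the sum of the eigenvalues, hence the dimension times their mean, and it is invariant under conjugation.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
P = rng.standard_normal((5, 5))                # invertible with probability 1

eigs = np.linalg.eigvals(A)
assert np.isclose(eigs.sum().real, np.trace(A))   # trace = sum of eigenvalues
assert np.isclose(eigs.sum().imag, 0)             # complex pairs cancel
assert np.isclose(np.trace(P @ A @ np.linalg.inv(P)), np.trace(A))  # conjugation-invariant
print(np.trace(A) / 5)                            # the "expected eigenvalue"
```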
This has been lurking implicitly beneath several of the comments so far, but just to make it completely explicit why the trace of a linear operator is independent of a choice of coordinates: the multicategory of vector spaces and multilinear maps arises from a monoidal structure on the category of vector spaces and linear maps, this monoidal structure [tensor product of vector spaces] turning out to be symmetric and closed. From this, we can construct a canonical (linear) map of type $Hom(A, 1) \otimes B \rightarrow Hom(A, B)$, which, when $A$ is finite-dimensional, turns out to furthermore be an isomorphism. In particular, this gives an isomorphism between $Hom(A, 1) \otimes A$ and $Hom(A, A)$ for finite-dimensional $A$. Now, from the closed structure, we have a canonical map of type $Hom(A, 1) \otimes A \rightarrow 1$ as well. Pulling this through the aforementioned isomorphism, we obtain a map of type $Hom(A, A) \rightarrow 1$ whenever $A$ is finite-dimensional; this map is the trace operator, defined directly on abstract vector spaces and thus coordinate independent.
Phrasing this in less categorical terms, what the above reasoning demonstrates is that there is a unique linear map $Trace$ from $Hom(A, A)$ to scalars such that $Trace(x \mapsto R(x)v) = R(v)$ for all vectors $v$ in $A$ and linear maps $R$ from $A$ to scalars (assuming, as always, that $A$ is finite-dimensional). Again, since this gives an abstract definition of $Trace$, it is immediately coordinate-independent.
Whether this should count as a geometric account is in the eye of the beholder; as far as I am concerned, suitably abstract linear algebra is directly geometric, but I could certainly understand feeling otherwise.
Let $K \subset \mathbb{R}^n$ be a compact set whose boundary is a smooth manifold. Let $F:\mathbb{R}^n \rightarrow \mathbb{R}^n$ be a linear map. We have that $$\int_{\partial K} F \cdot d \vec{S} = \operatorname{trace}(F) \cdot \operatorname{vol}(K).$$ This is a consequence of the Gauss integral formula.
People have almost said this but not quite:
Take any linear transformation $A$ of a finite-dimensional real vector space $V$. Let each point $v$ in $V$ start moving at the velocity $Av$. Then the volume of any set $S \subseteq V$ will start changing at a rate equal to its volume times the trace of $A$.
More precisely, if $U(t) \colon V \to V$ is defined by
$$\frac{d}{dt} U(t) = A U(t)$$
$$U(0) = 1_V$$
then $U(t)$ is a smooth function of time. Take any measurable set $S \subseteq V$ and let $S_t$ be its image under $U(t)$. Then
$$\frac{d}{dt} \mathrm{vol}(S_t) = \mathrm{tr}(A)\, \mathrm{vol}(S_t) .$$
This is equivalent to Arnold's description of the trace, or the formula
$$\mathrm{det}(\exp(tA)) = \exp(t\,\mathrm{tr}(A)),$$
since $U(t) = \exp(tA)$.
There is a special case where the trace has an obvious geometric interpretation. Assume that a group $G$ acts on a finite set $E$. It also acts on the vector space $F$ of functions on $E$ with values in some field $k$. Then if $g\in G$, the trace of the operator in ${\rm End}_k (F)$ attached to $g$ is the number of points in $E$ fixed by $g$. Very often in representation theory traces of operators are related to considerations on fixed point sets via Lefschetz type formulae.
For a 3 by 3 matrix $A$, there is a linear vector field $v(x)=Ax$. The divergence of $v$ is the trace of $A$. In fact $Ax = {\rm curl}(-\frac{1}{3}x\times Ax)+\frac{1}{3}{\rm tr}(A)x$. So the trace determines whether $Ax$ is a curl or not.
There is an $n$ dimensional version of this expressible in differential forms. Denote by $\hat{k}$ the $(n-1)$ form obtained by deleting $dx_k$ from $dx_1\wedge\cdots\wedge dx_n$, and when $k\ne i$ denote by $\hat{ik}$ the $(n-2)$ form obtained by deleting both $dx_k$ and $dx_i$. Then $$d\left(\sum_{i< k}(x_i (Ax)_k-x_k (Ax)_i)(-1)^{i+k}\hat{ik}\right)$$ $$= n\sum_j (Ax)_j (-1)^{j-1}\hat{j}+{\rm tr}(A)\sum_j x_j (-1)^{j-1}\hat{j}$$ The trace determines whether $\sum_j (Ax)_j (-1)^{j-1}\hat{j}$ is exact or not.
• I learned this from the linear elasticity, thanks for sharing. Mar 27 '15 at 17:58
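The displayed three-dimensional decomposition can be verified symbolically for a generic 3×3 matrix (a sketch, not from the answer; it assumes SymPy is available):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])
A = sp.Matrix(3, 3, sp.symbols('a:3:3'))     # generic symbolic 3x3 matrix

def curl(F):
    # curl of a vector field F = (F0, F1, F2) in coordinates (x1, x2, x3)
    return sp.Matrix([
        sp.diff(F[2], x2) - sp.diff(F[1], x3),
        sp.diff(F[0], x3) - sp.diff(F[2], x1),
        sp.diff(F[1], x1) - sp.diff(F[0], x2),
    ])

w = -sp.Rational(1, 3) * x.cross(A * x)
rhs = curl(w) + sp.Rational(1, 3) * A.trace() * x

# Ax = curl(-x×Ax/3) + tr(A) x / 3 holds identically
assert (A * x - rhs).expand() == sp.zeros(3, 1)
```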
• Take $$V$$ a finite-dimensional vector space. The $$L(V)$$ is canonically isomorphic (as a vector space) to $$V\otimes V^*$$. Then you have a canonical isomorphism between $$L(V)$$ and its dual given by :
$$L(V)^* \rightarrow (V\otimes V^*)^* \rightarrow V^* \otimes V^{**} \rightarrow V\otimes V^* \rightarrow L(V).$$
Then the trace is the element sent to $$Id_V$$. I don't know if you consider this "geometrical", but it's a pretty nice characterization of the trace.
• The most geometrical statement is probably about the differential of the determinant.
• You also have this one : it's the $$n-1$$ degree coefficient of the characteristic polynomial. It can be considered important for at least two reasons :
1. The characteristic polynomial is the generic minimal polynomial of matrices (or endomorphisms), meaning that if you take a generic matrix (say the matrix $$M = (X_{ij})$$ with coefficient in $$k(X_{ij})$$), its minimal polynomial $$\mu_M$$ is the characteristic polynomial, and if you specialize $$M$$ to any matrix $$A$$ with coefficient in $$k$$, the specialization of $$\mu_M$$ gives you the characteristic polynomial $$\chi_A$$ of $$A$$.
2. If you want polynomial functions on $M_n(k)$ that are similarity invariants (ie $f(PAP^{-1}) = f(A)$), then they form an algebra generated by the coefficients of the characteristic polynomial, and then the trace is the generator of the degree 1 part. Of course this amounts to the already pointed out fact that $Tr(AB) = Tr(BA)$ characterizes the trace up to a constant.
In an attempt to provide an answer consistent with the original request, how about: "Trace is the semiperimeter of a parallelepiped as measured along its spanning column vectors."
It's important to be careful here. The original context implies an eigen problem in which a vector is mapped (perhaps with scaling) onto itself through a linear transformation (matrix multiplication). This follows from the mention of the determinant being the volume of the parallelepiped. The above answer is consistent with that. Other eigen problems should offer (require?) different interpretations of both "determinant" and "trace". -JF
Trace has a nice geometric interpretation for a rank one operator: it is the factor by which the operator scales a vector in its image. This, together with linearity, is a geometric characterisation of trace.
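A tiny NumPy check of the rank-one case (an illustration, not from the answer):

```python
import numpy as np

u = np.array([1.0, 2.0, -1.0])
v = np.array([0.5, 3.0, 2.0])
A = np.outer(u, v)            # rank-one operator x -> (v . x) u

# the image is spanned by u, and A scales u by v.u -- which is exactly the trace
assert np.allclose(A @ u, (v @ u) * u)
assert np.isclose(np.trace(A), v @ u)
print(np.trace(A))            # 4.5
```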
It has been said before but let me rephrase it: the interpretation of the trace is not geometric but integration-theoretic (I do not say "measure-theoretic" since there is no measure, see below). Of course if a matrix $A$ has itself a geometric content, its trace also will, e.g. mean curvature = $1/n$ trace(second fundamental form).
I think that the integration-theoretic content of the trace is best captured by noncommutative geometry, where one can define noncommutative integrals thanks to Dixmier traces. Hence there is a precise sense in which a trace can be viewed as an integral.
But maybe this can be viewed as far-fetched and not very illuminating by students discovering the trace for the first time. However, you can still convey the intuition that the trace is secretly an integral to undergrad students by observing that:

- when a matrix $A$ is in diagonal form, the trace is really the integral of its eigenvalues with the counting measure.

- you can extend that to functions of this matrix: the trace of $f(A)$ is the discrete integral of the function $f$ over the spectrum of $A$.
Of course you cannot extend this interpretation to matrices which do not commute with $A$ : if $B$ does not commute with $A$, the spectrum of $B$ is a space which bears no relation to the spectrum of $A$ (this will speak to those who have followed a course on quantum mechanics). In other words, there is no "universal spectrum" on which to define a measure. But can one define an integral without reference to measure ? You certainly want an integral to be a continuous and positive linear functional. With or without reference to Riesz representation theorem, you can go on proving that every such functional $f$ on $M_n({\mathbb C})$ is of the form $X\mapsto Tr(XM)$ for some positive matrix $M$. If you further require the normalization $f(I_n)=1$, the eigenvalues of $M$ will be non-negative numbers with sum one. Now the analogy with a probability measure should be obvious to everyone, and the requirement that the eigenvalues of $M$ be equal to mimic the uniform probability should sound natural. Hence the trace of a matrix stands out as the unique noncommutative generalization to $M_n({\mathbb C})$ of the integral of a function defined on a set of $n$ elements against the counting measure.
I had always complex matrices in mind when writing that, but you can surely extend this discussion to a more general setting, though I would strongly advise against that if you're aiming at undergraduate students.
I like the following perspective:
Up to scalar, trace is the only linear operator $\text{M}(n,k) \stackrel{t}{\to} k$ such that $t(AB) = t(BA)$.
If one likes vector field theory, this is the only linear operator that vanishes on commutators of vector fields. I do prefer to characterize it as the nullifier of the hyperplane generated by commutators.
Trace is the last one on earth who still believes that matrices commute. Its geometric interpretation, somehow, is its blindness.
One could look for a geometric interpretation in $k^n$; here the thesis is that trace is all about the geometry of $\text{M}(n,k)$. This final consideration, I hope, also answers the comment:
Take the p-dimensional vector space over $\mathbb{F}_p$ and take the identity transformation on this space. Then the trace is $0$. What is the "geometric" meaning of this, if any?
A reformulation of this observation is that trace is the only linear operator (again up to normalization) that is constant on conjugacy classes of matrices, somehow a first order approximation of Jordan normal form.
I opened a thread to investigate the content of this answer: Update.
• Your first point was already made in this answer: mathoverflow.net/a/13550. Your last point is nice though Jul 13 '17 at 11:49
• About the first point, I thought important to observe that "One could look for a geometric interpretation in $k^n$, here the thesis is that trace is all about geometry of $\text{M}(n,k)$". Jul 13 '17 at 11:51
• Though trace is not invariant under any permutation of the matrices in a product, just cyclic permutations. Aug 7 '18 at 2:44
We have the formula $\det (e^A) = e^{\mathrm{Tr}(A)}$ and we have a good interpretation for the determinant of a matrix as the volume and then we can take the logarithm to get the trace of the matrix $A$.
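The identity $\det(e^A) = e^{\mathrm{Tr}(A)}$ is easy to check numerically; the sketch below (my own, using numpy and a truncated Taylor series for the matrix exponential, with an arbitrary example matrix) is illustrative only:

```python
import numpy as np

def expm_series(A, terms=40):
    """Matrix exponential via its Taylor series (fine for small matrices)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

A = np.array([[0.2, 1.0], [-0.5, 0.3]])
lhs = np.linalg.det(expm_series(A))   # det(e^A)
rhs = np.exp(np.trace(A))             # e^{tr A} = e^{0.5}
```

The two sides agree to high precision, consistent with the determinant-as-volume, trace-as-log-of-volume reading discussed in the comments.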
• Do you have a good interpretation of logarithm of volume? Jul 4 '13 at 3:25
• The real problem is another: the exponential of a matrix. Logaritms of positive real numbers are a change of notation from multiplicative to additive (for a archimedean complete totally ordered group). Such changes of notation were already used in ancient times by some music theorist (when speaking about musical intervals), to the displeasure of phytagoric music theorists. Feb 15 '14 at 16:07
• @S.Carnahan: A volume expansion factor is an element of the Lie group $\mathbb{R}^\times$. A trace is the rate of change of a volume expansion factor, so it lives in the Lie algebra $\mathbb{R}$. The exponential on the right side of the identity translates between $\mathbb{R}$ and $\mathbb{R}^\times$. See my newer answer for details. (I'm basically just repeating user46855's comment here, but in maybe a more concrete way.) Feb 19 '15 at 0:21
Taking a broad view of the question, here are some particular geometric interpretations of the trace with respect to certain domains:
1. $\mathrm{SL}(2, \mathbb{R})$ acts by isometries on the upper half-plane $H^2$. The displacement length $\ell(g)$ of $g\in\mathrm{SL}(2, \mathbb{R})$ is the infimum of $\{d(x,gx)\ | \ x\in H^2 \}$. If $\ell(g)>0$, then $|\mathrm{tr}(g)| = 2 \mathrm{cosh}(\ell(g)/2).$
2. The trace as the Killing form is a non-degenerate bilinear form on a semisimple Lie algebra (Euclidean structure).
3. Traces of words in a finitely generated group $\Gamma$ give coordinates on the moduli space of unimodular representations of $\Gamma$.
With Example 1 in mind, in general, I intuitively think of the trace as a measure of length.
As it is the derivative of the determinant, whose absolute value measures volume, this is not unreasonable for geometric intuition (sum versus a product in the spectra). In particular, $|\mathrm{tr}(X-Y)|$ reminds one of the taxi cab metric in the spectra of $X,Y$.
With Example 3 in mind, one gets mileage from thinking of "words" as homotopy classes in a manifold and evaluating those words at representations and taking the trace as computing the length of a geodesic representative of the homotopy class. Again, this is more of "geometric intuition" than precise formulation, but there are examples where this is more precise.
• And what about the trace of a non-invertible matrix? May 14 '16 at 0:49
• Dear Sean, I was asking because the question seems to ask for a geometric interpretation of the trace of an arbitrary matrix, in a similar vein to the usual interpretation of the determinant of an arbitrary matrix May 14 '16 at 13:32
• I added a sentence to make clear that I am taking a broader view of the question, which I believe might be of interest to people who search for "geometric interpretation of trace". May 14 '16 at 14:23
Traced monoidal categories give a nice geometric interpretation of the trace: as a way to implement a feedback loop.
But, it is perhaps not the kind of geometrical interpretation you are interested in.
• The graphical notation for traced monoidal categories makes very explicit the "self-eating" mentioned by Vectornaut. Feb 10 '10 at 10:54
An easy calculation that may help somehow:
Any square matrix $A$ can be written as
$A = \Sigma_{i,j} u_i v_j^t$
where $u_i,v_j$ are column matrices, and there are many different choices as to how to choose $\{u_i\}$, $\{v_j\}$. Then it follows that
$Tr(A) = \Sigma_{i,j} Tr(u_i v_j^t) = \Sigma_{i,j} u_i \cdot v_j$
and now that you have a sum of dot products you may be able to make various geometric interpretations.
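As a hedged illustration of this (my own numpy sketch; one natural choice takes $u_j$ to be the $j$-th column of $A$ and $v_j$ the $j$-th standard basis vector, so the double sum collapses to a single one):

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)

# Decompose A as a sum of rank-one matrices u_j v_j^T.
cols = [A[:, j] for j in range(3)]           # u_j = j-th column of A
basis = [np.eye(3)[:, j] for j in range(3)]  # v_j = j-th standard basis vector

reconstructed = sum(np.outer(u, v) for u, v in zip(cols, basis))
# Tr(u v^T) = u . v, so the trace becomes a sum of dot products.
trace_from_dots = sum(np.dot(u, v) for u, v in zip(cols, basis))
```

Each dot product $u_j \cdot v_j$ recovers a diagonal entry, so the sum is the trace.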
What seems really odd to me is this limitation set by the original question.
The divergence application of trace is somewhat interesting, but again, not really what we are looking for.
Maybe that is rejected because it involves a metric tensor in most textbooks about differential geometry, but the divergence requires only an affine connection, even in differential geometry. In flat Cartesian space (without a norm or inner product), it's even simpler.
First consider that matrices have two main applications, as the components of linear maps and as the components of bilinear forms. Let's ignore the bilinear forms. Linear maps are really where matrices come from because matrix multiplication corresponds to composition of linear maps.
We know that the determinant is the coefficient of the characteristic polynomial at one end of the polynomial, and the trace is at the other end, as the coefficient of the linear term. So we should think in terms of linearization and volume, or some combination of these two concepts. We know that the determinant can be interpreted as the relative volume expansion of the map $x\mapsto Ax$. So we should think in terms of maybe linearizing this in some way.
Define a velocity vector field $V(x)=Ax$ on $\mathbb{R}^n$ and integrate the flow for a short time. What happens to the volume of any region? The rate of increase of volume equals $\mathrm{Tr}(A)$. This is because the integral curves have the form $x(t)=\exp(At)x(0)$. (See Jacobi's formula.)
Thus the determinant tells you the volume multiplier for a map with coefficient matrix $A$, whereas the trace tells you the multiplier for a map whose rate of expansion has component matrix $A$.
That sounds very neat and simple to me, but only if you avoid the formulas in the DG literature which try to interpret divergence in terms of absolute volume by referring to a metric tensor or inner product.
PS. To avoid analysis, to keep it completely algebraic apart from the geometric meaning of the determinant, consider the family of transformations $x(t)=x(0)+tAx(0)$ for $t\in\mathbb{R}$ for all $x(0)\in\mathbb{R}^n$. Then the volume of a figure (such as a cube) is a polynomial function of $t$. The linear coefficient of this polynomial with respect to $t$ is $\mathrm{Tr}(A)$. There are no derivatives, integrals or exponentials here. The trace also happens to be the linear component of the characteristic polynomial. I think this is a pretty close tie-up.
PS 2. I forgot to mention that the divergence of the field $V(x)=Ax$ is $\textrm{div} V=\mathrm{Tr}(A)$. Therefore trace equals divergence. That's the geometrical significance of the trace. The function $V$ is the linear map with coefficient matrix $A$. And the trace equals its divergence if it is thought of as a vector field rather than just a linear map. You could even write $\mathrm{Tr}(A)=\mathrm{div}(A)$ if you identify the matrix with the corresponding linear map.
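The claim in the postscript — that the linear coefficient of $\det(I+tA)$ in $t$ is $\mathrm{Tr}(A)$ — can be checked with a finite-difference sketch (my own, using numpy; the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, -4.0]])  # tr(A) = -3

# det(I + tA) = 1 + t*tr(A) + t^2*det(A) for a 2x2 matrix,
# so (det(I + tA) - 1)/t approximates tr(A) for small t.
t = 1e-6
linear_coeff = (np.linalg.det(np.eye(2) + t * A) - 1.0) / t
```

The residual error is of order $t\cdot\det(A)$, i.e. the quadratic term of the volume polynomial.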
• (Edited from a previous comment that quoted the wrong sentence.) "The trace also happens to be the linear component of the characteristic polynomial." Almost the opposite, no? That is, it's actually the coefficient of $t^{n - 1}$, not of $t$ (assuming that's what you mean by "the linear component")? May 22 '16 at 2:21
• At a zero of a vector field, the trace of the vector field is defined without use of any affine connection. At a point which is not a zero, there is no trace defined, by the flow box theorem. So I don't understand your mention of an affine connection. Feb 5 '18 at 10:41
Was surprised not to see this here yet. Let $$V$$ be an $$n$$-dimensional real vector space with inner product.
Any linear transformation $$f:V \to V$$ can be decomposed into $$f = \left(\tfrac{\textrm{tr}(f)}{n}\right) \mathbb{I} + f^{+} + f^{-}$$ where $$f^{+}$$ is traceless-symmetric and $$f^{-}$$ is traceless-antisymmetric, and $$\mathbb{I}$$ is the identity transformation.
Each term does a different geometric operation.
• The trace term returns a vector parallel to the input.
• The antisymmetric term returns a vector orthogonal to the input.
• The symmetric term stretches and flips the input along characteristic directions, with a net scale factor of zero (it admits an eigenbasis whose eigenvalues sum to zero).
The trace of the map is the scale factor/identity map contribution. Since the trace is a statement about lengths, it makes the most sense when an inner product is present, but of course the concept is more general. This also explains the relation to determinant/volume mentioned in other answers: to first order, the change in parallelepiped volume comes from scaling the edges parallel to themselves.
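A sketch of this three-part decomposition (my own, in numpy; the random matrix and test vector are arbitrary):

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
F = rng.standard_normal((n, n))

trace_part = (np.trace(F) / n) * np.eye(n)     # scalar / identity part
sym = 0.5 * (F + F.T) - trace_part             # traceless symmetric part
antisym = 0.5 * (F - F.T)                      # traceless antisymmetric part

# The antisymmetric part sends each vector to one orthogonal to it:
v = rng.standard_normal(n)
orthogonal = bool(np.isclose(v @ (antisym @ v), 0.0))
```

The three pieces sum back to $F$, the last two have (numerically) zero trace, and the antisymmetric output is orthogonal to the input, matching the bullet points above.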
• Just noticed an even better version... Some other answers have showed that the trace is the average of $v \cdot f(v)$ for $v$ on the unit sphere. So if you decompose $f \propto 1 + f_{traceless}$, then the trace (identity) term scales vectors by a constant scale factor, while the traceless term on average returns a vector orthogonal to the input. Mar 21 '19 at 3:04
We show here how any interpretation of $$\operatorname{Tr} A$$ when $$A : V \to V$$ is an isomorphism can be extended to an interpretation of the trace of an arbitrary endomorphism by showing that $$\operatorname{Tr} A$$ actually only depends on a special induced sub-vector space of $$V$$.
To begin, let $$V^{(0)} = \operatorname{domain}(A) = V,\;\; V^{(i+1)} = A\left(V^{(i)}\right),\; \textrm{ and }d^i = \dim V^{(i)}$$ so that $$V^{(1)} = \operatorname{Im} A = A\left(V^{(0)}\right)$$, $$V^{(i+1)} \subseteq V^{(i)}$$, and $$d^{i+1} \leq d^i$$. Let $$N \geq 0$$ be the smallest integer s.t. $$d^{N+1} = d^N$$ and denote this common value by $$d$$. Let $$W := V^{(N)}$$.
We prove below that the restriction $$A\big\vert_W : W \to W$$ of $$A$$ onto $$W := V^{(N)}$$ is an isomorphism. Furthermore, $$\operatorname{Tr}(A) = \operatorname{Tr}\left(A\big\vert_W\right)$$ and it will be clear that $$W$$ is the unique largest vector subspace $$S$$ of $$V$$ on which $$A$$ restricts to an isomorphism $$A\big\vert_S : S \to S$$. All of this allows us to conclude that to geometrically interpret $$\operatorname{Tr}(A)$$, one may restrict their focus to geometrically interpreting the trace of the isomorphism $$A\big\vert_W : W \to W$$ rather than $$A : V \to V$$ itself.
This isn't entirely surprising since just as the trace of a matrix does not depend on the "elements off the diagonal", so too does the geometric interpretation of trace not depend on the "space off of $$W$$." This also gives some geometric intuition about how the trace of a matrix can simultaneously depend only on its diagonal elements while also equaling quantities that non-trivially depend on the whole matrix (such as the sum of its eigenvalues). $$\rule{17cm}{0.4pt}$$
Proof: We now prove the above claim. Inductively construct a basis $$\left(e_1, \dots, e_{\dim V}\right)$$ for $$V$$ such that for all $$i \geq 0$$, $$\left(e_1, \dots, e_{d^i}\right)$$ is a basis for $$V^{(i)}$$. Let $$\left(\varepsilon^1,\dots, \varepsilon^{\dim V}\right)$$ be the dual basis of $$e_{\bullet}$$ and note in particular that: $$\textrm{(1) whenever }d^{i + 1} < l \leq d^i\textrm{ then }\varepsilon^l\textrm{ vanishes on }V^{(i + 1)}.$$
Since $$(e_1, \dots, e_{d^1})$$ is a basis for the range of $$A$$ we may, for any $$v \in V^{(0)},$$ write $$A(v) = \varepsilon^1(A(v)) e_1 + \cdots + \varepsilon^{d^1}(A(v)) e_{d^1}$$ so that $$A = (\varepsilon^l \circ A) \otimes e_l$$ (the sum ranging over $$l = 1, \dots, d^1$$) and hence $$\operatorname{Tr}(A) = (\varepsilon^l \circ A)(e_l) = \varepsilon^1(A(e_1)) + \cdots + \varepsilon^{d^1}\left(A\left( e_{d^1} \right)\right)$$ which shows that $$\operatorname{Tr}(A)$$ actually depends only on the range of $$A$$ (i.e. $$V^{(1)}$$). Now since $$e_1, \dots, e_{d^1}$$ are (by definition) in $$V^{(1)}$$, all of $$A\left(e_1\right), \dots, A\left(e_{d^1}\right)$$ belong to $$A\left(V^{(1)}\right) = V^{(2)}$$ so that from $$(1)$$ it follows that $$\operatorname{Tr}(A) = \varepsilon^1\left(A\left(e_1\right)\right) + \cdots + \varepsilon^{d^2}\left(A\left( e_{d^2} \right)\right)$$
Continuing this inductively $$N \leq \dim V$$ times shows that $$\operatorname{Tr}(A) = \varepsilon^1\left(A\left(e_1\right)\right) + \cdots + \varepsilon^{d}\left(A\left(e_d\right)\right)$$ so that $$\operatorname{Tr}(A)$$ depends only on $$W = V^{(N)}$$. Since by definition of $$N$$, the map $$A\big\vert_W : W \to W$$ is surjective, it is an isomorphism and furthermore, it should be clear that $$W$$ is the unique largest subspace of $$V$$ on which $$A$$ restricts to an isomorphism. $$\blacksquare$$
As described elsewhere, if you view $$A : V \to V$$ as a vector field on $$V$$ in the canonical way then the trace of $$A$$ is the same as its divergence so in the case where $$A$$ is an isomorphism there is a pleasing geometric interpretation readily available, which I'll assume that you're comfortable with. In my opinion, the equality $$\operatorname{div}(A) = \operatorname{Tr}(A)$$ is our best bet at finding a geometric interpretation of trace since it establishes a direct simple relationship between the trace and a readily interpretable quantity: $$\operatorname{div}(A)$$. An explanation of how this interpretation can be extended to linear maps that are not isomorphisms is now given.
Let $$A : V \to V$$ be an arbitrary linear map. Starting with the space $$V = V^{(0)}$$, imagine $$A$$ as transforming this space into $$V^{(1)} := A\left(V^{(0)}\right)$$. Then use $$A$$ to transform $$V^{(1)}$$ into $$V^{(2)} := A\left(V^{(1)}\right)$$, and continue transforming these spaces until eventually (after $$N$$ iterations) $$A$$ no longer transforms $$V^{(N)}$$ into a space with a strictly smaller dimension; that is: $$\operatorname{dim} V^{(N)} = \operatorname{dim} A\left(V^{(N)}\right)$$. It is at this point that $$A$$ does nothing more than isomorphically transform the vector space $$W := V^{(N)}$$. It is now possible to apply your favorite interpretation of "trace of an isomorphism" to the isomorphism $$A\big\vert_W : W \to W$$, which then becomes an interpretation of the trace of the original linear map $$A$$ via $$\operatorname{Tr}(A) = \operatorname{Tr}\left(A\big\vert_W\right) = \operatorname{div}\left(A\big\vert_W\right)$$.
Remark: This may not really answer your question since you stated that "The divergence application of trace is somewhat interesting, but again, not really what we are looking for." Nevertheless, whatever alternative non-divergence based interpretation of the trace of an isomorphism you choose, I hope that this will help you to extend it to the case where the map isn't an isomorphism.
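A numerical sketch of this construction (my own, using numpy; the block matrix with one invertible and one nilpotent block is a made-up example, not from the answer):

```python
import numpy as np

# Block matrix: an invertible 2x2 block B plus a nilpotent 2x2 block N.
B = np.array([[2.0, 1.0], [0.0, 3.0]])
N = np.array([[0.0, 1.0], [0.0, 0.0]])   # N @ N = 0
A = np.block([[B, np.zeros((2, 2))], [np.zeros((2, 2)), N]])

# Iterating the image stabilizes: the range of A^n (n = dim V) is W.
An = np.linalg.matrix_power(A, 4)
rank = np.linalg.matrix_rank(An)
Q, _ = np.linalg.qr(An)
Q = Q[:, :rank]                  # orthonormal basis of W

# W is A-invariant, so Q^T A Q is the matrix of A|_W in that basis.
A_restricted = Q.T @ A @ Q
```

The nilpotent block contributes nothing to the trace, and the trace of the 2-dimensional restriction equals the trace of the full 4-dimensional map, as the proof asserts.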
• If $A$ is not an isomorphism, what does that have to do with the vector field? For example, if $A$ is the $3 \times 3$ diagonal matrix with diagonal $(2,0,0)$, then the divergence is $2$, this "fluid" is expanding in the positive $x$ direction at every point... it's not compressing the fluid to a plane or line. Jun 23 '17 at 18:56
• @Zach Teitler The image of your diagonal matrix is the $x$-axis and when your diagonal matrix $A$ is restricted to the $x$-axis then it becomes the $1 \times 1$ matrix $(2)$, which "expands the $1$-dimensional fluid" that is the $x$-axis. The vector field off the $x$-axis is irrelevant just as the coefficients of $A$ off the diagonal are irrelevant when computing the trace. In short, after finding $W$ we identify $A$ with the canonical vector field that it induces and interpret the trace of $A$ as the divergence of this vector field's restriction to $W$. Jun 23 '17 at 22:55
• I have no comment on the merits of this answer, but would it not be possible to write this first of all in a text editor and check it before posting, rather than make almost 10 edits in less than 2 hrs? Dec 1 '19 at 5:03
• Matthew, I was merely trying to suggest ways to behave in a way that seemed more appropriate to this site, since your only MO activity seems to have been with this answer. Editing bumps questions to the top of the "stack". By the way, it would be appreciated if you took the time to get my name right :) Dec 1 '19 at 14:49
• @Yemon Choi. Okay, I'll limit my edits. Thank you for the advice and my apologies for the missing 'i' in your name. Dec 1 '19 at 18:43
If we consider $M_n(\mathbb{R})$ as $\mathbb{R}^{n^2}$ via the bijection $f$ mapping $[C_1,\ldots,C_n]$ to $(C_1^t,\ldots,C_n^t)$, where the $C_i$ are the columns (using this mapping, we can put the topology of $\mathbb{R}^{n^2}$ on $M_n(\mathbb{R})$, and with this topology $M_n(\mathbb{R})$ is a manifold), then for a matrix $A$ we have $f(A)\in\mathbb{R}^{n^2}$. Writing $f(I)=(I_1^t,\ldots,I_n^t)$, where $I$ is the identity matrix and the $I_i$ are its columns, the dot product (inner product) of $f(A)$ and $f(I)$ is the trace of $A$, and $\mathrm{trace}(A)$ is the length of the projection of the vector $\sqrt{n}f(A)$ in the direction of the vector $f(I)$.
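In coordinates this is just the Frobenius inner product of $A$ with the identity matrix; a minimal check (my own, in numpy; the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 7.0], [5.0, 3.0]])
I = np.eye(2)

# Flatten both matrices to vectors in R^{n^2}; the Euclidean dot product
# with vec(I) picks out exactly the diagonal entries of A.
frobenius_inner = np.dot(A.flatten(), I.flatten())
```

Since $\lVert f(I)\rVert = \sqrt{n}$, dividing this dot product by $\sqrt{n}$ gives the projection length described above.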
For me the trace of a matrix was always analogous to the real part of a complex number. As such, I consider the trace divided by the matrix's rank as the "scalar part" of the matrix.
There are other analogies:
• For analytic functions trace is analog of the value of the function at zero or other central point of their domain (which is the constant term in the functions' Taylor expansion).
• For divergent integrals and series, trace is analog of the regularized value.
• For vectors in spacetime, it is analog of the time component of the vector.
• I like the "scalar part" analogy a lot. I'm not sure about the spacetime analogy, and I definitely defer to you on the divergent-integrals analogy. But how can the trace be viewed as an analogue of evaluation at 0? As you mention, that evaluation is one of a continuously deformable family of evaluation maps, all, as it were, on an equal footing; and yet trace is not part of any such family of maps. Apr 28 '21 at 21:30
• @LSpice evaluation of a function at zero is finding its first (constant) term in its Maclaurin expansion. The constants are equivalent to functions that have only that first term. Thus, the first term in the Taylor expansion is the constant, scalar part of the function; if all other terms are zero we just have a number, a scalar. Apr 28 '21 at 21:35
• @LSpice in general, for vectors (tuples), trace is naturally their first component. This is true for complex numbers, hypercomplex, etc. Apr 28 '21 at 21:38
• @LSpice as to the regularization of infinite sequences, the regularized value can be seen as the value of the sequence at infinity. Thus, it it similar to the value at zero, but with different point chosen. Apr 28 '21 at 21:40
• This is the heart of the decomposition of Lie algebras $\mathfrak{u}(2) = \mathfrak{u}(1) \oplus \mathfrak{su}(2)$, where $\mathfrak{u}(1) \cong \mathbb{R}$ and $\mathfrak{su}(2) \cong \mathbb{R}^3$ as real vector spaces. The normalised trace gives the real-linear projection onto $\mathfrak{u}(1)$ along $\mathfrak{su}(2)$ May 29 '21 at 19:52
Let $$k$$ be any field and $$V$$ an $$n$$-dimensional vector space over $$k$$. Let $$A \in \operatorname{End}(V).$$ Fix some basis $$\{v_1, \ldots, v_n\}$$ for $$V$$, and let $$\{\phi_1, \ldots, \phi_n\}$$ be a dual basis. Define the trace of the basis $$\{v_1, \ldots, v_n\}$$ to be the vectors $$\{v_1\phi_1(Av_1), \ldots, v_n \phi_n(Av_n)\}.$$ Intuitively, these are the "traces" of each of the basis vectors after applying $$A$$. The elements $$\phi_i(Av_i)$$ for $$1 \leq i \leq n$$ are then the degrees to which each $$v_i$$ has been left behind by $$A$$, being the coefficient by which $$v_i$$ has been multiplied to get its "trace" after applying $$A$$ (this last interpretation makes most sense to me if we take $$k = \mathbb{R}$$).
Now, observe that $$\operatorname{Tr}(A) = \sum_{i=1}^n \phi_i(A v_i)$$. Thus the trace of $$A$$ can be interpreted as the degree to which it "leaves behind" any basis. To me this motivates the choice of the word "trace."
This is especially nice when $$\{v_1,\ldots,v_n\}$$ is an eigenbasis for $$A$$, since in that case the "trace" of each $$v_i$$ is exactly $$A v_i$$, so the degree to which $$v_i$$ is "left behind" by $$A$$ is just the eigenvalue corresponding to $$v_i$$.
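A small numerical check (my own, in numpy; the basis matrix $P$ is an arbitrary example) that $\sum_i \phi_i(Av_i)$ reproduces the trace even in a non-standard, non-orthogonal basis:

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])
P = np.array([[1.0, 1.0], [1.0, 2.0]])   # columns: basis vectors v_1, v_2
P_inv = np.linalg.inv(P)                  # rows: the dual basis functionals phi_i

# phi_i(A v_i) is the i-th diagonal entry of P^{-1} A P.
coefficients = [P_inv[i] @ (A @ P[:, i]) for i in range(2)]
```

The coefficients themselves depend on the chosen basis, but their sum is basis-independent: it is $\operatorname{Tr}(A)$.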
Here's more on the semi-perimeter, generalized to $$n$$ dimensions. The trace is the sum of the signed edge lengths of the rectangular parallelepiped whose first edge length = the first entry of row 1, the second edge length = the second element of row 2, and so on. We could have also used columns instead.
Here, the edge lengths can have non-positive values. While the determinant is a product of signed lengths, the trace is a sum of signed lengths.
Permuting rows/columns can drastically change the trace. But in a way, taking the trace of different permutations of $$\mathbf{A}$$ tells you more information about $$\mathbf{A}$$. You don't get more information from taking the determinant of those permutations.
In the $$2 \times2$$ case, the trace is equal to half the signed perimeter of the rectangle created by the rows/columns, in the process described above. Here, it's a semi-perimeter.
According to the interesting answer of Rado, the "trace" is an algebraic way to represent the dimension of the fibres of a vector bundle on a compact Hausdorff space $X$. Every vector bundle on $X$ corresponds to an idempotent matrix-valued function $A(x),\;x\in X$. The dimension of the fibre of a vector bundle is equal to $tr(A(x))$. Assuming $X$ is connected, this quantity is fixed along $X$. I learned this from "Very basic noncommutative geometry" by Masoud Khalkhali (can be found on arXiv).
The terminology "trace" is also used in PDE for the operator which restricts functions in the Sobolev space $H^{s}(\Omega)$ to the boundary of $\Omega$.
# Publications
Ranking of publications is an active field. I’m especially interested in how influential a given publication is, for that gives me a reasonable guess of what practitioners of a field probably know. ScimagoJR looks informative.
The h-index is defined as the maximum value of $$h$$ such that the given author/journal has published at least $$h$$ papers that have each been cited at least $$h$$ times.
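A direct implementation of that definition (my own sketch; the function name `h_index` is mine, not from any particular library):

```python
def h_index(citations):
    """Maximum h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i      # the i most-cited papers each have >= i citations
        else:
            break
    return h
```

For example, citation counts `[10, 8, 5, 4, 3]` give an h-index of 4, since four papers have at least four citations each but the fifth has only three.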
Some folks compile best paper awards, e.g. Jeff Huang for Computer Science papers.
# Shot noise processes for clumped parasite infections with time-dependent decay dynamics
Heinzmann, D; Barbour, A D; Torgerson, P R (2011). Shot noise processes for clumped parasite infections with time-dependent decay dynamics. Biostatistics, Bioinformatics and Biomathematics, 2(2):83-201.
## Abstract
Shot noise processes are introduced to model aggregated parasitic count data arising from clumped superinfections coupled with different decay mechanisms of the ingested parasite clumps. The corresponding likelihood functions are derived by using Laplace transforms. The models are fitted to samples with Echinococcus granulosus parasites in dogs from Kazakhstan, Tunisia and China. It is shown that parameter estimates take plausible values and that the decay dynamics is comparable in the three samples. The results indicate that dogs cease to be infectious after about 8 months, and that infections of dogs occur at a low rate, but the ingested parasite load per clump is in the thousands.
# Intervals and Interval Notation
## Specific parts of a function; formatting for open and closed intervals.
Suppose you and 2 of your friends were out for lunch and decide to buy tacos. Together you have $15 to spend on lunch, and tacos are $1.25 each. It is clear that the total cost could be graphed as a function of the number of tacos purchased, but how would you specify that the graph should not include values greater than $15 or less than $3.75 (one taco each)?
### Guidance
Real Values and Intervals
A function is defined as a real function if both the domain and the range are sets of real numbers. Many of the functions you have likely encountered before are real functions, and many of these functions have Domain = $\mathbb{R}$. Consider, for example, the function $y=3x$. A section of the graph of this function is shown below.
You may already be familiar with the graphs of lines. In particular, you may already be in the habit of placing arrows at the ends. We do this in order to indicate that the line will continue forever in both the positive and negative directions, both in terms of the domain and the range. The line above, however, only shows the function $y=3x$ on the interval [-3, 3]. The square brackets indicate that the graph includes the endpoints of the interval, where x = -3 and x = 3. We call this a closed interval. A closed interval contains its endpoints. In contrast, an open interval does not contain its endpoints. We indicate an open interval with parentheses. For example, (-3, 3) indicates the set of numbers between -3 and 3, not including -3 and 3. You may have noticed that the open interval notation looks like the notation for a point (x, y) in the plane. It is important to read an example or a homework problem carefully to avoid confusing a point with an interval! The difference is generally quite clear from the context.
The table below summarizes the kinds of intervals you may need to consider while studying functions and their domains:
| Interval notation | Inequality notation | Description |
|---|---|---|
| $[a,b]$ | $a \leq x \leq b$ | The value of x is between a and b, including a and b, where a, b are real numbers. |
| $(a,b)$ | $a < x < b$ | The value of x is between a and b, not including a and b. |
| $[a,b)$ | $a \leq x < b$ | The value of x is between a and b, including a, but not including b. |
| $(a,b]$ | $a < x \leq b$ | The value of x is between a and b, including b, but not including a. |
| $(a, \infty)$ | $x > a$ | The value of x is strictly greater than a. |
| $[a, \infty)$ | $x \geq a$ | The value of x is greater than or equal to a. |
| $(-\infty, a)$ | $x < a$ | The value of x is strictly less than a. |
| $(-\infty, a]$ | $x \leq a$ | The value of x is less than or equal to a. |
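The bracket conventions above can be encoded in a small helper (my own sketch, not part of the lesson; the function `in_interval` and its parameters are made up for illustration, with `None` standing for an unbounded endpoint):

```python
def in_interval(x, a, b, closed_left=True, closed_right=True):
    """Test whether x lies in the interval from a to b.

    closed_left/closed_right choose between "[" vs "(" and "]" vs ")".
    a or b set to None means the interval is unbounded on that side.
    """
    if a is not None:
        if closed_left:
            if x < a:
                return False
        elif x <= a:
            return False
    if b is not None:
        if closed_right:
            if x > b:
                return False
        elif x >= b:
            return False
    return True

# Membership in (-3, 9]: the left endpoint is excluded, the right included.
inside = in_interval(3, -3, 9, closed_left=False, closed_right=True)
```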
#### Example A
Identify the sets described:
a.) $(-3, 9]$

b.) $[-23, 12]$

c.) $(-\infty, 0)$
Solution:
a.) The set of numbers between -3 and 9, ‘‘not including’’ the actual value of -3, but ‘‘including’’ 9.
b.) The set of numbers between -23 and 12, ‘‘including’’ the values -23 and 12.
c.) All numbers less than 0, not including 0 itself.
#### Example B
Sketch the graph of the function $f(x)=\frac{1}{2}x-6$ on the interval [-4, 12).
Solution:
The figure below shows a graph of $f(x)=\frac{1}{2}x-6$ on the given interval:
#### Example C
Describe the specified intervals, use interval notation:
a.) All positive numbers
b.) The numbers between negative eight and two hundred forty two, including both
c.) All negative numbers, zero, and the positive numbers up to and including nine.
Solution:
a.) $(0, +\infty)$
Zero is neither positive nor negative, so "(" is used to specify that zero is *not* included. Since there is no maximum positive number, infinity is the upper value, with ")" because it can never be reached.
b.) $[-8, 242]$
"[" and "]" are used on the two ends, since both values are included.
c.) $(-\infty, 9]$
"(" denotes that negative infinity cannot be reached, and "]" on the other end specifies that 9 is included in the set.
Concept question wrap-up
To specify that the graph of the cost of lunch only includes values between \$3.75 and \$15.00, specify the interval of the domain as: [3.75, 15].
### Guided Practice
1) Describe the set shown in the image using interval notation
2) Describe the specified intervals, use interval notation:
a) All negative numbers
b) The numbers between five and twelve, including five, but not twelve.
c) Negative numbers down to negative six, zero, and all positive numbers.
3) Describe the domain in the sets in the images using interval notation:
a) b)
4) Describe the range in the sets in the images above using interval notation.
Solutions:
1) $(-\infty, 3), (0, \infty)$
The set is opened with "(", since negative infinity cannot be reached, then closed with ")", since 3 is not included. The set is re-opened with "(" since 0 is not included, and finally closed with ")" since positive infinity cannot be reached either.
2) a) $(-\infty, 0)$
Zero is neither positive nor negative, so ")" is used to specify that zero is *not* included. Since there is no smallest negative number, negative infinity is the lower value, with "(" because it can never be reached.
b) $[5, 12)$
"[" is used to open the set, since 5 is included, but ")" is used to close it, since 12 is not.
c) $[-6, \infty)$
"[" denotes that -6 is included, and ")" on the other end specifies that positive infinity can never be reached.
3) a) The domain is the set of x values starting with the included -6 and ending at 4, which is not included: [-6, 4)
b) As above: [-6, 7)
4) a) The range is the set of y values from -3 (not included) to 4 (included): (-3, 4]
b) As above: [-1, 6)
### Explore More
Write the following in interval notation.
1. $-3 \leq x < 1$
2. $0 < x < 2$
3. $x > -3$
4. $x \leq 2$
5. $-2x + 3 < 1$
6. $7x + 4 \leq 2x - 6$
For each number line, write the given set of numbers in interval notation.
Name the domain and range for each relation using interval notation.
Express the following sets using interval notation, then sketch them on a number line.
1. $\{x : -1 \leq x \leq 3\}$
2. $\{x : -2 \leq x < 1\}$
3. A is the set of all numbers bigger than 2 but less than or equal to 5.
4. $\{x : -3 < x < \infty\}$
### Vocabulary
closed interval
A closed interval includes the minimum and maximum values (endpoints) of the interval.
domain
The domain of a function is the set of $x$-values for which the function is defined.
interval
An interval is a specific and limited part of a function.
Interval Notation
Interval notation is the notation $[a, b)$, where a function is defined between $a$ and $b$. Use ( or ) to indicate that the end value is not included and [ or ] to indicate that the end value is included. Never use [ or ] with infinity or negative infinity.
open interval
An open interval does not include the endpoints of the interval.
Range
The range of a function is the set of $y$-values for which the function is defined.
real function
A real function is a function where both the domain and range are the set of all real numbers.
Real Number
A real number is a number that can be plotted on a number line. Real numbers include all rational and irrational numbers.
http://toc.ui.ac.ir/article_24023.html | Elliptic root systems of type $A_1$, a combinatorial study
Document Type: Research Paper
Author
Department of mathematics, University of Isfahan, Isfahan, Iran
Abstract
We consider some combinatorics of elliptic root systems of type $A_1$. In particular, with respect to a fixed reflectable base, we give a precise description of the positive roots in terms of a "positivity" theorem. Also the set of reduced words of the corresponding Weyl group is precisely described. These then lead to a new characterization of the core of the corresponding Lie algebra; namely, we show that the core is generated by positive root spaces.
Keywords
https://mathoverflow.net/questions/125194/chow-k%C3%BCnneth-decomposition-for-hypersurfaces | # Chow-Künneth decomposition for hypersurfaces
Short version: is the Chow-Künneth motivic decomposition known for $X \hookrightarrow \mathbb{P}^n_k$ a hypersurface over a field $k$?
Long version: let $M(X)$ be the Chow motive of $X$ with rational coefficients. A standard conjecture predicts the existence of projectors $\pi_k$ in $CH^{\dim X}(X \times X)_\mathbb{Q}$, the Chow group with respect to rational equivalence, such that the submotive of $M(X)$ cut off by $\pi_k$ realizes into $H^k(X)$. This is known for very few varieties: essentially projective spaces, curves and surfaces. What about hypersurfaces? I've never seen that written down, but Ayoub uses it in p. 38 of http://user.math.uzh.ch/ayoub/PDF-Files/Leiden.pdf.
I guess the idea is to play with the induced morphism $M(\mathbb{P}^n) \to M(X)$ and the decomposition of $M(\mathbb{P}^n)$. Is there a motivic hyperplane Lefschetz theorem allowing us to say that this is injective?
If anyone has references or knows how to fill in the details I will be very grateful.
• See the proof of lemma 5.1 of arxiv.org/abs/0710.4002 (Iyer-Mueller-Stach, "Chow-Kuenneth decomposition for special varieties"). – user31960 Mar 21 '13 at 21:29
https://www.physicsforums.com/threads/about-static-pressure.19102/ | # About static pressure
1. Apr 12, 2004
### leonpalios
When air flows through a variable cross-section tube, as the cross-section area increases, the average flow speed of air decreases (due to the continuity equation) and, according to Bernoulli's theorem, the static pressure increases. Regardless of the mathematical proof of Bernoulli's theorem, what physical process causes the static pressure increase?
What physical process causes the static pressure decrease when the air speeds up passing through a narrow part of the tube?
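The two relations invoked in the question can be checked numerically. A sketch (all values are illustrative; low-speed air is treated as incompressible): continuity gives $v_2 = v_1 A_1 / A_2$, and Bernoulli then fixes the static pressure change.

```python
rho = 1.2            # air density, kg/m^3 (illustrative)
A1, A2 = 0.02, 0.05  # cross-section areas, m^2: the tube widens
v1 = 10.0            # inlet speed, m/s

v2 = v1 * A1 / A2                    # continuity: flow slows in the wider section
dp = 0.5 * rho * (v1**2 - v2**2)     # Bernoulli: static pressure rises by dp

print(v2)  # ~4.0 m/s
print(dp)  # ~50.4 Pa increase in static pressure
```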
2. Apr 12, 2004
### enigma
Staff Emeritus
http://mathoverflow.net/questions/102588/serres-open-image-theorem-without-shafarevichs-theorem?sort=newest | # Serre's Open Image Theorem Without Shafarevich's Theorem
In Abelian l-adic Representations and Elliptic Curves (1968), J. P. Serre showed that the adelic representation $$\rho_{E}\colon G_K \to \mathrm{GL}(\hat{\mathbb{Z}}^2)$$ associated to an elliptic curve $E/K$ over a number field $K$ has open image. To do it, he uses Shafarevich's Theorem on the finiteness of isomorphism classes of elliptic curves in a given isogeny class to show that the $\ell$-adic representation $$\rho_{E,\ell}\colon G_K \to \mathrm{GL}(T_\ell(E))$$ is irreducible for all $\ell$ and that the mod $\ell$ representation $$\bar{\rho}_{E,\ell}\colon G_K \to \mathrm{GL}(E[\ell])$$ is irreducible for almost all $\ell$.
My question is, do we now have a method of proving this theorem without using Shafarevich's Theorem? The latter depends on Siegel's Theorem, which depends on Roth's Theorem in Diophantine Geometry.
---
This is an interesting question and I suspect there is a way using p-adic Hodge theory. In the meanwhile, I thought I'd point out that Shafarevich's theorem here requires only Siegel's theorem for the 'discriminant elliptic curves', something like $4a^3 - 27b^2 = c$. These have CM, and a proof of finiteness that doesn't use Diophantine approximations at all can be found in annals.math.princeton.edu/wp-content/.../annals-v172-n1-p16-p.pdf (By the way, I had mistakenly posted this as an answer earlier.) – Minhyong Kim Jul 18 '12 at 22:28
Oh sorry, I should also say that the remark above applies only to elliptic curves over $\mathbb{Q}$. – Minhyong Kim Jul 18 '12 at 22:29
@Davidac897: I asked a somewhat similar question a while back out of my own ignorance of Faltings' work: mathoverflow.net/questions/37212 . I wanted to deduce Shafarevich's theorem over $\Bbb Q$ from modularity without using Siegel's Theorem, but my argument was cyclic because I unknowingly assumed Tate's Isogeny Conjecture, which was proved by Faltings by proving Shafarevich in all dimensions. It was mentioned in the comments there that you can deduce Siegel's Theorem from Faltings' Theorem (Mordell's Conjecture), which doesn't use Diophantine Approximation. Not sure if that will help you. – Jamie Weigandt Jul 19 '12 at 0:09
First, you forgot to assume that $E$ does not have CM. However, this actually suggests a difficulty in a Shafarevich-free proof.
Let $K = \mathbf{Q}(\sqrt{-1})$, and let $C/K$ be an elliptic curve with CM by $\mathbf{Z}[\sqrt{-1}]$. Now consider the following thought experiment. Can you rule out the existence of an elliptic curve $E/K$ without CM such that $\rho_{E,\ell} = \rho_{C,\ell}$ for all primes $\ell$?
This is certainly implied by the Tate conjecture (in a case proved by Faltings), but not only is this harder than the original proof, it also really uses/implies a (generalization of) Shafarevich's conjecture. Certainly $E$ admits isogenies $E \rightarrow E'$ of degree $p$ for any prime $p$ which splits in $K$, but ruling this out is exactly Shafarevich again. I'm not sure you can overcome this obstacle.
On the other hand, it is elementary to (essentially) reduce to this case, basically using Serre's original argument. Namely, one reduces to the case that the $\rho_{E,\ell}$ are abelian, and a classification of crystalline characters of the right weight (plus purity) essentially reduces to this CM-like case.
---
Masser and Wüstholz have given an effective proof that the representation $\bar{\rho}_{E,\ell}\colon G_K \to \mathrm{GL}(E[\ell])$ is irreducible for all $\ell$ greater than some constant $c_E$, see their paper Some effective estimates for elliptic curves. They use isogeny bounds coming from transcendence theory to prove Shafarevich's Theorem without Siegel's theorem. They show that $c_E$ can be chosen to be less than $C h^4$ where $h$ is some naive height attached to $E/K$ and $C$ is a constant that can in principle be computed.
(The isogeny bounds have since been repeatedly improved. The state of the art might be the paper Théorème des périodes et degrés minimaux d'isogénies of Gaudron and Rémond.)
Added afterwards: The surjectivity of $\bar{\rho}_{E,\ell}$ for $\ell$ sufficiently large is also discussed by Masser and Wüstholz in Galois properties of division fields of elliptic curves. It is effective and again does not require Siegel's theorem.
---
I thought that Davidac897 wanted a proof of Serre's theorem that does use Shafarevich. If you allow Shafarevich, then Faltings original proof of the Shafarevich conjecture (as a consequence of the Tate conjecture) does not rely on Siegel's theorem either. – Damian Rössler Jul 19 '12 at 21:49
I originally said without Shafarevich, though what I implicitly hoped for was something without using Siegel's Theorem either. It sounds a bit overkill to go to Faltings's Theorem, but I wonder whether Faltings's proof is simpler in dimension $1$. – David Corwin Jul 19 '12 at 22:14
That there are only fin. many AV in an isog class is actually a step in the proof of the Tate conjecture (which is then used to show that there are only fin many isog classes of AV of a given dimension and with good reduction outside a finit set of primes) Falting's proof that there are only fin many AV in an isog class involves introducing the Faltings height of an AV and 1. showing there are only finitely many AV with bounded Faltings height, and 2. understanding how height changes under isogeny (using Tate's results on p-divisible groups and Raynaud's results on finite flat group schemes.) – user18237 Jul 19 '12 at 23:16
Step 1 probably simplifies a lot for elliptic curves as moduli of EC is much easier than moduli of AV. I'm not sure how much step 2 simplifies. – user18237 Jul 19 '12 at 23:18
@gb: I think this description is not quite accurate. The proof of the Tate conjecture proceeds by studying the variation of the Faltings height in a p-divisible group (and there using Raynaud's results etc.). After that you deduce Tate's conjecture and semi-simplicity, which is necessary to prove Shafarevich. I don't think any of this simplifies in dim. 1 (see also Silverman's remarks in his book on the arithmetic of elliptic curves about this). I agree that step 1 can probably be simplified, though. – Damian Rössler Jul 20 '12 at 7:06
http://math.stackexchange.com/questions/278715/equivalent-definitions-of-tensors-on-finite-dimensional-spaces | # Equivalent definitions of tensors on finite dimensional spaces.
I have recently been studying various texts on differential geometry, and I am quite puzzled that various authors define the notion of a tensor quite differently. I have come across the following defintions:
1. A tensor is a bilinear mapping from $V \times W$ into the field $K$
2. A tensor is a bilinear mapping from $V^*\times W^*$ into the field $K$
3. A tensor is an element of the tensor product $V \otimes W$
Why are they equivalent?
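For finite-dimensional real spaces these descriptions carry the same data (up to replacing spaces by their duals): in coordinates, a bilinear map is determined by its values on pairs of basis vectors, and that array of values is exactly a coefficient tensor. A NumPy sketch with hypothetical dimensions:

```python
import numpy as np

n, m = 3, 2                       # hypothetical dim V, dim W
M = np.arange(6.0).reshape(n, m)  # coefficient array: an element of a tensor product

def B(v, w):
    """Bilinear map determined by M: B(e_i, f_j) = M[i, j]."""
    return float(v @ M @ w)

# Recover M from B by evaluating on basis pairs: the two pictures
# carry exactly the same data.
e, f = np.eye(n), np.eye(m)
M_recovered = np.array([[B(e[i], f[j]) for j in range(m)] for i in range(n)])
print(np.allclose(M_recovered, M))  # True
```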
-
– Hans Lundmark Jan 14 '13 at 19:25
https://www.arxiv-vanity.com/papers/hep-th/0003086/
BROWN-HET-1209
hep-th/0003086
October 2000
THE INTERFACE OF COSMOLOGY
WITH STRING AND M(ILLENNIUM) THEORY
Damien A. Easson
Brown University, Department of Physics,
Providence, RI 02912, USA
ABSTRACT
The purpose of this review is to discuss recent developments occurring at the interface of cosmology with string and M-theory. We begin with a short review of 1980s string cosmology and the Brandenberger-Vafa mechanism for explaining spacetime dimensionality. It is shown how this scenario has been modified to include the effects of p-brane gases in the early universe. We then introduce the Pre-Big-Bang scenario (PBB), Hořava-Witten heterotic M-theory and the work of Lukas, Ovrut and Waldram, and end with a discussion of large extra dimensions, the Randall-Sundrum model and Brane World cosmologies.
PACS numbers: 04.50+h; 98.80.Bp; 98.80.Cq.
## 1 Introduction
In recent years there have been many exciting advances in our understanding of M-theory – our best candidate for the fundamental theory of everything. The theory claims to describe physics appropriately in regions of space with high energies and large curvature scales. As these characteristics are exactly those found in the initial conditions of the universe it is only natural to incorporate M-theory into models of early universe cosmology.
The necessity to search for alternatives to the Standard Big-Bang (SBB) model of cosmology stems from a number of detrimental problems such as the horizon, flatness, structure formation and cosmological constant problems. Although inflationary models have managed to address many of these issues, inflation, at least in its current formulation, does not explain everything. In particular, inflation fails to address the fluctuation, super-Planck scale physics, initial singularity and cosmological constant problems as discussed in [2].
At the initial singularity, physical invariants such as the Ricci scalar, $R$, blow up. Other measurable quantities, for example temperature and energy density, also become infinite. From the Hawking-Penrose singularity theorems we know that such spacetimes are geodesically incomplete. So, when we ask the question of how the universe began, the inevitable and unsatisfactory answer is that we don't know. The physics required to understand this epoch of the early universe is necessarily rooted in a theory of quantum gravity. Presently, string theory is the only candidate for such a unifying theory. It is therefore logical to study the ways in which it changes our picture of cosmology. Although an ambitious aspiration, we hope that M-theory will solve the above mentioned dilemmas and provide us with a complete description of the evolution of the universe.
In this analysis, we must proceed with caution. Our present understanding of M-theory is extremely limited, as is our understanding of cosmology before the first seconds. Nevertheless, it is clear that the study of string cosmology is essential to the development of string theory, and extremely important for our understanding of the early universe.
The purpose of this article is to introduce some of the most promising work and themes under investigation in string cosmology. We begin with a brief, qualitative introduction to M-theory in Section 2.
In Section 3 we review the work of Brandenberger and Vafa [1], in which the 1980s version of string theory is used to address the initial singularity problem and to attempt to explain why we live in four macroscopic dimensions despite the fact that string theory seems to predict the wrong number of dimensions, namely ten. We then explain how this scenario has been updated in order to include the effects of $p$-branes [19].
Section 4 provides a brief introduction to the Pre-Big-Bang scenario [21]-[26]. This is a theory based on the low energy effective action for string theory, developed in the early 1990s by Gasperini and Veneziano.
Another promising attempt to combine M-theory with cosmology, that of Lukas, Ovrut and Waldram [41], is presented in Section 5. Their work is based on the model of heterotic M-theory constructed by Hořava and Witten and is inspired by eleven dimensional supergravity, the low energy limit of M-theory. The motivation for this work was to construct a toy cosmological model from the most fundamental theory we know.
The final section (6) reviews some models involving large extra dimensions. This section begins with a short introduction to the hierarchy problem of standard model particle physics and explains how it may be solved using large extra dimensions. “Brane World” scenarios are then discussed focusing primarily on the models of Randall and Sundrum [52, 53], where our four dimensional universe emerges as the world volume of a three brane. The cosmologies of such theories are reviewed, and we briefly comment on their incorporation into supergravity models, string theory and the AdS/CFT correspondence.
The sections in this review are presented more or less chronologically. (Footnote: This review is in no way comprehensive. As it is impossible to discuss all aspects of string cosmology I have included a large list of references at the end. Some of the topics I will not cover in the text may be found there. For discussions of p-brane dynamics and cosmology see [175]-[178], [19]. For recent reviews on other cosmological aspects of M-theory see [179, 180, 181]. For some ideas on radically new cosmologies from M-theory see e.g. [182]-[187].)
## 2 M-Theory
For several years now, we have known that there are five consistent formulations of superstring theory. The five theories are ten-dimensional, two having $\mathcal{N}=2$ supersymmetry, known as Type IIA and Type IIB, and three having $\mathcal{N}=1$ supersymmetry: Type I, $SO(32)$ heterotic and $E_8 \times E_8$ heterotic. Recently, duality symmetries between the various theories have been discovered, leading to the conjecture that they all represent different corners of a large, multidimensional moduli space of a unified theory named M-theory. Using dualities we have discovered that there is a sixth branch to the M-theory moduli space (see Fig. (2)) corresponding to eleven-dimensional supergravity [3].
Figure 2: This is a slice of the eleven-dimensional moduli space of M-theory. Depicted are the five ten-dimensional string theories and eleven-dimensional supergravity, which is identified with the low energy limit of M-theory.
It is possible that using these six cusps of the moduli space we have already identified the fundamental degrees of freedom of the entire nonperturbative M-theory, but that their full significance has yet to be appreciated. A complete understanding and consistent formulation of M-theory is the ultimate challenge for string theorists today and will take physicists into the new M(illennium).
## 3 Superstrings and Spacetime Dimensionality
Perhaps the greatest embarrassment of string theory is the dimensionality problem. We perceive our universe to be four dimensional, yet string theory seems to naively predict the wrong number of dimensions, namely ten. The typical resolution to this apparent conflict is to say that six of the dimensions are curled up on a Planckian sized manifold. The following question naturally arises, why is there a six/four dimensional split between the small/large dimensions? Why not four/six, or seven/three? Although there is still no official answer to this question, a possible explanation emerges from cosmology and the work of Brandenberger and Vafa [1] which we will summarize in this section. We will then show how it is possible to generalize this scenario of the 1980s to incorporate our current understanding of string theory [19].
### 3.1 Duality
Before diving into the specifics of the BV model we review some basics of string dualities and thermodynamics. Consider the dynamics of strings moving in a nine-dimensional box with sides of length $R$. We impose periodic boundary conditions for both bosonic and fermionic degrees of freedom, so we are effectively considering string propagation on a torus. What types of objects are in our box? For one, there are oscillatory modes corresponding to vibrating stationary strings. Then, there are momentum modes, which are strings moving in the box with Fourier mode $n$ and momentum
$$p = n/R. \qquad (3.1)$$
There are also winding modes which are strings that stretch across the box (wrapped around the torus) with energy given by
$$\omega = mR, \qquad (3.2)$$
where $m$ is the number of times the string winds around the torus.
We now make the remarkable observation that the spectrum of this system remains unchanged under the substitution
$$R \to \frac{1}{R}, \qquad (3.3)$$
(provided we switch the roles of $n$ and $m$). This symmetry is known as T-duality [4] and is a symmetry of the entire M-theory, not just of the spectrum of this particular model. T-duality leads us to the startling conclusion that any physical process in a box of radius $R$ is equivalent to a dual physical process in a box of radius $1/R$. In other words, one can show that scattering amplitudes for dual processes are equal. Hence, we have discovered that distance, which is an invariant concept in general relativity (GR), is not an invariant concept in string theory. In fact, we will see that many invariant notions in GR are not invariant notions in string theory. These deviations from GR are especially noticeable for small distance scales, where the Fourier modes of strings become heavier (3.1) and less energetically favorable, while the winding modes become light (3.2) and are therefore easier to create.
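The spectrum invariance described above is easy to check numerically in a toy setting. A sketch (in assumed units where $\alpha' = 1$, ignoring oscillator contributions): the momentum plus winding contribution to the closed-string mass squared is $(n/R)^2 + (mR)^2$, which is unchanged under $R \to 1/R$ with $n \leftrightarrow m$.

```python
def mass_sq(n, m, R):
    # momentum + winding contribution to the closed-string mass squared
    return (n / R) ** 2 + (m * R) ** 2

R = 0.37  # illustrative radius
states = range(-3, 4)
spectrum = sorted(round(mass_sq(n, m, R), 9) for n in states for m in states)
dual = sorted(round(mass_sq(m, n, 1.0 / R), 9) for n in states for m in states)
print(spectrum == dual)  # True: T-dual spectra agree state by state
```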
### 3.2 Thermodynamics of Strings
Before discussing applications of t-duality to cosmology let us review a few useful calculations of string thermodynamics. The primary assumption we will make for the following discussion is that the string coupling is sufficiently small so that we may ignore the gravitational back reaction of thermodynamical string condensates on the spacetime geometry.
String thermodynamics predicts the existence of a maximum temperature, known as the Hagedorn temperature ($T_H$), above which the canonical ensemble approach to thermodynamics breaks down [5]. This is due to the divergence of the partition function, caused by the density of string states, which increases exponentially as
$$d(E) \propto E^{-p} \exp(\beta_H E), \qquad (3.4)$$
where $\beta_H = 1/T_H$. The partition function is easily calculated,
$$Z = \sum_i \exp(-\beta E_i), \qquad (3.5)$$
which diverges for $\beta < \beta_H$, or equivalently $T > T_H$. (Footnote: For more on string thermodynamics see e.g. [5]-[14].)
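The divergence can be illustrated numerically. A sketch with assumed illustrative values $p = 2$ and $\beta_H = 1$: the Boltzmann-weighted density of states $d(E)\,e^{-\beta E} \propto E^{-p} e^{(\beta_H - \beta)E}$ has a decaying tail for $\beta > \beta_H$ but grows without bound for $\beta < \beta_H$ (i.e. $T > T_H$), so the energy integral defining $Z$ diverges.

```python
import math

beta_H, p = 1.0, 2.0  # assumed illustrative values

def weighted_density(E, beta):
    # integrand of the partition function: d(E) * exp(-beta * E)
    return E ** (-p) * math.exp((beta_H - beta) * E)

E = 500.0
print(weighted_density(E, beta=1.2) < 1e-12)  # True: below T_H the tail dies off
print(weighted_density(E, beta=0.8) > 1e12)   # True: above T_H the tail blows up
```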
### 3.3 The BV Mechanism and the Early Universe
Consider the following toy model of a superstring-filled early universe. Besides the assumption of small coupling stated in section 3.2, we also assume that the evolution of the universe is adiabatic and make some assumptions about the size and shape of the universe.
Before the work of Brandenberger and Vafa, it was typical to speak about the process of “spontaneous compactification” of six of the ten dimensions predicted by string theory in order to explain the origins of a large, $(3+1)$-dimensional universe. Brandenberger and Vafa proposed that, from a cosmological perspective, it is much more logical to consider the decompactification of three of the spatial directions. In other words, one starts in a universe with nine spatial dimensions, each compactified close to the Planck length, and then, for one reason or another, three spatial dimensions grow large.
The toy model of the early universe considered here is a nine-dimensional box with each dimension having equal length, $R$. The box is filled with strings, and periodic boundary conditions are imposed as described in Section (3.1).
In the SBB model it is possible to plot the scale factor a vs. time t using the Einstein equations (Fig. (3.3)(a)). For the radiation dominated epoch, a(t) ∝ t^{1/2}. Furthermore, it is possible to plot a vs. the temperature T, where T ∝ 1/a (Fig. (3.3)(b) and (c)). In string theory we have no analogue of Einstein’s equations and hence we cannot obtain a plot of the scale factor R vs. t. On the other hand, we do know the entire spectrum of string states, and so we can obtain an analogue of the T vs. R curve (see Fig. (3.3)(d)). Note that the region of Fig. (3.3)(d) near the Hagedorn temperature T_H is not well understood, and canonical ensemble approaches break down there. Fortunately, the regions to the left and right of T_H are connected via dualities. The interested reader should see e.g. [5]-[14] for more modern investigations of the Hagedorn transition.
Recall that in General Relativity the temperature goes to infinity as the radius decreases. As we have already mentioned, string theory predicts a maximum temperature, and therefore one should expect the stringy T vs. R curve to be drastically altered. Furthermore, we found that string theory enjoys the symmetry R → 1/R, which leads to a corresponding symmetry of the curve in Fig. (3.3)(d). For large values of R, the point-particle behavior T ∝ 1/R is valid, since the winding modes are irrelevant and the theory looks like a point particle theory. For small R the curve begins to flatten out and approach the Hagedorn temperature, and then, as we continue to go to smaller values of R, the temperature begins to decrease. This behavior is a consequence of the T-duality of string theory. As R shrinks, the winding modes, which are absent in point particle theories, become lighter and lighter, and are therefore easier to produce. Eventually (with entropy constant) the thermal bath will consist mostly of winding modes, which explains the decrease in temperature once one continues past the self-dual radius R = 1 to smaller values of R.
Figure 3.3: In (a) (and (b)) we have plotted a vs. t (T vs. t) for the SBB model. Figures (c) and (d) are plots of T vs. R for the SBB and string cosmological models respectively. Note the R → 1/R symmetry in (d).
An observer traveling from large to small R actually sees the radius contracting to R = 1 (in Planck units) and then expanding again. This makes us more comfortable with the idea of the temperature beginning to decrease after R = 1. The reason for this behavior is that the observer must modify the measuring apparatus to measure distance in terms of the light states, which for R < 1 are the winding modes. The details of making this change of variables are described in [1].
Hence, the observer described above encounters an oscillation of the universe. This encourages one to search for cosmological solutions in string theory where the universe oscillates from small to large, eliminating the initial and final singularities found in SBB models.
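The qualitative shape of the stringy T vs. R curve in Fig. (3.3)(d) — point-particle behavior T ∝ 1/R at large R, a maximum near T_H at the self-dual radius, and exact R → 1/R symmetry — can be mimicked by any function of the duality-invariant combination R + 1/R. The profile below is a purely illustrative toy, not a result derived from string thermodynamics:

```python
T_H = 1.0  # Hagedorn temperature (illustrative units)

def T(R):
    """Toy temperature-vs-radius curve: behaves like T ~ 1/R at large R,
    flattens near T_H around the self-dual point R = 1, and is exactly
    invariant under R -> 1/R.  Illustrative functional form only."""
    return T_H / (0.5 * (R + 1.0 / R))  # duality-symmetric combination

for R in (0.1, 0.5, 2.0, 10.0):
    assert abs(T(R) - T(1.0 / R)) < 1e-12  # t-duality symmetry of the curve
assert T(1.0) == T_H                        # maximum at the self-dual radius
assert T(10.0) < T(2.0) < T(1.0)            # T decreases away from R = 1
```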
### 3.4 The Dimensionality Problem
We are now ready to ask the question: how can superstring theory, a theory consistently formulated in ten dimensions, give rise to a universe with only four macroscopic dimensions? Within the context of our toy model this is equivalent to asking why three of the nine spatial dimensions of our box should “want” to expand. To address this question, note the following observation: winding modes lead to negative pressure in the thermal bath. To understand this, recall that as the volume of the box increases, the energy in the winding modes also increases (3.2). Thus the phase space available to the winding modes decreases, which brings us to the conclusion that winding modes would “like” to prevent expansion; it costs a lot of energy to expand with winding modes around. Thermal equilibrium demands that the number of winding modes decrease as R increases (since the winding modes become heavier). Therefore, we conclude that expansion can only occur when the system is in thermal equilibrium, which favors fewer of the winding states as R increases. If, on the other hand, the winding modes are not in thermal equilibrium, they will remain plentiful and any expansion will be slowed and eventually brought to a halt.
Thermal equilibrium of the winding modes requires string interactions of the form
W + \bar{W} \leftrightarrow \text{unwound states}. \qquad (3.6)
Here W is a winding state and \bar{W} is a winding state with the opposite winding, as depicted in Fig. (3.4).
Figure 3.4: Strings that interact with opposite windings become unwound states.
In order for such processes to occur, the strings must come to within a Planck length of one another. As the winding strings move through spacetime they sweep out two-dimensional worldsheets. In order to interact, their worldsheets must intersect, but in a nine-dimensional box the strings will probably not intersect, because 2 + 2 < 9 + 1. Since there is so much room in the box, the strings will have a hard time finding one another so that their worldsheets can intersect, and therefore it is unlikely that they will unwind. If the winding strings do not unwind and the box starts to expand, the winding states will fall out of thermal equilibrium and the expansion will be halted.
The conclusion is that the largest spacetime dimensionality consistent with maintaining thermal equilibrium of the winding modes is four, since two two-dimensional worldsheets generically intersect in at most 2 + 2 = 4 spacetime dimensions; therefore the largest number of spatial dimensions which can expand is three. In the next section we will see how this scenario can be incorporated into our current understanding of string theory.
### 3.5 Brane Gases and the ABE Mechanism
Recent developments in M/string theory have revealed that strings are not the only fundamental degrees of freedom in the theory. The spectrum of fundamental states also includes higher dimensional extended objects known as D-branes. Here we will examine the way in which the BV scenario unfolds in the presence of D-branes in the early universe as constructed by Alexander, Brandenberger and Easson (ABE) [19]. Specifically, we are interested in finding out if the inclusion of branes affects the cosmological implications of [1]. Note that this approach to string cosmology is in close analogy with the starting point of the standard big-bang model and is very different from other cosmological models which have attempted to include D-branes, for example the brane-world scenarios discussed in Sections 5 and 6. However, possible relations between this model and brane-world scenarios will be discussed later.
Our initial state will be similar to that of [1]. We assume that the universe started out close to the Planck length, dense and hot, and with all degrees of freedom in thermal equilibrium. As in [1], we choose a toroidal geometry in all spatial dimensions. The initial state will be a gas composed of the fundamental branes in the theory. We will consider 11-dimensional M-theory compactified on S^1 to yield 10-dimensional Type II-A string theory. The low-energy effective theory is supersymmetrized dilaton gravity. Since M-theory admits the graviton, 2-branes and 5-branes as fundamental degrees of freedom, upon the compactification we obtain 0-branes, strings (1-branes), 2-branes, 4-branes, 5-branes, 6-branes and 8-branes in the 10-dimensional universe.
The details of the compactification will not be discussed here; however, we will briefly mention the origins of the above objects from the fundamental eleven-dimensional, M-theory perspective. The 0-branes of the II-A theory are the BPS states with nonvanishing momentum along the eleventh dimension. In M-theory these are the states of the massless graviton multiplet. The 1-brane of the II-A theory is the fundamental II-A string, which is obtained by wrapping the M-theory supermembrane around the S^1. The 2-brane is just the transverse M2-brane. The 4-branes are wrapped M5-branes. The 5-brane of the II-A theory is a solution carrying magnetic NS-NS charge and is an M5-brane that is transverse to the eleventh dimension. The 6-brane field strength is dual to that of the 0-brane, and the 6-brane is a KK magnetic monopole. The 8-brane is a source for the dilaton field [4].
The low-energy bulk effective action for the above setup is
S_{bulk} = \frac{1}{2\kappa^2} \int d^{10}x \, \sqrt{-G} \, e^{-2\phi} \left[ R + 4 G^{\mu\nu} \nabla_\mu \phi \nabla_\nu \phi - \frac{1}{12} H_{\mu\nu\alpha} H^{\mu\nu\alpha} \right], \qquad (3.7)
where G is the determinant of the background metric G_{\mu\nu}, \phi is the dilaton, H_{\mu\nu\alpha} denotes the field strength corresponding to the bulk antisymmetric tensor field B_{\mu\nu}, and \kappa is determined by the 10-dimensional Newton constant in the usual way.
For an individual -brane the action is of the Dirac-Born-Infeld form
S_p = T_p \int d^{p+1}\zeta \, e^{-\phi} \sqrt{-\det(g_{mn} + b_{mn} + 2\pi\alpha' F_{mn})}, \qquad (3.8)
where T_p is the tension of the brane, g_{mn} is the induced metric on the brane, b_{mn} is the induced antisymmetric tensor field, and F_{mn} is the field strength tensor of the gauge fields living on the brane. The total action is the sum of the bulk action (3.7) and the sum of all of the brane actions (3.8), each coupled as a delta function source (a delta function in the directions transverse to the brane) to the 10-dimensional action.
In the string frame the tension of a p-brane is
T_p = \frac{\pi}{g_s} \left( 4\pi^2 \alpha' \right)^{-(p+1)/2}, \qquad (3.9)
where \alpha' is given by the string length scale (\alpha' = l_s^2) and g_s is the string coupling constant.
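Evaluating (3.9) in string units exhibits the hierarchy of brane tensions that plays a role below: each additional worldvolume dimension suppresses the tension by a factor of 2π√α′. A small sketch (the value of g_s is illustrative):

```python
import math

def brane_tension(p, g_s, alpha_prime=1.0):
    """String-frame p-brane tension, eq. (3.9):
    T_p = (pi / g_s) * (4 * pi^2 * alpha')**(-(p + 1) / 2)."""
    return (math.pi / g_s) * (4 * math.pi**2 * alpha_prime) ** (-(p + 1) / 2)

g_s = 0.1  # illustrative weak string coupling
# Tensions fall geometrically with p: in string units (alpha' = 1) each
# extra brane dimension suppresses T_p by a factor 2*pi*sqrt(alpha') = 2*pi.
for p in range(8):
    assert brane_tension(p + 1, g_s) < brane_tension(p, g_s)
assert abs(brane_tension(1, g_s) / brane_tension(2, g_s) - 2 * math.pi) < 1e-9
```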
In order to discuss the dynamics of this system, we will need to compute the equation of state of the brane gases for various p. There are three types of modes that we will need to consider. First, there are the winding modes: the background space is T^9, and hence a p-brane can wrap around any set of p toroidal directions. These modes are related by t-duality to the momentum modes. Second, there are the momentum modes corresponding to the center of mass motion of the branes. Finally, the modes corresponding to fluctuations of the branes in the transverse directions are (in the low-energy limit) described by scalar fields \phi_i living on the brane. There are also bulk matter fields and brane matter fields.
We are mainly interested in the effects of the winding modes and transverse fluctuations on the evolution of the universe, and therefore we will neglect the antisymmetric tensor field B_{\mu\nu}. We will take our background metric, with conformal time \eta, to be
G_{\mu\nu} = a(\eta)^2 \, \mathrm{diag}(-1, 1, \ldots, 1), \qquad (3.10)
where a(\eta) is the cosmological scale factor.
If the transverse fluctuations of the brane and the gauge fields on the brane are small, the brane action can be expanded as
S_{brane} = T_p \int d^{p+1}\zeta \, a(\eta)^{p+1} e^{-\phi} \, e^{\frac{1}{2} \mathrm{tr} \log\left(1 + \partial_m\phi_i \partial_n\phi_i + a(\eta)^{-2} 2\pi\alpha' F_{mn}\right)}
\simeq T_p \int d^{p+1}\zeta \, a(\eta)^{p+1} e^{-\phi} \left( 1 + \frac{1}{2}(\partial_m\phi_i)^2 - \pi^2\alpha'^2 a^{-4} F_{mn}F^{mn} \right). \qquad (3.11)
The first term in the parentheses in the last line represents the brane winding modes, the second term corresponds to the transverse fluctuations, and the third term to brane matter. In the low-energy limit the transverse fluctuations of the brane are described by a free scalar field action, and the longitudinal fluctuations are given by a Yang-Mills theory. The induced equation of state has non-negative pressure. (Note that the above result is still valid when brane fluctuations and fields are large [19].)
To find the equation of state for the winding modes, we use the action (3.11) to get
\tilde{p} = w_p \rho \quad \text{with} \quad w_p = -\frac{p}{d}, \qquad (3.12)
where d is the number of spatial dimensions (9 in our case), and where \tilde{p} and \rho stand for the pressure and energy density, respectively.
Fluctuations of the branes and brane matter are given by free scalar and gauge fields on the branes. These may be viewed as particles in the transverse directions extended in brane directions. Therefore, the equation of state is simply that of ordinary matter,
\tilde{p} = w \rho \quad \text{with} \quad 0 \leq w \leq 1. \qquad (3.13)
From the action (3.11) we see that the energy in the winding modes will be
E_p(a) \sim T_p \, a(\eta)^p, \qquad (3.14)
where the constant of proportionality is dependent on the number of branes.
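The winding equation of state (3.12) can be recovered thermodynamically from (3.14): with E_p ∝ a^p and spatial volume V ∝ a^d, the pressure p̃ = −∂E_p/∂V of the winding gas at fixed winding number gives w_p = −p/d. A quick symbolic check (the symbol names are our own):

```python
import sympy as sp

a, d, p, T_p = sp.symbols('a d p T_p', positive=True)

E = T_p * a**p   # winding-mode energy, eq. (3.14)
V = a**d         # spatial volume of the d-torus
rho = E / V      # energy density of the winding gas

# Pressure at fixed winding number: p_tilde = -dE/dV = -(dE/da)/(dV/da)
p_tilde = -sp.diff(E, a) / sp.diff(V, a)
w_p = sp.simplify(p_tilde / rho)

assert sp.simplify(w_p + p / d) == 0   # reproduces eq. (3.12): w_p = -p/d
```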
The equations of motion for the background are given by [15, 21]
-d\dot{\lambda}^2 + \dot{\varphi}^2 = e^{\varphi} E, \qquad (3.15)
\ddot{\lambda} - \dot{\varphi}\dot{\lambda} = \frac{1}{2} e^{\varphi} P, \qquad (3.16)
\ddot{\varphi} - d\dot{\lambda}^2 = \frac{1}{2} e^{\varphi} E, \qquad (3.17)
where E and P denote the total energy and pressure, respectively,
\lambda(t) = \log(a(t)), \qquad (3.18)
and \varphi is a shifted dilaton field which absorbs the spatial volume factor,
\varphi = 2\phi - d\lambda. \qquad (3.19)
The matter sources E and P are made up of all the components of the brane gas:
E = \sum_p E_p^w + E^{nw}, \qquad P = \sum_p w_p E_p^w + w E^{nw}, \qquad (3.20)
where the superscripts w and nw stand for the winding modes and the non-winding modes, respectively. The contributions of the non-winding modes of all branes have been combined into one term. The constants w_p and w are given by (3.12) and (3.13). Each E_p^w is the sum of the energies of all of the brane windings with fixed p.
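As a consistency check, the evolution equations (3.16) and (3.17) preserve the Hamiltonian constraint (3.15) provided the sources obey Ė = −d λ̇ P, which is satisfied by the winding gas (3.12), (3.14). The sketch below integrates the background equations for a single winding species with a simple Runge-Kutta step; the initial data and normalizations are illustrative choices, not values from the text:

```python
import math

d, p = 9, 1   # nine spatial dimensions, string (p = 1) winding gas
E0 = 1.0      # winding energy normalization (illustrative)

def rhs(state):
    lam, dlam, phi, dphi = state
    E = E0 * math.exp(p * lam)       # eq. (3.14): E_p ~ a^p = e^{p*lam}
    P = -(p / d) * E                 # eq. (3.12): w_p = -p/d
    ddlam = dphi * dlam + 0.5 * math.exp(phi) * P   # eq. (3.16)
    ddphi = d * dlam**2 + 0.5 * math.exp(phi) * E   # eq. (3.17)
    return [dlam, ddlam, dphi, ddphi]

def constraint(state):
    lam, dlam, phi, dphi = state
    E = E0 * math.exp(p * lam)
    return dphi**2 - d * dlam**2 - math.exp(phi) * E  # eq. (3.15)

# Initial data chosen to satisfy the constraint exactly.
lam0, dlam0, phi0 = 0.0, 0.1, -1.0
dphi0 = math.sqrt(d * dlam0**2 + math.exp(phi0) * E0)
state = [lam0, dlam0, phi0, dphi0]

h = 1e-3
for _ in range(1000):                # classical RK4 steps to t = 1
    k1 = rhs(state)
    k2 = rhs([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = rhs([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = rhs([s + h * k for s, k in zip(state, k3)])
    state = [s + h * (ka + 2 * kb + 2 * kc + kd) / 6
             for s, ka, kb, kc, kd in zip(state, k1, k2, k3, k4)]

# The Hamiltonian constraint (3.15) is preserved along the evolution.
assert abs(constraint(state)) < 1e-8
```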
We may now draw the comparison between the ABE mechanism and [1]. First of all, we see that both t-duality and the limiting Hagedorn temperature are still manifest once we include the p-branes [19]. Therefore, there is no physical singularity as a → 0. What about the decompactification mechanism described in section (3.3)? Recall that our initial conditions are in a hot, dense regime near the self-dual point R = 1. All the modes (winding, oscillatory and momentum) of all the p-branes will be excited. By symmetry, we assume that there are equal numbers of winding and anti-winding modes in the system, and hence the total winding numbers cancel, as in [1].
Now assume that the universe begins to expand in all directions. The total energy in the winding modes increases with a according to (3.14), so the branes with the largest p contribute the most. The classical counting argument discussed in [1] is easily generalized to our model. When winding modes meet anti-winding modes, the branes unwind (recall Fig. (3.4)) and allow a certain number of dimensions to grow large.
Consider the probability that the world-volumes of two p-branes in spacetime will intersect. The winding modes of p-branes are likely to interact in at most 2p + 1 spatial dimensions. (To see this, consider the example of two particles (0-branes) moving through a space of dimension d. These particles will definitely interact (assuming the space is periodic) if d = 1, whereas they probably will not find each other in a space with d > 1.)
Since we are in d = 9 spatial dimensions, the branes with p ≥ 4 will interact and hence unwind very quickly. For p < 4 a hierarchy of dimensions will be allowed to grow large. Since the energy contained in the winding modes of 2-branes is larger than that of strings (see (3.14)), the 2-branes will have an important effect first. The membranes will allow a 5-dimensional subspace to grow large. Within this 5-dimensional space the 1-branes (strings) will allow a 3-dimensional subspace to become large. We therefore reach the conclusion that the inclusion of D-branes in the spectrum of fundamental objects in the theory will cause a hierarchy of subspaces to become large while maintaining the results of the BV scenario, explaining the origin of our (3+1)-dimensional universe.
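The brane-counting criterion used above — winding p-branes generically interact in at most 2p + 1 spatial dimensions, since two (p+1)-dimensional worldvolumes generically intersect only when the sum of their dimensions reaches the spacetime dimension — reduces to a few lines of bookkeeping:

```python
def max_interacting_dims(p):
    """Largest number d of spatial dimensions in which two winding p-branes
    generically intersect: their (p+1)-dimensional worldvolumes meet in a
    (d+1)-dimensional spacetime only if (p+1) + (p+1) >= d + 1, i.e.
    d <= 2*p + 1."""
    return 2 * p + 1

assert max_interacting_dims(0) == 1   # particles only meet on a line
assert max_interacting_dims(1) == 3   # strings: three spatial dimensions
assert max_interacting_dims(2) == 5   # membranes: a 5-dimensional subspace
# In d = 9 spatial dimensions every brane with p >= 4 unwinds everywhere:
assert all(max_interacting_dims(p) >= 9 for p in (4, 5, 6, 8))
```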
Let us summarize the evolution of the ABE universe. The universe starts out in an initial state close to the self-dual point (R = 1): a 9-dimensional toroidal space, hot, dense and filled with particle, string and p-brane gases. The universe then starts to expand according to the background equations of motion (3.15 - 3.17). Branes with the largest value of p will have an effect first, and space can only expand further if the winding modes annihilate. The 8-, 6-, 5- and 4-brane winding modes annihilate quickly, followed by the 2-branes, which allow only 5 spatial dimensions to become large. In this 5-dimensional space the strings allow a 3-dimensional subspace to become large. Hence, it is reasonable to hypothesize the existence of a 5-dimensional effective theory at some point in the early history of the universe. In particular, one is tempted to draw a relation between this 5-dimensional picture and the scenario of large extra dimensions proposed in [59].
There are several problems with the toy model analyzed above. Most of these have already been mentioned by the authors. First, the strings and branes are treated classically. Quantum effects will cause the strings to take on a small but finite thickness [16], although in our case we are restricted to energy densities lower than the typical string density, and hence the effective width of the strings is of string scale [17]. This presumably will also apply to the branes, although there is no current, consistent quantization scheme developed for branes.
In this scenario there is a brane problem. This is a new problem for cosmological theories with stable branes, analogous to the domain wall problem in cosmological scenarios based on quantum field theories with stable domain walls. However, we have found background solutions in our models which approach a point of loitering [20]. Loitering occurs if at some point in the evolution of the universe the Hubble radius becomes larger than the physical radius of the universe. Such a phase in the background cosmological evolution will naturally solve the brane problem.
The toroidal topology of the compactified manifold was chosen for simplicity. It is important from the point of view of string theory to consider how things would change if this manifold were a Calabi-Yau space. Calabi-Yau three-folds do not admit one-cycles for strings to wrap around, although they are necessary if the four-dimensional low energy effective theory is to have N = 1 supersymmetry. Note that in cosmology we do not necessarily expect supersymmetry. In particular, maximal supersymmetry is consistent with the toroidal background used.
Also, it was argued in [18] that M-theory should not be formulated in a spacetime of definite dimension or signature. In other words, we must ultimately be able to explain why there is only one time dimension.
Although there is no horizon problem present in this scenario since the universe starts out near the string length and hence there are no causally disconnected regions of space, other problems solved by inflation such as the flatness and structure formation problems are still present. Other less significant concerns are stressed in [19]. This scenario provides a new method for studying string cosmology which is similar to the SBB model and utilizes -branes in a very different way from scenarios involving large extra dimensions.
## 4 Pre-Big-Bang
The next attempt to marry cosmology with string theory we will review was proposed in the early 1990s by Veneziano and Gasperini [21]-[25].
### 4.1 Introduction
The Pre-Big-Bang (PBB) model (for an updated collection of papers on this model see http://www.to.infn.it/gasperin) is based on the low energy effective action of string theory, which in d spatial dimensions is given by
S = -\frac{1}{2\lambda_s^{d-1}} \int d^{d+1}x \, \sqrt{-g} \, e^{-\varphi} \left[ R + (\partial_\mu \varphi)^2 + \cdots \right], \qquad (4.1)
where \varphi is the dilaton and \lambda_s is the string length scale. The qualitative differences between the PBB model and the SBB model, based on the Einstein-Hilbert action,
S = -\frac{1}{2\lambda_p^{d-1}} \int d^{d+1}x \, \sqrt{-g} \, R, \qquad (4.2)
are most easily visualized by plotting the history of the curvature of the universe (see Fig. (4.1)) according to each theory. In the SBB scenario the curvature increases as we go back in time, eventually reaching an infinite value at the Big-Bang singularity. In standard inflationary models the curvature reaches some fixed value as t decreases, at which point the universe enters a de Sitter phase. It has been shown, however, that such an inflationary phase cannot last forever, for reasons of geodesic completeness, and that the initial singularity problem still remains [27, 2]. The cosmology generated by (4.1) differs drastically from the standard scenarios. (The action (4.1) without the "⋯" terms does not fully realize the PBB scenario, as we will discuss below.) In the PBB model, as one travels back in time the curvature increases as in the previously mentioned models, but here a maximum curvature is reached, at which point the curvature and temperature actually begin to decrease. Although we will examine the details of how this occurs below, a few simple considerations make us feel more comfortable with this picture.
For one, string theory predicts a natural cut-off length scale,
\lambda_s = \sqrt{\hbar / T} \sim 10 \, l_{pl} \sim 10^{-32} \, \mathrm{cm}, \qquad (4.3)
where T is the string tension and l_{pl} is the Planck length. So it is natural from the point of view of strings to expect a maximum possible curvature. Logically, as we travel back in time there are only two possibilities if we want to avoid the initial singularity. Either the curvature starts to grow again before the de Sitter phase, in which case we are still left with a singularity, merely shifted earlier in time, or the curvature begins to decrease again, which is what happens in the PBB scenario (Fig. (4.1)c). This behavior is a consequence of scale-factor duality.
Figure 4.1: Curvature plotted versus time for, (a) the SBB model, (b) the standard inflationary model and (c) the PBB scenario.
### 4.2 More on Duality
To demonstrate the enhanced symmetries present in the PBB model we will examine the consequences of scale-factor duality. The Einstein-Hilbert action (4.2) is invariant under time reversal. Hence, for every solution a(t) there exists a solution a(-t). In terms of the Hubble parameter H = \dot{a}/a, for every solution H(t) there exists a solution -H(-t). Thus, if there is a solution representing a universe with decelerated expansion and decreasing curvature (H > 0, \dot{H} < 0), there is a “mirror” solution corresponding to a contracting universe (H < 0).
The action of string theory (4.1) is not only invariant under time reversal, but also under inversion of the scale factor, a → 1/a (with an appropriate transformation of the dilaton). For every cosmological solution a(t) there is a solution 1/a(t), provided the dilaton is rescaled, \varphi → \varphi - 2d \ln a. Hence, time reversal symmetry together with scale-factor duality imply that every cosmological solution has four branches, Fig. (4.2). For the standard scenario of decelerated expansion and decreasing curvature (H > 0, \dot{H} < 0) there is a dual partner solution describing a universe with accelerated expansion (\dot{H} > 0) and growing curvature.
Figure 4.2: The four branches of a string cosmological solution resulting from scale-factor duality and time reversal.
We will now show how one can create a universe from the string theory perturbative vacuum that today looks like the standard cosmology. This problem is analogous to finding a smooth way to connect the Pre-Big-Bang phase with a Post-Big-Bang phase, or how to successfully connect the upper-left side of Fig. (4.2) to the upper-right side. In general, the two branches are separated by a future/past singularity, and it appears that in order to smoothly connect the branches of growing and decreasing curvature one requires the presence of higher order loop and/or derivative corrections to the effective action (4.1). This affliction of the PBB model is known as the Graceful Exit Problem (GEP) and is the subject of many research papers (see [25, 26] for a collection of references).
One example of how the GEP can be solved is given in [28]. In this work we consider a theory obtained by adding to the usual string frame dilaton gravity action specially constructed higher derivative terms motivated by the limited curvature construction of [29]. The action is (4.1) with the "⋯" term replaced by the constructed higher derivative terms. In this scenario all solutions of the resulting theory of gravity are nonsingular, and for initial conditions inspired by the PBB scenario there exist solutions which smoothly connect a “superinflationary” phase with \dot{H} > 0 to an expanding FRW phase with \dot{H} < 0, solving the GEP in a natural way.
### 4.3 PBB-Cosmology
Here we examine cosmological solutions of the PBB model. Adding matter in the form of a perfect fluid to the effective action (4.1) (without the "⋯" terms) and taking a spatially flat (k = 0) Friedmann-Robertson-Walker background, we vary the action to get the equations of motion for string cosmology,
\dot{\varphi}^2 - 6H\dot{\varphi} + 6H^2 = e^{\varphi}\rho, \qquad (4.4)
\dot{H} - H\dot{\varphi} + 3H^2 = \frac{1}{2} e^{\varphi} p,
2\ddot{\varphi} + 6H\dot{\varphi} - \dot{\varphi}^2 - 6\dot{H} - 12H^2 = 0.
As an example, for p = \rho/3 the equations with constant dilaton are exactly solved by
a \propto t^{1/2}, \qquad \rho \propto a^{-4}, \qquad \varphi = \mathrm{const}, \qquad (4.5)
which is the standard scenario for the radiation dominated epoch, having decreasing curvature and decelerated expansion:
\dot{a} > 0, \qquad \ddot{a} < 0, \qquad \dot{H} < 0. \qquad (4.6)
But there is also a solution obtained from the above via time reversal and scale-factor duality,
t \to -t, \qquad a \propto (-t)^{-1/2}, \qquad \varphi \propto -3\ln(-t), \qquad \rho = -3p \propto a^{-2}. \qquad (4.7)
This solution corresponds to an accelerated, inflationary expansion, with growing dilaton and growing curvature:
\dot{a} > 0, \qquad \ddot{a} > 0, \qquad \dot{H} > 0. \qquad (4.8)
Solutions with such behavior are called “superinflationary” and are located in the upper left quadrant of Fig. (4.2).
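Both the radiation solution (4.5) and its dual branch (4.7) can be checked directly against the string cosmology equations (4.4). The symbolic verification below fixes convenient normalizations for ρ (a convention of ours, not a prediction):

```python
import sympy as sp

t = sp.symbols('t')

def residuals(a, phi, rho, pr):
    """Residuals of the three string cosmology equations (4.4)."""
    H = sp.diff(a, t) / a
    dphi = sp.diff(phi, t)
    e1 = dphi**2 - 6 * H * dphi + 6 * H**2 - sp.exp(phi) * rho
    e2 = sp.diff(H, t) - H * dphi + 3 * H**2 - sp.exp(phi) * pr / 2
    e3 = (2 * sp.diff(phi, t, 2) + 6 * H * dphi - dphi**2
          - 6 * sp.diff(H, t) - 12 * H**2)
    return [sp.simplify(e) for e in (e1, e2, e3)]

# Radiation solution (4.5): a ~ t^(1/2), phi = const (set to 0), p = rho/3.
rho_rad = sp.Rational(3, 2) / t**2
assert residuals(sp.sqrt(t), sp.Integer(0), rho_rad, rho_rad / 3) == [0, 0, 0]

# Dual branch (4.7): a ~ (-t)^(-1/2), phi = -3 ln(-t), rho = -3p ~ a^(-2).
a_dual = (-t) ** sp.Rational(-1, 2)
phi_dual = -3 * sp.log(-t)
rho_dual = sp.Rational(3, 2) * (-t) ** 3 / t**2
assert residuals(a_dual, phi_dual, rho_dual, -rho_dual / 3) == [0, 0, 0]
```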
Let us briefly review the history of the universe as predicted by the PBB scenario. Recall, that in the SBB model the universe starts out in a hot, dense and highly curved regime. In contrast, the PBB universe has its origins in the simplest possible state we can think of, namely the string perturbative vacuum. Here the universe consists only of a sea of dilaton and gravitational waves. It is empty, cold and flat, which means that we can still trust calculations done with the classical, low-energy effective action of string theory.
In [30], the authors showed that in a generic case of the PBB scenario, the universe at the onset of inflation must already be extremely large and homogeneous. In order for inflation to solve the flatness problem, the initial size of a homogeneous part of the universe before PBB inflation must be many orders of magnitude larger than the string scale. In response, it was proposed in [35] that the initial state of the PBB model is a generic perturbative solution of the tree-level, low-energy effective action. Presumably, quantum fluctuations lead to the formation of many black holes (Fig. (4.3)) in the gravi-dilaton sector (in the Einstein frame). Each such singular space-like hypersurface of gravitational collapse becomes a superinflationary phase in the string frame [33, 34, 32, 35]. After the period of dilaton-driven inflation the universe evolves in accordance with the SBB model.
Figure 4.3: A lower-dimensional slice of the string perturbative vacuum giving rise to black hole formation in the Einstein frame.
To conclude let us mention a few benefits of the PBB scenario. For one, there is no need to invent inflation, or fine tune a potential for the inflaton. This model provides a “stringy” realization of inflation which sets in naturally and is dilaton driven. Pair creation (quantum instabilities) provides a mechanism to heat up an initially cold universe in order to produce a hot big-bang with homogeneity, isotropy and flatness. This scenario also has observable consequences.
Problems with this scenario include the graceful exit problem, mentioned above. This is the problem of smoothly connecting the phases of growing and decreasing curvature, a process that is not well understood and requires further investigation. Most cosmological models require a potential for the dilaton to be introduced by hand in order to freeze the dilaton at late times. In general it is believed that the dilaton should be massive today, otherwise we would notice its effects on physical gauge couplings.
Inclusion of a non-vanishing dilaton potential in the action (4.1) greatly reduces the set of initial conditions which give rise to inflation [26]. Also, the initial collapsing region must be sufficiently large and weakly coupled. Lastly, the dimensionality problem is still present in this model.
## 5 Cosmology and Heterotic M-Theory
In this section we will focus on the work of Lukas, Ovrut and Waldram (LOW) [41] from 1998, which is based on the heterotic M-theory of Hořava and Witten [36, 37, 38]. Their motivation was to see whether it is possible to construct a realistic cosmological model starting from the most fundamental theory we know.
### 5.1 Hořava-Witten Theory
In 1996, Hořava and Witten showed that eleven-dimensional M-theory compactified on the orbifold S^1/Z_2, with a set of E_8 gauge supermultiplets on each ten-dimensional orbifold fixed plane, can be identified with strongly coupled heterotic string theory [36, 37]. The basic setup is that of Fig. (5.1), where the orbifold lies along the x^{11} direction, points x^{11} and -x^{11} being identified under the Z_2 symmetry. It has been shown that this M-theory limit can be consistently compactified on a deformed Calabi-Yau three-fold, resulting in an N = 1 supersymmetric theory in four dimensions (see Fig. (5.2)). In order to match (at tree level) the gravitational and grand-unified gauge couplings, one finds that the orbifold must be larger than the Calabi-Yau space, R > R_{CY}, where R is the radius of the orbifold and R_{CY} is the radius of the Calabi-Yau space. This picture leads to the conclusion that the universe may have gone through a phase in which it was effectively five-dimensional, and therefore provides us with a previously unexplored regime in which to study the early universe.
Figure 5.1: The Hořava-Witten scenario. One of the eleven dimensions has been compactified on the orbifold S^1/Z_2. The manifold is M^{10} × S^1/Z_2.
Here we construct the five-dimensional effective theory via reduction of Hořava-Witten theory on a Calabi-Yau three-fold, and then show how this can lead to a four-dimensional toy model for a Friedmann-Robertson-Walker (FRW) universe. The total action of Hořava-Witten theory is
S = S_{SUGRA} + S_{YM}, \qquad (5.1)
where S_{SUGRA} is the action of eleven-dimensional supergravity,
S_{SUGRA} = -\frac{1}{2\kappa^2} \int_{M^{11}} \sqrt{-g} \left[ R + \frac{1}{24} G_{IJKL} G^{IJKL} + \frac{\sqrt{2}}{1728} \epsilon^{I_1 \cdots I_{11}} C_{I_1 I_2 I_3} G_{I_4 \cdots I_7} G_{I_8 \cdots I_{11}} \right], \qquad (5.2)
and S_{YM} consists of two Yang-Mills theories on the ten-dimensional orbifold planes,
S_{YM} = -\frac{1}{8\pi\kappa^2} \left( \frac{\kappa}{4\pi} \right)^{2/3} \sum_{i=1}^{2} \int_{M^{(i)}_{10}} \sqrt{-g} \left\{ \mathrm{tr}(F^{(i)})^2 - \frac{1}{2} \mathrm{tr} R^2 \right\}. \qquad (5.3)
The coordinates x^I parametrize the full eleven-dimensional space M^{11}, while x^{\bar{I}} are used for the ten-dimensional hyperplanes M^{(i)}_{10}, i = 1, 2, orthogonal to the orbifold. The F^{(i)} are the two gauge field strengths and C_{IJK} is the 3-form with field strength given by G = dC. In order for this theory to be supersymmetric and anomaly free, the Bianchi identity for G must pick up the following correction,
(dG)_{11\bar{I}\bar{J}\bar{K}\bar{L}} = -\frac{1}{2\sqrt{2}\pi} \left( \frac{\kappa}{4\pi} \right)^{2/3} \left\{ J^{(1)} \delta(x^{11}) + J^{(2)} \delta(x^{11} - \pi\rho) \right\}_{\bar{I}\bar{J}\bar{K}\bar{L}}, \qquad (5.4)
where the sources are
J^{(i)} = \mathrm{tr} F^{(i)} \wedge F^{(i)} - \frac{1}{2} \mathrm{tr} R \wedge R. \qquad (5.5)
Now, we search for solutions of the above theory which preserve four of the thirty-two supercharges and, when compactified, lead to four-dimensional, N = 1 supergravities. To begin, consider the manifold M^{11} = M^4 × X × S^1/Z_2, where M^4 is four-dimensional Minkowski space and X is a Calabi-Yau three-fold. Upon compactification on X, we are left with a five-dimensional effective spacetime consisting of two copies of M^4, one at each of the orbifold fixed points, and the orbifold itself (see Fig. (5.2)). Each of the planes carries a gauge group descending from the E_8 supermultiplets (for the standard embedding, E_6 on one plane and E_8 on the other).
Figure 5.2: The LOW scenario. The manifold is given by M^{11} = M^4 × X × S^1/Z_2, where the Hořava-Witten theory is compactified on a smooth Calabi-Yau three-fold X. Compactification results in a five-dimensional effective theory.
In the next section we construct the five-dimensional effective theory.
### 5.2 Five-Dimensional Effective Theory
As we have discussed, according to the model presented above there is an epoch when the universe appears to be five-dimensional. Hence, it is only natural to try to find the action of this five-dimensional effective theory. Let us identify the fields in the five-dimensional bulk. First, there is the gravity multiplet (g_{\alpha\beta}, A_\alpha, \psi^i_\alpha), where g_{\alpha\beta} is the graviton, A_\alpha is a five-dimensional vector field, and the \psi^i_\alpha are the gravitini. The indices run over \alpha, \beta = 0, \ldots, 4 and i = 1, 2. There is also the universal hypermultiplet (V, \sigma, \xi, \bar{\xi}, \zeta^i). Here V is a modulus field associated with the volume of the Calabi-Yau space, \xi is a complex scalar zero mode, \sigma is a scalar resulting from the dualization of the three-form C_{\alpha\beta\gamma}, and the \zeta^i are the hypermultiplet fermions.
It is now possible, using the action (5.1) to construct the five-dimensional effective action of Hořava-Witten theory,
S5=Sgrav+Shyper+Sbound, (5.6)
where,
S_{grav} = -\frac{v}{2\kappa^2} \int_{M_5} \sqrt{-g} \left[ R + \frac{3}{2} (F_{\alpha\beta})^2 + \frac{1}{\sqrt{2}} \epsilon^{\alpha\beta\gamma\delta\epsilon} A_\alpha F_{\beta\gamma} F_{\delta\epsilon} \right], \qquad (5.7)
S_{hyper} (5.8) contains the kinetic terms of the universal hypermultiplet fields together with a bulk potential term for V induced by the internal flux; its explicit form can be found in [41].
S_{bound} = -\frac{v}{2\kappa^2} \left[ \mp 2\sqrt{2} \, \alpha \sum_{i=1}^{2} \int_{M^{(i)}_4} \sqrt{-g} \, V^{-1} \right] - \frac{v}{8\pi\kappa^2} \left( \frac{\kappa}{4\pi} \right)^{2/3} \sum_{i=1}^{2} \int_{M^{(i)}_4} \sqrt{-g} \, V \, \mathrm{tr}(F^{(i)}_{\mu\nu})^2. \qquad (5.9)
In the above, v is a constant that relates the five-dimensional Newton constant \kappa_5 to the eleven-dimensional Newton constant \kappa via \kappa_5^2 = \kappa^2 / v, and \alpha is a constant determined by the internal four-form flux. Higher-derivative terms have been dropped, and this action provides us with a minimal supergravity theory in the five-dimensional bulk.
This theory admits a three-brane domain wall solution with a world-volume lying in the four uncompactified dimensions [41]. In fact, a pair of domain walls is the vacuum solution of the five-dimensional theory, which provides us with a background for the reduction to a four-dimensional, N = 1 effective theory. This solution will be the topic of the next section.
### 5.3 Three-Brane Solution
In order to find a solution describing a pair of three-branes, we start with an ansatz for the five-dimensional metric of the form
ds_5^2 = a(y)^2 \, dx^\mu dx^\nu \eta_{\mu\nu} + b(y)^2 \, dy^2, \qquad V = V(y), \qquad (5.10)
where y = x^{11} is the orbifold coordinate. By using the equations of motion derived from the action (5.6) we find
a = a_0 H^{1/2}, \qquad b = b_0 H^2, \qquad V = b_0 H^3, \qquad (5.11)
where H = H(y), and a_0 and b_0 are constants. Using the equations of motion obtained by varying the action with respect to the metric ansatz (5.10), we arrive at a differential equation for H which leads to
∂2yH=2√23α0(δ(y)−δ(y−πρ)). (5.12)
A detailed derivation of this equation is discussed in [41]. Clearly, (5.12) represents two parallel three-branes located at the orbifold planes, as in Fig.(5.2). This solves the five-dimensional theory exactly and preserves half of the supersymmetries, with low-energy gauge and matter fields carried on the branes. This prompts us to find realistic cosmological models from the above scenario where the universe lives on the world-volume of a three-brane.
### 5.4 Cosmological Domain-Wall Solution
In order to construct a dynamical, cosmological solution, the solutions in (5.11) are made to be functions of time $\tau$, as well as the eleventh dimension $y$,
$$ds_5^2 = -N(\tau,y)^2\,d\tau^2 + a(\tau,y)^2\,\eta_{mn}\,dx^m dx^n + b(\tau,y)^2\,dy^2, \qquad V = V(\tau,y). \tag{5.13}$$
Here we have introduced a lapse function $N(\tau,y)$. Because this ansatz leads to a very complicated set of non-linear equations, we will seek a solution based on the separation of variables. Note that there is no a priori reason to believe that such a solution exists, but we will see that one does. Separating the variables $\tau$ and $y$,
$$N(\tau,y) = n(\tau)\,a(y), \quad a(\tau,y) = \alpha(\tau)\,a(y), \quad b(\tau,y) = \beta(\tau)\,b(y), \quad V(\tau,y) = \gamma(\tau)\,V(y). \tag{5.14}$$
Since this article is intended only as an elementary review, we will not repeat the details involved in solving the above system. For our purposes it suffices to say that the equations take on a particularly simple form for a suitable relation between $\beta$ and $\gamma$ and with the gauge choice $n = {\rm const}$. In this gauge, $\tau$ becomes proportional to the comoving time $t$, since $dt = n\,d\tau$. A solution exists such that
$$\alpha = A\,|t - t_0|^p, \qquad \beta = B\,|t - t_0|^q, \tag{5.15}$$
where
$$p = \frac{3}{11}\left(1 \mp \frac{4}{3\sqrt{3}}\right), \qquad q = \frac{2}{11}\left(1 \pm 2\sqrt{3}\right), \tag{5.16}$$
and $A$ and $B$ are arbitrary constants. This is the desired cosmological solution. The $y$-dependence is identical to the domain wall solution (5.12) and the scale factors evolve with $t$ according to (5.15). The domain wall pair remains rigid, while the sizes of the walls and the separation between them change. In particular, $\alpha$ determines the size of the domain-wall world-volume while $\beta$ gives the separation of the two walls. In other words, $\alpha$ determines the size of the three-dimensional universe, while $\beta$ gives the size of the orbifold. Furthermore, the world-volume of the three-brane universe exhibits SUSY (of course SUSY is broken in the dynamical solution) and a particular solution exists for which the domain wall world-volume expands in a FRW-like manner while the orbifold radius contracts.
Although the above model provides an intriguing use of M-theory in an attempt to answer questions about early universe cosmology there are still many problems to be worked out. Foremost, these are vacuum solutions, devoid of matter and radiation. There is no reason to think that, of all the solutions, the one which matches our universe (expanding domain-wall, shrinking orbifold) should be preferred over any other. This problem is typical of many cosmological models, however. The Calabi-Yau (six-dimensional) three-fold is chosen by hand in order to give four noncompact dimensions. Hence, the dimensionality problem mentioned in Section 3 is still present in this model. Stabilization of moduli fields, including the dilaton has recently been addressed in [47]. There are no cosmological constants in the model. There is also no natural mechanism supplied for SUSY breaking on the domain wall, and currently no discussion of inflationary dynamics. For more on heterotic M-theory and cosmology see, [36]-[51].
## 6 Large Extra Dimensions
This section provides a brief discussion of scenarios involving large extra dimensions, focusing primarily on the models of Randall and Sundrum (RSI and RSII) [52, 53]; the distinction between the RSI and RSII models will be clarified below. The RSI model is similar in many respects to that of the Lukas, Ovrut and Waldram scenario discussed in Section 5, although its motivation is quite different. In the LOW construction the motivation was to construct a cosmology out of the fundamental theory of everything. In the RSI model the motivation is to construct a cosmology in which the hierarchy problem of the Standard Model (SM) is solved in a natural way. Some earlier proposals involving large extra dimensions include [54]–[60]. Also see the extensive set of references in [61].
### 6.1 Motivation and the Hierarchy Problem
There is a hierarchy problem in the Standard Model because we have no way of explaining why the scales of particle physics are so different from those of gravity. Many attempts to solve the hierarchy problem using extra dimensions have been made before; see for example [59] and [60]. If spacetime is fundamentally $(4+n)$-dimensional, then the physical Planck mass
$$M^{(4)}_{pl} \simeq 2\times 10^{18}\ {\rm GeV} \tag{6.1}$$
is actually dependent on the fundamental $(4+n)$-dimensional Planck mass $M_{pl}$ and on the geometry of the extra dimensions according to
$$\left(M^{(4)}_{pl}\right)^2 = M_{pl}^{\,n+2}\,V_n. \tag{6.2}$$
Here $V_n$ is the volume of the compact extra dimensions. Because we have not detected any extra dimensions experimentally, the compactification scale would have to be much smaller than the weak scale, and the particles and forces of the SM (except for gravity) must be confined to the four-dimensional world-volume of a three-brane (see Fig. 6.1).
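The relation above can be turned around to ask how large the extra dimensions must be for the fundamental scale to sit near a TeV. The following sketch does that arithmetic; the TeV value for the fundamental scale and the identification $V_n \sim r^n$ are assumptions made for illustration.

```python
# Rough size of n compact extra dimensions needed to bring the fundamental
# Planck scale M down to ~1 TeV, using (M_pl^(4))^2 = M^(n+2) * V_n, V_n ~ r^n.
# hbar*c = 1.97e-16 GeV*m converts inverse-GeV lengths to meters.
HBARC_GEV_M = 1.97e-16   # GeV * m
M_PL = 2e18              # four-dimensional Planck mass, GeV
M_FUND = 1e3             # assumed fundamental scale, GeV (~1 TeV)

def radius_m(n):
    r_inv_gev = (M_PL**2 / M_FUND**(n + 2)) ** (1.0 / n)  # r in GeV^-1
    return r_inv_gev * HBARC_GEV_M                         # r in meters

for n in (1, 2, 6):
    print(n, radius_m(n))
```

With these numbers $n = 1$ gives a solar-system-sized dimension (clearly excluded), while $n = 2$ gives $r \sim 0.4$ mm, right at the edge of tabletop tests of gravity, which is why the sub-millimeter regime received so much experimental attention.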
Figure 6.1: In the RS model the fields of the Standard Model (with the exception of gravity) are confined to the three-brane world-volume while gravity is allowed to propagate in the bulk.
We see from (6.2) that by taking $V_n$ to be large enough it is possible to eliminate the hierarchy between the weak scale and the Planck scale. Unfortunately, in this procedure a new hierarchy has been introduced, namely the one between the compactification scale $\mu_c = V_n^{-1/n}$ and the weak scale. Randall and Sundrum proposed the following: we assume that the particles and forces of the SM, with the exception of gravity, are confined to a four-dimensional subspace of the five-dimensional spacetime. This subspace is identified with the world-volume of a three-brane and an ansatz for the metric is made. Randall and Sundrum’s proposal is that the metric is not factorizable; rather, the four-dimensional metric is multiplied by a “warp” factor that is exponentially dependent upon the radius $r_c$ of the bulk fifth dimension. The metric ansatz is
$$ds^2 = e^{-2kr_c\phi}\,\eta_{\mu\nu}\,dx^\mu dx^\nu + r_c^2\,d\phi^2, \tag{6.3}$$
where $k$ is a scale of order the Planck scale, $\eta_{\mu\nu}$ is the four-dimensional Minkowski metric and $\phi$ is the coordinate for the extra dimension. Randall and Sundrum have shown that this metric solves the Einstein equations and represents two three-branes with appropriate cosmological constant terms separated by a fifth dimension. The above scenario, in addition to being able to solve the hierarchy problem (see section 6.2.1), provides distinctive experimental signatures. Coupling of an individual Kaluza-Klein (KK) excitation to matter or to other gravitational modes is set by the weak and not the Planck scale. There are no light KK modes because the excitation scale is of the order of a TeV. Hence, it should be possible to detect such excitations at accelerators (such as the LHC). The KK modes are observable as spin-2 excitations that can be reconstructed from their decay products. For experimental signatures of KK modes within large extra dimensions see e.g. [172, 173, 174].
### 6.2 Randall-Sundrum I
The basic setup for the RSI model is depicted in Fig. (6.2). The angular coordinate $\phi$ parameterizes the fifth dimension and ranges from $-\pi$ to $\pi$. The fifth dimension is taken as the orbifold $S^1/\mathbb{Z}_2$, where there is the identification of $(x^\mu, \phi)$ with $(x^\mu, -\phi)$. The orbifold fixed points are at $\phi = 0, \pi$ and correspond to the locations of the three-brane boundaries of the five-dimensional spacetime. Note the similarities of this model with the LOW model of Section 5. One difference is that we are now considering nonzero vacuum energy densities on both the visible and the hidden brane and in the bulk.
Figure 6.2: The Randall-Sundrum scenario. The fifth dimension is compactified on the orbifold $S^1/\mathbb{Z}_2$.
The action describing the scenario is
$$S = S_{\rm grav} + S_{\rm vis} + S_{\rm hid}, \tag{6.4}$$
where
$$\begin{aligned} S_{\rm grav} &= \int d^4x \int_{-\pi}^{\pi} d\phi\,\sqrt{-G}\left(-\Lambda + 2M^3 R\right), \\ S_{\rm vis} &= \int d^4x\,\sqrt{-g_{\rm vis}}\left(\mathcal{L}_{\rm vis} - V_{\rm vis}\right), \\ S_{\rm hid} &= \int d^4x\,\sqrt{-g_{\rm hid}}\left(\mathcal{L}_{\rm hid} - V_{\rm hid}\right). \end{aligned} \tag{6.5}$$
Here, $M$ is the five-dimensional Planck mass, $R$ is the five-dimensional Ricci scalar, $g_{\rm vis}$ and $g_{\rm hid}$ are the four-dimensional metrics on the visible and hidden sectors respectively, and $V_{\rm vis}$, $\Lambda$ and $V_{\rm hid}$ are the cosmological constant terms in the visible, bulk and hidden sectors. The specific form of the three-brane Lagrangians is not relevant for finding the classical five-dimensional ground-state metric. The five-dimensional Einstein equations for the above action are
$$\sqrt{-G}\left(R_{MN} - \frac{1}{2}G_{MN}R\right) = -\frac{1}{4M^3}\Big[\Lambda\sqrt{-G}\,G_{MN} + V_{\rm vis}\sqrt{-g_{\rm vis}}\,g^{\rm vis}_{\mu\nu}\,\delta^\mu_M\delta^\nu_N\,\delta(\phi-\pi) + V_{\rm hid}\sqrt{-g_{\rm hid}}\,g^{\rm hid}_{\mu\nu}\,\delta^\mu_M\delta^\nu_N\,\delta(\phi)\Big]. \tag{6.6}$$
We now assume that a solution exists which has four-dimensional Poincaré invariance in the $x^\mu$ directions. A five-dimensional ansatz which obeys this requirement is
$$ds^2 = e^{-2\sigma(\phi)}\,\eta_{\mu\nu}\,dx^\mu dx^\nu + r_c^2\,d\phi^2. \tag{6.7}$$
Substituting this ansatz into (6.6) reduces the Einstein equations to
$$\frac{6\,\sigma'^2}{r_c^2} = -\frac{\Lambda}{4M^3}, \tag{6.8}$$
$$\frac{3\,\sigma''}{r_c^2} = \frac{V_{\rm hid}}{4M^3 r_c}\,\delta(\phi) + \frac{V_{\rm vis}}{4M^3 r_c}\,\delta(\phi - \pi). \tag{6.9}$$
Solving (6.8) consistently with the orbifold symmetry $\phi \to -\phi$, we find
$$\sigma = r_c\,|\phi|\,\sqrt{\frac{-\Lambda}{24 M^3}}, \tag{6.10}$$
which makes sense only if $\Lambda < 0$. With this choice, the spacetime in the bulk of the theory is a slice of an $AdS_5$ manifold. Also, to solve (6.9) we should take
$$V_{\rm hid} = -V_{\rm vis} = 24 M^3 k, \qquad \Lambda = -24 M^3 k^2. \tag{6.11}$$
Note that the boundary and bulk cosmological terms depend on the single scale $k$, and that the relations between them are required in order to obtain four-dimensional Poincaré invariance.

Further connections with the LOW scenario of Section 5 are now visible. The exact same relations as in (6.11) arise in the five-dimensional Hořava-Witten effective theory if one identifies the expectation values of the background three-form field as cosmological terms [39].

We want the bulk curvature to be small compared to the higher-dimensional Planck scale in order to trust the solution, and thus we assume $k < M$. The bulk metric solution is therefore
$$ds^2 = e^{-2kr_c|\phi|}\,\eta_{\mu\nu}\,dx^\mu dx^\nu + r_c^2\,d\phi^2. \tag{6.12}$$
Since $r_c$ is small but still larger than $1/k$, the fifth dimension cannot be experimentally observed in present or future gravity experiments. This prompts us to search for a four-dimensional effective theory.
#### 6.2.1 Four-Dimensional Effective Theory
In our four-dimensional effective description we wish to find the parameters of the low-energy theory (e.g. $M^{(4)}_{pl}$ and the mass parameters of the four-dimensional fields) in terms of the five-dimensional, fundamental scales $M$, $k$ and $r_c$. In order to find the four-dimensional theory one identifies massless gravitational fluctuations about the classical solution (6.12) which correspond to the gravitational fields of the effective theory. These are the zero modes of the classical solution. The metric of the four-dimensional effective theory is of the form
$$\bar{g}_{\mu\nu}(x) \equiv \eta_{\mu\nu} + \bar{h}_{\mu\nu}(x), \tag{6.13}$$
which is locally Minkowski. Here, $\bar{h}_{\mu\nu}(x)$ represents the tensor fluctuations about Minkowski space and gives the physical graviton of the four-dimensional effective theory. By substituting the metric (6.13) for $\eta_{\mu\nu}$ in (6.12) and then using the result in the action (6.5), the curvature term becomes
$$S_{\rm eff} \propto \int d^4x \int_{-\pi}^{\pi} d\phi\; 2M^3 r_c\, e^{-2kr_c|\phi|}\,\sqrt{-\bar{g}}\,\bar{R}, \tag{6.14}$$
where $\bar{R}$ is the four-dimensional Ricci scalar made out of $\bar{g}_{\mu\nu}$. We focus on the curvature term so that we may derive the scale of the gravitational interactions. The effective fields depend only on $x^\mu$, and hence it is possible to perform the integration over $\phi$ explicitly, obtaining the four-dimensional effective theory [52]. Using the result one may derive an expression for the four-dimensional Planck mass in terms of the fundamental, five-dimensional Planck mass:
$$\left(M^{(4)}_{pl}\right)^2 = M^3 r_c \int_{-\pi}^{\pi} d\phi\; e^{-2kr_c|\phi|} = \frac{M^3}{k}\left[1 - e^{-2kr_c\pi}\right]. \tag{6.15}$$
Notice that $M^{(4)}_{pl}$ depends only weakly on $r_c$ in the large $kr_c$ limit.
From the fact that $g^{\rm vis}_{\mu\nu} = G_{\mu\nu}(x^\mu, \phi = \pi)$ and $g^{\rm hid}_{\mu\nu} = G_{\mu\nu}(x^\mu, \phi = 0)$, we find
$$\bar{g}_{\mu\nu} = g^{\rm hid}_{\mu\nu}, \tag{6.16}$$
but
$$\bar{g}_{\mu\nu} = g^{\rm vis}_{\mu\nu}\, e^{2kr_c\pi}. \tag{6.17}$$
It is now possible to find the matter field Lagrangian of the theory. With proper normalization of the fields one can determine physical masses. Let us consider the example of a fundamental Higgs field. The action is
$$S_{\rm vis} \simeq \int d^4x\,\sqrt{-g_{\rm vis}}\left(g^{\mu\nu}_{\rm vis}\, D_\mu H^\dagger D_\nu H - \lambda\left(|H|^2 - v_0^2\right)^2\right), \tag{6.18}$$
which contains only one mass parameter, $v_0$. Using (6.17) the action becomes
$$S_{\rm eff} \simeq \int d^4x\,\sqrt{-\bar{g}}\;e^{-4kr_c\pi}\left(\bar{g}^{\mu\nu}\, e^{2kr_c\pi}\, D_\mu H^\dagger D_\nu H - \lambda\left(|H|^2 - v_0^2\right)^2\right), \tag{6.19}$$
and after the wavefunction renormalization $H \to e^{kr_c\pi} H$, we have
$$S_{\rm eff} \simeq \int d^4x\,\sqrt{-\bar{g}}\left(\bar{g}^{\mu\nu}\, D_\mu H^\dagger D_\nu H - \lambda\left(|H|^2 - e^{-2kr_c\pi}\, v_0^2\right)^2\right). \tag{6.20}$$
This result is completely general. The physical mass scales are set by a symmetry-breaking scale,
$$v \equiv e^{-kr_c\pi}\, v_0, \tag{6.21}$$
and hence any physical mass parameter $m$ on the visible three-brane is related to the fundamental, higher-dimensional mass parameter $m_0$ via
$$m \equiv e^{-kr_c\pi}\, m_0. \tag{6.22}$$
Note that if $e^{kr_c\pi} \sim 10^{15}$, TeV scale physical masses are produced from fundamental mass parameters near the Planck scale, $m_0 \sim 10^{18}$ GeV. Therefore, there are no large hierarchies if $kr_c \approx 12$.
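The two relations above can be checked numerically. The sketch below (input values assumed for illustration) shows that a modest $kr_c \sim 11$–$12$ warps a Planck-scale mass down to the TeV range, while the Planck mass (6.15) is essentially insensitive to $r_c$, so no new fine-tuning enters:

```python
import math

# RSI warping sketch (values assumed):
#   (M_pl^(4))^2 = (M^3/k) * (1 - e^(-2*pi*k*rc))  -- only weak rc dependence
#   m            = e^(-pi*k*rc) * m0               -- warp-suppressed masses
M0 = 2e18  # fundamental mass parameter near the Planck scale, in GeV

def warp(krc):
    """Warp suppression factor e^(-pi k rc) for dimensionless k*rc."""
    return math.exp(-math.pi * krc)

def mpl_sq(krc):
    """(M_pl^(4))^2 in units where M = k = 1."""
    return 1.0 - math.exp(-2.0 * math.pi * krc)

for krc in (10, 11, 12):
    print(krc, warp(krc) * M0)  # physical mass scale on the visible brane, GeV
# k*rc between 11 and 12 already pulls 2e18 GeV down into the ~0.1-2 TeV
# range, while mpl_sq(10) and mpl_sq(12) differ only at the ~1e-27 level.
```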
### 6.3 Randall-Sundrum II
In the RSI scenario described in the last section our universe was identified with the negative tension brane while the brane in the hidden sector had positive tension (Eq. (6.11)). In this model it was shown that the hierarchy problem may be solved. Unfortunately, there are several problems with the idea that the universe we live in can be a negative tension brane. For one, the energy density of matter on such a brane would be negative and gravity repulsive [115, 114, 111]. Life is more comfortable on a positive tension brane, since the D-branes which arise as fundamental objects in string theories are all positive tension objects and the localization of matter and gauge fields on positive tension branes is well understood within the context of string theory.
For the above reasons Randall and Sundrum suggested a second scenario (RSII) in which our universe is the positive tension brane and the hidden brane has negative tension [53]. In this case the boundary and bulk cosmological constants are related by
$$V_{\rm vis} = -V_{\rm hid} = 24 M^3 k, \qquad \Lambda = -24 M^3 k^2, \tag{6.23}$$
as opposed to the relation in RSI, Eq. (6.11).
As we will see, in this refined scenario it is possible to reproduce Newtonian gravity and other four-dimensional general relativistic predictions at low energy and long distance on the visible brane. Note that in the solution to the hierarchy problem in RSI the wave function for the massless graviton is greatest on the hidden brane, whereas in RSII the graviton is bound to the visible brane. To see this, consider the wave equation for small gravitational fluctuations,
$$\left(\partial_\mu \partial^\mu - \partial_i \partial_i + V(z_i)\right)\hat{h}(x^\mu, z_i) = 0. \tag{6.24}$$
This has a non-trivial potential term $V(z_i)$ resulting from the curvature; $\mu$ runs from $0$ to $3$ and $i$ labels the extra dimensions. It is possible to write $\hat{h}$ as a superposition of modes $\hat{h}(x^\mu, z) = \hat{\psi}(z)\,e^{ip\cdot x}$, where $\hat{\psi}$ is an eigenmode of the equation
$$\left(-\partial_i \partial_i + V(z)\right)\hat{\psi}(z) = -m^2\,\hat{\psi}(z) \tag{6.25}$$
in the extra dimensions, and $p^\mu p_\mu = -m^2$. Hence, the higher-dimensional gravitational fluctuations are Kaluza-Klein reduced in terms of four-dimensional KK states with mass given by the eigenvalues of (6.25). The zero mode that is also a normalizable state in the spectrum of Eq. (6.25) is the wave function associated with the four-dimensional graviton. This state is a bound state whose wave function falls off rapidly away from the 3-brane. Such behavior corresponds to a 3-brane acting as a positive tension source on the right hand side of Einstein’s equations.
The procedure of RSII is to decompactify the orbifold of RSI (i.e. consider $r_c \to \infty$), taking the hidden, negative tension brane off to infinity. In doing this, one obtains an effective four-dimensional theory of gravity where the setup is a single three-brane with positive tension embedded in a five-dimensional bulk spacetime. On this brane one can compute an effective nonrelativistic gravitational potential between two particles of masses $m_1$ and $m_2$, which is generated by exchange of the zero-mode and continuum Kaluza-Klein mode propagators. The potential behaves as
$$V(r) = G_N\,\frac{m_1 m_2}{r}\left(1 + \frac{1}{r^2 k^2}\right). \tag{6.26}$$
Here the leading term is the usual Newtonian potential and is due to the bound-state mode. The KK modes generate the correction term, which is heavily suppressed for $1/k$ of order the fundamental Planck length and $r$ of the size tested with gravity. The propagators calculated in [53] are relativistic, and hence, going beyond the nonrelativistic approximation one recovers all the proper relativistic corrections with negligible corrections from the continuum modes.
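Just how suppressed the Kaluza-Klein correction is can be seen with a one-line estimate. The sketch below assumes $1/k$ equals the Planck length and evaluates $1/(r^2 k^2)$ at roughly the shortest distance where gravity has been tested:

```python
# Size of the KK correction 1/(r^2 k^2) to the Newtonian potential,
# assuming 1/k is of order the Planck length (value assumed below).
PLANCK_LENGTH = 1.6e-35      # meters, taken as 1/k

def kk_correction(r_meters):
    """Relative size of the 1/(r^2 k^2) term at separation r."""
    return (PLANCK_LENGTH / r_meters) ** 2

print(kk_correction(1e-4))   # r = 0.1 mm, near the shortest tested distance
```

The result is of order $10^{-62}$: utterly unobservable, consistent with the claim that RSII reproduces Newtonian gravity on the visible brane.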
Let us compare the RSI and RSII models. In RSI, the solution to the hierarchy problem requires that we are living on a negative tension brane. The positive tension brane has no such suppression of its masses and is therefore often referred to as the “Planck” brane, which is hidden from the visible brane. Serious arguments against this scenario are that the negative tension “TeV” brane seems physically unacceptable.
In RSII, the visible brane is taken as the positive tension brane while the TeV brane is sent off to infinity. In this model the proper Newtonian gravity is manifest on the visible brane, but the hierarchy problem is not addressed.
Although more successful as a potential physical model of our universe than its predecessor RSI, RSII seems to lack the elegant solution to the hierarchy problem made possible by considering the universe as a negative tension brane. Recent work however suggests that by including quantum effects (analogous to the Casimir effect) it is possible to solve the hierarchy problem on the visible brane having either positive or negative tension [83]. If the Casimir energy is negative and one accepts a degree of fine tuning of the tension on the hidden brane it is possible to obtain a large enough warp factor to explain the hierarchy on the visible brane having either positive or negative tension. Further work on this scenario is needed however including a study of the stability of this model against perturbations.
### 6.4 RS and Brane World Cosmology
The next obvious step is to consider the cosmologies of the RS model discussed above. There has been an extensive amount of work done in these areas and the reader is invited to examine the references at the end of the review related to Randall-Sundrum and “brane world” cosmologies [62] - [138] for a comprehensive study. Due to the vast number of cosmological models discussed in the literature we will review only the basics and focus on the problems of brane world cosmologies while mentioning potential resolutions and future work, referencing various relevant authors. Much of the discussion in this section closely parallels the excellent review of J. Cline [97].
We begin by considering the cosmological expansion of 3-brane universes in a 5-dimensional bulk with a cosmological constant, as discussed by Binétruy, Deffayet, Ellwanger and Langlois (BDEL) [117]. Note that in an earlier work [116], BDL considered the solutions to Einstein’s equations in five dimensions with an orbifold and matter included on the two branes but with no cosmological constants on the branes or in the bulk. They found that the Hubble expansion rate of the visible brane was related to the energy density of the brane quadratically, $H^2 \propto \rho^2$, as opposed to the standard Friedmann equation, $H^2 \propto \rho$. We will show this explicitly below. The altered expansion rate proved to be incompatible with nucleosynthesis constraints.
When the analysis was applied to the RSII scenario, one does in fact reproduce the ordinary FRW universe on the positive tension, Planck brane [115, 114]. Note however that on the negative tension brane, where the hierarchy problem is solved, the Friedmann equation has a critical sign difference.
In the BDEL model the authors consider five-dimensional spacetime metrics of the form
$$ds^2 = \tilde{g}_{AB}\,dx^A dx^B, \tag{6.27}$$
where $y$ is the coordinate associated with the fifth dimension. The visible universe is taken to be the hypersurface at $y = 0$. The metric is taken to be
$$ds^2 = -n^2(\tau,y)\,d\tau^2 + a^2(\tau,y)\,g_{ij}\,dx^i dx^j + b^2(\tau,y)\,dy^2, \tag{6.28}$$
where $g_{ij}$ is a maximally symmetric three-dimensional metric ($k = -1, 0, 1$ will parametrize the spatial curvature), and $n$ is a lapse function.
The five-dimensional Einstein equations have the usual form
$$\tilde{G}_{AB} \equiv \tilde{R}_{AB} - \frac{1}{2}\tilde{R}\,\tilde{g}_{AB} = \kappa^2\,\tilde{T}_{AB}, \tag{6.29}$$
where $\kappa$ is related to the five-dimensional Newton’s constant $G_{(5)}$ and the five-dimensional reduced Planck mass $M_{(5)}$ by
$$\kappa^2 = 8\pi G_{(5)} = M_{(5)}^{-3}. \tag{6.30}$$
Using the ansatz (6.28) one finds the non-vanishing components of the Einstein tensor to be
$$\tilde{G}_{00} = 3\left\{\frac{\dot{a}}{a}\left(\frac{\dot{a}}{a} + \frac{\dot{b}}{b}\right) - \frac{n^2}{b^2}\left[\frac{a''}{a} + \frac{a'}{a}\left(\frac{a'}{a} - \frac{b'}{b}\right)\right] + k\,\frac{n^2}{a^2}\right\}, \tag{6.31}$$
$$\tilde{G}_{ij} = \frac{a^2}{b^2}\,\gamma_{ij}\left\{\frac{a'}{a}\left(\frac{a'}{a} + 2\,\frac{n'}{n}\right) - \frac{b'}{b}\left(\frac{n'}{n} + 2\,\frac{a'}{a}\right) + 2\,\frac{a''}{a} + \frac{n''}{n}\right\}, \tag{6.32}$$
$$\tilde{G}_{05} = 3\left(\frac{n'}{n}\,\frac{\dot{a}}{a} + \frac{a'}{a}\,\frac{\dot{b}}{b} - \frac{\dot{a}'}{a}\right), \tag{6.33}$$
$$\tilde{G}_{55} = 3\left\{\frac{a'}{a}\left(\frac{a'}{a} + \frac{n'}{n}\right) - \frac{b^2}{n^2}\left[\frac{\dot{a}}{a}\left(\frac{\dot{a}}{a} - \frac{\dot{n}}{n}\right) + \frac{\ddot{a}}{a}\right] - k\,\frac{b^2}{a^2}\right\}. \tag{6.34}$$
http://tex.stackexchange.com/questions/123312/correct-path-on-windows-for-custom-beamer-style-files | # Correct path on Windows for custom beamer style files
I've recently switched to Windows from Linux and I'm having trouble getting my custom beamer style files to compile correctly. I have them in `~/texmf/tex/latex/local` currently. There are other style files in there that are working correctly, so the directory is on the right TeX path, and when I copy the style files into the same directory as the code they also work.
Does beamer use a different path ? I ran `texhash.exe`, is there a separate command to set the beamer path?
Are you using TeX Live or MiKTeX? In general, Windows doesn't have the flexibility of maintaining your own distro, so better don't touch anything. Rather use the package install wizards (oxymoron). – percusse Jul 9 '13 at 19:18
Well for TeXLive all the beamer style files are in `C:\texlive\2013\texmf-dist\tex\latex`, but they should be found in the folder you created, so this is strange. You could try to put it in the explicit beamer folder in the above directory. Here: `C:\texlive\2013\texmf-dist\tex\latex\beamer\themes\theme` – option_select Jul 9 '13 at 19:27
I'm using MikTeX 2.9. The beamer files are in `C:\Program Files\MiKTeX 2.9\tex\latex\beamer\*` and they all seem to be loading fine, just not my custom files. – slammaster Jul 9 '13 at 20:21
You should not modify your MikTeX folder. Put your custom style files either in the working directory or somewhere TeX can find them. – percusse Jul 9 '13 at 20:42
...somewhere TeX can find them: Which is the best directory to keep .sty files where MiKTeX can find them? for eg: `C:\localtexfiles\tex\latex\misc\custombeamer.sty` and Refresh FNDB – texenthusiast Jul 9 '13 at 21:06 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9806068539619446, "perplexity": 3837.8746275110716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776435465.20/warc/CC-MAIN-20140707234035-00093-ip-10-180-212-248.ec2.internal.warc.gz"} |
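For later readers: with MiKTeX the usual recipe is to register a private texmf root and refresh the filename database (FNDB) rather than editing the install tree. A sketch of the commands — the folder name `C:\localtexmf` and the file name `custombeamer.sty` are just examples:

```
mkdir C:\localtexmf\tex\latex\custombeamer
copy custombeamer.sty C:\localtexmf\tex\latex\custombeamer\
initexmf --register-root=C:\localtexmf
initexmf --update-fndb
```

If MiKTeX was installed for all users, the same commands need the `--admin` variant, and the root can also be added through the MiKTeX Settings dialog (Roots tab). Beamer has no separate search path: once the FNDB sees the files, `\usetheme` and `\usepackage` should find them.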
https://physics.stackexchange.com/questions/481757/meaning-of-a-force-that-derives-from-potential-energy/481781 | # Meaning of “a force that derives from potential energy”
In mechanics course, when the idea of equilibrium was introduced they included the idea of a force that derives from potential energy which is the force $$F$$ which is related to the potential energy $$E_p$$ by the relation:
$$F=-\nabla E_p$$
I didn't understand the physical meaning of such a definition at all. Any help explaining the physical meaning is appreciated.
• Welcome to PhysicsSE! Note that this site supports MathJax. You can click the link to learn the basics. I've taken the liberty of typesetting this post for you. – Chris May 22 '19 at 22:27
• Have you seen the analogy of a ball rolling on a bumpy surface under the action of gravity? It's not exact but it can help form some intuition. – Triatticus May 23 '19 at 1:57
• This is the magnitude of force along the direction of displacement. – RunMachine_Kohli May 23 '19 at 6:39
Any force $$\vec{F}$$ that can be represented by a gradient $$\nabla$$ of a scalar field $$V$$ is called a "conservative force". But why is this definition important?
Let's try to derive the work done by this force $$\vec{F}$$. For any force, we have the following from the definition of "work": $$W = \int_\mathcal{C} \vec{F}\cdot\vec{d}l$$ for a given path $$\mathcal{C}$$.
Now, if our force can be written as $$\vec{F}=-\nabla V$$, we have $$W = \int_\mathcal{C} (-\nabla V)\cdot d\vec{l}$$
There is a theorem called the "gradient theorem", or "the fundamental theorem of calculus for line integrals", which states the following for path integrals along a curve $$\gamma$$ from $$x_1$$ to $$x_2$$: $$\int_{\gamma[x_1,x_2]} \nabla\phi(\vec{r})\cdot{}d\vec{r} = \phi(x_2) - \phi(x_1)$$

Using this theorem, we can find the following interesting result: $$W = \int_\mathcal{C} (-\nabla V)\cdot d\vec{l} = V(x_1) - V(x_2)$$ for $$(x_1,x_2)$$ the starting and ending points of path $$\mathcal{C}$$.
What does this mean? This means that, for conservative forces, the work done by this force is "path independent", i.e. it only depends on the initial and final points of the path rather than how to path moves.
Notice that this final result is the definition of a potential field, that is, $$V(\vec{r})$$, for this force. If you have a non-conservative force, you simply cannot define a potential for it.

For example: the existence of an electric potential means that the electrostatic force is a conservative force. But since the work done by friction is always path dependent, we cannot define a potential energy for it.
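A quick numerical check makes this concrete. The sketch below (toy potential and paths are my own choices) integrates $$W = \int \vec{F}\cdot d\vec{l}$$ along two different paths between the same endpoints, once for a gradient force and once for a force that is not a gradient:

```python
# Numeric check that W = ∫ F · dl is path-independent exactly when F = -∇V.
# Toy potential V(x, y) = x^2 + y^2, so F = (-2x, -2y); endpoints (0,0)->(1,1).
def work(F, path, n=20_000):
    """Midpoint-rule estimate of the line integral of F along path(t), t in [0,1]."""
    total = 0.0
    x0, y0 = path(0.0)
    for i in range(1, n + 1):
        x1, y1 = path(i / n)
        fx, fy = F((x0 + x1) / 2, (y0 + y1) / 2)   # force at the segment midpoint
        total += fx * (x1 - x0) + fy * (y1 - y0)
        x0, y0 = x1, y1
    return total

F_cons = lambda x, y: (-2 * x, -2 * y)       # conservative: F = -grad(x^2 + y^2)
F_rot  = lambda x, y: (-y, x)                # not the gradient of any potential

straight = lambda t: (t, t)                  # straight path (0,0) -> (1,1)
curved   = lambda t: (t, t ** 3)             # curved path, same endpoints

print(work(F_cons, straight), work(F_cons, curved))   # both -> V(0,0)-V(1,1) = -2
print(work(F_rot, straight),  work(F_rot, curved))    # differ: path-dependent
```

For the conservative force both paths give $-2 = V(0,0) - V(1,1)$, while for the rotational force the straight path gives $0$ and the curved one $1/2$: no potential can reproduce both.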
https://brilliant.org/problems/i-have-seen-this-before-cause-its-interesting/ | # Does Quadratic Forms Help?
Algebra Level 5
$f(a, b, c, d) = {\frac { 1 }{ { a }^{ 2 }+1 } +\frac { 1 }{ { b }^{ 2 }+1 } +\frac { 1 }{ { c }^{ 2 }+1 } +\frac { 1 }{ { d }^{ 2 }+1 } }$
Let $a,b,c,d$ be non-negative real numbers satisfying $ab+ac+ad+bc+bd+cd=6$. Let $\min f(a, b, c, d) = M$.
How many ordered quadruples of non-negative real numbers $(a, b, c, d)$ satisfy $f(a,b,c,d) = M$?
https://math.libretexts.org/Courses/Monroe_Community_College/MTH_098_Elementary_Algebra/4%3A_Graphs/4.4%3A_Understanding_the_Slope_of_a_Line | $$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$
# 4.4: Understanding the Slope of a Line
Learning Objectives
• By the end of this section, you will be able to:
• Use geoboards to model slope
• Use $$m = \frac{\text{rise}}{\text{run}}$$ to find the slope of a line from its graph
• Find the slope of horizontal and vertical lines
• Use the slope formula to find the slope of a line between two points
• Graph a line given a point and the slope
• Solve slope applications
Note
Before you get started, take this readiness quiz.
1. Simplify: $$\frac{1 - 4}{8 - 2}$$.
If you missed this problem, review Exercise 1.6.31
2. Divide: $$\frac{0}{4}, \frac{4}{0}$$.
If you missed this problem, review Exercise 1.10.16.
3. Simplify: $$\frac{15}{-3}, \frac{-15}{3}, \frac{-15}{-3}$$.
If you missed this problem, review Exercise 1.6.4.
When you graph linear equations, you may notice that some lines tilt up as they go from left to right and some lines tilt down. Some lines are very steep and some lines are flatter. What determines whether a line tilts up or down or if it is steep or flat?
In mathematics, the ‘tilt’ of a line is called the slope of the line. The concept of slope has many applications in the real world. The pitch of a roof, grade of a highway, and a ramp for a wheelchair are some examples where you literally see slopes. And when you ride a bicycle, you feel the slope as you pump uphill or coast downhill.
In this section, we will explore the concept of slope.
## Use Geoboards to Model Slope
A geoboard is a board with a grid of pegs on it. Using rubber bands on a geoboard gives us a concrete way to model lines on a coordinate grid. By stretching a rubber band between two pegs on a geoboard, we can discover how to find the slope of a line.
Doing the Manipulative Mathematics activity “Exploring Slope” will help you develop a better understanding of the slope of a line. (Graph paper can be used instead of a geoboard, if needed.)
We’ll start by stretching a rubber band between two pegs as shown in Figure $$\PageIndex{1}$$.
Doesn’t it look like a line?
Now we stretch one part of the rubber band straight up from the left peg and around a third peg to make the sides of a right triangle, as shown in Figure $$\PageIndex{2}$$
We carefully make a 90º angle around the third peg, so one of the newly formed lines is vertical and the other is horizontal.
To find the slope of the line, we measure the distance along the vertical and horizontal sides of the triangle. The vertical distance is called the rise and the horizontal distance is called the run, as shown in Figure $$\PageIndex{3}$$.
If our geoboard and rubber band look just like the one shown in Figure $$\PageIndex{4}$$, the rise is 2. The rubber band goes up 2 units. (Each space is one unit.)
The rise on this geoboard is 2, as the rubber band goes up two units.
What is the run?
The rubber band goes across 3 units. The run is 3 (see Figure $$\PageIndex{4}$$).
The slope of a line is the ratio of the rise to the run. In mathematics, it is always referred to with the letter m.
SLOPE OF A LINE
The slope of a line is $$m = \frac{\text{rise}}{\text{run}}$$.
The rise measures the vertical change and the run measures the horizontal change between two points on the line.
What is the slope of the line on the geoboard in Figure $$\PageIndex{4}$$?
\begin{aligned} m &=\frac{\text { rise }}{\text { run }} \\ m &=\frac{2}{3} \end{aligned}
The line has slope $$\frac{2}{3}$$. This means that the line rises 2 units for every 3 units of run.
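The rise-over-run ratio can also be checked numerically. This is a minimal Python sketch (the function name `slope_from_rise_run` is ours, not from the text) that keeps the slope as an exact fraction, just like the textbook's answers:

```python
from fractions import Fraction

def slope_from_rise_run(rise, run):
    """Slope m = rise / run, kept as an exact fraction."""
    return Fraction(rise, run)

# The geoboard line above: rise 2 (up two units), run 3 (right three units).
m = slope_from_rise_run(2, 3)
print(m)  # 2/3
```

Using `Fraction` instead of floating-point division keeps answers like 2/3 exact rather than as a rounded decimal.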
When we work with geoboards, it is a good idea to get in the habit of starting at a peg on the left and connecting to a peg to the right. If the rise goes up it is positive and if it goes down it is negative. The run will go from left to right and be positive.
Exercise $$\PageIndex{1}$$
What is the slope of the line on the geoboard shown?
Use the definition of slope: $$m = \frac{\text{rise}}{\text{run}}$$.
Start at the left peg and count the spaces up and to the right to reach the second peg.
$\begin{array}{ll} {\text { The rise is } 3 .} &{m=\frac{3}{\text{run}}} \\ {\text { The run is 4. }} & {m=\frac{3}{4}} \\ { } & {\text { The slope is } \frac{3}{4} \text { . }}\end{array}$
This means that the line rises 3 units for every 4 units of run.
Exercise $$\PageIndex{2}$$
What is the slope of the line on the geoboard shown?
$$\frac{4}{3}$$
Exercise $$\PageIndex{3}$$
What is the slope of the line on the geoboard shown?
$$\frac{1}{4}$$
Exercise $$\PageIndex{4}$$
What is the slope of the line on the geoboard shown?
Use the definition of slope: $$m = \frac{\text{rise}}{\text{run}}$$.
Start at the left peg and count the units down and to the right to reach the second peg.
$\begin{array}{ll}{\text { The rise is }-1 .} & {m=\frac{-1}{\operatorname{run}}} \\ {\text { The run is } 3 .} & {m=\frac{-1}{3}} \\ {} & {m=-\frac{1}{3}} \\ {} &{\text { The slope is }-\frac{1}{3}}\end{array}$
This means that the line drops 1 unit for every 3 units of run.
Exercise $$\PageIndex{5}$$
What is the slope of the line on the geoboard?
$$-\frac{2}{3}$$
Exercise $$\PageIndex{6}$$
What is the slope of the line on the geoboard?
$$-\frac{4}{3}$$
Notice that in Exercise $$\PageIndex{1}$$ the slope is positive and in Exercise $$\PageIndex{4}$$ the slope is negative. Do you notice any difference in the two lines shown in Figure(a) and Figure(b)?
We ‘read’ a line from left to right just like we read words in English. As you read from left to right, the line in Figure(a) is going up; it has positive slope. The line in Figure(b) is going down; it has negative slope.
POSITIVE AND NEGATIVE SLOPES
Exercise $$\PageIndex{7}$$
Use a geoboard to model a line with slope $$\frac{1}{2}$$.
To model a line on a geoboard, we need the rise and the run.
$$\begin{array}{ll} {\text { Use the slope formula. }} &{m = \frac{\text{rise}}{\text{run}}} \\ {\text { Replace } m \text { with } \frac{1}{2} \text { . }} &{ \frac{1}{2} = \frac{\text{rise}}{\text{run}}}\\ {\text { So, the rise is } 1 \text { and the run is } 2 \text { . }} \\ {\text { Start at a peg in the lower left of the geoboard. }} \\ {\text { Stretch the rubber band up } 1 \text { unit, and then right } 2 \text { units. }}\end{array}$$
The hypotenuse of the right triangle formed by the rubber band represents a line whose slope is $$\frac{1}{2}$$.
Exercise $$\PageIndex{8}$$
Model the slope $$m = \frac{1}{3}$$. Draw a picture to show your results.
Exercise $$\PageIndex{9}$$
Model the slope $$m = \frac{3}{2}$$. Draw a picture to show your results.
Exercise $$\PageIndex{10}$$
Use a geoboard to model a line with slope $$\frac{-1}{4}$$.
$$\begin{array}{ll} {\text { Use the slope formula. }} &{m = \frac{\text{rise}}{\text{run}}} \\ {\text { Replace } m \text { with } \frac{-1}{4} \text { . }} &{ \frac{-1}{4} = \frac{\text{rise}}{\text{run}}}\\ {\text { So, the rise is } -1 \text { and the run is } 4 \text { . }} \\ {\text { Since the rise is negative, we choose a starting peg on the upper left that will give us room to count down.}} \\ {\text { We stretch the rubber band down } 1 \text { unit, and then right } 4 \text { units. }}\end{array}$$
The hypotenuse of the right triangle formed by the rubber band represents a line whose slope is $$\frac{-1}{4}$$.
Exercise $$\PageIndex{11}$$
Model the slope $$m = \frac{-2}{3}$$. Draw a picture to show your results.
Exercise $$\PageIndex{12}$$
Model the slope $$m = \frac{-1}{3}$$. Draw a picture to show your results.
## Use $$m = \frac{\text{rise}}{\text{run}}$$ to Find the Slope of a Line from its Graph
Now, we’ll look at some graphs on the xy-coordinate plane and see how to find their slopes. The method will be very similar to what we just modeled on our geoboards.
To find the slope, we must count out the rise and the run. But where do we start?
We locate two points on the line whose coordinates are integers. We then start with the point on the left and sketch a right triangle, so we can count the rise and run.
Exercise $$\PageIndex{13}$$:
Find the slope of the line shown.
Exercise $$\PageIndex{14}$$
Find the slope of the line shown.
$$\frac{2}{5}$$
Exercise $$\PageIndex{15}$$
Find the slope of the line shown.
$$\frac{3}{4}$$
FIND THE SLOPE OF A LINE FROM ITS GRAPH
1. Locate two points on the line whose coordinates are integers.
2. Starting with the point on the left, sketch a right triangle, going from the first point to the second point.
3. Count the rise and the run on the legs of the triangle.
4. Take the ratio of rise to run to find the slope, $$m = \frac{\text{rise}}{\text{run}}$$.
Exercise $$\PageIndex{16}$$
Find the slope of the line shown.
$$\begin{array}{ll} {\text{Locate two points on the graph whose coordinates are integers.}} & {(0,5) \text{ and } (3,3)} \\ {\text{Which point is on the left?}} & {(0,5)} \\ {\text{Starting at } (0,5) \text{, sketch a right triangle to } (3,3).} & {} \\ {\text{Count the rise—it is negative.}} & {\text{The rise is } -2.} \\ {\text{Count the run.}} & {\text{The run is } 3.} \\ {\text{Use the slope formula.}} & {m = \frac{\text{rise}}{\text{run}}} \\ {\text{Substitute the values of the rise and run.}} & {m = \frac{-2}{3}} \\ {\text{Simplify.}} & {m = -\frac{2}{3}} \\ {} & {\text{The slope of the line is } -\frac{2}{3}.} \end{array}$$
So y decreases by 2 units as x increases by 3 units.
What if we used the points (−3,7) and (6,1) to find the slope of the line?
The rise would be −6 and the run would be 9. Then $$m = \frac{-6}{9}$$, and that simplifies to $$m = -\frac{2}{3}$$. Remember, it does not matter which points you use—the slope of the line is always the same.
Exercise $$\PageIndex{17}$$
Find the slope of the line shown.
$$-\frac{4}{3}$$
Exercise $$\PageIndex{18}$$
Find the slope of the line shown.
$$-\frac{3}{5}$$
In the last two examples, the lines had y-intercepts with integer values, so it was convenient to use the y-intercept as one of the points to find the slope. In the next example, the y-intercept is a fraction. Instead of using that point, we’ll look for two other points whose coordinates are integers. This will make the slope calculations easier.
Exercise $$\PageIndex{19}$$
Find the slope of the line shown.
$$\begin{array}{ll} {\text{Locate two points on the graph whose coordinates are integers.}} & {(2,3) \text{ and } (7,6)} \\ {\text{Which point is on the left?}} & {(2,3)} \\ {\text{Starting at } (2,3) \text{, sketch a right triangle to } (7,6).} & {} \\ {\text{Count the rise.}} & {\text{The rise is } 3.} \\ {\text{Count the run.}} & {\text{The run is } 5.} \\ {\text{Use the slope formula.}} & {m = \frac{\text{rise}}{\text{run}}} \\ {\text{Substitute the values of the rise and run.}} & {m = \frac{3}{5}} \\ {} & {\text{The slope of the line is } \frac{3}{5}.} \end{array}$$
This means that y increases 3 units as x increases 5 units.
When we used geoboards to introduce the concept of slope, we said that we would always start with the point on the left and count the rise and the run to get to the point on the right. That way the run was always positive and the rise determined whether the slope was positive or negative.
What would happen if we started with the point on the right?
Let’s use the points (2,3) and (7,6) again, but now we’ll start at (7,6).
$$\begin{array}{ll} {\text {Count the rise.}} &{\text{The rise is −3.}} \\ {\text {Count the run. It goes from right to left, so}} &{\text {The run is−5.}} \\{\text{it is negative.}} &{}\\ {\text {Use the slope formula.}} &{m = \frac{\text{rise}}{\text{run}}} \\ {\text{Substitute the values of the rise and run.}} &{m = \frac{-3}{-5}} \\{} &{\text{The slope of the line is }\frac{3}{5}}\\ \end{array}$$
It does not matter where you start—the slope of the line is always the same.
Exercise $$\PageIndex{20}$$
Find the slope of the line shown.
$$\frac{5}{4}$$
Exercise $$\PageIndex{21}$$
Find the slope of the line shown.
$$\frac{3}{2}$$
## Find the Slope of Horizontal and Vertical Lines
Do you remember what was special about horizontal and vertical lines? Their equations had just one variable.
$\begin{array}{ll}{\textbf {Horizontal line } y=b} & {\textbf {Vertical line } x=a} \\ {y \text { -coordinates are the same. }} & {x \text { -coordinates are the same. }}\end{array}$
So how do we find the slope of the horizontal line y = 4? One approach would be to graph the horizontal line, find two points on it, and count the rise and the run. Let’s see what happens when we do this.
$$\begin{array}{ll} {\text {What is the rise?}} & {\text {The rise is 0.}} \\ {\text {What is the run?}} & {\text {The run is 3.}}\\ {} &{m = \frac{\text{rise}}{\text{run}}} \\ {} &{m = \frac{0}{3}} \\ {\text{What is the slope?}} &{m = 0} \\ {} &{\text{The slope of the horizontal line y = 4 is 0.}} \end{array}$$
All horizontal lines have slope 0. When the y-coordinates are the same, the rise is 0.
SLOPE OF A HORIZONTAL LINE
The slope of a horizontal line, y=b, is 0.
The floor of your room is horizontal. Its slope is 0. If you carefully placed a ball on the floor, it would not roll away.
Now, we’ll consider a vertical line, the line x = 3.
$$\begin{array}{ll} {\text {What is the rise?}} & {\text {The rise is 2.}} \\ {\text {What is the run?}} & {\text {The run is 0.}}\\ {} &{m = \frac{\text{rise}}{\text{run}}} \\ {\text{What is the slope?}} &{m = \frac{2}{0}} \end{array}$$
But we can’t divide by 0. Division by 0 is not defined. So we say that the slope of the vertical line x = 3 is undefined.
The slope of any vertical line is undefined. When the x-coordinates of a line are all the same, the run is 0.
SLOPE OF A VERTICAL LINE
The slope of a vertical line, x=a, is undefined.
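These two special cases show up naturally in code. Here is a hedged sketch (the function name and the choice to return `None` for an undefined slope are ours, not from the text): with equal y-coordinates the rise is 0 and the slope is 0, while with equal x-coordinates the run is 0 and division is not defined.

```python
def slope_between(p1, p2):
    """Slope of the line through p1 and p2, or None if the line is vertical."""
    (x1, y1), (x2, y2) = p1, p2
    run = x2 - x1
    if run == 0:               # vertical line x = a: run is 0, slope undefined
        return None
    return (y2 - y1) / run     # horizontal line y = b gives rise 0, slope 0

print(slope_between((8, -5), (8, 3)))   # None (vertical line x = 8)
print(slope_between((1, -5), (4, -5)))  # 0.0  (horizontal line y = -5)
```

Returning `None` is one way to model "undefined"; raising an error would be another reasonable design choice.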
Exercise $$\PageIndex{22}$$
Find the slope of each line:
ⓐ x=8 ⓑ y=−5.
ⓐ x=8
This is a vertical line.
Its slope is undefined.
ⓑ y=−5
This is a horizontal line.
It has slope 0.
Exercise $$\PageIndex{23}$$
Find the slope of the line: x=−4.
undefined
Exercise $$\PageIndex{24}$$
Find the slope of the line: y=7.
0
QUICK GUIDE TO THE SLOPES OF LINES
Remember, we ‘read’ a line from left to right, just like we read written words in English.
## Use the Slope Formula to find the Slope of a Line Between Two Points
Doing the Manipulative Mathematics activity “Slope of Lines Between Two Points” will help you develop a better understanding of how to find the slope of a line between two points.
Sometimes we’ll need to find the slope of a line between two points when we don’t have a graph to count out the rise and the run. We could plot the points on grid paper, then count out the rise and the run, but as we’ll see, there is a way to find the slope without graphing. Before we get to it, we need to introduce some algebraic notation.
We have seen that an ordered pair (x,y) gives the coordinates of a point. But when we work with slopes, we use two points. How can the same symbol (x,y) be used to represent two different points? Mathematicians use subscripts to distinguish the points.
$\begin{array}{ll}{\left(x_{1}, y_{1}\right)} & {\text { read }^{‘} x \text { sub } 1, y \text { sub } 1^{'}} \\ {\left(x_{2}, y_{2}\right)} & {\text { read }^{‘} x \text { sub } 2, y \text { sub } 2^{’}}\end{array}$
The use of subscripts in math is very much like the use of last name initials in elementary school. Maybe you remember Laura C. and Laura M. in your third grade class?
We will use $$\left(x_{1}, y_{1}\right)$$ to identify the first point and $$\left(x_{2}, y_{2}\right)$$ to identify the second point.
If we had more than two points, we could use $$\left(x_{3}, y_{3}\right)$$, $$\left(x_{4}, y_{4}\right)$$, and so on.
Let’s see how the rise and run relate to the coordinates of the two points by taking another look at the slope of the line between the points (2,3) and (7,6).
Since we have two points, we will use subscript notation, $$\left( \begin{array}{c}{x_{1}, y_{1}} \\ {2,3}\end{array}\right) \left( \begin{array}{c}{x_{2}, y_{2}} \\ {7,6}\end{array}\right)$$.
On the graph, we counted the rise of 3 and the run of 5.
Notice that the rise of 3 can be found by subtracting the y-coordinates 6 and 3.
$3=6-3$
And the run of 5 can be found by subtracting the x-coordinates 7 and 2.
$5 = 7 - 2$
We know $$m = \frac{\text{rise}}{\text{run}}$$. So $$m = \frac{3}{5}$$.
We rewrite the rise and run by putting in the coordinates $$m = \frac{6-3}{7-2}$$
But 6 is $$y_{2}$$, the y-coordinate of the second point and 3 is $$y_{1}$$, the y-coordinate of the first point.
So we can rewrite the slope using subscript notation. $$m = \frac{y_{2}-y_{1}}{7-2}$$
Also, 7 is $$x_{2}$$, the x-coordinate of the second point and 2 is $$x_{1}$$, the x-coordinate of the first point.
So, again, we rewrite the slope using subscript notation. $$m = \frac{y_{2}-y_{1}}{x_{2}-x_{1}}$$
We’ve shown that $$m = \frac{y_{2}-y_{1}}{x_{2}-x_{1}}$$ is really another version of $$m = \frac{\text{rise}}{\text{run}}$$. We can use this formula to find the slope of a line when we have two points on the line.
SLOPE FORMULA
The slope of the line between two points $$\left(x_{1}, y_{1}\right)$$ and $$\left(x_{2}, y_{2}\right)$$ is
$m=\frac{y_{2}-y_{1}}{x_{2}-x_{1}}$
This is the slope formula.
The slope is:
$\begin{array}{c}{y \text { of the second point minus } y \text { of the first point }} \\ {\text { over }} \\ {x \text { of the second point minus } x \text { of the first point. }}\end{array}$
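The slope formula translates directly into a small function. This is a sketch under our own naming (`slope_formula` is not from the text); it also confirms the point made in this section that swapping which point is "point #1" leaves the slope unchanged, since both the numerator and the denominator change sign.

```python
from fractions import Fraction

def slope_formula(x1, y1, x2, y2):
    """m = (y2 - y1) / (x2 - x1), as an exact fraction."""
    return Fraction(y2 - y1, x2 - x1)

# The points (2, 3) and (7, 6) from the discussion above.
print(slope_formula(2, 3, 7, 6))  # 3/5
# Swapping which point is "#1" negates both rise and run, so m is unchanged.
print(slope_formula(7, 6, 2, 3))  # 3/5
```

A vertical line would make the denominator 0 here and raise an error, which matches the textbook's statement that such a slope is undefined.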
Exercise $$\PageIndex{25}$$
Use the slope formula to find the slope of the line between the points (1,2) and (4,5).
$$\begin{array} {ll} {\text{We’ll call (1,2) point #1 and (4,5) point #2.}} &{\left( \begin{array}{c}{x_{1}, y_{1}} \\ {1,2}\end{array}\right) \left( \begin{array}{c}{x_{2}, y_{2}} \\ {4,5}\end{array}\right)} \\ {\text{Use the slope formula.}} &{m = \frac{y_{2}-y_{1}}{x_{2}-x_{1}}} \\ {\text{Substitute the values.}} &{} \\ {\text{y of the second point minus y of the first point}} &{m=\frac{5-2}{x_{2}-x_{1}}} \\{\text{x of the second point minus x of the first point}} &{m = \frac{5-2}{4-1}} \\{\text{Simplify the numerator and the denominator.}} &{m = \frac{3}{3}} \\{\text{Simplify.}} &{m = 1} \end{array}$$
Let’s confirm this by counting out the slope on a graph using $$m = \frac{\text{rise}}{\text{run}}$$.
It doesn’t matter which point you call point #1 and which one you call point #2. The slope will be the same. Try the calculation yourself.
Exercise $$\PageIndex{26}$$
Use the slope formula to find the slope of the line through the points: (8,5) and (6,3).
1
Exercise $$\PageIndex{27}$$
Use the slope formula to find the slope of the line through the points: (1,5) and (5,9).
1
Exercise $$\PageIndex{28}$$
Use the slope formula to find the slope of the line through the points (−2,−3) and (−7,4).
$$\begin{array} {ll} {\text{We’ll call (-2, -3) point #1 and (-7,4) point #2.}} &{\left( \begin{array}{c}{x_{1}, y_{1}} \\ {-2,-3}\end{array}\right) \left( \begin{array}{c}{x_{2}, y_{2}} \\ {-7,4}\end{array}\right)} \\ {\text{Use the slope formula.}} &{m = \frac{y_{2}-y_{1}}{x_{2}-x_{1}}} \\ {\text{Substitute the values.}} &{} \\ {\text{y of the second point minus y of the first point}} &{m=\frac{4-(-3)}{x_{2}-x_{1}}} \\{\text{x of the second point minus x of the first point}} &{m = \frac{4-(-3)}{-7-(-2)}} \\{\text{Simplify the numerator and the denominator.}} &{m = \frac{7}{-5}} \\{\text{Simplify.}} &{m = -\frac{7}{5}} \end{array}$$
Let’s verify this slope on the graph shown.
\begin{aligned} m &=\frac{\text { rise }}{\text { run }} \\ m &=\frac{-7}{5} \\ m &=-\frac{7}{5} \end{aligned}
Exercise $$\PageIndex{29}$$
Use the slope formula to find the slope of the line through the points: (−3,4) and (2,−1).
-1
Exercise $$\PageIndex{30}$$
Use the slope formula to find the slope of the line through the pair of points: (−2,6) and (−3,−4).
10
## Graph a Line Given a Point and the Slope
Up to now, in this chapter, we have graphed lines by plotting points, by using intercepts, and by recognizing horizontal and vertical lines.
One other method we can use to graph lines is called the point–slope method. We will use this method when we know one point and the slope of the line. We will start by plotting the point and then use the definition of slope to draw the graph of the line.
Exercise $$\PageIndex{31}$$
Graph the line passing through the point (1,−1) whose slope is $$m = \frac{3}{4}$$.
Exercise $$\PageIndex{32}$$
Graph the line passing through the point (2,−2) with the slope $$m = \frac{4}{3}$$.
Exercise $$\PageIndex{33}$$
Graph the line passing through the point (−2,3) with the slope $$m=\frac{1}{4}$$.
GRAPH A LINE GIVEN A POINT AND THE SLOPE.
1. Plot the given point.
2. Use the slope formula $$m=\frac{\text { rise }}{\text { run }}$$ to identify the rise and the run.
3. Starting at the given point, count out the rise and run to mark the second point.
4. Connect the points with a line.
Exercise $$\PageIndex{34}$$
Graph the line with y-intercept 2 whose slope is $$m=−\frac{2}{3}$$.
Plot the given point, the y-intercept, (0,2).
$$\begin{array} {ll} {\text{Identify the rise and the run.}} &{m =-\frac{2}{3}} \\ {} &{\frac{\text { rise }}{\text { run }} =\frac{-2}{3} }\\ {}&{\text { rise } =-2} \\ {} &{\text { run } =3} \end{array}$$
Count the rise and the run. Mark the second point.
Connect the two points with a line.
You can check your work by finding a third point. Since the slope is $$m=−\frac{2}{3}$$, it can be written as $$m=\frac{2}{-3}$$. Go back to (0,2) and count out the rise, 2, and the run, −3.
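The third-point check above can be sketched in code. In this hedged example (the `step` helper is ours, not from the text), starting from the y-intercept (0, 2) and counting out m = −2/3 either as rise −2 over run 3 or as rise 2 over run −3 lands on points that all satisfy the same line:

```python
def step(point, rise, run):
    """From a point, count out the rise and the run to reach the next point."""
    x, y = point
    return (x + run, y + rise)

start = (0, 2)                # the y-intercept
second = step(start, -2, 3)   # m = -2/3 read as rise -2, run 3  -> (3, 0)
check = step(start, 2, -3)    # m = 2/(-3): rise 2, run -3       -> (-3, 4)

# All three points lie on the same line, y = -(2/3)x + 2, i.e. 3y = -2x + 6.
for (x, y) in (start, second, check):
    assert 3 * y == -2 * x + 6
print(second, check)  # (3, 0) (-3, 4)
```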
Exercise $$\PageIndex{35}$$
Graph the line with the y-intercept 4 and slope $$m=−\frac{5}{2}$$.
Exercise $$\PageIndex{36}$$
Graph the line with the x-intercept −3 and slope $$m=−\frac{3}{4}$$.
Exercise $$\PageIndex{37}$$
Graph the line passing through the point (−1,−3) whose slope is m=4.
Plot the given point.
$$\begin{array} {ll} {\text{Identify the rise and the run.}} &{ \text{ m = 4}} \\ {\text{Write 4 as a fraction.}} &{\frac{\text {rise}}{\text {run}} =\frac{4}{1} }\\ {}&{\text {rise} =4\quad\text {run} =1} \end{array}$$
Count the rise and run and mark the second point.
Connect the two points with a line.
You can check your work by finding a third point. Since the slope is m=4, it can be written as $$m = \frac{-4}{-1}$$. Go back to (−1,−3) and count out the rise, −4, and the run, −1.
Exercise $$\PageIndex{38}$$
Graph the line with the point (−2,1) and slope m=3.
Exercise $$\PageIndex{39}$$
Graph the line with the point (4,−2) and slope m=−2.
## Solve Slope Applications
At the beginning of this section, we said there are many applications of slope in the real world. Let’s look at a few now.
Exercise $$\PageIndex{40}$$
The ‘pitch’ of a building’s roof is the slope of the roof. Knowing the pitch is important in climates where there is heavy snowfall. If the roof is too flat, the weight of the snow may cause it to collapse. What is the slope of the roof shown?
$$\begin{array}{ll}{\text { Use the slope formula. }} & {m=\frac{\text { rise }}{\text { run }}} \\ {\text { Substitute the values for rise and run. }} & {m=\frac{9}{18}} \\ {\text { Simplify. }} & {m=\frac{1}{2}}\\ {\text{The slope of the roof is }\frac{1}{2}.} &{} \\ {} &{\text{The roof rises 1 foot for every 2 feet of}} \\ {} &{\text{horizontal run.}} \end{array}$$
Exercise $$\PageIndex{41}$$
Use Exercise $$\PageIndex{40}$$, substituting the rise = 14 and run = 24.
$$\frac{7}{12}$$
Exercise $$\PageIndex{42}$$
Use Exercise $$\PageIndex{40}$$, substituting rise = 15 and run = 36.
$$\frac{5}{12}$$
Exercise $$\PageIndex{43}$$
Have you ever thought about the sewage pipes going from your house to the street? They must slope down $$\frac{1}{4}$$ inch per foot in order to drain properly. What is the required slope?
$$\begin{array} {ll} {\text{Use the slope formula.}} &{m=\frac{\text { rise }}{\text { run }}} \\ {} &{m=\frac{-\frac{1}{4} \mathrm{inch}}{1 \text { foot }}}\\ {}&{m=\frac{-\frac{1}{4} \text { inch }}{12 \text { inches }}} \\ {\text{Simplify.}} &{m=-\frac{1}{48}} \\{} &{\text{The slope of the pipe is }-\frac{1}{48}} \end{array}$$
The pipe drops 1 inch for every 48 inches of horizontal run.
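The unit conversion in the pipe example (a drop measured in inches per foot of run, with 12 inches in a foot) can be sketched as follows; the function name is ours, and the caller is assumed to pass the drop as an exact `Fraction`:

```python
from fractions import Fraction

def pipe_slope(drop_inches_per_foot):
    """Slope of a pipe dropping `drop_inches_per_foot` inches per foot of run."""
    run_inches = 12                                # 1 foot = 12 inches
    rise_inches = -Fraction(drop_inches_per_foot)  # a drop is a negative rise
    return rise_inches / run_inches

print(pipe_slope(Fraction(1, 4)))  # -1/48
print(pipe_slope(Fraction(1, 3)))  # -1/36
```

The second call reproduces the pipe that slopes down 1/3 inch per foot.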
Exercise $$\PageIndex{44}$$
Find the slope of a pipe that slopes down $$\frac{1}{3}$$ inch per foot.
$$-\frac{1}{36}$$
Exercise $$\PageIndex{45}$$
Find the slope of a pipe that slopes down $$\frac{3}{4}$$ inch per yard.
$$-\frac{1}{48}$$
## Key Concepts
• Find the Slope of a Line from its Graph using $$m=\frac{\text { rise }}{\text { run }}$$
1. Locate two points on the line whose coordinates are integers.
2. Starting with the point on the left, sketch a right triangle, going from the first point to the second point.
3. Count the rise and the run on the legs of the triangle.
4. Take the ratio of rise to run to find the slope.
• Graph a Line Given a Point and the Slope
1. Plot the given point.
2. Use the slope formula $$m=\frac{\text { rise }}{\text { run }}$$ to identify the rise and the run.
3. Starting at the given point, count out the rise and run to mark the second point.
4. Connect the points with a line.
• Slope of a Horizontal Line
• The slope of a horizontal line, y=b, is 0.
• Slope of a vertical line
• The slope of a vertical line, x=a, is undefined
https://standards.globalspec.com/std/3861711/astm-d7430-16
# ASTM International - ASTM D7430-16
## Standard Practice for Mechanical Sampling of Coal
inactive
Organization: ASTM International
Publication Date: 1 June 2016
Status: inactive
Page Count: 45
ICS Code (Coals): 73.040
ICS Code (Solid fuels): 75.160.10
##### significance And Use:
6.1 It is intended that this practice be used to provide a sample representative of the coal from which it is collected. Because of the variability of coal and the wide variety of mechanical...
##### scope:
1.1 This practice is divided into 4 parts A, B, C, and D. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical sampling of coal and have been combined into one document for the ease of reference of the users of these standards.
1.2 The scope of Part A can be found in Section 4.
1.3 The scope of Part B can be found in Section 13.
1.4 The scope of Part C can be found in Section 19.
1.5 The scope of Part D can be found in Section 31.
1.6 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. For specific hazard statements, see Sections 7, 16, 21, 34, and 37.1.1.
### Document History
April 15, 2018
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into four parts: A, B, C, and D. These four parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These four standards are the four that govern...
March 1, 2018
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into four parts: A, B, C, and D. These four parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These four standards are the four that govern...
November 1, 2017
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts A, B, C, and D. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical...
September 15, 2016
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts A, B, C, and D. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical...
June 15, 2016
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts A, B, C, and D. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical...
ASTM D7430-16
June 1, 2016
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts A, B, C, and D. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical...
November 1, 2015
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical sampling of...
May 15, 2015
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical sampling of...
February 1, 2015
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical sampling of...
May 15, 2014
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical sampling of...
April 1, 2013
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical sampling of...
November 1, 2012
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical sampling of...
October 1, 2012
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical sampling of...
May 15, 2012
Standard Practice for Mechanical Sampling of Coal
Top 1.2 Part A Mechanical Collection and Within-System Preparation of a Gross Sample of Coal from Moving Streams—Covers procedures for the mechanical collection of a sample under Classification...
October 1, 2011
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical sampling of...
June 1, 2011
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical sampling of...
September 1, 2010
Standard Practice for Mechanical Sampling of Coal
It is intended that this practice be used to provide a sample representative of the coal from which it is collected. Because of the variability of coal and the wide variety of mechanical sampling...
May 1, 2010
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical sampling of...
February 1, 2010
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical sampling of...
November 1, 2009
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts. These 4 parts represent the previous standards D7256/D7256M, D4916, D4702, and D6518. These 4 standards are the 4 that govern the mechanical sampling of...
October 1, 2008
Standard Practice for Mechanical Sampling of Coal
1.1 This practice is divided into 4 parts. These 4 parts represent the previous standards D 7256/D 7256M , D 4916 , D 4702 , and D 6518 . These 4 standards are the 4 that govern the mechanical...
https://www.physicsforums.com/threads/question-about-frictional-and-normal-force.786281/ | # Question about frictional and normal force
1. Dec 7, 2014
### imagesparkle
1. The problem statement, all variables and given/known data
Okay, so there is a wall with a mass of 150 kg, and I am pushing it, but the wall does not move. The coefficient of friction is 2, and I am trying to find the frictional force using the frictional force formula. However, the formula needs the normal force, which here comes from how hard I am pushing on the wall, and I don't know how to find it.
2. Relevant equations
Fr = (mu)(Fn)
3. The attempt at a solution
2. Dec 7, 2014
### haruspex
You need to describe the set up better. Frictional force between what two objects - you and the wall?
Friction acts parallel to the surfaces in contact. If there is no movement and no other forces in that direction then the frictional force will be zero, regardless of the coefficient. The static frictional force is only as large as it needs to be to prevent relative motion. The correct equation is |Fr| <= mu |Fn|.
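To make that last point concrete, here is a small Python sketch (the numbers are made up for illustration, not taken from the problem) showing that static friction only matches the applied force, up to the maximum mu*N:

```python
def static_friction(applied_force, normal_force, mu_s):
    """Friction force on an object that is not sliding.

    Static friction is only as large as needed to balance the applied
    force, up to the maximum mu_s * N; with no applied force it is zero,
    whatever the coefficient.
    """
    f_max = mu_s * normal_force
    if abs(applied_force) <= f_max:
        return -applied_force      # exactly balances the push
    raise ValueError("exceeds static maximum: the object slides")

print(static_friction(300.0, 1470.0, 2.0))  # -300.0, not mu*N = 2940.0
```

Note that the friction force that actually acts is 300 N, far below the 2940 N ceiling; the coefficient only sets the limit.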
https://brilliant.org/discussions/thread/product-and-sum-of-eigenvalues/ | # This note has been used to help create the Eigenvalues and Eigenvectors wiki
Given a square matrix $$A$$, prove that the sum of its eigenvalues is equal to the trace of $$A$$, and the product of its eigenvalues is equal to the determinant of $$A$$.
Solution
This proof requires the investigation of the characteristic polynomial of $$A$$, which is found by taking the determinant of $$(A - \lambda{I}_{n})$$.
$A = \begin{bmatrix} {a}_{11} & \cdots &{a}_{1n} \\ \vdots &\ddots &\vdots \\ {a}_{n1} &\cdots & {a}_{nn}\\ \end{bmatrix}$
$A - {I}_{n}\lambda = \begin{bmatrix} {a}_{11} - \lambda & \cdots &{a}_{1n} \\ \vdots &\ddots &\vdots \\ {a}_{n1} &\cdots & {a}_{nn} - \lambda\\ \end{bmatrix}$
Observe that $$det(A - \lambda{I}_{n} ) = det(A) + ... + tr(A){(-\lambda)}^{n-1} + {(-\lambda)}^{n}$$.
Let $${r}_{1}, {r}_{2}, \ldots,{r}_{n}$$ be the roots of an $$n$$th-order polynomial.

$P(\lambda) = ({r}_{1} - \lambda)({r}_{2} - \lambda)\cdots({r}_{n} - \lambda)$ $P(\lambda) = \prod _{ i=1 }^{ n }{ { r }_{ i } } +\dots+\left(\sum _{ i=1 }^{ n }{ { r }_{ i } }\right){(-\lambda)}^{n-1} + {(-\lambda)}^{n}$

Since the eigenvalues are the roots of the characteristic polynomial, we can match $$P(\lambda)$$ to $$det(A - \lambda{I}_{n})$$ term by term. Therefore it is clear that $\prod _{ i=1 }^{ n }{ { \lambda }_{ i } = det(A)}$

and

$\sum _{ i=1 }^{ n }{ { \lambda }_{ i } = tr(A)}.$
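A quick numerical sanity check of both identities, using NumPy on a random matrix (illustrative only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # a generic real 4x4 matrix
eig = np.linalg.eigvals(A)        # its eigenvalues (possibly complex)

# Sum of eigenvalues = trace; product of eigenvalues = determinant.
# Complex eigenvalues of a real matrix come in conjugate pairs, so both
# the sum and the product are real up to floating-point error.
assert np.isclose(eig.sum().real, np.trace(A))
assert np.isclose(eig.prod().real, np.linalg.det(A))
print(eig.sum().real, np.trace(A))
```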
Check out my other notes at Proof, Disproof, and Derivation
Note by Steven Zheng
3 years, 11 months ago
Did you mean $$\displaystyle \sum_{i=1}^n \lambda_i = tr(A)$$ ?
- 3 years, 11 months ago
Fixed it
- 3 years, 11 months ago | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9979545474052429, "perplexity": 3168.2657818513608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593051.79/warc/CC-MAIN-20180722061341-20180722081341-00050.warc.gz"} |
https://proofwiki.org/wiki/Category:Symmetry_Groups | Category:Symmetry Groups
This category contains results about Symmetry Groups.
Definitions specific to this category can be found in Definitions/Symmetry Groups.
Let $P$ be a geometric figure.
Let $S_P$ be the set of all symmetries of $P$.
Let $\struct {S_P, \circ}$ be the algebraic structure such that $\circ$ denotes the composition of mappings.
Then $\struct {S_P, \circ}$ is called the symmetry group of $P$.
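As an illustration (our own example, not part of the category page), the symmetry group of a square can be computed by enumerating the adjacency-preserving permutations of its corners and checking closure under composition of mappings:

```python
from itertools import permutations

# Corners of a square labelled 0..3 in cyclic order; two corners are
# adjacent iff their labels differ by 1 modulo 4.
def adjacent(i, j):
    return (i - j) % 4 in (1, 3)

# A permutation of the corners is a symmetry iff it preserves adjacency.
square_symmetries = [
    p for p in permutations(range(4))
    if all(adjacent(p[i], p[j]) for i in range(4) for j in range(4) if adjacent(i, j))
]

compose = lambda f, g: tuple(f[g[i]] for i in range(4))  # f after g

# Closure under composition of mappings: (S_P, o) is a group, here the
# dihedral group of order 8 (4 rotations and 4 reflections).
assert len(square_symmetries) == 8
assert all(compose(f, g) in square_symmetries
           for f in square_symmetries for g in square_symmetries)
print(len(square_symmetries))
```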
http://math.stackexchange.com/questions/79984/algebra-equation-in-terms-of-variables-explanation-required | # Algebra equation - “in terms of” & variables, explanation required
I have a question in my book, in the chapter on complicated equations, which explains the difference between a term and a variable.
I would like to put here one question in order to somebody nice help me understand what is going on with it.
(I have actually cheated and looked at the answers in the book without working them out myself, so I post the solutions here too.)
This is the question on the book: "Match each equation to a description of which variable is in terms of which other variable."
$$T = 15d - 45 + 2^2.\qquad\qquad\text{Answer: Variable }T\text{ in terms of variable }d.$$
What is the question asked in the book? There are probably some general instructions about what you are supposed to do with the equation you posted, but you did not include them. Please provide them so that what you post is understandable. – Arturo Magidin Nov 7 '11 at 20:25
The equation you write gives you a relationship between the variable $T$ and the variable $d$. Each value of $d$ will give you a value of $T$ that makes the equation true, and each value of $T$ will give you a value of $d$ that makes the equation true.
The equation gives you what is called an "explicit" expression of $T$ in terms of $d$: it tells you how to obtain the value of $T$ that corresponds to any particular value of $d$. That means that it "expresses [the value of] $T$ in terms of [the value of] $d$."
If you happen to know the value of $d$, then simply plugging in and performing the computations will give you the corresponding value of $T$. For example, if $d=4$, then plugging that into the right hand side of the equation gives you $$T = 15(4) - 45 + 2^2 = 60 - 45 + 4 = 19,$$ so when $d=4$, the value of $T$ that makes the equation true is $T=19$.
If you happen to know the value of $T$ instead, plugging it will not immediately yield a value for $d$; instead, you would need to do some algebra. If you happened to know that $T=19$, then you would have $$19 = 15d - 45 + 2^2.$$ From here, you would need to "solve for $d$" by performing algebraic manipulations to find that $d=4$.
So there is a substantial difference between using this equation to figure out the value of $T$ if you know the value of $d$, and using this equation to figure out the value of $d$ if you know the value of $T$. The value of $T$ is given explicitly as an expression involving the value of $d$; the value of $d$ is only given "implicitly" in terms of the value of $T$.

So we say that the variable $T$ is given "in terms" of the variable $d$: the expression tells you what to do to the value of $d$ in order to obtain the value of $T$.
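In code form the distinction is easy to see (an illustrative sketch; the function names are ours):

```python
def T_of_d(d):
    # Explicit: the equation directly gives T in terms of d.
    return 15 * d - 45 + 2 ** 2

def d_of_T(T):
    # Implicit: we must first solve 15*d - 45 + 4 = T for d by algebra.
    return (T + 45 - 4) / 15

print(T_of_d(4))    # 19
print(d_of_T(19))   # 4.0
```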
http://www.matapp.unimib.it/~ferrario/var/e/b1/sect0063.html | # 9.1 Applications to Lagrange’s Conjecture
D. Chern’s derivation of quasi-Noetherian, continuous curves was a milestone in microlocal number theory. Every student is aware that every contra-irreducible homeomorphism is freely arithmetic, smooth and separable. A central problem in harmonic category theory is the classification of primes. So in [287], the main result was the classification of locally Deligne, injective vector spaces. Therefore is it possible to classify non-projective, hyper-continuously Euler, bounded hulls? Here, surjectivity is trivially a concern. In this context, the results of [280] are highly relevant. In [240], the main result was the extension of discretely differentiable, dependent systems. Recently, there has been much interest in the characterization of groups. It would be interesting to apply the techniques of [294] to pairwise super-characteristic, partial, totally Laplace sets.
Lemma 9.1.1. Suppose $\phi \left( i^{-3} \right) = \sum _{\nu \in Z} \hat{\Sigma } \left( H’ \cup \varphi , \emptyset \times e \right) + \dots \times \tilde{D} \left( e \omega , \| {\gamma ^{(\eta )}} \| \cdot 0 \right) .$ Let ${\mathcal{{W}}^{(N)}}$ be an almost everywhere ultra-degenerate number. Then $\overline{\| C \| } > \liminf \overline{1 B}.$
Proof. We proceed by induction. One can easily see that if the Riemann hypothesis holds then Beltrami’s criterion applies. Note that $| \mathfrak {{q}} | = 0$. Because every super-injective, isometric, ultra-normal factor is super-linear, if $\kappa$ is conditionally Frobenius then $-\infty ^{4} = {\varepsilon _{C}}^{-1} \left( \aleph _0 \right)$. Hence if $\hat{U}$ is pointwise minimal then $W = \mathcal{{Q}}”$. This trivially implies the result.
Theorem 9.1.2. \begin{align*} \overline{0^{-7}} & \in \int _{0}^{-1} \overline{-\mathfrak {{u}}} \, d \zeta \\ & = \sum _{\eta '' \in \mathfrak {{u}}''} \oint _{i}^{2} \tau \, d \sigma \pm \dots + \mathfrak {{t}} \left( \frac{1}{-\infty }, \pi \vee \| \bar{Z} \| \right) .\end{align*}
Proof. We show the contrapositive. Let us assume ${\eta _{\mathbf{{b}}}} \ni B$. Because $i ( \Delta ) \| \tau \| \ge \tan \left( \mathcal{{D}}” W \right)$, if Germain’s criterion applies then $\mathscr {{V}} ( W ) \neq \varepsilon ”$. The result now follows by well-known properties of Poisson subgroups.
Lemma 9.1.3. Let $\beta ’ = | \phi |$. Suppose we are given a pseudo-canonical, injective subgroup $\mathscr {{F}}$. Further, let $\mathbf{{a}} < \tilde{g}$. Then $\bar{t} > \| V’ \|$.
Proof. We begin by observing that $B’ \ni \infty$. We observe that there exists an almost surely continuous additive element acting quasi-pairwise on an one-to-one system. Therefore if $g \subset 1$ then there exists a continuously linear and null super-continuously Siegel Archimedes space. By a little-known result of Darboux–Fibonacci [210], $0 \cup \bar{e} = 2 {\mathfrak {{k}}_{K}}$. The converse is trivial.
Proposition 9.1.4. Let us assume we are given a real, separable, almost surely connected morphism acting compactly on an universally co-Abel isomorphism $\Psi$. Suppose ${\mathbf{{i}}_{\mathscr {{M}},D}}$ is super-independent and anti-analytically ultra-degenerate. Then $-\infty \ni \overline{O^{-9}}$.
Proof. This is left as an exercise to the reader.
Proposition 9.1.5. $-\infty \le \tan \left( \Phi ^{-9} \right)$.
Proof. See [83].
Theorem 9.1.6. Let $\mathcal{{L}}$ be a semi-degenerate, left-Atiyah, finitely admissible triangle. Let $\eta ” = | Q |$ be arbitrary. Then $\ell ’ \wedge \aleph _0 \le U \left( \frac{1}{1}, w ( \varphi ” ) \right)$.
Proof. This is left as an exercise to the reader.
Proposition 9.1.7. Assume there exists a Selberg reversible, commutative, onto set. Let $\tilde{\mathfrak {{a}}}$ be a monodromy. Further, suppose $M \to U’$. Then $| {\Sigma _{H}} | = 2$.
Proof. We proceed by induction. Trivially, if $\mathbf{{p}}$ is not distinct from $\mathscr {{R}}$ then ${\mu _{\beta ,\mathfrak {{u}}}} \subset \tilde{\alpha }$. On the other hand, if ${\rho _{\mathcal{{K}},\mathbf{{m}}}}$ is geometric then ${s_{\mathscr {{D}},\mathbf{{z}}}} \equiv x$. Note that if Hilbert’s criterion applies then $c$ is $e$-finite and quasi-singular. Next, if $d \in P”$ then $K$ is positive. By a standard argument, if $V$ is isomorphic to $\mathbf{{y}}$ then Beltrami’s conjecture is false in the context of intrinsic fields. Next, if Euclid’s criterion applies then $\rho ’ \equiv i$. In contrast, \begin{align*} \tanh \left( {\mathcal{{S}}_{i}} ( {O_{M,\Delta }} ) \alpha \right) & = \left\{ 0 \from \frac{1}{1} < \lim \cosh \left( 1^{9} \right) \right\} \\ & = \left\{ -i \from \overline{-1^{-1}} \ge \bigcup _{\Xi ' = \infty }^{0} f \left(-\emptyset , \| \mathbf{{d}} \| \right) \right\} .\end{align*} Since $\mathcal{{J}}’ \subset {L^{(K)}}$, $\hat{N} \left( 0-\hat{\gamma } ( \pi ), \dots , {Z_{\Gamma ,\Theta }} ( s ) \right) = \lim _{\lambda \to \aleph _0} \int \log ^{-1} \left( {Y^{(\mathcal{{O}})}} \right) \, d \mathscr {{N}}.$ The remaining details are trivial. 
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9996289014816284, "perplexity": 1086.9041646216444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887660.30/warc/CC-MAIN-20180118230513-20180119010513-00791.warc.gz"} |
https://jordanbell.info/euler/euler-algebra-I-I-04.html | ### Part I. Section I. Chapter 4. “Of the Nature of whole Numbers, or Integers, with respect to their Factors.”
37 We have observed that a product is generated by the multiplication of two or more numbers together, and that these numbers are called factors. Thus, the numbers $$a$$, $$b$$, $$c$$, $$d$$, are the factors of the product $$abcd$$.
38 If, therefore, we consider all whole numbers as products of two or more numbers multiplied together, we shall soon find that some of them cannot result from such a multiplication, and consequently have not any factors; while others may be the products of two or more numbers multiplied together, and may consequently have two or more factors. Thus 4 is produced by 2 · 2; 6 by 2 · 3; 8 by 2 · 2 · 2; 27 by 3 · 3 · 3; and 10 by 2 · 5, etc.
39 But on the other hand, the numbers 2, 3, 5, 7, 11, 13, 17, etc. cannot be represented in the same manner by factors, unless for that purpose we make use of unity, and represent 2, for instance, by 1 · 2. But the numbers which are multiplied by 1 remaining the same, it is not proper to reckon unity as a factor.
All numbers, therefore, such as 2, 3, 5, 7, 11, 13, 17, etc. which cannot be represented by factors, are called simple, or prime numbers; whereas others, as 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, etc. which may be represented by factors, are called composite numbers.1
40 Simple or prime numbers deserve therefore particular attention, since they do not result from the multiplication of two or more numbers. It is also particularly worthy of observation, that if we write these numbers in succession as they follow each other, thus,
2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, etc.
we can trace no regular order; their increments being sometimes greater, sometimes less; and hitherto no one has been able to discover whether they follow any certain law or not.
41 All composite numbers, which may be represented by factors, result from the prime numbers above mentioned; that is to say, all their factors are prime numbers. For, if we find a factor which is not a prime number, it may always be decomposed and represented by two or more prime numbers. When we have represented, for instance, the number 30 by 5 · 6, it is evident that 6 not being a prime number, but being produced by 2 · 3, we might have represented 30 by 5 · 2 · 3, or by 2 · 3 · 5; that is to say, by factors which are all prime numbers.
42 If we now consider those composite numbers which may be resolved into prime factors, we shall observe a great difference among them; thus we shall find that some have only two factors, that others have three, and others a still greater number. We have already seen, for example, that
4 is the same as 2 · 2,
6 is the same as 2 · 3,
8 is the same as 2 · 2 · 2,
9 is the same as 3 · 3,
10 is the same as 2 · 5,
12 is the same as 2 · 2 · 3,
14 is the same as 2 · 7,
15 is the same as 3 · 5,
etc.
43 Hence, it is easy to find a method for analysing any number, or resolving it into its simple factors. Let there be proposed, for instance, the number 360; we shall represent it first by 2 · 180. Now 180 is equal to 2 · 90, and
90 is the same as 2 · 45,
45 is the same as 3 · 15,
15 is the same as 3 · 5.
So that the number 360 may be represented by these simple factors,
2 · 2 · 2 · 3 · 3 · 5;
since all these numbers multiplied together produce 360.
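Euler's procedure of repeatedly splitting off a small divisor is, in modern terms, trial division; a short Python sketch (an illustration added here, not part of Euler's text):

```python
def prime_factors(n):
    """Resolve n into its simple (prime) factors by repeated division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains is itself prime
    return factors

print(prime_factors(360))   # [2, 2, 2, 3, 3, 5]
```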
This shows, that prime numbers cannot be divided by other numbers; and, on the other hand, that the simple factors of compound numbers are found most conveniently, and with the greatest certainty, by seeking the simple, or prime numbers, by which those compound numbers are divisible. But for this, division is necessary; we shall therefore explain the rules of that operation in the following chapter.
#### Editions
1. Leonhard Euler. Elements of Algebra. Translated by Rev. John Hewlett. Third Edition. Longmans, Hurst, Rees, Orme, and Co. London. 1822.
2. Leonhard Euler. Vollständige Anleitung zur Algebra. Mit den Zusätzen von Joseph Louis Lagrange. Herausgegeben von Heinrich Weber. B. G. Teubner. Leipzig and Berlin. 1911. Leonhardi Euleri Opera omnia. Series prima. Opera mathematica. Volumen primum.
1. According to Euclid’s definitions, 1 (unity) is not considered a number, and therefore is not considered a prime number. “Greek mathematicians tend to conceive of number (arithmos) as a plurality of units. Perhaps a better translation, without our deeply entrenched notions, would be ‘count’.” Mendell, Henry, “Aristotle and Mathematics”, The Stanford Encyclopedia of Philosophy (Fall 2019 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/fall2019/entries/aristotle-mathematics/. Section 10. Unit (monas) and Number (arithmos)
According to Euler’s terminology, 1 is described as unity and is not considered either prime or composite: thus a positive integer is either unity, prime, or composite. Modern mathematics follows the same convention as Euler: 1 is a positive integer with exactly one positive factor; a prime number is a positive integer with exactly two positive factors; and a composite number is a positive integer with more than two positive factors. Hewlett’s translation - unlike the German original - adds material treating 1 as a prime number, which I have removed. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8650608062744141, "perplexity": 568.1905489485403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949958.54/warc/CC-MAIN-20230401094611-20230401124611-00404.warc.gz"} |
http://clay6.com/qa/14410/if-n-is-an-integer-which-leaves-remainder-one-when-divided-by-three-then-1- | If $n$ is an integer which leaves remainder one when divided by three, then $(1+\sqrt {3}i)^n+(1-\sqrt {3}i)^n=$
$\begin {array} {ll} (1)\;-2^{n+1} & \quad (2)\;2^{n+1} \\ (3)\;-(-2)^n & \quad (4)\;-2^n \end {array}$
$(3)\;-(-2)^n$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8351802229881287, "perplexity": 100.33172243671325}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948617.86/warc/CC-MAIN-20180426222608-20180427002608-00513.warc.gz"} |
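A numerical check of the stated answer (our own illustration): writing $1 \pm \sqrt{3}i = 2e^{\pm i\pi/3}$, the sum equals $2^{n+1}\cos(n\pi/3)$, which for $n \equiv 1 \pmod 3$ works out to $-(-2)^n$.

```python
def s(n):
    z = 1 + 3 ** 0.5 * 1j            # = 2 * exp(i*pi/3)
    return z ** n + z.conjugate() ** n

# n = 1, 4, 7, 10 all leave remainder one when divided by three.
for n in (1, 4, 7, 10):
    assert abs(s(n).imag) < 1e-6
    assert abs(s(n).real - (-((-2) ** n))) < 1e-6

print([round(s(n).real) for n in (1, 4, 7, 10)])   # [2, -16, 128, -1024]
```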
https://chem.libretexts.org/Bookshelves/General_Chemistry/Book%3A_ChemPRIME_(Moore_et_al.)/03%3A_Using_Chemical_Equations_in_Calculations/3.07%3A_Energy | # 3.7: Energy
Energy is usually defined as the capability for doing work. For example, a billiard ball can collide with a second ball, changing the direction or speed of motion of the latter. In such a process the motion of the first ball would also be altered. We would say that one billiard ball did work on (transferred energy to) the other.
## Kinetic Energy
Energy due to motion is called kinetic energy and is represented by Ek. For an object moving in a straight line, the kinetic energy is one-half the product of the mass and the square of the speed:
$E_{k} = \frac{1}{2} mu^{2} \label{1}$
where
• m = mass of the object
• u = speed of object
If the two billiard balls mentioned above were studied in outer space, where friction due to their collisions with air molecules or the surface of a pool table would be negligible, careful measurements would reveal that their total kinetic energy would be the same before and after they collided. This is an example of the law of conservation of energy, which states that energy cannot be created or destroyed under the usual conditions of everyday life. Whenever there appears to be a decrease in energy somewhere, there is a corresponding increase somewhere else.
Example $$\PageIndex{1}$$ : Kinetic Energy
Calculate the kinetic energy of a Volkswagen Beetle of mass 844 kg (1860 lb) which is moving at 13.4 m s–1 (30 miles per hour).
Solution:
$$\large E_{k} = \frac{1}{2} m u^{2} = \frac{1}{2} \times 844 \text{ kg} \times ( 13.4 \text{ m} \text{ s}^{-1} )^{2} = 7.58 \times 10^{4} \text{ kg}\text{ m}^{2} \text{ s}^{-2}$$
In other words, the units for energy are derived from the SI base units: kilogram for mass, meter for length, and second for time. A quantity of heat or any other form of energy may be expressed in kilogram meter squared per second squared. In honor of Joule's pioneering work this derived unit, 1 kg m² s⁻², is called the joule, abbreviated J. The Volkswagen in question could do nearly 76 000 J of work on anything it happened to run into.
## Potential Energy
Potential energy is energy that is stored by rising in height, or by other means. It frequently comes from separating things that attract, as when a rising bird is separated from the Earth that attracts it, when magnets are pulled apart, or when an electrostatically charged balloon is pulled from an oppositely charged object to which it has clung. Potential energy is abbreviated EP, and gravitational potential energy is calculated as follows:
$\large E_{P} = mgh \tag{2}$
where
• m = mass of the object in kg
• g = acceleration due to gravity, 9.8 m s⁻²
• h = height in m
Notice that EP has the same units, kg m² s⁻² or joule, as kinetic energy.
Example $$\PageIndex{2}$$: Kinetic Energy Application
How high would the VW weighing 844 kg and moving at 30 mph need to rise (vertically) on a hill to come to a complete stop, if none of the stopping power came from friction?
Solution:
The car's kinetic energy is 7.58 × 10⁴ kg m² s⁻² (from Example $$\PageIndex{1}$$), so all of this would have to be converted to EP. Then we could calculate the vertical height:
$$\large E_{P} = mgh = 7.58 \times 10^{4} \text{ kg} \text{ m}^{2} \text{ s}^{-2} = 844 \text{ kg} \times 9.8 \text{m} \text {s}^{-2} \times h$$
$$\large h = 9.2 \text{ m}$$
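The two worked examples above can be reproduced in a short Python sketch (same figures as in the text; the function names are ours):

```python
def kinetic_energy(m, u):
    # E_k = (1/2) m u^2, in joules, with m in kg and u in m/s
    return 0.5 * m * u ** 2

def height_to_stop(m, u, g=9.8):
    # All kinetic energy converted to potential energy: (1/2) m u^2 = m g h
    return kinetic_energy(m, u) / (m * g)

Ek = kinetic_energy(844, 13.4)
print(round(Ek))                             # 75774 J, i.e. about 7.58e4
print(round(height_to_stop(844, 13.4), 1))   # 9.2 m
```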
Even when there is a great deal of friction, the law of conservation of energy still applies. If you put a milkshake on a mixer and leave it there for 10 min, you will have a warm, rather unappetizing drink. The whirling mixer blades do work on (transfer energy to) the milkshake, raising its temperature. The same effect could be produced by heating the milkshake, a fact which suggests that heating also involves a transfer of energy. The first careful experiments to determine how much work was equivalent to a given quantity of heat were done by the English physicist James Joule (1818 to 1889) in the 1840s. In an experiment very similar to our milkshake example, Joule connected falling weights through a pulley system to a paddle wheel immersed in an insulated container of water. This allowed him to compare the temperature rise which resulted from the work done by the weights with that which resulted from heating. Units with which to measure energy may be derived from the SI base units of Table 1 from The International System of Units (SI) by using Eq. $$\ref{1}$$.
Another unit of energy still widely used by chemists is the calorie. The calorie used to be defined as the energy needed to raise the temperature of one gram of water from 14.5°C to 15.5°C but now it is defined as exactly 4.184 J. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9205735921859741, "perplexity": 579.2920669602735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621519.32/warc/CC-MAIN-20210615180356-20210615210356-00330.warc.gz"} |
https://rd.springer.com/article/10.1007%2Fs11009-018-9623-6 | Methodology and Computing in Applied Probability
, Volume 20, Issue 3, pp 957–973
# A BKR Operation for Events Occurring for Disjoint Reasons with High Probability
• Larry Goldstein
• Yosef Rinott
## Abstract
Given events A and B on a product space $$S={\prod }_{i = 1}^{n} S_{i}$$, the set $$A \Box B$$ consists of all vectors x = (x1,…,xn) ∈ S for which there exist disjoint coordinate subsets K and L of {1,…,n} such that, given the coordinates x_i, i ∈ K, one has that x ∈ A regardless of the values of x on the remaining coordinates, and likewise that x ∈ B given the coordinates x_j, j ∈ L. For a finite product of discrete spaces endowed with a product measure, the BKR inequality
$$P(A \Box B) \le P(A)P(B)$$
(1)
was conjectured by van den Berg and Kesten (J Appl Probab 22:556–569, 1985) and proved by Reimer (Combin Probab Comput 9:27–32, 2000). In Goldstein and Rinott (J Theor Probab 20:275–293, 2007) inequality Eq. 1 was extended to general product probability spaces, replacing $$A \Box B$$ by the set $$A \,\overline{\Box}\, B$$ consisting of those outcomes x for which one can only assure with probability one that x ∈ A and x ∈ B based only on the revealed coordinates in K and L as above. A strengthening of the original BKR inequality Eq. 1 results, due to the fact that $$A \Box B \subseteq A \,\overline{\Box}\, B$$. In particular, it may be the case that $$A \Box B$$ is empty, while $$A \,\overline{\Box}\, B$$ is not. We propose the further extension $$A \,\Box_{s,t}\, B$$ depending on probability thresholds s and t, where $$A \,\overline{\Box}\, B$$ is the special case where both s and t take the value one. The outcomes $$x \in A \,\Box_{s,t}\, B$$ are those for which disjoint sets of coordinates K and L exist such that, given the values of x on the revealed set of coordinates K, the probability that A occurs is at least s, and given the coordinates of x in L, the probability of B is at least t. We provide simple examples that illustrate the utility of these extensions.
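The original inequality Eq. 1 can be checked by brute force on a tiny product space. The following sketch is not code from the paper; the events A and B below are arbitrary choices on the uniform space {0,1}^4, used only to demonstrate the box operation:

```python
from itertools import product

n = 4
space = list(product([0, 1], repeat=n))      # the uniform product space {0,1}^4

def subsets(indices):
    """All subsets of the given index set."""
    indices = list(indices)
    out = []
    for mask in range(1 << len(indices)):
        out.append(frozenset(i for b, i in enumerate(indices) if mask >> b & 1))
    return out

def box(A, B):
    """Brute-force A box B: x is in the result iff there exist disjoint
    index sets K, L such that x's coordinates on K force membership in A
    and x's coordinates on L force membership in B."""
    all_K = subsets(range(n))
    result = set()
    for x in space:
        found = False
        for K in all_K:
            for L in all_K:
                if K & L:
                    continue
                forces_A = all(y in A for y in space
                               if all(y[i] == x[i] for i in K))
                forces_B = all(y in B for y in space
                               if all(y[j] == x[j] for j in L))
                if forces_A and forces_B:
                    found = True
                    break
            if found:
                break
        if found:
            result.add(x)
    return result

# Arbitrary example events for the demonstration:
A = {x for x in space if sum(x) >= 3}   # "at least three coordinates are 1"
B = {x for x in space if x[0] == 1}     # "the first coordinate is 1"

P = lambda E: len(E) / len(space)
lhs, rhs = P(box(A, B)), P(A) * P(B)
print(lhs, rhs)      # 0.0625 0.15625
assert lhs <= rhs + 1e-12
```

Here A □ B reduces to the single outcome (1,1,1,1), so P(A □ B) = 1/16 ≤ P(A)P(B) = 5/32, as the inequality requires.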
## Keywords
BKR inequality Percolation Box set operation
Mathematics Subject Classification: 60C05 · 05A20
## Notes
### Acknowledgements
We are deeply indebted to an anonymous referee for a very careful reading of two versions of this paper. The penetrating comments provided enlightened us on various measurability issues and other important, subtle points. We thank Mathew Penrose for a useful discussion and for providing some relevant references.
The work of the first author was partially supported by NSA grant H98230-15-1-0250. The second author would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the program Data Linkage and Anonymisation where part of the work on this paper was undertaken, supported by EPSRC grant no EP/K032208/1.
## References
1. Arratia R, Garibaldi S, Hales AW (2015) The van den Berg—Kesten—Reimer operator and inequality for infinite spaces. Bernoulli, to appear. arXiv:1508.05337 [math.PR]
2. Arratia R, Garibaldi S, Mower L, Stark PB (2015) Some people have all the luck. Math Mag 88:196–211. arXiv:1503.02902 [math.PR]
4. Cohn DL (1980) Measure theory. Birkhäuser, Boston
5. Folland G (2013) Real analysis: modern techniques and their applications. Wiley, New York
6. Goldstein L, Rinott Y (2007) Functional BKR inequalities, and their duals, with applications. J Theor Probab 20:275–293
7. Goldstein L, Rinott Y (2015) Functional van den Berg–Kesten–Reimer inequalities and their Duals, with Applications. arXiv:1508.07267 [math.PR]
8. Gupta JC, Rao BV (1999) van den Berg-Kesten inequality for the Poisson Boolean Model for continuum Percolation. Sankhya A 61:337–346
9. Last G, Penrose MD, Zuyev S (2017) On the capacity functional of the infinite cluster of a Boolean model. Ann Appl Probab 27:1678–1701. arXiv:1601.04945 [math.PR]
10. Meester R, Roy R (2008) Continuum percolation. Cambridge University Press, Cambridge
11. Penrose M (2003) Random geometric graphs (No. 5). Oxford University Press, London
12. Reimer D (2000) Proof of the Van den Berg-Kesten conjecture. Combin Probab Comput 9:27–32
13. van den Berg J, Kesten H (1985) Inequalities with applications to percolation and reliability. J Appl Probab 22:556–569
14. van den Berg J, Fiebig U (1987) On a combinatorial conjecture concerning disjoint occurrences of events. Ann Probab 15:354–374
https://nl.mathworks.com/help/symbolic/adjoint.html | ## Syntax
``X = adjoint(A)``
## Description
`X = adjoint(A)` returns the classical adjoint (adjugate) matrix `X` of `A`, such that `A*X = det(A)*eye(n) = X*A`, where `n` is the number of rows in `A`.
## Examples
Find the classical adjoint of a numeric matrix.
```matlab
A = magic(3);
X = adjoint(A)
```

```
X =
  -53.0000   52.0000  -23.0000
   22.0000   -8.0000  -38.0000
    7.0000  -68.0000   37.0000
```
Find the classical adjoint of a symbolic matrix.
```matlab
syms x y z
A = sym([x y z; 2 1 0; 1 0 2]);
X = adjoint(A)
```

```
X =
[  2,    -2*y,      -z]
[ -4, 2*x - z,     2*z]
[ -1,       y, x - 2*y]
```
Verify that `det(A)*eye(3) = X*A` by using `isAlways`.
```matlab
cond = det(A)*eye(3) == X*A;
isAlways(cond)
```

```
ans =
  3×3 logical array
   1   1   1
   1   1   1
   1   1   1
```
Compute the inverse of this matrix by computing its classical adjoint and determinant.
```matlab
syms a b c d
A = [a b; c d];
invA = adjoint(A)/det(A)
```

```
invA =
[  d/(a*d - b*c), -b/(a*d - b*c)]
[ -c/(a*d - b*c),  a/(a*d - b*c)]
```
Verify that `invA` is the inverse of `A`.
```matlab
isAlways(invA == inv(A))
```

```
ans =
  2×2 logical array
   1   1
   1   1
```
## Input Arguments
`A` — Square matrix, specified as a numeric matrix, matrix of symbolic scalar variables, or symbolic matrix variable (since R2021a).
## More About
The classical adjoint, or adjugate, of a square matrix A is the square matrix X, such that the (i,j)-th entry of X is the (j,i)-th cofactor of A.
The (j,i)-th cofactor of A is defined as follows.
$a'_{ji} = (-1)^{i+j}\,\det(A_{ij})$
Aij is the submatrix of A obtained from A by removing the i-th row and j-th column.
The classical adjoint matrix should not be confused with the adjoint matrix. The adjoint is the conjugate transpose of a matrix, while the classical adjoint is another name for the adjugate matrix, or cofactor transpose, of a matrix.
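The examples above assume MATLAB. As a cross-check, the same classical adjoint can be computed in plain Python directly from the cofactor definition (this is an illustrative sketch, not MATLAB's implementation):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def adjugate(M):
    """Classical adjoint: entry (i, j) of the result is the (j, i) cofactor."""
    n = len(M)
    X = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            minor = [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]
            X[j][i] = (-1) ** (i + j) * det(minor)   # transposed cofactor
    return X

A = [[8, 1, 6], [3, 5, 7], [4, 9, 2]]   # the same matrix as magic(3)
X = adjugate(A)
print(X)    # [[-53, 52, -23], [22, -8, -38], [7, -68, 37]]

# Check the defining identity A*X == det(A)*eye(3)
d = det(A)
for i in range(3):
    for j in range(3):
        assert sum(A[i][k] * X[k][j] for k in range(3)) == (d if i == j else 0)
```

The output matches the numeric MATLAB result above, and the loop verifies `A*X = det(A)*eye(3)` exactly in integer arithmetic.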
http://blekko.com/wiki/Henry's_law?source=672620ff | # Henry's law
In physics, Henry's law is one of the gas laws formulated by William Henry in 1803. It states:
"At a constant temperature, the amount of a given gas that dissolves in a given type and volume of liquid is directly proportional to the partial pressure of that gas in equilibrium with that liquid."
An equivalent way of stating the law is that the solubility of a gas in a liquid is directly proportional to the partial pressure of the gas above the liquid.
An everyday example of Henry's law is given by carbonated soft drinks. Before the bottle or can of carbonated drink is opened, the gas above the drink is almost pure carbon dioxide at a pressure slightly higher than atmospheric pressure. The drink itself contains dissolved carbon dioxide. When the bottle or can is opened, some of this gas escapes, giving the characteristic hiss (or "pop" in the case of a sparkling wine bottle). Because the partial pressure of carbon dioxide above the liquid is now lower, some of the dissolved carbon dioxide comes out of solution as bubbles. If a glass of the drink is left in the open, the concentration of carbon dioxide in solution will come into equilibrium with the carbon dioxide in the air, and the drink will go "flat".
A slightly more exotic example of Henry's law is in the decompression and decompression sickness of underwater divers.
## Formula and the Henry's law constant
Henry's law can be put into mathematical terms (at constant temperature) as
$p = k_{\rm H}\, c$
where p is the partial pressure of the solute in the gas above the solution, c is the concentration of the solute and kH is a constant with the dimensions of pressure divided by concentration. The constant, known as the Henry's law constant, depends on the solute, the solvent and the temperature.
Some values for kH for gases dissolved in water at 298 K include:
oxygen (O2) : 769.2 L·atm/mol
carbon dioxide (CO2) : 29.41 L·atm/mol
hydrogen (H2) : 1282.1 L·atm/mol
There are various other forms of Henry's Law which define the constant kH differently and require different dimensional units.[1] In particular, the "concentration" of the solute in solution may also be expressed as a mole fraction or as a molarity.[2]
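For example, the oxygen constant above gives the equilibrium concentration of dissolved O2 in water under air at 298 K. This is an illustrative calculation (not from the original article); the 0.21 atm partial pressure of O2 in air is an assumed round figure:

```python
# Constant from the list above; the O2 partial pressure is an assumption.
k_H = 769.2            # L*atm/mol for O2 in water at 298 K
p_O2 = 0.21            # atm, approximate partial pressure of O2 in air

c = p_O2 / k_H         # Henry's law p = k_H * c, rearranged to c = p / k_H
mg_per_L = c * 32.0 * 1000.0   # O2 molar mass ~32 g/mol, converted to mg/L

print(f"c = {c:.2e} mol/L  (~{mg_per_L:.1f} mg/L)")
```

The result, roughly 2.7×10−4 mol/L (about 8.7 mg/L), is consistent with typical dissolved-oxygen saturation values for water near room temperature.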
### Other forms of Henry's law
The various other forms of Henry's law are discussed in the technical literature.[1][3][4]
Table 1: Some forms of Henry's law and constants (gases in water at 298.15 K)[4]

| Gas | $k_{\mathrm{H,pc}} = \frac{p}{c_\mathrm{aq}}$ $\left[\frac{\mathrm{L} \cdot \mathrm{atm}}{\mathrm{mol}}\right]$ | $k_{\mathrm{H,cp}} = \frac{c_\mathrm{aq}}{p}$ $\left[\frac{\mathrm{mol}}{\mathrm{L} \cdot \mathrm{atm}}\right]$ | $k_{\mathrm{H,px}} = \frac{p}{x}$ $[\mathrm{atm}]$ | $k_{\mathrm{H,cc}} = \frac{c_{\mathrm{aq}}}{c_{\mathrm{gas}}}$ (dimensionless) |
|------|----------|----------|-----------|-----------|
| O2 | 769.23 | 1.3×10−3 | 4.259×104 | 3.181×10−2 |
| H2 | 1282.05 | 7.8×10−4 | 7.099×104 | 1.907×10−2 |
| CO2 | 29.41 | 3.4×10−2 | 0.163×104 | 0.8317 |
| N2 | 1639.34 | 6.1×10−4 | 9.077×104 | 1.492×10−2 |
| He | 2702.7 | 3.7×10−4 | 14.97×104 | 9.051×10−3 |
| Ne | 2222.22 | 4.5×10−4 | 12.30×104 | 1.101×10−2 |
| Ar | 714.28 | 1.4×10−3 | 3.955×104 | 3.425×10−2 |
| CO | 1052.63 | 9.5×10−4 | 5.828×104 | 2.324×10−2 |

where:
- caq = concentration (or molarity) of gas in solution (in mol/L)
- cgas = concentration of gas above the solution (in mol/L)
- p = partial pressure of gas above the solution (in atm)
- x = mole fraction of gas in solution (dimensionless)
As can be seen by comparing the equations in the above table, the Henry's law constant kH,pc is simply the inverse of the constant kH,cp. Since all kH may be referred to as Henry's law constants, readers of the technical literature must be quite careful to note which version of the equation is being used.[1]
Note also that Henry's law is a limiting law that only applies for sufficiently dilute solutions. The range of concentrations in which it applies becomes narrower the more the system diverges from ideal behavior; roughly speaking, that is the more chemically "different" the solute is from the solvent. Typically, Henry's law is applicable only to gas solute mole fractions less than 0.03.[5]
It also applies only to solutions in which the solvent does not react chemically with the gas being dissolved. A common example of a gas that does react with the solvent is carbon dioxide, which forms carbonic acid (H2CO3) with water to a certain degree.
### Temperature dependence of the Henry constant
When the temperature of a system changes, the Henry constant also changes.[1] This is why some people prefer to call it the Henry coefficient. Several equations describe the effect of temperature on the constant. These forms of the van 't Hoff equation are examples:[4]
$k_{\rm H,pc}(T) = k_{\rm H,pc}(T^\ominus)\, \exp{ \left[ -C \, \left( \frac{1}{T}-\frac{1}{T^\ominus}\right)\right]}\,$
$k_{\rm H,cp}(T) = k_{\rm H,cp}(T^\ominus)\, \exp{ \left[ C \, \left( \frac{1}{T}-\frac{1}{T^\ominus}\right)\right]}\,$
where
kH for a given temperature is Henry's constant (as defined in this article's first section). Note that the sign of C depends on whether kH,pc or kH,cp is used.
T is any given temperature, in K
$T^\ominus$ refers to the standard temperature (298 K).
This equation is only an approximation, and should be used only when no better, experimentally derived formula is known for a given gas.
The following table lists some values for constant C (in Kelvins) in the equation above:
| Gas | O2 | H2 | CO2 | N2 | He | Ne | Ar | CO |
|-------|------|-----|------|------|-----|-----|------|------|
| C (K) | 1700 | 500 | 2400 | 1300 | 230 | 490 | 1300 | 1300 |
Because the solubility of permanent gases usually decreases with increasing temperature around room temperature, the partial pressure a given gas concentration has in the liquid must increase. While heating water (saturated with nitrogen) from 25 to 95 °C, the solubility will decrease to about 43% of its initial value. This can be verified when heating water in a pot: small bubbles evolve and rise long before the water reaches boiling temperature. Similarly, carbon dioxide from a carbonated drink escapes much faster when the drink is not cooled, because the partial pressure of CO2 required to achieve the same solubility increases at higher temperatures. The partial pressure of CO2 in the gas phase in equilibrium with seawater doubles with every 16 K increase in temperature.[6]
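The roughly 43% figure quoted above can be reproduced from the van 't Hoff form using C = 1300 K for N2 from the table (a quick sketch; kH,cp at fixed partial pressure is proportional to solubility):

```python
import math

C_N2 = 1300.0   # K, the constant for N2 from the table above

def solubility_ratio(C, T, T0=298.15):
    """k_H,cp(T) / k_H,cp(T0) from the van 't Hoff form above; at fixed
    partial pressure this equals the ratio of solubilities."""
    return math.exp(C * (1.0 / T - 1.0 / T0))

ratio = solubility_ratio(C_N2, T=368.15)   # 95 degC versus 25 degC
print(f"N2 solubility at 95 degC is {100 * ratio:.0f}% of its 25 degC value")
```

This yields about 44%, matching the "about 43%" statement in the text; the small discrepancy reflects the approximate constant C.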
The constant C may be regarded as:
$C = -\frac{\Delta_{\rm solv}H}{R} = -\frac{{\rm d}\left[ \ln k_{\rm H}(T)\right]}{{\rm d}(1/T)}$
where
ΔsolvH is the enthalpy of solution
R is the gas constant.
The solubility of gases does not always decrease with increasing temperature. For aqueous solutions, the Henry's law constant usually goes through a maximum (i.e., the solubility goes through a minimum). For most permanent gases, the minimum is below 120 °C. Often, the smaller the gas molecule (and the lower the gas solubility in water), the lower the temperature of the maximum of the Henry's law constant. Thus, the maximum is about 30 °C for helium, 92 to 93 °C for argon, nitrogen and oxygen, and 114 °C for xenon.[7]
## Influence of electrolytes
The influence of electrolytes on the solubility of gases is sometimes given by Sechenov (often spelled Setchenov) equation which accounts for the "salting out" (i.e., decreasing the solubility) or "salting in" (i.e., increasing the solubility) effect (see the article on activity coefficient). The Sechenov equation can be written as:[8]
$\log\left(\frac{{}^{*}z_1}{z_1}\right) = k_{\mathrm{S}}\, y$
where:
• *z1 is the solubility of gas 1 in pure solvent
• z1 is the solubility of gas 1 in an electrolyte solution
• y expresses the salt composition
## In geophysics
In geophysics, a version of Henry's law applies to the solubility of a noble gas in contact with silicate melt. One equation used is
$C_{\rm melt}/C_{\rm gas} = \exp\left[-\beta(\mu^{\rm E}_{\rm melt} - \mu^{\rm E}_{\rm gas})\right]\,$
where:
C = the number concentrations of the solute gas in the melt and gas phases
β = 1/kBT, an inverse temperature scale: kB = the Boltzmann constant
µE = the excess chemical potentials of the solute gas in the two phases.
## Comparison to Raoult's law
For a dilute solution, the concentration of the solute is approximately proportional to its mole fraction x, and Henry's law can be written as:
$p = k_{\rm H}\,x$
This can be compared with Raoult's law:
$p = p^\star\,x$
where p* is the vapor pressure of the pure component.
At first sight, Raoult's law appears to be a special case of Henry's law where kH = p*. This is true for pairs of closely related substances, such as benzene and toluene, which obey Raoult's law over the entire composition range: such mixtures are called "ideal mixtures".
The general case is that both laws are limit laws, and they apply at opposite ends of the composition range. The vapor pressure of the component in large excess, such as the solvent for a dilute solution, is proportional to its mole fraction, and the constant of proportionality is the vapor pressure of the pure substance (Raoult's law). The vapor pressure of the solute is also proportional to the solute's mole fraction, but the constant of proportionality is different and must be determined experimentally (Henry's law). In mathematical terms:
Raoult's law: $\lim_{x\to 1}\left( \frac{p}{x}\right) = p^\star$
Henry's law: $\lim_{x\to 0}\left( \frac{p}{x}\right) = k_{\rm H}$
Raoult's law can also be related to non-gas solutes.
## Standard chemical potential
Henry's law has been shown to apply to a wide range of solutes in the limit of "infinite dilution" (x→0), including non-volatile substances such as sucrose or even sodium chloride. In these cases, it is necessary to state the law in terms of chemical potentials. For a solute in an ideal dilute solution, the chemical potential depends on the concentration:
$\mu = \mu_c^\ominus + RT\ln{\left( \frac{\gamma_c c}{c^\ominus}\right)}\,$, where $\gamma_c = \frac{k_{{\rm H,}c}}{p^\star}$ for a volatile solute; $c^\ominus$ = 1 mol/L.
For non-ideal solutions, the activity coefficient γc depends on the concentration and must be determined at the concentration of interest. The activity coefficient can also be obtained for non-volatile solutes, where the vapor pressure of the pure substance is negligible, by using the Gibbs–Duhem relation:
$\sum_i n_i\, {\rm d}\mu_i = 0$
By measuring the change in vapor pressure (and hence chemical potential) of the solvent, the chemical potential of the solute can be deduced.
The standard state for a dilute solution is also defined in terms of infinite-dilution behavior. Although the standard concentration co is taken to be 1 mol/l by convention, the standard state is a hypothetical solution of 1 mol/l in which the solute has its limiting infinite-dilution properties. This has the effect that all non-ideal behavior is described by the activity coefficient: the activity coefficient at 1 mol/l is not necessarily unity (and is frequently quite different from unity).
All the relations above can also be expressed in terms of molalities b rather than concentrations, e.g.:
$\mu = \mu_b^\ominus + RT\ln{\left( \frac{\gamma_b b}{b^\ominus}\right)}\,$, where $\gamma_b = \frac{k_{{\rm H,}b}}{p^\star}$ for a volatile solute; $b^\ominus$ = 1 mol/kg.
The standard chemical potential $\mu_b^\ominus$, the activity coefficient $\gamma_b$ and the Henry's law constant $k_{{\rm H,}b}$ all have different numerical values when molalities are used in place of concentrations.
http://forex-iey.blogspot.com/2009/08/valuing-fx-options-garman-kohlhagen.html | ## Kamis, 06 Agustus 2009
### Valuing FX options: The Garman-Kohlhagen model
As in the Black-Scholes model for stock options and the Black model for certain interest rate options, the value of a European option on an FX rate is typically calculated by assuming that the rate follows a log-normal process.
In 1983 Garman and Kohlhagen extended the Black–Scholes model to cope with the presence of two interest rates (one for each currency). Suppose that rd is the risk-free interest rate to expiry of the domestic currency and rf is the foreign-currency risk-free interest rate (the domestic currency is the currency in which we obtain the value of the option; the formula also requires that FX rates, both strike and current spot, be quoted in terms of "units of domestic currency per unit of foreign currency"). Then the domestic-currency value of a call option into the foreign currency is
$c = S_0\exp(-r_f T)\,N(d_1) - K\exp(-r_d T)\,N(d_2)$
The value of a put option has value
$p = K\exp(-r_d T)\,N(-d_2) - S_0\exp(-r_f T)\,N(-d_1)$
where :
$d_1 = \frac{\ln(S_0/K) + (r_d - r_f + \sigma^2/2)T}{\sigma\sqrt{T}}$
$d_2 = d_1 - \sigma\sqrt{T}$
S0 is the current spot rate
K is the strike price
N is the cumulative normal distribution function
rd is domestic risk free simple interest rate
rf is foreign risk free simple interest rate
T is the time to maturity (calculated according to the appropriate day count convention)
and σ is the volatility of the FX rate.
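The formulas above translate directly into code. Here is a short self-contained sketch; the sample inputs at the bottom are made up purely for illustration:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF, via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def garman_kohlhagen(S0, K, T, rd, rf, sigma, kind="call"):
    """European FX option value in domestic-currency units; S0 and K are
    quoted as units of domestic currency per unit of foreign currency."""
    d1 = (log(S0 / K) + (rd - rf + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if kind == "call":
        return S0 * exp(-rf * T) * norm_cdf(d1) - K * exp(-rd * T) * norm_cdf(d2)
    return K * exp(-rd * T) * norm_cdf(-d2) - S0 * exp(-rf * T) * norm_cdf(-d1)

# Made-up sample inputs: spot 1.25, strike 1.30, 1 year, rd = 3%, rf = 1%, vol 10%
c = garman_kohlhagen(1.25, 1.30, 1.0, 0.03, 0.01, 0.10, "call")
p = garman_kohlhagen(1.25, 1.30, 1.0, 0.03, 0.01, 0.10, "put")

# Sanity check: FX put-call parity, c - p = S0*exp(-rf*T) - K*exp(-rd*T)
parity = 1.25 * exp(-0.01 * 1.0) - 1.30 * exp(-0.03 * 1.0)
assert abs((c - p) - parity) < 1e-12
print(round(c, 4), round(p, 4))
```

The put-call parity check holds identically for these formulas, since N(d) + N(−d) = 1.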
https://web2.0calc.com/questions/number-theory_36367 | +0
# Number Theory
If $3x+7 \equiv 11 \pmod{16}$, then $2x+11$ is congruent (mod 16) to what integer between 0 and 15, inclusive?
Jul 11, 2021
#1
If $$3x+7 \equiv 11 \pmod{16}$$,
then 2x+11 is congruent $$\pmod{16}$$ to what integer between 0 and 15, inclusive?
$$\begin{array}{|rcll|} \hline 3x+7 &\equiv& 11 \pmod{16} \quad | \quad -7 \\ 3x &\equiv& 11-7 \pmod{16} \\ 3x &\equiv& 4 \pmod{16}\quad | \quad :3 \\ x &\equiv& \dfrac{4}{3} \pmod{16} \\ x \pmod{16} &\equiv& \dfrac{4}{3} \\ \hline \end{array}$$
$$\begin{array}{|rcll|} \hline 2x+11\pmod{16} &\equiv& 2*\dfrac{4}{3}+11 \pmod{16} \\ &\equiv& \dfrac{8}{3}+11 \pmod{16} \\ &\equiv& \dfrac{8}{3}+\dfrac{33}{3} \pmod{16} \\ &\equiv& \dfrac{41}{3}\pmod{16} \quad | \quad 41\equiv 9 \pmod{16} \\ &\equiv& \dfrac{9}{3}\pmod{16} \\ \mathbf{2x+11\pmod{16}} &\equiv& \mathbf{3 \pmod{16}} \\ \hline \end{array}$$

(Here the fraction $\dfrac{4}{3}$ is shorthand for $4\cdot 3^{-1} \pmod{16}$, which is legitimate since $\gcd(3,16)=1$; indeed $3^{-1}\equiv 11 \pmod{16}$, so $x \equiv 4\cdot 11 \equiv 12 \pmod{16}$.)
Jul 12, 2021
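The answer can be checked mechanically. This is a quick verification script (not part of the original answer); the three-argument `pow` for modular inverses needs Python 3.8+:

```python
# Brute-force check over all residues mod 16.
solutions = [x for x in range(16) if (3 * x + 7) % 16 == 11]
print(solutions)                      # [12]

values = [(2 * x + 11) % 16 for x in solutions]
print(values)                         # [3]

# The "division by 3" in the answer is multiplication by 3^{-1} mod 16:
inv3 = pow(3, -1, 16)                 # 11, because 3 * 11 = 33 = 2*16 + 1
assert (4 * inv3) % 16 == 12          # so x = 12 (mod 16), and 2*12 + 11 = 3 (mod 16)
```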
https://en.wikipedia.org/wiki/Polynomial_equation | # Algebraic equation
(Redirected from Polynomial equation)
In mathematics, an algebraic equation or polynomial equation is an equation of the form
${\displaystyle P=Q}$
where P and Q are polynomials with coefficients in some field, often the field of the rational numbers. For most authors, an algebraic equation is univariate, which means that it involves only one variable. On the other hand, a polynomial equation may involve several variables, in which case it is called multivariate and the term polynomial equation is usually preferred to algebraic equation.
For example,
${\displaystyle x^{5}-3x+1=0}$
is an algebraic equation with integer coefficients and
${\displaystyle y^{4}+{\frac {xy}{2}}={\frac {x^{3}}{3}}-xy^{2}+y^{2}-{\frac {1}{7}}}$
is a multivariate polynomial equation over the rationals.
Some but not all polynomial equations with rational coefficients have a solution that is an algebraic expression that can be found using a finite number of operations that involve only those same types of coefficients (that is, can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but for degree five or more it can only be done for some equations, not for all. A large amount of research has been devoted to compute efficiently accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root-finding algorithm) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations).
## History
The study of algebraic equations is probably as old as mathematics: the Babylonian mathematicians, as early as 2000 BC could solve some kinds of quadratic equations (displayed on Old Babylonian clay tablets).
Univariate algebraic equations over the rationals (i.e., with rational coefficients) have a very long history. Ancient mathematicians wanted the solutions in the form of radical expressions, like ${\displaystyle x={\frac {1+{\sqrt {5}}}{2}}}$ for the positive solution of ${\displaystyle x^{2}-x-1=0}$. The ancient Egyptians knew how to solve equations of degree 2 in this manner. The Indian mathematician Brahmagupta (597–668 AD) explicitly described the quadratic formula in his treatise Brāhmasphuṭasiddhānta published in 628 AD, but written in words instead of symbols. In the 9th century Muhammad ibn Musa al-Khwarizmi and other Islamic mathematicians derived the quadratic formula, the general solution of equations of degree 2, and recognized the importance of the discriminant. During the Renaissance in 1545, Gerolamo Cardano published the solution of Scipione del Ferro and Niccolò Fontana Tartaglia to equations of degree 3 and that of Lodovico Ferrari for equations of degree 4. Finally Niels Henrik Abel proved, in 1824, that equations of degree 5 and higher do not have general solutions using radicals. Galois theory, named after Évariste Galois, showed that some equations of at least degree 5 do not even have a solution in radicals, and gave criteria for deciding whether an equation is in fact solvable using radicals.
## Areas of study
The algebraic equations are the basis of a number of areas of modern mathematics: Algebraic number theory is the study of (univariate) algebraic equations over the rationals (that is, with rational coefficients). Galois theory was introduced by Évariste Galois to specify criteria for deciding if an algebraic equation may be solved in terms of radicals. In field theory, an algebraic extension is an extension such that every element is a root of an algebraic equation over the base field. Transcendental number theory is the study of the real numbers which are not solutions to an algebraic equation over the rationals. A Diophantine equation is a (usually multivariate) polynomial equation with integer coefficients for which one is interested in the integer solutions. Algebraic geometry is the study of the solutions in an algebraically closed field of multivariate polynomial equations.
Two equations are equivalent if they have the same set of solutions. In particular the equation ${\displaystyle P=Q}$ is equivalent to ${\displaystyle P-Q=0}$. It follows that the study of algebraic equations is equivalent to the study of polynomials.
A polynomial equation over the rationals can always be converted to an equivalent one in which the coefficients are integers. For example, multiplying through by 42 = 2·3·7 and grouping its terms in the first member, the previously mentioned polynomial equation ${\displaystyle y^{4}+{\frac {xy}{2}}={\frac {x^{3}}{3}}-xy^{2}+y^{2}-{\frac {1}{7}}}$ becomes
${\displaystyle 42y^{4}+21xy-14x^{3}+42xy^{2}-42y^{2}+6=0.}$
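This clearing of denominators can be verified exactly with rational arithmetic. The following small sketch checks the identity at a handful of arbitrary rational points (exact `Fraction` arithmetic, so agreement at these points is not rounding luck):

```python
from fractions import Fraction as F

def lhs(x, y):
    """Everything in the original equation moved to the left-hand side."""
    return y ** 4 + x * y / 2 - (x ** 3 / 3 - x * y ** 2 + y ** 2 - F(1, 7))

def integer_form(x, y):
    """The integer-coefficient polynomial obtained after multiplying by 42."""
    return 42 * y ** 4 + 21 * x * y - 14 * x ** 3 + 42 * x * y ** 2 - 42 * y ** 2 + 6

# Exact rational spot checks at arbitrary points:
for x in [F(0), F(1), F(-2), F(3, 5)]:
    for y in [F(0), F(1), F(7, 2)]:
        assert 42 * lhs(x, y) == integer_form(x, y)
print("transformation verified at all sample points")
```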
Because sine, exponentiation, and 1/T are not polynomial functions,
${\displaystyle e^{T}x^{2}+{\frac {1}{T}}xy+\sin(T)z-2=0}$
is not a polynomial equation in the four variables x, y, z, and T over the rational numbers. However, it is a polynomial equation in the three variables x, y, and z over the field of the elementary functions in the variable T.
## Solutions
As for any equation, the solutions of an equation are the values of the variables for which the equation is true. For univariate algebraic equations these are also called roots, even if, properly speaking, one should say that the solutions of the algebraic equation P = 0 are the roots of the polynomial P. When solving an equation, it is important to specify in which set the solutions are allowed. For example, for an equation over the rationals one may look for solutions in which all the variables are integers; in this case the equation is a Diophantine equation. One may also be interested only in the real solutions. However, for univariate algebraic equations, the number of solutions is finite, and all solutions are contained in any algebraically closed field containing the coefficients—for example, the field of complex numbers in the case of equations over the rationals. It follows that, unless otherwise specified, "root" and "solution" usually mean "solution in an algebraically closed field".
http://math.stackexchange.com/questions/306995/non-induction-proof-of-2-sqrtn1-2-sum-k-1n-frac1-sqrtk2-sqrt

# Non-induction proof of $2\sqrt{n+1}-2<\sum_{k=1}^{n}{\frac{1}{\sqrt{k}}}<2\sqrt{n}-1$
Prove that $$2\sqrt{n+1}-2<\sum_{k=1}^{n}{\frac{1}{\sqrt{k}}}<2\sqrt{n}-1.$$
After playing around with the sum, I couldn't get anywhere, so I proved the inequalities by induction. However, I'm interested in solutions that don't use induction, if there are any (relatively simple ones, since I'm a high-school student).
Also, any advice for determining whether a sum can be written in "compact" form would be welcome. For example, $\displaystyle \sum_{k=1}^{n}{(-1)^{k-1}k}$ is actually $\displaystyle-\frac{n}{2}$ for even $n$ and $\displaystyle\frac{n+1}{2}$ for odd $n$.
I would try the series-integral comparison, which is legal because the integrand is nonincreasing and positive. – julien Feb 18 '13 at 11:20
## 2 Answers
I’d just about bet that the inequality was derived from the observation that
$$\int_1^{n+1}\frac1{\sqrt x}dx<\sum_{k=1}^n\frac1{\sqrt k}<1+\int_1^n\frac1{\sqrt x}dx\;,$$
which can be made from a graph of $y=\frac1{\sqrt x}$ showing rectangles for the appropriate upper and lower Riemann sums.
Since $$\int x^{-1/2}dx=2x^{1/2}+C\;,$$
this immediately yields
$$2\sqrt{n+1}-2<\sum_{k=1}^n\frac1{\sqrt k}<2\sqrt n-1\;.$$
It’s a non-inductive proof, and it’s accessible to those high-school students who have had a decent calculus course.
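These bounds are easy to sanity-check numerically. Note that at $n=1$ the right-hand inequality is actually an equality ($1 = 2\sqrt{1}-1$), so the strict version holds from $n=2$ on. A quick check in Python (added for illustration):

```python
import math

# Check 2*sqrt(n+1) - 2 < sum_{k=1}^n 1/sqrt(k) < 2*sqrt(n) - 1 for n >= 2.
for n in (2, 10, 100, 10_000):
    s = sum(1 / math.sqrt(k) for k in range(1, n + 1))
    lower = 2 * math.sqrt(n + 1) - 2
    upper = 2 * math.sqrt(n) - 1
    assert lower < s < upper, (n, lower, s, upper)
    print(f"n={n}: {lower:.4f} < {s:.4f} < {upper:.4f}")
```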
The Mean Value Theorem can also be used.
Let $\displaystyle f(x)=\sqrt{x}$
$\displaystyle f'(x)=\frac{1}{2}\frac{1}{\sqrt{x}}$
Using mean value theorem we have:
$\displaystyle \frac{f(n+1)-f(n)}{(n+1)-n}=f'(c)$ for some $c\in(n,n+1)$
$\displaystyle \Rightarrow \frac{\sqrt{n+1}-\sqrt{n}}{1}=\frac{1}{2}\frac{1}{\sqrt{c}}$....(1)
$\displaystyle \frac{1}{\sqrt{n+1}}<\frac{1}{\sqrt{c}}<\frac{1}{\sqrt{n}}$
Using the above ineq. in $(1)$ we have,
$\displaystyle \frac{1}{2\sqrt{n+1}}<\sqrt{n+1}-\sqrt{n}<\frac{1}{2\sqrt{n}}$
Summing the left part of the inequality (with $n+1$ replaced by $k$) over $k=2,\dots,n$, we have $\displaystyle\sum_{k=2}^{n}\frac{1}{2\sqrt{k}}<\sum_{k=2}^{n}(\sqrt{k}-\sqrt{k-1})=\sqrt{n}-1$
$\Rightarrow \displaystyle\sum_{k=2}^{n}\frac{1}{\sqrt{k}}<2\sum_{k=2}^{n}(\sqrt{k}-\sqrt{k-1})=2(\sqrt{n}-1)$
$\Rightarrow \displaystyle1+\sum_{k=2}^{n}\frac{1}{\sqrt{k}}<1+2\sum_{k=2}^{n}(\sqrt{k}-\sqrt{k-1})=2\sqrt{n}-2+1=2\sqrt{n}-1$
$\Rightarrow \displaystyle\sum_{k=1}^{n}\frac{1}{\sqrt{k}}<2\sqrt{n}-1$
Similarly, summing the right part of the inequality over $k=1,\dots,n$, we have
$\displaystyle\sum_{k=1}^{n}\frac{1}{2\sqrt{k}}>\sum_{k=1}^{n}(\sqrt{k+1}-\sqrt{k})=\sqrt{n+1}-1$
$\Rightarrow \displaystyle\sum_{k=1}^{n}\frac{1}{\sqrt{k}}>2(\sqrt{n+1}-1)$
This completes the proof.
$\displaystyle 2\sqrt{n+1}-2<\sum_{k=1}^{n}{\frac{1}{\sqrt{k}}}<2\sqrt{n}-1.$
http://mathhelpforum.com/pre-calculus/170583-help-needed-finding-x-intercept.html

# Math Help - Help needed finding the X intercept
1. ## Help needed finding the X intercept
I need to find the x-intercept of the formula below. Can anyone help, and possibly show me how it's worked out? I've spent hours attempting and failing now and it's driving me nuts!
f(x) = (x^2 + 3x) / (2x^2 + 6x + 4)
thanks
2. $f(x) = \dfrac{x(x+3)}{2(x+2)(x+1)} = 0$
Thus the graph has x-intercepts of 0 and -3.
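The answer can be double-checked with a tiny brute-force search over integer candidates (a sanity check only, not a general method; plain Python):

```python
# f(x) = (x^2 + 3x) / (2x^2 + 6x + 4): x-intercepts are zeros of the
# numerator that do not also zero the denominator.
num_roots = [x for x in range(-10, 11) if x * x + 3 * x == 0]
intercepts = [x for x in num_roots if 2 * x * x + 6 * x + 4 != 0]
print(intercepts)  # [-3, 0]
```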
https://officecommunity.com/pdf_fusion/f/troubleshooting/15905/save-as-corrupts-pdf-file/24025

# Save As corrupts pdf file
Installed a trial of PDF Fusion a day or two ago. This is my first use.
The file is a pdf I downloaded. http://www.health.gov.bc.ca/library/publications/year/2013/MyVoice-AdvanceCarePlanningGuide.pdf
It displays OK in Adobe Reader and loads and is displayed correctly in PDF Fusion.
I used Save As to create a working copy. The pdf, still open in PDF Fusion, displays corruption, and reloading the Save As file displays the same corruption.
The document has a total of 56 pages, 4 of which are unnumbered (3 at the front and 1 at the end). The document looks fine (but could have less obvious corruptions) up to page number 33. I am using the page numbers in the document printed on each page (and this page number works in Adobe Reader).
Page 34 starts OK. The page title and section title are correct and the first 2 paragraphs are displayed correctly. The third paragraph loses the first letter of 2 of its sentences, and the whole paragraph is spread out (similar to full justification) with spaces between most of the letters (perhaps some sort of kerning problem) and some letters on the right, lost off the page. The rest of page 34 looks OK. Page 35 has about half of the page messed up with letters all over the place, missing sentences, and possibly other problems.
Page 36 is almost correct, but some very small fonts are spaced out. Page 37 has 4 sentences with weird kerning (letters spaced out). After that, there are good and bad pages up to the end of the document.
I suspect that the document was originally compiled from several other documents. That may or may not be relevant.
Any suggestions?
http://www.whitman.edu/mathematics/SeniorProjectArchive/2012/

## Senior Project Archive 2012
The 2012 Senior Project Archive; titles are hyperlinked to pdf copies of the final project write-ups. Course coordinator: Barry Balof
• TITLE: FROM POSETS TO DERANGEMENTS: AN EXPLORATION OF THE MÖBIUS INVERSION FORMULA
AUTHOR: Allison Beemer
ABSTRACT: This paper works through an independently developed proof of the Möbius inversion formula. Beginning with a simple seed of an idea, that of a partially ordered set, we proceed by considering properties of compatible matrices, walking through a proof of the existence and uniqueness of what is known as the Möbius function, $\mu$, and subsequently offering a proof of the Möbius inversion formula. The development of the Möbius function and the proof of the Möbius inversion formula lead to results in several branches in mathematics, including combinatorics and number theory. In particular, we will prove the principle of inclusion and exclusion and several number theoretic results.
• TITLE: MODEL SELECTION AND SHRINKAGE: A PROJECT IN LINEAR MODELING
AUTHOR: David DeVine
ABSTRACT: Linear modeling is a statistical tool which fleshes predictive information from large and unweildy data sets. The most familiar linear modeling technique is the Ordinary Least Squares (OLS) technique which minimizes the sum of the squares of residual errors from a predictor curve. As this paper discusses, OLS has significant shortcomings in regression scenarios with many predictors and with correlation between predictors. This paper provides an introduction to modeling techniques that attempt to improve upon these pitfalls. It studies the mathematical machinery behind Ridge Regression, Subset Selection, Forward Stepwise Selection, Lasso, and Adaptive Lasso including a look at the asymptotic behavior of the Lasso estimates and their limiting distribution. It then moves on to apply the theory in a variety of regression scenarios and investigates the relative performances of each technique using the software package R. Under the criteria of Mean Square Error it concludes that Lasso is the most widely applicable and reliable modeling technique of the ones studied.
• TITLE: AN INTRODUCTION TO SURREAL NUMBERS
AUTHOR: Gretchen Grimm
ABSTRACT: In this paper, we investigate John H. Conway's surreal numbers. Surreal numbers are defined by two sets of numbers, which differentiates them from the real numbers. Based on two axioms, we map out all of the surreal numbers, finding infinities greater than any real number and infinitesimal numbers smaller in absolute value than any real number. We investigate addition and multiplication of surreal numbers, and show that they form a totally ordered field, $S$, which contains the real numbers. We also give an introduction to an application of surreal numbers in game theory.
• TITLE: PATHOLOGICAL: APPLICATIONS OF LEBESGUE MEASURE TO THE CANTOR SET AND NON-INTEGRABLE DERIVATIVES
AUTHOR: Price Hardman
ABSTRACT: The Lebesgue integral is a hallmark of modern analysis, and the theoretical foundation of the Lebesgue integral is measure theory. Here, we develop the basics of Lebesgue measure theory on the real line, a theory which is concerned with generalizing the notion of length to sets that are not intervals. We then investigate several fascinating "pathological" examples -- particularly counterintuitive and insightful results -- and use the tools of measure theory to explore both their bizarre features as well as their historical importance. These examples include the Cantor set, an uncountable set of measure zero; the Smith-Volterra-Cantor set, a set of positive measure yet one that contains no intervals; and Volterra's function, a function with a bounded derivative that is not Riemann integrable.
• TITLE: PÓLYA'S COUNTING THEORY
AUTHOR: Mollee Huisinga
ABSTRACT: Pólya's Counting Theory is a spectacular tool that allows us to count the number of distinct items given a certain number of colors or other characteristics. We will count two objects as 'the same' if they can be permuted to produce the same configuration. We can use Burnside's Lemma to enumerate the number of distinct objects. However, sometimes we will also want to know more information about the characteristics of these distinct objects. Pólya's Counting Theory is uniquely useful because it will act as a picture function - actually producing a polynomial that demonstrates what the different configurations are, and how many of each exist. As such, it has numerous applications. Some that will be explored here include chemical isomer enumeration, graph theory and music theory.
• TITLE: A STUDY OF CONVEX FUNCTIONS WITH APPLICATIONS
AUTHOR: Matthew Liedtke
ABSTRACT: This paper is a rigorous analysis of convex functions and their applications to selected topics in real analysis and economics. In our theoretical survey we explore the continuity and differentiability of convex functions. In addition we state a simplified criterion of convexity for continuous functions. Our topics from real analysis include right-endpoint approximation for integrals and pointwise convergence. In our discussion of economics, we will use the theory of consumer behavior under uncertainty and Jensen's Inequality to show the relation between convex functions and the modeling of risk-loving behavior.
https://repo.scoap3.org/record/31369

# Search for Narrow $H\gamma$ Resonances in Proton-Proton Collisions at $\sqrt{s}=13\ \mathrm{TeV}$
01 March 2019
Abstract: A search for heavy, narrow resonances decaying to a Higgs boson and a photon ($H\gamma$) has been performed in proton-proton collision data at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of $35.9\ \mathrm{fb}^{-1}$ collected with the CMS detector at the LHC in 2016. Events containing a photon and a Lorentz-boosted hadronically decaying Higgs boson reconstructed as a single, large-radius jet are considered, and the $\gamma +\text{jet}$ invariant mass spectrum is analyzed for the presence of narrow resonances. To increase the sensitivity of the search, events are categorized depending on whether or not the large-radius jet can be identified as a result of the merging of two jets originating from $b$ quarks. Results in both categories are found to agree with the predictions of the standard model. Upper limits on the production rate of $H\gamma$ resonances are set as a function of their mass in the range of 720–3250 GeV, representing the most stringent constraints to date.
Published in: Physical Review Letters 122 (2019)
http://math.stackexchange.com/questions?page=4&sort=active

All Questions
8 views
Limits of the derangements proportion within the permutations of the set $[1,n]$
Let $D_n$ be the number of derangements of a set of $n$ elements; by convention we have $D_0=1$. I found that $D_n=n!\sum\limits_{k=0}^{n}\frac{(-1)^k}{k!}$. For all $n\in \mathbb{N}^*$, we write ...
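The formula quoted in this excerpt can be sanity-checked against direct enumeration of permutations with no fixed point (a quick Python sketch, not part of the original post):

```python
import math
from itertools import permutations

# Check D_n = n! * sum_{k=0}^{n} (-1)^k / k!  against brute-force counting.
def derangements_formula(n):
    return round(math.factorial(n) * sum((-1) ** k / math.factorial(k) for k in range(n + 1)))

def derangements_brute(n):
    return sum(1 for p in permutations(range(n)) if all(p[i] != i for i in range(n)))

for n in range(8):
    assert derangements_formula(n) == derangements_brute(n)
print([derangements_formula(n) for n in range(8)])  # [1, 0, 1, 2, 9, 44, 265, 1854]
```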
9 views
Is it always possible to find the Reduced Row Echelon form of a matrix, given the basis of its null space?
I tried starting with multiple bases of the null space and each time I was able to write the RREF form of the matrix. However, I have not been able to prove that this is true for all possible bases.
20 views
Find both the diagonals with area and side given [on hold]
Side of rhombus $= 65$ cm. Area of rhombus $= 1024$ cm$^2$. Find the diagonals of the rhombus.
161 views
Length of a Chord of a circle
I was wondering about the possible values that the length of a chord of a circle can take. The length of a chord is always greater than or equal to 0 and smaller ...
16 views
Example of a Lebesgue unmeasurable function f such that f*f is Lebesgue measurable
Give an example of a Lebesgue unmeasurable function $f:[0,1]\rightarrow \mathbb{R}$ such that $f^2$ is Lebesgue measurable.
4 views
Smooth points of the secant variety with a given tangent space
Let $X\subseteq\mathbb{P}^{N}$ be an algebraic variety of dimension $n$. Let $(x,y)\in X\times X-\Delta_{X}$ and $z\in\langle x,y\rangle\subseteq SX$, where $SX$ is the secant variety of $X$. I want ...
133 views
Another version of the Poincaré Recurrence Theorem (Proof)
The task is to prove the following version of Poincaré's Recurrence Theorem: Let $(X,\Sigma,\mu)$ be a finite measure space, $f\colon X\to X$ a measurable transformation that preserves the ...
15 views
What is the Fourier transform of the Laplacian of a shifted function?
I'm looking for the Fourier transform of $\nabla^2f(\vec{r}-\vec{a})$ I can assume that the 3D Fourier transform of $f(\vec{r})$ is $\tilde{f}(\vec{q})$ and the vector $\vec{a}$ is a const vector. ...
5 views
Question concerning comparison of different tetration functions
Let $a_{1}=2$, $a_{n+1}=2^{a_{n}}$ for $n \geq 1$ Let $b_{1}=3$, $b_{n+1}=3^{b_{n}}$ for $n \geq 1$ Is is true that $a_{n+2}>b_{n}$ for all $n \geq 1$? If so, is the proof elementary? (Use only ...
55 views
Prove or disprove that $(a_1+a_2+\ldots+a_n)\leq n\sqrt{a_1^2+\ldots+a_n^2}$, by showing that $RHS-LHS\geq 0$ if possible.
Prove or disprove that $$\left|a_1\right|+\left|a_2\right|+\ldots+\left|a_n\right|\leq n\sqrt{a_1^2+\ldots+a_n^2}$$ Where $a_1,\ldots,a_n\in\mathbb{R}$ and $n\in\mathbb{N}$. EDIT: I was hoping there ...
25 views
Proportional probability of payouts with defined expected value.
Assume we have a lottery with payouts $(2,3,5)$. So if you buy a ticket you can win a pot which will payout your ticket price multiplied by one of those numbers. The organizer expects a margin profit ...
5 views
Number of Labels used in reduction of Isomorphism of Labelled Graph to Graph Isomorphism
From "Lecture Notes in Computer Science" by Christoph M. Hoffmann , Assume that both $X$ and $X'$ have $n$ vertices. We plan to code the graph labels as suitable subgraphs which we attach to the ...
18 views
Geometry-||gm proof
$ABCD$ is a parallelogram in which $P$ and $Q$ are the mid points of the sides $AD$ and $BC$ respectively. If $BP$ & $QD$ intersect the diagonal $AC$ at $X$ and $Y$ respectively then prove that ...
10 views
Confused in some basic concept about Gauss Map $dN_p$
Here I have some questions that have remained unsolved for quite a long time. My question is about the Gauss map $dN_{p}$; to set up a convenient expression for the symbols and formulas, I have to construct ...
15 views
I do not understand the hypothesis of the lebesgue decomposition theorem
I do not understand the hypothesis of the Lebesgue decomposition theorem. Given a measure on a sigma-algebra, I do not understand why there exists a set that it is concentrated on.
16 views
Probability and Statistics
How can I check if a Moment Generating Function is valid or not? I tried using the definition for that but it didn't help me at all.
21 views
Given probability distribution $f(x)=2-bx$ find $b$ and range for $x$
Suppose that the distances between houses and the center of a city are distributed with the density function: $f(x)=2-bx$, where $x$ denotes distance. If this is a proper density function, what can we ...
18 views
Is this complex vector bundle trivial?
Let $\Sigma$ be any Riemann surface, and let $L \rightarrow \Sigma$ be a complex line bundle (which is classified according to its degree). Then the vector bundle $L \oplus L^{-1} \rightarrow \Sigma$ ...
47 views
How did they calculate the possible endings?
On this link you can see all the possible endings. The game has six stages; on each you have 3 choices, and at the end you have 5 stages with 2 endings each. It's like: 1. > 2a 2b 2c > ...
23 views
Finding the formula for acceleration from $v=2s^3+5s$, where $s$ is the displacement at time $t$
This is the question: I first found $\frac{dv}{ds}=6s^2+5$, then I tried to find $\frac{ds}{dt}$ by messing about a little with implicit differentiation, but I had no luck and I therefore couldn't ...
145 views
Find the sum $\sum _{ k=1 }^{ 100 }{ \frac { k\cdot k! }{ { 100 }^{ k } } } \binom{100}{k}$
Find the sum $$\sum _{ k=1 }^{ 100 }{ \frac { k\cdot k! }{ { 100 }^{ k } } } \binom{100}{k}$$ When I asked my teacher how can I solve this question he responded it is very hard, you can't solve it. I ...
24 views
Let $B = \{n \in \mathbb{Z} : n = 3j + 2,\ j \in \mathbb{Z}\}$, $D = \{n \in \mathbb{Z} : n = 3j - 1,\ j \in \mathbb{Z}\}$. Is $B = D$?

Let $B = \{n \in \mathbb{Z} : n = 3j + 2,\ j \in \mathbb{Z}\}$, $D = \{n \in \mathbb{Z} : n = 3j - 1,\ j \in \mathbb{Z}\}$. Is $B = D$? How do I prove this? To me it looks to be true. But I don't know how to put it ...
69 views
Existence of two disjoint closed sets with zero infimal distance
Are there two closed sets $A,B\subset\mathbb{R}^2$ with the following properties? $A\cap B=\emptyset$ $\forall \epsilon>0$ there exist $a \in A$ and $b\in B$ such that $\|a-b\| < \epsilon$
61 views
Find $\sum_{i=1}^{2000}\gcd(i,2000)\cos\left(\frac{2\pi\ i}{2000}\right)$
What is the value of the following sum? $$\sum_{i=1}^{2000}\gcd(i,2000)\cos\left(\frac{2\pi\ i}{2000}\right)$$ where $\gcd$ is the greatest common divisor.
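Stubs like this can often be settled numerically. Grouping the terms by $d=\gcd(i,2000)$ and using the Ramanujan-sum identity $c_q(1)=\mu(q)$ suggests the sum equals $\sum_{d\mid n} d\,\mu(n/d)=\phi(2000)=800$; a direct computation (Python, added for illustration) agrees:

```python
import math

# Brute-force the sum; the grouping-by-gcd argument predicts phi(2000) = 800.
n = 2000
total = sum(math.gcd(i, n) * math.cos(2 * math.pi * i / n) for i in range(1, n + 1))
print(round(total))  # 800
```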
47 views
Given any 40 people, at least four of them were born in the same month of the year [on hold]
Given any 40 people, at least four of them were born in the same month of the year. Why is this true?
34 views
Definition of fixed point free relation
If we have such relation that for $\forall x$ $f(x)\ne x$ , how is it called in one word? I can come up with only "graph of this function is not a straight line:)" Thank you
48 views
Want to know what's wrong?
I took an exercise from Apostol's book. I tried it, but my answer differs from the one in the book, and I don't know which part of my procedure is wrong. So I want to know ...
1k views
Combining inequalities into one inequality
Let's say we are given $a$, $b$, $d$ with $1 \leq a, b, d \leq 1000$ and inequalities $x \geq a$, $y \geq b$, and $a+b < x + y \leq a+b+d$. I need to combine all this and the following into one ...
$\pi$ is dependent on properties of geometry, assuming that we define it as $C/d$. Could then the $\pi$ also be an integer?
$\pi$ is dependent on properties of geometry, assuming that we define it as $C/d$. Could there be a geometry where $\pi$ is a rational number or an integer?
https://math.stackexchange.com/questions/1422972/total-number-of-possible-binary-operations

# Total number of possible binary operations
If there are $n$ elements in a set, the number of binary operations that can be defined is $2^n$. Am I right or wrong?
If the definition of "operation" is any function $\oplus \colon A \times A \to A$ without any further restrictions, then there are $n^2$ values of the function to define independently, each chosen among $n$ possibilities, so it is $n^{n^2}$.
• It can be also seen as the number of ways to fill an $n^2$-tuple where each position assumes $n$ values. $\underbrace{(\_\ \_\ \_ \ \_ \dots \_ \ \_)}_{n^2\ positions}$ So it is $n^{n^2}$ – nickchalkida Sep 6 '15 at 9:09
Binary operations without any restriction? No, there are many more. For every pair $(x,y)$ from a set $X$ we pick any value $f(x,y) \in X$, so we have $n^2$ independent choices among $n$ values, hence $n^{n^2}$ in my reckoning.
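The count $n^{n^2}$ can be verified for tiny sets by enumerating every possible operation table, i.e. every assignment of an output in $A$ to each of the $n^2$ input pairs (a Python sketch, added for illustration):

```python
from itertools import product

# Count all functions A x A -> A for |A| = n and compare with n ** (n ** 2).
for n in (1, 2, 3):
    A = range(n)
    pairs = list(product(A, A))              # the n^2 possible inputs (x, y)
    tables = product(A, repeat=len(pairs))   # one output choice per input
    count = sum(1 for _ in tables)
    assert count == n ** (n ** 2)
    print(n, count)  # prints "1 1", "2 16", "3 19683"
```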
https://stacks.math.columbia.edu/tag/020B

## 33.2 Notation
Throughout this chapter we use the letter $k$ to denote the ground field.
https://prosjektbanken.forskningsradet.no/en/project/FORISS/338779?Kilde=FORISS&distribution=Ar&chart=bar&calcType=funding&Sprak=no&sortBy=date&sortOrder=desc&resultCount=30&offset=0&Fag.2=Informasjons-+og+kommunikasjonsvitenskap
# PhysML: Structure-based machine learning for physical systems
#### Awarded: NOK 10.8 mill.
Project Number: 338779

Application Type:

Project Period: 2023 - 2027

Organisation:

Location:
The majority of Norwegian industries are still a long way from being able to adopt AI and machine learning methods in their daily operations. One of the main barriers is the lack of robustness and trustworthiness of existing methods, particularly when applied to physical processes. PhysML will contribute to solving this challenge by combining machine learning models with the geometric properties of mathematical models in a hybrid analytics framework that alleviates the weaknesses of the individual approaches by leveraging their complementary strengths. Industrial data often originates from sensors or manual measurements that can suffer from low quality or quantity, hindering pure data-driven approaches. However, industrial data frequently describes physical processes which are governed by the laws of nature and can thus be modelled. When such models exist, they are based on first principles, making them trustworthy but lacking the flexibility of data-driven approaches. PhysML will work towards two goals: i) Use machine learning to gain physical knowledge about the systems, and ii) use physical knowledge to obtain machine learning models that are open, trustable, robust, and flexible. A fundamental innovation in our approach is to utilize the assumed underlying structures of the system, such as symmetry or preservation of energy, and thus build on the established field called numerical geometric integration, which is the study of how to incorporate such structures into mathematical models. The national and international academic partners (NTNU, Brown University) are among the world’s foremost experts in numerical geometric integration and physics-informed machine learning, respectively. Partnership with Elkem and Veas will ensure industrial relevance by providing use cases for development and testing of algorithms within the areas of predictive maintenance, control theory and process optimization.
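A toy illustration of the kind of structure the project description refers to: for a harmonic oscillator, the symplectic Euler method (a basic tool of geometric numerical integration) keeps the energy error bounded over long runs, while the ordinary explicit Euler method lets the energy drift. A minimal sketch in Python (a hypothetical example, not project code):

```python
import math

# Harmonic oscillator H(q, p) = (p^2 + q^2) / 2; the exact flow conserves H.
def energy(q, p):
    return 0.5 * (p * p + q * q)

def run(symplectic, h=0.01, steps=10_000):
    q, p = 1.0, 0.0
    for _ in range(steps):
        if symplectic:          # symplectic Euler: update p first, then q with new p
            p -= h * q
            q += h * p
        else:                   # explicit Euler: both updates use old values
            q, p = q + h * p, p - h * q
    return energy(q, p)

e0 = energy(1.0, 0.0)                      # 0.5
drift_explicit = abs(run(False) - e0)      # grows steadily with time
drift_symplectic = abs(run(True) - e0)     # stays small and bounded
print(drift_explicit, drift_symplectic)
assert drift_symplectic < drift_explicit   # structure preservation pays off
```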
https://www.allanswered.com/post/731/peter-mccullagh-asymptotic-inference-for-sparse-priors/ | ### Peter McCullagh - Asymptotic inference for sparse priors
Saturday 1st October 1:40-2:10pm
McCullagh Slides
Community: WHOA-PSI 2016
Peter, I think I see the need for the small $\nu$ approach to sparsity, independent of $n$. But what exactly are the implications for inference? And is it necessary to take a Bayesian approach, for example, to quantify the expected sparsity through a prior?
written 13 months ago by Nancy Reid
http://pysynphot.readthedocs.io/en/latest/observation.html | # Observation
`Observation` is a special type of Source Spectrum, where the source is convolved with a Bandpass, both of which are stored in its `spectrum` and `bandpass` class attributes, respectively.
It also has two different datasets:
• `wave` and `flux`, as defined by the native wavelength set, which is constructed when combining the source spectrum and the bandpass.
• `binwave` and `binflux`, as defined by the binned wavelength set.
Binned wavelength set uses the optimal binning for the detector, if applicable. When optimal binning is not found (e.g., a non-HST filter system), it uses the native wavelength set instead (see Reference Data). For IRAF STSDAS SYNPHOT users, this is the same behavior as the `countrate` task. In addition, `binwave` can be explicitly overwritten using the `binset` keyword at initialization. Note that the given array must contain the central wavelength values of the bins.
Once `binwave` is established, `binflux` is computed by integrating the native flux over the width of each bin. Due to the nature of binned data, `binflux` cannot be interpolated when sampled. To accurately represent the binned dataset in a plot, you should plot it as a histogram, with each `binwave` value at the mid-point of the horizontal step (see Examples).
In contrast, the native dataset is considered to be samples of a continuous function. Thus, it may be interpolated linearly and plotted without using a histogram.
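The bin-averaging idea described above can be sketched in plain numpy. This is only a conceptual illustration of integrating a flux density over each bin and dividing by the bin width; the helper name and its edge convention (edges halfway between centers) are invented here, and this is not pysynphot's actual algorithm:

```python
import numpy as np

def rebin_flux(wave, flux, centers):
    """Average the native flux density over each output bin.

    Bin edges are taken halfway between adjacent bin centers, an
    illustrative convention only.
    """
    mid = (centers[:-1] + centers[1:]) / 2.0
    edges = np.concatenate(([2.0 * centers[0] - mid[0]], mid,
                            [2.0 * centers[-1] - mid[-1]]))
    binflux = np.empty(len(centers))
    for i in range(len(centers)):
        grid = np.linspace(edges[i], edges[i + 1], 65)   # fine sub-grid
        fy = np.interp(grid, wave, flux)
        # trapezoidal integral over the bin, divided by the bin width
        integral = np.sum(0.5 * (fy[1:] + fy[:-1]) * np.diff(grid))
        binflux[i] = integral / (edges[i + 1] - edges[i])
    return binflux

wave = np.linspace(1000.0, 2000.0, 5001)
flux = 1.0 + 0.001 * (wave - 1000.0)      # linear flux density
centers = np.linspace(1100.0, 1900.0, 9)
print(rebin_flux(wave, flux, centers))    # ~ the native flux at each bin center
```

For a linear flux density, the bin average equals the value at the bin center, which makes the sketch easy to verify.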
An observation can be sampled using its `sample()` method. By default, it samples the binned dataset and does not allow interpolation; i.e., you must provide a wavelength value that exactly matches one in `binwave`. It will sample the native dataset and allow interpolation if the `binned=False` option is used. For more information, see Sampling.
The following operations are disabled because they do not make sense in the context of an observation, in which a photon has passed through the telescope optics:
• Redshift
• Some multiplication (see below)
When multiplication is performed, a new observation is created using its original source spectrum and a new bandpass from multiplying its original bandpass with the given bandpass or scalar number. An observation cannot be multiplied with another observation, source spectrum, or extinction curve.
In addition, it has unique properties, such as Effective Stimulus (also see Count Rate) and Effective Wavelength.
Like a source spectrum, an observation can be written to a FITS table using its `writefits()` method (also see File I/O), which in this case, takes an additional `binned` keyword to indicate which dataset to write.
## Count Rate
`countrate()` is probably the most often used method for an observation. It computes the total counts of a source spectrum, integrated over the passband defined by a HST observing mode. For calculations, it uses an optimal wavelength set (if available), which is constructed so that one wavelength bin corresponds to one detector pixel.
It has two unique features below that are helpful when simulating an HST observation. Therefore, it is ideally suited for predicting exposure times (e.g., using HST ETC) when writing HST proposals:
1. The input parameters were originally structured to mimic what is contained in the exposure logsheets found in HST observing proposals in Astronomer’s Proposal Tool (APT).
2. For the spectroscopic instruments, it will automatically search for and use a wavelength table that is appropriate for the selected instrumental dispersion mode.
Examples of its usage are available in Examples, Tutorial 1: Observation, and Tutorial 7: Count Rates for Multiple Apertures.
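Conceptually, a count rate is the telescope collecting area times the bandpass-weighted integral of the photon flux density. The numpy sketch below illustrates that integral with invented numbers (a flat spectrum in `photlam`, photons/s/cm^2/A, and a box throughput); it is not pysynphot's internal code, which also handles instrument-specific optimal binning:

```python
import numpy as np

# Hypothetical flat source and 1000 A wide box bandpass, purely for illustration.
wave = np.linspace(4000.0, 7000.0, 3001)            # 1 A spacing, Angstroms
flux_photlam = np.full_like(wave, 2.0e-4)           # photons / s / cm^2 / A
throughput = np.where((wave >= 5000.0) & (wave <= 6000.0), 0.25, 0.0)

area = 45238.93416                                  # HST collecting area, cm^2

# counts/s = area * integral of (photon flux density * throughput) d(lambda)
integrand = flux_photlam * throughput
countrate = area * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(wave))
print(countrate)  # ~ area * 2e-4 * 0.25 * 1000 counts/s
```

A flat spectrum through a box bandpass has the analytic answer `area * flux * throughput * width`, so the sketch can be checked by hand.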
## Sampling
`sample()` is another useful method for an observation. It allows the computation of the number of counts at a particular reference wavelength (in Angstroms) for either binned or native dataset.
The example below computes the number of counts for Vega at 10000 Angstroms, as observed by HST/WFC3 IR detector using F105W filter:
```>>> refwave = 10000
>>> obs = S.Observation(S.Vega, S.ObsBandpass('wfc3,ir,f105w'))
>>> obs.sample(refwave) # Binned dataset
6427997.452742151
>>> obs.sample(refwave, binned=False) # Native dataset
32194062.511316653
```
In contrast, its `__call__()` method is the same as Source Spectrum. It always computes flux in `photlam` and can only “see” the native dataset, as illustrated by the example below:
```>>> obs(refwave)
142.0874566775498
```
## Examples
Simulate an observation of a 5000 K blackbody through the HST/ACS WFC1 F555W bandpass, and plot its binned dataset:
```>>> obs = S.Observation(S.BlackBody(5000), S.ObsBandpass('acs,wfc1,f555w'))
>>> plt.plot(obs.binwave, obs.binflux, drawstyle='steps-mid')
>>> plt.xlim(4000, 7000)
>>> plt.xlabel(obs.waveunits)
>>> plt.ylabel(obs.fluxunits)
>>> plt.title(obs.name)
```
Calculate the count rate of this observation in the unit of counts/s over the HST collecting area (i.e., the primary mirror) that is defined in the reference data:
```>>> obs.primary_area
45238.93416
>>> obs.countrate()
10080.633086603169
```
Calculate the effective stimulus in `flam`:
```>>> obs.effstim('flam')
1.9951166916464598e-15
```
Calculate the effective wavelength in Angstroms:
```>>> obs.efflam()
5406.9723492971034
```
Convert the flux unit to counts:
```>>> obs.convert('counts')
```
Plot observation data in both native and binned wavelength sets. Note that counts per wavelength bin depends on the size of the bin because it is not a flux density:
```>>> plt.plot(obs.wave, obs.flux, marker='x', label='native')
>>> plt.plot(obs.binwave, obs.binflux, drawstyle='steps-mid', label='binned')
>>> plt.xlim(6010, 6040)
>>> plt.ylim(2, 6)
>>> plt.xlabel(obs.waveunits)
>>> plt.ylabel(obs.fluxunits)
>>> plt.title(obs.name)
>>> plt.legend(loc='best')
```
Write the observation out to two FITS tables, one with native dataset and the other binned:
```>>> obs.writefits('myobs_native.fits', binned=False)
>>> obs.writefits('myobs_binned.fits')
```
http://math.stackexchange.com/questions/249530/using-lagrange-multipliers-for-restricted-extrema/249652 | # Using Lagrange multipliers for restricted extrema
Consider the function $f(x,y) = x^2 + xy + y^2$ defined on the unit disc $D = \{(x,y) \mid x^2 + y^2 \leq 1\}$.
I cannot simplify the equations to the point where I find a constant for the Lagrange multiplier, and thus can't find the points of the extrema. I used the method of creating a new function $L$ with the variables $x$, $y$, and $\lambda$, where $\lambda$ is the Lagrange multiplier, and then took the partial derivatives of $L$ with respect to each variable. This is where I am stuck, because I cannot simplify enough to find $x$ and $y$. Are there any tricks for this type of question?
Are you finding all extrema (mins and maxes) or just mins...? – icurays1 Dec 2 '12 at 22:46
I am finding both, but I actually solved the question. Thanks for commenting though – tamefoxes Dec 2 '12 at 22:47
The function is continuous and differentiable, so its extreme values over a closed, bounded region occur either at critical points in the interior or on the boundary.
To find critical points, we find $f_x$ and $f_y$ and set them both equal to $0$: $$f_x = 2x+y = 0$$ $$f_y = 2y+x = 0$$ The first equation gives $y = -2x$; substituting into the second: $$2(-2x)+x = -3x = 0$$ $$x = 0, \quad y = 0$$
So the origin is a critical point. Using the second derivative test in two dimensions: $$f_{xx} = 2$$ $$f_{xy} = f_{yx} = 1$$ $$f_{yy} = 2$$ Because $f_{xx}f_{yy} - f_{xy}^2 = 3 > 0$ and $f_{xx} = 2 > 0$ (so it's a min, not a max), $(0,0)$ is a minimum, so it cannot possibly be the greatest value.
Now we have to test the boundary, as the greatest value has to be there. Our new constraint is $g(x,y) = x^2+y^2 = 1$, so $\nabla g = \langle 2x, 2y\rangle$
Using Lagrange multipliers, we know $\nabla f = \lambda \nabla g$: $$2x+y = \lambda 2x$$ $$2y +x = \lambda 2y$$
$${2x+y \over 2x} = {2y+x \over 2y}$$ (Neither $x$ nor $y$ can be $0$ here: $x=0$ forces $y=0$ by the first equation, contradicting the constraint, so dividing is legitimate.) $$4xy+2y^2 = 4xy + 2x^2$$ $$x^2 = y^2$$ $$x = \pm y$$ Using our constraint, we have $x^2 +x^2 = 1$, so $x = \pm 1/ \sqrt{2}$. So, the candidate points are $(1/ \sqrt{2}, 1/ \sqrt{2}), (-1/ \sqrt{2}, 1/ \sqrt{2}), (1/ \sqrt{2}, -1/ \sqrt{2}), (-1/ \sqrt{2}, -1/ \sqrt{2})$. After checking the values of $f$ at each of these points, we can conclude that the greatest value, $3/2$, occurs at $(1/ \sqrt{2}, 1/ \sqrt{2})$ and $(-1/ \sqrt{2}, -1/ \sqrt{2})$.
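The boundary analysis is easy to sanity-check numerically: on the circle, $f(\cos t, \sin t) = 1 + \cos t\sin t = 1 + \tfrac{1}{2}\sin 2t$, which oscillates between $1/2$ and $3/2$. A short plain-Python check:

```python
import math

# Sample f(x, y) = x^2 + x*y + y^2 densely on the unit circle.
N = 200000
vals = []
for k in range(N):
    t = 2.0 * math.pi * k / N
    x, y = math.cos(t), math.sin(t)
    vals.append(x * x + x * y + y * y)

print(round(max(vals), 6), round(min(vals), 6))  # 1.5 0.5
```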
Nicely explained! :) – Mike Spivey Dec 4 '12 at 3:39
http://math.stackexchange.com/tags/logic/new | # Tag Info
Your doubts can be explained by the following theorem: $$\left(A\Leftrightarrow B\right) \Rightarrow \left(A\Rightarrow B\right)$$ The same holds for the other direction, of course. Practically speaking, this means that if two statements are equivalent, it is not a false statement to use an implication there. The reason why most people do this is because in many ...
$\square p\land p\to p$ is a theorem; therefore $\square(\square p\land p\to p)$ by necessitation, and so $$\square(\square p\land p)\to \square p$$ by $\mathbf K$ (and modus ponens). Now apply $p\land\cdot$ to both sides and use the equivalence of $a\land b\to c$ and $a\to(b\to c)$.
What if one of the Knights draws the $N$ card?
Strong induction means the following: suppose $P(0)$ holds, and that $P(k)$ for all $k<n$ implies $P(n)$. Then $P(n)$ holds for all $n\in\mathbb{N}$. For this question, our base is $n=2$, which is prime, so the statement holds. Now assume $n>2$ and that every $k<n$ is either prime or a product of primes. If $n$ is prime then there's nothing to prove, so assume $n$ is not ...
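The strong-induction argument translates directly into a recursive procedure: find any nontrivial divisor and recurse on the two strictly smaller factors, with primes as the base case. A small Python sketch (the function name is just for illustration):

```python
def prime_factors(n):
    """Return a list of primes whose product is n (n >= 2), mirroring
    the strong-induction proof: find any nontrivial divisor and recurse."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            # n = d * (n // d) with both factors < n: recurse on each,
            # exactly where the strong induction hypothesis is invoked.
            return prime_factors(d) + prime_factors(n // d)
    return [n]  # no divisor found: n is prime (the base of the induction)

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
```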
Let M be a structure and T a theory that it models. A substructure N of M need not model T. If N models T, then we say that N is a submodel of M relative to T; otherwise, N is a substructure of M that is not a submodel relative to T. Thus the only distinction comes when a theory lurks in the background. As an example, consider the signature S = (+, 0) ...
Hope this reference helps: "Broadening the Iterative Conception of Set" by Mark F. Sharlow, Notre Dame Journal of Formal Logic, Volume 42, Number 3, 2001, pp.149-170. According to the abstract, "the modified conception maintains most of the features of the iterative conception of set, but allows for some non-wellfounded sets. It is suggested that this ...
Order of quantifiers matters! We claim that the first statement is false. To see this, we will prove its negation, namely: $$(\mathbb N, +) \models \exists x \, \forall y \, \exists z \, [x + y = z]$$ Let $x = 3 \in \mathbb N$. Given any $y \in \mathbb N$, let $z = 3 +y \in \mathbb N$. Then $x + y = 3 + y = z$, as desired. The second statement is true. ...
The first is false. It says that for any natural number $x$, there is a natural number $y$ such that $x+y$ is not a natural number (for all natural numbers $z$, $x+y$ differs from $z$). Note that in your explication of the sentence, you implicitly read $\exists y\forall z$ as $\forall z\exists y$. Now perhaps you can deal with the second.
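Order of quantifiers is easy to check mechanically over a finite domain. With a hypothetical predicate $P(x,y)$ meaning "$y$ is the successor of $x$ mod $5$" on $\{0,\dots,4\}$, the statements $\forall x\,\exists y\,P(x,y)$ and $\exists y\,\forall x\,P(x,y)$ get different truth values:

```python
dom = range(5)
P = lambda x, y: y == (x + 1) % 5   # hypothetical predicate, just for illustration

forall_exists = all(any(P(x, y) for y in dom) for x in dom)   # for all x there is y: P(x, y)
exists_forall = any(all(P(x, y) for x in dom) for y in dom)   # there is y such that for all x: P(x, y)

print(forall_exists, exists_forall)  # True False
```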
First off, even though this kind of puzzle usually appears under names such as "logic puzzles", they're not really about what mathematicians call "logic". It is possible to write down the contents of the puzzle in the language of mathematical logic, but doing so will rarely be of any help in solving the puzzle -- and conversely, being skilled in solving ...
A finite set has even size exactly if its elements can be paired up two by two. A natural way to express this pairing when you have only a function to express it with would be to require that the function should map each element to its partner. Can you write down the conditions "$f$ maps every element to something that maps back to the element we started ...
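Phrased as a brute-force check: a finite set has even size exactly when it admits a fixed-point-free involution, i.e. some $f$ with $f(f(x))=x$ and $f(x)\ne x$ for all $x$. The following exhaustive search over all self-maps (illustrative only, feasible for tiny sets) confirms this:

```python
from itertools import product

def has_pairing(n):
    """True iff {0,...,n-1} admits f with f(f(x)) == x and f(x) != x for all x."""
    elems = range(n)
    return any(all(f[f[x]] == x and f[x] != x for x in elems)
               for f in product(elems, repeat=n))  # f as a tuple of images

print([has_pairing(n) for n in range(1, 7)])  # [False, True, False, True, False, True]
```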
If you think about $R$ being a partial order, then these mean: $\exists x\forall y(x\mathrel R y)$, there is a minimum. $\exists x\forall y(y\mathrel R x)$, there is a maximum. $\forall x\exists y(x\mathrel R y)$, every element is related to someone. Of course, if you want $R$ to be a partial order, then for $x$ being the maximum element the only $y$ ...
Hint: There is not a single negation symbol to be seen in $\Gamma$.
The following Haskell program:

    import Control.Monad

    remove1 x [] = []
    remove1 x (h:t) = if (x == h) then t else h : remove1 x t
    remove xs ys = foldr remove1 ys xs
    digits = [0..9]
    numeral = foldl (\a b -> a * 10 + b) 0
    solutions = do
      s <- remove [0] digits
      e <- remove [s,0] digits
      v <- remove ...
This is what happens when you abbreviate. Recall that $a,b\in\mathcal P(A)$ is really a shorthand for $a\in\mathcal P(A)\land b\in\mathcal P(A)$. I think that now you can finish the negation properly.
No, you haven't provided a contradiction. Suppose that $A = \{1,2,3\}$ and $B=\{3,4,5\}$ and let $a\in A, b\in B.$ Then it is true that $a+b \geq 4.$ Now, let $P$ be the proposition that $a\geq 0,b\geq0$ for any $a\in A, b\in B$. Then $P$ implies $a+b \geq 0.$ This does not contradict $P$, which is reassuring since $P$ is definitely true! For the ...
Your argument after step 3 is not correct. The conditional : $(∃z(x=y*z)⇒(y=x)∨(y=1))$ is universally quantified by $∀y$. This means that : for any $y$ we have to "test" if the antecedent $∃z(x=y*z)$ holds. Thus, for the sake of argument, assume $x = 3$; we have that $x \ne 1$ and thus the first conjunct is true. For the second conjunct, we have to check, ...
A formal proof is a sequence of statements such that every statement in the sequence is either an instance of an axiom; or a deduction from two previous statements using modus ponens. The latter means that if you have statements $\phi\to\psi$ and $\phi$, you can write down $\psi$. If you think about it, this is really nothing more than expressing the ...
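That definition is concrete enough to implement as a toy proof checker (all names here are hypothetical): formulas are nested tuples, a proof is a list of formulas, and each line must be a given axiom/hypothesis or follow from two earlier lines by modus ponens:

```python
def checks(proof, axioms):
    """Verify that each formula in `proof` is an axiom or follows from
    two earlier lines by modus ponens: from A and ('->', A, B), infer B."""
    so_far = []
    for line in proof:
        ok = line in axioms or any(
            imp == ('->', ant, line) for imp in so_far for ant in so_far)
        if not ok:
            return False
        so_far.append(line)
    return True

# Hypotheses: p, p -> q, q -> r.  Derive r in two modus ponens steps.
hyps = {'p', ('->', 'p', 'q'), ('->', 'q', 'r')}
proof = ['p', ('->', 'p', 'q'), 'q', ('->', 'q', 'r'), 'r']
print(checks(proof, hyps))   # True
print(checks(['q'], hyps))   # False: q is neither given nor derivable yet
```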
A3 with $\phi = p$, $\psi = q$ gives $((\lnot p) \rightarrow (\lnot q)) \rightarrow (q \rightarrow p)$. A2 with $\phi = ((\lnot p) \rightarrow (\lnot q))$, $\psi = q$, $\chi = p$ gives $(((\lnot p) \rightarrow (\lnot q)) \rightarrow (q \rightarrow p)) \rightarrow ((((\lnot p) \rightarrow (\lnot q)) \rightarrow q) \rightarrow (((\lnot p) \rightarrow (\lnot q)) \rightarrow p))$ ...

I always think of it in terms of sets. In the picture above, for an element to be purple, it's necessary to be red, but it is not sufficient. The same holds for the blue set: to be in the blue set is a necessary condition in order to be purple, but it is not enough; it's not sufficient. A sufficient condition is stronger than a necessary condition. If ...

If you go straight for the $n$th digit (after the decimal point) of $x$, something like this should work $$\left\lfloor 10^{n+1}|x| - 10\left\lfloor 10^n |x|\,\right\rfloor \right\rfloor$$ Encoding the floor function and other necessary arithmetic as set theory is left as an exercise for the reader.

You can consider the decimal expansion to be a sequence of natural numbers. Then the graph of the sequence can be made into a set $$\{(1,x_1),(2,x_2),\ldots \}.$$ This works nicely for all real numbers between $0$ and $1$. You can fix it up so that it works for all real numbers. I should also mention that you have to take equivalence classes.

$\forall x \in \mathbb{R}:[x\ne 0\implies \exists y \in \mathbb{R}: xy=1]$

> My question is what happens to the truth value if the premise in a universal implication is false, e.g. $\forall x : ((Purple(x) \land Mushroom(x)) \implies Poisonous(x))$ (assuming outer brackets). If in the universe $x$ is not purple or not a mushroom, what happens to the implication?

Then the implication would tell you nothing. $x$ may or may not be ...

Since the variable $x$ is bound by the quantifier, the truth value of the sentence does not depend on any choice of value for $x$. That's what the quantifier says: $\forall x(\cdots)$ is true if "$\cdots$" is true no matter what we bind to $x$. For those particular choices of $x$ that are not purple mushrooms, the formula to the left of $\Rightarrow$ is ...

It is quite easy with Natural Deduction.

(i) $∃xA(x) \rightarrow ¬∀x¬A(x)$

(1) $∀x¬A(x)$ --- assumed [a]

(2) $¬A(x)$ --- from (1) by $\forall$-E

(3) $∃xA(x)$ --- assumed [b]

(4) $A(x)$ --- assumed [c] for $\exists$-E

(5) $\bot$ --- from (4) and (2) by $\rightarrow$-E

(6) $\bot$ --- from (3), (4) and (5) by $\exists$-E, discharging [c]

(7) ...

Neither: $(α→β) → (∃xα→∃xβ)$, $x \notin FV(β)$, is valid. In order to manufacture a counter-example, we can "re-cycle" that of Tetori's answer. We refer to Herbert Enderton, A Mathematical Introduction to Logic (2nd ed, 2001), page 88 for the definition of Logical Implication: Let $\Gamma$ be a set of wffs, $\varphi$ a wff. Then $\Gamma$ logically ...

You can't prove it. Consider the field of rationals. Take $\alpha(x)$ as $x=0$ and take $\beta$ as $\exists y(y\cdot 0=1)$. Then $\exists x (\alpha(x)\to\beta)$ holds (just take $x=1$) and $\exists x\alpha(x)$ also holds, but $\exists x\beta$ is false.

The answer is mostly yes - there is an algorithm that will determine whether or not most complex numbers are in the Mandelbrot set. The algorithm definitely works for all points that lie in the basin of attraction of some attractive orbit and for all points that lie on the boundary of the Mandelbrot set, as proved in this paper. Whether there are ...

Here is another characterization of $\Delta^0_n$. A subset of the natural numbers is $\Delta^0_n$ if and only if both it and its complement are $\Sigma^0_n$. A subset $X$ of the natural numbers is $\Sigma^0_n$ if and only if $X$ can be defined as follows: $k\in X$ if and only if $\exists m_1\forall m_2\exists m_3\dots Q_n m_n R(k,m_1,\dots m_n)$, where $R$ ...

Yes, the article means to say $\Delta^0_{n+1}$ and $\Delta^{0,C}_{n+1}$.

A necessary and sufficient condition for a function $f$ to be an infinity producer is that on the inductive set $\omega^f$ given by $\textbf{Inf}^f$, $f$ is injective and does not have $0$ in its range. For necessity: if $f$ is not injective on $\omega^f$, there are some $x \neq y$ in $\omega^f$ with $f(x) = f(y)$. By the inductive definition of ...

For $x=7$, $2x=14$ is true and $x\ne 7$ is false, so that $2x=14\iff x\neq7$ does not hold. For $x\ne7$, $2x=14$ is false and $x\ne 7$ is true, so that $2x=14\iff x\neq7$ does not hold. This covers all real numbers.

$$2x = 14 \iff x = 7$$ Logically, your statement can be simplified to $$x = 7 \iff x\neq 7$$ which is a contradiction: it is false no matter what $x$ is. Recall that $a \leftrightarrow b$ is true if and only if both $a, b$ are true, or both $a, b$ are false. In the example above, if we put $a: x = 7$ and $b: x\neq 7$, then if $a$ is true, $b$ must ...

When you're in characteristic $\;2\;$, your equation is $$0=2x=14=0\rlap{\;\;\;\;/}\implies x=7=1\pmod 2$$ In any other case one can either divide by two and then $\;x=7\;$, or else $\;2\;$ isn't invertible in a particular algebraic structure.

See these lecture notes by Vann McGee.

The Wikipedia article on the Mandelbrot set suggests that the answer to your question is not yet known. In the paragraph Further results, it says:

> At present it is unknown whether the Mandelbrot set is computable in models of real computation based on computable analysis.

If the Mandelbrot set turned out not to be computable, then there would indeed ...

This theory axiomatizes the class of graphs of degree $2$ such that for all positive integers $n$ there exists a path of length $n$. Needless to say, such a graph has to be infinite to satisfy the last condition. To prove that the theory is not contradictory, it suffices to come up with a model. An example of a model is the infinite path on $\mathbb Z$ where ...

The theory is of a graph of degree $2$ with arbitrarily long paths. The theory is not complete. Do there exist cycles? For example, is there a cycle of length $4$? This is first order and not decided by the axioms.

\begin{align} (p\land q) \rightarrow (p \land r) & \equiv \lnot(p \land q)\lor(p \land r) \tag{1} \\ &\equiv (\lnot p \lor \lnot q) \lor (p\land r)\tag{2}\\ &\equiv \lnot p \lor \lnot q \lor (p \land r)\tag{3} \\ &\equiv \lnot p \lor \lnot q \lor (r \land p)\tag{4}\end{align} (1) follows because $a \rightarrow b \equiv \lnot a \lor b$ ...
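The floor-function digit-extraction formula quoted above can be sanity-checked numerically; exact rational arithmetic via `fractions.Fraction` avoids floating-point surprises:

```python
from fractions import Fraction
from math import floor

def digit(x, n):
    """floor(10^(n+1) |x| - 10 * floor(10^n |x|)), as in the quoted formula;
    n = 0 yields the first digit after the decimal point."""
    x = abs(Fraction(x))
    return floor(10 ** (n + 1) * x - 10 * floor(10 ** n * x))

print([digit(Fraction(314159, 100000), n) for n in range(5)])  # [1, 4, 1, 5, 9]
```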
If I understand you correctly, you want to know the following: I want to prove $$(A\subset X \land B \subset X) \land ((A\subset Y \land B\subset Y) \rightarrow X\subset Y)\implies X=A\cup B$$ since the proposition $P \implies Q$ is formally true in the cases \begin{matrix} {P}&{Q}&{}&{P\implies Q}\\ ...
You should prove (i) $A \cup B \subset X$ and (ii) $X \subset A \cup B$. You can prove (i) from the first statement. To prove (ii), use the second statement with $Y=A \cup B$.
When we say, if it is raining, then it is cloudy, there is often the erroneous suggestion that either cloudiness causes rain, or rain causes cloudiness. Neither is the case. It means only that it cannot be both raining and not cloudy. There is no suggestion of a causal relationship. To answer your question, suppose it is not raining. Then it doesn't matter ...
In plain English, "$P\rightarrow Q$" is the same as "If P is true, then Q must be true". So, if you know that this kind of relation holds for two propositions, P and Q and someday i tell you that i observed P to be true and Q to be false in some situation, you will greet me in a second by "liar". But if i say, that i observed P is false and Q is true, then ...
Remember that $P \rightarrow Q$ has nothing to do with whether $Q$ is true. It only means: If $P$ is true then $Q$ is true. This always holds if $P$ is false because if we assume that $P$ is true, while knowing that $P$ is false, we get: $\neg P \wedge P$ And we can prove anything starting from this. So $Q$ is true.
Let's say there's a sign outside a basketball court that says: "If you're not wearing shoes, you cannot play basketball." The negative of the antecedent is if I were wearing shoes. And if I were wearing shoes, I'm not violating what the sign says no matter whether or not I play basketball. I only violate the sign if I'm not wearing shoes AND playing ...
Imagine, philosophically, mathematical foundation needs to build up a language bridging finiteness at A and countable infinity at B. ACC is such a language leading us jumping from A right to B, while induction is a never ending process leading us to go through each mile in an infinite mileage, but just never get to the end point B. In a single nonempty set, ...
For $\rightarrow$, it should be $\Sigma'\cup\{\neg\sigma\}$ is consistent. Then the rest is fine. For $\leftarrow$, if $\sigma\notin\Sigma'$, then maximality tells us that $\neg\sigma\in\Sigma'$. After this what can you say about $\Sigma'$?
To avoid Choice, you need to have a definitive way of choosing an element from each set. For example, if each $A_n$ is a pair of shoes, you may always choose the left. A typical useful case is when each $A_n$ has a distinguished member such as a unique minimum that you can choose. For example, Baire Category Theorem for a complete SEPARABLE metric space ...
The statement is best read as (using inclusion in a set instead of a predicate; it doesn't really matter): If $x$, $y$, and $z$ are all in $A$, then there is some pair of them which are equal. Clearly, this cannot hold for any set $A$ containing three distinct members - hence, we know that the above statement implies that $A$ has size no more than $2$. ...
P is a property that defines A. For Case 1: Objects x,y, and z exist in the Universe but since (Px∧Py∧Pz) was false, no candidate was held true for the property P that defines A thus their existence has no identity in A because they are not members of A; therefore, A is empty if the antecedent is false always -the consequent has no effect on defining ...
http://mathoverflow.net/questions/75646/duals-of-lindenbaum-algebras/75654 | # “Duals” of Lindenbaum algebras
From Wikipedia I learn:
The Lindenbaum algebra A of a theory T consists of the equivalence classes of sentences of T. The operations in A are inherited from those in T.
If there are disjunction, conjunction and negation, A is a Boolean algebra and can be seen as a poset:
The objects of A are sentences $\phi$ modulo
$$T \vdash \phi \leftrightarrow \phi'$$
There is a relation $\phi \leq \psi$ iff
$$T \vdash \phi \rightarrow \psi$$
Lindenbaum algebras are a bit boring since — for example — all complete theories T have the same two-element Lindenbaum algebra.
They might be a bit more interesting when relaxing the conditions:
Objects $\phi$ modulo:
$$\vdash \phi \leftrightarrow \phi'$$
Relation $\phi \leq \psi$:
$$T \vdash \phi \rightarrow \psi$$
EDIT: And a simplification.
What you have defined is one way to go about this. There is another question discussing Lindenbaum algebras, and the answer by Andreas Blass discusses another possibility. Briefly, another possibility is to define an algebra over arbitrary formulas, including those containing free variables. Some people call this the Rasiowa-Sikorski approach. It is covered in the second half of the book
The mathematics of metamathematics by Helena Rasiowa and Roman Sikorski.
The topic is briefly treated in the chapter on model theory in
Mathematical Logic: A Course with Exercises Pt.2: Recursion Theory, Godel's Theorem, Set Theory and Model Theory, by Rene Cori, Daniel Lascar.
A tutorial style treatment that discusses some design decisions one can take in setting up an algebraic framework for studying logics is in these two articles (part 2 of the second).
Algebraic logic by Hajnal Andréka, István Németi and Ildikó Sain, and the article Applying Algebraic Logic; a General Methodology by Hajnal Andréka, Ágnes Kurucz, István Németi and Ildikó Sain
This is not really different: just add countably many unconstrained constant terms to the theory and you get the same algebra. – François G. Dorais Sep 17 '11 at 3:22
You seem to be missing a T on the left when defining the relation $\leq$ on the Lindenbaum algebra, an error which seems to undercut the premise of your question about duals.
Namely, one wants to define that $\phi\leq\psi$ if and only if $T\vdash \phi\leftrightarrow\phi\wedge\psi$. This is the same as $T\vdash\phi\to\psi$.
You need to include the theory $T$ in this definition, since otherwise the relation is not well-defined on your equivalence classes. For example, you won't even be able to prove that $\phi\leq\phi'$ when $\phi$ and $\phi'$ have been already identified by the first part of your definition. Thus, the way you have set things up, the relation $\leq$ will not be well-defined on the equivalence classes you have set up, but using the theory $T$ when defining the order does make things well-defined.
The point of the Lindenbaum algebra is that the objects in the Lindenbaum algebra represent the possible assertions that you can make, having already committed yourself to the theory $T$. These form a Boolean algebra, and the order in that case is simply the usual order arising in any Boolean algebra. It is not a defect that the algebra has only two elements when $T$ is complete, since if you are committed to a complete theory, then every statement is either proved or refuted by the theory, and these are the two kinds of statements you can make. The way I would describe the situation is that the algebra is more interesting when the theory leaves matters unsettled, since the point of the algebra is to understand the nature of what is not yet settled by $T$. I think you can find an account of the Lindenbaum algebra in any of the standard logic texts, but I'd have to double check for a specific reference.
But if we do consider the notion that you have set up, your objects are the Lindenbaum algebra of the underlying language with no theory, but you have a new relation, which is merely a pre-order, arising from the order as defined using theory $T$. Note that different objects $x$ and $y$ in your algebra can obey $x\leq y\leq x$ with $x\neq y$, so this is a pre-order rather than an order. But if we were to quotient by the corresponding equivalence relation, we would get exactly the Lindenbaum algebra arising from the theory $T$, since $\varphi\leq\psi\leq \varphi$ if and only if $\varphi$ and $\psi$ are equivalent in the Lindenbaum algebra of $T$.
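To make the distinction concrete, here is a small propositional sketch (our illustration, not part of the original question): with the theory $T = \{p \to q\}$, the classes of $p$ and $p \wedge q$ coincide, so the order descends to the quotient.

```python
from itertools import product

# Valuations of the two propositional atoms p and q.
VALUATIONS = list(product([False, True], repeat=2))

# Theory T = {p -> q}: keep only the valuations that are models of T.
MODELS_OF_T = [v for v in VALUATIONS if (not v[0]) or v[1]]

def cls(formula):
    """Equivalence class of a formula modulo T, represented by its truth
    values on the models of T (formula is a Python boolean expression in
    p and q; by propositional completeness, T proves phi <-> psi iff the
    classes agree)."""
    return tuple(bool(eval(formula, {"p": p, "q": q})) for p, q in MODELS_OF_T)

def leq(phi, psi):
    """phi <= psi  iff  T |- phi -> psi, i.e. psi holds in every model
    of T in which phi holds."""
    return all((not a) or b for a, b in zip(cls(phi), cls(psi)))
```

Here `cls("p") == cls("p and q")`, matching $T\vdash p\leftrightarrow p\wedge q$; had we taken all valuations instead (no theory), the two classes would differ and `leq` would be only a pre-order on them, as the answer explains.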
@Joel: I obviously seem to have missed something (the T on the left - though I wasn't aware). So it's not about "turning things around" but about "relaxing conditions". Does the question make sense now? – Hans Stricker Sep 16 '11 at 23:27
Yes, but my last paragraph still seems to answer the question. What you have is the pre-order of the T-Lindenbaum algebra applied on the smaller equivalence classes of the underlying Lindenbaum algebra. – Joel David Hamkins Sep 16 '11 at 23:32
# Laplace operator
In mathematics and physics, the Laplace operator or Laplacian, denoted by Δ, is a differential operator, specifically an important case of an elliptic operator, with many applications in mathematics and physics. In physics, it is used in modeling wave propagation and heat flow, and it appears in the Helmholtz equation. It is central in electrostatics, where it appears in Laplace's equation and Poisson's equation. In quantum mechanics, it represents the kinetic energy term of the Schrödinger equation. In mathematics, functions with vanishing Laplacian are called harmonic functions; the Laplacian is at the core of Hodge theory and of the results of de Rham cohomology.
## Definition
The Laplace operator is a second order differential operator in the n-dimensional Euclidean space, defined as the divergence of the gradient:
$\Delta =\nabla ^{2}=\nabla \cdot \nabla .$
Equivalently, the Laplacian is the sum of all the unmixed second partial derivatives:
$\Delta =\sum _{{i=1}}^{n}{\frac {\partial ^{2}}{\partial x_{i}^{2}}}.$
Here, it is understood that the $x_{i}$ are Cartesian coordinates on the space; the equation takes a different form in spherical coordinates and cylindrical coordinates, as shown below.
In the three-dimensional space the Laplacian is commonly written as
$\Delta ={\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}+{\frac {\partial ^{2}}{\partial z^{2}}}.$
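Numerically, this sum of unmixed second partial derivatives is what the standard five-point stencil approximates; a minimal NumPy sketch (grid and test function chosen for illustration):

```python
import numpy as np

def laplacian_2d(f, h):
    """Five-point finite-difference Laplacian of a 2D array f with
    uniform grid spacing h, evaluated on the interior points."""
    return (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2]
            - 4.0 * f[1:-1, 1:-1]) / h**2

h = 0.1
x, y = np.meshgrid(np.arange(0.0, 1.0, h), np.arange(0.0, 1.0, h), indexing="ij")
f = x**2 + y**2        # analytically, Delta f = 2 + 2 = 4
lap = laplacian_2d(f, h)
```

The stencil is exact for quadratic functions, so `lap` equals 4 (up to rounding) at every interior point.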
As we shall see later, the Laplacian can be generalized to non-Euclidean spaces, where it may be elliptic or hyperbolic. For example, in the Minkowski space the Laplacian becomes the d'Alembert operator or d'Alembertian
$\square ={\partial ^{2} \over \partial x^{2}}+{\partial ^{2} \over \partial y^{2}}+{\partial ^{2} \over \partial z^{2}}-{\frac {1}{c^{2}}}{\partial ^{2} \over \partial t^{2}}.$
The d'Alembert operator is often used to express the Klein-Gordon equation and the four-dimensional wave equation. The sign in front of the fourth term is negative, whereas it would have been positive in Euclidean space. The additional factor of c is required because space and time are usually measured in different units; a similar factor would be required if, for example, the x direction were measured in inches and the y direction in centimeters. Indeed, physicists usually work in units such that c = 1 in order to simplify the equation.
### Coordinate expressions
In three dimensions, it is common to work with the Laplacian in a variety of different coordinate systems. Given a function f, in cylindrical coordinates, one has:
$\Delta f={1 \over r}{\partial \over \partial r}\left(r{\partial f \over \partial r}\right)+{1 \over r^{2}}{\partial ^{2}f \over \partial \theta ^{2}}+{\partial ^{2}f \over \partial z^{2}}.$
In spherical coordinates:

$\Delta f={1 \over r^{2}}{\partial \over \partial r}\left(r^{2}{\partial f \over \partial r}\right)+{1 \over r^{2}\sin \theta }{\partial \over \partial \theta }\left(\sin \theta {\partial f \over \partial \theta }\right)+{1 \over r^{2}\sin ^{2}\theta }{\partial ^{2}f \over \partial \phi ^{2}}.$
The spherical coordinates Laplacian can also be written in this form:
$\Delta f={1 \over r}{\partial ^{2} \over \partial r^{2}}\left(rf\right)+{1 \over r^{2}\sin \theta }{\partial \over \partial \theta }\left(\sin \theta {\partial f \over \partial \theta }\right)+{1 \over r^{2}\sin ^{2}\theta }{\partial ^{2}f \over \partial \phi ^{2}}.$
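As a sanity check of the spherical form, one can verify symbolically that f = 1/r is harmonic away from the origin (a SymPy sketch):

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
f = 1 / r  # the Newtonian potential; expected to be harmonic for r > 0

# Spherical-coordinates Laplacian, term by term as in the formula above.
lap = (sp.diff(r**2 * sp.diff(f, r), r) / r**2
       + sp.diff(sp.sin(theta) * sp.diff(f, theta), theta) / (r**2 * sp.sin(theta))
       + sp.diff(f, phi, 2) / (r**2 * sp.sin(theta)**2))
```

Both angular terms vanish because f is radial, and the radial term gives ∂r(r² · (−1/r²)) = ∂r(−1) = 0.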
See also the article Nabla in cylindrical and spherical coordinates.
### Identities
If f and g are functions, then the Laplacian of the product is given by
$\Delta (fg)=(\Delta f)g+2(\nabla f)\cdot (\nabla g)+f(\Delta g).$
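The identity is easy to confirm symbolically for any particular pair of functions; a SymPy sketch with arbitrarily chosen f and g:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def lap(u):
    """Laplacian as the sum of unmixed second partials."""
    return sum(sp.diff(u, v, 2) for v in coords)

def grad(u):
    """Gradient as the list of first partials."""
    return [sp.diff(u, v) for v in coords]

f = sp.sin(x) * y**2
g = sp.exp(z) + x * y

lhs = lap(f * g)
rhs = lap(f) * g + 2 * sum(a * b for a, b in zip(grad(f), grad(g))) + f * lap(g)
```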
## Laplace-Beltrami operator
The Laplacian can be extended to functions defined on surfaces, or more generally, on Riemannian and pseudo-Riemannian manifolds. This more general operator goes by the name Laplace-Beltrami operator. One defines it, just as the Laplacian, as the divergence of the gradient. To be able to find a formula for this operator, one will need first to write the divergence and the gradient on a manifold.
If $g$ denotes the (pseudo)-metric tensor on the manifold, one finds that the volume form in local coordinates is given by
${\mathrm {vol}}_{n}:={\sqrt {|g|}}\;dx^{1}\wedge \ldots \wedge dx^{n}$
where the $dx^{i}$ are the 1-forms forming the dual basis to the basis vectors
$\partial _{i}:={\frac {\partial }{\partial x^{i}}}$
for the local coordinate system, and $\wedge$ is the wedge product. Here $|g|:=|\det g|$ is the absolute value of the determinant of the metric tensor. The divergence of a vector field X on the manifold can then be defined as
${\mathcal {L}}_{X}{\mathrm {vol}}_{n}=({\mbox{div}}X)\;{\mathrm {vol}}_{n}$
where ${\mathcal {L}}_{X}$ is the Lie derivative along the vector field X. In local coordinates, one obtains
${\mbox{div}}X={\frac {1}{{\sqrt {|g|}}}}\partial _{i}{\sqrt {|g|}}X^{i}$
Here (and below) we use the Einstein notation, so the above is actually a sum in i.
The gradient of a scalar function f may be defined through the inner product $\langle \cdot ,\cdot \rangle$ on the manifold, as
$\langle {\mbox{grad}}f(x),v_{x}\rangle =df(x)(v_{x})$
for all vectors $v_{x}$ in the tangent space $T_{x}M$ of the manifold at the point x. Here, df is the exterior derivative of the function f; it is a 1-form taking argument $v_{x}$. In local coordinates, one has
$\left({\mbox{grad}}f\right)^{i}=\partial ^{i}f=g^{{ij}}\partial _{j}f$
Combining these, the formula for the Laplace-Beltrami operator applied to a scalar function f is, in local coordinates
$\Delta f={\mbox{div grad}}\;f={\frac {1}{{\sqrt {|g|}}}}\partial _{i}{\sqrt {|g|}}\partial ^{i}f$.
Here, $g^{{ij}}$ are the components of the inverse of the metric tensor $g$, so that $g^{{ij}}g_{{jk}}=\delta _{k}^{i}$ with $\delta _{k}^{i}$ the Kronecker delta.
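To see the formula at work, take the round metric g = diag(1, sin²θ) on the unit 2-sphere, restricting to the chart 0 < θ < π so that √|g| = sin θ. Then cos θ, which is proportional to the spherical harmonic Y₁⁰, should be an eigenfunction with eigenvalue −l(l+1) = −2. A SymPy sketch:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)
coords = (theta, phi)

g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])  # round metric on the unit 2-sphere
ginv = g.inv()
sqrt_g = sp.sin(theta)  # sqrt|det g|, taking 0 < theta < pi

def laplace_beltrami(f):
    """(1/sqrt|g|) d_i ( sqrt|g| g^{ij} d_j f ), as in the formula above."""
    return sum(sp.diff(sqrt_g * sum(ginv[i, j] * sp.diff(f, coords[j])
                                    for j in range(2)), coords[i])
               for i in range(2)) / sqrt_g

f = sp.cos(theta)
result = laplace_beltrami(f)  # expected: -2*cos(theta)
```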
Note that the above definition is, by construction, valid only for scalar functions $f:M\rightarrow {\mathbb {R}}$. One may want to extend the Laplacian even further, to differential forms; for this, one must turn to the Laplace-de Rham operator, defined in the next section.
One may show that the Laplace-Beltrami operator reduces to the ordinary Laplacian in Euclidean space by noting that it can be re-written using the chain rule as
$\Delta f=\partial _{i}\partial ^{i}f+(\partial ^{i}f)\partial _{i}\ln {\sqrt {|g|}}.$
When $|g|=1$, such as in the case of Euclidean space, one then easily obtains
$\Delta f=\partial _{i}\partial ^{i}f$
which is the ordinary Laplacian. Using the Minkowski metric with signature (+++-), one regains the D'Alembertian given previously. Note also that by using the metric tensor for spherical and cylindrical coordinates, one can similarly regain the expressions for the Laplacian in spherical and cylindrical coordinates. The Laplace-Beltrami operator is handy not just in curved space, but also in ordinary flat space endowed with a non-linear coordinate system.
Note that the exterior derivative d and -div are adjoint:
$\int _{M}df(X)\;{\mathrm {vol}}_{n}=-\int _{M}f{\mbox{div}}X\;{\mathrm {vol}}_{n}$
where the last equality is an application of Stokes theorem. Note also, the Laplace-Beltrami operator is symmetric:
$\int _{M}f\Delta h\;{\mathrm {vol}}_{n}=\int _{M}\langle {\mbox{grad}}f,{\mbox{grad}}h\rangle \;{\mathrm {vol}}_{n}=\int _{M}h\Delta f\;{\mathrm {vol}}_{n}$
for functions f and h.
## Laplace-de Rham operator
In the general case of differential geometry, one defines the Laplace-de Rham operator as the generalization of the Laplacian. It is a differential operator on the exterior algebra of a differentiable manifold. On a Riemannian manifold it is an elliptic operator, while on a pseudo-Riemannian manifold it is hyperbolic. The Laplace-de Rham operator is defined by
$\Delta ={\mathrm {d}}\delta +\delta {\mathrm {d}}=({\mathrm {d}}+\delta )^{2},\;$
where d is the exterior derivative or differential and δ is the codifferential. When acting on scalar functions, the codifferential may be defined as δ = −∗d∗, where ∗ is the Hodge star; more generally, the codifferential may include a sign that depends on the order of the k-form being acted on.
One may prove that the Laplace-de Rham operator is equivalent to the previous definition of the Laplace-Beltrami operator when acting on a scalar function f. Notice that the Laplace-de Rham operator is actually minus the Laplace-Beltrami operator; this minus sign follows from the conventional definition of the properties of the codifferential. Unfortunately, Δ is used to denote both, which can sometimes be a source of confusion.
### Properties
Given scalar functions f and h, and a real number a, the Laplace-de Rham operator has the following properties:
1. $\Delta (af+h)=a\Delta f+\Delta h\!$
2. $\Delta (fh)=f\Delta h+2\partial _{i}f\partial ^{i}h+h\Delta f$
## Introduction
Human transthyretin (TTR) is a 55 kDa homotetrameric protein that transports thyroxine (T4) and retinol-binding protein (RBP) in the serum and cerebrospinal fluid (Fig. 1a–c). Wild-type (WT) TTR is intrinsically amyloidogenic, and causes amyloid fibril formation in elderly individuals, resulting in senile systemic amyloidosis (SSA)1,2. A large number ( > 100) of TTR mutants have been identified (http://amyloidosismutations.com). The majority of these are implicated in familial amyloid polyneuropathy/myocardiopathy (FAP/FAC) and several other forms of amyloidosis3,4,5. Of special interest are the T119M and S52P mutations. The T119M mutation is non-amyloidogenic, and heterozygous individuals carrying the T119M mutation and an amyloidogenic mutation, such as V30M, either remain asymptomatic of FAP or present a more benign form of the disease6,7. On the other hand, individuals carrying the S52P mutation develop aggressive and early-onset fatal amyloidosis8,9. Currently, only one drug—tafamidis (trade name Vyndaqel; Pfizer)—has been granted market authorisation for the treatment of TTR-FAP in adult patients having stage 1 symptomatic polyneuropathy10.
Considerable effort has been invested in seeking to identify the factors that underlie TTR amyloid formation including X-ray diffraction11,12,13, NMR14,15,16, solution scattering17, and mass spectrometry (MS)18. However, a molecular level understanding of the widely different behaviours of various TTR mutants has remained elusive11,12. In this paper, native MS, neutron crystallography, and computer simulations have been used to investigate the effects of the T119M and S52P mutations on TTR stability and the molecular determinants of amyloid formation. It is found that the S52P mutation destabilises TTR by altering the stability of the CD loop in the protein monomer. In contrast, the T119M mutation stabilises the dimer–dimer interface as well as TTR’s tertiary structure. Furthermore, it is shown how the stability of TTR tetramers is coupled to those of the monomers and dimers. Finally, it is suggested that tafamidis stabilises native tetrameric TTR in a similar way to that occurring for the T119M mutant in terms of thermodynamic, kinetic and structural features. A molecular mechanism is proposed by which the vastly different behaviours of the various TTR mutants can be understood.
## Results
### Kinetics of TTR tetramer dissociation
Tetramer dissociation is thought to be the rate-limiting step in amyloid formation19,20,21,22. MS was used to follow the subunit exchange dynamics of TTR over several days so that the effect of the two mutations, and that of tafamidis, on the rate of dissociation of the tetramer species could be compared. These experiments relied on the use of hydrogenated (H) and deuterated (D) TTR to track otherwise identical protein chains. H- and D-TTR were mixed in an equimolar ratio and monitored over time to observe the rate of dissociation of the homo-tetramers (4H, 4D) and the rate of formation of the hetero-tetramers (2H2D, 1H3D, 3H1D; Fig. 1d and Supplementary Figure 1). The two 2H2D assemblies that can be formed may have different association constants, given that one requires only association of homo-dimers while the other requires the formation of hetero-dimers before assembly into tetramers. However, these two forms cannot be differentiated by MS, and are necessarily considered together. Since it has been shown that the kinetics of TTR subunit exchange is susceptible to isotope effects18, a deuterated TTR species [(D)S52P] was used as a reference to allow a meaningful comparison of the TTR variants. The change in relative abundance of each tetrameric species over time, and the relative association/dissociation rate constants are shown in Fig. 1e–h.
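For orientation (this back-of-envelope calculation is ours, not the paper's): if subunit scrambling of an equimolar H:D pool ran to completion with fully independent mixing, the five tetramer species would follow a binomial 1:4:6:4:1 distribution.

```python
from math import comb

# Expected tetramer fractions at complete, independent scrambling of a 1:1 H:D mix.
equilibrium = {f"{4 - k}H{k}D": comb(4, k) / 2**4 for k in range(5)}
# The mixed 2H2D species dominates, at 6/16 = 0.375 of all tetramers.
```

Deviations from this endpoint over time are what make the homo- versus hetero-tetramer abundances informative about dissociation rates.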
The dissociation rate of homo-tetramers in the (H)WT-(D)S52P experiment (Fig. 1e) was 0.20 ± 0.03 day−1. Among the hetero-tetramer species, those containing identical dimers (2H2D; two hydrogenated and two deuterated TTR monomers) were formed at a higher rate (0.33 ± 0.04 day−1) than those formed by two different dimers (3H1D and 1H3D; 0.14 ± 0.03 day−1). The observed rate of dissociation of homo-tetramers in the (H)S52P-(D)S52P experiment (Fig. 1f) was about three times larger than in the (H)WT-(D)S52P (0.59 ± 0.03 day−1). In this case, however, the 3H1D/1H3D species were formed faster than the 2H2D ones: 0.83 ± 0.06 day−1 and 0.45 ± 0.01 day−1, respectively. In the (H)T119M-(D)S52P experiment (Fig. 1g), almost no tetramer dissociation was observed over the course of eleven days—a timescale over which essentially all of the TTR would be recycled in a physiological context (the biological half-life of TTR is 1–2 days23). A final experiment was carried out with (H)S52P and (D)S52P in the presence of tafamidis at a molar ratio of 1:1 (Fig. 1h). In this case, tetramer dissociation was almost eliminated by the presence of the drug. This also demonstrated that one molecule of tafamidis per tetramer was sufficient for effective stabilisation.
Overall, the MS experiments revealed that the S52P mutation increases the rate of dissociation of TTR tetramers by about a factor of three as compared with the WT (reduced tetramer stability was also observed as a function of pH—see Supplementary Figure 2a-b). In strong contrast, both the T119M mutation and tafamidis effectively abolished tetramer dissociation.
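Assuming simple first-order kinetics, the measured dissociation rate constants translate directly into homo-tetramer half-lives (our conversion, for orientation):

```python
import math

def half_life_days(k_per_day):
    """Half-life implied by a first-order dissociation rate constant."""
    return math.log(2) / k_per_day

wt_t_half = half_life_days(0.20)    # WT homo-tetramer: ~3.5 days
s52p_t_half = half_life_days(0.59)  # S52P homo-tetramer: ~1.2 days
```

Set against TTR's 1-2 day biological half-life, most WT tetramers would be recycled before dissociating, whereas a substantial fraction of S52P tetramers would not.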
### Effect of mutations on TTR equilibria
Free energy (G) calculations were used to study the effects of the T119M and S52P mutations on the thermodynamic stability of TTR tetramers, dimers and monomers. These calculations were based on MD simulations24 and the thermodynamic cycle is shown in Fig. 2. Control over the potential energy function used allowed specific residues to be changed into other ones, so that the free energy change associated with the modifications could be calculated for the protein in its tetrameric (ΔGtetr), dimeric (ΔGdime) or monomeric (ΔGmono) states. Differences between these ΔG values provide an estimate of the stability of the mutant tetramers, dimers and monomers relative to the WT: ΔΔGtetr,, ΔΔGdime, and ΔΔGmono respectively (Fig. 2).
The calculations suggested that the S52P mutation would not have a significant effect on the dissociation of tetramers into dimers (ΔΔGtetr) or of dimers into monomers (ΔΔGdime). However, a large and statistically significant ΔΔGmono (+2.1 kcal mol−1) indicated that the introduction of the S52P mutation had a destabilising effect on the monomer with respect to the WT-TTR, corresponding to a shift towards unfolded TTR by a factor of ~34. The T119M mutation had no effect on the stability of the dimers. However, in contrast to the case for S52P, a large stabilising effect on the tetramers was observed (ΔΔGtetr = −8.9 kcal mol−1; equilibrium constant fold-change of ~3∙10⁶). In addition, the T119M monomers were calculated to be more stable than WT monomers by 1 kcal mol−1 (a ~5-fold increase in stability), a small but statistically significant effect.
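The equilibrium-constant fold-changes quoted above follow from the calculated ΔΔG values via K_mut/K_WT = exp(ΔΔG/RT); a quick conversion, assuming T = 298 K:

```python
import math

R = 1.987e-3  # gas constant, kcal mol^-1 K^-1
T = 298.0     # assumed temperature, K

def fold_change(ddg_kcal_per_mol):
    """Fold-change in an equilibrium constant implied by a free-energy shift."""
    return math.exp(ddg_kcal_per_mol / (R * T))

# 2.1 kcal/mol  (S52P monomer destabilisation)   -> ~34-fold
# 8.9 kcal/mol  (T119M tetramer stabilisation)   -> ~3e6-fold
# 12.4 kcal/mol (T119M overall stabilisation)    -> ~1e9-fold
```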
When considering the overall process of tetramer dissociation and unfolding, the S52P mutation was estimated to shift the equilibrium toward the unfolded state by 8.1 kcal mol−1 (a change of ~8∙10⁵), whereas the T119M mutation was estimated to have the opposite effect of stabilising the tetramers by 12.4 kcal mol−1 (a change of ~10⁹). Given that the two mutation sites are not in close contact, it is reasonable to assume additivity for their associated energetic effects and on this basis the data suggest that in a double S52P/T119M-TTR mutant, the T119M mutation would be able to balance the destabilising effect of the S52P mutation; this is in agreement with the observed in vivo protective effect of the T119M mutation25,26.
In summary, the free energy calculations show that the S52P mutation results in a TTR protein having a less stable fold. However, this appears to have little direct impact on tetramer and dimer dissociation. In contrast, the T119M mutation directly affects tetramer dissociation to dimers by stabilising the tetramer species and also has the effect of stabilising the protein fold.
### Neutron diffraction reveals the basis of TTR instability
In order to study the intramolecular interactions of S52P- and T119M-TTR in atomic detail, room-temperature neutron crystallography studies were carried out. The crystal structures of the S52P mutant, the T119M mutant, and the S52P mutant in complex with tafamidis, were determined at resolutions of 1.80 Å, 1.85 Å, and 2.00 Å, respectively. Most of the interaction network at the monomer–monomer and dimer–dimer interfaces (Supplementary Figure 3), as well as the energetics of bridging water molecules (Supplementary Figure 4), were found to be conserved across the WT and mutant structures. However, crucial differences are identified as the source of the different kinetic and thermodynamic stabilities observed.
In the T119M structure, the longer Met119 side chain extends across the thyroxine-binding channel into a hydrophobic pocket surrounded by residues Leu17, Ala19, Leu110 and Val121 (Supplementary Figure 5). These interactions, as also noted in previous studies, enhance the association of two dimers and hence the overall stability of the tetramer27.
The amino acid at position 52 plays a crucial role. In the WT and the T119M mutant structures, the amide-D and the side-chain hydroxyl of the Ser52 residue form hydrogen bonds with Oγ of Ser50, resulting in a stable CD loop (Fig. 3a). The highly amyloidogenic S52P mutant has a proline residue at position 52 in place of the serine present in the WT. The lack of a main-chain amide-H and a side-chain hydroxyl group in the proline residue prevents the formation of two hydrogen bonds and causes a loss of stability of the protein (Fig. 3b). The distance between amide-N of residue 52 and Cα of Ser50 in the WT and T119M structures is 4.2 Å (Fig. 3a). The same distance is 0.4 Å wider in the S52P mutant, implying a looser CD loop (Fig. 3b). This destabilises the β-turn where the residue is located as well as the associated C and D strands.
In the S52P/tafamidis complex, the CD loop is not stabilised by tafamidis binding and the distance between Ser52-N and Ser50-Cα remains as 4.6 Å. In contrast to the findings of Bulawa et al.28, no water molecules were found bridging the protein and tafamidis in this complex. Instead, tafamidis induced a 180° flip to the Thr119 side chain (Fig. 3c), with the hydroxyl moiety now forming a hydrogen bond to a water molecule that bridges the two TTR dimers via Asp18; this was observed in both binding sites. Interestingly, this water molecule fills the same space that is occupied by the Met119 side chain in the non-pathogenic T119M mutant (Fig. 3d). Grand canonical Monte Carlo calculations29 suggested that the stabilisation of this water molecule upon tafamidis binding may substantially contribute toward the affinity of tafamidis, by about −2.8 kcal mol−1 (Supplementary Figure 4).
Collectively, the neutron structures of T119M-TTR, and S52P-TTR in apo and holo forms, have revealed that most interactions between the β-strands forming the core of the protein are conserved. However, the β-turn between the C and D strands is loosened in S52P-TTR, suggesting that this location is crucial to the reduced stability of this mutant. The appearance of a bridging water molecule in the S52P/tafamidis complex results in a conformation that resembles that of Met119 in T119M-TTR in this region of the protein, and may provide insight to the mechanism by which the drug stabilises the TTR tetramer.
### Coupling between quaternary and tertiary structure stability
High-temperature MD simulations were used to study the effect of TTR quaternary structure on the kinetics of TTR unfolding. Simulations were carried out at 598 K for the tetramers, dimers and monomers of WT-TTR with two modern force fields (Amber99sb*-ILDNP30,31 and Charmm3632). As a measure of unfolding, the fraction of native contacts Q was employed33, where 1 indicates a folded protein and 0 an unfolded protein. A similar metric was used to define the fraction of monomer–monomer and dimer–dimer interface contacts retained during the high-temperature simulations (Fig. 3e; Charmm36 results in Supplementary Figure 6). At the temperature of 298 K, all monomers and interfaces were stable with Q close to one for simulations of up to 1 μs (Supplementary Figure 7).
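The fraction-of-native-contacts metric can be written as a soft switching function over the native contact distances, in the common form of ref. 33 (a sketch with illustrative parameters, not necessarily the authors' exact implementation):

```python
import numpy as np

def fraction_native_contacts(d, d0, beta=5.0, lam=1.8):
    """Soft fraction of native contacts Q (distances in Angstrom):
    d  -- current distances of the native contact pairs
    d0 -- the same distances in the native reference structure
    Q is ~1 for a folded chain and ~0 once the contacts are lost."""
    d, d0 = np.asarray(d), np.asarray(d0)
    return float(np.mean(1.0 / (1.0 + np.exp(beta * (d - lam * d0)))))

native = np.full(50, 5.0)  # toy native contact distances, Angstrom
q_folded = fraction_native_contacts(native, native)
q_unfolded = fraction_native_contacts(3.0 * native, native)
```

The same function, restricted to interface contact pairs, gives the interface metrics used alongside Q.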
In Fig. 3f, the degree of unfolding of TTR monomers over the course of 100 ns simulations is shown for simulations of the tetramer, dimer and monomer in solution. The monomer in solution was found to unfold faster than the monomers that were part of a dimer, which in turn unfolded faster than the monomers that were part of the tetramer. Thus, it appeared that the quaternary structure of TTR had an effect on the stabilisation of the individual monomeric chains. It is conceivable that the opposite would hold true as well: i.e., that highly stable monomeric chains (as in T119M) may enhance the stability of the tetramer, whereas unstable monomeric chains (as in S52P) are likely to have a detrimental effect on the stability of the tetramers. The same effect was observed for the stability of the monomer–monomer interface. This interface was more resistant to disruption than the dimer–dimer interface (Fig. 3e). However, when simulating the TTR dimer, the same interface became much more easily disrupted within the timeframe of the simulations (Fig. 3g).
Partial-least squares (PLS) functional mode analysis (FMA)34,35 was used to identify regions of the TTR protein that contributed most to its unfolding while being part of a tetrameric unit. In Fig. 3h, a WT-TTR monomer is shown and colour-coded according to the root mean square fluctuations of the maximally correlated mode. The figure shows how the largest loss of native contacts observed was due to large motions in the C and D strands (Charmm36 results in Supplementary Figure 8). Similar observations were also made when calculating the fraction of native contacts by strand (Supplementary Figure 9).
These MD simulations showed that TTR tertiary structure is stabilised by the formation of tetrameric and dimeric units. Similarly, the monomer–monomer interface is stabilised by the association of TTR dimers into tetramers. Taken together, the observations explain why tetramer dissociation is the rate-limiting step in the formation of amyloid. In addition, in agreement with previous NMR data16,36, PLS-FMA identify the C and D strands as the regions of TTR tetramers most prone to unfolding.
## Discussion
Based on the data presented, a molecular mechanism that explains the different behaviours of the T119M and S52P mutants is proposed (Fig. 4). It is suggested that partial unfolding events originating at the CD strands lead to a parallel equilibrium of folded and partially unfolded TTR states (tetramers, dimers and monomers) that allows the effects of the S52P and T119M mutations to be rationalised in the context of the current and previous experimental observations13,16,36,37. Given that a number of other pathogenic mutations are located on the C and D strands (e.g. L55P, S50R, E54G), the mechanism proposed here may apply to other mutations in this region of the protein8,13,38,39.
S52P-TTR tetramers were observed to dissociate faster than the WT in solution. However, no thermodynamic or structural cause for a direct destabilisation of S52P tetramers could be established from these observations or from those published previously. In contrast, the free energy calculations described here provide evidence that the tertiary structure of S52P-TTR is thermodynamically less stable than that of the WT. This, together with structural (neutron diffraction) and dynamical (simulation) evidence of a loose CD loop, suggests that unfolding events at the CD strands are likely. As it is expected that a partially unfolded TTR would proceed towards dissociated species and amyloid formation faster than the corresponding fully folded protein (Fig. 4a), this parallel equilibrium can explain the high amount of amyloid fibrillation of S52P-TTR (Supplementary Figure 2c) as well as the faster tetramer dissociation rates observed.
T119M-TTR tetramers were found to be highly resistant to dissociation. The free energy calculations estimated a stabilising effect of 8.9 kcal mol−1 on the tetramer, which corresponds to a change in equilibrium constant of ~3∙10⁶. This effect can be understood in terms of the increased contact area between the two dimers, which stabilises the dimer–dimer interface (Supplementary Figure 5). In addition, the free energy calculation results suggest a tertiary fold in T119M-TTR that is more stable than in the WT by about 1 kcal mol−1. As illustrated in Fig. 4b, tetramer stabilisation directly reduces the population of amyloidogenic building blocks. Furthermore, a stable tertiary structure can protect against the unfolding of monomeric TTR. Finally, the coupling between TTR quaternary structure and the stability of the TTR fold suggested by the simulations indicates that the highly stable T119M-TTR tetramer reduces the probability of the CD strands unfolding (Fig. 4b).
Stabilisation of the native tetrameric structure by small molecule binding to the T4 binding sites of TTR is a well-cited rationale for the inhibition of TTR amyloidogenesis40. MS kinetic experiments have shown how tafamidis, like the T119M mutation, strongly inhibits tetramer dissociation. The binding affinity of tafamidis to TTR quantifies the stabilising effect of the drug on TTR tetramers, and was measured to be ~12 kcal mol−1 (Kd1 = 2 nM)28. The stabilising effect of T119M on the tetramer (with respect to the dimer) was found to be about 9 kcal mol−1. However, considering the further gains in TTR fold stability conferred by the mutation (ΔΔGT119M (mono)), the overall thermodynamic effect is very similar to that of the tafamidis-S52P complex, with ΔΔGT119M (unf) ≈ −12 kcal mol−1. Tafamidis therefore stabilises native TTR thermodynamically and kinetically in a manner that is comparable to T119M. However, it is difficult to predict the extent to which the two separate effects of tetramer and fold stabilisation may impact on the in vivo protection against amyloidogenesis. Structurally, the binding of tafamidis results in the reorientation of Thr119 and the emergence of a water molecule in the binding pocket. This water molecule is located in the same position as the side chain of Met119 in the T119M mutant.
The results presented also suggest that TTR tetramers have a stabilising effect on the monomer–monomer interface, which becomes weakened when TTR is in its dimeric form (Fig. 3g). Thus, even though the monomer–monomer interface is more stable than the dimer–dimer interface16,41 when the tetramer is intact, the loss of native quaternary structure can destabilise the monomer–monomer interface such that TTR dimers are able to quickly proceed to further dissociation into monomers (in agreement with the data described here and elsewhere22,42). This observation further explains why tetramer dissociation is the rate-limiting step in the formation of amyloid. It is noted that dissociation might not be the only factor causing the formation of TTR fibrils (e.g. seeding with ex vivo fibrils has been found to promote the formation of fibrils in vitro) but it is a necessary one43.
Finally, a number of studies44,45,46,47 have identified a TTR fragment (residues 49–127) in ex vivo amyloid fibrils. This fragment is formed by proteolytic cleavage of the peptide bond between Lys48 and Thr49, with the S52P mutant being particularly susceptible to it45,47. Within the mechanism set out in Fig. 4, the higher rate of proteolytic cleavage in S52P-TTR can be explained by an increased accessibility to Lys48 due to a higher propensity to unfold at the CD strands. Cleavage of the unfolded structure causes an irreversible transition to a partially unfolded state, thus preventing refolding and ultimately enhancing the rate of fibril formation.
In summary, the results described here provide novel structural and dynamical insights into the opposing effects of the S52P and T119M mutations in TTR, as well as the effects of tafamidis binding on the stability of TTR. The results provide molecular level detail of direct relevance to TTR amyloidogenicity, and have provided a framework for further investigation into the effects of residue mutation on TTR’s states and their equilibria, and for the development of novel targeted therapies for FAP and SSA.
## Methods
### Protein preparation and crystallization
The cDNA corresponding to the gene coding for the 127 amino acids of human transthyretin (TTR) protein was cloned into a pET-M11 vector encoding an N-terminal His6 tag and a TEV cleavage site (EMBL Protein and Purification Facility, Germany) and then expressed in E. coli BL21 (DE3) cells (Invitrogen). The S52P and T119M mutants were generated using the QuikChange Lightning Multi Site-Directed Mutagenesis kit (Stratagene). The WT-TTR cDNA sequence and the sequences of primers used for the S52P and T119M mutations are shown in Supplementary Table 1. Deuterated protein was produced in the Deuteration Laboratory (D-Lab) platform within ILL's Life Sciences Group48. Cells were adapted to perdeuterated Enfors minimal medium and grown in a fed-batch fermenter culture at 30 °C using d8-glycerol (99% deuterium; Euriso-top) as the only carbon source. H-TTRs and D-TTRs were purified in an identical manner. Cell paste was resuspended homogeneously in lysis buffer (20 mM Tris pH 8, 250 mM NaCl, 3 mM imidazole) in the presence of EDTA-free protease inhibitor cocktail (Roche). The cells were lysed by ultrasonication on ice at 50% amplitude with 25 s pulses. The lysate was cleared by centrifugation at 18,000 rpm at 4 °C for 30 min. The supernatant containing crude protein extracts was recovered and purified using benchtop gravity-flow chromatography. Nickel-nitrilotriacetic acid (Ni2+-NTA) resin was pre-washed with lysis buffer and incubated with the protein at 4 °C for at least an hour to increase binding efficiency. The protein/resin mix was then loaded into a 10 ml disposable column. The column was washed three times with wash buffer (20 mM Tris pH 8, 15 mM imidazole, NaCl at 500 mM, 1 M, and 250 mM, respectively) before the protein was eluted (elution buffer: 20 mM Tris pH 8, 250 mM NaCl, 250 mM imidazole).
Fractions containing protein were then pooled together for TEV protease treatment to remove the His6-tag and dialysed into gel filtration buffer (10 mM Tris pH 7.5, 50 mM NaCl, 1 mM DTT) overnight at 4 °C. Protein was separated from the cleaved poly-histidine tail using the same gravity-flow column described above. Flowthrough from the column was concentrated, filtered (0.22 μm membrane pore size) and loaded onto a Superdex 75 HiLoad 16/600 gel filtration column (GE Healthcare) running at 1 ml/min at room temperature. Peak fractions were pooled and concentrated using Amicon Ultra centrifugal filter units (Millipore) and stored at −20 °C. Analysis was carried out on a 12% Tris-Tricine gel after each step of purification to monitor the purity of the protein. The details of the expression and purification have been described previously18,49. While the deuterated TTR was expressed under conditions where only deuterium atoms were present, protiated (1H-based) solutions were used during purification. Prior to protein crystallization, the purified protein was buffer-exchanged into a deuterated solution so that the labile protium atoms acquired during the purification steps involving hydrogenated solutions were replaced by deuterium. Tafamidis (HPLC purity: ≥ 98%; Carbosynth Limited, U.K.) was dissolved in 100% deuterated dimethyl sulfoxide (D-DMSO) for crystallization. All crystals were grown by sitting-drop vapour diffusion at 18 °C using deuterated protein and deuterated solutions. The TTR S52P mutant crystal (~0.60 mm3) was grown in 1.9 M malonate pD 6.4 with a protein concentration of 25 mg/ml (drop volume 50 µl; protein-to-buffer ratio of 7:5), while the T119M mutant crystal (~0.80 mm3) was grown in 1.9 M malonate pD 5.9 with a 40 mg/ml protein concentration (drop volume 50 µl; protein-to-buffer ratio of 1:1).
For the crystal of the S52P/tafamidis complex (~0.11 mm3), a protein concentration of 20 mg/ml and tafamidis concentration of 678 μM in 10% D-DMSO (protein-to-ligand molar ratio of 1:2) were used (drop volume 40 µl; protein-to-buffer ratio of 1:1). For neutron data collection, the crystals were mounted in quartz capillaries and surrounded by a small amount of mother liquor from the crystallization well. The capillaries were sealed tightly using wax to eliminate the diffusion of gas and atmospheric water.
### Neutron data collection and processing
Details of the neutron data collection for WT-TTR have been reported by Haupt et al49. Neutron quasi-Laue diffraction data from the crystals of the S52P mutant and the S52P/tafamidis complex were collected at room temperature using the LADI-III diffractometer50 at the Institut Laue-Langevin (ILL), Grenoble. For the S52P mutant, a neutron wavelength range (Δλ/λ = 30%) of 2.9–3.9 Å was used, with data extending to 1.8 Å resolution. As is typical for a Laue experiment, the crystal was held stationary at different φ (vertical rotation axis) settings for each exposure. A total of 26 images (exposure time of 12 h per image) were collected from two different crystal orientations. For the S52P/tafamidis complex, a neutron wavelength range of 2.7–3.6 Å was used with data extending to 2.0 Å resolution. In total, 13 images were collected (exposure time of 8 h per image) from three different crystal orientations. The neutron diffraction images were indexed and integrated using the LAUE suite program LAUEGEN51. The program LSCALE52 was used to determine the wavelength-normalisation curve using the intensity of symmetry-equivalent reflections measured at different wavelengths and to apply wavelength-normalisation calculations to the observed data. The data were then merged in SCALA53. Relevant data collection statistics are summarised in Supplementary Table 2.
Neutron monochromatic diffraction data for the T119M mutant crystal were collected at room temperature using the instrument BIODIFF54 operated by the Forschungsreaktor München II (FRM II) and the Jülich Centre for Neutron Science (JCNS) at Garching. For data collection, the wavelength was set at 2.67 Å (Δλ/λ = 2.9%). A total of 202 frames were recorded with a rotation range of 0.35° and an exposure time of 53 min per frame. The diffraction data were indexed, integrated and scaled using HKL200055 to a resolution of 1.85 Å. The output file of HKL2000 in SCA format was converted into MTZ format by using scalepack2mtz program in the CCP4 suite56. Relevant data collection statistics are summarised in Supplementary Table 2.
### X-ray data collection and processing
X-ray diffraction data for the crystals of all three variants were recorded on beamline ID30B57 at the European Synchrotron Radiation Facility (ESRF), Grenoble, using a heavily attenuated X-ray beam (1.96% for S52P; 19.17% for T119M; 9.83% for S52P/tafamidis) of wavelength 0.9763 Å. For S52P and the S52P/tafamidis complex, the same crystals used for data collection at LADI-III were used, whereas for T119M a crystal from the same crystallization drop as the one used at BIODIFF was used. Data were recorded at room temperature on capillary-mounted crystals. Data were processed to the same maximum resolution as the corresponding neutron data with XDS58, scaled and merged with SCALA53, and converted to structure factors using TRUNCATE in the CCP4 suite56. Relevant data collection statistics are summarised in Supplementary Table 2.
### Joint neutron and X-ray structure refinement
The PDB structure 5CLX (refined X-ray model of TTR S52P; data collected at 100 K) was used as the starting model for joint X-ray and neutron refinement against the S52P and S52P/tafamidis datasets, and 5CM1 (refined X-ray model of TTR T119M; data collected at 100 K) was used for the T119M datasets. The phenix.refine program59 in the PHENIX package60 was used for refinement. The preparation of the starting model, the refinement settings and workflow, and the modelling of solvent molecules were as detailed in Haupt et al.41. Using the ReadySet option in PHENIX60, exchangeable hydrogen and deuterium atoms were placed at appropriate sites of the protein models, with deuterium atoms elsewhere. For positions where both hydrogen and deuterium atoms were modelled, the occupancies of both were initially set to 0.5 and then refined, with their total occupancy constrained to 1. D2O molecules were added using ReadySet based on the positive neutron scattering length density in Fo–Fc maps. Coot61 was used for model modifications, such as addition of solvent molecules and rotamer- and torsion-angle adjustments, according to positive and negative nuclear scattering length density in both 2Fo–Fc and Fo–Fc maps. The final refinement statistics are summarised in Supplementary Table 3.
### Monitoring subunit composition of tetrameric TTR by native MS
The details of the instrumental arrangements and sample preparation protocols have been described previously18. MS analyses were carried out on a quadrupole time-of-flight mass spectrometer (Q-TOF Ultima, Waters Corporation, Manchester, U.K.) that was modified for the detection of high masses62,63. As established previously18, experiments carried out at 4 °C under static (non-shaking) conditions provided well-resolved MS spectra, hence these conditions were used for the work described here. Prior to native MS analysis, proteins were buffer-exchanged into 250 mM ammonium acetate pH 7. All the labile sites (i.e. N- and O-bound deuterium atoms) were allowed to exchange from D to H. The mass difference between (H)TTR and (D)TTR arose from carbon-bound deuterium atoms that were incorporated into the amino acid chain during synthesis and were non-exchangeable. Subunit exchange was initiated by mixing a solution of an unlabelled protein variant with a solution of a deuterium-labelled protein variant at a 1:1 molar ratio. A concentration of 3 μM for each protein tetramer solution was used for all experiments. The relative abundance of the tetramers was calculated from the peak areas of the 13+ to 15+ charge states and expressed as a percentage of the total area of the peaks assigned to tetrameric TTR. For the experiment mixing HS52P and DS52P in the presence of tafamidis, tafamidis was in 0.5% DMSO at a TTR:tafamidis molar ratio of 1:1. In this case, the relative abundance was calculated from the peak areas of the 12+ to 14+ charge states. The presence of DMSO resulted in broader peaks in the MS spectra and less charge on the protein particles. The MS data were fitted by Bayesian regression using a first-order kinetics model.
### Native MS data fitting and parameter estimation
Bayesian regression of the native MS data was performed in python using the PyMC3 library64. The following one-phase exponential decay model was used:
$$y(t) = c + (y_0 - c)e^{ - kt}$$
(1)
where t is the time (in minutes), y(t) is the relative abundance of each species (in percent), y0 is the abundance at t = 0, c is the abundance at t = ∞, and k is the rate constant (in inverse minutes); c < y0 for dissociating species, and c > y0 for associating species. For y0, a normally distributed prior was used, with a mean of 50% and standard deviation of 15% for dissociation, and with a mean of 0% and standard deviation of 5% for association. For c, a uniform prior between 0% and y0 was used for dissociation, and between y0 and 100% for association. For k, a half-normal prior with mean of 0 min−1 and standard deviation of 10 min−1 was used. White noise was modelled with a half-normal prior with zero mean and standard deviation of 10%. The posterior estimate of the parameters was obtained by drawing 10,000 Markov chain Monte Carlo (MCMC) samples with the No-U-Turn Sampler (NUTS) algorithm. Five thousand tuning steps were carried out and discarded before collecting the posterior samples used for the estimate. The maximum a posteriori (MAP) estimate of the parameter space was passed to the sampler as the starting value. The mean and standard deviation of the rate constants (k) from their posterior distributions are reported.
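As a sanity check on the model of Eq. (1), the rate constant can also be recovered by a plain least-squares linearisation. The NumPy sketch below uses synthetic data with made-up parameter values (it is an illustration only, not the PyMC3 Bayesian fit used in this work):

```python
import numpy as np

rng = np.random.default_rng(0)

def decay(t, y0, c, k):
    """One-phase exponential decay, Eq. (1): y(t) = c + (y0 - c) exp(-k t)."""
    return c + (y0 - c) * np.exp(-k * t)

# Synthetic "relative abundance" data for a dissociating species
t = np.linspace(0, 100, 50)                    # minutes
y0_true, c_true, k_true = 50.0, 10.0, 0.02     # %, %, 1/min
y = decay(t, y0_true, c_true, k_true) + rng.normal(0.0, 0.2, t.size)

# With y0 and c treated as known, ln((y - c)/(y0 - c)) = -k t,
# so an ordinary linear fit recovers the rate constant k
slope, _ = np.polyfit(t, np.log((y - c_true) / (y0_true - c_true)), 1)
print(round(-slope, 3))   # close to k_true = 0.02
```

A full Bayesian treatment as described above additionally yields posterior uncertainties on y0, c and k rather than point estimates.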
### High-temperature molecular dynamics simulations
Molecular dynamics (MD) simulations were performed with Gromacs 201665. The starting structure for the simulations was the WT (PDB-ID 4PVM) variant of TTR protonated at pH 7 with pdb2gmx. The dimeric and monomeric models were obtained from the same neutron crystal structure: for the dimers, chains A and B were used, and for the monomers chain A was used. The protein was modelled with the Amber99SB*-ILDNP30,31 and the Charmm36 (Nov. 2016)32 force field, and water molecules with the TIP3P model66. The protein was solvated in a dodecahedral box with periodic boundary conditions and a minimum distance between the solute and the box of 12 Å. Sodium and chloride ions were added to neutralise the systems at the concentration of 0.15 M.
Energy minimisation was performed using a steepest descent algorithm with a force tolerance of 10 kJ mol−1 nm−1 for a maximum of 10,000 steps. 1 ns NVT and then NPT equilibrating simulations were performed with all solute heavy atoms restrained with a force constant of 1000 kJ mol−1 nm−2. In these simulations, the temperature was coupled with the stochastic v-rescale thermostat of Bussi et al.67 at the target temperature of 298 K; the pressure was controlled with the Berendsen weak coupling algorithm68 at a target pressure of 1 bar. A leap-frog integrator was used with a time step of 4 fs, and virtual sites69 were used for all solute hydrogens. All bonds were constrained with the P-LINCS algorithm70. The particle mesh Ewald (PME) algorithm71 was used for electrostatic interactions with a real-space cut-off of 12 Å. The Verlet cut-off scheme was used with a Lennard–Jones interaction cut-off of 12 Å and a buffer tolerance of 0.005 kJ mol−1 ps−1. For the production simulations at 298 K, a single unbiased 1 μs simulation in the NPT ensemble was performed using the Parrinello–Rahman pressure coupling scheme72. For the simulations at 598 K, ten simulations of 100 ns were performed in the NVT ensemble. Coordinates were saved every 50 ps.
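As a rough illustration, the production-run settings described above map onto a GROMACS `.mdp` fragment along the following lines. This is a sketch, not the authors' actual input file; in particular the coupling time constants `tau-t` and `tau-p` are illustrative assumptions that are not given in the text:

```
integrator               = md          ; leap-frog
dt                       = 0.004       ; 4 fs time step (virtual sites on hydrogens)
tcoupl                   = v-rescale
tc-grps                  = System
tau-t                    = 0.1         ; assumed value
ref-t                    = 298
pcoupl                   = parrinello-rahman
tau-p                    = 2.0         ; assumed value
ref-p                    = 1.0
cutoff-scheme            = Verlet
verlet-buffer-tolerance  = 0.005
coulombtype              = PME
rcoulomb                 = 1.2
rvdw                     = 1.2
constraints              = all-bonds
constraint-algorithm     = lincs       ; P-LINCS in parallel runs
nstxout-compressed       = 12500       ; coordinates every 50 ps at dt = 4 fs
```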
The fraction of native contacts (Q), as described by Best et al.33, was employed as a measure of the degree of protein (un)folding and native-likeness of protein–protein interfaces. When used to describe protein folding, the list of native contacts was built by taking all pairs of heavy atoms i and j that are within 4.5 Å of each other and are on two different residues separated by at least three other residues. When describing protein–protein interfaces, the original definition of the authors was adapted so that the list of native contacts included all pairs of heavy atoms (i, j) that are within 4.5 Å of each other and are on two different protein chains. Once the set of atom pairs is defined, Q is then calculated as follows:
$$Q(x) = \frac{1}{N}\mathop {\sum}\nolimits_{(i,j) \in N} {\frac{1}{{1 + {\mathrm{exp}}[\beta (r_{i,j}\left( x \right) - \lambda r_{i,j}^0)]}}}$$
(2)
where N is the number of atom pairs (i,j) forming the native contacts, ri,j(x) is the distance between atom i and atom j in the configuration x, $$r_{i,j}^0$$ is the distance between i and j in the native state (i.e. the starting neutron crystal structure), β is a smoothing parameter with a value of 5 Å−1, and λ is a factor that accounts for fluctuations, taking the value 1.8 for atomistic simulations. MD trajectories were analysed and Q calculated with scripts written in python and using the mdtraj library73.
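Equation (2) is straightforward to implement. The NumPy sketch below works on toy random coordinates and, for simplicity, omits the residue-separation filter (|i − j| > 3) used in the folding definition above; it only illustrates the switching function itself:

```python
import numpy as np

def fraction_native_contacts(pos, pos0, pairs, beta=5.0, lam=1.8):
    """Soft fraction of native contacts Q of Eq. (2) (Best, Hummer and Eaton).

    pos, pos0 : (n_atoms, 3) current and native coordinates (Angstrom)
    pairs     : (N, 2) integer array of native-contact atom pairs (i, j)
    """
    i, j = pairs[:, 0], pairs[:, 1]
    r = np.linalg.norm(pos[i] - pos[j], axis=1)       # r_ij(x)
    r0 = np.linalg.norm(pos0[i] - pos0[j], axis=1)    # r_ij^0 in the native state
    return np.mean(1.0 / (1.0 + np.exp(beta * (r - lam * r0))))

# Toy check: Q ~ 1 for the native structure itself,
# Q ~ 0 once all contacts are stretched far beyond lambda * r0
rng = np.random.default_rng(1)
native = rng.uniform(0.0, 10.0, size=(20, 3))
d = np.linalg.norm(native[:, None] - native[None, :], axis=-1)
ii, jj = np.where(np.triu(d < 4.5, k=1))   # toy contact list, no separation filter
pairs = np.stack([ii, jj], axis=1)
print(fraction_native_contacts(native, native, pairs))        # close to 1
print(fraction_native_contacts(5.0 * native, native, pairs))  # close to 0
```

In practice the same quantity can be computed directly from trajectories with mdtraj, as done in this work.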
Partial Least-Squares Functional Mode Analysis (PLS-FMA)34,35 was performed on the main chain of the proteins, using the fraction of native contacts as the functional property of interest. In particular, we analysed the trajectories of the tetramer simulations with PLS-FMA, using the fraction of native contacts of each monomeric unit as the measure of unfolding. About 75% of the simulation snapshots were used for training and 25% for validation. The model was built using 20 PLS components, which resulted in a Pearson correlation coefficient between the data and the model of ≥ 0.97 for both the training and validation sets, and for both force fields. We refer to the ensemble-weighted maximally correlated mode (ewMCM), as described by Krivobokova et al.35, simply as the PLS-FMA mode.
### Free energy calculations of protein mutation
Alchemical free energy calculations were performed using Gromacs 201665 and a custom version of Gromacs 4.6 that implements the soft-core potential proposed by Gapsys et al.74. The input hybrid topologies were generated using the pmx python tool75. The starting structures for the calculations were the WT-TTR (PDB-ID 4PVM), S52P-TTR (PDB-ID 5NFW) and T119M-TTR (PDB-ID 5NFE) protonated at pH 7 with pdb2gmx. The dimeric and monomeric starting structures were obtained from the same neutron structures; for the dimers, chains A and B were used, and for the monomers chain A was used. The unfolded proteins were modelled as capped tripeptides, where the central amino acid was the one being mutated, and the other two were the adjacent amino acids in protein sequence. Proteins were modelled with the Amber99SB*-ILDNP force field30,31 and water molecules with the TIP3P model66. The proteins were solvated in a dodecahedral box with periodic boundary conditions and a minimum distance between the solute and the box of 12 Å. Sodium and chloride ions were added to neutralise the systems at the concentration of 0.15 M.
Energy minimisation was performed using a steepest descent algorithm for a maximum of 10,000 steps. Temperature and pressure were equilibrated with ten independent 0.5 ns simulations in the NPT ensemble, with all solute heavy atoms restrained with a force constant of 1000 kJ mol−1 nm−2. A leap-frog stochastic dynamics integrator76 was used with a time step of 2 fs; temperature was coupled using Langevin dynamics at the target temperature of 298 K, and pressure with the Berendsen weak coupling algorithm68 at a target pressure of 1 bar. The particle mesh Ewald (PME) algorithm71 was used for electrostatic interactions with a real-space cut-off of 11 Å, a spline order of 4, a relative tolerance of 10−5 and a Fourier spacing of 1.2 Å. The Verlet cut-off scheme was used with a van der Waals interaction cut-off of 11 Å and a buffer tolerance of 0.005 kJ mol−1 ps−1. Bonds involving hydrogens were constrained with the P-LINCS algorithm70. Ten equilibrium simulations of 10 ns duration were then initiated from the last frame of the short (0.5 ns) equilibration simulations. From each of the ten equilibrium simulations, a non-equilibrium trajectory was spawned every 0.2 ns, for a total of 50 trajectories per equilibrium simulation and 500 trajectories overall. Forward and reverse transformations were performed for both mutants, with 500 non-equilibrium trajectories per transformation: WT→S52P, S52P→WT, WT→T119M, T119M→WT. Coordinates for hybrid residues were built with pmx75 after extracting the starting configurations of the systems. Energy minimisation was performed on the dummy atoms only, before equilibrating velocities with a 10 ps simulation. Then, the non-equilibrium alchemical transformation was performed over 100 ps. The non-equilibrium simulations were performed using a custom version of Gromacs 4.6 that implements the soft-core potential described in Gapsys et al.74, where both vdW and Coulombic interactions were soft-cored.
Free energy differences were estimated using the Bennett acceptance ratio (BAR)77 as implemented in pmx75. Uncertainties in the ΔG values were calculated by taking the standard error of the BAR estimate from the ten independent equilibrium simulations and related non-equilibrium trajectories.
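For intuition, the BAR estimator can be sketched in a few lines: given forward and reverse work distributions consistent with Crooks' fluctuation theorem, solving Bennett's implicit equation recovers the free energy difference. The Gaussian work values below are synthetic and purely illustrative (this is not the pmx implementation used in this work):

```python
import numpy as np

def fermi(x):
    return 1.0 / (1.0 + np.exp(np.clip(x, -500.0, 500.0)))

def bar_delta_g(w_f, w_r, lo=-50.0, hi=50.0, tol=1e-8):
    """Bennett acceptance ratio for equal numbers of forward/reverse work values.

    Solves sum_i fermi(w_f_i - dG) = sum_j fermi(w_r_j + dG); work and dG in kT.
    The difference of the two sides is monotonically increasing in dG,
    so a simple bisection finds the root.
    """
    g = lambda dg: fermi(w_f - dg).sum() - fermi(w_r + dg).sum()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Synthetic Gaussian work values consistent with Crooks' theorem (beta = 1):
# forward mean = dG + sigma^2/2, reverse mean = -dG + sigma^2/2
rng = np.random.default_rng(2)
dg_true, sigma, n = 4.0, 2.0, 50_000
w_f = rng.normal(dg_true + sigma**2 / 2.0, sigma, n)
w_r = rng.normal(-dg_true + sigma**2 / 2.0, sigma, n)
print(round(bar_delta_g(w_f, w_r), 2))   # close to dg_true = 4.0
```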
### Grand canonical Monte Carlo calculations
The simulation package ProtoMS 3.3 was used for Grand Canonical Monte Carlo (GCMC) calculations and data analysis29,78. The starting tetramer structures for the calculations were the WT-TTR (PDB-ID 4PVM), S52P-TTR (PDB-ID 5NFW), S52P-TTR/tafamidis (PDB-ID 6FFT) and T119M-TTR (PDB-ID 5NFE) with protonation states as resolved experimentally. Proteins were modelled with the Amber ff14SB force field79, water with the TIP3P model, and tafamidis with the GAFF/AM1-BCC force field66,80.
For the calculation of the stability of W17, a GC box was created around the water molecule by adding 2 Å of padding in all three dimensions, and this box was used for the calculations in both S52P-TTR and S52P-TTR/tafamidis. For the calculation of the monomer–monomer interface hydration free energy, the grand canonical box was defined so as to encompass the interface containing the conserved water molecules of interest; the same box was used for all proteins, which had previously been aligned. Because the hydration free energy results might be sensitive to the protonation states of the histidine residues in proximity to these conserved water molecules, from each experimental structure we generated the other two proteins by mutating the amino acids at positions 52 and 119. Thus, for each TTR variant, we performed three sets of calculations, based on each neutron structure and its respective protonation states of His31, His56, His88 and His90.
Protein residues further than 16–20 Å from the grand canonical region were removed, with the exact distance chosen so as to retain whole residues. The systems were then solvated up to a radius of 30 Å around the grand canonical region. All simulations were carried out at 298 K, and a 10 Å cut-off was applied to the non-bonded interactions. Before initiating the GCMC simulations, the systems were equilibrated using 50 million (M) solvent-only moves in the canonical ensemble, so as to equilibrate the water around the proteins. Water molecules present in the pre-defined GCMC box were then removed; this set of coordinates represented the starting point of the GCMC simulations.
For the calculation of the monomer–monomer interface hydration, a set of 32 simulations was performed at a range of Adams values from −29 to +2 at unit increments. For the calculations of the stability of W17, Adams values between −18 and 0 were used instead. For each window, 15 M equilibration moves were performed on the grand canonical solute only, with insertion, deletion and translation/rotation moves generated at the same ratio. An additional 5 M equilibration moves followed, in which the protein and solvent molecules were also sampled, before starting the production simulation of 50 M moves. Half of the MC moves were dedicated to the grand canonical water molecules, and the other half was split between protein residues and solvent in proportion to the number of solvent molecules and protein residues, according to a 1:5 ratio. Hamiltonian exchange was employed78, with exchanges performed every 0.2 M steps. Data for analysis, energies, the number of GC solutes present, and coordinates were saved to file every 0.1 M moves.
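The Adams value B plays the role of an applied chemical potential for the grand canonical waters. As a minimal illustration of how B controls occupancy (a toy sketch with non-interacting particles, i.e. ΔU = 0, not the ProtoMS protein setup), the standard Adams acceptance rules yield a Poisson-distributed particle number with mean exp(B):

```python
import numpy as np

def ideal_gas_gcmc(b, n_moves=200_000, seed=3):
    """Toy GCMC driven by the Adams parameter B for non-interacting particles.

    With dU = 0, the Adams acceptance rules reduce to
        insertion: acc = min(1, exp(B) / (N + 1))
        deletion:  acc = min(1, N * exp(-B))
    and the stationary distribution of N is Poisson with mean exp(B).
    """
    rng = np.random.default_rng(seed)
    n, total = 0, 0.0
    for _ in range(n_moves):
        if rng.random() < 0.5:                                   # attempt insertion
            if rng.random() < min(1.0, np.exp(b) / (n + 1)):
                n += 1
        elif n > 0 and rng.random() < min(1.0, n * np.exp(-b)):  # attempt deletion
            n -= 1
        total += n
    return total / n_moves

print(round(ideal_gas_gcmc(np.log(5.0)), 2))   # close to exp(B) = 5
```

Titrating B, as done across the 32 windows above, traces out the average occupancy curve that grand canonical integration then converts into free energies.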
The above procedure was repeated three times for the monomer–monomer interface hydration calculation (i.e. 9 calculations for each TTR variant, three repeated calculations per starting structure) and ten times for the W17 stability calculation. The hydration free energy of the whole grand canonical region, and the binding free energy of individual water molecules, were computed using grand canonical integration (GCI) as described in Ross et al.29. GCI was performed using the calc_gci.py script that is part of the ProtoMS 3.3 tools, with the data from all repeats being analysed together. The amount of data discarded as equilibration was determined using the equilibration detection tool available in calc_series.py. The same amount of data was discarded from all windows after determining the average number of moves needed for equilibration.
http://math.stackexchange.com/questions/459129/finding-all-solutions-to-an-inequality-equation | Finding all solutions to an inequality equation
I have the following inequality that I need to find all solutions of:
$2x^3-8x > 5x^2-20$
My guess is that you would have to turn this into a polynomial equation by setting the right-hand side equal to $0$ (i.e. $2x^3-5x^2-8x+20=0$). By using the factor theorem you could guess a root that is a factor of $20$, then use long division to find the other two roots. But how would you know where the inequality is satisfied — above a root (i.e. $>$) or below it (i.e. $<$)? Is it something you just need to guess and check? Or is there another way?
I think you can use that $x=2$ is a root of your equation to help you factor the lefthand side, and then use a sign chart to determine where the left side is positive. – user84413 Aug 3 '13 at 22:46
Notice that both sides factor:
$$2x(x-2)(x+2)>5(x-2)(x+2)$$
$$(2x-5)(x-2)(x+2)>0$$
The roots are thus $-2,2,$ and $\frac{5}{2}$. Since this is a positive cubic, we know it approaches infinity as $x$ gets large, so we must have $x>\frac{5}{2}$ as possible solutions. Next, notice that none of the roots are double roots, so the polynomial will change sign at each. This means that it is negative in the range $(2,\frac{5}{2})$, positive in the range $(-2,2)$, and negative in the range $(-\infty,-2)$. So the answer is:
$$(-2,2)\cup(\frac{5}{2},\infty)$$
Notice I've done just about what you recommended, except factoring made finding the roots easier, and I didn't need to test any points because of the shape of a cubic polynomial with no double roots.
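A quick numerical spot-check of the sign analysis (an illustrative sketch, not part of the original post): evaluating the difference of the two sides at sample points confirms where the inequality holds.

```python
# f(x) > 0 exactly where 2x^3 - 8x > 5x^2 - 20 holds
f = lambda x: 2 * x**3 - 8 * x - (5 * x**2 - 20)

inside = [-1.9, 0, 1.9, 2.6, 10]        # sample points from (-2, 2) and (5/2, oo)
outside = [-3, -2, 2, 2.2, 2.5]         # complement, including the three roots

assert all(f(x) > 0 for x in inside)
assert all(f(x) <= 0 for x in outside)
print("sign pattern confirmed")
```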
Once you have found the roots of the polynomial, you can write the inequality in the form: $$(x-a)(x-b)(x-c)>0,$$ with $a\leq b \leq c$. The real line is thus divided into 4 intervals: $$\mathbb R=(-\infty,a] \cup (a,b]\cup (b,c]\cup (c,+\infty).$$ Now all you need to do is check the sign of the polynomial in these four regions.
@Ryan Actually, you only need to check it in one region. The sign will alternate between regions. – Ataraxia Aug 3 '13 at 22:48
This is not true in the general case: take $$p(x)=\frac{1}{3}x^3+x^2-\frac{4}{3},$$ which is zero in $-2$ and negative in all neighbourhood of $-2$. It is possible to have a maximum in a root. – pppqqq Aug 3 '13 at 23:13
Well that's the case when a root has an even multiplicity, in which case I take there to be an implicit "interval" (-2,-2). Guess I should have clarified that point. – Ataraxia Aug 3 '13 at 23:34
http://mathcentral.uregina.ca/QQ/database/QQ.09.07/h/dee2.html
Math Central Quandaries & Queries
Question from Dee, a student:

I have a few questions that I'm having trouble working out...

1. One number is three times another number; if each is increased by 1 the sum of the reciprocals is 10/21. Find the numbers.

2. A tank can be filled by 2 pipes together in 6 hours; if the larger pipe alone takes 5 hours less than the smaller to fill the tank, find the time in which each pipe alone would fill the tank.
Dee,
Each of these problems involves reciprocals. It is explicit in the first problem. Suppose the two numbers are x and y; then, since one is three times the other, I can say y = 3x. The second fact you are told is that if each is increased by 1 (so the numbers would be x + 1 and y + 1) then the sum of the reciprocals is 10/21. Thus
1/(x + 1) + 1/(y + 1) = 10/21
Use the expression y = 3x to write this equation in terms of the one variable x. Add the fractions on the left side and solve for x.
I hope this helps,
Penny
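One way to finish the computation Penny describes (an illustrative sketch, not part of the original answer): clearing denominators in 1/(x+1) + 1/(3x+1) = 10/21 gives 21(3x+1) + 21(x+1) = 10(x+1)(3x+1), which simplifies to 15x² − 22x − 16 = 0. Exact rational arithmetic confirms the positive root:

```python
from fractions import Fraction
import math

# 15x^2 - 22x - 16 = 0, obtained by clearing denominators as above
a, b, c = 15, -22, -16
disc = b * b - 4 * a * c                      # 484 + 960 = 1444 = 38^2
x = Fraction(-b + math.isqrt(disc), 2 * a)    # positive root: (22 + 38)/30 = 2
y = 3 * x

# Verify the original condition on the reciprocals
assert Fraction(1) / (x + 1) + Fraction(1) / (y + 1) == Fraction(10, 21)
print(x, y)   # → 2 6
```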
Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences.
https://www.physicsoverflow.org/9350/blochs-theorem-and-blochs-state | Bloch's theorem and Bloch's state
The question is not so much about the theorem, but more about what it means in this context: see this link.
So yes, because of Bloch's theorem the Hamiltonian eigenstates in a crystalline system can be written as \begin{align} \psi_{n,\vec{k}}(\vec{r})=e^{i\vec{k}\cdot\vec{r}}u_{n,\vec{k}}(\vec{r}), \end{align} and so the Berry connection can be defined: \begin{align} A_{n}(\vec{k})=i\langle n(\vec{k})|\nabla_{\vec{k}}|n(\vec{k})\rangle, \end{align} but what in the world is $|n(\vec{k})\rangle$?
I've read a few articles on topological insulators and they always seem to start off with the Bloch wavefunction $e^{i\vec{k}\cdot\vec{r}} u_k(\vec{r})$, and then somehow they magically get the ket $|u(\vec{k})\rangle$ from which the Berry connection is defined... is $|u(\vec{k})\rangle$ the column vector comprised of the Fourier coefficients of $u_\vec{k}(\vec{r})$ w.r.t. $e^{i\vec{G}\cdot\vec{r}}$ or what?
This post imported from StackExchange Physics at 2014-03-24 04:14 (UCT), posted by SE-user nervxxx
As the article says, $n$ is the band index. If you were to represent the ket $|u_{n}(\mathbf{k})\rangle$ in a basis (in this case the basis of a Hilbert space spanned by bands) then you could write it as a column vector where each component corresponds to the Bloch wavefunction for each band (labeled by $n$). Note: for the simplest topological insulator model you need at least two bands: valence and conduction band. In that case you'll have a $2 \times 1$ column vector.
This post imported from StackExchange Physics at 2014-03-24 04:14 (UCT), posted by SE-user NanoPhys
@NanoPhys How do you get rid of the position $\vec{r}$ dependence then? Is that absorbed in the way the inner product is defined? But $|n(\vec{k})\rangle$ should exist on its own, and should be independent of $\vec{r}$. How does one get it from $u_{n,\vec{k}}(\vec{r})$?
This post imported from StackExchange Physics at 2014-03-24 04:14 (UCT), posted by SE-user nervxxx
Yes, that is correct. The $\mathbf{r}$ does indeed get absorbed in the definition of the inner product. Note that a Bloch state is uniquely labeled by $n$ and $\mathbf{k}$ independent of what basis it is represented in. In your case, you are writing down the Bloch "wavefunction" $\psi_{n,\mathbf{k}}(\mathbf{r})\propto e^{i\mathbf{k}\cdot\mathbf{r}}u_{n,\mathbf{k}}(\mathbf{r})$ in the position basis. The inner product $\langle u_{n}(\mathbf{k})|\dots|u_{n}(\mathbf{k})\rangle$ has to be basis independent.
This post imported from StackExchange Physics at 2014-03-24 04:14 (UCT), posted by SE-user NanoPhys
@NanoPhys So what would Bloch's theorem be, without going into any explicit basis (in particular, the position basis)? What I mean is, what can we say about the abstract kets $|\psi\rangle$? See arxiv.org/pdf/1304.5693v3.pdf , eqn 27-30. I'm not sure why that is true unless he's already assuming that the kets are already in the position basis
This post imported from StackExchange Physics at 2014-03-24 04:14 (UCT), posted by SE-user nervxxx
Equation (28) is valid. The $e^{i\mathbf{k}\cdot\mathbf{r}}$ is simply a phase factor; two kets can, in general, be related in that fashion. To make better sense of it, take a $\langle\mathbf{r}|$ on both sides. You'll get $\langle\mathbf{r}|\psi_{n\mathbf{k}}\rangle = e^{i \mathbf{k} \cdot\mathbf{r}} \langle \mathbf{r}|u_{n\mathbf{k}}\rangle\Rightarrow\psi_{n\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}u_{n\mathbf{k}}(\mathbf{r})$, which is our familiar Bloch wavefunction
This post imported from StackExchange Physics at 2014-03-24 04:14 (UCT), posted by SE-user NanoPhys
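To make the "column vector" picture concrete, here is a minimal two-band illustration using the SSH chain (a standard example, not discussed above): $|u(\vec{k})\rangle$ is a 2-component vector (here in the sublattice basis), and the Berry (Zak) phase of a band can be computed as a discretized Wilson loop of the overlaps $\langle u_{k}|u_{k+\delta k}\rangle$:

```python
import numpy as np

def zak_phase(v, w, n=400):
    # Berry (Zak) phase of the lower band of the two-band SSH model
    # H(k) = [[0, h(k)], [conj(h(k)), 0]] with h(k) = v + w e^{-ik}
    ks = np.linspace(0, 2 * np.pi, n, endpoint=False)
    h = v + w * np.exp(-1j * ks)
    # lower-band eigenvector |u(k)> as a 2-component column (sublattice basis)
    u = np.stack([-h / np.abs(h), np.ones_like(h)], axis=1) / np.sqrt(2)
    # discretized Wilson loop: product of overlaps <u_k | u_{k+dk}> around the BZ
    overlaps = np.sum(np.conj(u) * np.roll(u, -1, axis=0), axis=1)
    return -np.angle(np.prod(overlaps))

print(abs(zak_phase(0.5, 1.0)))  # ~pi (topological phase, w > v)
print(abs(zak_phase(1.0, 0.5)))  # ~0  (trivial phase, v > w)
```

For $w > v$ the off-diagonal element $h(k)$ winds around the origin and the phase is $\pi$; for $v > w$ it does not wind and the phase is $0$.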
https://www.physicsforums.com/threads/a-ball-rolling-down-a-slope-find-its-velocity.724022/ | # Homework Help: A ball rolling down a slope, find its velocity.
1. Nov 20, 2013
### Milsomonk
Hi guys, I've got an interesting problem in my coursework. I have to say I am stumped, so here it is.
A hollow sphere of mass 56 grams starts at rest and is allowed to roll without slipping 2 metres down a slope at an angle of 25 degrees. Gravity is assumed to be 10 m/s^2. Calculate the velocity after 2 metres.
I = (2/3)mr^2 is given as a relevant equation.
I'm not even sure where to start without being given a measurement for r or any dimensions. I've calculated the gravitational potential energy in the hope I can find the proportions of KE (rotational) and KE (linear), but I'm at a dead end there.
Any help would be hugely appreciated as I'm baffled.
Cheers
2. Nov 20, 2013
### Staff: Mentor
What might be conserved, since there is no slipping?
Tip: When not given a quantity, such as r, just assume you may not need it. Just solve it symbolically and see what happens.
3. Nov 20, 2013
### Milsomonk
Will it still accelerate at 10 m/s^2 straight down, despite the slope? Then we would have a time component for the distance.
4. Nov 20, 2013
### Staff: Mentor
No. 10 m/s^2 would be the acceleration of an object in free fall, not something in contact with a slope.
To solve for the acceleration, if you would like to do that, you'll need to consider all the forces acting and apply Newton's 2nd law for translation and rotation.
But there's an easier way. Answer the question I raised in earlier post.
5. Nov 20, 2013
### Milsomonk
Sorry, I was trying to understand your initial question (I'm not good at this). So the energy is conserved? mgh? I've worked that out, but that's where I'm stuck.
6. Nov 20, 2013
### Staff: Mentor
Yes, the energy is conserved.
At the top of the ramp, the energy is all potential. But what about at the bottom? How can you express the total energy of the rolling ball at the bottom?
7. Nov 20, 2013
### Milsomonk
I get that: some will be rotational and some will be linear. It's just a case of working out the proportion of each and then finding a velocity from that, which is where I'm stuck. Thanks for the help by the way, I am getting there I think.
8. Nov 20, 2013
### Staff: Mentor
That's exactly it. Hint: The two energies are connected by the fact that the ball rolls without slipping. How do you express that mathematically?
9. Nov 20, 2013
### Milsomonk
PE = mgh, so mgh = KE(rotational) + KE(linear). So if there is no slip, does that make the split 50/50?
10. Nov 20, 2013
### Tanya Sharma
The relation is correct, but they are not in the ratio 1:1.
Express the relation symbolically in terms of the given parameters like 'm', 'v', 'R', ...
Use the rolling-without-slipping constraint.
Last edited: Nov 20, 2013
11. Nov 21, 2013
### Milsomonk
Ok so I think I might be getting somewhere, just one small hurdle. Here's what I've got.
mgh=1/2(2/3mr^2)(V/r)^2+1/2mv^2
Then I cancelled the m from both sides gh=1/2(2/3r^2)(V/r)^2+1/2v^2
Where I'm stuck is how to get rid of r.
12. Nov 21, 2013
### Tanya Sharma
There is an r^2 term in both the numerator and denominator of the first term on the right-hand side.
You missed cancelling them
Last edited: Nov 21, 2013
13. Nov 21, 2013
### Milsomonk
AHA silly me, awesome, thank you all for your help, I have now done it!! only took me three days haha :)
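Putting the thread's result together: cancelling r and m gives gh = (1/3 + 1/2)v^2 = (5/6)v^2, so v = sqrt(6gh/5) with h = 2 sin(25°). A quick numerical check with the problem's numbers:

```python
import math

g = 10.0              # m/s^2, as given in the problem
d, angle = 2.0, 25.0  # slope length (m) and incline (degrees)
h = d * math.sin(math.radians(angle))  # vertical drop

# mgh = (1/2)(2/3 m r^2)(v/r)^2 + (1/2) m v^2  =>  g h = (5/6) v^2
v = math.sqrt(6 * g * h / 5)
print(round(v, 2), "m/s")  # about 3.18 m/s
```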
https://philpapers.org/s/Chris%20Lambie-Hanson | Works by Chris Lambie-Hanson
13 found
1. Simultaneous Stationary Reflection and Square Sequences. Yair Hayut & Chris Lambie-Hanson - 2017 - Journal of Mathematical Logic 17 (2):1750010.
We investigate the relationship between weak square principles and simultaneous reflection of stationary sets.
2. Aronszajn Trees, Square Principles, and Stationary Reflection. Chris Lambie-Hanson - 2017 - Mathematical Logic Quarterly 63 (3-4):265-281.
We investigate questions involving Aronszajn trees, square principles, and stationary reflection. We first consider two strengthenings of math formula introduced by Brodsky and Rinot for the purpose of constructing κ-Souslin trees. Answering a question of Rinot, we prove that the weaker of these strengthenings is compatible with stationary reflection at κ but the stronger is not. We then prove that, if μ is a singular cardinal, math formula implies the existence of a special math formula-tree with a cf-ascent path, thus (...)
3. Knaster and Friends II: The C-Sequence Number. Chris Lambie-Hanson & Assaf Rinot - 2020 - Journal of Mathematical Logic 21 (1):2150002.
Motivated by a characterization of weakly compact cardinals due to Todorcevic, we introduce a new cardinal characteristic, the C-sequence number, which can be seen as a measure of the compactness of a regular uncountable cardinal. We prove a number of ZFC and independence results about the C-sequence number and its relationship with large cardinals, stationary reflection, and square principles. We then introduce and study the more general C-sequence spectrum and uncover some tight connections between the C-sequence spectrum and the strong (...)
4. Squares, Ascent Paths, and Chain Conditions. Chris Lambie-Hanson & Philipp Lücke - 2018 - Journal of Symbolic Logic 83 (4):1512-1538.
5. Squares and Covering Matrices. Chris Lambie-Hanson - 2014 - Annals of Pure and Applied Logic 165 (2):673-694.
Viale introduced covering matrices in his proof that SCH follows from PFA. In the course of the proof and subsequent work with Sharon, he isolated two reflection principles, CP and S, which, under certain circumstances, are satisfied by all covering matrices of a certain shape. Using square sequences, we construct covering matrices for which CP and S fail. This leads naturally to an investigation of square principles intermediate between □κ and □ for a regular cardinal κ. We provide a detailed (...)
6. Diagonal Supercompact Radin Forcing. Omer Ben-Neria, Chris Lambie-Hanson & Spencer Unger - 2020 - Annals of Pure and Applied Logic 171 (10):102828.
Motivated by the goal of constructing a model in which there are no κ-Aronszajn trees for any regular $\kappa>\aleph_1$, we produce a model with many singular cardinals where both the singular cardinals hypothesis and weak square fail.
7. Separating Diagonal Stationary Reflection Principles. Gunter Fuchs & Chris Lambie-Hanson - 2021 - Journal of Symbolic Logic 86 (1):262-292.
We introduce three families of diagonal reflection principles for matrices of stationary sets of ordinals. We analyze both their relationships among themselves and their relationships with other known principles of simultaneous stationary reflection, the strong reflection principle, and the existence of square sequences.
8. The Hanf Number for Amalgamation of Coloring Classes. Alexei Kolesnikov & Chris Lambie-Hanson - 2016 - Journal of Symbolic Logic 81 (2):570-583.
9. Good and Bad Points in Scales. Chris Lambie-Hanson - 2014 - Archive for Mathematical Logic 53 (7-8):749-777.
We address three questions raised by Cummings and Foreman regarding a model of Gitik and Sharon. We first analyze the PCF-theoretic structure of the Gitik–Sharon model, determining the extent of good and bad scales. We then classify the bad points of the bad scales existing in both the Gitik–Sharon model and other models containing bad scales. Finally, we investigate the ideal of subsets of singular cardinals of countable cofinality carrying good scales.
10. Bounded Stationary Reflection II. Chris Lambie-Hanson - 2017 - Annals of Pure and Applied Logic 168 (1):50-71.
11. Simultaneously Vanishing Higher Derived Limits Without Large Cardinals. Jeffrey Bergfalk, Michael Hrusak & Chris Lambie-Hanson - forthcoming - Journal of Mathematical Logic.
12. Forcing a □(κ)-Like Principle to Hold at a Weakly Compact Cardinal. Brent Cody, Victoria Gitman & Chris Lambie-Hanson - 2021 - Annals of Pure and Applied Logic 172 (7):102960.
13. Knaster and Friends III: Subadditive Colorings. Chris Lambie-Hanson & Assaf Rinot - forthcoming - Journal of Symbolic Logic:1-48.
https://math.stackexchange.com/questions/1993670/constructing-a-circle-given-another-circle-and-three-points | # Constructing a Circle Given Another Circle and Three Points
Given a circle $o$ and distinct points $A, B, S$ outside $o$, how can I construct a circle $o'$ through $A$ and $B$, such that $S$ lies on the line joining the two intersection points of $o$ and $o'$?
Any hints,
Thanks
• Are you sure it can be done generally? I am thinking of a counter-example where o has center at (0,0), A is (1,1) and B is (1,-1). Any o' has to have its center on the horizontal axis and hence intersects o, if it indeed does so, at points symmetric in the horizontal axis. That is, if S is (5,5), your o' does not exist?
– Jan
Oct 31 '16 at 22:17
• You need restrictions on the position of the points $A,B,S$. In many cases there is no solution, in particular when these three points are collinear. But there are many others. Oct 31 '16 at 22:20
• It does not say generally, it just says construct a circle Oct 31 '16 at 22:21
Well, such a circle $\omicron'$ might not exist. However, there is a solution if you extend the problem in the following form: Given a circle $\omicron$ and distinct points $A, B$ and $S$ outside $\omicron$, construct a circle $\omicron'$ through $A$ and $B$, such that $S$ lies on the radical axis of the two circles $\omicron$ and $\omicron'$.
1. Construct the circle $k_S$ centered at $S$ and orthogonal to $\omicron$.
2. Construct the inverse image $A'$ of $A$ with respect to $k_S$, or alternatively construct the inverse image $B'$ of $B$ with respect to $k_S$ (or you can do both it is still going to work).
3. Draw the unique circle $\omicron'$ passing through $A, B, A'$ (or through $A, B, B'$ which is going to be the same circle $\omicron'$). In any case $\omicron'$ passes through the four points $A, A', B, B'$.
4. If $\omicron$ and $\omicron'$ intersect at two points, then $S$ will lie on the line determined by the two intersection points of $\omicron$ and $\omicron'$. If $\omicron$ and $\omicron'$ touch at one point, then $S$ will lie on the line tangent to both $\omicron$ and $\omicron'$ at their point of contact. If $\omicron$ and $\omicron'$ do not intersect, then $S$ will lie on their radical axis.
In any case, $S$ will lie on the radical axis of $\omicron$ and $\omicron'$.
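The construction is easy to verify numerically. In the sketch below the circle $\omicron$ and the points $A, B, S$ are arbitrary test data (not from the question); the check is that $S$ has equal power with respect to $\omicron$ and $\omicron'$, which is exactly the radical-axis condition:

```python
import math

def power(p, center, r):
    # power of point p with respect to the circle (center, r)
    return (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2 - r * r

def circumcircle(a, b, c):
    # center and radius of the circle through three points
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

# arbitrary test configuration: circle o and points A, B, S outside it
O, r = (0.0, 0.0), 1.0
A, B, S = (3.0, 1.0), (2.0, -2.0), (4.0, 3.0)

rS2 = power(S, O, r)  # squared radius of k_S, the circle centred at S orthogonal to o
t = rS2 / ((A[0] - S[0]) ** 2 + (A[1] - S[1]) ** 2)
Ap = (S[0] + t * (A[0] - S[0]), S[1] + t * (A[1] - S[1]))  # inverse of A in k_S

Cc, R = circumcircle(A, B, Ap)  # the circle o' through A, B, A'
print(power(S, O, r), power(S, Cc, R))  # equal: S lies on the radical axis
```

Here "$k_S$ orthogonal to $\omicron$" translates to: the squared radius of $k_S$ equals the power of $S$ with respect to $\omicron$, which is what the first line after the test data computes.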
http://mathhelpforum.com/advanced-algebra/275165-elements-torsion-n-group-g.html | # Thread: Elements of torsion n of a group G
1. ## Elements of torsion n of a group G
Hello, community! I've been struggling forever with this problem. Let G be abelian and let G(n) denote the set of elements g of G such that g^n = e. I'm asked to prove G(n) is a subgroup, which is okay; nothing really special about it. However, I'm also asked to prove that given any prime p, it is true that |G(p)|^2 ≥ |G(p^2)|, where the bars denote the number of elements in each group. Thanks for any help!
PS: G is also finite.
2. ## Re: Elements of torsion n of a group G
Well, it is easy to show that $G(p)$ is a subgroup of $G(p^2)$. I would guess some application of the Lagrange theorem would be appropriate. Assume that $|G(p^2)|>|G(p)|^2$ and find some contradiction, maybe. What have you tried?
3. ## Re: Elements of torsion n of a group G
This is easy if you know the "fundamental theorem of finite abelian groups" -- you can find this theorem and proofs thereof on the net. I've been trying to prove this without the fundamental theorem, but don't see how to do it. If you can't solve the problem with the theorem, post again and I'll try to help.
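With the fundamental theorem in hand, the inequality is also easy to sanity-check by brute force: every finite abelian G is a product of cyclic groups, and in Z_m the equation nx = 0 has exactly gcd(n, m) solutions, so |G(n)| is a product of gcds. The group shapes below are arbitrary test cases:

```python
from math import gcd

def torsion_count(factors, n):
    # |G(n)| for G = Z_{m1} x ... x Z_{mk}: n*x = 0 (mod m) has gcd(n, m) solutions
    out = 1
    for m in factors:
        out *= gcd(n, m)
    return out

for factors in [(4,), (2, 2), (8, 6), (12, 18, 5), (9, 27, 4)]:
    for p in (2, 3, 5):
        assert torsion_count(factors, p) ** 2 >= torsion_count(factors, p * p)
print("inequality holds for all test groups")
```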
http://mathhelpforum.com/new-users/209651-cant-prove-equation-print.html | # Can't prove this equation
• December 12th 2012, 03:13 AM
wille13
Can't prove this equation
I need help on how to prove that 9^(n+3) + 4^n is divisible by 5. Please help, I have no idea how to solve this.
• December 12th 2012, 03:29 AM
MarkFL
Re: Can't prove this equation
I would use induction, observing that:
$(9^{n+4}+4^{n+1})-(9^{n+3}+4^n)=5\cdot9^{n+3}+3(9^{n+3}+4^n)$
• December 12th 2012, 03:29 AM
coolge
Re: Can't prove this equation
$9^n \cdot 9^3 + 4^n$
$(5+4)^n \cdot 729 + 4^n$
Use the binomial theorem to expand $(5+4)^n$.
All terms in the expansion except the last term $4^n$ are divisible by 5.
Take the two leftover terms:
$4^n \cdot 729 + 4^n$. This is divisible by 5.
• December 12th 2012, 10:56 AM
richard1234
Re: Can't prove this equation
Or you can observe that
$9^{n+3} + 4^n = 729(9^n) + 4^n$
$\equiv 729(4^n) + 4^n$ (mod 5)
$\equiv 730(4^n)$ (mod 5)
$\equiv 0$ (mod 5)
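The congruence is also easy to spot-check numerically for the first few values of n:

```python
# 9^(n+3) + 4^n is divisible by 5 for every n >= 0
for n in range(100):
    assert (9 ** (n + 3) + 4 ** n) % 5 == 0
print("divisible by 5 for n = 0..99")
```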
• December 12th 2012, 03:02 PM
Deveno
Re: Can't prove this equation
Quote:
Originally Posted by richard1234
Or you can observe that
$9^{n+3} + 4^n = 729(9^n) + 4^n$
$\equiv 729(4^n) + 4^n$ (mod 5)
$\equiv 730(4^n)$ (mod 5)
$\equiv 0$ (mod 5)
even simpler:
9 = 4 (mod 5), whence 9^3 = 4^3 = 4 (mod 5) (since 4^2 = 16 = 1 (mod 5)).
thus 9^(n+3) + 4^n = (4^n)(4) + 4^n = 5(4^n) = 0 (mod 5).
why do this? because why should i have to calculate the cube of 9, when i can calculate the cube of 4 instead (729 is a number i don't use everyday)?
• December 12th 2012, 04:18 PM
richard1234
Re: Can't prove this equation
Quote:
Originally Posted by Deveno
why do this? because why should i have to calculate the cube of 9, when i can calculate the cube of 4 instead (729 is a number i don't use everyday)?
Yeah that solution's slightly simpler than mine. I just happen to have 9^3 memorized.
https://pure.unipa.it/it/publications/sterilizzazione-con-microonde-di-rifiuti-sanitari-determinazione- | # STERILIZZAZIONE CON MICROONDE DI RIFIUTI SANITARI: DETERMINAZIONE DIRETTA DELL’EFFICACIA DEL PROCESSO
Research output: Article, peer review
## Abstract
In the sterilization of Health Care Waste marked as potentially infectious, microwaves (MW) have long been proposed as an alternative to steam. The effectiveness of the operation is assessed by determining the fraction of a known starting population of micro-organisms that has survived the sterilizing agent. Customarily, this population is introduced into the waste mass in the form of one or more sealed vials. These make up an artificial environment which is completely under control; but the sterilizing agent could a priori act on it with higher or lower effectiveness, compared with the loose mass which is directly exposed to it. As long as traditional steam sterilization was the only process available, the meaning and representativeness of the micro-organisms' response in sealed vials were not questioned. In principle, however, a penetrative physical sterilizing agent, as MWs are, could sterilize a standard vial's content better than steam would, since steam merely flows around it. If an operator is deciding whether to shift to MWs, this success induces him to reduce the energy fed to the waste mass, compared to what he deemed satisfactory before, when the energy was transferred by steam. Since the onset of the MW sterilization technique, therefore, a need arose for validation under the most realistic conditions. This demand drives researchers to work out techniques for bacterial counting that have no barriers: that is, techniques that count the same tracing cells, but after they, 1) have been freely dispersed in the whole mass; 2) have undergone the same disinfecting actions as the surrounding mass; 3) have been sampled from the mass at the end of the process. It is evident that, in order to gain certainty in phase 2, two severe uncertainties have been unwillingly introduced as phases 1 and 3.
Indeed, the experimental campaign on which this paper reports was aimed at simulating at lab scale the MW sterilization of synthetic waste samples that had been contaminated with known amounts of spores, in order to get quantitative information on the efficiency and identify the possibly critical steps of the whole procedure. In the 6 different sessions that were run, the operational variables were the waste moisture content (25% - 80%) and the amount of energy supplied as MW; a residence time of 40 min was instead common to all tests. The temperature patterns were recorded, and at the end the whole mass was washed to detach the spores for subsequent cultivation and counting. In this way the critical features of the procedure were identified and ranked by severity. It was foreseen that the procedure would be time-consuming and would require handling considerable amounts of water and glassware. The experiments showed something more serious: the physiological solution alone, without any surfactant agent added, is unable to detach the spores quantitatively from the waste chips and the beaker walls. The addition of a few drops of surfactant, as was done by Oliveira et al. (2010), is thus a technical detail that is critical for the success of the whole treatment and analysis chain. Of course one must be certain that the chosen surfactant does not interfere with the growth medium and/or the bacterial viability in the plate cultivation that follows. The loss of viable spores in washing the waste batches was calculated by us: 1) sampling and cultivating 1 ml of wash solution coming from the control sample that was inoculated and not irradiated (called K+); and then, 2) comparing the result with the known inoculum. Regrettably, less
Original language: Italian. Pages: 71-86 (16 pages). Journal: INGEGNERIA DELL'AMBIENTE, volume 5, n. 2/2018. Status: Published - 2018.
http://www.gradesaver.com/the-alchemist-coelho/q-and-a/while-reading-the-novel-what-did-you-predict-incorrectly-what-did-you-correctly-predict-for-instance-what-part-of-the-novel-was-not-surprising-and-what-part-of-the-novel-surprised-you-the-most--299695 | # While reading the novel, what did you predict incorrectly? What did you correctly predict? For instance, what part of the novel was not surprising and what part of the novel surprised you the most?
https://homework.cpm.org/category/ACC/textbook/acc6/chapter/5%20Unit%205/lesson/CC1:%205.1.1/problem/5-5
5-5.
Draw a rectangle with a width of $8$ units and a length of $6$ units.
1. What is the enlargement ratio if you enlarge the figure to have a width of $16$ units and a length of $12$ units?
• $8(?) = 16$
$6(?) = 12$
1. If you wanted to reduce the $8$ by $6$ rectangle by a ratio of $\frac { 1 } { 4 }$, what would the dimensions of the new rectangle be?
• What is one fourth of $8$? What is one fourth of $6$?
The rectangle would be $2$ units by $1\frac{1}{2}$ units.
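Both parts can be checked with exact fractions:

```python
from fractions import Fraction

w, l = 8, 6
ratio = Fraction(16, w)          # enlargement ratio: 16/8 = 12/6 = 2
assert ratio == Fraction(12, l) == 2

reduced = (w * Fraction(1, 4), l * Fraction(1, 4))
print(reduced)  # (Fraction(2, 1), Fraction(3, 2)), i.e. 2 by 1 1/2
```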
https://k1monfared.wordpress.com/2015/10/17/on-convex-bodies/ | On convex bodies
Dmitry Ryabogin from Kent State University gave two talks today here at UCalgary. The first talk was about convex bodies with congruent projections. You can read about it here: PDF. And their recent results here: arXiv. The question asks: for two convex bodies $K$ and $L$ in $\mathbb{R}^n$, if their orthogonal projections onto each hyperplane are rotations and translations of each other, is it true that $K = \pm L + a$ for some translation vector $a$?
One interesting side question is the following: Does there exist a convex body $C$ in $\mathbb{R}^2$ ($\mathbb{R}^n$), which is not a disk (sphere), such that it can be rotated arbitrarily between two parallel lines (hyperplanes) without losing contact with either one of them?
It turns out that such things are well known and they are called convex bodies of constant width. The idea goes back at least to Euler. Here is one way to construct one in $\mathbb{R}^2$: Start with an equilateral triangle ABC and draw a circle centred at A with radius AB=AC. Repeat this for the other two vertices. The intersection of the three disks can be shown to have constant width. The more interesting part is that if you use this shape as a drill you'll make a hole that is a square! Guess what the side length of the square is!
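The constant-width claim is easy to check numerically. Below is a throwaway numpy sketch (my own construction, using an equilateral triangle of side 1): sample the three arcs of the Reuleaux triangle and measure the width — maximum minus minimum projection — in many directions; every width comes out equal to the side length.

```python
import numpy as np

side = 1.0
verts = side * np.array([[0, 0], [1, 0], [0.5, np.sqrt(3) / 2]])
# each circular arc is centred at one vertex and joins the other two
arcs = [(0, 0.0, np.pi / 3),            # from B to C, centred at A
        (1, 2 * np.pi / 3, np.pi),      # from C to A, centred at B
        (2, 4 * np.pi / 3, 5 * np.pi / 3)]  # from A to B, centred at C
pts = np.vstack([
    verts[i] + side * np.column_stack([np.cos(t), np.sin(t)])
    for i, a0, a1 in arcs
    for t in (np.linspace(a0, a1, 2000),)
])

angles = np.linspace(0, np.pi, 360)
dirs = np.column_stack([np.cos(angles), np.sin(angles)])
proj = pts @ dirs.T                      # projections of all boundary points
widths = proj.max(axis=0) - proj.min(axis=0)
print(widths.min(), widths.max())        # both ~ 1.0, the side length
```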
This doesn't quite make a wheel in the conventional way, with the axle connected to the centre. But if you put something on top of 4 of them, it'll always remain parallel to the ground, of course. Here is a link from Wikipedia. And you can read more about them in Chapter 3 (Convex Bodies of Constant Width) of the book Convexity and Its Applications, by Chakerian and Groemer.
From Wikipedia: Reuleaux triangle
A naive question is if we consider all projections into all subspaces, rather than just hyperplanes, does this question become trivial, or is it equivalent to the original problem? Or maybe something else?
http://mathoverflow.net/questions/91994/recognizing-the-4-sphere-and-the-adjan-rabin-theorem?sort=votes | # Recognizing the 4-sphere and the Adjan--Rabin theorem
The problem of recognizing the standard $S^n$ is the following: Given some simplicial complex $M$ with rational vertices representing a closed manifold, can one decide (in finite time) whether $M$ is homeomorphic to $S^n$?
For $n=1$, this is obvious, and for $n=2$, one can solve it by computing $\chi(M)$. A solution for $n=3$ is due to
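For $n=2$ this is just Euler's formula. As a toy illustration (my own example, not from the post): the octahedron triangulates $S^2$, and computing $\chi = V - E + F$ from its face list gives $2$:

```python
# octahedron: vertices 0-3 form the equatorial square, 4 is the north pole,
# 5 the south pole; eight triangular faces
faces = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4),
         (1, 0, 5), (2, 1, 5), (3, 2, 5), (0, 3, 5)]
vertices = {v for f in faces for v in f}
edges = {frozenset(p) for a, b, c in faces for p in [(a, b), (b, c), (a, c)]}
chi = len(vertices) - len(edges) + len(faces)
print(chi)  # → 2, so the complex passes the S^2 test
```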
J.H. Rubinstein. An algorithm to recognize the 3-sphere. In Proceedings of the International Congress of Mathematicians, volume 1, 2, pages 601–611, Basel, 1995. Birkhäuser.
By a theorem of S.P. Novikov, the problem is unsolvable if $n\geq 5$. The idea is the following: By the Adjan--Rabin theorem, there is a sequence of super-perfect groups $\pi_i$ for which the triviality problem is unsolvable. Now construct homology spheres $\Sigma_i$ with fundamental groups $\pi_i$. If one can decide which of the $\Sigma_i$ are standard spheres, then one can solve the triviality problem for the fundamental groups.
Question: Is the recognition problem for $S^4$ solvable?
The problem with this proof of S.P. Novikov's theorem is that there is no result that asserts that for any given super-perfect group $\pi$ there is a homology $4$-sphere satisfying $\pi_1(\Sigma) = \pi$. However, Kervaire has proved that every perfect group with the same number of generators and relators may be realized as the fundamental group of a homology $4$-sphere.
Thus the question: Is there an improved Adjan--Rabin theorem that asserts the existence of a sequence of perfect groups $\pi_i$ with the same number of generators and relators, the triviality problem of which is unsolvable?
As mentioned, algorithmic 4-sphere recognition is an open problem. Since Rubinstein's solution to the 3-sphere recognition problem is so simple and elegant, perhaps the first thing you might guess is: why not try those techniques in dimension 4? Normal surfaces, crushing normal 3-spheres, searching for almost-normal 3-spheres.
That theory is still in its infancy. Rubinstein and his former student Bell Foozwell have been developing normal co-dimension one manifold theory in triangulated manifolds. They have a "normalization" process that follows Rubinstein's general normal/almost-normal schema, but it appears to do a fair bit of damage to the manifolds, so it's not clear to me if anything like this could eventually be used for 4-sphere recognition. Maybe some creative variant of the idea will work out.
Another closely-related problem would be an algorithmic Schoenflies theorem, to determine if a normal 3-sphere bounds a ball.
A presentation with the same number of generators and relations is called balanced. The triviality problem for balanced presentations (indeed, the word problem for balanced presentations) is a major unsolved problem. Googling the phrase 'triviality problem for balanced presentations' will give lots of references. Note that you may automatically assume that your groups $\pi_i$ are perfect, since the class of perfect groups is recursive.
Thank you for the answer, I'll look into that. And thank you in particular for the remark on perfect groups! – Malte Mar 23 '12 at 14:32
Recognition of $S^4$ is listed as an open problem in the survey of Shmuel Weinberger, "Homology Manifolds" (page 1088): http://www.maths.ed.ac.uk/~aar/homology/shmuel2.pdf with exactly the same reasoning that HW explained. Note that fundamental groups of homology 4-spheres need not be balanced (an example of Hausmann and Weinberger from 1984); still, nobody so far was able to exploit this.
The Hausmann--Weinberger example is good news. One might not have to solve the triviality problem for balanced groups then to get the S.P. Novikov theorem for $n=4$. – Malte Mar 23 '12 at 14:33
https://stats.stackexchange.com/questions/359369/should-a-cost-function-in-ml-always-be-written-as-an-average-over-all-training-s | # Should a cost function in ML always be written as an average over all training samples?
I'm building a non-kernelized Support Vector Machine classifier. The problem that I need to solve is:
$$\min_{w} \left(w^{t}w + \sum_{i=0}^{n}\max\left(0, 1-y_i\left(w^{t}x_{i} + b\right)\right)\right)$$
I want to solve this problem using subgradient descent, so I construct the cost function:
$$J_{1}(w,b) = w^{t}w + C\frac{1}{n}\sum_{i=0}^{n}\max(0, 1-y_i(w^{t}x_{i} + b))$$
But, I found one source online (I will provide a link if I manage to find it again) where they wrote the cost function as follows:
$$J_{2}(w,b) = \frac{1}{n}\sum_{i=0}^{n}(w^{t}w + C\max(0, 1-y_i(w^{t}x_{i} + b)))$$
I tried both versions of the cost function using their respective gradients, and it seems to me using $J_{2}$ gives a behavior of the SVM that looks most alike to the mental model of the behavior of SVM I have.
So my question is: which one of the two versions of the cost function is the correct one (or the better one) in this case, and why?
And more generally, should we write cost functions that are as whole written as an average over the training samples (as in $J_{2}$), or is it better to write some components as an average over the training samples and others not as an average (as in $J_{1}$)?
By the way, I'm using vanilla subgradient descent (as opposed to Stochastic GD), meaning my implementation makes updates of the weights and the bias using all the training samples per each update.
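For what it's worth, here is a minimal numpy sketch of one full-batch subgradient step on $J_1$ (variable names are mine; the $2w$ term is the gradient of $w^{t}w$, and only points with margin below 1 contribute to the hinge part):

```python
import numpy as np

def subgrad_step(w, b, X, y, C, lr):
    """One subgradient step on J1(w, b) = w'w + (C/n) * sum_i hinge_i."""
    n = len(y)
    margins = y * (X @ w + b)
    active = margins < 1                     # hinge is nonzero (or kinked) here
    grad_w = 2 * w - (C / n) * (y[active, None] * X[active]).sum(axis=0)
    grad_b = -(C / n) * y[active].sum()
    return w - lr * grad_w, b - lr * grad_b
```

On a tiny separable data set, a few hundred of these steps already classify every point correctly.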
which one of the two versions of the cost function is the correct one (or the better one) in this case, and why?
I suspect you were intending to take the average, in which case the sum would typically be written from $i=1$ to $n$. If that's the case, then your expressions $J_1$ and $J_2$ for the cost function would be equivalent (you can rearrange each to get the other).
And more generally, should we write cost functions that are as whole written as an average over the training samples (as in $J_2$), or is it better to write some components as an average over the training samples and others not as an average (as in $J_1$)?
In my experience, the form of $J_1$ is more typical. It clearly expresses the cost function as a sum of two terms: 1) the loss (hinge loss in this case), which is summed/averaged over data points, and 2) the penalty/regularization term ($\ell_2$ penalty in this case), which depends only on the weights/parameters.
Not all cost functions can be written as a sum/average over data points. For example, some might involve pairs of data points. Or, an operation other than a sum might be of interest. Or, considering cost functions for general optimization problems, there may be no data at all.
Edit: Proof that $J_1$ and $J_2$ are equivalent
I'll assume here that the sums should be from $1$ to $n$.
Start with the expression for $J_2$:
$$\frac{1}{n} \sum_{i=1}^n \left ( w^T w + C \max(0, 1-y_i(w^T x_i + b)) \right )$$
Split the sum into two separate sums:
$$= \frac{1}{n} \sum_{i=1}^n w^T w + \frac{1}{n} \sum_{i=1}^n C \max(0, 1-y_i(w^T x_i + b))$$
Factor out terms that don't depend on $i$:
$$= \frac{1}{n} w^T w \sum_{i=1}^n 1 + C \frac{1}{n} \sum_{i=1}^n \max(0, 1-y_i(w^T x_i + b))$$
Note that $\sum_{i=1}^n 1 = n$:
$$= w^T w + C \frac{1}{n} \sum_{i=1}^n \max(0, 1-y_i(w^T x_i + b))$$
This is now the expression for $J_1$.
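The algebra can also be confirmed numerically — a quick sketch with random data (names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, C = 50, 3, 10.0
X = rng.normal(size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)
w = rng.normal(size=d)
b = 0.5

hinge = np.maximum(0.0, 1.0 - y * (X @ w + b))
J1 = w @ w + C * hinge.mean()       # regularizer outside the average
J2 = np.mean(w @ w + C * hinge)     # regularizer inside the average
print(np.isclose(J1, J2))           # → True
```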
• Can you please show how you can rearrange $J_{1}$ to get $J_{2}$, because I don't think they are equivalent? In $J_{2}$ you add more importance to the norm of the weights because you add the dot product n-times instead of once. – user3071028 Jul 29 '18 at 10:36
• I edited the post to show what you're asking for – user20160 Jul 29 '18 at 12:58
If we excluded the $w^Tw$ term, the two objectives would be equivalent as they would be multiples of each other. So the only difference is whether this term is weighted or not. The $w^Tw$ term serves to penalize large weights to prevent overfitting. Generally regularization terms get their own weighting parameter, often denoted as $\lambda$, that toggles the relative importance of the two terms: $$\lambda w^Tw + \frac{1}{n}\sum^n_{i=1} f(x_i,y_i)$$ Since the regularization term doesn't depend on the number of training examples, there is no intuitive reason for its weight to be $1/n$, but it might still be useful to use $\lambda <1$. Often the optimal value for this parameter is found via cross-validation with left-out/unseen data.
http://mathhelpforum.com/advanced-algebra/165317-silly-yet-annoying-porblem-concening-vector-rotation.html | # Math Help - Silly yet annoying porblem concening vector rotation
1. ## Silly yet annoying problem concerning vector rotation
Hi guys,
I wish to find the rotation matrix which rotates a general 3d vector (x,y,z) into a z-axis vector (0,0,z'). z' can of course be computed easily, but I'm interested in the rotation matrix. If possible, I'd like to avoid solving a set of 9 algebraic equations to find it.
Can anyone help with this?
2. I recently worked out the matrix multiplications necessary to rotate a vector around a given axis through a given angle. That is related to this problem.
First, you need to know that, in two dimensions, the matrix that rotates around the origin through angle $\theta$ is of the form $\begin{bmatrix}cos(\theta) & -sin(\theta) \\ sin(\theta) & cos(\theta)\end{bmatrix}$. You can see that is true by looking at what it does to the "basis" vectors $\begin{bmatrix}1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix}0 \\ 1\end{bmatrix}$:
$\begin{bmatrix}cos(\theta) & -sin(\theta) \\ sin(\theta) & cos(\theta)\end{bmatrix}\begin{bmatrix}1 \\ 0\end{bmatrix}= \begin{bmatrix}cos(\theta) \\ sin(\theta)\end{bmatrix}$ which is clearly a vector making angle $\theta$ with the x-axis and
$\begin{bmatrix}cos(\theta) & -sin(\theta) \\ sin(\theta) & cos(\theta)\end{bmatrix}\begin{bmatrix}0 \\ 1\end{bmatrix}= \begin{bmatrix} -sin(\theta) \\ cos(\theta)\end{bmatrix}$ which is clearly a vector making angle $\theta$ with the y-axis.
In three dimensions it is easy to extend that and see that $\begin{bmatrix}cos(\theta) & -sin(\theta) & 0 \\ sin(\theta) & cos(\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix}$ rotates through an angle $\theta$ around the z-axis and then to see that $\begin{bmatrix}1 & 0 & 0 \\ 0 & cos(\theta) & -sin(\theta) \\ 0 & sin(\theta) & cos(\theta)\end{bmatrix}$ and $\begin{bmatrix}cos(\theta) & 0 & -sin(\theta) \\ 0 & 1 & 0 \\ sin(\theta) & 0 & cos(\theta)\end{bmatrix}$ rotate around the x and y axes, respectively.
To rotate through angle $\theta$, around arbitrary axis $x_0\vec{i}+ y_0\vec{j}+ z_0\vec{k}$, use the following strategy:
1) Rotate around the z- axis so that the vector $\vec{v}= x_0\vec{i}+ y_0\vec{j}+ z_0\vec{k}$ is rotated into $\vec{u}= r\vec{i}+ 0\vec{j}+ z_0\vec{k}$ in the xz-plane. That is, rotate so that the y component is 0. Since a rotation preserves length, we must have $r= \sqrt{x_0^2+ y_0^2}$.
2) Rotate around the y- axis so that the vector $\vec{u}$ is rotated into the vector $\vec{w}= 0\vec{i}+ 0\vec{j}+ \rho\vec{k}$ pointing along the z-axis. Again, since a rotation preserves length, we must have $\rho= \sqrt{x_0^2+ y_0^2+ z_0^2}= \sqrt{r^2+ z_0^2}$.
3) Rotate around the z- axis through angle $\theta$
4) Rotate back reversing the rotation in (2).
5) Rotate back, reversing the rotation in (1).
The rotation in (1), since it is about the z-axis, is of the form $\begin{bmatrix}cos(\theta) & - sin(\theta) & 0 \\ sin(\theta) & cos(\theta) & 0 \\ 0 & 0 & 1\end{bmatrix}$. Since the specific angle is not relevant, I am going to call that $\begin{bmatrix}c & -s & 0 \\ s & c & 0 \\ 0 & 0 & 1\end{bmatrix}$.
That is, we must have $\begin{bmatrix}c & -s & 0 \\ s & c & 0 \\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}x_0 \\ y_0 \\ z_0\end{bmatrix}= \begin{bmatrix} r \\ 0 \\ z_0\end{bmatrix}$.
That gives the two equations $cx_0- sy_0= r$ and $sx_0+ cy_0= 0$. From the second equation, $s= -\frac{y_0}{x_0}c$. Putting that into the first equation, $cx_0+ \frac{y_0^2}{x_0}c= c\frac{x_0^2+ y_0^2}{x_0}= c\frac{r^2}{x_0}= r$ so that $c= \frac{x_0}{r}$. From that, $s= -\frac{y_0}{r}$.

That is, the matrix required for the first rotation is $\begin{bmatrix}\frac{x_0}{r} & \frac{y_0}{r} & 0 \\ -\frac{y_0}{r} & \frac{x_0}{r} & 0 \\ 0 & 0 & 1\end{bmatrix}$.
Now, for (2) we need to rotate around the y-axis. Specifically, we need $\begin{bmatrix}c & 0 & -s \\ 0 & 1 & 0 \\ s & 0 & c\end{bmatrix}\begin{bmatrix} r \\ 0 \\ z_0\end{bmatrix}= \begin{bmatrix}0 \\ 0 \\ \rho\end{bmatrix}$.
That gives the two equations $cr- sz_0= 0$ and $sr+ cz_0= \rho$. From the first equation, $s= \frac{r}{z_0}c$. Putting that into the second equation, $\frac{r^2}{z_0}c+ z_0c= \frac{r^2+ z_0^2}{z_0}c= \frac{\rho^2}{z_0}c= \rho$ so that $c= \frac{z_0}{\rho}$ and then $s= \frac{r}{\rho}$.
That is, the matrix giving the rotation in (2) is $\begin{bmatrix}\frac{z_0}{\rho} & 0 & -\frac{r}{\rho} \\ 0 & 1 & 0 \\ \frac{r}{\rho} & 0 & \frac{z_0}{\rho}\end{bmatrix}$.
The rotation in (3), about the z-axis through angle $\theta$ is, of course, $\begin{bmatrix}cos(\theta) & -sin(\theta) & 0 \\ sin(\theta) & cos(\theta) & 0 \\ 0 & 0 & 1\end{bmatrix}$.
The matrix in (4) is the inverse of the matrix in (2) so we are rotating around the y-axis but through the negative angle. Since cosine is an even function and sine is odd, that just changes the sign on the "s" terms. The matrix needed is
$\begin{bmatrix}\frac{z_0}{\rho} & 0 & \frac{r}{\rho} \\ 0 & 1 & 0 \\ -\frac{r}{\rho} & 0 & \frac{z_0}{\rho}\end{bmatrix}$
The matrix in (5) is the inverse of the matrix in (1), so we just need to change the signs on the "s" terms. The matrix is

$\begin{bmatrix}\frac{x_0}{r} & -\frac{y_0}{r} & 0 \\ \frac{y_0}{r} & \frac{x_0}{r} & 0 \\ 0 & 0 & 1\end{bmatrix}$
Putting that all together, to rotate the vector $\begin{bmatrix}a \\ b \\ c \end{bmatrix}$, around the axis vector $\begin{bmatrix}x_0 \\ y_0 \\ z_0\end{bmatrix}$ through angle $\theta$, do the matrix multiplications:
$\begin{bmatrix}\frac{y_0}{r} & -\frac{x_0}{r} & 0 \\ \frac{x_0}{r} & \frac{y_0}{r} & 0 \\ 0 & 0 & 1\end{bmatrix}$ $\begin{bmatrix}\frac{z_0}{\rho} & \frac{r}{\rho} & 0 \\ 0 & 1 & 0 \\ -\frac{r}{\rho} & 0 & \frac{z_0}{\rho}\end{bmatrix}$ $\begin{bmatrix}cos(\theta) & -sin(\theta) & 0 \\ sin(\theta) & cos(\theta) & 0 \\ 0 & 0 & 1\end{bmatrix}$ $\begin{bmatrix}\frac{z_0}{\rho} & -\frac{r}{\rho} & 0 \\ 0 & 1 & 0 \\ \frac{r}{\rho} & 0 & \frac{z_0}{\rho}\end{bmatrix}$ $\begin{bmatrix}\frac{y_0}{r} & \frac{x_0}{r} & 0 \\ -\frac{x_0}{r} & \frac{y_0}{r} & 0 \\ 0 & 0 & 1\end{bmatrix}$ $\begin{bmatrix}a \\ b \\ c \end{bmatrix}$ where, again, $r= \sqrt{x_0^2+ y_0^2}$ and $\rho= \sqrt{x_0^2+ y_0^2+ z_0^2}= \sqrt{r^2+ z_0^2}$.
That is to rotate around the vector $x_0\vec{i}+ y_0\vec{j}+ z_0\vec{k}$, through angle $\theta$.
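The recipe above is easy to check with numpy. In this sketch (my own code), I take the step-(1) matrix with $c=x_0/r$, $s=-y_0/r$ — which is what the equations $cx_0-sy_0=r$, $sx_0+cy_0=0$ give — and verify that the five-matrix product fixes the axis and agrees with Rodrigues' rotation formula:

```python
import numpy as np

def rotate_about_axis(axis, theta):
    x0, y0, z0 = axis
    r = np.hypot(x0, y0)                    # assumes the axis is not the z-axis itself
    rho = np.linalg.norm(axis)
    R1 = np.array([[x0 / r, y0 / r, 0],     # step (1): axis into the xz-plane
                   [-y0 / r, x0 / r, 0],
                   [0, 0, 1]])
    R2 = np.array([[z0 / rho, 0, -r / rho], # step (2): axis onto the z-axis
                   [0, 1, 0],
                   [r / rho, 0, z0 / rho]])
    c, s = np.cos(theta), np.sin(theta)
    Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return R1.T @ R2.T @ Rz @ R2 @ R1       # steps (5)(4)(3)(2)(1)

axis = np.array([1.0, 2.0, 2.0])
theta = 0.7
M = rotate_about_axis(axis, theta)
k = axis / np.linalg.norm(axis)
v = np.array([0.3, -1.0, 0.5])
rodrigues = (v * np.cos(theta) + np.cross(k, v) * np.sin(theta)
             + k * (k @ v) * (1 - np.cos(theta)))
print(np.allclose(M @ axis, axis), np.allclose(M @ v, rodrigues))  # → True True
```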
To rotate so that vector $\vec{u}= a\vec{i}+ b\vec{j}+ c\vec{k}$ is rotated into vector $\vec{v}= x\vec{i}+ y\vec{j}+ z\vec{k}$, we need to rotate around an axis vector perpendicular to both, through the angle they make with each other. Since it is direction that is important here, not length, it is sufficient to assume that both have length 1. If not, just divide each by its length to make that true.
The vector perpendicular to both is, of course, the cross product of the two vectors: $\vec{u}\times\vec{v}= (bz- cy)\vec{i}+ (cx- az)\vec{j}+ (ay- bx)\vec{k}$. That is our "$x_0\vec{i}+ y_0\vec{j}+ z_0\vec{k}$" above: $x_0= bz- cy$, $y_0= cx-az$, and $z_0= ay- bx$.
The angle between the two vectors is given by their dot product: $\vec{u}\cdot\vec{v}= ||\vec{u}||\,||\vec{v}||\cos(\theta)$. Since both vectors have length 1, $\cos(\theta)$ is simply $\vec{u}\cdot\vec{v}= ax+ by+ cz$.
https://brilliant.org/problems/dare-to-use-newton-sums/ | # Totally Newton
Algebra Level 3
If $$\alpha$$ and $$\beta$$ are the roots of $$x^2+x+1=0$$, then find the value of
$$\alpha^{2015}+\beta^{2015}$$.
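(Not part of the original problem page.) A floating-point sanity check, using the fact that the two roots are complex conjugates on the unit circle, so the imaginary parts of the powers cancel:

```python
import numpy as np

alpha, beta = np.roots([1, 1, 1])   # the two roots of x^2 + x + 1 = 0
value = alpha**2015 + beta**2015
print(round(value.real))            # the answer, as an integer
```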
https://rd.springer.com/article/10.1007%2Fs13209-019-0190-z | SERIEs
pp 1–22
# Fewer babies and more robots: economic growth in a new era of demographic and technological changes
• Juan F. Jimeno
Open Access
Original Article
## Abstract
This paper surveys recent research on the macroeconomic implications of demographic and technological changes. Lower fertility and increasing longevity have implications on the age population structure and, therefore, on the balance between savings and investment. Jointly with meagre productivity growth, this implies a low natural rate of interest that conditions the effectiveness of monetary and fiscal policies, especially in a world of high debt. New technological changes (robots, artificial intelligence, automation) may increase productivity growth but at the risk of having disruptive effects on employment and wages. The survey highlights the main mechanism by which demographic and technological changes, considered both individually and in conjunction, affect per capita growth and other macroeconomic variables.
## Keywords
Population ageing Technological progress Innovation Automation Economic growth
J11 O33 O41
## 1 Introduction
In most developed countries, the weight of the working-age population in total population is bound to decrease significantly in the forthcoming decades because of the retirement of the baby boomers, decreasing fertility during the recent past decades, and further increases in longevity. At the same time, there is a new wave of technological changes, built upon the development of robotics and artificial intelligence (AI) that is generating some anxiety about the displacement of human labour with disruptive effects on employment and wages.
Awareness of these trends has led to a revival of the secular stagnation hypothesis (Hansen 1939). Its main insight is characterising a macroeconomic regime under which low labour supply growth, population ageing, poor productivity growth, and high public debt leads to a savings glut, depressed investment, and, hence, a very low natural interest rate and a permanent deficit of aggregate demand that may not be corrected by macropolicies. As for technological changes, the conventional wisdom, focused on factor-augmenting technological progress, concludes that productivity growth is associated with changes in the composition of employment by worker skills but does not affect the long-run level of aggregate employment. This view is being challenged on the presumption that robotisation and AI, rather than being complement to human labour and, hence, increase labour productivity, may lead to a global displacement of workers, regardless of their skills.
When considered together, demographic and technological changes give rise to some conceptual questions regarding the determinants of economic growth, namely, (i) Does population ageing impulse automation and, hence, productivity growth (and, if so, how)?, (ii) Do robotisation and AI have different economic implications from factor-augmenting technological progress?, (iii) To what extent a very low natural rate of interest associated to population ageing and either low productivity growth or disruptive technological changes constrain macrostabilisation policies?, and if so, (iv) What are the policy alternatives to combat a persistent deficit of aggregate demand?
This paper surveys recent literature on macroeconomics and labour economics and provides empirical evidence that have some bearing on these questions. It highlights the main transmission mechanisms involved in the analysis of the macroeconomic implications of demographic and technological changes. Awareness of these transmission mechanisms is important for designing economic policies (both in the macrostabilisation front and with long-run objectives) that could address the big challenges of the new macroeconomic scenario
The structure of the paper is as follows. Section 2 lays out the characteristics of the demographic trends, reviews models that formalise the secular stagnation hypothesis and the determinants of the natural rate of interest, and revisits the empirical evidence on the impact of demographics on GDP, employment, and productivity. Section 3 is devoted to models of technological progress that beyond factor-augmenting technological progress consider the possibility of global displacement of workers (and not only skills) by robots and AI. Section 4 highlights the main general equilibrium effects and the aggregate constraints relevant to understand the consequences of demographic and technological changes jointly considered. Finally, Sect. 5 concludes with general remarks, some of them related to policy implications.
## 2 The revival of the secular stagnation hypothesis
In this, Sect. 1 highlights some basic facts about demographic changes and provides some background on the macroeconomic implications of these changes by, first, laying out a simple model of the natural rate of interest, and, secondly, by looking at the cross-country/time series correlations between some demographic indicators and macroeconomic variables (GDP per capita, employment rate, and productivity growth).
### 2.1 The new demographic scenario
The current demographic scenario is the result of three main developments: (i) baby boomers approaching the retirement age, (ii) a permanent fall in fertility rates, and (iii) a continuous rise of longevity.
Figures 1 and 2 provide some statistics on these factors that shape population forecasts for the rest of this century.1 As a result of the baby boomers approaching retirement ages, the share of older population started to increase in the last decade of the XXth century (Fig. 1). Due to the fall in fertility and the increase in longevity (see Fig. 2), population ageing will continue increasing during the rest of the XXIst century (at a higher rate in the first five decades). Thus, between 2015 and 2050, the ratio of the population aged 50 and above over the population aged 20–49 years of age will increase by 42 pp in Europe (from 0.93 to 1.35) and by 25 pp in the USA (from 0.86 to 1.11), while the share of population 20–69 (a good approximation to working-age population) in total population will decrease by 8 pp in Europe (from 67% to 59%) and by 4 pp in the USA (from 64% to 60%).
### 2.2 Demography and macroeconomics: a first pass
In principle, there are several channels by which demographic changes may have macroeconomic implications. Apart from mechanic scale effects of the size of the population, the age structure is mostly relevant for wealth accumulation. Insofar as, the propensity to consume out of wealth depends on age, changes in fertility, mortality, and the relative size of the retired population which have implications for savings. Moreover, investment also depends on working-age population and productivity growth. Hence, these demographic changes affect the balance between savings and investment that determines the so-called natural rate of interest (i.e. the rate of interest at which savings are equal to investment at full employment). When monetary policy cannot accommodate the natural rate of interest because of the zero lower bound (ZLB) constraint on policy rates, the economy is bound to get trapped into an equilibrium with low growth and high unemployment.
The mechanisms by which demographic change affects savings are well understood (see, for instance, Carvalho et al. 2016). First, there is a deleveraging effect (emphasised by Eggertsson and Mehrotra 2014) associated with the lower size of the young population cohorts and, hence, less demand for credit. Secondly, there is also an increase in aggregate desired savings as adult workers (now the baby boomers) reach the age period at which labour productivity peaks in the working life cycle. Thirdly, as longevity increases, savings also increase, and more so when, because of high public debt, future pension transfers are expected to fall. Moreover, if future productivity growth is expected to be lower than the current one, the increase in desired savings associated with the above-mentioned demographic changes is even higher (Jimeno 2015).
As for investment, lower labour supply growth increases the capital–labour ratio and, hence, to a period of capital depletion and investment slowdown (Hall 2017). As in the case of savings, the effects of demographic changes on investment are amplified by high debt and low expected productivity growth, insofar as these two factors also reduce firm demand for capital, and may potentially originate “stagnation traps” (Benigno and Fornaro 2017): weak aggregate demand depresses growth (through lower consumption and investment), and low expected future growth depresses demand. Aksoy et al. (2019) also identify an additional channel by which population ageing leads to lower investment and growth running through lower innovation.2
### 2.3 The natural rate of interest
Standard estimates of the natural rate of interest (i.e. Laubach and Williams 2003; Holston et al. 2017) show that, both in the USA and in Europe, natural rates have been on a decreasing trend since the 1980s and are currently close to zero or even negative. Unfortunately, as Fiorentini, Galesi, Perez-Quirós, and Sentana (2018, FGPS henceforth) show, there is a large degree of uncertainty in those estimates when the output gap is insensitive to the interest rate and when inflation is insensitive to the output gap, phenomena associated with a situation in which the ZLB binds. To address this problem, FGPS (2018) propose an alternative estimation strategy that delivers a general decline in natural interest rates from the beginning of the twentieth century until roughly the 1960s; thereafter, natural interest rates follow a generalised rise that peaks around the end of the 1980s; and eventually, rates converge to very low or even negative levels over the 2000s, so that currently the natural rates of interest, both in the USA and in Europe, are close to $$-2\%$$.
Moreover, FGPS (2018) attribute the initial rise and subsequent decline of the natural rate of interest that started in the 1960s to three factors: productivity growth, demographic changes, and risk. A productivity slowdown may explain the fall in the natural rate of interest by a decrease in investment, while population ageing and increasing risk may do the same, through a rise in the propensity to save. In their results, changes in the demographic composition account for most of the rise and fall of the natural interest rate in the USA, the euro area, and Canada, while productivity growth is not very significant, and risk explains part of the rise of the real interest rate, and a substantial component of the fall since roughly the 1990s.
#### 2.3.1 Some theoretical background
To identify the main determinants of the natural interest rate and provide some insights on the likelihood of a secular stagnation scenario, I consider a version of Eggertsson and Mehrotra’s (2014) three-period OLG model, extended to include exogenous technical progress, and a public sector accumulating debt in order to implement some income transfers across generations. The focus is mainly on how savings decisions and demand for credit determine the natural interest rate, and to that end, both productivity growth and inter-generational transfers by fiscal policy are important factors to consider.3
Households At each moment, three generations (young, y, middle, m, and old, o) coexist. The size of the young generation at t is denoted by $$N_{t}^{y},$$ and it exogenously grows at rate $$n_{t}.$$ Hence, $$N_{t}^{y}=(1+n_{t})N_{t-1}^{y} =(1+n_{t})N_{t}^{m}=(1+n_{t})(1+n_{t-1})N_{t}^{o}.$$
The young generation is credit constrained, does not produce, and receives no income. Therefore, to consume, they borrow from the middle generation, up to a limit $$D_{t}$$ (inclusive of interest payments).
The middle generation provides labour (inelastically), receives all income (labour earnings and capital income, Y), and saves: (i) to pay for debt accumulated while young, (ii) to buy capital (at price $$p^{k}$$), (iii) to lend to the young generation ($$B_{t}^{m}$$), and (iv) to hold public bonds ($$B_{t}^{g}$$). Capital depreciates at rate $$\delta$$.
There is a public sector that taxes income at rate $$\tau _{t}$$ and spends $$N_{t}^{m}G_{t},$$ to be financed by tax revenues, $$\tau _{t}Y_{t},$$ and (one-period) bonds held by the middle generation, $$N_{t}^{m}B_{t}^{g}$$. Public expenditures are assumed to be spent in providing income to the old generation (as in a Pay-As-You-Go pension system).
The old generation consumes all of its savings (plus interest receipts) and government transfers.4
Thus, the household’s problem is:
\begin{aligned} \underset{\{c_{t}^{y},c_{t+1}^{m},c_{t+2}^{o}\}}{\max }&E_{t}[\log c_{t} ^{y}+\beta \log c_{t+1}^{m}+\beta ^{2}\log c_{t+2}^{o}]\\ \hbox {s.t.}&c_{t}^{y} \le B_{t}^{y}; \ \ \ \ \ \ \ \ \ \ \ \ (1+r_{t})B_{t}^{y}\le D_{t}\\&c_{t+1}^{m}+p_{t+1}^{k}\frac{K_{t+1}}{N_{t+1}^{m}}+(1+r_{t})B_{t}^{y} =(1-\tau _{t+1})\frac{Y_{t+1}}{N_{t+1}^{m}}-(B_{t+1}^{g}+B_{t+1}^{m})\\&c_{t+2}^{o} =p_{t+2}^{k}(1-\delta )\frac{K_{t+1}}{N_{t+1}^{m}} +(1+r_{t+1})(B_{t+1}^{g}+B_{t+1}^{m})+\frac{N_{t+2}^{m}}{N_{t+2}^{o}}G_{t+2} \end{aligned}
where $$\beta$$ is the time discount factor, and r is the real interest rate at which households borrow and lend.
The Euler equation for consumption is:
\begin{aligned} \frac{1}{c_{t}^{m}}=\beta \frac{1+r_{t}}{c_{t+1}^{o}} \end{aligned}
while consumption of the old generation is determined by the corresponding budget constraint, and something similar holds for the young generation, assuming that the debt constraint binds:
\begin{aligned} c_{t}^{y}&=\frac{D_{t}}{1+r_{t}}\\ c_{t}^{o}&=p_{t}^{k}k_{t-1}(1-\delta )+(1+r_{t-1})(B_{t-1}^{g} +B_{t-1}^{m})+\frac{N_{t}^{m}}{N_{t}^{o}}G_{t} \end{aligned}
where $$k_{t-1}=\frac{K_{t-1}}{N_{t-1}^{m}}$$. Thus, savings (per member of the middle generation, excluding capital investment) at time t are given by:
\begin{aligned} -(B_{t}^{m}+B_{t}^{g})= & {} \frac{\beta }{1+\beta }\left[ (1-\tau _{t})y_{t} -D_{t-1}-p_{t}^{k}k_{t}\right] \\&-\frac{1}{1+\beta }\frac{1+n_{t}}{1+r_{t}} G_{t+1}-\frac{1}{1+\beta }\frac{(1-\delta )p_{t+1}^{k}k_{t}}{1+r_{t}} \end{aligned}
while the demand for loans is the sum of the (private) debt of the young generation and the supply of (public) bonds:
\begin{aligned} \frac{N_{t}^{y}D_{t}}{1+r_{t}}+N_{t}^{m}B_{t}^{g} \end{aligned}
Public debt dynamics The accumulation of public debt is straightforward: the supply of public bonds is the sum of the bonds issued in the previous period, interest payments, and the primary deficit to be financed each period:
\begin{aligned} N_{t}^{m}B_{t}^{g}&=N_{t-1}^{m}B_{t-1}^{g}(1+r_{t-1}) +N_{t}^{m}G_{t}-\tau _{t}Y_{t}\\ B_{t}^{g}&=\frac{1+r_{t-1}}{1+n_{t-1}}B_{t-1}^{g}+G_{t} -\tau _{t} \frac{Y_{t}}{N_{t}^{m}} \end{aligned}
Hence, the debt-to-GDP ratio ($$b=N^{m}B^{g}/Y$$) is given by
\begin{aligned} b_{t}^{g}=\frac{1+r_{t-1}}{1+n_{t-1}}\frac{y_{t-1}}{y_{t}}b_{t-1}^{g} +g_{t}-\tau _{t} \end{aligned}
where $$y=Y/N^{m}$$ and $$g=G/y.$$
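The debt-ratio recursion above can be iterated directly. A minimal sketch, with purely illustrative parameter values (one model period is a generation; `gy` denotes gross output-per-capita growth $$y_{t}/y_{t-1}$$):

```python
def debt_ratio_path(b0, r, n, gy, g, tau, periods=10):
    """Iterate b_t = ((1+r)/(1+n)) * (y_{t-1}/y_t) * b_{t-1} + g - tau.

    gy is gross output-per-capita growth y_t / y_{t-1}, so that
    y_{t-1}/y_t = 1/gy. All rates are per model period.
    """
    path = [b0]
    for _ in range(periods):
        b = (1 + r) / ((1 + n) * gy) * path[-1] + g - tau
        path.append(b)
    return path

# With r below the growth rate (roughly n + gy - 1), the ratio converges
# even under a small primary deficit (g > tau).
path = debt_ratio_path(b0=0.6, r=0.02, n=0.01, gy=1.02, g=0.21, tau=0.20)
```

With $$r$$ above the growth rate, the same recursion explodes unless the primary balance turns positive, which is the usual sustainability condition.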
Supply side The production function is Cobb–Douglas and there is exogenous technological progress (indexed by $$A_{t}$$, growing at the exogenous rate $$a_{t}$$). Labour supply is inelastic, so that employment is given by the proportion of the middle generation that is working, and capital is rented out in the same period as when investment takes place:
\begin{aligned} Y_{t}=A_{t}K_{t}^{1-\alpha }L_{t}^{\alpha };\ \ \ \ \ \ \ \ \ \ \ \ \ L_{t} =(1-u_{t})N_{t}^{m} \end{aligned}
where $$u_{t}$$ is the unemployment rate. Normalised by the size of the middle generation, $$N_{t}^{m}$$, the production function can be written as follows
\begin{aligned} y_{t}=A_{t}k_{t}^{1-\alpha }(1-u_{t})^{\alpha } \end{aligned}
Hence, the FOCs for cost minimisation are:
\begin{aligned} w_{t}&=\frac{\alpha y_{t}}{1-u_{t}} \end{aligned}
(1)
\begin{aligned} r_{t}^{k}&=\frac{(1-\alpha )(1-\tau _{t})y_{t}}{k_{t}} \end{aligned}
(2)
the corresponding Euler equation for capital is:
\begin{aligned} \frac{p_{t}^{k}-r_{t}^{k}}{c_{t}^{m}}=\frac{\beta [p_{t+1}^{k} (1-\delta )]}{c_{t+1}^{o}} \end{aligned}
and the arbitrage condition linking the rental rate of capital and the real interest rate is:
\begin{aligned} r_{t}^{k}=p_{t}^{k}-\frac{(1-\delta )p_{t+1}^{k}}{1+r_{t}}\ge 0 \end{aligned}
(3)
For given current and future prices of capital and the depreciation rate, this equation gives the impact of the real interest rate on capital accumulation, assuming away financial distortions that could introduce an additional wedge between the real interest rate and the rental rate of capital. Combining Eqs. (1) to (3) with the production function yields the following relationship:
\begin{aligned} \frac{1}{1+r_{t}}=\frac{p_{t}^{k}-\widetilde{A_{t}}(1-\tau _{t}) w_{t}^{\frac{\alpha }{\alpha -1}}}{(1-\delta )p_{t+1}^{k}} \end{aligned}
(4)
where $$\widetilde{A}=(1-\alpha )\alpha ^{\frac{\alpha }{1-\alpha }} A^{\frac{1}{1-\alpha }}.$$
Wage and price determination Eggertsson and Mehrotra (2014) consider downward nominal wage rigidity, so that wages are given by:
\begin{aligned} W_{t}&=\max \left\{ \overline{W_{t}},P_{t}F_{L}(K_{t},N_{t}^{m})\right\} \\ \overline{W_{t}}&=\gamma W_{t-1}+(1-\gamma )P_{t}F_{L}(K_{t},N_{t}^{m}) \end{aligned}
where P is the aggregate price level, and $$F_{L}(.)$$ is the marginal productivity of labour.
Alternatively, I also consider the possibility of wages being constrained by real rigidities.5 In this case, I assume that the real wage cannot decrease below a certain level, $$\overline{w_{t}},$$ because of the existence of wage norms or imperfections in the labour market, and, hence, the prevailing wage is given by
\begin{aligned} w_{t}=\max \left\{ \overline{w_{t}},F_{L}(K_{t},N_{t}^{m})\right\} \end{aligned}
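The downward nominal wage rigidity rule above can be sketched in a few lines; the parameter values are illustrative, not taken from the paper:

```python
def nominal_wage(W_prev, P, mpl, gamma=0.7):
    """Downward nominal wage rigidity: W_t = max(Wbar_t, P_t * F_L),
    with the wage norm Wbar_t = gamma * W_{t-1} + (1 - gamma) * P_t * F_L.

    W_prev: last period's nominal wage; P: price level; mpl: marginal
    productivity of labour F_L."""
    w_bar = gamma * W_prev + (1 - gamma) * P * mpl
    return max(w_bar, P * mpl)

# When the market-clearing wage P * F_L falls below last period's norm,
# the prevailing wage stays above it; upward adjustment is unconstrained.
w_down = nominal_wage(W_prev=1.0, P=1.0, mpl=0.8)  # norm binds
w_up = nominal_wage(W_prev=1.0, P=1.0, mpl=1.2)    # flexible upward
```

The real-rigidity variant replaces the norm $$\overline{W_{t}}$$ with a fixed real floor $$\overline{w_{t}}$$, i.e. `max(w_bar_real, mpl)` in real terms.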
Monetary policy Monetary policy is determined by a Taylor rule with a zero lower bound on the policy nominal interest rate, while the Fisher equation relates nominal and real interest rates, so that, respectively:
\begin{aligned} 1+i_{t}= & {} \max \left\{ 1,(1+i_{t}^{*})\left( \frac{\Pi _{t}}{\Pi ^{*} }\right) ^{\phi _{\pi }}\right\} \\ 1+r_{t}= & {} \frac{1+i_{t}}{\Pi _{t+1}}; \ \ \ \Pi _{t+1}=\frac{P_{t+1}}{P_{t}} \end{aligned}
where $$i_{t}^{*}$$, $$\Pi ^{*}$$, and $$\phi _{\pi }$$ are policy parameters.
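The policy block can be written down directly. A minimal sketch of the truncated Taylor rule and the Fisher equation, with illustrative policy parameters:

```python
def policy_rate(pi, i_star=0.02, pi_star=0.02, phi_pi=1.5):
    """Taylor rule with a zero lower bound on the gross nominal rate:
    1 + i_t = max(1, (1 + i*) * (Pi_t / Pi*)^phi_pi),
    where Pi_t is gross inflation 1 + pi. Returns the net rate i_t."""
    gross = max(1.0, (1 + i_star) * ((1 + pi) / (1 + pi_star)) ** phi_pi)
    return gross - 1.0

def real_rate(i, pi_next):
    """Fisher equation: 1 + r_t = (1 + i_t) / Pi_{t+1}."""
    return (1 + i) / (1 + pi_next) - 1.0

# At target inflation the rule returns i*; under deep deflation the ZLB
# binds (i = 0) and expected deflation pushes the real rate up.
```

This is exactly the mechanism behind the secular stagnation regime discussed below: once the ZLB binds, expected deflation raises the real rate above the natural rate.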
Full employment equilibrium Consider first the case in which neither wage rigidities nor the ZLB are binding. In this case, the economy is at full employment, and the real interest rate, $$r_{t}^{f},$$ is determined by the condition equating supply and demand for loans, i.e.:
\begin{aligned} 1+r_{t}^{f}= & {} \frac{1+i^{*}}{\Pi ^{*}}\nonumber \\= & {} \frac{(1+\beta )\left[ (1+n_{t})d_{t}+(1-\delta )p_{t+1}^{k}\frac{k_{t}}{y_{t}}\right] +(\tau _{t+1} +b_{t+1}^{g} )(1+n_{t})\frac{y_{t+1}}{y_{t}}}{\beta \left[ \alpha (1-\tau _{t}) -d_{t-1} \frac{y_{t-1}}{y_{t}}-b_{t}^{g}\right] }\nonumber \\ \end{aligned}
(5)
where $$d=D/y$$. This equation identifies the determinants of the natural interest rate ($$r_{t}^{f}$$), given paths for the expected price of capital ($$p_{t+1}^{k}$$) and output per capita growth ($$\frac{y_{t+1}}{y_{t}}$$), which are endogenous variables of the model. However, with a long-run perspective under which exogenous technological change determines both the relative price of capital and productivity growth, it provides some interesting insights:
• The population growth rate: as population growth ($$n_{t}$$) falls, the natural interest rate falls, since there are fewer young people demanding credit. Notice, however, that there is another effect of population growth on the natural interest rate: as population growth falls, expected transfers to the old generation also fall, since the relative size of the middle generation available to finance those transfers will be smaller. This implies lower future income for the old generation and, thus, an increase in savings that pushes down the natural interest rate even further.
• (Current and next-period) Productivity growth rates: a higher current productivity growth rate, $$a_{t}$$, increases savings since it allows the middle generation to pay for its debt accumulated while young using a lower fraction of its income. Hence, disposable income available for savings is higher, and the natural rate is lower. Higher next-period productivity growth, $$a_{t+1}$$, decreases savings since expected transfers to the older generation are higher, for given tax rates and debt ratios, and, thus, the natural interest rate is higher.
• The future value of capital: a decrease in the price of capital or a higher depreciation rate decrease the equilibrium real interest rate, since future expected income of the middle generation is lower, and, hence, its savings are higher.
• Private debt: the lower the demand for credit by the young generation, $$d_{t}$$, the lower the equilibrium real interest rate. Also, the lower the private debt accumulated by the middle generation while young, the higher savings are, and, thus, the lower is the natural rate.
• (Current and next-period) Tax rates and public debt ratios: a higher current tax rate crowds out savings by lowering disposable income, and, hence, increases the natural rate. A higher next-period tax increases expected future income of the old generation and, thus, it also crowds out savings and increases the natural rate of interest. As for the debt ratios, the current one increases the demand for loans, while the future one, increases expected transfers to the old generation, so that high debt ratios push the natural rate up.6
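Equation (5) can be evaluated numerically to illustrate these comparative statics. The parameter values below are purely illustrative (one period is a generation), not a calibration from the paper:

```python
def natural_gross_rate(beta, alpha, n, d, d_lag, tau, tau_next,
                       b, b_next, pk_next, k_y, gy, delta):
    """Eq. (5): gross natural rate 1 + r^f from loan-market clearing.

    gy = y_{t+1}/y_t (assumed equal to y_t/y_{t-1}), k_y = k_t/y_t,
    d and b are private-debt and public-debt ratios."""
    numerator = ((1 + beta) * ((1 + n) * d + (1 - delta) * pk_next * k_y)
                 + (tau_next + b_next) * (1 + n) * gy)
    denominator = beta * (alpha * (1 - tau) - d_lag / gy - b)
    return numerator / denominator

base = dict(beta=0.5, alpha=0.7, d=0.1, d_lag=0.1, tau=0.2, tau_next=0.2,
            b=0.2, b_next=0.2, pk_next=1.0, k_y=0.5, gy=1.2, delta=0.8)
# Lower population growth lowers the natural rate (first bullet above):
r_high_n = natural_gross_rate(n=0.3, **base)
r_low_n = natural_gross_rate(n=0.0, **base)
```

Since $$n_{t}$$ enters only the numerator with a positive coefficient, the comparative static is immediate: `r_low_n < r_high_n`. The other bullets can be checked the same way by perturbing one argument at a time.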
The secular stagnation regime There is, however, an alternative equilibrium under which, either because of a deleveraging shock or because of declining population and productivity growth, the natural rate of interest is negative, and the ZLB prevents the policy interest rate from falling to accommodate it. When this happens, the real interest rate is higher than the natural rate, and output and employment are below their full employment levels. In this regime, wage rigidities are important for the adjustment of policies that try to close the gap between the actual and the natural rates of interest by increasing inflation expectations. With downward nominal wage rigidity, these policies reduce real wages and, hence, savings, while increasing capital profitability and investment demand. Alternatively, if real wages are rigid downwards, increasing inflation expectations has a lower effect on the real rate and, hence, on output and employment.7
### 2.4 Demographic change and economic growth
Apart from its impact on the natural interest rate, there are other transmission mechanisms by which demographic change may have an impact on per capita GDP growth. One is the mechanical composition effect of the diminishing weight of the working-age population (as the employment rate falls, GDP per capita also falls). Another is through labour productivity growth, either by changing the capital–labour ratio or by affecting total factor productivity (TFP) growth.
The empirical evidence on the effects of demographic change on productivity growth is steadily increasing. While Acemoglu and Restrepo (2017a) argue that population ageing, by giving an impulse to automation, may result in higher GDP per capita (insofar as the higher productivity of new machines compensates for the negative effect of a lower employment rate), Aksoy et al. (2019), by stressing the importance of the age structure of the population for innovation, conclude that in forthcoming decades, GDP per capita growth will be lower. In this regard, Eggertsson et al. (2018) also argue that the positive effect found by Acemoglu and Restrepo (2017a) vanished during the 2008–2015 period, when the ZLB was binding and the economy was, arguably, in a secular stagnation regime.
Table 1
Population ageing and growth of GDP per capita

| Dependent variable: GDP per capita (annual rate of growth) | 1950–2015 | 1990–2015 | OECD 1990–2015 |
|---|---|---|---|
| Change in Old ratio | 0.386*** | 0.538*** | − 0.018 |
| | (0.134) | (0.165) | (0.096) |
| #Observations | 8657 | 4162 | 846 |
| #Countries | 168 | 168 | 34 |

Old ratio is the proportion of population aged 50 and more over the population aged 20–49. Country fixed effects included. ***p-value < 0.01, **p-value < 0.05, *p-value < 0.1
Table 2
Population ageing and employment rate

| Dependent variable: Change in employment rate | 1950–2015 | 1990–2015 | OECD 1990–2015 |
|---|---|---|---|
| Change in Old ratio | 0.020 | 0.046** | 0.023 |
| | (0.020) | (0.023) | (0.029) |
| #Observations | 7761 | 4087 | 846 |
| #Countries | 166 | 166 | 34 |

Old ratio is the proportion of population aged 50 and more over the population aged 20–49. Country fixed effects included. ***p-value < 0.01, **p-value < 0.05, *p-value < 0.1
Just to illustrate the main facts, and following Acemoglu and Restrepo (2017a), Tables 1, 2, 3, and 4 present measures of the statistical association between population ageing (measured as the proportion of population aged 50 and more over the population aged 20–49), on the one hand, and per capita GDP growth, employment, and TFP and labour productivity growth, on the other. They are obtained by linear regressions with annual data since 1950 for 168 countries.8 When considering all the countries, either for the whole sample period (1950–2015) or for the most recent one (1990–2015), population ageing is associated with increases in GDP per capita that are brought about by both higher employment rates and, especially, higher productivity growth.9 However, when considering only OECD countries during the most recent period (1990–2015), arguably the countries and the period where automation has proceeded more rapidly, there is no statistically significant association between population ageing, on the one hand, and GDP per capita growth, employment, and productivity growth. These results cast doubts on the extent to which automation is driving the observed co-movements between population ageing and macroeconomic variables.
Table 3
Population ageing and labour productivity growth

| Dependent variable: Labour productivity (annual rate of growth) | 1950–2015 | 1990–2015 | OECD 1990–2015 |
|---|---|---|---|
| Change in Old ratio | 0.409*** | 0.487*** | − 0.055 |
| | (0.126) | (0.150) | (0.064) |
| #Observations | 7761 | 4087 | 846 |
| #Countries | 166 | 166 | 34 |

Old ratio is the proportion of population aged 50 and more over the population aged 20–49. Country fixed effects included. ***p-value < 0.01, **p-value < 0.05, *p-value < 0.1
Table 4
Population ageing and Total Factor Productivity (TFP)

| Dependent variable: TFP (annual rate of growth) | 1950–2015 | 1990–2015 | OECD 1990–2015 |
|---|---|---|---|
| Change in Old ratio | 0.346*** | 0.379*** | 0.012 |
| | (0.105) | (0.134) | (0.060) |
| #Observations | 5662 | 2784 | 846 |
| #Countries | 112 | 112 | 34 |

Old ratio is the proportion of population aged 50 and more over the population aged 20–49. Country fixed effects included. ***p-value < 0.01, **p-value < 0.05, *p-value < 0.1
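The country-fixed-effects specification behind these regressions can be sketched with a within (demeaning) estimator. The panel below is synthetic: the coefficient, dimensions, and noise are invented for illustration and do not reproduce the estimates in the tables:

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_years = 30, 25

# Synthetic panel: growth_it = beta * d_oldratio_it + country_fe_i + noise
beta_true = 0.4
fe = rng.normal(0.0, 1.0, n_countries)[:, None]
d_old = rng.normal(0.1, 0.5, (n_countries, n_years))
growth = beta_true * d_old + fe + rng.normal(0.0, 0.3, (n_countries, n_years))

# Within (fixed-effects) estimator: demean each country's series,
# then run OLS on the demeaned data.
x = d_old - d_old.mean(axis=1, keepdims=True)
y = growth - growth.mean(axis=1, keepdims=True)
beta_hat = (x * y).sum() / (x * x).sum()
```

Demeaning by country removes the fixed effects exactly, which is why the within estimator recovers `beta_true` up to sampling noise.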
## 3 The new technological era: searching for robots and their macroeconomic implications
Low population growth and population ageing do not necessarily lead the economy to a secular stagnation regime. If trend productivity growth remains high, the balance of savings and investment at full employment may still deliver a conventional macroeconomic equilibrium with the standard properties (see Eq. 5). Macroeconomic models typically consider factor-augmenting technological progress that leads to higher economic growth without disruptive effects on employment and wages. The question, then, is whether, in the demographic transition that is about to happen, the economy will enjoy productivity growth that is both sufficient and of the same nature as in previous episodes of rapid technological change.
By now, it is pretty clear that the new wave of technological changes is coming mostly from developments in robotics and AI. For definitional purposes, a robot is (International Federation of Robotics 2017):
“An automatically controlled, reprogrammable, multipurpose manipulator programmable in three or more axes, which may be either fixed in place or mobile for use in industrial automation applications”
There are industrial robots (used in manufacturing) and service/professional robots (used for non-commercial tasks, usually by lay persons). Implicit in the definition, there is the assumption that robotisation and automation are closely equivalent concepts, as both refer to the development by which an industrial robot (“a machine”) is able to fulfil productive tasks previously performed by human labour.
Data on the stock of industrial robots, so defined, are provided by the World Robotics Industrial Robots (WRIB) data set, constructed by aggregating data from national robot associations and robot suppliers. This data set covers nearly all industrial robot suppliers worldwide (around 90% of market share). According to these data, in 2016, there were around 300,000 industrial robots in the world, and the stock of industrial robots was expected to increase at an average annual rate of 15% during 2016–2020.
By combining the WRIB data set with EU KLEMS, I compute the ratio of industrial robots to employment. Figure 3 plots this ratio for selected countries during the recent period (2000–2015).10 The cross-country variation is very much related to the sectoral composition of output, since robots are more prevalent in manufacturing. As for the time evolution, it seems that automation is progressing more rapidly in Europe (mostly, in Germany) than in the USA.11 In any case, it is noteworthy that the penetration of robots so far is fairly low and, therefore, the main consequences of automation are still to be revealed.
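As a sketch, the ratio plotted in Fig. 3 amounts to merging the two sources by country and year. The column names and figures below are invented stand-ins, not the actual WRIB or EU KLEMS schemas:

```python
import pandas as pd

# Hypothetical stand-ins for the WRIB robot-stock and EU KLEMS
# employment tables; names and numbers are assumptions for illustration.
robots = pd.DataFrame({
    "country": ["DEU", "USA"], "year": [2015, 2015],
    "robot_stock": [183_000, 162_000]})
employment = pd.DataFrame({
    "country": ["DEU", "USA"], "year": [2015, 2015],
    "employees_thousands": [40_000, 140_000]})

merged = robots.merge(employment, on=["country", "year"])
# Robots per thousand employed persons: the penetration measure.
merged["robots_per_1000"] = (
    merged["robot_stock"] / merged["employees_thousands"])
```

An inner merge on country and year silently drops unmatched rows, so in practice one would check coverage of the two panels before computing the ratio.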
As for AI, defined as "the capability of a machine to imitate intelligent human behaviour" or "an agent's ability to achieve goals in a wide range of environments", the possibilities are even wider than for automation. AI makes it more plausible to automate an ever-increasing number of tasks previously performed by human labour and also changes the process by which new ideas and technologies are created, helping to solve complex problems. Thus, by scaling up creative efforts, AI could lead to singularities under which there is unbounded machine learning, and, therefore, unbounded growth (Aghion et al. 2017). While developments in AI leading to more automation are reflected in the statistics on the penetration of robots in production (presented above), the new developments affecting the creation of new tasks and technologies are, by their own nature, more difficult to measure with state-of-the-art statistical methods.
### 3.1 Models of automation: a review
The conventional wisdom about the economic consequences of technological changes boils down to two main conclusions: (i) over the long-run GDP per capita, labour productivity, and TFP all grow at the same rate, and (ii) over the same period, there are no significant effects of technological progress on employment, although its sectoral and occupational compositions do change. This is basically the result of considering technological changes as factor-augmenting, assuming elasticities of substitution between labour and capital that are not too low, and looking at the evolution of employment and wages at the balanced-growth path. Autor and Salomons (2018) review the evidence thoroughly and show that indeed, technological changes affect the sectoral and occupational composition of employment (with job polarisation and increase in wage inequality being distinguishing features of the most recent experience in this regard) but without altering employment and unemployment equilibrium rates.
When analysing automation, the dominant approach is the task-based framework (Acemoglu and Restrepo 2018a) that leads to different transmission mechanisms from the conventional factor-augmenting approach that focuses instead on the skill contents of technological change.12 Under the task-based framework, output is produced by a combination of tasks that can be performed by capital (equipment/machines/robots) and labour, either in combination or in isolation.
The new approach also encompasses three different effects of technological change. One is the displacement effect that decreases labour demand as human labour is replaced by machines, and capital intensity increases. This effect is in part compensated by the productivity effect generated by the cost-reducing consequences of new technologies (as under factor-augmenting technological change). Finally, there is the reinstatement effect, namely, the creation of new tasks and new goods and services that require human labour. However, the transmission mechanisms by which those effects take place are somewhat different from those associated with conventional factor-augmenting technological progress.
Hence, to implement this framework one has to take a stance on which tasks are performed by which inputs (the different forms of capital and labour), the degree of complementarity between capital and labour, how new tasks are invented, and, finally, how new machines/robots are produced and used in the production of other goods and services. As a result, there are alternative views on how robots and AI should be modelled for economic analysis.
First, new machines could be considered the combination of capital, code, and skilled labour, so that robots will be as if human skills/intelligence were embedded in capital equipment. Alternatively, one can think of robots/AI as code embodied in capital that reproduces itself and is able to solve problems without any need to be “intelligent” in a human sense.13
There is a second issue regarding which tasks could be performed by the new machines, beyond whether they become "capital with human skills" or tools that are able to perform tasks without the need of replicating human skills. One view is that flexibility, judgement, and common sense are difficult to automate (Polanyi's paradox), and, hence, workers will remain more productive than machines in tasks requiring versatility, adaptability, and human contact and interactions. Another view is that while high-level reasoning requires few computational resources to replicate, low-level sensorimotor skills require many more (Moravec's paradox). Hence, it will mostly be low-skilled/manual tasks that continue to be performed by workers.
In any case, be the displacement effect of technological changes concentrated on high-skilled or on manual tasks, another issue is to what extent there will be full substitution of workers by machines or, instead, complementarities between machines and human labour to be exploited. Again, two alternative views emerge. One is that machines will never be able to perform all the tasks needed for the production of goods and services and, hence, there will always be jobs to be filled by workers. Another contemplates full automation (a singularity) made possible by regularising the environment, so that tasks can be fulfilled by machines alone, without the need for the flexibility, judgement, and common sense embedded in (some) workers. A similar outcome might arise from developing machines that infer tacit rules from context, abundant data, and applied statistics, so that by learning they become able to fulfil any task.
Given all the uncertainties, it is not surprising that studies trying to quantify the number of jobs "at risk of being automated" offer a wide range of estimates (see, for instance, Arntz et al. 2017; Frey and Osborne 2017). As for observed effects on employment and wages, there is also no consensus: Graetz and Michaels (2018) find that increased robot use contributed 0.37 pp to annual labour productivity growth, with nil employment effects, but reducing low-skilled workers' employment share. Acemoglu and Restrepo (2017b) find that one more robot per thousand workers reduces the employment to population ratio by 0.18–0.34 pp and wages by 0.25–0.5%. Lordan and Neumark (2018) conclude that increasing the minimum wage significantly decreases the share of automatable employment held by low-skilled workers, and increases the likelihood that low-skilled workers in automatable jobs become unemployed. Finally, Dauth et al. (2017) find that every robot destroys two manufacturing jobs (23% of the overall decline in manufacturing employment), mostly for entrants, but has no displacement effect on incumbents, so that, in the aggregate, there are no employment losses. They also find that robots raise productivity but not wages.
### 3.2 Analysing the macroeconomic effects of automation
An early attempt at introducing automation in macroeconomic and growth models is by Benzell et al. (2015). They envisage robots as the combination of code and capital goods, so that high-tech workers produce code, while low-tech workers are employed in tasks in the production of services.14 Denoting goods by $$Y_{t}$$, services by $$S_{t}$$, low-tech workers by $$G_{t}$$, and high-tech workers by $$H_{t}$$ (with $$H_{S}$$ employed in S, and $$H_{A}$$ in the production of code, A), the stock of code accumulates according to:
\begin{aligned} A_{t}=\delta A_{t-1}+zH_{A_{t}} \end{aligned}
Under this specification of technology, the displacement effect is most evident. The introduction of new code reduces the compensation of low-skilled workers and savings, and, hence, both investment and the stock of capital. This mechanism has two relevant implications: one is that it generates boom/bust technological cycles; another is that machines might lead to an "immiseration scenario" under which capital is crowded out and the labour income share falls substantially. Thus, despite the large productivity gains associated with robotics and AI, the possibility of a stagnation equilibrium similar to the secular stagnation equilibrium cannot be disregarded.
A more sanguine view of the displacement effect is in Lin and Weise (2018) who envisage robots as plain substitutes of labour in a DSGE framework. They assume that the inputs are capital and an aggregation of robots and human labour. Thus, if $$\widetilde{k}_{t}$$ is utilisation-adjusted traditional capital, $$\widetilde{a}_{t}$$ is utilisation-adjusted robots, and $$n_{t}$$ human labour input, production of intermediate input i, y(i), is given by:
\begin{aligned} y(i)= & {} z_{t}\left[ \theta _{k}\widetilde{k}_{t}(i)^{\alpha }+(1-\theta _{k} )l_{t}(i)^{\alpha }\right] ^{\frac{1}{\alpha }} \\ l_{t}(i)= & {} \left[ \theta _{a}\widetilde{a}_{t}(i)^{\phi }+(1-\theta _{a} )n_{t}(i)^{\phi }\right] ^{\frac{1}{\phi }} \\ \end{aligned}
where z is a productivity shift parameter.
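The role of the elasticity parameter $$\phi$$ in the inner robot–labour aggregator can be illustrated numerically. Parameter values here are assumptions, not Lin and Weise's calibration:

```python
def labour_input(a, n, theta_a=0.3, phi=0.5):
    """Inner CES aggregator l = [theta*a^phi + (1-theta)*n^phi]^(1/phi)
    combining robots a and human labour n. phi -> 1 gives perfect
    substitutes; phi -> -inf gives perfect complements (Leontief)."""
    return (theta_a * a ** phi + (1 - theta_a) * n ** phi) ** (1.0 / phi)

# With abundant robots (a = 2) and scarce labour (n = 0.5), the effective
# labour input is much larger when substitution is easy (phi near 1)
# than when robots and workers are strong complements (phi << 0).
l_subst = labour_input(a=2.0, n=0.5, phi=0.9)
l_compl = labour_input(a=2.0, n=0.5, phi=-2.0)
```

This is why their results (ii) and (iii) hinge on $$\phi$$: the closer robots are to perfect substitutes for labour, the weaker the output–employment link becomes after a shock.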
From here, they focus on the implications for business cycles and monetary policy. Their main results are that (i) a fall in the relative price of robots causes labour's share to fall, (ii) responses to $$z_{t}$$ and monetary policy shocks depend on the elasticity of substitution in the aggregation of robots and human labour, $$\phi$$, and (iii) the presence of robots weakens the correlation between output and employment and, hence, increases the volatility of output, inflation, and employment.
A more comprehensive framework for the consequences of robots substituting human labour in production requires modelling the generation of new tasks. This is what Acemoglu and Restrepo (2018a, 2019) have accomplished in a series of recent papers. They assume that tasks are produced by combining either labour or capital with a task-specific intermediate input q(i). Tasks $$i>I,$$ with $$I\in [N-1,N],$$ can only be produced by labour, while tasks $$i\le I$$ can be automated and produced either by capital or labour. Thus,
\begin{aligned} y(i)= & {} B\left[ \eta q(i)^{\frac{\varsigma -1}{\varsigma }}+(1-\eta ) \left( \gamma (i)l(i)\right) ^{\frac{\varsigma -1}{\varsigma }}\right] ^{\frac{\varsigma }{\varsigma -1}}, \ \ \ \ i>I\\ y(i)= & {} B\left[ \eta q(i)^{\frac{\varsigma -1}{\varsigma }}+(1-\eta )(k(i) +\gamma (i)l(i))^{\frac{\varsigma -1}{\varsigma }}\right] ^{\frac{\varsigma }{\varsigma -1}}, \ \ \ \ i\le I \end{aligned}
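A stripped-down numerical illustration of the displacement effect in a task framework follows; this is a simplification for intuition (cost minimisation task by task at fixed factor prices), not the Acemoglu–Restrepo model itself, and all parameter values are assumptions:

```python
import numpy as np

def labour_cost_share(I, W=1.0, R=0.6, n_tasks=1000):
    """Tasks on a grid [0, 1]; tasks i <= I are automatable and use
    capital whenever it is cheaper (R < W/gamma(i)); the rest must use
    labour. gamma(i) = 0.5 + i, so labour's comparative advantage rises
    with the task index. Returns labour's share of total task costs."""
    tasks = np.linspace(0.0, 1.0, n_tasks)
    gamma = 0.5 + tasks
    labour_cost = W / gamma          # unit cost if the task uses labour
    automated = (tasks <= I) & (R < labour_cost)
    cost = np.where(automated, R, labour_cost)
    return cost[~automated].sum() / cost.sum()

# Expanding the automated set [0, I] lowers the labour share
# (displacement), even though total cost falls (productivity effect).
share_low_I = labour_cost_share(0.2)
share_high_I = labour_cost_share(0.6)
```

In the full model, the reinstatement effect works against this: new labour-intensive tasks are added at the top of the index, which pushes the labour share back up.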
In their framework, besides the conventional productivity effect, there are two main driving forces. First, there is automation that implies that robots displace workers, and, secondly, there is the creation of new complex tasks where humans have some comparative advantage (the so-called reinstatement effect). Under this framework, there are two crucial elements. One is how new tasks are created and whether they are performed by human labour or by machines. Another is the mechanism by which the economy converges to a balanced-growth path, if it does. Acemoglu and Restrepo (2018a) assume that the creation of new tasks is endogenous, depends on resources devoted to innovation, and new tasks are initially performed by human labour, and consider a balanced-growth path under which the set of automated tasks grows at the same rate as the set of tasks performed by human labour. They find that depending on innovation, there could be periods in which automation runs ahead of the creation of new complex tasks, but they eventually self-correct with the economy returning to a situation where employment and the labour share remain invariant to the pace of automation.
Aghion et al. (2017) argue that the existence and characteristics of the balanced-growth path in this type of models are the consequences of the so-called “Baumol’s cost disease”, namely, relative price adjustments resulting in growth being determined by the production factor whose productivity increases by less (or, as they put it, by growth “constrained not by what we are good at but rather by what is essential and yet hard to improve”). They show that this mechanism generates sufficient conditions for a balanced-growth path to exist, even with nearly complete automation.
This literature review suggests that there are good reasons to believe that new technological changes, based on the development of robotics and AI, may have macroeconomic implications beyond those considered by conventional analysis. Rather than focusing on skills and the complement/substitution relationship between labour and new capital goods, it may be more relevant to consider a task-based framework under which worker displacement may occur for all skills, and worker reinstatement depends on innovation rather than on training and human capital accumulation. Moreover, given the disruptive effects on employment and wages that this technological change may have, the higher productivity growth brought by automation might not translate into higher long-run growth. The next section presents some new results that illustrate these two claims.
## 4 Robotics, artificial intelligence, and population ageing
Under the task-based framework for the analysis of technological changes, there are several transmission mechanisms through which population ageing might condition innovation, automation, and growth. One arises from assuming that workers of different ages have skills that differ in their risk of being automated. This generates an interesting transmission channel by which demographic changes translate into the creation of new tasks and automation and, hence, affect growth and the labour share. For instance, Acemoglu and Restrepo (2018b) argue that population ageing fosters automation because middle-aged workers have skills used in tasks that are more easily automated. Another interesting hypothesis is that population ageing alters consumption baskets, affecting the relative prices of goods and, hence, changing the incentives for innovation and for the automation of tasks performed in production.
Nevertheless, to address the macroeconomic implications of the combination of population ageing and automation, it is necessary to build a fully fledged general equilibrium model in which innovation, automation, capital, and labour demand are all endogenously determined, and the resource constraints of the economy are precisely spelled out. Basso and Jimeno (2018) carry out this type of exercise in an economy with four main building blocks: (i) a goods production sector, where producers aggregate intermediate goods/tasks, and a continuum of intermediate goods firms that employ a composite of goods from all firms (inputs), capital, and either robots or labour; (ii) a robot production sector that transforms final goods into robots and sells them to intermediate producers; (iii) an innovation sector that generates new tasks (product creation) and develops procedures so that robots can be used in existing tasks; (iv) households with a life cycle structure (worker, retired), supplying labour (workers), accumulating assets, and consuming a composite of all varieties produced.
This model contains most of the transmission mechanisms by which demographic and technological changes affect the economy. First, due to the life cycle structure, population ageing has an impact on savings (and, hence, on the equilibrium interest rate), as stressed by the literature on the revival of secular stagnation. Secondly, since it adopts the task-based framework for production, it also embeds the productivity, displacement, and reinstatement effects highlighted by the recent literature on technological changes. Thirdly, by modelling endogenous growth through innovation and automation, and by making explicit the relative profitability of both activities and the resources they employ, it gives rise to a trade-off (static and dynamic) between innovation and automation that is often neglected. This trade-off arises from two constraints. One is that the more resources are devoted to automation, the fewer resources are available for innovation. The other is that if innovation slows down, automation eventually slows down as well, since tasks must be invented before they can be automated. Finally, it gives some scope to the possibility that population ageing makes innovation more difficult, given the special relevance of young workers' labour supply for this sector.
Needless to say, the results of simulations carried out with a calibrated version of the Basso and Jimeno (2018) model are contingent on several assumptions regarding the specification of the production, innovation, and automation sectors. However, under the standard assumptions required to make the economy converge to a balanced-growth path, two main conclusions can be drawn from their analysis. One is that a reduction in labour supply decreases per capita growth in the long run. The intuition (and analytical result) is that, as the economy converges to a new balanced-growth path, the shares of the labour-intensive and the automated sectors in final production must remain constant, which means that the stock of robots and labour supply must grow at the same rate, a rate that is lower due to the fall in fertility. The second set of (numerical) results is obtained by simulating the demographic transition in the USA and in the main European countries, as forecasted by the Population Division of the United Nations. Initially, as interest rates fall due to the increase in savings brought about by the fall in fertility and the rise in longevity, there are more resources to invest in capital accumulation, automation, and innovation, and, hence, the growth rate increases. However, as labour supply declines while the demographic transition progresses, there is less innovation (because of the labour-supply effect on R&D) and, hence, fewer new tasks are created and, eventually, less automation (since the introduction of robots requires new tasks to be created). Therefore, the growth rates of consumption, investment, and GDP eventually decrease (see Fig. 4).
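The long-run mechanism can be illustrated with a deliberately stylized toy model (my own sketch, far simpler than Basso and Jimeno's calibrated model): semi-endogenous innovation A' = delta * L * A^phi with phi < 1, whose balanced-growth rate is n/(1-phi), so slower labour-force growth n translates into slower long-run per-capita growth. All parameter values are arbitrary:

```python
def long_run_innovation_growth(n, phi=0.5, delta=0.01, T=3000):
    """Simulate A' = delta * L * A**phi with the labour force L growing at
    rate n per period; return the growth rate of A near the balanced path."""
    L, A = 1.0, 1.0
    g = 0.0
    for _ in range(T):
        A_next = A + delta * L * A**phi
        L *= 1.0 + n
        g, A = A_next / A - 1.0, A_next
    return g

g_high = long_run_innovation_growth(n=0.010)  # theory: about 0.010/(1-0.5) = 0.02
g_low = long_run_innovation_growth(n=0.005)   # theory: about 0.01
print(g_high, g_low)
```

Halving labour-force growth roughly halves long-run innovation-driven growth, which is the flavour of the result described above, without any of the model's life cycle or automation structure.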
## 5 Concluding remarks
The macroeconomic implications of demographic changes are relatively well known. After all, there is a long tradition of overlapping generations models in which the standard transmission mechanisms (mostly through changes in savings and investment) have been extensively analysed. Extending these models to consider other likely effects of population ageing is now enjoying a revival in economic research. For instance, implications for the effectiveness of traditional macrostabilisation policies are the focus of many applications of state-of-the-art models (see, for instance, Carvalho et al. 2016, for monetary policy, and Basso and Rachedi 2017, for fiscal policy). The consequences for the inter-generational redistribution needs associated with population ageing are also a top item in the research agenda regarding pensions and the design of social policies. What is less well understood are the macroeconomic implications of the new wave of technological changes associated with robotics and artificial intelligence.
Shifting the analysis from factor-augmenting technological change (which constitutes the conventional wisdom) to a task-based framework in which replacement, productivity, and reinstatement effects can all take place simultaneously provides new insights on how robotics and artificial intelligence may impinge upon the economy. This paper has surveyed recent developments in the macroeconomic analysis of demographic and technological changes to provide some insights on the nature of the uncertainties that arise from the interaction between demography and technology.
We draw two main messages. First, by revisiting recent results from the application of the task-based framework to the analysis of technological change, we identify three main sources of uncertainty about its macroeconomic implications: (i) the degree to which new machines and human labour will be complements or substitutes in the production of existing tasks embedded in the production of goods and services, (ii) the speed at which tasks performed by human labour could be automated, and (iii) the rate at which new tasks are created. Secondly, by looking at the effects of technological change under the task-based framework occurring at the same time as population ageing, we conclude that, even though population ageing creates incentives for automation, per capita growth is likely to slow down during the demographic transition that most countries are going through.
Apart from the policy implications for macrostabilisation policies already mentioned, there are many other areas of economic policy that will be affected by these demographic and technological changes. Together with negative effects on per capita growth, the new wave of technological changes may bring a decline in labour shares at a time when conventional social policies, which mostly channel taxes from the young to the old, will require more resources. This will probably require a full reconsideration of fiscal and transfer systems. Nevertheless, it could be a good idea to delay that reconsideration until we really know what is going on with robotics and AI.
## Footnotes
1. Data from the Population Division of the United Nations cover the period 1950–2100, with forecasts based on some assumptions for the years after 2015.
2.
3. This section draws from Jimeno (2015).
4. For simplicity and without loss of generality, I leave aside mortality risk and changes in retirement age that affect the relative size of the old cohort.
5. See Shimer (2012) on the relevance of real wage rigidities in generating jobless recoveries.
6. Under a specification of the utility function allowing for precautionary savings, there will be an additional negative effect on savings from increasing uncertainty over future productivity growth, the price of capital, taxes, and public debt ratios.
7. See Jimeno (2015) for a more formal discussion of the equilibria under alternative assumptions regarding wage determination.
8. Acemoglu and Restrepo (2017a) use the same data but look at the changes over the whole sample period. Obviously, the time horizon at which demographic changes have macroeconomic implications is likely to be longer than one year. However, even without paying too much attention to the dynamics of these effects, the statistical association between demographic changes and macro variables is easily observed even at a high frequency.
9. Results that are qualitatively similar to those obtained by Acemoglu and Restrepo (2017a).
10. EU KLEMS (http://www.euklems.net/) is a dataset providing cross-country measures of output, inputs, and productivity.
11. The quantitative results of the calibrated model by Basso and Jimeno (2018) suggest that this is related to the fact that population ageing started earlier and is proceeding at a faster pace in Europe than in the USA.
12. An earlier model of automation is Zeira (1998), upon which the task-based framework is developed. See also Zeira (2006). As for studies on the skill content of technological change, see Autor et al. (2003).
13. This is A.I. in the sense of making a machine behave in ways that would be called intelligent if a human were so behaving, not necessarily behaving as humans do in the same task.
14. Resembling what happens at Google: "Humans work themselves out of jobs by teaching the machines how to act".
## References
1. Acemoglu D, Restrepo P (2017a) Secular stagnation? The effect of aging on economic growth in the age of automation. Am Econ Rev 107(5):174–179
2. Acemoglu D, Restrepo P (2017b) Robots and jobs: evidence from US labour markets. NBER working paper no. 23285
3. Acemoglu D, Restrepo P (2018a) The race between machine and man: implications of technology for growth, factor shares and employment. Am Econ Rev 108(6):1488–1542
4. Acemoglu D, Restrepo P (2018b) Demographics and automation. NBER working paper no. 24421
5. Acemoglu D, Restrepo P (2019) Automation and new tasks: how technology displaces and reinstates labor. J Econ Perspect (forthcoming)
6. Aghion P, Jones BF, Jones CI (2017) Artificial intelligence and economic growth. NBER working paper no. 23928
7. Aksoy Y, Basso HS, Smith R, Grasl T (2019) Demographic structure and macroeconomic trends. Am Econ J Macroecon 11(1):193–222
8. Arntz M, Gregory T, Zierahn U (2017) Revisiting the risk of automation. Econ Lett 159:157–160
9. Autor D, Salomons A (2018) Is automation labour displacing? Productivity growth, employment, and the labour share. Brookings Papers on Economic Activity
10. Autor D, Levy F, Murnane RJ (2003) The skill content of recent technological change: an empirical exploration. Q J Econ 118(4):1279–1333
11. Basso H, Jimeno JF (2018) From secular stagnation to robocalypse? Implications of demographic and technological changes, mimeo
12. Basso H, Rachedi O (2017) The young, the old, and the government: demographics and fiscal multipliers. Banco de España, working paper (forthcoming)
13. Benigno G, Fornaro L (2017) Stagnation traps. Rev Econ Stud 85(3):1425–1470
14. Benzell SG, Kotlikoff LJ, LaGarda G, Sachs JD (2015) Robots are us: some economics of human replacement. NBER working paper no. 20941
15. Carvalho C, Ferrero A, Nechio F (2016) Demographics and real interest rates: inspecting the mechanism. Eur Econ Rev 88:208–226
16. Dauth W, Findeisen S, Suedekum J, Woessner N (2017) German robots—the impact of industrial robots on workers. CEPR discussion paper no. 12306
17. Derrien F, Kecskés A, Nguyen P (2017) Demographics and innovation. HEC Paris research paper no. FIN-2017-1243
18. Eggertsson GB, Mehrotra NR (2014) A model of secular stagnation. NBER working paper no. 20574
19. Eggertsson GB, Lancastre M, Summers LH (2018) Aging, output per capita and secular stagnation. NBER working paper no. 24902
20. Fiorentini G, Galesi A, Pérez-Quirós G, Sentana E (2018) The rise and fall of the natural interest rate. CEPR discussion paper no. DP13042. SSRN: https://ssrn.com/abstract=3214560
21. Frey CB, Osborne MA (2017) The future of employment: how susceptible are jobs to computerisation? Technol Forecast Soc Change 114(C):254–280
22. Graetz G, Michaels G (2018) Robots at work. Rev Econ Stat 100(5):753–768
23. Hall RE (2017) The anatomy of stagnation in a modern economy. Economica 84(333):1–127
24. Hansen A (1939) Economic progress and declining population growth. Am Econ Rev 29(1):1–15
25. Holston K, Laubach T, Williams JC (2017) Measuring the natural rate of interest: international trends and determinants. J Int Econ 108:S59–S75
26. International Federation of Robotics (2017) The impact of robots on productivity, employment and jobs, mimeo
27. Jimeno JF (2015) Long lasting consequences of the European crisis. ECB working paper no. 1832
28. Laubach T, Williams JC (2003) Measuring the natural rate of interest. Rev Econ Stat 85:1063–1070
29. Lin T, Weise CL (2018) A new Keynesian model with robots: implications for business cycles and monetary policy. SSRN: https://ssrn.com/abstract=3064229
30. Lordan G, Neumark D (2018) People versus machines: the impact of minimum wages on automatable jobs. NBER working paper no. 23667
31. Shimer R (2012) Wage rigidities and jobless recoveries. J Monet Econ 59:S65–S77
32. Zeira J (1998) Workers, machines, and economic growth. Q J Econ 113:1091–1117
33. Zeira J (2006) Machines as engines of growth. CEPR discussion paper no. 5429
# Multiple Regression
• S. Sreejesh
• Sanjay Mohapatra
• M. R. Anusree
Chapter
## Abstract
Multiple regression analysis is one of the dependence techniques, in which the researcher analyses the relationship between a single dependent (criterion) variable and several independent variables.
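A minimal numerical illustration (my own sketch; the chapter itself works with a statistics package rather than code): estimating the partial regression coefficients of two independent variables by ordinary least squares on simulated data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # two independent variables
y = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] \
    + rng.normal(scale=0.1, size=200)          # criterion variable plus noise

A = np.column_stack([np.ones(len(y)), X])      # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # close to the true values [1, 2, -3]
```

The recovered partial regression coefficients are each interpreted holding the other independent variable fixed, which is the point of the technique.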
## Keywords
Multiple Regression Analysis Coping Skill Multiple Regression Equation Partial Regression Coefficient Main Window
• #### Modelling and inversion of time-lapse seismic data using scattering theory
(Master thesis, 2015-06-01) Waveform inversion methods can be used to obtain high-resolution images of the elastic and acoustic property changes of petroleum reservoirs under production, but remain computationally challenging. Efficient approximations ...
# Ordinary Differential Equations/Substitution 1
First-Order Differential Equations
As we saw in a previous example, sometimes even though an equation isn't separable in its original form, it can be factored into a form where it is. Another way you can turn non-separable equations into separable ones is to use substitution methods.
## General substitution procedure
All substitution methods use the same general procedure:
1. Take a term of the equation and replace it with a variable v. The key is that the new variable must cover all instances of the variable y. Otherwise substitution would not help.
2. Solve for $\frac{dy}{dx}$ in terms of $v$ and $\frac{dv}{dx}$. To do this, take the equation $v=f(x,y)$ where $f$ is the term you replaced and take its derivative.
3. Plug in $\frac{dv}{dx}$ and solve for $v$.
4. Plug $v$ into the original term replaced, and solve for $y$.
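As a quick illustration of the four steps (my own addition, using SymPy on a toy equation that does not appear on this page), take $\frac{dy}{dx}=y+x$ and substitute $v=y+x$, so that $\frac{dv}{dx}=\frac{dy}{dx}+1=v+1$:

```python
import sympy as sp

x = sp.symbols('x')
v = sp.Function('v')

# Step 3: solve the separable equation v' = v + 1
sol_v = sp.dsolve(sp.Eq(v(x).diff(x), v(x) + 1), v(x))

# Step 4: back-substitute y = v - x
y_expr = sol_v.rhs - x
print(y_expr)   # general solution, of the form C1*exp(x) - x - 1

# Check: y' - (y + x) should vanish
print(sp.simplify(sp.diff(y_expr, x) - (y_expr + x)))   # 0
```

Steps 1 and 2 happen on paper here; the computer only handles the separable equation and the back-substitution.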
## Constant coefficient substitution
Let's say we have an equation with a term ay+bx+c, such as
$\frac{dy}{dx}=G(ay+bx+c).$
where G is a function. This is non-separable. But we can sometimes solve these equations by turning the term into a function v, defining v(x,y) and finding v'(x,y,y').
$v(x,y)=ay+bx+c \,$
The trick with the derivation of v is that y is also a function of x. The derivative of v thus becomes
$\frac{dv}{dx}=a\frac{dy}{dx}+b$
In Maxima this looks like so:
(%i1) v:a*y(x)+b*x+c;
and
(%i2) diff(v,x);
yielding
(%o2) a*('diff(y(x),x,1))+b
Next, we rearrange terms and solve for y'(x,v,v'):
$\frac{dy}{dx}=\frac{\frac{dv}{dx}-b}{a}$
Now plug v back into the original equation, $\frac{dy}{dx}=G(ay+bx+c)$, and get it into the form $\frac{dv}{dx}=f(v)$
$\frac{\frac{dv}{dx}-b}{a}=G(v)$
$\frac{dv}{dx}=aG(v)+b$
Solve for v, that is integrate on both sides:
$\frac{dv}{dx}=aG(v)+b$
$\frac{dv}{aG(v)+b}= dx$
$\int \frac{dv}{aG(v)+b}= \int dx$
$\int \frac{dv}{aG(v)+b}= x+D$
Once you have v(x), plug back into the definition of v(x) to get y(x).
$y(x)=\frac{v(x)-c-bx}{a}$
It is best not to memorize this final equation, but to remember the method instead. The equation itself is rather obscure and easy to forget, but anyone who knows the method can always re-derive it. Knowing the method will also help when using other substitution methods.
### Example 1
$\frac{dy}{dx}=(x+y+3)^2$
Let's replace the quantity being raised to a power with v.
$v=x+y+3 \,$
Now let's find v'.
$\frac{dv}{dx}=\frac{dy}{dx}+1$
Solve for y'
$\frac{dy}{dx}=\frac{dv}{dx}-1$
Plug in for y and y':
$\frac{dv}{dx}-1=v^2$
$\frac{dv}{dx}=v^2+1$
Now we solve for v, using the methods we learned in Separable Variables:
$\frac{dv}{v^2+1}=dx$
$\int \frac{dv}{v^2+1}=\int dx$
$\tan^{-1}(v)=x+C \,$
$v=\tan(x+C) \,$
Now that we have v(x), plug back in and find y(x).
$y+x+3=\tan(x+C) \,$
$y=\tan(x+C)-x-3 \,$
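We can confirm this answer symbolically (my addition; SymPy is not part of the original page):

```python
import sympy as sp

x, C = sp.symbols('x C')
y = sp.tan(x + C) - x - 3

# y' should equal (x + y + 3)^2
residual = sp.simplify(sp.diff(y, x) - (x + y + 3)**2)
print(residual)   # 0
```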
## Other methods
These are not the only possible substitution methods, just some of the more common ones. Substitution methods are a general way to simplify complex differential equations. If you ever come up with a differential equation you can't solve, you can sometimes crack it by finding a substitution and plugging in. Just look for something that simplifies the equation. Remember that between v and v' you must eliminate the y in the equation.
### Example 2
$2y\frac{dy}{dx}=y^2+x-1$
This equation isn't separable, and none of the methods we previously used will quite work. Let's use a custom substitution of $v=y^2+x-1$. Solve for v':
$\frac{dv}{dx}=2y\frac{dy}{dx}+1$
$2y\frac{dy}{dx}=\frac{dv}{dx}-1$
Plug into the original equation
$\frac{dv}{dx}-1=v$
Solve for v
$\frac{dv}{v+1}=dx$
$\int \frac{dv}{v+1}=\int dx$
$\ln(v+1)=x+C \,$
$v=Ce^x-1 \,$
Now plug in and get y
$Ce^x-1=y^2+x-1 \,$
$y^2=Ce^x-x \,$
Pretty easy after using that substitution. Keep this method in mind; you will use it for more complex equations.
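As with Example 1, the implicit answer can be checked with SymPy (my addition, using the positive branch of y):

```python
import sympy as sp

x, C = sp.symbols('x C', positive=True)
y = sp.sqrt(C * sp.exp(x) - x)      # one branch of y^2 = C*e^x - x

# 2*y*y' should equal y^2 + x - 1
residual = sp.simplify(2 * y * sp.diff(y, x) - (y**2 + x - 1))
print(residual)   # 0
```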
# Proving S_n will be negative infinitely often in Wolfram|Alpha?
Let $$S_n=\sin^{-1}\left(\sin\left(2\pi^{2}\,n!\right)\right)+\frac{\pi}{5}$$ where n is a positive integer. I need to prove that $S_n<0$ for infinitely many positive integral values of n. When I entered $S_n$ in Wolfram|Alpha, I got its graph, which is available here. How can I prove that $S_n$ will be negative infinitely often? What are the ways to prove it? Please help me prove this. Thank you.
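This does not prove anything, but a high-precision numerical experiment (my own sketch with mpmath, not Wolfram|Alpha) makes it easy to look at the sign of $S_n$ for small n. The key point is to carry enough digits that reducing $2\pi^2 n!$ modulo $2\pi$ is still meaningful:

```python
from mpmath import mp, pi, asin, sin, factorial

mp.dps = 120                         # guard digits: 30! ~ 2.7e32, so 120 is ample
S = []
for n in range(1, 31):
    x = 2 * pi**2 * factorial(n)     # factorial(n) is exact at this precision
    S.append(asin(sin(x)) + pi / 5)
    print(n, S[-1], "negative" if S[-1] < 0 else "")
```

Every value lies in $[-\pi/2+\pi/5,\ \pi/2+\pi/5]$ by construction, and $S_n<0$ exactly when the reduced angle falls in an interval of measure $3\pi/5$ out of $2\pi$; that is the kind of equidistribution statement an actual proof would have to address.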
# Pullback Calculation
If we define the 2-form $\omega=\frac{1}{r^3}(x_1dx_2\wedge dx_3+x_2dx_3\wedge dx_1+x_3dx_1\wedge dx_2)$ with $r=\sqrt{x_1^2+x_2^2+x_3^2}$
If we now define $x(\theta,\phi)=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$ for $\theta\in(0,\pi)$ and $\phi\in (0,2\pi)$
I now want to show that the pullback of this map is:
$x^*\omega=\sin\theta d\theta\wedge d\phi$
Now my definition of the pullback of a map $c:[a,b]\rightarrow D$ is $\alpha(c'(t))dt$, but I am unsure how to proceed. I can see that this map $x$ is a parametrization of the sphere in $\mathbb{R}^3$, but after that I am a bit lost.
Thanks for any help
simply plug the expressions for $x_{1,2,3}$ in terms of $\theta,\phi$ into $\omega$ and calculate the resulting form. – O.L. May 22 '13 at 9:36
@O.L. sorry I am confused as to how I evalute $x'(t)$ what it is in terms of $\theta$ and $\phi$? – hmmmm May 22 '13 at 9:54
@hmmmm: What O.L. means is this: You should plug in $x_1 = \sin \theta \cos \phi$ and $x_2 = \sin\theta \sin \phi$, and so on. Then, for example, $$dx_1 = d(\sin\theta \cos\phi) = d(\sin\theta)\cos\phi + \sin\theta \,d(\cos\phi) = \cos\theta \cos \phi\, d\theta - \sin\theta\sin\phi \,d\phi.$$ Similarly for the others. – Jesse Madnick May 22 '13 at 10:07
I guess $c(t)$ appeared in a one-dimensional example which is not very much related to your question. Here, instead of a curve mapped to some space, you have a surface (2D sphere). In a sense, the analog of $t$ here is the pair $(\theta,\phi)$, and the analog of $c$ is given by the triple of functions $x_{1,2,3}(\theta,\phi)$. – O.L. May 22 '13 at 10:10
A more general equation for how to compute pullbacks is as follows: If $f: M \to N$ is a smooth map of smooth manifolds and $\omega$ is an $n$-form on $N$, written in local coordinates as $$\omega = \sum_I \omega_I dy^{i_1} \wedge \cdots \wedge dy^{i_n}$$ then $f^*\omega$ can be written in induced coordinates as $$f^*\omega = \sum_I (\omega_I \circ f) d(y^{i_1} \circ f) \wedge \cdots \wedge d(y^{i_n} \circ f)$$ where you think of $y^i$ as the function which picks out the $i^{th}$ coordinate of $f$.
Your $\omega$ is given by $$\omega = \omega_{23} dx_2 \wedge dx_3 + \omega_{31} dx_3 \wedge dx_1 + \omega_{12} dx_1 \wedge dx_2$$ where $\omega_{23} = \frac{x_1}{r^3}, \omega_{31} = \frac{x_2}{r^3}$ and $\omega_{12} = \frac{x_3}{r^3}$. So now you just use the formula: I will do the $\omega_{23} dx_2 \wedge dx_3$ term as an example:
$$f^*(\omega_{23} dx_2 \wedge dx_3) = (\omega_{23} \circ f) d(x_2 \circ f) \wedge d(x_3 \circ f)$$
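For what it's worth, the whole computation can be done mechanically with SymPy (my addition): on the sphere $r=1$, the coefficient of $d\theta\wedge d\phi$ in the pullback of $dx_i\wedge dx_j$ is the Jacobian determinant $\partial(x_i,x_j)/\partial(\theta,\phi)$.

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
X = [sp.sin(th) * sp.cos(ph), sp.sin(th) * sp.sin(ph), sp.cos(th)]

def wedge(i, j):
    """Coefficient of d(theta)^d(phi) in the pullback of dx_i ^ dx_j."""
    return sp.Matrix([[sp.diff(X[i], th), sp.diff(X[i], ph)],
                      [sp.diff(X[j], th), sp.diff(X[j], ph)]]).det()

# x1*dx2^dx3 + x2*dx3^dx1 + x3*dx1^dx2, with r = 1 on the sphere
coeff = X[0]*wedge(1, 2) + X[1]*wedge(2, 0) + X[2]*wedge(0, 1)
print(sp.simplify(coeff))   # sin(theta)
```

This confirms $x^*\omega=\sin\theta\, d\theta\wedge d\phi$.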
# Evaluate 6.37*10^6
Scientific notation can be used to represent very large or small numbers. A number written in scientific notation has one non-zero digit to the left of the decimal point, multiplied by a power of 10, with the power corresponding to the number of places the decimal point should move.
The result can be shown in multiple forms.
Scientific Notation: 6.37 × 10^6

Expanded Form: 6,370,000
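A quick check in Python (my addition): moving the decimal point six places to the right.

```python
value = 6.37 * 10**6
print(f"{value:,.0f}")   # 6,370,000
print(f"{value:e}")      # 6.370000e+06
```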
http://www.ma.utexas.edu/mp_arc-bin/mpa?yn=10-95 | 10-95 F. Pezzotti
Mean-field Limit and Semiclassical Approximation for Quantum Particle Systems (326K, LATeX 2e) Jun 17, 10
Abstract. We consider a system of $N$ identical particles interacting by means of a mean-field Hamiltonian. The evolution of the above system is considered both in the classical and in the quantum framework, as well as the corresponding effective dynamics as $N \to \infty$. It is well known that, in the limit $N \to \infty$, the one-particle state obeys the Vlasov equation in the classical case, and the Hartree equation in the quantum framework. Moreover, in both situations propagation of chaos holds. In this work, we present an overview of known results concerning the problem, with particular attention to the case of smooth pair-interaction potentials, and we analyze the link between the asymptotics $N \to \infty$ (mean-field limit) and the semiclassical approximation ($\hbar \to 0$). We discuss and present in a unified way some known results, highlighting some open problems on this topic. In particular, we discuss in more detail the result presented in [29] (see References), giving an outlook on generalizations and possible applications.
Files: 10-95.src( desc , 10-95.tex ) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9745363593101501, "perplexity": 694.2623451900124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806736.55/warc/CC-MAIN-20171123050243-20171123070243-00177.warc.gz"} |
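The mean-field structure described in the abstract can be illustrated with a toy classical simulation (my own sketch, unrelated to the paper's proofs): with harmonic pairwise coupling, each of the $N$ particles feels only the empirical mean, and the internal forces cancel in the sum, so the empirical mean evolves freely.

```python
import numpy as np

# Toy classical mean-field system:
#   x_i'' = -(1/N) * sum_j (x_i - x_j) = -(x_i - xbar),
# i.e. each particle interacts only through the empirical mean xbar.
rng = np.random.default_rng(0)
N, dt, steps = 1000, 0.01, 500

x = rng.normal(0.0, 1.0, N)   # i.i.d. initial positions ("chaotic" initial data)
p = rng.normal(0.0, 1.0, N)   # i.i.d. initial momenta

xbar0, pbar0 = x.mean(), p.mean()
for _ in range(steps):        # symplectic Euler integration
    p += dt * -(x - x.mean())
    x += dt * p

# The internal mean-field forces cancel in the sum, so the empirical mean
# evolves freely: xbar(t) = xbar0 + t*pbar0 and pbar(t) = pbar0.
t = steps * dt
print(abs(p.mean() - pbar0))                # ≈ 0
print(abs(x.mean() - (xbar0 + t * pbar0)))  # ≈ 0
```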
http://mathhelpforum.com/differential-geometry/164452-torus-smooth-function.html | # Math Help - Torus and smooth function
1. ## Torus and smooth function
Hello,
I want to show that this function is smooth:
$f: T \to \mathbb{R}, \quad f(x) = e^1(x) = x^1$
T is the torus defined as $T=\{((R+r\cos v)\cos u,\ (R+r\cos v)\sin u,\ r\sin v) \in \mathbb{R}^3 : u,v \in [0,2\pi)\}$, with R, r > 0 constant.
I don't know how f could be differentiated. My problem is that f is not in the standard form f(x,y,z) = x; it is something like $f((R+r\dots)\dots) = (R+r\cos v)\cos u$.
I'm a little bit confused. Is there any trick for this?
Regards
2. In your case you just need to show that $f(u,v) = (R + r\cos v)\cos u$ is smooth, since the parameterization in (u,v) is already smooth.
3. Hello,
I have shown that f(x,y,z) = x is smooth (the projection onto the first coordinate), and we know that our parametrization is also smooth, i.e. $\phi(u,v)=(R+r\cos\dots,\ \dots,\ \dots)$.
But I'm not really sure why this is enough for f to be smooth.
Anyway:
Now I want to show that our function f is a Morse function.
But I can't see the critical points of f.
If I differentiate f(x,y,z) = x, I get $Df=(1,0,0): \mathbb{R}^3 \to \mathbb{R}$; this is surjective, isn't it? So there would be no critical points.
Can you explain what is wrong in my way of thinking?
Thanks
Regards
4. A "smooth structure" needs to be defined before smooth functions can be defined, just as for a manifold. That is why I said "in your case": if we don't invoke the abstract definition of a manifold, we can just define a "smooth structure" on a surface via its smooth parametrization, that is, a smooth map from a domain D of R^2 to R^3.
Then a smooth function on the surface is defined to be a smooth function on its parameter domain D.
For your new question, f DOES have critical points on T. You can see this from the expression f(u,v) = (R + r cos v) cos u, or just note that since f is defined on a compact surface it must attain its maximum and minimum, and those points are critical points.
The problem with your approach is that you need to restrict f to T. Let the natural inclusion map from T to R^3 be i, that is, i(x,y,z) = (x,y,z); then the differential of f restricted to T is i*(df), the pullback of df by i.
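To make post #4 concrete, here is a small symbolic check (my own sketch with sympy, not part of the thread): the chart partial derivatives of $f(u,v)=(R+r\cos v)\cos u$ vanish exactly at the four points with $u,v\in\{0,\pi\}$, and the Hessian determinant is nonzero there when $R\neq r$, so f is Morse on the embedded torus ($r<R$).

```python
import sympy as sp

u, v, R, r = sp.symbols('u v R r', positive=True)

# f in the (u, v) chart, as in the thread.
f = (R + r * sp.cos(v)) * sp.cos(u)

fu = sp.diff(f, u)   # -(R + r*cos(v)) * sin(u)
fv = sp.diff(f, v)   # -r * sin(v) * cos(u)

# The four critical points in the chart: sin(u) = 0 and sin(v) = 0.
crit = [(0, 0), (0, sp.pi), (sp.pi, 0), (sp.pi, sp.pi)]
for pu, pv in crit:
    assert fu.subs({u: pu, v: pv}) == 0
    assert fv.subs({u: pu, v: pv}) == 0

# Morse condition: the Hessian is nondegenerate at each critical point.
H = sp.hessian(f, (u, v))
for pu, pv in crit:
    det = sp.simplify(H.det().subs({u: pu, v: pv}))
    print((pu, pv), det)   # each determinant is nonzero whenever R != r
```

These four points are the maximum, the minimum, and two saddles of the height-like function f on the torus.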
https://jmlr.org/beta/papers/v13/el-yaniv12a.html | # Active Learning via Perfect Selective Classification
Ran El-Yaniv, Yair Wiener.
Year: 2012, Volume: 13, Issue: 9, Pages: 255−279
#### Abstract
We discover a strong relation between two known learning models: stream-based active learning and perfect selective classification (an extreme case of 'classification with a reject option'). For these models, restricted to the realizable case, we show a reduction of active learning to selective classification that preserves fast rates. Applying this reduction to recent results for selective classification, we derive exponential target-independent label complexity speedup for actively learning general (non-homogeneous) linear classifiers when the data distribution is an arbitrary high dimensional mixture of Gaussians. Finally, we study the relation between the proposed technique and existing label complexity measures, including teaching dimension and disagreement coefficient. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8946596384048462, "perplexity": 2613.364343579188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488507640.82/warc/CC-MAIN-20210622033023-20210622063023-00326.warc.gz"} |
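The stream-based setting the abstract builds on can be illustrated with a minimal disagreement-based learner (a CAL-style sketch of my own, not the paper's reduction): for 1-D threshold classifiers in the realizable case, a label is requested only when the current version space still disagrees on the incoming point.

```python
import random

# Stream-based active learning for 1-D threshold classifiers on [0, 1]:
# consistent thresholds form an interval [lo, hi] (the version space), and a
# label is queried only for points inside the disagreement region (lo, hi).
random.seed(0)
true_threshold = 0.42


def label(x):
    return int(x >= true_threshold)


lo, hi = 0.0, 1.0
queries = 0
for _ in range(1000):
    x = random.random()
    if lo < x < hi:            # disagreement region: must ask for the label
        queries += 1
        if label(x):
            hi = min(hi, x)    # threshold is at most x
        else:
            lo = max(lo, x)    # threshold is above x
    # points outside (lo, hi) are classified unanimously -- no query needed

print(queries, hi - lo)  # few label queries, tiny residual version space
```

In the realizable case the number of queries grows only logarithmically in the stream length, which is the kind of label-complexity speedup the abstract discusses.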