http://www.cut-the-knot.org/blue/LamesTheorem.shtml

# Lamé's Theorem - the Very First Application of Fibonacci Numbers
Among the unique properties of the number five, Joe Roberts counts the appearance of five in one of the formulations of Lamé's theorem:
In carrying out the Euclidean algorithm to find the greatest common divisor of two positive integers $a$ and $b$, the number of steps needed will never exceed 5 times the number of base 10 digits in the smaller of the two integers $a$ and $b$.
Various aspects of this theorem, first proved by (Gabriel) Lamé in 1844, are quite regularly rediscovered. No doubt a "natural high" occurs each time this happens. (At least that was so in my case.)
In a 1992 publication, Roberts refers to a 1939 text on Number Theory, which tells me that Lamé's remarkable theorem is not very well known, at least among non-specialists. In his Mathematical Gems II, Ross Honsberger refers to the 1924 edition of W. Sierpinski's book Elementary Theory of Numbers; the book has been republished twice since, but has by now become a bibliographic rarity. It is available online for download as a djvu file.
Lamé's theorem and the Euclidean algorithm have been discussed at length in D. Knuth's 1969 Seminumerical Algorithms, now available in the third edition (1997), and lately in a most unusual book by V. H. Moll. So, we have three somewhat different formulations, each emphasizing slightly different aspects of the theorem.
### Lamé's Theorem (Honsberger)
The number of steps (i.e., divisions) in an application of the Euclidean algorithm never exceeds 5 times the number of digits in the lesser of the two numbers.
### Lamé's Theorem (Knuth)
For $n\ge 1$, let integers $u$ and $v$, $u\gt v\gt 0$, be such that processing $u$ and $v$ by the Euclidean algorithm takes exactly $n$ division steps. Moreover, assume that $u$ is the least possible number satisfying that requirement. Then $u=F_{n+2}$ and $v=F_{n+1}$, where $\{F_k\}$ is the Fibonacci sequence.
Knuth also proves
### Corollary
For $0\lt u,v\lt N$, the number of division steps needed by the Euclidean algorithm to process $u$ and $v$ does not exceed $\big\lceil \log_{\phi}(\sqrt{5}N)\big\rceil -2$.
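As a quick sanity check (my addition, not part of the original article), the corollary can be verified exhaustively for a small $N$ in a few lines of Python; the step count matches the article's convention, where the final exact division also counts as a step:

```python
import math

def euclid_steps(u, v):
    """Number of division steps the Euclidean algorithm takes on u > v > 0."""
    steps = 0
    while v:
        u, v = v, u % v
        steps += 1
    return steps

phi = (1 + math.sqrt(5)) / 2
N = 200  # check all pairs 0 < v < u < N
bound = math.ceil(math.log(math.sqrt(5) * N, phi)) - 2
worst = max(euclid_steps(u, v) for u in range(2, N) for v in range(1, u))
print(worst, bound)  # the observed worst case never exceeds Knuth's bound
```

For $N=200$ the worst case is attained by the consecutive Fibonacci pair $(144, 89)$, which takes 10 steps, comfortably below the bound of 11.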
### Lamé's Theorem (Moll)
Let $a,b\in\mathbb{N}$ with $a\gt b$. The number of steps in the Euclidean algorithm is about $\log_{10}b/\log_{10}\phi$. This is at most five times the number of decimal digits of $b$.
$\phi$ here is of course the Golden Ratio ($\displaystyle\phi = \frac{1+\sqrt{5}}{2}$) whose appearance may not be surprising after the Fibonacci sequence made its debut in Knuth's formulation. In all the proofs the Fibonacci numbers play a most fundamental role and, as Knuth observes, this was their first ever practical application; many more followed.
On this page it is convenient to define the Fibonacci numbers recursively
$F_n= \begin{cases} 1 & \mbox{if } n = 0; \\ 1 & \mbox{if } n = 1; \\ F_{n-1}+F_{n-2} & \mbox{if } n > 1. \\ \end{cases}$
Let $p_n$ denote the number of digits in $F_n$. As R. L. Duncan showed in 1966, the number of division steps in the Euclidean algorithm required to compute $\gcd(F_{n+1}, F_{n})$ always satisfies the inequality
$\displaystyle n\gt\frac{p_n}{\log_{10}\phi} - 5,$
while Lamé's result could be reformulated as $\displaystyle n\lt\frac{p_n}{\log_{10}\phi} + 1$. In 1967, J. L. Brown proved a stronger result:
There exist infinitely many distinct positive integers $n$ such that the determination of $\gcd(F_{n+1},F_{n})$ by the Euclidean algorithm requires exactly $n$ divisions with $n$ satisfying
$\displaystyle n\gt\frac{p_n}{\log_{10}\phi} - \frac{1}{2}$,
making the estimate in Lamé's theorem the best possible.
All proofs of the above-mentioned results use but basic properties of the Fibonacci numbers. The Honsberger/Sierpinski one is based on the following
### Lemma
For all $n\ge 1$, $F_{n+5}\gt 10\cdot F_{n}$.
### Proof
From the basic recurrence $F_{k+2}=F_{k+1}+F_{k}$ it also follows that $F_{k+2}=2\cdot F_{k}+F_{k-1}$, so we obtain successively
\begin{align} F_{n+5} &= F_{n+4} + F_{n+3} \\ &= 2\cdot F_{n+3} + F_{n+2} \\ &= 3\cdot F_{n+2} + 2\cdot F_{n+1} \\ &= 5\cdot F_{n+1} + 3\cdot F_{n} \\ &= 8\cdot F_{n} + 5\cdot F_{n-1} \\ &= 13\cdot F_{n-1} + 8\cdot F_{n-2} \\ &= 21\cdot F_{n-2} + 13\cdot F_{n-3} \\ &> 10\cdot (2F_{n-2} + F_{n-3}) \\ &= 10\cdot F_{n}. \end{align}
From this it is immediate that $F_{n+5}$ has at least one more decimal digit than $F_n$. It follows by induction that $F_{n+5t}\gt 10^{t}F_{n}$, so that $F_{n+5t}$ has at least $t$ more digits than $F_n$. By direct inspection, the numbers $F_n$, for $1\le n\le 5$, are single-digit. They
have at least $2$ digits for $5\lt n\le 10$,
have at least $3$ digits for $10\lt n\le 15$,
have at least $4$ digits for $15\lt n\le 20$,
and, in general, have at least $k$ digits for $5(k-1)\lt n\le 5k$. In other words, for $5(k-1)\lt n\le 5k$, $p_{n}\ge k\ge n/5$. Thus it is always the case that $p_{n}\ge n/5$.
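Both the lemma and the digit bound are easy to confirm by machine. The following Python check (my addition, using the page's indexing $F_0=F_1=1$) verifies them for the first couple of hundred indices:

```python
def fib(n):
    """Fibonacci numbers with the page's indexing: F_0 = F_1 = 1."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The lemma: F_{n+5} > 10 * F_n for n >= 1, so F_{n+5} gains at least one digit.
for n in range(1, 200):
    assert fib(n + 5) > 10 * fib(n)

# The consequence used in the proof: p_n, the digit count of F_n, satisfies p_n >= n/5.
for n in range(1, 200):
    assert 5 * len(str(fib(n))) >= n
print("lemma and digit bound verified for 1 <= n < 200")
```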
Let's remember that; this is an important step in proving Lamé's theorem. But in order to complete the proof, we need to bring in the Euclidean algorithm.
Given two integers, $a$ and $b$, $a\gt b\gt 0$, we set $a = r_0$, $b=r_1$ and divide with remainder. Assuming Euclid's algorithm takes $n$ steps,
\begin{align} r_{0} &= r_{1}q_{1} + r_{2}, 0\le r_{2}\lt r_{1} \\ r_{1} &= r_{2}q_{2} + r_{3}, 0\le r_{3}\lt r_{2} \\ r_{2} &= r_{3}q_{3} + r_{4}, 0\le r_{4}\lt r_{3} \\ &= \cdots \\ r_{n-2} &= r_{n-1}q_{n-1} + r_{n}, 0\le r_{n}\lt r_{n-1} \\ r_{n-1} &= r_{n}q_{n}. \\ \end{align}
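This division scheme is easy to transcribe into code. The following Python sketch (mine, not the article's) records the quotients $q_k$ and the full sequence of remainders $r_0, r_1, \ldots$:

```python
def euclid_trace(a, b):
    """Run the Euclidean algorithm on a > b > 0, recording quotients and remainders."""
    r = [a, b]   # r_0, r_1
    q = []       # q_1, q_2, ...
    while r[-1] != 0:
        q.append(r[-2] // r[-1])
        r.append(r[-2] % r[-1])
    return q, r

q, r = euclid_trace(1071, 462)
print("quotients :", q)   # [2, 3, 7]
print("remainders:", r)   # [1071, 462, 147, 21, 0]
print("gcd =", r[-2], ", steps =", len(q))
```

The gcd is the last nonzero remainder, and the number of steps is the number of quotients produced.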
Note that $q_{n}\ge 2$, for, otherwise, we would have $r_{n-1} = r_{n}$, in contradiction with the previous step of the algorithm ($0\le r_{n}\lt r_{n-1}$). We now proceed backwards, starting with $r_{n}\ge 1$, which implies $r_{n}\ge F_{1}$. Next, $r_{n-1} = r_{n}q_{n} \ge 2r_{n} \ge 2 = F_{2}$. And further,
$r_{n-2} = r_{n-1}q_{n-1} + r_{n} \ge r_{n-1} + r_{n} \ge F_{2} + F_{1} = F_{3}.$
In general, for $1\le k\le n$,
\begin{align} r_{n-k} &= r_{n-k+1}q_{n-k+1} + r_{n-k+2} \\ & \ge r_{n-k+1} + r_{n-k+2} \\ & \ge F_{k} + F_{k-1} = F_{k+1}, \end{align}
so that at the end of the process (i.e., at the beginning of the algorithm), when $k=n-1$ and $k=n$, $b=r_{1}\ge F_{n}$ and $a=r_{0}\ge F_{n+1}$. Therefore, if integers $a$ and $b$, $a\gt b\gt 0$, are such that the Euclidean algorithm takes exactly $n$ steps, then necessarily $b\ge F_{n}$, and $b$ has at least as many digits as $F_n$, which, as we found previously, is at least $n/5$. And this proves the Sierpinski/Honsberger formulation.
Now, for Moll's formulation. Recollect that $\phi ^{2}=\phi + 1$. We use that to prove by induction another
### Lemma
For $n\gt 1$, $F_{n}\gt \phi ^{n-1}.$
### Proof
Although the statement is to be proved for $n\gt 1$, it saves time to observe that $F_{1}=1\ge \phi ^ {0}$. Also, $F_{2}=2\gt \phi ^1$. Then
$F_{3} = F_{2} + F_{1} \gt 1 + \phi = \phi ^{2}.$
And, in general,
$F_{k+2} = F_{k+1} + F_{k} \gt \phi ^{k} + \phi ^{k-1} = \phi ^{k-1} (1 + \phi) = \phi ^{k-1} \phi ^{2} = \phi ^{k+1}.$
Now, as we already found, if, for $a$ and $b$, $a\gt b\gt 0$, the Euclidean algorithm takes $n$ steps, then $b\ge F_{n}\gt \phi ^{n-1}$, and $n-1 \lt \frac{\log_{10}b}{\log_{10}\phi}$. If $b$ is a $k$-digit integer, then $10^{k-1}\le b\lt 10^k$, and since $1/\log_{10}\phi=4.78497...$, $n-1\lt 5k$, or $n\le 5k$. This means exactly that the number of division steps in an application of the Euclidean algorithm is at most five times the number of decimal digits of the smaller of the two numbers the algorithm has been applied to.
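The resulting bound — at most five times the number of decimal digits of the smaller argument — can be spot-checked on random pairs; this is a quick test of my own, not part of the article:

```python
import math
from random import randrange, seed

def euclid_steps(a, b):
    """Number of division steps of the Euclidean algorithm on a > b > 0."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

print(1 / math.log10((1 + math.sqrt(5)) / 2))  # 4.78497..., just under 5

seed(0)
for _ in range(10_000):
    a = randrange(2, 10**12)
    b = randrange(1, a)
    assert euclid_steps(a, b) <= 5 * len(str(b))
print("bound holds on all sampled pairs")
```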
Almost obviously, a pair of consecutive Fibonacci numbers provides a "worst-case scenario" - the longest possible application of the Euclidean algorithm relative to the length of the numbers. This is because the quotients $q_k$ in the application of the algorithm are in this case all 1 (meaning the least reduction in length), except of course the last one.
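The claim about the quotients is easy to see in code (a sketch of mine, again using the page's Fibonacci indexing):

```python
def fib(n):
    """Fibonacci numbers with the page's indexing: F_0 = F_1 = 1."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def quotients(a, b):
    """Quotients produced by the Euclidean algorithm on a > b > 0."""
    q = []
    while b:
        q.append(a // b)
        a, b = b, a % b
    return q

# gcd(F_11, F_10) = gcd(144, 89): every quotient is 1 except the final one,
# so each division shrinks the numbers as slowly as possible.
print(quotients(fib(11), fib(10)))  # [1, 1, 1, 1, 1, 1, 1, 1, 1, 2]
```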
Finally, T. E. Moore has investigated the distribution of pairs $(a ,b)$ for a given number of steps (DC = "division count") in the Euclidean algorithm. The article includes a BASIC program that was run on an Apple computer! Adding a 4-fold symmetry, Moore produced a sequence of mysterious diagrams.
### References
1. J. L. Brown, Jr., On Lamé's Theorem, Fibonacci Quarterly, v 5, n 2 (April 1967), 153-160
2. R. L. Duncan, Note on the Euclidean Algorithm, The Fibonacci Quarterly, v 4, n 4, (August 1966) 367-68.
3. R. Honsberger, Mathematical Gems II, MAA, 1976
4. D. Knuth, The Art of Computer Programming, v2, Seminumerical Algorithms, Addison-Wesley, 1997 (3rd edition)
5. V. H. Moll, Numbers and Functions: From a Classical-Experimental Mathematician's Point of View, AMS, 2012
6. T. E. Moore, Euclid's Algorithm and Lamé's Theorem on a Microcomputer, Fibonacci Quarterly, v 27, n 4 (August 1989), 290-295
7. J. Roberts, Lure of the Integers, MAA, 1992
8. W. Sierpinski, Elementary Theory of Numbers: Second English Edition, North Holland, 1988
9. J. V. Uspensky & M. A. Heaslet. Elementary Number Theory, McGraw-Hill, 1939.
https://math.stackexchange.com/questions/1633993/why-is-the-axiom-of-infinity-necessary?noredirect=1

# Why is the Axiom of Infinity necessary?
I am having trouble seeing why the Axiom of Infinity is necessary to construct an infinite set. According to a professor of mine who teaches a class on "infinity," the Peano axioms are only adequate to establish the existence of all of the natural numbers, but not also that there is an infinite set consisting of them. To do so, we must stipulate not only the Axiom of Induction, but also that there exists an inductive set (via the Axiom of Infinity).
So, why does the existence of an infinite set of the natural numbers not just follow from the existence of all of the natural numbers?
• Just because every natural number exists doesn't necessarily imply that there's an infinite set. After all: Every ordinal exists, but there is no set of all ordinals. – T. Bongers Jan 31 '16 at 3:01
• There's nothing that says given any collection of objects there exists a set containing them. In fact inconsistencies arise if you allow that. So you have to show that a given set exists using the axioms, and you need more axioms to prove that particular set exists. – Matt Samuel Jan 31 '16 at 3:01
• You've probably already learned that the set of all sets doesn't exist. Just replace the word "natural number" with "set" and you'll see that that argument isn't valid. – David Jan 31 '16 at 3:06
• @MattSamuel thanks for your response. For clarification, do you mean that having a certain property/satisfying a predicate is not enough to determine a set? For example, we also spoke of inconsistencies arising from "comprehension," i.e., for any property P, there is a set whose members are the objects with property P. So, for a collection of objects each having the property of being a natural number, this is not enough to determine the existence of a set containing them? – ata Jan 31 '16 at 3:12
• It would seem, then, that no set-existence axioms at all are needed. If the elements that are supposed to go into the set $S$ exists, then the set exists ipso facto, right? How do you keep from getting the universal set, or the Russell set? – bof Jan 31 '16 at 3:25
BrianO's answer is spot-on, but it seems to me you may not be too familiar with models and consistency proofs, so I'll try to provide a more complete explanation. If anything it may better steer you towards what you need to study, as admittedly I'm about to gloss over a lot of material.
Why do we need the axiom of infinity? Because we know (and can prove) that the other axioms of ZFC cannot prove that any infinite set exists. The way this is done is roughly by the following steps:
• Recall that a set of axioms $\Sigma$ is inconsistent if for some sentence $A$ the axioms lead to a proof of $A \land \neg A$. In other words, $\Sigma \vdash A \land \neg A$ implies $\neg Con(\Sigma)$.
• If $Inf$ is the statement "an infinite set exists", then $\neg Inf$ is the statement "no infinite sets exist".
• The axiom of infinity is essentially the assumption that $Inf$ is true and hence $\neg Inf$ is false.
• If we don't need the axiom of infinity, then with the other axioms $ZFC^* = ZFC - Inf$, we should be able to prove $Inf$ as a theorem, in other words we'll posit that $ZFC^* \vdash Inf$
• We assume that $ZFC$, and hence the subset $ZFC^*$, are consistent.
• We then add $\neg Inf$ as an axiom to $ZFC^*$, which we'll call $ZFC^+$
• By showing that $(ZFC - Inf) + \neg Inf$ has a model (a set in which all the axioms are true when quantifiers range only over the elements of the set), we can prove the relative consistency $Con(ZFC) \to Con(ZFC^+)$. In other words we're basically just proving $ZFC^+$ is consistent, but we need to be explicit that this proof assumes $ZFC$ is consistent.
• The model we want is $HF$, the set of all hereditarily finite sets. I'll leave it to you to verify all the axioms of $ZFC^+$ hold in this set. But the important point is $HF \models ZFC^+$, and our relative consistency is proven. (This follows from Gödel's completeness theorem.)
• We are assuming that $ZFC^* \vdash Inf$, but because $ZFC^+$ is an extension of $ZFC^*$ it must also be the case that $ZFC^+ \vdash Inf$. But then we have $ZFC^+ \vdash Inf \land \neg Inf$ and is thus inconsistent, a contradiction.
Thus we must conclude that our hypothesis $ZFC^* \vdash Inf$ is false and there is no proof of $Inf$ from the other axioms of ZFC. $Inf$ must be taken as an axiom to be able to prove that any infinite set exists.
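As a concrete illustration (my addition, not part of the answer above): the hereditarily finite sets are the union of the finite stages $V_0=\emptyset$, $V_{n+1}=\mathcal{P}(V_n)$ of the cumulative hierarchy. A small Python sketch makes it visible that every stage is finite, so no infinite set ever appears:

```python
from itertools import combinations

def powerset(s):
    """All subsets of a finite collection, each as a frozenset."""
    items = list(s)
    return {frozenset(c) for r in range(len(items) + 1)
                         for c in combinations(items, r)}

# Finite stages of the cumulative hierarchy: V_0 = {} and V_{n+1} = P(V_n).
# Every hereditarily finite set lies in some V_n, and each V_n is finite,
# which is why HF can satisfy "no infinite set exists".
V = set()  # V_0
sizes = []
for _ in range(5):
    V = powerset(V)
    sizes.append(len(V))
print(sizes)  # sizes of V_1 .. V_5: [1, 2, 4, 16, 65536]
```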
• Hi @DanSimon, thanks for the response, this was certainly helpful as a more involved demonstration. – ata Jan 31 '16 at 15:33
• Thanks for delving deeper. You spelled out many connections that my admittedly terse response takes for granted. +1! – BrianO Feb 6 '16 at 12:10
The existence of each natural number follows from the other axioms of set theory, but if you drop the Axiom of Infinity (AxInfinity), the resulting theory ZFC-AxInfinity has a (transitive) model consisting of the hereditarily finite sets, which contains no infinite sets. The axioms of ZFC-AxInfinity provide no way to gather all the natural numbers into a single set.
• It would be worthwhile to add that PA and ZFC-AxInf+$\neg$AxInf are bi-interpretable. – Pedro Sánchez Terraf Jan 31 '16 at 4:42
• Thanks for your response, I'll have to look a bit more into ZFC and some of those other terms. – ata Jan 31 '16 at 5:45
• @PedroSánchezTerraf It would — and thanks for doing so in your comment ;) It would take a fair amount of talk (about coding) to spell that out to an acceptable extent, so I'm inclined to refrain from adding it to the answer. – BrianO Jan 31 '16 at 6:23
The point is once you collect all the natural numbers into a set, you can now treat that set as an atomic object like any other and you can do all the things you can do with a set to it. So, for example, you can make a set that has the set of natural numbers as an element, you can construct the powerset of natural numbers, you can make functionals (functions taking functions) of natural number functions.
The radical thing about Cantor's set theory was the combination of sets that can contain sets (with the usual operations from finite set theory) and infinite sets. Each idea alone isn't that big a deal. Finite set theory is a perfectly reasonable thing, powersets included. Having a "type" of natural numbers, is also a reasonable thing, it just states what operations you are allowed to do on things that have that type. In particular, in (simple) type theory, you can't make a function that returns a type itself, whereas in set theory it is a completely valid definition to say: $f(1) = \mathbb{N}; f(2) = \mathbb{Z}$.
So the crucial thing is, in the context of the overall theory of sets, the Axiom of Infinity states that not only do the natural numbers exist (effectively) but that you can hold it in your hands and manipulate the set as a whole like any other. This is what finitists rebel against. They don't have a problem with an "infinitude" of natural numbers (though they would say an "unbounded amount"), but with being able to manipulate that infinitude in the exact same way you would manipulate the finite set: $\{1, 2, 3\}$.
• Thanks for your response. I see here why it's useful to have this set, but I still don't see from this how the axiom of infinity is necessary to obtain it/why it's not possible to construct the set without this axiom. – ata Jan 31 '16 at 5:45
• Yeah, this doesn't answer that question. Your question then, is whether the Axiom of Infinity is derivable from the other axioms of ZFC and the easiest way to show it isn't is via the means BrianO discusses, i.e. providing a model where it is false. – Derek Elkins Jan 31 '16 at 5:52
• @ata In the absence of AxInfinity, which axioms will you use to form the set of all integers? The only possible candidates are the replacement schema and comprehension schema. Comprehension only lets you isolate a subset of an already existing set; replacement only lets you form the range of a definable single-valued function applied to an existing set. It's not hard to see (prove, by induction on formulas) that neither of these can yield an infinite set if applied to finite sets, which is why the hereditarily finite sets provide a model of ZFC-AxInfinity. – BrianO Jan 31 '16 at 6:26
• @BrianO Think I'm starting to understand it a little better now, thanks for your help. – ata Jan 31 '16 at 15:34
• @ata You're welcome. – BrianO Jan 31 '16 at 15:37
https://mathzsolution.com/what-is-exponentiation/

# What Is Exponentiation?
Is there an intuitive definition of exponentiation?
In elementary school, we learned that $a^b = \underbrace{a\cdot a\cdots a}_{b\text{ times}}$, where $b$ is a positive integer.
Then later on this was expanded to include rational exponents, so that $a^{m/n} = \sqrt[n]{a^m}$. From there we could evaluate decimal exponents like $4^{3.24}$ by first converting to a fraction.
However, even after learning Euler’s Identity, I feel as though there is no discussion on what exponentiation really means. The definitions I found are either overly simplistic or unhelpfully complex. Once we stray from the land of rational powers into real powers in general, is there an intuitive definition or explanation of exponentiation?
I am thinking along the lines of, for example, $2^\pi$ or $3^{\sqrt2}$ (or any other irrational power, really). What does this mean? Or, is there no real-world relationship?
To draw a parallel to multiplication:
If we consider the expression $e\cdot \sqrt5$, I could tell you that this represents the area of a rectangle with side lengths $e$ cm and $\sqrt5$ cm. Or maybe $e \cdot \pi$ is the cost of $\pi$ kg of material that costs $e$ dollars per kg.
Of course these quantities would not be exact, but the underlying intuition does not break down. The idea of repeated addition still holds, just that fractional parts of terms, rather than the entire number, are being added.
So does such an intuition for exponentiation exist? Or is this one of the many things we must accept with proof but not understanding?
This question stems from trying to understand complex exponents including Euler’s identity and $2^i$, but I realized that we must first understand reals before moving on the complex numbers.
My chief understanding of the exponential and the logarithm come from Spivak’s wonderful book Calculus. He devotes a chapter to the definitions of both.
Think of exponentiation as some abstract operation $f_a$ ($a$ is just some index, but you’ll see why it’s there) that takes a natural number $n$ and spits out a new number $f_a(n)$. You should think of $f_a(n) = a^n$.
To match our usual notion of exponentiation, we want it to satisfy a few rules, most importantly $f_a(n+m) = f_a(n)f_a(m)$. Like how $a^{n+m} = a^na^m$.
Now, we can extend this operation to the negative integers using this rule: take $f_a(-n)$ to be $1/f_a(n)$. Then $f_a(0) = f_a(n-n) = f_a(n)f_a(-n) = 1$, like how $a^0=1$.
Then we can extend the operation to the rational numbers, by taking $f_a(n/m) = \sqrt[m]{f_a(n)}$. Like how $a^{n/m} = \sqrt[m]{a^n}$.
Now, from here we can look to extend $f_a$ to the real numbers. This takes more work than what’s happened up to now. The idea is that we want $f_a$ to satisfy the basic property of exponentiation: $f_a(x+y)=f_a(x)f_a(y)$. This way we know it agrees with usual exponentiation for natural numbers, integers, and rational numbers. But there are a million ways to extend $f_a$ while preserving this property, so how do we choose?
Answer: Require $f_a$ to be continuous.
This way, we also have a way to evaluate $f_a(x)$ for any real number $x$: take a sequence of rational numbers $x_n$ converging to $x$, then $f_a(x)$ is $\lim_{n\to\infty} f_a(x_n)$. This seems like a pretty reasonable property to require!
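To make the limit definition concrete, here is a small Python sketch (my addition, not the author's) that exponentiates $2$ by exact rational truncations of $\pi$ and watches the values converge; the floating-point `**` is used only to evaluate each rational power numerically:

```python
import math
from fractions import Fraction

# Truncations of pi to k decimal places, as exact rational numbers.
approximations = [Fraction(int(math.pi * 10**k), 10**k) for k in range(8)]

for r in approximations:
    # 2**r for rational r is defined via integer powers and roots;
    # here we simply evaluate it in floating point to watch the limit.
    print(r, 2.0 ** (r.numerator / r.denominator))

print("2**pi =", 2.0 ** math.pi)  # about 8.8249778
```

Each extra digit of the exponent pins down more digits of the result, which is exactly the continuity requirement at work.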
Now, actually constructing a function that does this is hard. It turns out it's easier to define its inverse function, the logarithm $\log(z)$, which is the area under the curve $y=1/x$ from $1$ to $z$, for $z>0$. Once you've defined the logarithm, you can define its inverse $\exp(z) = e^z$. You can then prove that it has all the properties of the exponential that we wanted, namely continuity and $\exp(x+y)=\exp(x)\exp(y)$. From here you can change the base of the exponential: $a^x = (e^{\log a})^x = e^{x\log a}$.
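The area definition is easy to check numerically. This sketch (my addition, not Spivak's construction) integrates $1/x$ with the midpoint rule and compares the result against the built-in logarithm:

```python
import math

def log_as_area(z, n=100_000):
    """Approximate log(z) as the area under y = 1/x from 1 to z (midpoint rule)."""
    h = (z - 1) / n
    return sum(h / (1 + (i + 0.5) * h) for i in range(n))

for z in (2.0, math.e, 10.0):
    print(z, log_as_area(z), math.log(z))
```

The agreement to many decimal places is a good hint that the area really does behave like a logarithm, e.g. the area from $1$ to $ab$ splits into the areas for $a$ and $b$.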
To conclude: the real exponential function $\exp$ is defined (in fact uniquely) to be a continuous function $\mathbb{R}\to\mathbb{R}$ satisfying the identity $\exp(x+y)=\exp(x)\exp(y)$ for all real $x$ and $y$. One way to interpret it for real numbers is as a limit of exponentiating by rational approximations. Its inverse, the logarithm, can similarly be justified.
Finally, Euler's formula $e^{ix} = \cos(x)+i\sin(x)$ is what happens when you take the Taylor series expansion of $e^x$ and formally use it as its definition in the complex plane. This is more removed from intuition; it's really a bit of formal mathematical symbol-pushing.
http://www.amsi.org.au/ESA_middle_years/Year6/Year6_md/Year6_1d.html

# Year 6
## Number and Algebra
### Connecting fractions, decimals and percentages
There is a relationship between decimals, fractions and percentages. Whilst each of these areas is often taught separately, there is value in demonstrating how fractions, decimals and percentages are related.
Decimals are a convenient and useful way of writing fractions with denominators 10, 100, 1000 and so on.
So $$\dfrac{3}{10}$$ is written as 0.3, $$\dfrac{2}{100}$$ is written as 0.02, $$\dfrac{11}{100}$$ is written as 0.11 and we write $$\dfrac{434}{1000}$$ as 0.434 in decimal form.
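These correspondences can be demonstrated directly with Python's exact-arithmetic `Fraction` type (an illustration of mine, not part of the original page):

```python
from fractions import Fraction

# Each decimal names exactly the same number as its fraction form.
assert Fraction("0.3")   == Fraction(3, 10)
assert Fraction("0.02")  == Fraction(2, 100)
assert Fraction("0.11")  == Fraction(11, 100)
assert Fraction("0.434") == Fraction(434, 1000)

print(Fraction("0.434"))  # 217/500 -- the same value in lowest terms
```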
This is best done with simple, familiar fractions, decimals and percentages to start with.
Even though there are different ways of writing a number, the value remains the same.
https://tex.stackexchange.com/questions/409869/stringstrings-package-redefines-math-symbols

# stringstrings package redefines math symbol(s)
I am loading stringstrings package in my document and the double vertical bar \| (in math mode) is printed as |0 (single vertical bar followed by zero). I checked the stringstrings.sty file and noticed indeed the following lines
\def\PipeCode{0}
\def\EncodedPipe{\EscapeChar\PipeCode}
\def\Pipe{|}
\let\|\EncodedPipe
Below, two minimal working examples showing the issue.
\documentclass{article}
\begin{document}
Without {\tt stringstrings}: $\|$ $$\|$$
\end{document}
\documentclass{article}
\usepackage{stringstrings}
\begin{document}
With {\tt stringstrings}: $\|$ $$\|$$
\end{document}
I suppose stringstrings does it due to some string manipulation issues, but is there a way to restore the original symbol, compatibly with the purposes of the package? Besides, are there any other redefinitions I should be aware of?
Edit: I sort of fixed the issue by redefining the command \| myself in the preamble:
\let\doublebar\|
\usepackage{stringstrings}
\let\|\doublebar
but I am afraid it might clash with the package. Moreover, page 16 of the manual lists (some of?) the redefinitions, but I couldn't understand how the thing is supposed to be fixed.
• Sorry, I did that package when I was young and foolish, which is to say, before I knew about this site. – Steven B. Segletes Jan 11 '18 at 14:01
• You shouldn't be using \| anyhow: \lVert and \rVert are better (they need amsmath). On the other hand, there are alternatives to stringstrings. – egreg Jan 11 '18 at 14:06
• Thanks @egreg, I am in fact loading xstring as well, but due to some issues with expansion and my not-so-high expertise with TeX hacks I found stringstrings package very useful for what I was looking for. – AndreasT Jan 11 '18 at 14:18
• @AndreasT Maybe you can ask about those issues. – egreg Jan 11 '18 at 14:19
You can "save" the definition of \| before you load the stringstrings package using, for example, \let\pipe\|. After loading stringstrings you can now use \pipe instead of \|.
Here's a complete MWE:
\documentclass{article}
\let\pipe\|
\usepackage{stringstrings}
\begin{document}
With {\tt stringstrings}: $\|$ $$\|$$ $\pipe$
\end{document}
which produces:
After loading stringstrings you could put \let\|\pipe but this could, conceivably, break something defined by stringstrings so I suggest using \pipe instead. Instead of \pipe you can, of course, call this anything you like - although I'd recommend avoiding existing LaTeX command names. For example, \let\cow\| would work equally well.
• Thanks @Andrew for your prompt reply, this is what I also thought. I just find it difficult to convince the other co-authors to type \pipe or \cow :) in place of \|, so I guess I'll take the risks of directly redefining \|. – AndreasT Jan 11 '18 at 14:14
• @AndreasT Pity, \cow has a certain appeal... – Andrew Jan 11 '18 at 19:34
http://libros.duhnnae.com/2017/jul6/150036789686-Comparison-of-structure-and-transport-properties-of-concentrated-hard-and-soft-sphere-fluids-Condensed-Matter-Soft-Condensed-Matter.php

# Comparison of structure and transport properties of concentrated hard and soft sphere fluids - Condensed Matter > Soft Condensed Matter
Abstract: Using Newtonian and Brownian dynamics simulations, the structural and transport properties of hard and soft spheres have been studied. The soft spheres were modeled using inverse power potentials $V\sim r^{-n}$, with $1/n$ the potential softness. Although the pressure, diffusion coefficient and viscosity depend at constant density on the particle softness up to extremely high values of $n$, we show that scaling the density with the freezing point for every system effectively collapses these parameters for $n\geq 18$, including hard spheres, for large densities. At the freezing points, the long range structure of all systems is identical when the distance is measured in units of the interparticle distance, but differences appear at short distances due to the different shape of the interaction potential. This translates into differences at short times in the velocity and stress autocorrelation functions, although they concur to give the same value of the corresponding transport coefficient for the same density-to-freezing ratio; the microscopic dynamics also affects the short time behaviour of the correlation functions and absolute values of the transport coefficients, but the same scaling with the freezing density works for Newtonian or Brownian dynamics. For hard spheres, the short time behaviour of the stress autocorrelation function has been studied in detail, confirming quantitatively the theoretical forms derived for it.
Authors: Erik Lange, Jose B. Caballero, Antonio M. Puertas, Matthias Fuchs
Source: https://arxiv.org/
https://www.physicsforums.com/threads/thermal-radiation-in-qm.17102/ | 1. Mar 26, 2004
### Hydr0matic
The quanta first appeared in modern physics when investigating the nature of thermal radiation, or blackbody radiation. In a sense, it was the beginning of QM. Yet, even though I've read a lot about both BB radiation and quantum theory, I can't really tell you exactly how BB radiation fits into QM's description of radiation.
If you read any basic QM text today you'd probably first read about Planck and how he solved the problem with BB radiation by introducing the quanta. You'd probably also read about the Bohr model and Schrödinger waves and how photons are emitted when electrons shift energy levels. What I've noticed about these texts, though, is that they never make any connection between these two types of radiation. It's as though they were two separate phenomena.
Many texts also give you the impression that blackbody radiation is something unreal, not present in nature. They don't clarify that it's the blackbody that's ideal and unnatural, not the radiation. The fact that BB radiation is emitted by all matter is always left out.
I think this is intentional though, not to confuse the reader. Because, to be honest, I'd be confused when first reading how all matter emits a continuous spectrum of radiation, and then reading about atoms and how all matter emits discrete spectra of radiation.
Please enlighten me: how does thermal radiation, a phenomenon so abundant in nature, fit into QM's description of radiation?
2. Mar 26, 2004
### TeV
Without oversimplifying too much, I'd advise you to remember that Planck's constant is so small, and there are so many levels of E = nhf, that these almost innumerable chunks of energy, as fundamental steps, make the thermodynamic spectrum appear smooth and continuous at the macroscopic level...
3. Mar 26, 2004
### jcsd
For most (but not all) purposes, blackbody radiation can be thought of as EM radiation and therefore its quanta are photons (generally speaking, that is, as there's no reason to confine all blackbody radiation to the EM spectrum).
4. Mar 26, 2004
### Hydr0matic
I'm not questioning whether the thermal radiation spectrum is truly continuous or not. My question is... what's the source of this (truly or not) continuous thermal spectrum? ... Take a piece of lithium for example... the atoms in such a piece emit rather few discrete spectral lines, yet... it also emits a continuous thermal radiation spectrum... where do those photons come from?
5. Mar 27, 2004
### TeV
Hydr0matic,
In all cases, the origin of EM radiation is the sudden movement/shifting of charge.
But there is a difference in origin between discrete radiation (strict spectral lines) in, say, excited gases (electron transitions from one "orbit" energy level to another at a strict energy), a property inherent to a single atom's energy levels (which serve as a fingerprint identifying the elements), and the "continuous" thermal radiation arising from atomic collisions in a gas, due to the entropy of the whole system of atoms/molecules.
P.S. Erratum for my first post: "=" is to be interpreted as proportionality, "~".
6. Mar 27, 2004
### TeV
To be even more precise on thermal energy, the radiation associated with it, entropy, etc.:
When photons of a given energy equilibrate with a bulk of matter, the thermal energy of the atoms, molecules, or whatever the matter's parts are, is comparable to the energy of the photons. A body in equilibrium is called a BB, and the wavelength at which a BB with temperature T has the greatest radiant power is given by Wien's law (albeit the whole EM spectrum contributes).
Of course, absolute equilibrium can never be established (thermodynamic laws viewed through the entropy concept), and the BB is just an idealization.
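Wien's law as stated above can be checked numerically against Planck's radiance formula; below is a minimal sketch, where the temperature and the brute-force wavelength grid are illustrative choices, not from the thread:

```python
import math

# Physical constants (SI units)
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
kB = 1.380649e-23     # Boltzmann constant, J/K

def planck(lam, T):
    """Planck spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    return (2.0 * h * c ** 2 / lam ** 5) / (math.exp(h * c / (lam * kB * T)) - 1.0)

T = 5800.0                                   # roughly the solar surface temperature, K
lams = [i * 1e-9 for i in range(100, 3000)]  # 100 nm .. 3 um, 1 nm steps
peak = max(lams, key=lambda lam: planck(lam, T))  # wavelength of maximum radiance

wien = 2.897771955e-3 / T                    # Wien displacement law: lambda_max = b / T
print(peak * 1e9, wien * 1e9)                # both come out near 500 nm
```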
7. Mar 27, 2004
### Hydr0matic
So you're saying that the origin of the thermal radiation is the general movement/acceleration of the atoms/charges in the system (for example a gas) .. correct ?
That's the point of my original question - the origin of this radiation is very classical in nature... just accelerating charges emitting EM waves..
It just seems so incompatible with QM ..
Follow-up question... How does QM explain the distribution of this thermal radiation ?
I have read tons about Planck and BB radiation, but in real life, the things that emit this radiation are nothing like blackbodies, nor are they ovens with SHM-oscillator walls....
8. Mar 27, 2004
### Hydr0matic
Yes, but why ? What kind of process is taking place here, and how does this result in thermal radiation distributed like that of a BB ?
Like I just said, a real particle system (any piece of matter) is nothing like the system from which Planck derived his law...
9. Mar 27, 2004
### TeV
1. Emitting and absorbing, emitting and reabsorbing... That's the mechanism, and it is a quantum mechanical one. Only discrete amounts of E = nhf are accepted or emitted.
2. You must agree that in both the classical and the quantum model the thermal energy must be finite. For a given temperature, the energy corresponding to the intensity at extra-high frequencies goes unrealistically higher and higher in the classical theory. That's the moment when Planck came in with his model and derived Planck's law of BB radiation.
3. Not true. Every system in nature has finitely quantized energy levels.
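Point 2 above (the classical intensity growing without bound at high frequencies, the ultraviolet catastrophe) can be seen numerically by comparing the Rayleigh-Jeans and Planck formulas; a sketch with illustrative wavelengths and temperature:

```python
import math

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck(lam, T):
    """Planck spectral radiance per unit wavelength."""
    return (2.0 * h * c ** 2 / lam ** 5) / (math.exp(h * c / (lam * kB * T)) - 1.0)

def rayleigh_jeans(lam, T):
    """Classical (equipartition) spectral radiance per unit wavelength."""
    return 2.0 * c * kB * T / lam ** 4

T = 300.0  # room temperature, K

# Long wavelengths: the classical and quantum formulas agree closely.
ratio_long = rayleigh_jeans(1e-2, T) / planck(1e-2, T)    # 1 cm (microwave)

# Short wavelengths: Rayleigh-Jeans vastly overshoots Planck.
ratio_short = rayleigh_jeans(1e-6, T) / planck(1e-6, T)   # 1 micron

print(ratio_long, ratio_short)  # ~1 vs astronomically large
```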
10. Mar 27, 2004
### Hydr0matic
1. And what if the thermal radiation photons don't match the discrete energy levels within the atoms? Practically none of the TR will be part of this process... TR is continuous, atomic radiation is discrete.
2, 3. Did I suggest Rayleigh-Jeans was the answer? Did I suggest anything at all? ... "classical" implies continuity, not infinite energy.
11. Mar 27, 2004
### TeV
Is the matter at a certain intrinsic energy level (characteristic of its temperature) constituted only of permanently un-ionized atoms, or... are there perhaps ions, or an electron gas in metals, or mixtures of ANY exotic particles in nature if you want, that interchange energies and move so vigorously that interaction via photons actually occurs mainly in a ping-pong manner? Is the law of conservation of momentum satisfied when a photon, an electron and an atom interact? Is it possible that in some cases a vast energy contribution to the thermodynamic state of the bulk material is due to the dynamics of free electrons in the interspaces? Can a high energy level among atoms, in order to satisfy energy conservation, be interchanged even among nuclei via photons of adjacent atoms? Finally, are there thermodynamic levels where nuclei exhibit a mass defect, according to Einstein's relation, in order to release energy?
And all these processes (and more of them not mentioned here), if they radiate energy, radiate it by Planck's law? Amazing, huh?
More ultraviolet "Jeans" to ponder about.
regards
12. Mar 28, 2004
### Hydr0matic
Now you've changed your answer... But anyway, you're basically talking about Compton scattering... and perhaps thermal bremsstrahlung and similar processes, right? ... Well, I don't buy it.
Scattering processes occur only when the photon wavelength matches the de Broglie wavelength of the subatomic particles, and this isn't naturally the case with thermal radiation. At normal temperatures TR is in the long-wavelength part of the spectrum, while scattering with subatomic particles occurs with X-rays and UV light. So I can't really see how there's any "ping-pong" action goin' on. And this isn't a radiative process anyway, so I don't see why Planck should apply to it. In fact, you haven't mentioned any process which Planck should apply to.
I could be wrong though, but even if I was... the process still isn't anything like the one Planck derived his law from, so I don't see why it's applicable.
But as you can see, there's a lot of stuff I don't see .. so I'm probably wrong ...
Why ? .. What you mentioned may explain why the spectrum is continuous, but I can't see how it explains the intensity distribution.
Last edited: Mar 28, 2004
13. Mar 29, 2004
### TeV
This is extracted from your first post, and maybe here lies the stumbling block of your confusion. It seems I completely overlooked the possibility that you were missing something quite fundamental: the point that Planck's law is derived from observing the radiating intensity of SOLID bodies (objects). This is emphasized in almost every textbook on the subject. I thought you understood that?
Again, rereading gives me the impression you may be referring to the difference in the radiating spectra of very rarefied gases (like the interstellar one), of incomparably smaller DENSITY than that of solid bodies.
If that turns out to be the case, then the explanation is quite elementary. So, my last shot: the radiation of a BB differs significantly from the radiation of highly rarefied gases. Every gas of such density, as already said, radiates dominantly in certain wavelength regions, the spectral lines and stripes characteristic of each chemical element, like fingerprints in humans.
On the other hand, all solid bodies radiate a continuous spectrum like a BB. This is due to their comparably higher density, where each particle of the body affects the radiation of the other particles of the large-number particle system. (The only parameter one would need to know is the density, not the chemical structure nor the details of the process in each separate case.) To get a general grasp of treating ALL solid-body systems regardless of details and energy processes, it is desirable to use Fermi quantum statistics. So, to repeat: Planck's law gives a continuous spectrum for all bodies under these terms, and the shape of the intensity-versus-wavelength curve has the same character as that of a BB.
Now,if anything else is bothering you,let someone else try to "enlighten" you.
I can't do any more.Most definitely you will not make me write Fermi statistics expressions here.
regards
14. Mar 29, 2004
### Hydr0matic
No, that's not my "issue". I was wondering why Planck's law is applicable to all solid bodies, given the fact that the derivation involved some very specific circumstances - an oven in thermal equilibrium with SHM oscillators in the walls, constrained in their emitting capabilities. These specific criteria are not found in an arbitrary solid body, so why does PL apply to it?
If I were to figure out some sort of theory on the social structure of African lions, I can not apply this theory on cats, or birds ... [ A very bad analogy I know, but I'm desperate here ]
My point was this - a general law like Planck's, that applies to basically all matter, should be derived from an equally general model ...
NOW, I know - since you told me - this can be achieved with Fermi functions (Fermi-Dirac distribution?) and I'm no longer wondering about this.
On the other hand, here's another question ... All these statistical distribution functions - http://ece-www.colorado.edu/~bart/book/book/chapter2/ch2_5.htm#2_5_1
... they also seem very classical in nature. With this statistical derivation, is quantization really necessary? I guess my question is - could one derive a classical law describing BB radiation with the help of statistical distribution functions?
Hey, it was you who brought up the gas, not me ...
I, on the other hand, gave a SOLID body example ..
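The distribution functions linked to above do have a classical look, but the ±1 in the denominator is the quantum part. A sketch (taking μ = 0 and measuring energies in units of kT as illustrative choices) shows that Fermi-Dirac and Bose-Einstein collapse onto the classical Maxwell-Boltzmann form far above the chemical potential:

```python
import math

def occupation(E, mu, kT, kind):
    """Mean occupation number of a single-particle state of energy E."""
    x = (E - mu) / kT
    if kind == "MB":          # Maxwell-Boltzmann (classical)
        return math.exp(-x)
    if kind == "FD":          # Fermi-Dirac: +1 in the denominator
        return 1.0 / (math.exp(x) + 1.0)
    if kind == "BE":          # Bose-Einstein: -1 in the denominator
        return 1.0 / (math.exp(x) - 1.0)
    raise ValueError(kind)

kT, mu = 1.0, 0.0
# Far above the chemical potential, all three statistics converge:
E = 10.0
vals = {k: occupation(E, mu, kT, k) for k in ("MB", "FD", "BE")}
print(vals)  # all close to exp(-10)

# Near the chemical potential they differ markedly: FD < MB < BE.
print([occupation(1.0, mu, kT, k) for k in ("FD", "MB", "BE")])
```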
15. Mar 29, 2004
### TeV
O.K., I'm overworked, and in many cases too quick in going through somebody else's texts...
The important thing is that you now have a better feeling for the problem.
Yup, the word "gases" was used in many ways in these posts: rarefied gases, then in the phrase "electron gas" in metals, and at one stage I was even referring to the fusion processes of nuclei, a la the Sun's spectrum (which still obeys Planck's law quite well despite the Sun being called a gaseous giant and having a nonlinear temperature distribution).
regards
https://physics.stackexchange.com/questions/306952/what-does-it-mean-to-divide-by-the-degeneracy-of-the-state-in-this-textbook-exce | # What does it mean to divide by the degeneracy of the state in this textbook excerpt?
This section of Griffiths' Introduction to Quantum Mechanics deals with the Boltzmann, Fermi-Dirac, and Bose-Einstein distributions. I don't understand this line (highlighted in yellow):
Let's talk only of Maxwell-Boltzmann here to keep it simple. Originally, we had
$$N_n=d_ne^{-(\alpha+\beta E_n)}$$
This was explained in the book to be the equation for the most probable occupation number for distinguishable particles. Then, in the image above, the author divides by $d_n$ to result in "the number of particles in a particular state with that energy", but I don't quite understand this. Could someone explain this bit in simpler terms? Or with a simple example?
• I think equation 5.103 is “a mean occupation of the states with energy ε”, or, in other words, it’s “a probability of occupation”. – Orient Nov 16 '19 at 6:33
The formulas in Griffiths are correct, but the explanation is pretty clumsy, because he's basically done the derivation 'in reverse'. For simplicity I'll just talk about the distinguishable particle case, but the others are similar.
The derivation in the forward direction looks like this: the Maxwell-Boltzmann distribution is the distribution that maximizes the entropy given fixed energy. Here, the entropy is defined as $$S \sim \sum p_i \log p_i$$ and the $p_i$ are the probabilities of occupancies of each state (not each energy level!). If you carry out the constrained optimization, using a similar method to Griffiths, you'll arrive at equation 5.103.
Now, the probability of occupancy of a state only depends on its energy. Let's say that the probability of occupancy of a state at some energy is $p_n = 1/2$, and the degeneracy is $d_n = 10^6$. Then by the law of large numbers, the total occupancy $N_n$ of this entire energy level will be very close to $p_n d_n = (1/2) 10^6$. The occupancy could certainly be more or less, but the probability distribution will be peaked about this central value.
The only problem with this approach is that the definition of $S$ is a little unintuitive. So instead, Griffiths works only with occupancy numbers $N_n$, so he can just "count the number of ways" to achieve those numbers instead of dealing with the probabilities $p_n$. Then, he implicitly takes the high $d_n$ limit, so that $N_n \approx p_n d_n$, and calculates $p_n = N_n / d_n$.
The high $d_n$ limit is necessary so that the probability estimated by this ratio is accurate. For example, if $p_n = 2/3$ but $d_n = 10$, the most likely occupancy number could be $N_n = 7$. Then dividing would give the approximation $p_n \approx 0.7$. For our calculated value of $p_n$ to be good, we must take $d_n$ to infinity.
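The law-of-large-numbers step above can be illustrated with a quick simulation. The probability p = 2/3 and the degeneracies 10 and 10^6 echo the numbers used in this answer; the independent-sampling scheme is an illustrative assumption, not Griffiths' derivation:

```python
import random

random.seed(0)  # fixed seed for reproducibility

def simulate_occupancy(p, d):
    """Fill d degenerate states independently with occupation probability p
    and return the observed fraction occupied, i.e. N_n / d_n."""
    occupied = sum(1 for _ in range(d) if random.random() < p)
    return occupied / d

p = 2.0 / 3.0
small = simulate_occupancy(p, 10)       # with d_n = 10, N_n / d_n can miss p badly
large = simulate_occupancy(p, 10 ** 6)  # with d_n = 10^6, N_n / d_n pins down p
print(small, large)
```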
A final muddy point is that Griffiths accidentally calls the probabilities $p_n$ "the most likely occupancy numbers of a state", even though this makes no sense because $p_n$ isn't even an integer, it's a probability between $0$ and $1$. This clumsy wording is because Griffiths has swept all of the probability language under the rug in favor of occupancy numbers, but it's just not right.
• Honestly, I've been struggling with some of the wording in this book for 2 semesters now. Do you have any resources that ideally explain the entire book in other words or maybe just explains this section in the book more intuitively? – DarthVoid Jan 24 '17 at 18:17
• @DarthVoid I had basically the same problem with Griffiths, and ended up having to relearn everything. I think Shankar is a good alternative reference. If you already know stat mech, you can just flip to the back of most books on it for a better derivation of these distributions. – knzhou Jan 24 '17 at 18:26
• I'll check out Shankar's book, thanks for the tip. – DarthVoid Jan 24 '17 at 21:43
https://physics.stackexchange.com/questions/146849/why-is-time-evolution-of-wavefunctions-non-trivial | # Why is time evolution of wavefunctions non-trivial?
(Note: This post focuses on a single simple example, however I'm asking about the error in general in my logic).
Consider the infinite potential well "particle in a box" system described by
$$V(x)=\begin{cases}0&\text{if }0<x<L\\\infty&\text{otherwise}\end{cases}.$$
It's fairly easy to find the wavefunctions $\psi_n(x)=\langle x\vert E_n\rangle$ by solving the time-independent Schroedinger equation:
$$\psi_n(x)=\sqrt\frac{2}{L}\sin\left(\frac{n\pi}{L}x\right)$$
Now, since $\mathcal{\hat H}$ is Hermitian we know there is a complete set of eigenstates $\vert E_n\rangle$ such that, for any initial state $\vert\psi,0\rangle$ we can write
$$\vert\psi,0\rangle = \sum_k a_k\vert E_k\rangle$$
The problem of evolving the state $\vert\psi,0\rangle$ in time is easily reduced to
$$\vert\psi,t\rangle = \sum_k a_k e^{-iE_k t/\hbar}\vert E_k\rangle$$
But the wavefunction of this state is given by
$$\Psi(x,t) =\sum_k a_k e^{-iE_k t/\hbar}\psi_k(x) = \sum_k a_k\sqrt{\frac 2 L}e^{-iE_k t/\hbar}\sin\left(\frac{k\pi}{L}x\right)$$
and taking $\vert\cdot\vert^2$ to obtain the probability distribution yields a time-independent function. Hence the time evolution of the probability distribution of this system is apparently trivial for any initial state, but I have heard from multiple sources and a demonstration applet that even for a superposition of two stationary states the particle oscillates throughout the box. What have I done wrong here?
• $\lvert x + y \rvert \neq \lvert x \rvert + \lvert y \rvert$ – ACuriousMind Nov 14 '14 at 22:33
• @ACuriousMind Unbelievable... I get so caught up in equations that I forget little mathematical tidbits like that. If you post that as an answer I'll accept. – theage Nov 14 '14 at 22:38
Are you asking why taking the squared modulus of a superposition of (eigen)states turns out to be considerably more complicated than the squared modulus of a single eigenstate? If this is so, I'd say because eigenstates of different energies evolve differently, and when you take the superposition and consider the squared modulus you have to take into account all the interference terms like $$|a+b|^2 = |a|^2 + |b|^2 + 2 \Re (ab^*)$$ which are usually highly non-trivial.
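The interference term in the identity above is exactly what restores the time dependence. A sketch for an equal superposition of the two lowest infinite-well states, in convenience units with ħ = 1, 2m = 1, L = 1 (so E_n = (nπ)²):

```python
import math
import cmath

L = 1.0

def E(n):
    """Energy levels of the infinite square well (hbar = 1, 2m = 1)."""
    return (n * math.pi) ** 2

def psi(n, x):
    """Stationary-state wavefunctions sqrt(2/L) sin(n pi x / L)."""
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)

def density(x, t):
    """|Psi(x,t)|^2 for an equal superposition of the n=1 and n=2 states."""
    amp = (psi(1, x) * cmath.exp(-1j * E(1) * t)
           + psi(2, x) * cmath.exp(-1j * E(2) * t)) / math.sqrt(2.0)
    return abs(amp) ** 2

x = 0.25
t_half = math.pi / (E(2) - E(1))   # half a beat period of the cross term
print(density(x, 0.0), density(x, t_half))  # ~2.914 vs ~0.086: clearly time-dependent
```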
• Yes, this is right - in this analysis I assumed $\vert x+y\vert = \vert x\vert + \vert y\vert$. Thanks. – theage Nov 14 '14 at 22:46
• @theage: You, probably, thought of orthogonality of different $\psi_n$, but for that one should integrate. The integral is indeed trivial. – Vladimir Kalitvianski Nov 14 '14 at 23:02
Edit after seeing ACuriousMind's and glance's contributions... they have the answer sorted out above.
An interesting experimental example of this is Zewail's work, for example this paper, which is not behind a 'pay wall', where the evolution on a femtosecond timescale of a molecular vibrational 'wavepacket' made up of a superposition of many states was observed.
• I think you can... isn't that guaranteed by the TISE? – theage Nov 14 '14 at 22:38
• @theage - Ok with the TISE (time indep Schrod. Eq I guess) you get lots of nice stationary states and with a long laser pulse (for example) you could excite just one of these states - but then you would not have a very good time zero for your system - you would not know exactly when it was excited during the laser pulse - does that make sense? – tom Nov 14 '14 at 22:44
• One way or the other, unless I'm extremely far from the mark a small time translation isn't the difference between trivial and nontrivial evolution. I made a mathematical error, but thanks for the insights anyway. – theage Nov 14 '14 at 22:50
• @theage - I now see the point of what acuriousmind and glance have put - so I edit my answer... – tom Nov 14 '14 at 22:51
https://en.wikipedia.org/wiki/Sequent | # Sequent
For other uses, see Sequent (disambiguation).
In mathematical logic, a sequent is a very general kind of conditional assertion.
$A_1,\,\dots,A_m \,\vdash\, B_1,\,\dots,B_n.$
A sequent may have any number m of condition formulas Ai (called "antecedents") and any number n of asserted formulas Bj (called "succedents" or "consequents"). A sequent is understood to mean that if all of the antecedent conditions are true, then at least one of the consequent formulas is true. This style of conditional assertion is almost always associated with the conceptual framework of sequent calculus.
## Introduction
### The form and semantics of sequents
Sequents are best understood in the context of general logical assertions, which may be classified into the following three cases.
1. Unconditional assertion. No antecedent formulas.
• Example: ⊢ B
• Meaning: B is true.
2. Conditional assertion. Any number of antecedent formulas.
1. Simple conditional assertion. Single consequent formula.
• Example: A1, A2, A3B
• Meaning: IF A1 AND A2 AND A3 are true, THEN B is true.
2. Sequent. Any number of consequent formulas.
• Example: A1, A2, A3B1, B2, B3, B4
• Meaning: IF A1 AND A2 AND A3 are true, THEN B1 OR B2 OR B3 OR B4 is true.
Thus sequents are a generalization of simple conditional assertions, which are a generalization of unconditional assertions.
The word "OR" here is the inclusive OR.[1] The motivation for disjunctive semantics on the right side of a sequent comes from three main benefits.
1. The symmetry of the classical inference rules for sequents with such semantics.
2. The ease and simplicity of converting such classical rules to intuitionistic rules.
3. The ability to prove completeness for predicate calculus when it is expressed in this way.
All three of these benefits were identified in the founding paper by Gentzen (1934, p. 194).
Not all authors have adhered to Gentzen's original meaning for the word "sequent". For example, Lemmon (1965) used the word "sequent" strictly for simple conditional assertions with one and only one consequent formula.[2] The same single-consequent definition for a sequent is given by Huth & Ryan 2004, p. 5.
### Syntax details
In a general sequent of the form
$\Gamma\vdash\Sigma$
both Γ and Σ are sequences of logical formulas, not sets. Therefore both the number and order of occurrences of formulas are significant. In particular, the same formula may appear twice in the same sequence. The full set of sequent calculus inference rules contains rules to swap adjacent formulas on the left and on the right of the assertion symbol (and thereby arbitrarily permute the left and right sequences), and also to insert arbitrary formulas and remove duplicate copies within the left and the right sequences. (However, Smullyan (1995, pp. 107–108), uses sets of formulas in sequents instead of sequences of formulas. Consequently the three pairs of structural rules called "thinning", "contraction" and "interchange" are not required.)
The symbol ' $\vdash$ ' is often referred to as the "turnstile", "right tack", "tee", "assertion sign" or "assertion symbol". It is often read, suggestively, as "yields", "proves" or "entails".
### Properties
#### Effects of inserting and removing propositions
Since every formula in the antecedent (the left side) must be true to conclude the truth of at least one formula in the succedent (the right side), adding formulas to either side results in a weaker sequent, while removing them from either side gives a stronger one. This is one of the symmetry advantages which follows from the use of disjunctive semantics on the right hand side of the assertion symbol, whereas conjunctive semantics is adhered to on the left hand side.
#### Consequences of empty lists of formulas
In the extreme case where the list of antecedent formulas of a sequent is empty, the consequent is unconditional. This differs from the simple unconditional assertion because the number of consequents is arbitrary, not necessarily a single consequent. Thus for example, ' ⊢ B1, B2 ' means that either B1, or B2, or both must be true. An empty antecedent formula list is equivalent to the "always true" proposition, called the "verum", denoted "⊤". (See Tee (symbol).)
In the extreme case where the list of consequent formulas of a sequent is empty, the rule is still that at least one term on the right be true, which is clearly impossible. This is signified by the 'always false' proposition, called the "falsum", denoted "⊥". Since the consequence is false, at least one of the antecedents must be false. Thus for example, ' A1, A2 ⊢ ' means that at least one of the antecedents A1 and A2 must be false.
One sees here again a symmetry because of the disjunctive semantics on the right hand side. If the left side is empty, then one or more right-side propositions must be true. If the right side is empty, then one or more of the left-side propositions must be false.
The doubly extreme case ' ⊢ ', where both the antecedent and consequent lists of formulas are empty is "not satisfiable".[3] In this case, the meaning of the sequent is effectively ' ⊤ ⊢ ⊥ '. This is equivalent to the sequent ' ⊢ ⊥ ', which clearly cannot be valid.
### Examples
A sequent of the form ' ⊢ α, β ', for logical formulas α and β, means that either α is true or β is true. But it does not mean that either α is a tautology or β is a tautology. To clarify this, consider the example ' ⊢ B ∨ A, C ∨ ¬A '. This is a valid sequent because either B ∨ A is true or C ∨ ¬A is true. But neither of these expressions is a tautology in isolation. It is the disjunction of these two expressions which is a tautology.
Similarly, a sequent of the form ' α, β ⊢ ', for logical formulas α and β, means that either α is false or β is false. But it does not mean that either α is a contradiction or β is a contradiction. To clarify this, consider the example ' B ∧ A, C ∧ ¬A ⊢ '. This is a valid sequent because either B ∧ A is false or C ∧ ¬A is false. But neither of these expressions is a contradiction in isolation. It is the conjunction of these two expressions which is a contradiction.
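Both examples can be verified mechanically with a brute-force truth-table check; representing formulas as Python predicates over an assignment is an illustrative choice:

```python
from itertools import product

def valid(antecedents, succedents, variables):
    """A sequent Gamma |- Sigma is valid iff every truth assignment making
    all antecedents true makes at least one succedent true."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(f(env) for f in antecedents):
            if not any(f(env) for f in succedents):
                return False  # counterexample assignment found
    return True

# The article's first example:  |- B v A, C v ~A   is valid ...
b_or_a = lambda e: e["B"] or e["A"]
c_or_not_a = lambda e: e["C"] or not e["A"]
print(valid([], [b_or_a, c_or_not_a], ["A", "B", "C"]))   # True

# ... even though neither succedent is a tautology on its own:
print(valid([], [b_or_a], ["A", "B", "C"]))               # False
print(valid([], [c_or_not_a], ["A", "B", "C"]))           # False
```

The dual example works the same way: with antecedents B ∧ A and C ∧ ¬A and an empty succedent list, no assignment satisfies both antecedents, so the sequent is valid.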
### Rules
Most proof systems provide ways to deduce one sequent from another. These inference rules are written with a list of sequents above and below a line. This rule indicates that if everything above the line is true, so is everything under the line.
A typical rule is:
$\frac{\Gamma,\alpha\vdash\Sigma\qquad \Gamma\vdash\alpha}{\Gamma\vdash\Sigma}$
This indicates that if we can deduce that $\Gamma,\alpha$ yields $\Sigma$, and that $\Gamma$ yields $\alpha$, then we can also deduce that $\Gamma$ yields $\Sigma$. (See also the full set of sequent calculus inference rules.)
## Interpretation
### History of the meaning of sequent assertions
The assertion symbol in sequents originally meant exactly the same as the implication operator. But over time, its meaning has changed to signify provability within a theory rather than semantic truth in all models.
In 1934, Gentzen did not define the assertion symbol ' ⊢ ' in a sequent to signify provability. He defined it to mean exactly the same as the implication operator ' ⇒ '. He wrote: "The sequent A1, ..., Aμ → B1, ..., Bν signifies, as regards content, exactly the same as the formula (A1 & ... & Aμ) ⊃ (B1 ∨ ... ∨ Bν)".[4] (Gentzen employed the right-arrow symbol between the antecedents and consequents of sequents. He employed the symbol ' ⊃ ' for the logical implication operator.)
In 1939, Hilbert and Bernays stated likewise that a sequent has the same meaning as the corresponding implication formula.[5]
In 1944, Alonzo Church emphasized that Gentzen's sequent assertions did not signify provability.
"Employment of the deduction theorem as primitive or derived rule must not, however, be confused with the use of Sequenzen by Gentzen. For Gentzen's arrow, →, is not comparable to our syntactical notation, ⊢, but belongs to his object language (as is clear from the fact that expressions containing it appear as premisses and conclusions in applications of his rules of inference)."[6]
Numerous publications after this time have stated that the assertion symbol in sequents does signify provability within the theory where the sequents are formulated. Curry in 1963,[7] Lemmon in 1965,[2] and Huth and Ryan in 2004[8] all state that the sequent assertion symbol signifies provability. However, Ben-Ari (2012, p. 69) states that the assertion symbol in Gentzen-system sequents, which he denotes as ' ⇒ ', is part of the object language, not the metalanguage.[9]
According to Prawitz (1965): "The calculi of sequents can be understood as meta-calculi for the deducibility relation in the corresponding systems of natural deduction."[10] And furthermore: "A proof in a calculus of sequents can be looked upon as an instruction on how to construct a corresponding natural deduction."[11] In other words, the assertion symbol is part of the object language for the sequent calculus, which is a kind of meta-calculus, but simultaneously signifies deducibility in an underlying natural deduction system.
### Intuitive meaning
A sequent is a formalized statement of provability that is frequently used when specifying calculi for deduction. In the sequent calculus, the name sequent is used for the construct, which can be regarded as a specific kind of judgment, characteristic to this deduction system.
The intuitive meaning of the sequent $\Gamma\vdash\Sigma$ is that under the assumption of Γ the conclusion of Σ is provable. Classically, the formulae on the left of the turnstile can be interpreted conjunctively while the formulae on the right can be considered as a disjunction. This means that, when all formulae in Γ hold, then at least one formula in Σ also has to be true. If the succedent is empty, this is interpreted as falsity, i.e. $\Gamma\vdash$ means that Γ proves falsity and is thus inconsistent. On the other hand an empty antecedent is assumed to be true, i.e., $\vdash\Sigma$ means that Σ follows without any assumptions, i.e., it is always true (as a disjunction). A sequent of this form, with Γ empty, is known as a logical assertion.
Of course, other intuitive explanations are possible, which are classically equivalent. For example, $\Gamma\vdash\Sigma$ can be read as asserting that it cannot be the case that every formula in Γ is true and every formula in Σ is false (this is related to the double-negation interpretations of classical logic in intuitionistic logic, such as Glivenko's theorem).
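This classical reading can be spelled out as a small truth-table check. The sketch below is only illustrative (encoding formulas as Python predicates over a valuation is our assumption, not anything from the literature cited here):

```python
from itertools import product

# Classical reading of Gamma |- Sigma: no valuation makes every formula in
# Gamma true while making every formula in Sigma false.
def sequent_valid(gamma, sigma, atoms):
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(f(v) for f in gamma) and not any(f(v) for f in sigma):
            return False  # found a counter-valuation
    return True

A = lambda v: v["A"]
B = lambda v: v["B"]
A_implies_B = lambda v: (not v["A"]) or v["B"]

print(sequent_valid([A, A_implies_B], [B], ["A", "B"]))  # → True  (modus ponens)
print(sequent_valid([A], [B], ["A", "B"]))               # → False (B need not hold)
```

An empty `gamma` makes the antecedent vacuously true and an empty `sigma` makes the succedent false, matching the conventions for $\vdash\Sigma$ and $\Gamma\vdash$ described above.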
In any case, these intuitive readings are only pedagogical. Since formal proofs in proof theory are purely syntactic, the meaning of (the derivation of) a sequent is only given by the properties of the calculus that provides the actual rules of inference.
Barring any contradictions in the technically precise definition above, we can describe sequents in their introductory logical form. $\Gamma$ represents a set of assumptions that we begin our logical process with, for example "Socrates is a man" and "All men are mortal". The $\Sigma$ represents a logical conclusion that follows under these premises. For example "Socrates is mortal" follows from a reasonable formalization of the above points and we could expect to see it on the $\Sigma$ side of the turnstile. In this sense, $\vdash$ denotes the process of reasoning, or "therefore" in English.
## Variations
The general notion of sequent introduced here can be specialized in various ways. A sequent is said to be an intuitionistic sequent if there is at most one formula in the succedent (although multi-succedent calculi for intuitionistic logic are also possible). More precisely, the restriction of the general sequent calculus to single-succedent-formula sequents, with the same inference rules as for general sequents, constitutes an intuitionistic sequent calculus. (This restricted sequent calculus is denoted LJ.)
Similarly, one can obtain calculi for dual-intuitionistic logic (a type of paraconsistent logic) by requiring that sequents be singular in the antecedent.
In many cases, sequents are also assumed to consist of multisets or sets instead of sequences. Thus one disregards the order or even the numbers of occurrences of the formulae. For classical propositional logic this does not yield a problem, since the conclusions that one can draw from a collection of premises do not depend on these data. In substructural logic, however, this may become quite important.
Natural deduction systems use single-consequence conditional assertions, but they typically do not use the same sets of inference rules as Gentzen introduced in 1934. In particular, tabular natural deduction systems, which are very convenient for practical theorem-proving in propositional calculus and predicate calculus, were applied by Suppes (1957) and Lemmon (1965) for teaching introductory logic in textbooks.
## Etymology
Historically, sequents were introduced by Gerhard Gentzen in order to specify his famous sequent calculus.[12] In his German publication he used the word "Sequenz". However, in English, the word "sequence" is already used as a translation of the German "Folge" and appears quite frequently in mathematics. The term "sequent" was then coined in search of an alternative translation of the German expression.
Kleene[13] makes the following comment on the translation into English: "Gentzen says 'Sequenz', which we translate as 'sequent', because we have already used 'sequence' for any succession of objects, where the German is 'Folge'."
## Notes
1. ^ The disjunctive semantics for the right side of a sequent is stated and explained by Curry 1977, pp. 189–190, Kleene 2002, pp. 290, 297, Kleene 2009, p. 441, Hilbert & Bernays 1970, p. 385, Smullyan 1995, pp. 104–105, Takeuti 2013, p. 9, and Gentzen 1934, p. 180.
2. ^ a b Lemmon 1965, p. 12, wrote: "Thus a sequent is an argument-frame containing a set of assumptions and a conclusion which is claimed to follow from them. [...] The propositions to the left of '⊢' become assumptions of the argument, and the proposition to the right becomes a conclusion validly drawn from those assumptions."
3. ^ Smullyan 1995, p. 105.
4. ^ Gentzen 1934, p. 180.
2.4. The sequent A1, ..., Aμ → B1, ..., Bν signifies, as regards content, exactly the same as the formula
(A1 & ... & Aμ) ⊃ (B1 ∨ ... ∨ Bν).
5. ^ Hilbert & Bernays 1970, p. 385.
For the contentual interpretation, a sequent
A1, ..., Ar → B1, ..., Bs,
in which the numbers r and s are different from zero, is equivalent to the implication
(A1 & ... & Ar) → (B1 ∨ ... ∨ Bs)
6. ^ Church 1996, p. 165.
7. ^ Curry 1977, p. 184
8. ^ Huth & Ryan (2004, p. 5)
9. ^ Ben-Ari 2012, p. 69, defines sequents to have the form U ⇒ V for (possibly non-empty) sets of formulas U and V. Then he writes:
"Intuitively, a sequent represents 'provable from' in the sense that the formulas in U are assumptions for the set of formulas V that are to be proved. The symbol ⇒ is similar to the symbol ⊢ in Hilbert systems, except that ⇒ is part of the object language of the deductive system being formalized, while ⊢ is a metalanguage notation used to reason about deductive systems."
10. ^ Prawitz 2006, p. 90.
11. ^ See Prawitz 2006, p. 91, for this and further details of interpretation.
12. ^
13. ^ Kleene 2002, p. 441
https://www.physicsforums.com/threads/integral-of-a-gaussian.813373/ | # Integral of a gaussian
Tags:
1. May 11, 2015
### Ananthan9470
I need to evaluate $\int_{-\infty}^{\infty} x^p e^{-x^2} e^{ix}\, dx$. Can someone please give me some pointers on how to do this? I am completely lost. I just need some hints or something.
2. May 12, 2015
### PeroK
Is this homework?
3. May 12, 2015
### Ananthan9470
No. I'm trying to learn quantum mechanics and this thing keeps popping up.
4. May 12, 2015
### PeroK
You'll get a better response if you post it in homework. Even if you're learning on your own, it still counts.
Can you integrate it without the complex exponential?
5. May 12, 2015
### mathman
To get you started, if p is even you need only $\cos x$. If p is odd you need only $i\sin x$, where $e^{ix}=\cos x+i\sin x$.
Next integrate by parts to reduce the exponent from p to p-1, and continue until you get p = 0.
At the end you should have $\int_{-\infty}^{\infty}e^{-\frac{x^2}{2}}\cos x\,dx$.
As an afterthought, it might be easier to start from
$\int_{-\infty}^{\infty}e^{-\frac{x^2}{2}}\cos x\,dx$ and integrate by parts to increase the exponent of x.
Last edited: May 13, 2015
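As a numerical sanity check (an editorial sketch, not part of the thread): for $p = 0$ the original integral has the known closed form $\int_{-\infty}^{\infty} e^{-x^2}\cos x\,dx = \sqrt{\pi}\,e^{-1/4}$, which a straightforward trapezoid integration reproduces:

```python
import math

def integrand(x, p=0):
    # real part of x^p * e^{-x^2} * e^{ix}, for p = 0
    return x**p * math.exp(-x * x) * math.cos(x)

def trapezoid(f, a, b, n=100_000):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

numeric = trapezoid(integrand, -10.0, 10.0)   # tails beyond |x| = 10 are ~e^{-100}
exact = math.sqrt(math.pi) * math.exp(-0.25)  # known closed form for p = 0
print(numeric, exact)                         # both ≈ 1.38039
```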
http://mathoverflow.net/questions/109218/unknotting-number-and-crossing-number?answertab=votes | # Unknotting number and crossing number
It is well known that if c(K)=2n+1, then u(K) is less than n+1. It can not be sharper because of the trefoil knot. On the other hand, if c(K)=2n, then similarly we have u(K) is less than n+1. I think u(K)=n is impossible in this case, i.e. there does not exist a knot K with c(K)=2n and u(K)=n. Maybe it is fairly easy, but I have no idea how to deduce it. Any hint is welcome :)
Related question: mathoverflow.net/questions/108312/… – Ian Agol Oct 9 '12 at 14:03
Here is another proof inspired by Makoto Ozawa's ascending number. Assume there exists a knot diagram K with c(K)=2n and u(K)=n. By switching subsets of the crossing points one can get $2^{2n}$ different knot diagrams in total. We say two diagrams are connected if one diagram can be obtained from the other by switching one crossing point. If u(K)=n, it is not difficult to conclude that every unknot diagram is isolated. However this is impossible since each ascending diagram is connected to another ascending diagram, and both of them represent unknots. Hence that is a contradiction. – czy Oct 10 '12 at 13:25
https://crypto.stackexchange.com/questions/60293/is-this-rsa-sra-cipher-modification-secure/60298 | # Is this RSA / SRA cipher modification secure?
I have modified SRA algorithm to suit my problem and I'm wondering if it is still safe to use.
The main question is: does RSA encryption with an even public exponent remain a safe one-way function?
In the RSA algorithm, the encrypted message is (message)^e1 mod N.
As far as I understand, when I select e1 as an even number, the multiplicative inverse of e1 mod φ(N) won't exist.
If e1 is selected even, does the algorithm's strength remain unchanged? If not, what is a different way to publish an encryption key and assure everybody that I do not possess the decryption key?
If e1 is selected even, does the algorithm's strength remain unchanged?
Actually, as far as we know, this algorithm relies on a weaker security assumption (and hence is arguably stronger than RSA).
RSA is known to be breakable if you can factor the modulus; however it's possible that there is a way to break RSA without factoring. We don't know of such a way, but it may exist.
In contrast, it is well known that you can efficiently factor $n$ if you are given a method that takes $x^e \bmod n$, for $e$ even, and recovers a possible $x$ value. Hence there cannot be a (much) easier way to reconstruct $x$ than factoring $n$.
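This claim can be illustrated with toy numbers (an editorial sketch; the tiny primes are chosen ≡ 3 (mod 4) only so that square roots are easy to compute — none of this is from the original post):

```python
import math
import random

p, q = 1019, 1031            # toy primes, both ≡ 3 (mod 4) -- demo values only
n = p * q
e = 2                        # an even public exponent

# Non-injectivity: x and n - x always encrypt identically when e is even,
# so no single decryption exponent can exist.
x = 123456
assert pow(x, e, n) == pow(n - x, e, n)

# Rabin-style argument: an oracle returning *some* e-th root lets us factor n.
# We play the oracle ourselves, using the secretly known factors.
def some_sqrt(c):
    rp = pow(c, (p + 1) // 4, p)   # sqrt mod p, valid since p ≡ 3 (mod 4)
    rq = pow(c, (q + 1) // 4, q)   # sqrt mod q
    # combine via the Chinese Remainder Theorem
    return (rp * q * pow(q, -1, p) + rq * p * pow(p, -1, q)) % n

random.seed(0)
while True:
    x = random.randrange(2, n)
    y = some_sqrt(pow(x, 2, n))
    if y not in (x, n - x):        # roots disagree mod exactly one prime factor
        factor = math.gcd(x - y, n)
        break
print(sorted((factor, n // factor)))   # → [1019, 1031]
```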
On the other hand:
what is a different way to publish an encryption key and assure everybody that I do not possess the decryption key?
What makes you think you can convince anyone that, given $x^e \bmod n$ for even $e$, you cannot find $x$? If you know the factorization of $n$ (which, assuming that you are the guy that picked $n$, is a reasonable assumption), you can reconstruct a handful of possible $x$ values (no more than $\gcd(p-1, e) \times \gcd(q-1, e)$, which may be as small as 4), one of which is the correct value.
Now, what is the problem you're trying to solve? If you want to make sure that there is no 'decryption key', why don't you just, say, use a hash of the plaintext?
If there needs to be a 'decryption key' (and just that no one knows it), well, one way is to use one of the larger RSA challenge numbers as a normal RSA modulus (and use a large prime public exponent). However, we rarely need to do that; what are you trying to accomplish?
• I'm looking for an algorithm that is commutative and composed of one hashing step and one encryption step. Hashing is the part where no one knows a decryption key, and encryption is such that only I know the decryption key. – mroknocy Jun 25 '18 at 19:07
https://www.gradesaver.com/textbooks/science/physics/physics-for-scientists-and-engineers-a-strategic-approach-with-modern-physics-4th-edition/chapter-7-newton-s-third-law-conceptual-questions-page-176/6 | ## Physics for Scientists and Engineers: A Strategic Approach with Modern Physics (4th Edition)
According to Newton's third law, the force of the mosquito on the car is equal in magnitude to the force of the car on the mosquito. However, the resulting acceleration on each object is not equal. According to Newton's second law: F = ma. The force $F$ is equal in magnitude, but the mosquito's mass is much smaller than the car's mass. Therefore, the mosquito experiences a much greater acceleration than the car experiences as a result of the collision.
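A quick numeric illustration (the force and masses below are invented for the example, not taken from the book):

```python
F = 0.01               # N, equal-magnitude force on each body (assumed value)
m_mosquito = 2.5e-6    # kg (assumed)
m_car = 1.5e3          # kg (assumed)

a_mosquito = F / m_mosquito   # Newton's second law: a = F / m
a_car = F / m_car

# The ratio of accelerations equals m_car / m_mosquito ≈ 6 × 10^8,
# independent of the (equal) force.
print(a_mosquito / a_car)
```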
http://mathhelpforum.com/advanced-algebra/153930-group-action.html | 1. ## group action
I have the following question consisting of two parts: a) Let G be a finite group. Show that if H < G is a proper subgroup, then
G is not the union of the conjugates of H.
b) Show that if G acts transitively on a set X of size at least 2,
then some g in G acts without fixed points. (Hint: Use a.)
For a) I tried to use a counting argument.
Since |gHg^-1| = |H|, the total number of elements
is (|G|/|N(H)|).(|H|-1) <= (|G|/|H|).(|H|-1) <= ...
Does that help me in anything?
For b) I can say: let a, b in X; then there is g s.t. g.a=b,
and we can show that G_a=g^-1.G_b.g
From a) we can conclude there is an element g that
is not contained in any conjugate of H.
Can any one help on this question ? Thanks
2. Originally Posted by hgd7833
I have the following question consisting of two parts: a) Let G be a finite group. Show that if H<G then
G is not the union of conjugates of H ?
b) Show that if G acts transitively on a set X of size at least 2
then some g in G acts without fixed points (Hint: Use a)
For a) I tried to use a counting argument.
Since |gHg^-1|=|H| then The total number of elements
are (|G|/|N(H)|).(|H|-1)<= (|G|/|H|).(|H|-1) <= ...
Does that help me in any thing ?
For b) I can say: let a, b in X; then there is g s.t. g.a=b,
and we can show that G_a=g^-1.G_b.g
from a) we can conclude there is an element g such that
is not included in any conjugate subgroup of H.
Can any one help on this question ? Thanks
For (a) use the orbit-stabiliser theorem (your action will be conjugation)! You want to use the obrit-stabiliset theorem to find out stuff about the index of your group.
Again, (b) falls out quite quickly if you use the orbit-stabiliser theorem. However, I am unsure why you would need part (a). Have you come across the orbit-stabiliser theorem yet?..
3. For a) I would say, let H act on G by conjugation, then |G|=|Z(G)/\H|+ sum([H:C(a_i)] = |Z(G)/\H|+|O(H)|
So G is union of conjugates of H only if |G|= |O(H)| but this implies that |Z(G)/\H|= 0 which is impossible ( is this right ?)
For b) If G acts transitively on a set X then X has only one orbit. So |X| = |Fix(X)| + |O(x)|, so for every y in X there is g in G such that g.y = x, hence
y belongs to O(x). BUT we need a single g such that g.a is not a for every a in X.
Actually the question came in this way, and they gave the hint on b to use a.
5. Originally Posted by hgd7833
For a) I would say, let H act on G by conjugation, then |G|=|Z(G)/\H|+ sum([H:C(a_i)] = |Z(G)/\H|+|O(H)|
So G is union of conjugates of H only if |G|= |O(H)| but this implies that |Z(G)/\H|= 0 which is impossible ( is this right ?)
I am not entirely sure what you mean here, although I presume it is a perversion of the class equation. What do you mean by Z(G)/\H? Is it $Z(G) \setminus (Z(G) \cap H)$? This should be the centraliser of H, the elements which stay fixed under conjugation by H. And is O(H) the orbit of H? However, the way you have constructed this is, I believe, incorrect.
As you have covered the class equation, please go and look up the orbit-stabiliser theorem. It will tell you how the centraliser and the orbit of H are connected.
Originally Posted by hgd7833
For b) If G acts transitively on a set X then X has only one orbit. So |X|= | Fix(X)| + |O(x)| , so for every y in X there is g in G such that g.y=x hence
y belongs to O(x) . BUT we need a single g such that g.a is not a for every a in X.
Actually the question came in this way, and they gave the hint on b to use a.
No! Look up the orbit-stabiliser theorem! $|X| \neq |Fix(X)|+|O(X)|$, the orbit-stabiliser theorem tells you how these are connected!
I worked on this problem using your argument for a while. I think I got it now. I am letting the group G act on H by conjugation, i.e. g.H = gHg^-1,
so the number of conjugates of H is the order of the orbit of H. If G is the union of the conjugates, then every g in G belongs to one of the orbits,
but Fix(H) is not empty since e.H = eHe^-1 = H, so the identity belongs to Fix(H); hence the result follows.
But this implies b) follows trivially without using the orbit-stabilizer theorem. Right?
Thanks Swlabr
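A concrete sanity check of part a) with the smallest interesting example (an editorial sketch, not from the thread): for the proper subgroup H = {e, (0 1)} of S3, the union of all conjugates of H covers only 4 of the 6 elements.

```python
from itertools import permutations

G = list(permutations(range(3)))        # S3 as tuples: g[i] is the image of i

def compose(f, g):                      # (f ∘ g)(i) = f(g(i))
    return tuple(f[g[i]] for i in range(3))

def inverse(f):
    inv = [0] * 3
    for i, fi in enumerate(f):
        inv[fi] = i
    return tuple(inv)

H = [(0, 1, 2), (1, 0, 2)]              # identity and the transposition (0 1)

# union of g H g^{-1} over all g in G
union = {compose(compose(g, h), inverse(g)) for g in G for h in H}
print(len(union), len(G))               # → 4 6
```

The conjugates of H sweep out only the identity and the three transpositions, so they can never cover all of G — exactly what part a) asserts.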
https://forum.math.toronto.edu/index.php?PHPSESSID=b7aenrh07hf8i9orocatni1484&action=profile;area=showposts;sa=messages;u=1679 | ### Show Posts
### Messages - Muyao Chen
1
##### Final Exam / Re: FE-P1
« on: December 18, 2018, 01:46:33 PM »
$$f(z) = \frac{A}{z-(-1+i)} + \frac{B}{z-(-1-i)}$$
Solving for A and B:
$$A = -\frac{i}{2}, B = \frac{i}{2}$$
Then
$$f(z) = -\frac{i}{2}\frac{1}{z-(-1+i)} +\frac{i}{2} \frac{1}{z-(-1-i)}$$
When $\mid z \mid = r$
$$f(z) = -\frac{i}{2} \frac{1}{-1+i} \frac{1}{\frac{z}{-1+i}-1} + \frac{i}{2}\frac{1}{-1-i}\frac{1}{\frac{z}{-1-i}-1}$$
$$= \frac{i}{2} \frac{1}{-1+i} \frac{1}{1 - \frac{z}{-1+i}} - \frac{i}{2}\frac{1}{-1-i}\frac{1}{1 - \frac{z}{-1-i}}$$
$$= \frac{i}{2} \frac{1}{-1+i} \sum_{n=0}^{\infty} (\frac{z}{-1+i})^{n} - \frac{i}{2}\frac{1}{-1-i}\sum_{n=0}^{\infty}(\frac{z}{-1-i})^{n}$$
So the series converges for
$$\mid \frac{z}{-1+i} \mid < 1,$$
i.e.
$$\mid z \mid < \sqrt{2},$$
and does not converge at $\mid z \mid = \sqrt{2}$.
When $\mid z \mid = R$
$$f(z) = -\frac{i}{2} \frac{1}{z} \frac{1}{ 1 - \frac{-1+i}{z}} + \frac{i}{2}\frac{1}{z}\frac{1}{1 - \frac{-1-i}{z}}$$
$$= -\frac{i}{2} \frac{1}{z} \sum_{n=0}^{\infty} \left(\frac{-1+i}{z}\right)^{n} + \frac{i}{2}\frac{1}{z}\sum_{n=0}^{\infty}\left(\frac{-1-i}{z}\right)^{n}$$
So the series converges for
$$\mid \frac{-1+i}{z} \mid < 1,$$
i.e.
$$\mid z \mid > \sqrt{2},$$
and does not converge at $\mid z \mid = \sqrt{2}$.
2
##### Quiz-7 / Re: Q7 TUT 0102
« on: December 01, 2018, 12:25:02 PM »
Because as R $\rightarrow \infty$, the other terms go to 0.
3
##### Quiz-7 / Re: Q7 TUT 5201
« on: November 30, 2018, 11:32:41 PM »
$f(z) = z^{4} -3z^{2} + 3$ substitute $w = z^{2}$
$$z^{4} -3z^{2} + 3 = w^{2} - 3w + 3 = 0$$
$w = \frac{3 \pm i \sqrt 3}{2}$, so $z=\pm\sqrt{\frac{3 \pm i \sqrt 3}{2}}$
For z in $[0, R]$:
$$f(x)= x^{4}-3x^{2}+ 3 \geq \tfrac{3}{4} > 0$$
so $arg(f(z)) = 0$ along this segment.
For $z = Re^{it}$ and $0 \leq t \leq \frac{\pi}{2}$:
$$f(Re^{it}) = R^{4}e^{4it} - 3R^{2}e^{2it} + 3 = R^{4}\left(e^{4it} - \frac{3e^{2it}}{R^{2}} + \frac{3}{R^{4}}\right)$$
As $R \rightarrow \infty$,
$f(z) \approx R^{4}e^{4it}$ along this arc, so as t goes from 0 to $\frac{\pi}{2}$, $arg(f(z))$ goes from $4 \cdot 0 = 0$ to $4 \cdot \frac{\pi}{2} = 2\pi$.
For $z = iy$, with $0 \leq y \leq R$:
$$f(iy) = y^{4} + 3y^{2} + 3 > 0$$
so $arg(f(z)) = 0$ along this segment.
Then $$\triangle arg(f(z)) = 0 +2 \pi + 0 = 2 \pi$$
$$\frac{1}{2 \pi}[\triangle arg(f(z))] = N_{0} - N_{p} = N_{0} = \frac{1}{2 \pi}2 \pi = 1$$
Then the function has only one zero in the first quadrant.
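The winding-number count above can be cross-checked numerically (an editorial sketch): solve the quadratic in $w$ with `cmath` and count which of the four roots land in the open first quadrant.

```python
import cmath

# roots of z^4 - 3z^2 + 3 via the substitution w = z^2, w = (3 ± i√3)/2
roots = []
for sign in (1, -1):
    w = (3 + sign * 1j * cmath.sqrt(3)) / 2
    r = cmath.sqrt(w)
    roots += [r, -r]

first_quadrant = [z for z in roots if z.real > 0 and z.imag > 0]
print(len(first_quadrant))   # → 1, matching the argument-principle count
```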
4
##### Quiz-7 / Re: Q7 TUT 0102
« on: November 30, 2018, 10:07:26 PM »
$$f(z) = 2z^{4} - 2iz^{3} + z^{2}+ 2iz -1$$
Consider contour in the upper half-plane with radius R.
let $$z = Re^{i \theta}$$
with$$\theta \in [0, \pi]$$
Then$$f(Re^{i \theta}) = R^{4}( (2e^{4i \theta}) + O( \frac{1}{R} ))$$
Then$$\triangle \arg f(Re^{i \theta}) = 4 \pi$$
On the real axis,$$f(x) = 2x^{4}+x^{2}-1-2ix(x^{2}-1)$$
Then the zeros of the real part satisfy $x^{2} = \frac{-1 \pm 3}{4}$, so the real crossings are at:$$x = \pm \frac{1}{\sqrt{2}}$$
for the imaginary part is:$$x = \pm 1, 0$$
When $$x < -1$$$f(x)$ lies in the first quadrant.
When $$-1 < x < -\frac{1}{\sqrt{2}}$$it has moved to the fourth quadrant.
Continuing through the remaining sign changes along the real axis gives$$\triangle \arg f(x) = -2 \pi$$
Then$$\frac{1}{2 \pi} \triangle \arg f(z) = \frac{4\pi - 2\pi}{2 \pi} = 1$$
So the number of zeros of the function in the upper half-plane is 1.
5
##### Quiz-7 / Re: Q7 TUT 0201
« on: November 30, 2018, 09:52:48 PM »
$$p(z) = ze^{z} - \frac{1}{4}$$
Since $$p(0) \neq 0$$
It would be same as finding the number of zeros in
$$\mid z \mid < 2$$
On
$$\mid z \mid = 2$$
$$\mid ze^{z}\mid = 2e^{Re(z)} \geq 2e^{-2} \approx 0.271 > \frac{1}{4}$$
So by Rouché's theorem, p(z) and $ze^{z}$ have the same number of zeros in $\mid z \mid < 2$, and $ze^{z}$ has exactly one there (at $z = 0$).
So the number of zeros of p(z) is one in $0 < \mid z \mid < 2$.
6
##### Term Test 2 / Re: TT2B Problem 5
« on: November 24, 2018, 11:23:38 AM »
$$f(z) = \frac{1}{z-3} - \frac{1}{z-5}$$
(a) $\mid z \mid$ $<$ 3
$$f(z) = -\frac{1}{3} \frac{1}{1 - \frac{z}{3}} + \frac{1}{5} \frac{1}{1- \frac{z}{5}} = - \frac{1}{3} \sum_{n=0}^{\infty} \left(\frac {z}{3}\right)^{n} + \frac{1}{5} \sum_{n=0}^{\infty} \left(\frac {z}{5}\right)^{n} = \sum_{n=0}^{\infty} \left(\frac{1}{5^{n+1}} - \frac{1}{3^{n+1}}\right) z^{n}$$
(b) 3 $<$ $\mid z \mid$ $<$ 5
$$f(z) = \frac{1}{z} \frac{1}{1 - \frac{3}{z}} + \frac{1}{5} \frac{1}{1- \frac{z}{5}} = \frac{1}{z} \sum_{n=0}^{\infty} \left(\frac {3}{z}\right)^{n} + \frac{1}{5} \sum_{n=0}^{\infty} \left(\frac {z}{5}\right)^{n} = \sum_{n=0}^{\infty} \frac{3^{n}}{z^{n+1}} + \sum_{n=0}^{\infty} \frac{z^{n}}{5^{n+1}} = \sum_{n = -\infty}^{-1} 3^{-n-1} z^{n} + \sum_{n=0}^{\infty} \frac{z^{n}}{5^{n+1}}$$
(c) $\mid z \mid$ $>$ 5
$$f(z) = \frac{1}{z} \frac{1}{1 - \frac{3}{z}} - \frac{1}{z} \frac{1}{1- \frac{5}{z}} = \frac{1}{z} \sum_{n=0}^{\infty} \left(\frac {3}{z}\right)^{n} - \frac{1}{z} \sum_{n=0}^{\infty} \left(\frac {5}{z}\right)^{n} = \sum_{n=0}^{\infty} \frac{3^{n} - 5^{n}}{z^{n+1}} = \sum_{n = -\infty}^{-1} \left(3^{-n-1} - 5^{-n-1}\right) z^{n}$$
7
##### Quiz-6 / Re: Q6 TUT 5201
« on: November 17, 2018, 08:58:45 PM »
The first four terms of the Laurent series are:
$\frac{2}{z^{2}} + \frac{1}{6} + \frac{z^{2}}{120} + \frac{z^{4}}{3024} + \cdots$
8
##### Quiz-6 / Re: Q6 TUT 0203
« on: November 17, 2018, 08:50:04 PM »
Write
f(z) = $\frac{g(z)}{(z- z_{0})^{l}}$
so
f '(z) = $\frac{g'(z)(z- z_{0})^{l} - l\,g(z)(z- z_{0})^{l -1}}{(z- z_{0})^{2l}}$
=g'(z) $\frac{1}{(z- z_{0})^{l}}$ - g(z)l$\frac{1}{(z- z_{0})^{l + 1}}$
so that
$\frac{f'}{f}$ = $\frac{g'(z) \frac{1}{(z- z_{0})^{l}} - g(z)l\frac{1}{(z- z_{0})^{l + 1}}}{\frac{g(z)}{(z- z_{0})^{l}}}$
= $\frac{g'}{g}$ - $\frac{l}{z-z_{0}}$
Then
$Res(\frac{f'}{f}, z_{0})$ = $Res(\frac{g'}{g} - \frac{l}{z-z_{0}}, z_{0})$ = $Res(\frac{- l}{z-z_{0}}, z_{0})$ = $-l$, since $\frac{g'}{g}$ is analytic at $z_{0}$.
9
##### Quiz-6 / Re: Q6 TUT 0301
« on: November 17, 2018, 06:07:07 PM »
$\frac{sin z}{(z-π)^2}$ = $\frac{-sin (z- \pi)}{(z-π)^2}$ = $-(z - \pi)^{-2}sin (z- \pi)$
= $-(z - \pi)^{-2} \sum_{n=0}^{\infty} \frac{(-1)^{n}(z- \pi)^{2n+1}}{(2n+1)!}$
= $\sum_{n=0}^{\infty} \frac{(-1)^{n+1}(z- \pi)^{2n-1}}{(2n+1)!}$
= $\frac{-1}{z - \pi} + \sum_{k=0}^{\infty} (-1)^{k} \frac{(z- \pi)^{2k+1}}{(2k+3)!}$
$Res(f, \pi) = -1$
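The residue can be double-checked numerically (an editorial sketch) via $\mathrm{Res}(f,\pi) = \frac{1}{2\pi i}\oint f(z)\,dz$ over a small circle around $\pi$:

```python
import cmath
import math

def f(z):
    return cmath.sin(z) / (z - math.pi) ** 2

N, r = 20_000, 0.5       # N sample points on a circle of radius r around pi
total = 0j
for k in range(N):
    t = 2 * math.pi * k / N
    z = math.pi + r * cmath.exp(1j * t)
    dz = 1j * r * cmath.exp(1j * t) * (2 * math.pi / N)
    total += f(z) * dz

residue = total / (2j * math.pi)
print(round(residue.real, 6))   # → -1.0
```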
10
##### Quiz-6 / Re: Q6 TUT 0102
« on: November 17, 2018, 05:58:19 PM »
$\frac{z}{(sinz)^2}$
$\sin z = 0$ at $z = k\pi$; consider the singularity at $z = 0$.
numerator: $f(z) = z$, $f(0) = 0$, $f'(0) \neq 0$, so the zero has order 1
denominator: $g(z)= (\sin z)^2$, $g(0)= 0$, $g'(z) = 2\sin z\cos z = \sin 2z$, $g'(0) = 0$,
$g''(z) = 2\cos 2z$, $g''(0) = 2 \neq 0$, so the zero has order 2.
so it's a simple pole
Then
$\frac{z}{(\sin z)^2}$ = $a_{-1}z^{-1} + a_{0} +a_{1}z + a_{2}z^{2}+ \cdots$
$\frac{z}{(z - \frac{z^{3}}{3!} +\frac{z^{5}}{5!} -\cdots)^2}$ = $a_{-1}z^{-1} + a_{0} +a_{1}z + a_{2}z^{2}+ \cdots$
$z = (z - \frac{z^{3}}{3!} +\frac{z^{5}}{5!} -\cdots)^2(a_{-1}z^{-1} + a_{0} +a_{1}z + a_{2}z^{2}+ \cdots)$
Matching coefficients gives
$\frac{z}{(\sin z)^2}$ = $\frac{1}{z}$ + $\frac{z}{3}$ + $\frac{z^{3}}{15}$ + $\frac{2z^{5}}{189}$ +...
$Res(f;0)= 1$
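A numerical spot-check of the series above (an editorial sketch): at a small $z$ the partial sum should agree with $z/\sin^2 z$ up to the omitted $O(z^7)$ term.

```python
import math

z = 0.1
actual = z / math.sin(z) ** 2
partial = 1 / z + z / 3 + z**3 / 15 + 2 * z**5 / 189
print(abs(actual - partial))   # tiny: the next term is of order z^7
```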
https://openstax.org/books/university-physics-volume-3/pages/11-challenge-problems | University Physics Volume 3
# Challenge Problems
78.
Electrons and positrons are collided in a circular accelerator. Derive the expression for the center-of-mass energy of the particle.
79.
The intensity of cosmic ray radiation decreases rapidly with increasing energy, but there are occasionally extremely energetic cosmic rays that create a shower of radiation from all the particles they create by striking a nucleus in the atmosphere. Suppose a cosmic ray particle having an energy of $10^{10}\,\text{GeV}$ converts its energy into particles with masses averaging $200\,\text{MeV}/c^2$.
(a) How many particles are created? (b) If the particles rain down on a $1.00\text{-km}^2$ area, how many particles are there per square meter?
80.
(a) Calculate the relativistic quantity $\gamma=\frac{1}{\sqrt{1-v^2/c^2}}$ for 1.00-TeV protons produced at Fermilab. (b) If such a proton created a $\pi^+$ having the same speed, how long would its life be in the laboratory? (c) How far could it travel in this time?
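A rough numerical sketch of this problem (an addition, not from the text; it assumes the 1.00 TeV is the proton's kinetic energy, a proton rest energy of 938.3 MeV, and a $\pi^+$ proper lifetime of $2.60\times 10^{-8}\,\text{s}$):

```python
KE_GeV = 1000.0      # 1.00 TeV kinetic energy (assumed interpretation)
m_p_GeV = 0.9383     # proton rest energy in GeV (assumed value)
tau_pi = 2.60e-8     # pi+ proper lifetime in s (assumed value)
c = 2.998e8          # speed of light in m/s

gamma = 1 + KE_GeV / m_p_GeV   # (a) total energy / rest energy
t_lab = gamma * tau_pi         # (b) time-dilated lifetime in the lab
d = c * t_lab                  # (c) distance traveled at v ~ c

print(round(gamma))            # ~1067
print(t_lab)                   # ~2.8e-5 s
print(d / 1000)                # ~8.3 km
```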
81.
Plans for an accelerator that produces a secondary beam of K mesons to scatter from nuclei, for the purpose of studying the strong force, call for them to have a kinetic energy of 500 MeV. (a) What would the relativistic quantity $\gamma=\frac{1}{\sqrt{1-v^2/c^2}}$ be for these particles? (b) How long would their average lifetime be in the laboratory? (c) How far could they travel in this time?
82.
In supernovae, neutrinos are produced in huge amounts. They were detected from the 1987A supernova in the Magellanic Cloud, which is about 120,000 light-years away from Earth (relatively close to our Milky Way Galaxy). If neutrinos have a mass, they cannot travel at the speed of light, but if their mass is small, their velocity would be almost that of light. (a) Suppose a neutrino with a $7\,\text{eV}/c^2$ mass has a kinetic energy of 700 keV. Find the relativistic quantity $\gamma=\frac{1}{\sqrt{1-v^2/c^2}}$ for it. (b) If the neutrino leaves the 1987A supernova at the same time as a photon and both travel to Earth, how much sooner does the photon arrive? This is not a large time difference, given that it is impossible to know which neutrino left with which photon and the poor efficiency of the neutrino detectors. Thus, the fact that neutrinos were observed within hours of the brightening of the supernova only places an upper limit on the neutrino’s mass. (Hint: You may need to use a series expansion to find $v$ for the neutrino, since its $\gamma$ is so large.)
83.
Assuming a circular orbit for the Sun about the center of the Milky Way Galaxy, calculate its orbital speed using the following information: The mass of the galaxy is equivalent to a single mass $1.5\times 10^{11}$ times that of the Sun (or $3\times 10^{41}\,\text{kg}$), located 30,000 ly away.
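A sketch of the computation (an addition; it equates gravitational and centripetal acceleration, $v=\sqrt{GM/r}$, and assumes $9.461\times 10^{15}\,\text{m}$ per light-year):

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M = 3e41              # effective central mass in kg (given in the problem)
r = 30000 * 9.461e15  # 30,000 ly in meters (assumed conversion factor)

v = math.sqrt(G * M / r)   # circular-orbit speed
print(v / 1000)            # ~266 km/s
```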
84.
(a) What is the approximate force of gravity on a 70-kg person due to the Andromeda Galaxy, assuming its total mass is $10^{13}$ that of our Sun and acts like a single mass 0.613 Mpc away? (b) What is the ratio of this force to the person’s weight? Note that Andromeda is the closest large galaxy.
85.
(a) A particle and its antiparticle are at rest relative to an observer and annihilate (completely destroying both masses), creating two $\gamma$ rays of equal energy. What is the characteristic $\gamma$-ray energy you would look for if searching for evidence of proton-antiproton annihilation? (The fact that such radiation is rarely observed is evidence that there is very little antimatter in the universe.) (b) How does this compare with the 0.511-MeV energy associated with electron-positron annihilation?
86.
The peak intensity of the CMBR occurs at a wavelength of 1.1 mm. (a) What is the energy in eV of a 1.1-mm photon? (b) There are approximately $10^{9}$ photons for each massive particle in deep space. Calculate the energy of $10^{9}$ such photons. (c) If the average massive particle in space has a mass half that of a proton, what energy would be created by converting its mass to energy? (d) Does this imply that space is “matter dominated”? Explain briefly.
87.
(a) Use the Heisenberg uncertainty principle to calculate the uncertainty in energy for a corresponding time interval of $10^{-43}\,\text{s}$. (b) Compare this energy with the $10^{19}\,\text{GeV}$ unification-of-forces energy and discuss why they are similar.
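A numerical sketch for part (a) (an addition, using the form $\Delta E \approx \hbar/(2\,\Delta t)$):

```python
hbar = 1.055e-34      # J*s
dt = 1e-43            # s
J_per_GeV = 1.602e-10 # joules per GeV

dE_GeV = hbar / (2 * dt) / J_per_GeV
print(dE_GeV)         # ~3.3e18 GeV, the same order as the 1e19 GeV unification energy
```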
https://diffgeom.subwiki.org/w/index.php?title=Ring_torus&oldid=2066&diff=prev&printable=yes | # Ring torus
## Definition
The ring torus is a form of embedding of the torus in three-dimensional Euclidean space. This surface type is not unique up to isometry or even up to similarity transformations, but rather, depends on two parameters for a description up to isometry and on one parameter for a description up to similarity transformations.
A ring torus can be defined as the surface obtained by revolving a circle about a line in its plane that does not intersect it.
To describe a ring torus up to isometry, we need two parameters:
• The radius of the circle being revolved, which we call the tube radius and denote by $a$.
• The perpendicular distance from the center of the circle being revolved to the axis of revolution, which we denote by $c$.
The condition that the axis of revolution does not intersect the circle being revolved is equivalent to the condition $c > a$.
### Implicit and parametric descriptions
| Degree of generality | Implicit description | What the parameters mean | Parametric description | What the additional parameters mean | Comment |
| --- | --- | --- | --- | --- | --- |
| Arbitrary | Fill this in later | | | | |
| Up to rigid motions (rotations, translations, reflections) | $(\sqrt{x^2 + y^2} - c)^2 + z^2 = a^2$, where $a,c$ are positive and $c > a$ | $c$ is the radius of the central circle (spine) of the ring torus, and $a$ is the tube radius of the ring torus | $x = (c + a \cos v)\cos u, y = (c + a \cos v)\sin u, z = a \sin v$ | $u$ is the angle of revolution about the axis, locating the center of the revolved circle on the spine circle; $v$ is the local polar angle on the revolved circle itself, locating the point within that circle | This describes the ring torus whose axis of revolution is the $z$-axis |
| Up to similarity transformations | We could rescale the above to normalize either one of $c$ and $a$ to 1, but we cannot normalize both simultaneously | | | | |
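As a sanity check (an addition, not from the original page), one can verify numerically that the parametric description satisfies the implicit equation:

```python
import math

def torus_point(u, v, a, c):
    """Point on the ring torus with tube radius a and spine radius c."""
    x = (c + a * math.cos(v)) * math.cos(u)
    y = (c + a * math.cos(v)) * math.sin(u)
    z = a * math.sin(v)
    return x, y, z

a, c = 1.0, 3.0   # sample values with c > a
for u in (0.0, 0.7, 2.1, 5.5):
    for v in (0.0, 1.3, 3.0, 4.9):
        x, y, z = torus_point(u, v, a, c)
        lhs = (math.sqrt(x * x + y * y) - c) ** 2 + z * z
        assert abs(lhs - a * a) < 1e-12
print("all sampled points satisfy (sqrt(x^2+y^2) - c)^2 + z^2 = a^2")
```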
## Related surfaces
Note that neither of the surfaces below is topologically a torus.
• Horn torus is a related construct where $c = a$, i.e., the case of a circle being revolved about its tangent line. This is topologically not even a manifold.
• Spindle torus is a related construct where $c < a$, i.e., the case of a circle being revolved about a line intersecting it. A spindle torus has an inner and outer surface respectively called a lemon (the surface of revolution of a circular lens) and an apple. The spindle torus is topologically not even a manifold, but, taken individually, the lemon and the apple are topologically both 2-spheres.
## Fundamental forms and curvatures
For the table below, we consider the parametric description:
$x = (c + a \cos v)\cos u, y = (c + a \cos v)\sin u, z = a \sin v$
| Quantity | Meaning in general | Value |
| --- | --- | --- |
| $E,F,G$ for first fundamental form using this parametrization | The Riemannian metric is given by $ds^2 = E \, du^2 + 2F \, du \, dv + G \, dv^2$ | Fill this in later |
| $e,f,g$ for second fundamental form using this parametrization | Fill this in later | Fill this in later |
| principal curvatures using this parametrization | eigenvalues of the shape operator | Fill this in later |
| mean curvature | arithmetic mean of the principal curvatures = half the trace of the shape operator | Fill this in later |
| Gaussian curvature | product of the principal curvatures = determinant of shape operator | Fill this in later |
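Although the table is left unfilled in the original, the standard computation can be sketched with SymPy (an addition; it uses the identity $|X_u\times X_v|^2 = EG-F^2$ to avoid square roots, and spot-checks the result against the classical formula $K=\cos v/\big(a(c+a\cos v)\big)$):

```python
from sympy import symbols, cos, sin, Matrix, simplify
import math

u, v, a, c = symbols('u v a c', positive=True)

X = Matrix([(c + a*cos(v))*cos(u), (c + a*cos(v))*sin(u), a*sin(v)])
Xu, Xv = X.diff(u), X.diff(v)

# First fundamental form coefficients
E, F, G = Xu.dot(Xu), Xu.dot(Xv), Xv.dot(Xv)
assert simplify(E - (c + a*cos(v))**2) == 0
assert simplify(F) == 0
assert simplify(G - a**2) == 0

# Second fundamental form via the unnormalized normal N = Xu x Xv;
# since |N|^2 = EG - F^2, in these unnormalized quantities
# K = (e*g - f^2) / (EG - F^2)^2.
N = Xu.cross(Xv)
e = N.dot(X.diff(u, 2))
f = N.dot(Xu.diff(v))
g = N.dot(X.diff(v, 2))
K = simplify((e*g - f**2) / (E*G - F**2)**2)

# Numerical spot check against K = cos(v) / (a*(c + a*cos(v)))
Kn = float(K.subs({a: 1, c: 3, u: 0.2, v: 0.5}))
assert abs(Kn - math.cos(0.5) / (3 + math.cos(0.5))) < 1e-9
print(K)
```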
## Verification of theorems
### Gauss-Bonnet theorem
Fill this in later
http://mathoverflow.net/questions/161925/exact-sequence-of-the-fundamental-group-of-the-general-fiber | # Exact sequence of the fundamental group of the general fiber
Let $f\colon X\rightarrow Y$ be a morphism of complex algebraic varieties. Let $y\in Y$ be a general point; then the inclusion of the general fiber $f^{-1}(y)$ and the map $f$ induce a sequence of homomorphisms of fundamental groups $$\pi_1 ( f^{-1}(y) ) \rightarrow \pi_1 ( X) \rightarrow \pi_1(Y).$$ If $X$ and $Y$ are smooth, there exist conditions under which this sequence is exact (e.g., Generalized Zariski-van Kampen theorem and its application to Grassmannian dual varieties - Ichiro Shimada).
Do there exist conditions under which this sequence is exact at $\pi_1(X)$ when $X$ and $Y$ are normal?
I'm interested in the particular case when $f$ is the good quotient for the action of a reductive algebraic group on $X$.
Any comment will be highly appreciated.
Have a look at SGA I, Expose 10 (here). This is in the context of the algebraic $\pi _1$, but I guess the proof extends directly to the topological one. There are no hypotheses on the varieties; the map must be proper and separable (= reduced fibers in your case). – abx Mar 31 '14 at 6:25
http://math.stackexchange.com/questions/131021/constrained-optimization-problems-resulting-in-equal-variable-assignments | # Constrained optimization problems resulting in equal variable assignments
Suppose you are trying to find the extreme value of some function $f(x_1,x_2,\ldots,x_n)$ over the variables $\{x_i\}_{i=1}^n$, constrained by an unweighted sum $\sum_{i=1}^n x_i=S$. Let's assume that $f(\cdot)$ is differentiable and convex.
Problems like these commonly arise in engineering and other disciplines, and one usually solves them using the method of Lagrange multipliers: construct the Lagrangian $\mathcal{L}(\lambda,x_1,x_2,\ldots,x_n)=f(x_1,x_2,\ldots,x_n)+\lambda\left(\sum_{i=1}^n x_i-S\right)$ and find the stationary point by solving the system of $n+1$ equations $\frac{\partial \mathcal{L}}{\partial x_i}=0$ for $i=1,2,\ldots,n$ and $\frac{\partial \mathcal{L}}{\partial \lambda}=0$.
I am interested in $f(\cdot)$'s for which the result is equal $x_i$'s: $x_1=x_2=\ldots=x_n$ (under the unweighted sum constraint). Is there a way to "test" $f(\cdot)$ (or its derivatives) to infer that the variable assignments must be equal at the stationary point? Seems to me that some kind of symmetry property is required in $f(\cdot)$ for that to occur, but I can't quite formulate it.
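As a concrete illustration (an addition; the symmetric convex objective $f = x_1^2+x_2^2+x_3^2$ is an assumed example, not from the question), SymPy recovers the equal-variable stationary point from the Lagrangian system:

```python
from sympy import symbols, solve

x1, x2, x3, lam, S = symbols('x1 x2 x3 lam S')

f = x1**2 + x2**2 + x3**2                 # permutation-symmetric, convex
L = f + lam * (x1 + x2 + x3 - S)          # Lagrangian

# Stationary point: grad L = 0 in all variables and the multiplier
sol = solve([L.diff(w) for w in (x1, x2, x3, lam)], [x1, x2, x3, lam], dict=True)[0]
assert sol[x1] == sol[x2] == sol[x3] == S / 3
print(sol)  # x1 = x2 = x3 = S/3
```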
The definite articles seem to reflect an implicit assumption that there is exactly one stationary point. If this is the case, then there is indeed a simple sufficient but not necessary condition for all arguments to be equal at the stationary point: If $f$ is invariant under permutations of its arguments and there is only one stationary point, then this must necessarily have all arguments equal, since permuting any other tuple of arguments would lead to further stationary points.
This condition is clearly not necessary; for instance $f(x,y)=x^2+2y^2$ has a single stationary point at $x=y=0$ but no permutation symmetry.
If you don't know that there's only one stationary point, then all bets are off. For instance, the function $f(x,y)=(x^2+y^2-1)^2$ (plot) has permutation symmetry and rotation symmetry and has an entire circle of minima at $x^2+y^2=1$, most of which don't have equal arguments, and the function $f(x,y)=\mathrm e^{-2((x-1)^2+y^2)}+\mathrm e^{-2((x+1)^2+y^2)}+\mathrm e^{-2(x^2+(y-1)^2)}+\mathrm e^{-2(x^2+(y+1)^2)}$ (plot) has permutation symmetry and has four maxima, none of which have equal arguments. However, generally speaking, if a function has permutation symmetry, there's a good chance that the stationary point you're interested in will have all arguments the same.
Also, if a function has rotation symmetry, the centre of rotation is necessarily a stationary point.
Hmmm. Now, suppose that I know that $f(\cdot)$ is a polynomial of degree 1 (i.e., there are no instances of $x_i$ that are powers of anything other than zero or one). Does this make the problem easier? – M.B.M. Apr 12 '12 at 22:44
Also, is a function with rotation symmetry as follows: $f(x_1,x_2,\ldots,x_n)=f(x_2,x_3,\ldots,x_n,x_1)=f(x_3,x_4,\ldots,x_1,x_2)=\ldots=f(x_n,x_1,\ldots,x_{n-1})$? What do you mean by centre of rotation then? Or did you mean an odd function? – M.B.M. Apr 12 '12 at 23:09
Another supposition: $f(\cdot)$ is a ratio of two polynomials of degree 1, with the numerator having permutation symmetry and the denominator having the following symmetry: $f_d(x_1,x_2,\ldots,x_n)=f_d(x_2,x_3,\ldots,x_n,x_1)=f_d(x_3,x_4,\ldots,x_1,x_2)=\ldots=f_d(x_n,x_1,\ldots,x_{n-1})$. Are those conditions sufficient to prove that the stationary point is found at $x_1=x_2=\ldots=x_n$? – M.B.M. Apr 12 '12 at 23:23
I can augment the original question with all these additional queries... – M.B.M. Apr 12 '12 at 23:24
@M.B.M.: I just noticed that I forgot to address the fact that your question was specifically about optimization under a constraint on the sum of the arguments. However, the same sort of considerations apply; for instance, in my last example, the sum of four Gaussians, if you add the constraint $x+y=1$ you get two maxima that don't have equal arguments. – joriki Apr 12 '12 at 23:38
http://en.wikipedia.org/wiki/Complete_theory | # Complete theory
In mathematical logic, a theory is complete if it is a maximal consistent set of sentences, i.e., if it is consistent, and none of its proper extensions is consistent. For theories in logics which contain classical propositional logic, this is equivalent to asking that for every sentence φ in the language of the theory it contains either φ itself or its negation ¬φ.
Recursively axiomatizable first-order theories that are rich enough to allow general mathematical reasoning to be formulated cannot be complete, as demonstrated by Gödel's incompleteness theorem.
This sense of complete is distinct from the notion of a complete logic, which asserts that for every theory that can be formulated in the logic, all semantically valid statements are provable theorems (for an appropriate sense of "semantically valid"). Gödel's completeness theorem is about this latter kind of completeness.
Complete theories are closed under a number of conditions internally modelling the T-schema:
• For a set $S$: $A \land B \in S$ if and only if $A \in S$ and $B \in S$,
• For a set $S$: $A \lor B \in S$ if and only if $A \in S$ or $B \in S$.
Maximal consistent sets are a fundamental tool in the model theory of classical logic and modal logic. Their existence in a given case is usually a straightforward consequence of Zorn's lemma, based on the idea that a contradiction involves use of only finitely many premises. In the case of modal logics, the collection of maximal consistent sets extending a theory T (closed under the necessitation rule) can be given the structure of a model of T, called the canonical model.
## Examples
Some examples of complete theories are:

• The theory of dense linear orders without endpoints
• The theory of algebraically closed fields of a given characteristic
• The theory of real closed fields
• Presburger arithmetic
http://mytechmemo.blogspot.com/2011/03/how-to-find-minimizer-in-r.html | ## Friday, March 11, 2011
### How to find minimizer in R
Suppose x is a vector; the following command finds its minimum:
min(x)
To find the minimizer, we can use the following command:
which(x==min(x))
It gives the indices of all elements that attain the minimum; if only the first such index is needed, the built-in which.min(x) returns it directly.
https://www.physicsforums.com/threads/pde-constrained-to-a-curve.699739/ | # PDE constrained to a curve
1. ### Sunfire
Hello folks,
If we have the expression, say
$\frac{\partial f}{\partial r}$+$\frac{\partial f}{\partial \theta}$, am I allowed to change it to
$\frac{df}{dr}$+$\frac{df}{dr}$$\frac{dr}{d\theta}$,
if $f$ is constrained to the curve $r=r(\theta)$.
My reasoning is that since the curve equation is known, then $f$ does not really depend on the angle $\theta$, but only on $r$ (and $r$ is a function of the angle, kind of a compound function).
Does this make sense?
2. ### Khashishi
This seems right conceptually, but notationally, some of those should be partial derivatives.
##\frac{\partial f}{\partial r} + \frac{\partial f}{\partial r} \frac{dr}{d\theta} = \frac{df}{dr}##
3. ### Sunfire
Yes, thank you, this makes a lot of sense. The chain rule for partial derivatives.
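A quick SymPy check of this chain rule along a curve (an addition; the particular $f$ and the symbol names are assumed for illustration):

```python
from sympy import symbols, Function, sin, diff, simplify

theta = symbols('theta')
r = Function('r')(theta)
rr, th = symbols('rr th')        # stand-ins for the two slots of f(r, theta)

f = rr**2 * sin(th)              # sample f; any smooth f would do

# Total derivative along the curve r = r(theta) ...
total = diff(f.subs({rr: r, th: theta}), theta)
# ... versus the chain rule (partial f / partial r) dr/dtheta + partial f / partial theta
chain = (diff(f, rr) * diff(r, theta) + diff(f, th)).subs({rr: r, th: theta})

print(simplify(total - chain))   # 0
```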
https://scholarworks.wmich.edu/dissertations/2100/ | ## Dissertations
#### Title
On Distance in Graphs and Digraphs
8-1990
#### Degree Name
Doctor of Philosophy
#### Department
Mathematics
Dr. Gary Chartrand
Dr. Alfred Boals
Dr. Naveed Sherwani
Dr. Yousef Alavi
#### Abstract
One of the most basic concepts associated with a graph is distance. In this dissertation some new definitions of distance in graphs and digraphs are introduced. One principal goal is to extend certain known results involving the standard distance function on graphs to the field of digraphs with an appropriate concept of distance. Several parameters as well as subgraphs and subdigraphs defined in terms of distance are investigated.
Chapter I gives a brief overview of the history of distance and generalized distance in graphs. By presenting a listing of major results in this area, it provides a background for the chapters to follow.
In Chapter II some results concerning distance in graphs are presented. It is proved that, for a graph $G$ and integers $r$ and $d$ with $1 \leq r < d \leq 2r$, there exists a connected graph $H$ of radius $r$ and diameter $d$ such that the center of $H$ is isomorphic to $G$. A new distance in graphs, called detour distance, is introduced. A generalized Steiner distance in graphs is discussed as well.
In Chapter III maximum distance in digraphs is introduced. It is proved that maximum distance is a metric. The m-radius, m-diameter, m-center, m-periphery and m-median, defined in terms of maximum distance, are studied. In particular, it is proved that every oriented graph is isomorphic to the m-center of some strong oriented graph.
For an oriented graph D, the appendage number of D is defined as the minimum number of vertices required to add to D to produce an oriented graph H such that the m-center of H is isomorphic to D. The main result of Chapter IV is a characterization of oriented acyclic graphs having appendage number 2.
In Chapter V sum distance in digraphs is defined. The s-eccentric set, s-radius, s-diameter, s-center, s-appendage number and s-periphery are investigated. In particular, characterizations of s-eccentric sets and s-peripheries of oriented graphs are presented.
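To make the distance invariants discussed above concrete, here is a short breadth-first-search sketch (an addition, not part of the dissertation) computing eccentricities, radius, diameter, and center of a small graph:

```python
from collections import deque

def eccentricities(adj):
    """Eccentricity (max distance to any vertex) of each vertex, via BFS."""
    ecc = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        ecc[s] = max(dist.values())
    return ecc

# Path 0 - 1 - 2 - 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
ecc = eccentricities(adj)
radius = min(ecc.values())                                  # 2
diameter = max(ecc.values())                                # 3
center = sorted(v for v, e in ecc.items() if e == radius)   # [1, 2]
print(radius, diameter, center)                             # 2 3 [1, 2]
```

Note that $r \leq d \leq 2r$ holds here ($2 \leq 3 \leq 4$), consistent with the range of radius-diameter pairs in the Chapter II result.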
#### Access Setting
Dissertation-Open Access
https://physics.stackexchange.com/questions/98972/gaussian-probability-distribution | # Gaussian Probability Distribution?
The uncertainty principle states that,
$$\sigma _{{x}}\sigma _{{p}}\geq {\frac {\hbar }{2}}.$$
It is mentioned in many sources that the probability distributions of the particle's position and momentum would follow a Gaussian distribution.
Why is it a Gaussian distribution? Is this the distribution that minimizes uncertainty? Is this distribution definitely the case for the uncertainty principle, or can it be different under different conditions? Has this been proven?
What are the formulas for position and momentum probability distributions of a free particle? How is this derived from the wave function? What would be the formulas of the probability distributions for the position and momentum for a system of 2 identical bosons separated by a distance $R$?
• Is this the distribution that minimizes uncertainty? On that one note, Wikipedia has this to say: "The normal distribution saturates the [entropic uncertainty principle] inequality, and it is the only distribution with this property, because it is the maximum entropy probability distribution among those with fixed variance." Just don't get mixed up between information-theoretic entropy and Fourier/quantum standard deviations.
– user10851
Feb 13, 2014 at 1:42
• There is no joint probability distribution for position and momentum of a quantum particle, precisely because these two observables do not commute. The object that most closely resembles it is the so-called Wigner function (en.wikipedia.org/wiki/Wigner_quasiprobability_distribution), which is not a bona fide probability density for not being everywhere non-negative. It is also most certainly not a (complex) Gaussian in general - this happens precisely when the uncertainty principle is minimized, as observed by Schrödinger himself. The corresponding state is called a coherent state. Feb 13, 2014 at 1:46
• @ Pedro Does this mean that the probability distribution of a quantum particle is an uncertain formula by itself and it is impossible to have an exact formula for the distribution for a particular specific case? for example a free particle with no potential in space. Feb 13, 2014 at 2:17
• No, it simply means that you cannot have a probability distribution for a quantum particle in phase space. Of course you have it for all possible values of a single observable (such as position or momentum). More generally, if you have $n$ observables $O_1,\ldots,O_n$ that commute (up to domain subtleties for unbounded observables that do not concern us here), you can find a joint probability distribution for their possible values. Feb 13, 2014 at 3:26
• @ Pedro. Thank you. If we measure the particle position to Δx giving uncertainty in momentum ΔP=ℏ/(2Δx) - then what would be the corresponding probability distributions? Feb 13, 2014 at 3:35
I believe this can be attributed to the central limit theorem, which states that the normalized sum of a large number of independent samples from a population with a well-defined variance will follow a Gaussian distribution. The key idea is that because of quantum mechanics, we must treat both position and momentum as random variables; the uncertainty principle gives us a relation between the variances of the two quantities.
We cannot talk about the "formula for position" per se; however we can derive a deterministic formula for the wavefunction, which represents the probability density for these random variables. The exact form of the wavefunction is dependent on the problem, but can (in principle) generally be obtained from the Schrödinger equation.
Wikipedia has a good writeup for the free particle. The Hamiltonian for a free particle with fixed momentum $\mathbf{p}$ is $\mathcal{H} = \mathbf{p}^2/2m$ (the potential is zero). Eigenstates of this Hamiltonian are plane-waves in position-space (that is, their wavefunctions oscillate throughout space and time): $$\psi(\mathbf{x}, t) = Ae^{i(\mathbf{x}\cdot\mathbf{p}-Et)/\hbar}$$ that means that the probability distribution is simply: $$\left|\psi(\mathbf{x},t)\right|^2 = \left|A\right|^2$$ which is a constant independent of position $\mathbf{x}$. Note that this wavefunction cannot be renormalized to unity, but the takeaway is that the particle is equally likely to be anywhere. This is consistent with the uncertainty princple: since we specified $\mathbf{p}$ exactly ($\sigma_p=0$), the uncertainty in position is infinite.
For more complex systems, the Hamiltonian is not always exactly known; this is often the case in multi-particle systems, such as atoms. In still other cases, the Hamiltonian is known but cannot be solved analytically.
• For a free particle in space with zero potential, what formula would represent the probability distribution derived from the Schrodinger equation? Feb 13, 2014 at 2:13
• I've updated my answer to include more detail about the free particle case. Let me know if it's still unclear. Feb 13, 2014 at 2:33
• Thank you. If we measure the particle position to $\Delta x$ giving uncertainty in momentum $\Delta P = \hbar/(2 \Delta x)$ - then what would be the corresponding probability distributions? Feb 13, 2014 at 2:42
It is not correct that the probability distributions of $x$ and $p$ are Gaussian in general.
Take a simple system of a particle moving in some potential $V(x)$.
The probability distribution of $x$ is the squared modulus of the wave-function $\Psi(x)$ of the particle, i.e. the probability of finding your particle in $[x,x+dx]$ is $|\Psi(x)|^2dx$.
The probability distribution of $p$ is the squared modulus of the momentum-space wave-function $\Psi(p) = \frac{1}{\sqrt{2\pi\hbar}}\int dx\, \Psi(x)\, e^{-ipx/\hbar}$ (the Fourier transform of $\Psi(x)$).
It is only when the wave-function $\Psi(x)$ is a Gaussian that the uncertainty product is minimized, i.e. $\sigma_x \sigma_p = \frac{\hbar}{2}$. See http://en.wikipedia.org/wiki/Fourier_transform#Uncertainty_principle for a proof that $\frac{\hbar}{2}$ is the lower limit.
Now why exactly the Gaussian? To minimize the uncertainty product we need a wave-function that is well localized in both real space and Fourier space. If we squeeze a function in real space, it broadens in Fourier space, and vice versa. The Gaussian happens to be the unique function that maintains its 'shape' when Fourier transformed: the Fourier transform of a Gaussian with variance $\sigma^2$ is just another Gaussian (with variance $1/(4\sigma^2)$), and the product of the variances (uncertainties) remains a constant independent of $\sigma$.
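A small numerical sketch (not part of the original answer; units with $\hbar = 1$ and an arbitrary width $s$) confirms that a Gaussian wave-function saturates the bound, $\sigma_x\sigma_p = 1/2$:

```python
import numpy as np

# Gaussian wavepacket psi(x) with position standard deviation s; hbar = 1.
s = 0.8
x = np.linspace(-12, 12, 4001)
dx = x[1] - x[0]
psi = (2 * np.pi * s**2)**-0.25 * np.exp(-x**2 / (4 * s**2))

# <x> = 0 by symmetry, so sigma_x^2 = ∫ x^2 |psi|^2 dx
sigma_x = np.sqrt(np.sum(x**2 * psi**2) * dx)

# For a real wavefunction <p> = 0 and <p^2> = hbar^2 ∫ |psi'(x)|^2 dx
dpsi = np.gradient(psi, dx)
sigma_p = np.sqrt(np.sum(dpsi**2) * dx)

print(sigma_x * sigma_p)   # ≈ 0.5, the minimum-uncertainty value
```

Shrinking `s` narrows the position distribution and widens the momentum one, but the product stays pinned at $1/2$.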
Finally, there exist many systems where the uncertainty product is not minimized. The simplest example is a 'particle in a box' (http://en.wikipedia.org/wiki/Particle_in_a_box). Here the ground state has $\sigma_x\sigma_p = \frac{\hbar}{2} \times \sqrt{\frac{\pi^2}{3}-2}$.
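The particle-in-a-box figure can be checked numerically (a sketch, not from the original answer; $\hbar = 1$ and box length $L = 1$ are assumed). The ground state is $\psi(x) = \sqrt{2/L}\,\sin(\pi x/L)$, for which $\langle p^2\rangle = (\hbar\pi/L)^2$ exactly:

```python
import numpy as np

L = 1.0
x = np.linspace(0, L, 20001)
dx = x[1] - x[0]
psi = np.sqrt(2 / L) * np.sin(np.pi * x / L)   # infinite-well ground state

mean_x = np.sum(x * psi**2) * dx
sigma_x = np.sqrt(np.sum((x - mean_x)**2 * psi**2) * dx)
sigma_p = np.pi / L                             # sqrt(<p^2>) with hbar = 1

product = sigma_x * sigma_p
print(product, 0.5 * np.sqrt(np.pi**2 / 3 - 2))  # both ≈ 0.568 > 0.5
```

The product is about $0.568\,\hbar$, strictly above the Gaussian minimum of $0.5\,\hbar$, as claimed.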
• Thank you. For example, for a gas in a box at a given temperature and pressure, what can we say about the average uncertainty in position and momentum of the particles? What would be the average probability distributions for position and momentum of the gas atoms? Feb 13, 2014 at 4:15
• That will depend on how the particles are interacting with each other. I have not seen such a calculation before. The UP is not that interesting for such large systems, as we don't really measure or talk about individual atoms when we have a gas: it is the macroscopic quantities like pressure and temperature that count. Anyway, if the particles don't interact (collide), then the total wave-function is just the product of the individual wave-functions, and we have the same uncertainty relation for each of the gas particles as for the single particle in a box. Feb 13, 2014 at 4:30
https://quantummechanics.ucsd.edu/ph130a/130_notes/node359.html | ## Hyperfine Splitting in a B Field
If we apply a B field, the states will split further. As usual, we choose our coordinates so that the field is in the $z$ direction. The perturbation then is
$$W = -\vec{B}\cdot\left(\vec{\mu}_L + \vec{\mu}_S + \vec{\mu}_I\right)$$
where the magnetic moments from orbital motion, electron spin, and nuclear spin are considered for now. Since we have already specialized to s states, we can drop the orbital term. For fields achievable in the laboratory, we can neglect the nuclear magnetic moment in the perturbation. Then we have
$$W \approx \frac{2\mu_B B}{\hbar} S_z$$
(taking the electron gyromagnetic ratio $g_e \approx 2$).
As examples of perturbation theory, we will work this problem for weak fields, for strong fields, and also work the general case for intermediate fields. Just as in the Zeeman effect, if one perturbation is much bigger than another, we choose the set of states in which the larger perturbation is diagonal. In this case, the hyperfine splitting is diagonal in states of definite $f$ and $m_f$ (with $\vec{F} = \vec{S} + \vec{I}$), while the above perturbation due to the B field is diagonal in states of definite $m_s$ and $m_i$. For a weak field, the hyperfine dominates and we use the states of definite $f$ and $m_f$. For a strong field, we use the $m_s$, $m_i$ states. If the two perturbations are of the same order, we must diagonalize the full perturbation matrix. This calculation will always be correct but is more time consuming.
We can estimate the field at which the two perturbations are the same size by comparing $\mu_B B$ to the hyperfine splitting. The weak field limit is achieved if $B \ll 500$ gauss.
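The 500 gauss scale can be recovered from standard constants (a sketch added here, not from the notes), by asking when $\mu_B B$ reaches half the hydrogen ground-state hyperfine splitting (the 1420 MHz / 21 cm line):

```python
# Standard constants in eV-based units
h = 4.135667e-15          # Planck constant, eV s
f_hf = 1.420406e9         # hydrogen hyperfine transition frequency, Hz
mu_B = 5.788382e-5        # Bohr magneton, eV / T

delta_E = h * f_hf        # hyperfine splitting ≈ 5.9e-6 eV
B_cross = (delta_E / 2) / mu_B
print(B_cross * 1e4, "gauss")   # ≈ 5e2 gauss, consistent with the limit quoted
```

The crossover comes out near 500 gauss, so laboratory fields well below this are safely in the weak-field regime.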
*Example: The Hyperfine Splitting in a Weak B Field.*
The result of this example is quite simple. It has the hyperfine term we computed before and adds a term proportional to $B$ which depends on $m_f$.
In the strong field limit we use the $|m_s m_i\rangle$ states and treat the hyperfine interaction as a perturbation. The unperturbed energies of these states are $E = 2\mu_B B m_s - g_N \mu_N B m_i$. We kept the small term due to the nuclear moment in the B field without extra effort.
*Example: The Hyperfine Splitting in a Strong B Field.*
The result in this case is
$$E = \mathcal{A}\, m_s m_i,$$
where $\mathcal{A}$ is the hyperfine coupling constant, defined by writing the hyperfine interaction as $\frac{\mathcal{A}}{\hbar^2}\vec{S}\cdot\vec{I}$.
Finally, we do the full calculation.
*Example: The Hyperfine Splitting in an Intermediate B Field.*
The general result consists of four energies which depend on the strength of the B field. Two of the energy eigenstates mix in a way that also depends on B. The four energies are
$$E = \frac{\mathcal{A}}{4} \pm \mu_B B$$
and
$$E = -\frac{\mathcal{A}}{4} \pm \sqrt{\left(\frac{\mathcal{A}}{2}\right)^2 + \left(\mu_B B\right)^2}.$$
These should agree with the previous calculations in the two limits: B small, or B large. The figure shows how the eigenenergies depend on B.
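The intermediate-field spectrum can be cross-checked by direct diagonalization (a sketch, not part of the original notes; it assumes $\hbar = 1$, hyperfine constant $\mathcal{A} = 1$, electron $g \approx 2$, and neglects the nuclear moment, matching the approximations above). With those assumptions the eigenvalues of $H = \mathcal{A}\,\vec{S}\cdot\vec{I} + 2\mu_B B S_z$ should be $\mathcal{A}/4 \pm \mu_B B$ and $-\mathcal{A}/4 \pm \sqrt{(\mathcal{A}/2)^2 + (\mu_B B)^2}$:

```python
import numpy as np

A = 1.0                                  # hyperfine constant (units: energy)
sx = np.array([[0, 1], [1, 0]]) / 2      # spin-1/2 operators, hbar = 1
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def energies(muB_B):
    # S.I acts on electron (x) nuclear spin; Zeeman term acts on the electron only
    S_dot_I = sum(np.kron(s, s) for s in (sx, sy, sz))
    H = A * S_dot_I + 2 * muB_B * np.kron(sz, I2)
    return np.sort(np.linalg.eigvalsh(H))

b = 0.37   # arbitrary value of mu_B * B in units of A
closed_form = np.sort([A/4 + b, A/4 - b,
                       -A/4 + np.hypot(A/2, b), -A/4 - np.hypot(A/2, b)])
print(np.allclose(energies(b), closed_form))   # True
```

At $B = 0$ the four levels collapse to the hyperfine triplet at $\mathcal{A}/4$ and singlet at $-3\mathcal{A}/4$, and at large $B$ they approach the strong-field limit, as they should.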
We can make a more general calculation, in which the interaction of the nuclear magnetic moment is of the same order as that of the electron. This occurs in muonic hydrogen or positronium. *Example: The Hyperfine Splitting in an Intermediate B Field.*
Jim Branson 2013-04-22
http://www.totalcurvereviewed.com/info/1063/3319.htm
Large deviations of occupation measures for SPDEs
Using the hyper-exponential recurrence criterion, a large deviation principle for the occupation measure is derived for a class of non-linear monotone stochastic partial differential equations. The main results are applied to many concrete SPDEs such as the stochastic $p$-Laplace equation, the stochastic porous medium equation, the stochastic fast-diffusion equation, and even the stochastic real Ginzburg-Landau equation driven by $\alpha$-stable noises.
http://tex.stackexchange.com/users/17609/razorxsr?tab=activity&sort=all&page=2 | # RazorXsr
reputation: 39
member for: 2 years, 5 months
last seen: May 4 '14 at 18:48
profile views: 23
Engineering student working on software development for applications based on mathematical models and economics.
# 45 Actions
- Oct 29: asked "Italicize all text in the body matching a pattern"
- Oct 29: accepted "Formatting table border and text alignment in LaTeX table"
- Oct 28: commented on "Formatting table border and text alignment in LaTeX table": How did you achieve the decimal alignment? Excellent response.. But as you noted - it is a bit of a sore sight :(
- Oct 28: commented on "Formatting table border and text alignment in LaTeX table": Hi @peter... I think the image is a bit fuzzy but I want double lined border, not thick lines.
- Oct 28: asked "Formatting table border and text alignment in LaTeX table"
- Aug 31: commented on "Bibliography Issue apacite: Argument of \@@cite has an extra }": @Kurt: Actually I found a work around to this issue. I used the `natbibapa` option for the `apacite` package and that seemed to solve all the issues. Instead of using `\cite` I had to use `\citep` for referencing.
- Aug 25: revised "Bibliography Issue apacite: Argument of \@@cite has an extra }" (added 594 characters in body)
- Aug 25: asked "Bibliography Issue apacite: Argument of \@@cite has an extra }"
- Aug 25: accepted "Make the contents of a sentence fit to a line?"
- Aug 24: commented on "Make the contents of a sentence fit to a line?": Thanks Fran. The suggestion worked very well. Can you please elaborate a little bit on the usage of this command (`\resizebox`) and also how it fits in this context? The guidance would be very helpful indeed!! Thanks again for the solution!
- Aug 24: revised "Make the contents of a sentence fit to a line?" (felt that the initial pieces of code I gave were unnecessary in the current context)
- Aug 24: suggested approved edit on "Make the contents of a sentence fit to a line?"
- Aug 24: revised "Make the contents of a sentence fit to a line?" (deleted 237 characters in body)
- Aug 24: revised "Make the contents of a sentence fit to a line?" (added 437 characters in body)
- Aug 24: awarded Editor
- Aug 24: revised "Make the contents of a sentence fit to a line?" (added 827 characters in body)
- Aug 24: asked "Make the contents of a sentence fit to a line?"
- Aug 16: awarded Scholar
- Aug 16: commented on "Change equation font to Times New Roman?": Wow Mico! That pretty much addresses all my issues. Thanks for the detailed answer. The acronym definitions were a neat trick to improve the readability of the code. Thank you again!
- Aug 16: accepted "Change equation font to Times New Roman?"
https://git.rockbox.org/cgit/rockbox.git/tree/manual/main_menu/main.tex?id=e28e2fccb8bc218b1bbeb40c53756974bdbe4d41
% $Id$
%
\chapter{The Main Menu}
\section{Introducing the Main Menu}
\screenshot{main_menu/images/ss-main-menu}{The main menu}{}
The \setting{Main Menu} is the
screen from which the rest of the Rockbox functions can be accessed. It is used for a variety of functions, which are detailed below. All options in Rockbox can be controlled via the \setting{Main Menu}. To enter the \setting{Main Menu}, \opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{press the \ButtonMode\ button.} \opt{RECORDER_PAD}{press the \ButtonFOne\ button.} \opt{PLAYER_PAD,IPOD_4G_PAD,ONDIO_PAD}{press the \ButtonMenu\ button.} \opt{IAUDIO_X5_PAD}{hold the \ButtonPlay\ button.} All settings are stored on the unit. However, Rockbox does not spin up the disk solely for the purpose of saving settings. Instead, Rockbox will save settings when it spins up the disk the next time, for example when refilling the MP3 buffer or navigating through the file browser. Changes to settings may therefore not be saved unless the \dap\ is shut down safely (see page \pageref{ref:Safeshutdown}). \section{Navigating the Main Menu} \opt{RECORDER_PAD,ONDIO_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD,IPOD_4G_PAD}{ \begin{table} \begin{btnmap}{}{} \opt{IPOD_4G_PAD,IPOD_VIDEO_PAD}{\ButtonScrollFwd} \opt{RECORDER_PAD,ONDIO_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}{\ButtonUp} & Moves up in the menu.\\ & Inside a setting, increases the value or chooses next option \\ % \opt{IPOD_4G_PAD,IPOD_VIDEO_PAD}{\ButtonScrollBack} \opt{RECORDER_PAD,ONDIO_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}{\ButtonDown} & Moves down in the menu.\\ & Inside a setting, decreases the value or chooses previous option \\ % \opt{RECORDER_PAD}{\ButtonPlay/\ButtonRight} \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}{\ButtonSelect/\ButtonRight} \opt{ONDIO_PAD,IPOD_4G_PAD,IPOD_VIDEO_PAD}{\ButtonRight} & Selects option \\ % \opt{RECORDER_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOff/\ButtonLeft} \opt{IAUDIO_X5_PAD,ONDIO_PAD,IPOD_4G_PAD,IPOD_VIDEO_PAD}{\ButtonLeft} & Exits menu, setting or moves to parent menu\\ \end{btnmap} \end{table} } \opt{PLAYER_PAD}{ \begin{table} \begin{btnmap}{}{} % 
\ButtonLeft & Selects previous option in the menu.\\
& Inside a setting, decreases the value or chooses previous option \\
%
\ButtonRight & Selects next option in the menu.\\
& Inside a setting, increases the value or chooses next option \\
%
\ButtonPlay & Selects item \\
%
\ButtonStop & Exits menu, setting or moves to parent menu.\\
\end{btnmap}
\end{table}
}
\section{Recent Bookmarks}
\screenshot{main_menu/images/ss-list-bookmarks}%
{The list bookmarks screen}{}
If the \setting{Save a list of recently created bookmarks} option is enabled then you can view a list of several recent bookmarks here and select one to jump straight to that track. See page \pageref{ref:Bookmarkconfigactual} for more details on configuring bookmarking in Rockbox.
\note{This option is off by default.}
\section{Sound Settings}
The \setting{Sound Settings} menu offers a selection of sound properties you may change to customize your listening experience. The details of this menu are covered starting on page \pageref{ref:configure_rockbox_sound}.
\section{General Settings}
The \setting{General Settings} menu allows you to customize the way Rockbox looks and the way it plays music. The details of this menu are covered starting on page \pageref{ref:configure_rockbox_general}.
\section{Manage Settings}
The \setting{Manage Settings} option allows you to save and re-load user configuration settings, browse the hard drive for alternate firmwares, and reset your \dap\ back to its initial configuration. The details of this menu are covered starting on page \pageref{ref:ManageSetting}.
\section{Browse Themes}
This option will display all the currently installed themes on the \dap; press \ButtonRight\ to load the chosen theme and apply it.
A theme is basically a configuration file, stored in a specific directory, that typically changes the WPS \opt{h1xx,h300,x5}{and remote WPS}, the font used and, on some platforms, additional information such as the background image and text colours. There are a number of themes that ship with Rockbox. If none of these suit your needs, many more can be downloaded from
\url{www.rockbox.org/twiki/bin/view/Main/\opt{RECORDER_PAD}{/WpsArchos}
\opt{h1xx}{WpsIriverH100}\opt{h300,ipodcolor}{WpsIriverH300}
\opt{ipodvideo}{WpsIpod5g}\opt{ipodnano}{WpsIpodNano}
\opt{ipodmini}{WpsIpodMini}\opt{x5}{WpsIaudioX5}}.
Some of the downloads from this site will actually be standalone WPS files, others will be full-blown themes.
\note{Themes do not have to be purely visual. It is quite possible to create a theme that switches between audio configurations for use in the car, with headphones and when connected to an external amplifier. See the section on ``Making Your Own Settings File'' on page \pageref{ref:CreateYourOwnWPS} for more details.
}
\opt{CONFIG_TUNER}{
\section{\label{ref:FMradio}FM Radio \opt{ondio}{(OndioFM Only)}}
\opt{x5}{\note{Not currently implemented on X5}}
\screenshot{main_menu/images/ss-fm-radio-screen}{The FM radio screen}{}
This menu option switches to the radio screen. The FM radio has the ability \opt{HAVE_RECORDING}{to record and } to remember station frequency settings (presets).
\opt{recorderv2fm,ondio}{
\begin{table}
\begin{btnmap}{}{}
\ButtonLeft, \ButtonRight & Change frequency in 0.1 MHz steps.\\
& For automatic station seek, hold \ButtonLeft/\ButtonRight\ %
for a little longer.
\\
%
\ButtonUp, \ButtonDown & Change volume \\
%
\opt{RECORDER_PAD}{
\ButtonPlay & \emph{(EXPERIMENTAL)}\\
& Freezes all screen updates. May enhance radio reception in some cases.\\
}
\opt{RECORDER_PAD}{\ButtonOn}\opt{ONDIO_PAD}{\ButtonOff}
& Leave the radio screen with the radio playing \\
%
\opt{RECORDER_PAD}{\ButtonOff}\opt{ONDIO_PAD}{hold \ButtonOff}
& Back to Main Menu.\\
\end{btnmap}
\end{table}
}
\fixme{Add Radio recording and Preset keys to FM Recorder and Ondio FM}
\opt{h1xx,h300,x5}{
\begin{table}
\begin{btnmap}{}{}
\ButtonLeft, \ButtonRight & Change frequency in 0.1 MHz steps. \\
Hold \ButtonLeft, \ButtonRight & Seeks to next station or preset\\
%
\ButtonUp, \ButtonDown & Change volume \\
%
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOn} \opt{IAUDIO_X5_PAD}{\fixme{TBD}}
& Mutes radio playback \\
%
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{Hold \ButtonOn} \opt{IAUDIO_X5_PAD}{\fixme{TBD}}
& Switches between SCAN and PRESET mode.\\
%
\ButtonSelect & Opens a list of radio presets. You can view all the presets that you have, and switch to the station.\\
Hold \ButtonSelect & Displays the FM radio settings menu.\\
%
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonMode} \opt{IAUDIO_X5_PAD}{\fixme{TBD}} %
& Keeps radio playing and returns to the main menu. You can then press OFF/STOP to browse the file tree while listening to the radio\\
%
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOff} \opt{IAUDIO_X5_PAD}{\fixme{TBD}}
& Stops the radio and returns to Main Menu.\\
\end{btnmap}
\end{table}
}
\begin{description}
\item[Saving a preset:]
Up to 32 of your favourite stations can be saved as presets. Press
\opt{RECORDER_PAD}{\ButtonFOne} \opt{ONDIO_PAD}{\ButtonMenu}
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}{\ButtonSelect}
to go to the menu, then select
\opt{recorderv2fm,ondio}{``Save preset''.}
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}{``Add preset''}
Enter the name (maximum number of characters is 32).
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{Press \ButtonOn\ to save.}
\opt{IAUDIO_X5_PAD}{Press \fixme{TBD} to save.}
\item[Selecting a preset:]
\opt{ONDIO_PAD,RECORDER_PAD} {
Press \opt{RECORDER_PAD}{\ButtonFTwo}\opt{ONDIO_PAD}{\fixme{FixMe}} to go to the preset list. Use \ButtonUp\ and \ButtonDown\ to move the cursor and then press \opt{RECORDER_PAD}{\ButtonPlay}\opt{ONDIO_PAD}{\fixme{FixMe}} to select. Use \ButtonLeft\ to leave the preset without selecting anything.
}
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD} {
Press \ButtonSelect\ to go to the preset list. Use \ButtonUp\ and \ButtonDown\ to move the cursor and then press \ButtonSelect\ to select. Use \ButtonLeft\ to leave the preset without selecting anything.
}
\item[Removing a preset:]
\opt{ONDIO_PAD,RECORDER_PAD}{
Press \opt{RECORDER_PAD}{\ButtonFOne}\opt{ONDIO_PAD}{\fixme{FixMe}} to go to the menu, then select ``Remove preset''.
}
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}{
Press \ButtonSelect\ to go to the preset list. Use \ButtonUp\ and \ButtonDown\ to move the cursor and then hold \ButtonSelect\ on the preset that you wish to remove, then select ``Remove preset''.
}
\opt{RECORDER_PAD}{
\item[Recording:]
Press \ButtonFThree\ to start recording the currently playing station. Press \ButtonOff\ to stop recording. Press \ButtonPlay\ again to seamlessly start recording to a new file. The settings for the recording can be changed in the \ButtonFOne\ menu before starting the recording. See page \pageref{ref:Recordingsettings} for details of recording settings.
}
\end{description}
\note{The radio will turn off when starting playback of an audio file.}
}
\opt{HAVE_RECORDING}{
\section{\label{ref:Recording}Recording}
\opt{x5}{\note{Not implemented on X5 yet}}
\subsection{\label{ref:Whilerecordingscreen}While Recording Screen}
\screenshot{main_menu/images/ss-while-recording-screen}{The while recording screen}{}
Entering the ``Recording'' option in the Main menu launches the recording application.
The screen shows the time elapsed and the size of the file being recorded. A peak meter is present to allow you to set the gain correctly.
\opt{MASCODEC}{The frequency, channels and quality}
\opt{SWCODEC}{The frequency and channels}
settings are shown on the last line. The controls for this screen are:
\begin{table}
\begin{btnmap}{}{}
\ButtonLeft & Decreases Gain \\
%
\ButtonRight & Increases Gain \\
%
\opt{RECORDER_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOn}
\opt{ONDIO_PAD,IAUDIO_X5_PAD,IPOD_4G_PAD}{\fixme{FixMe}}
& Starts recording. \\
& While recording: button closes the current file and opens a new one.\\
%
\opt{RECORDER_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOff}
\opt{ONDIO_PAD,IAUDIO_X5_PAD,IPOD_4G_PAD}{\fixme{FixMe}}
& Exits Recording Screen.\\
& While recording: Stops recording \\
%
\opt{RECORDER_PAD}{\ButtonFOne}
\opt{ONDIO_PAD}{\ButtonMenu}
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IPOD_4G_PAD,IAUDIO_X5_PAD}{Hold \ButtonSelect}
& Opens Recording Settings screen (see below) \\
%
\opt{RECORDER_PAD}{
\ButtonFTwo & Quick menu for recording settings. A quick press will leave the screen up (press {\ButtonFTwo} again to exit), while holding it will close the screen when you release it. \\
}
%
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IPOD_4G_PAD,IAUDIO_X5_PAD}{
\ButtonSelect & Quick menu for recording settings. \\
}
%
\opt{RECORDER_PAD}{
\ButtonFThree & Quick menu for source setting. \\
& Quick/hold works as for {\ButtonFTwo}. \\
& While recording: Starts a new recording file \\
}
\end{btnmap}
\end{table}
\subsection{\label{ref:Recordingsettings}Recording Settings}
\screenshot{main_menu/images/ss-recording-settings}{The recording settings screen}{}
\opt{MASCODEC}{
\begin{description}
\item[Quality:]
Choose the quality here (0 to 7). Default is 5, best quality is 7, smallest file size is 0. This setting affects how much your sound sample will be compressed. Higher quality settings result in larger MP3 files.
The quality setting is just a way of selecting an average bit rate, or number of bits per second, for a recording. When this setting is lowered, recordings are compressed more (meaning worse sound quality), and the average bitrate changes as follows.
\end{description}
\begin{table}[h!]
\begin{center}
\begin{tabularx}{0.75\textwidth}{lX}\toprule
\emph{Frequency} & \emph{Bitrate} (Kbit/s) -- quality 0$\rightarrow$7 \\\midrule
44100Hz stereo & 75, 80, 90, 100, 120, 140, 160, 170 \\
22050Hz stereo & 39, 41, 45, 50, 60, 80, 110, 130 \\
44100Hz mono & 65, 68, 73, 80, 90, 105, 125, 140 \\
22050Hz mono & 35, 38, 40, 45, 50, 60, 75, 90 \\\bottomrule
\end{tabularx}
\end{center}
\end{table}
}
\begin{description}
\item[Frequency:]
Choose the recording frequency (sample rate) -- 48kHz, 44.1kHz, 32kHz, 24kHz, 22.05kHz and 16kHz are available. Higher sample rates use up more disk space, but give better sound quality. This setting determines which frequency range can accurately be reproduced during playback. Lower frequencies produce smaller files.
\opt{MASCODEC}{
The frequency setting also determines which version of the MPEG standard the sound is recorded using:\\
MPEG v1 for 48, 44.1 and 32\\
MPEG v2 for 24, 22.05 and 16\\
}
\item[Source:]
Choose the source of the recording. This can be \opt{recorder,recorderv2fm,h1xx}{SPDIF (digital),} microphone or line in.
\opt{CONFIG_TUNER}{For recording from the radio see page \pageref{ref:FMradio}.}
\opt{recorder,recorderv2fm,h100}
{\note{You cannot change the sample rate for digital recordings.}}
\item[Channels:]
This allows you to select mono or stereo recording. Please note that for mono recording, only the left channel is recorded. Mono recordings are usually somewhat smaller than stereo.
\item[Independent Frames:]
The independent frames option tells the \dap\ to encode with the bit reservoir disabled, so the frames are independent of each other. This makes a file easier to edit.
\item[Time Split:] This option is useful when timing recordings. If set to active it stops a recording at a given interval and then starts recording again with a new file, which is useful for long term recordings. \newline The splits are seamless (frame accurate), no audio is lost at the split point. The break between recordings is only the time required to stop and restart the recording, on the order of 2 -- 4 seconds. \newline Options (hours:minutes between splits): off, 24:00, 18:00, 12:00, 10:00, 8:00, 6:00, 4:00, 2:00, 1:20 (80 minute CD), 1:14 (74 minute CD), 1:00, 00:30, 00:15, 00:10, 00:05. \item[Prerecord Time:] This setting buffers a small amount of audio so that when the record button is pressed, the recording will begin from that number of seconds earlier. This is useful for ensuring that a recording begins before a cue that is being waited for.\\ Options: Off, 1 -- 30 seconds \item[Directory:] Allows changing the location where the recorded files are saved. The default location is \fname{/recordings}. \item[Show recording screen on startup:] If set to yes, the \dap\ will start up with the while recording screen showing.\\ Options: Yes, No\\ \item[Clipping Light:] Causes the backlight to flash on when clipping has been detected.\\ Options: Off, Remote unit only, Main and remote unit, Main unit only. \end{description} } \section{\label{ref:playlistoptions}Playlist Options} This menu allows you to work with playlists. Playlists can either be created automatically by playing a file in a directory directly, which will cause all of the files in that directory to be placed in the playlist, or they can be created by hand using the \setting{File Menu} (see page \pageref{ref:Filemenu}) or using the \setting{Playlist Options} menu. Both automatic and manually created playlists can be edited using this menu. \begin{description} \item[Create Playlist:] Rockbox will create a playlist with all tracks in the current directory and all subdirectories. 
The playlist will be created ``one folder level up'' from where you currently are.
\item[View Current Playlist:]
Displays the contents of the playlist currently stored in memory.
\item[Save Current Playlist:]
Saves the current dynamic playlist, excluding queued tracks, to the specified file. If no path is provided then the playlist is saved to the current directory (see page \pageref{ref:Playlistsubmenu}).
\item[Recursively Insert Directories: ]
If set to \setting{On}, then when a directory is inserted or queued into a dynamic playlist, all subdirectories will also be inserted. If set to \setting{Ask}, Rockbox will prompt the user about whether to include subdirectories. Options: \setting{Off}, \setting{Ask}, \setting{On}
\item[Warn When Erasing Dynamic Playlist: ]
If set to \setting{Yes}, Rockbox will provide a warning if the user attempts to take an action that will cause Rockbox to erase the current dynamic playlist. Options: \setting{Yes}, \setting{No}
\end{description}
\section{Browse Plugins}
With this option you can load and run various plugins that have been written for Rockbox. There are a wide variety of these supplied with Rockbox, including several games, some impressive demos and a number of utilities. A detailed description of the different plugins begins on page \pageref{ref:plugins}.
\section{\label{ref:Info}Info}
This option shows RAM buffer size, battery voltage level and estimated time remaining, disk total space and disk free space.
\opt{player}{Use the MINUS and PLUS keys to step through several pages of information.}
\begin{description}
\item[Rockbox Info:]
Displays some basic system information. This is, from top to bottom, the amount of memory Rockbox has available for storing music (the buffer), battery status, hard disk size and the amount of free space on the disk.
\item[Version:]
Software version and credits display.
\item[Debug (Keep Out!):]
This submenu is intended to be used \emph{only} by Rockbox developers.
It shows hardware, disk, battery status and other technical information. \warn{It is not recommended that users access this menu unless instructed to do so in the course of fixing a problem with Rockbox. If you think you have messed up your settings by use of this menu please try to reset \emph{all} settings before asking for help.} \end{description} \opt{player}{ \section{Shutdown} This menu option saves the Rockbox configuration and turns off the hard drive before shutting down the machine. For maximum safety this procedure is recommended when turning off the \dap. (There is a very small risk of hard disk corruption otherwise.) See page \pageref{ref:Safeshutdown} for more details. } \opt{RECORDER_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD,IPOD_4G_PAD,IAUDIO_X5_PAD,IPOD_VIDEO_PAD} { \section{Quick Menu} Whilst not strictly part of the \setting{Main Menu}, it is worth noting that a few of the more commonly used settings are available from the \setting{Quick Menu}. The \setting{Quick Menu} screen is accessed by holding the \opt{RECORDER_PAD}{\ButtonFTwo} \opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonMode} \opt{IPOD_4G_PAD,IPOD_VIDEO_PAD}{\ButtonMenu} \opt{IAUDIO_X5_PAD}{\ButtonRec} key, and it allows rapid access to the \setting{Shuffle} and \setting{Repeat} modes (Page \pageref{ref:PlaybackOptions}) and the \setting{Show Files} option (Page \pageref{ref:ShowFiles}). 
}
https://www.cfd-online.com/W/index.php?title=Molar_fraction&diff=9506&oldid=9504

# Molar fraction
The molar fraction (or mole fraction) of the k-th species, $X_k$, is the ratio between the number of moles of species k, $n_k$, and the total number of moles $n$:
$X_k = \frac{n_k}{n}$
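As a quick illustration of the definition, mole fractions can be computed directly from the mole counts of each species. A minimal Python sketch (the function name and the example mixture are ours, not part of the wiki article):

```python
def mole_fractions(moles):
    """Mole fraction of each species k: X_k = n_k / n,
    where n is the total number of moles in the mixture."""
    n = sum(moles.values())
    return {species: n_k / n for species, n_k in moles.items()}

# Hypothetical mixture: 1 mol of O2 and 4 mol of N2
x = mole_fractions({"O2": 1.0, "N2": 4.0})
```

Note that the resulting fractions sum to 1 by construction.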
http://nrich.maths.org/7620
# Receding Baseball
##### Stage: 5 Challenge Level
Suppose a pitcher throws a ball in such a way that the distance between him and the ball is always increasing. It is given that the acceleration due to gravity is $g = 9.81 \mathrm{m/s}^2$ and any air resistance is negligible.
First of all, draw an example throw which has this property, and draw another which doesn't. Find conditions which are necessary to make such a throw.
1) Does it depend on the angle at which the ball is thrown?
2) Does it depend on the initial speed of the ball?
3) How does your result change if the player is at different altitudes?
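Before attempting the algebra, it can help to test candidate throws numerically. The Python sketch below (the function name and sample rate are our own; air resistance is neglected, as stated) samples the squared pitcher-ball distance over the flight and checks that it never decreases:

```python
import math

def is_receding(v, theta_deg, g=9.81, steps=2000):
    """True if the distance between pitcher and ball is non-decreasing
    for the whole flight of a throw with speed v (m/s) at an angle of
    theta_deg degrees above the horizontal, on level ground."""
    theta = math.radians(theta_deg)
    T = 2.0 * v * math.sin(theta) / g      # time of flight
    d2 = []
    for i in range(steps + 1):
        t = T * i / steps
        x = v * math.cos(theta) * t                    # horizontal position
        y = v * math.sin(theta) * t - 0.5 * g * t * t  # vertical position
        d2.append(x * x + y * y)                       # squared distance
    return all(b >= a - 1e-9 for a, b in zip(d2, d2[1:]))
```

Experimenting with this function for different inputs suggests answers to questions 1 and 2: the property depends on the launch angle but not on the initial speed.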
http://direct.mangolassi.it/tags/parameters

@JaredBusch said in Need a good example of getting powershell arguments:

> I'll hit the google later, because I am on other things, but I found that something I touched today could very easily be improved if I can add parameter handling to the powershell script. Now, the basics are easy as it is all in the `$args` variable/object. But I want to have some safety checking, because it is easier to do things right the first time. Example: I want a parameter to note if I should make the thing being done the default. I can pass a 1 like `dothing.ps1 1` and I can simply code something to check `$args[0] -eq "1"`, but that is not very explanatory to the person using the script. This is more explanatory for a command: `dothing.ps1 -default`. So has anyone seen a good example of parameter handling that I can put into my dothing.ps1 script?

I'm not sure I understand exactly what you mean. Taking a guess here, but how I understand it is that you'd want to add this at the top of your script:

```powershell
[cmdletbinding()]
param (
    [Parameter()]
    [Switch]$Default
)

if ($Default) {
    Write-Host "The -Default parameter was specified."
}
else {
    Write-Host "The -Default parameter was NOT specified."
}
```

Doing that will give you the following output:

```
PS > .\JBTest.ps1 -Default
The -Default parameter was specified.

PS > .\JBTest.ps1
The -Default parameter was NOT specified.
```

If you want to accept input from a pipeline to work with, let me know.
https://www.scholars.northwestern.edu/en/publications/network-structural-origin-of-instabilities-in-large-complex-syste

Network structural origin of instabilities in large complex systems
Chao Duan, Takashi Nishikawa*, Deniz Eroglu, Adilson E. Motter
*Corresponding author for this work
Research output: Contribution to journal › Article › peer-review
Abstract
A central issue in the study of large complex network systems, such as power grids, financial networks, and ecological systems, is to understand their response to dynamical perturbations. Recent studies recognize that many real networks show nonnormality and that nonnormality can give rise to reactivity—the capacity of a linearly stable system to amplify its response to perturbations, oftentimes exciting nonlinear instabilities. Here, we identify network structural properties underlying the pervasiveness of nonnormality and reactivity in real directed networks, which we establish using the most extensive dataset of such networks studied in this context to date. The identified properties are imbalances between incoming and outgoing network links and paths at each node. On the basis of this characterization, we develop a theory that quantitatively predicts nonnormality and reactivity and explains the observed pervasiveness. We suggest that these results can be used to design, upgrade, control, and manage networks to avoid or promote network instabilities.
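For readers new to the term: for a linear system $\dot{x} = Ax$, reactivity is conventionally defined (following Neubert and Caswell) as the largest eigenvalue of the symmetric part of $A$; it is positive exactly when some perturbations grow initially even though the system is asymptotically stable. The sketch below is a standard illustration of that definition, not code from the paper, and the example matrix is invented:

```python
import numpy as np

def reactivity(A):
    """Initial growth rate of ||x(t)|| for x' = A x: the largest
    eigenvalue of the symmetric part H = (A + A.T) / 2."""
    H = (A + A.T) / 2.0
    return float(np.linalg.eigvalsh(H)[-1])  # eigvalsh sorts ascending

# Stable (both eigenvalues are -1) but nonnormal: the strong
# feed-forward coupling makes the system reactive.
A = np.array([[-1.0, 5.0],
              [0.0, -1.0]])
```

Here `reactivity(A)` equals 1.5 > 0, while for a normal stable matrix such as `np.diag([-1.0, -2.0])` it is negative: only the nonnormal system transiently amplifies perturbations.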
Original language: English (US)
Article number: eabm8310
Journal: Science Advances
Volume: 8
Issue number: 28
DOI: https://doi.org/10.1126/sciadv.abm8310
State: Published - Jul 2022
https://mospace.umsystem.edu/xmlui/handle/10355/5296/browse?value=Cersosimo%2C+Dario+O.&type=author
#### Evaluation of novel hovering strategies to improve gravity-tractor deflection merits
(University of Missouri--Columbia, 2011)
The gravity-tractor (GT) consists of a spacecraft hovering inertially over a small asteroid. This equilibrium state is achieved by the action of a pair of engines that balance the gravitational acceleration. Due to Newton's ...
https://www.alviniwiewiorki.pl/zambia/Jan_4316/

# Critical Speed of Ball Mill Formula and Derivation
##### Ball Mill Critical Speed - Mineral Processing & Metallurgy
2015-6-19 · A Ball Mill Critical Speed (actually ball, rod, AG or SAG) is the speed at which the centrifugal forces equal gravitational forces at the mill shell’s inside surface and no balls will fall from its position onto the shell. The imagery below helps explain what goes on inside a mill as speed varies. Use our online formula. The mill speed is typically defined as the percent of the
##### Mill Critical Speed Formula Derivation - Grinding ...
2021-12-24 · The formula to calculate critical speed is given below: Nc = 42.305 / sqrt(D-d), where Nc = critical speed of the mill (rpm), D = mill diameter specified in meters, and d = diameter of the ball in meters. In practice, ball mills are driven at a speed of 50-90% of the critical speed, the factor being influenced by economic considerations.
##### How to Calculate and Solve for Critical Mill of Speed ...
2021-7-18 · Now, click on Ball Mill Sizing under Materials and Metallurgical. Then, click on Critical Speed of Mill under Ball Mill Sizing. The screenshot below displays the page or activity to enter your values, to get the answer for the critical speed of mill according to the respective parameters, which are the Mill Diameter (D) and Diameter of Balls (d). Now, enter the value appropriately
##### Ball Mill Operating Speed - Mechanical Operations Solved ...
2021-4-22 · The critical speed of ball mill is given by nc = (1/2π)·sqrt(g/(R - r)) (in revolutions per second), where R = radius of ball mill; r = radius of ball. For R = 1000 mm and r = 50 mm, n c = 30.7 rpm. But the mill is operated at a speed of 15 rpm. Therefore, the mill is operated at 100 x 15/30.7 = 48.86 % of critical speed. If 100 mm dia balls are replaced by 50 mm dia balls, and the other conditions ...
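The rule-of-thumb formula quoted earlier, Nc = 42.305/sqrt(D-d) with diameters in metres, reproduces this worked example in a few lines of Python (the variable names are ours):

```python
import math

def critical_speed_rpm(D, d):
    """Critical mill speed in rpm: Nc = 42.305 / sqrt(D - d),
    with mill diameter D and ball diameter d in metres."""
    return 42.305 / math.sqrt(D - d)

nc = critical_speed_rpm(2.0, 0.1)  # R = 1000 mm, r = 50 mm -> about 30.7 rpm
pct = 100.0 * 15.0 / nc            # mill run at 15 rpm -> about 48.9 % of critical
```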
##### Critical speed of ball mill derivation - Abindi Mining ...
2021-7-16 · Critical speed of ball mill formula derivation. 3 apr 2018 semiautogenous grinding sag mill and a ball mill.Apply bonds equation to industrial mills, which differ from the standard, for each mill there is a critical speed that creates centrifuging figure 37c of the with the help of figure 38, the concepts used in derivation of tumbling mills
##### SAGMILLING.COM .:. Mill Critical Speed Determination
The "Critical Speed" for a grinding mill is defined as the rotational speed where centrifugal forces equal gravitational forces at the mill shell's inside surface. This is the rotational speed where balls will not fall away from the mill's shell. Result #1: This mill would need to spin at RPM to be at 100% critical speed. Result #2: This mill's ...
##### Variables in Ball Mill Operation | Paul O. Abbe®
A Slice Mill of 72” diameter by 12” wide would replicate the result of a normal production mill 72” in diameter and 120” long. A Slice Mill is the same diameter as the production mill but shorter in length. Click to request a ball mill quote online
##### Critical Velocity - Introduction, Formula, Derivation ...
The speed and direction in which the flow of a liquid changes from through tube smooth to turbulent is known as the critical velocity of the fluid. There are multiple variables on which the critical velocity depends, but whether the flow of the fluid is smooth or turbulent is determined by the Reynolds number.
##### Critical Velocity in Vertical Circular motion - formula ...
2017-9-21 · Critical Velocity Formula [minimum velocity at the highest point of the vertical circle] | √(gr) formula. The critical velocity formula is expressed as V1 = √(gr), where g is the acceleration due to gravity and r is the radius of the vertical circle.
##### Lectures on Kinetic Theory of Gases and Statistical Physics
2020-11-30 · 16.4.6. Equation of State of a Quantum Ideal Gas140 16.4.7. Entropy and Adiabatic Processes140 16.5. Degeneration141 17. Degenerate Fermi Gas 143 17.1. Fermi Energy143 17.2. Mean Energy and Equation of State at T= 0144 17.3. Heat Capacity146 17.3.1. Qualitative Calculation147 17.3.2. Equation of State at T>0147 17.3.3.
##### critical speed of ball mill derivation - BINQ Mining
2012-12-31 · critical speed of ball mill calculation. Derivation of critical speed of grinding mill – The Q&A wiki. At 75% critical speed this ball mill can be expected to draw 1840 HP with a 40% ball load. »More detailed
##### SAGMILLING.COM .:. Mill Critical Speed Determination
The "Critical Speed" for a grinding mill is defined as the rotational speed where centrifugal forces equal gravitational forces at the mill shell's inside surface. This is the rotational speed where balls will not fall away from the mill's shell. Result #1: This mill would need to spin at RPM to be at 100% critical speed. Result #2: This mill's ...
##### derivation of critical speed of a ball mill
Derivation Of Critical Speed Of Ball Mill. Derivation Of Critical Speed Of Ball Mill Jun 6 1993 lays the foundation for the derivation of a model if a large the mill speed nears 100 per cent of critical speed centrifug ing of the outer bonds equation implies that torque is zero for all j at for ball mill grinding control read
##### Ball Mill Parameter Selection – Power, Rotate Speed, Steel ...
2019-8-30 · 1 Calculation of ball mill capacity. The production capacity of the ball mill is determined by the amount of material required to be ground, and it must have a certain margin when designing and selecting. There are many factors affecting the production capacity of the ball mill, in addition to the nature of the material (grain size, hardness, density, temperature and
##### SAGMILLING.COM .:. tools
SAGMILLING.COM .:. tools. Mill Critical Speed Calculation. Estimates the critical speed of a grinding mill of a given diameter, given the mill inside diameter and liner thickness. If given a measured mill rotation (RPM), then the mill's fraction of the critical speed is
##### Rittinger - an overview | ScienceDirect Topics
For a given value of the filling rate of the bed of balls in the mill, the number of balls is proportional to D²L/d_B³, where D is the mill diameter, L is the length of the mill and d_B the ball diameter. The mass ground per unit time is proportional to the rotation speed of the shell, the mill having a critical speed proportional to 42.3/D^0.5, with the fraction φ of this ...
##### Critical Speed Yaw Analysis and Testing - jhscientific
2004-5-5 · Critical Speed Yaw Test, cont'd
• A radius was calculated for each chord
• If the vehicle is in a true critical speed yaw, there should be a reduction in speed from the first radius to the second radius.
• We will calculate the speeds using the standard critical speed yaw equation and the drag factor from the Taurus, which was a non-ABS ...
##### What Is A Critical Speed In Ball Mill - chmielik.pl
critical speed ball mill calculation. The critical speed of the mill, φc, is defined as the speed at which a single ball will just remain against the wall for a full cycle. In the equation, D is the diameter inside the mill liners and Le is the effective length of the mill. A Ball Mill Critical Speed (actually ball, rod, AG or SAG) is the speed at which the centrifugal forces equal gravitational forces at the mill shell's inside surface.
##### TECHNICAL NOTES 8 GRINDING R. P. King
2009-7-30 · A simple equation for calculating net power draft is P = 2.00 φc D^2.5 L Kl kW (8.12). Kl is the loading factor, which can be obtained from Figure 8.5 for the popular mill types, and φc is the mill speed measured as a fraction of the critical speed. More reliable models for the prediction of the power drawn by ball, semi-autogenous and fully autogenous
##### Calculate the operating speed of ball mill if Operating ...
2021-4-1 · The critical speed of ball mill is given by nc = (1/2π)·sqrt(g/(R - r)), where R = radius of ball mill; r = radius of ball. But the mill is operated at a speed of 15 rpm. Therefore, the mill is operated at 100 x 15/30.7 = 48.86 % of critical speed.
2022-1-3 · A solid metal ball is falling in a long liquid column and has attained a terminal velocity of 4 m/s. What is the viscosity of the liquid if the radius of the metal ball is r = 5 cm and its density is ρs = 8050 kg/m³? (The density of the liquid is 1000 kg/m³ and g is 10 m/s².) Solution: The radius of the sphere is r = 0.05 m.
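The quoted solution breaks off after converting the radius. Assuming the intended method is Stokes' drag at terminal velocity, where the viscosity follows from η = 2 r² g (ρs - ρl) / (9 v) (a standard result, not spelled out in the excerpt), the computation finishes as:

```python
def stokes_viscosity(r, v, rho_s, rho_l, g=10.0):
    """Viscosity of the liquid from Stokes' law at terminal velocity:
    eta = 2 * r^2 * g * (rho_s - rho_l) / (9 * v)."""
    return 2.0 * r**2 * g * (rho_s - rho_l) / (9.0 * v)

# r = 0.05 m, v = 4 m/s, ball density 8050 kg/m^3, liquid 1000 kg/m^3
eta = stokes_viscosity(0.05, 4.0, 8050.0, 1000.0)  # about 9.79 Pa*s
```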
http://www.msri.org/workshops/654/schedules/17317

# Mathematical Sciences Research Institute
# Mass concentration phenomena for the long-wave unstable thin-film equation
## Connections for Women on Optimal Transport: Geometry and Dynamics (August 22 - August 23, 2013)
August 23, 2013 (02:00 PM PDT - 03:00 PM PDT)
Speaker(s): Marina Chugunova (Claremont Graduate University)
Location: MSRI: Simons Auditorium
Video
Abstract: We study finite speed of support propagation and finite-time blow-up of non-negative solutions for the long-wave unstable thin-film equation. We consider a large range of exponents (n, m) within the super-critical (m > n+2) and critical (m = n+2) regimes. For initial data with negative energy we prove that the solution that blows up in finite time exhibits mass concentration near the blow-up time. Joint work with: Mary Pugh, Roman Taranets
http://math.stackexchange.com/questions/19370/demonstration-by-induction-1an-%e2%89%a51an

# Demonstration by induction: $(1+a)^n \ge 1+an$
Demonstrate by induction that $(1+a)^n \ge 1+an$ holds for every $n \in \mathbb N$, where $a$ is a real number with $a > 0$.
I need to demonstrate this using the induction principle. My difficulty is with the second part of the proof.
This is what I have:
In order to demonstrate the predicate these two points must be true:
1. $P(1)$ must be True.
2. $∀ n ∈ \mathbb N , P(n) => P(n+1)$ is true.
We demonstrate the first point:
$(1+a)^1 \ge 1+a\cdot 1$, i.e. $1+a \ge 1+a$, so it is true.
Now, the second part is where I have the problem. I do not know what to do. I understand the theory but I don't know how to apply it.
I tried this:
$(1+a)^{n+1} ≥ 1+a(n+1)$
But I don't see that as useful.
Any tips?
-
If $n$ is odd, it isn't true for all $a$, just for $a \geq -1$. – tpv Jan 28 '11 at 16:51
oh, sorry, I forget to add that A is positive. – Nerian Jan 28 '11 at 16:55
Just an FYI, the inequality in question is known as Bernoulli's Inequality. – user6459 Feb 1 '11 at 2:15
Ross has pretty much given you all the hints necessary to answer this particular question, so let me instead give you some general advice on proofs by induction.
In doing the induction, especially at first and until you get very comfortable with induction, be very explicit. Label the base Base. Label the inductive step as "Inductive step." State explicitly what the Induction Hypothesis is, state explicitly what you want to prove. Use this as a way to organize your thoughts and have the information at the ready.
Second: The key to most proofs by induction is to take the case you need to prove, and to somehow reduce it to "the previous case plus something extra"; then one applies the inductive hypothesis to simplify/answer the part that is the previous case, and then deal with the "something extra."
Here's an example different from the one at hand, so you can see what I mean.
Consider the following:
Prove that for all natural numbers $n$, $$1\cdot 2 + 2\cdot 3 + \cdots + n(n+1) = \frac{n(n+1)(n+2)}{3}.$$
Proof. We proceed by indution on $n$.
Base. We prove the statement for $n=1$: indeed, $1\cdot 2 = \frac{1(2)(3)}{3}$.
Inductive step.
Induction Hypothesis. We assume the result holds for $k$. That is, we assume that $$1\cdot 2 + 2\cdot 3 + \cdots + k(k+1) = \frac{k(k+1)(k+2)}{3}$$ is true.
To prove: We need to show that the result holds for $k+1$, that is, that $$1 \cdot 2 + 2\cdot 3 + \cdots + (k+1)(k+2) = \frac{(k+1)\bigl((k+1)+1\bigr)\bigl((k+1)+2\bigr)}{3}.$$
(Now comes the point where we take $1\cdot 2 + 2\cdot 3 + \cdots + (k+1)(k+2)$ and try to think of it as "the $k$-case plus something extra").
\begin{align*} &{1\cdot 2 + 2\cdot 3 + \cdots + (k+1)(k+2)}\\ \quad &= \Bigl(1\cdot 2+ 2\cdot 3+\cdots + k(k+1)\Bigr) + (k+1)(k+2) &&\mbox{(associativity of $+$)}\\ \quad&= \frac{k(k+1)(k+2)}{3} + (k+1)(k+2) &&\mbox{(This is the induction hypothesis!)}\\ &= \frac{k(k+1)(k+2)}{3} + \frac{3(k+1)(k+2)}{3} &&\mbox{(Just algebra)}\\ &= \frac{(k+1)(k+2)\bigl(k+3\bigr)}{3} &&\mbox{(factor out $(k+1)(k+2)$)}\\ &= \frac{(k+1)\bigl( (k+1)+1\bigr)\bigl((k+1)+2\bigr)}{3}. \end{align*} Thus, $1\cdot 2 + 2\cdot 3 + \cdots + (k+1)(k+2) = \frac{(k+1)\bigl((k+1)+1\bigr)\bigl((k+1)+2\bigr)}{3}$. This proves the inductive step.
Since the statement holds for $n=1$, and if it holds for $k$ then it holds for $k+1$, then by mathematical induction we conclude that $$1\cdot 2 + 2\cdot 3 + \cdots + n(n+1) = \frac{n(n+1)(n+2)}{3}$$ for all natural numbers $n$. QED
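As a quick sanity check (not a substitute for the induction), the identity is easy to verify numerically for small $n$:

```python
def lhs(n):
    """Left-hand side: 1*2 + 2*3 + ... + n*(n+1)."""
    return sum(k * (k + 1) for k in range(1, n + 1))

def rhs(n):
    """Right-hand side: n(n+1)(n+2)/3, which is always an integer."""
    return n * (n + 1) * (n + 2) // 3
```

For example, `lhs(3)` and `rhs(3)` both evaluate to 20, and the two sides agree for every $n$ tried.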
So: be very explicit and clear, to help organize your thoughts. As you get more comfortable with induction, you can leave off makign explicit statement of what you need to prove, etc., but until you are very comfortable, better to be explicit than to be confused.
-
I.e. in radix $3:\ \ (3^n-1)/2\ =\ 11\cdots 1\ >\ 1+1+\cdots +1\$ ($n$ times). $\$ See my answer. – Bill Dubuque Jan 28 '11 at 17:56
@Bill: Sigh; that was silly. It was the same problem, which is precisely what I was trying to avoid; I've changed it to something completely different. My goal was to exhibit the "be very explicit, be very clear, be very organized", not to discuss the particular problem. – Arturo Magidin Jan 28 '11 at 18:18
I used your example to redo the problem. I am still trying it. – Nerian Jan 28 '11 at 18:36
@Arturo: I reach the point of 'k-case plus something extra'. I write: $(1+a)^{k+1} = (1+a)^k(1+a)$. I see that the extra is $(1+a)$. So I just add that to the other side? That's $(1+ak)(1+a)$. So I have: $(1+a)^k(1+a) ≥ (1+ak)(1+a)$. I move the $(1+a)$ to the other side and that's: $(1+a)^k ≥ (1+ak)$. Is that right? Does that prove the inductive step? – Nerian Jan 28 '11 at 18:51
Nerian: you don't want to move the $(1+a)$ to the other side here; that's undoing exactly what you did in the first place! Instead you want to work some algebra on that side and see if you can get to your 'target' statement (in this case, that $(1+a)^{k+1} \geq 1+a(k+1)$.) – Steven Stadnicki Jan 28 '11 at 19:24
Hint: Now you assume that $(1+a)^n \ge 1 + an$ and need to prove that $(1+a)^{(n+1)} \ge 1 + a(n+1)$. So $(1+a)^{(n+1)}=(1+a)^n*(1+a) \ge$ what?
Added: $(1+a)^{n+1}=(1+a)^n(1+a) \ge (1+an)(1+a)$ by the inductive hypothesis. Then $(1+an)(1+a)=1+a(n+1)+a^2n \ge 1+a(n+1)$. So, as Arturo showed in his example, we have proven that if the relation is true for $n$, it is also true for $n+1$. Given that you verified the base case of $n=1$, it is true for all $n$.
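(A side sanity check, not part of the answer:) Bernoulli's inequality $(1+a)^n \ge 1+an$ can be spot-checked numerically; the sketch below assumes the usual hypothesis $a \ge -1$:

```python
def bernoulli_holds(a, n):
    # Bernoulli's inequality: (1+a)^n >= 1 + a*n
    return (1 + a) ** n >= 1 + a * n

assert all(bernoulli_holds(a, n)
           for a in (-1.0, -0.5, 0.0, 0.1, 1.0, 3.0)
           for n in range(1, 50))
print("spot-check passed")
```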
[what]: 1+a(n+1) // 1+an+a // (1+a)+an ? – Nerian Jan 28 '11 at 16:54
@Nerian: Those three are all equal and are what you want on the right side of the greater-than sign. So substitute in what you know: what replacement is there for $(1+a)^n$ that will help? – Ross Millikan Jan 28 '11 at 16:58
@Ross: So the right side is OK. We need to transform the left side. A replacement for $(1+a)^n$...mmm. Well, I know that $(1+a)^n≥ 1+an$, which is part of the statement that we want to prove, is true. So if I take that out and just leave the rest, we have: $(1+a) ≥ a$ which is true. Is that what you meant? – Nerian Jan 28 '11 at 17:19
@Nerian: You are assuming that $(1+a)^n \geq 1+na$ as your induction hypothesis. You want to prove that from this you can conclude that $(1+a)^{n+1}\geq 1+(n+1)a$. So take $(1+a)^{n+1}$ and write it as $(1+a)^{n+1}=(1+a)(1+a)^n$. Now, by the induction hypothesis, you know that this entire product is greater than or equal to <fill in the blank>; now continue. – Arturo Magidin Jan 28 '11 at 17:33
@Nerian: I added the finish. – Ross Millikan Jan 31 '11 at 19:42
HINT $\$ Put $\rm\ \ x = 1+a\ \$ in $\rm\displaystyle\ \ \frac{x^n -1}{x-1}\ =\ x^{n-1}+\:\cdots\: +x+1\ \ge\ n\$ for $\rm\ x\ge 1$
Many induction problems are best solved in this manner, i.e. by transforming them into a form that makes the induction completely trivial.
I don't understand. Where is that fraction coming from? – Nerian Jan 28 '11 at 17:15
@Nerian: $\rm\ (1+a)^n\ >\ 1+an\ \iff\ ((1+a)^n-1)/a\ >\ n\:,\$ i.e. $\rm\ (x^n-1)/(x-1)\ >\ n\ \ \ \ \ \ \ \$ – Bill Dubuque Jan 28 '11 at 17:20
Before you do anything else, ask yourself: do you even believe that this is true? Have you tried any examples with specific numbers? Fixing the value of $a$ (especially taking $a$ to be very small) and varying the value of $n$, perhaps? Does the result seem plausible after doing this? Do your experiments suggest anything about what's going on?
$(1 + a)^2 = 1 + 2a + a^2$ would be a good start – Simpson17866 Sep 8 '15 at 13:25
https://brilliant.org/problems/5-30/ | # 5 = ?
Consider a man who jumps from a 5-storey building on Earth and dies as soon as he reaches the ground. From which floor (at minimum) must he jump in order to die on the Moon?
Consider:
1) Man's mass = 70 kg
2) Initial velocity = 0 m/s
3) Earth's gravity = 9.8 m/s²
4) Moon's gravity = 1.6 m/s²
5) Height of each floor = 3 m
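A sketch of the intended comparison (assuming that "jumping from the 5-storey building" means a 5 × 3 m = 15 m fall, and that death occurs at the same impact speed on both bodies, so v² = 2gh and the 70 kg mass cancels out):

```python
import math

g_earth, g_moon, storey = 9.8, 1.6, 3.0   # m/s^2, m/s^2, m
v2_lethal = 2 * g_earth * (5 * storey)    # lethal v^2 from a 15 m fall on Earth
h_moon = v2_lethal / (2 * g_moon)         # height giving the same v^2 on the Moon
min_floor = math.ceil(h_moon / storey)    # round up to a whole floor
print(h_moon, min_floor)  # → 91.875 31
```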
http://wiki.stat.ucla.edu/socr/index.php?title=AP_Statistics_Curriculum_2007_Limits_LLN&diff=10983&oldid=10427 | # AP Statistics Curriculum 2007 Limits LLN
## General Advance-Placement (AP) Statistics Curriculum - The Law of Large Numbers
### Motivation
Suppose we independently repeat the same experiment many times. Assume that we are interested in the relative frequency of occurrence of an event whose probability of being observed in each experiment is p. The ratio of the observed frequency of that event to the total number of repetitions converges towards p as the number of (identical and independent) experiments increases. This is an informal statement of the Law of Large Numbers (LLN).
For a more concrete example, suppose we study the average height of a class of 100 students. Compared to the average height of 3 randomly chosen students from this class, the average height of 10 randomly chosen students is most likely closer to the real average height of all 100 students. Since the sample of 10 is larger than the sample of 3, it is a better representation of the entire class. At one extreme, a sample of 99 of the 100 students will produce a sample average height almost exactly the same as the average height of all 100 students. At the other extreme, sampling a single student will give a highly variable estimate of the overall class average height.
### The Law of Large Numbers (LLN)
It is generally necessary to draw the parallels between the formal LLN statements (in terms of sample averages) and the frequent interpretations of the LLN (in terms of probabilities of various events).
Suppose we observe the same process independently multiple times. Assume a binarized (dichotomous) function of the outcome of each trial is of interest (e.g., failure may denote the event that the continuous voltage measure < 0.5V, and the complement, success, that voltage ≥ 0.5V – this is the situation in electronic chips which binarize electric currents to 0 or 1). Researchers are often interested in the event of observing a success at a given trial or the number of successes in an experiment consisting of multiple trials. Let’s denote p=P(success) at each trial. Then, the ratio of the total number of successes to the number of trials (n) is the average $\overline{X_n}={1\over n}\sum_{i=1}^n{X_i}$, where $X_i = \begin{cases}0,& \texttt{failure},\\ 1,& \texttt{success}.\end{cases}$ represents the outcome of the ith trial. Thus, $\overline{X_n}=\hat{p}$, the ratio of the observed frequency of that event to the total number of repetitions, estimates the true p=P(success). Therefore, $\hat{p}$ converges towards p as the number of (identical and independent) trials increases.
### SOCR LLN Activity
Go to SOCR Experiments and select the Coin Toss LLN Experiment from the drop-down list of experiments in the top-left panel. This applet consists of a control toolbar on the top followed by a graph panel in the middle and a results table at the bottom. Use the toolbar to flip coins one at a time, 10, 100, 1,000 at a time or continuously! The toolbar also allows you to stop or reset an experiment and select the probability of Heads (p) using the slider. The graph panel in the middle will dynamically plot the values of the two variables of interest (proportion of heads and difference of Heads and Tails). The outcome table at the bottom presents the summaries of all trials of this experiment.
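(A side sketch, not part of the applet description:) the same convergence can be seen in a few lines of Python. Here `head_proportion` simulates n tosses of a coin with P(Heads) = p and returns the observed proportion of heads:

```python
import random

def head_proportion(n, p=0.5):
    # toss a p-coin n times and return the observed proportion of heads
    return sum(random.random() < p for _ in range(n)) / n

random.seed(1)
for n in (10, 1_000, 100_000):
    print(n, head_proportion(n))  # proportions settle near p = 0.5 as n grows
```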
### LLN Application
One demonstration of the law of large numbers provides practical algorithms for estimation of transcendental numbers. The two most popular transcendental numbers are π and e.
The SOCR E-Estimate Experiment provides the complete details of this simulation. In a nutshell, we can estimate the value of the natural number e using random sampling from the Uniform distribution. Suppose $X_1, X_2, \cdots, X_n$ are drawn from the Uniform distribution on (0, 1) and define $U= \min \left \{ n : X_1+X_2+\cdots+X_n > 1 \right \}$; note that all $X_i \ge 0$.
Now, the expected value $E(U) = e \approx 2.7182$. Therefore, by LLN, taking averages of $\left \{ U_1, U_2, U_3, ..., U_k \right \}$ values, each computed from random samples $X_1, X_2, ..., X_n \sim U(0,1)$ as described above, will provide a more accurate estimate (as $k \rightarrow \infty$) of the natural number e.
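The simulation just described can be sketched in plain Python (the helper name `u_sample` is illustrative): each draw of U counts how many Uniform(0,1) samples are needed for the running sum to exceed 1, and the average of many such draws approaches e:

```python
import random

def u_sample():
    # number of Uniform(0,1) draws needed for the running sum to exceed 1
    total, n = 0.0, 0
    while total <= 1.0:
        total += random.random()
        n += 1
    return n

random.seed(2)
k = 200_000
e_hat = sum(u_sample() for _ in range(k)) / k
print(e_hat)  # close to e = 2.71828...
```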
The Uniform E-Estimate Experiment, part of SOCR Experiments, provides a hands-on demonstration of how the LLN facilitates stochastic simulation-based estimation of e.
https://tex.stackexchange.com/questions/494835/how-to-make-the-letter-k-that-denote-krylov-space/494846 | # How to make the letter “K” that denote Krylov space
I'm trying to typeset the letter K that appears in the notation for Krylov spaces.
I've already tried to use \mathcal and \kappa but it's not the same.
• if you have a pdf of that (eg google suggested sam.math.ethz.ch/~mhg/pub/biksm.pdf) you can list the fonts it uses (just standard computer modern and ams fonts in that case) – David Carlisle Jun 8 at 15:48
• Welcome to the TeX.SE. What package are you using? – Sebastiano Jun 8 at 15:49
• In my humble opinion, the K in your picture is the same as \mathcal{K}. – Sebastiano Jun 8 at 16:06
• You are probably doing \usepackage{mathptmx}. Look at my edited answer. – egreg Jun 8 at 16:27
It's the standard \mathcal{K}.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
$\mathcal{K}(r_0;k)=\operatorname{span}\{r_0,Ar_0,\dots,A^kr_0\}$
\end{document}
I guess that your document uses mathptmx. Do like this:
\documentclass{article}
\usepackage{amsmath}
\usepackage{mathptmx}
\DeclareMathAlphabet{\mathcal}{OMS}{cmsy}{m}{n}
\begin{document}
$\mathcal{K}(r_0;k)=\operatorname{span}\{r_0,Ar_0,\dots,A^kr_0\}$
\end{document}
If you're using newtx, the code should be
\documentclass{article}
\usepackage{amsmath}
\usepackage{newtxtext,newtxmath}
\usepackage{fix-cm}
\DeclareMathAlphabet{\mathcal}{OMS}{cmsy}{m}{n}
\begin{document}
$\mathcal{K}(r_0;k)=\operatorname{span}\{r_0,Ar_0,\dots,A^kr_0\}$
\end{document}
looks a bit like kappa from txfonts, just a bit bigger:
\documentclass{article}
\usepackage{graphicx}
\usepackage{mathtools}
\usepackage{txfonts}
\begin{document}
$\scalebox{1.4}{$\kappa$}(r_0)$
\end{document}
• Why downvote? This seems to be a valid answer! – xxx--- Jun 9 at 15:25
https://www.physicsforums.com/threads/integrating-a-polynomial.151228/ | # Integrating a polynomial
1. Jan 13, 2007
### snowJT
I just want to know if this is how it should be done...
$$\frac{dy}{dx} = \frac{3y^2}{x}$$
$$dy = \frac{3y^2}{x}dx$$
$$dy = 3y^2 x^{-1}dx$$
$$\frac {1}{3} \int \frac {dy}{y^2} = \int x^{-1}dx$$
$$\frac {1}{3} ln| y^2| = ln | x | + C$$
because I forget how to integrate the polynomial...
2. Jan 13, 2007
### cristo
Staff Emeritus
The right hand side is correct. Look again at $$\int y^{-2}dy$$ How do you integrate this?
3. Jan 13, 2007
### AlephZero
$$\frac {1}{3} \int \frac {dy}{y^2} = \int x^{-1}dx$$
That's OK, but the integral of 1/y^2 isn't ln(y^2).
Look up how to integrate polynomials, if you forgot. Or if you know how to differentiate polynominals, work it out from the fact that integration is "anti-differentiation".
4. Jan 13, 2007
### snowJT
its not $$\frac{1}{3y}$$ is it?
5. Jan 13, 2007
### cristo
Staff Emeritus
Close; it's -1/(3y), since you must divide by the power of y(which is -1)
6. Jan 13, 2007
### snowJT
oops.. thanks, I was thinking that was how you did it originally; I was just confused, did it the other way, and forgot key things... thanks
EDIT: Hmmm I also need to then solve for Y
$$y^{-1} = \frac {-ln|x|}{3} - \frac {C}{3}$$
$$y = \frac{-3}{ln|x|} - \frac{3}{C}$$
Last edited: Jan 13, 2007
7. Jan 13, 2007
### JJ420
that looks wrong to me
8. Jan 13, 2007
### JJ420
not even close i dont think
9. Jan 13, 2007
### JJ420
$$y=\frac{(ln x + C)^{-1}}{3}$$
Last edited by a moderator: Jan 13, 2007
10. Jan 13, 2007
### cristo
Staff Emeritus
$$y^{-1} = \frac {-ln|x|}{3} - \frac{C}{3}=\frac{-(ln|x|+C)}{3}$$
$$y=\frac{-3}{(ln|x|+C)}$$
Can you follow what I've done?
JJ420; we are here to help, not criticise!
11. Jan 13, 2007
### JJ420
$$y=\frac{(ln x + C)^{-1}}{3}$$
thats what i got but im dumb so dont trust me
12. Jan 13, 2007
### JJ420
oops that -13 is supposed to be an exponent of the numerator (-1) and 3 in the denominator
13. Jan 13, 2007
### cristo
Staff Emeritus
$$\frac{(ln x + C)^{-1}}{3}=\frac{1}{3(ln x + C)}$$ The 3 should definitely be in the numerator, and we need a - sign too. See my post:
Last edited: Jan 13, 2007
14. Jan 13, 2007
### snowJT
no I can't figure it out, common denominator?
15. Jan 13, 2007
### snowJT
oh.. yes I see
why is C positive and not negative
Last edited: Jan 13, 2007
16. Jan 13, 2007
### cristo
Staff Emeritus
Yea you just write ln|x| and C as one fraction with denominator 3. Then take the reciprocal of both sides (since the LHS is 1/y) to obtain an expression in terms of y.
Well, yes. In order to take the reciprocal of both sides, there must only be one fraction on each side.
17. Jan 13, 2007
### JJ420
looking back to the original i don't see how there is a negative
18. Jan 13, 2007
### cristo
Staff Emeritus
The minus sign comes about when integrating: integral(y^(-2)) = -y^(-1) (+C)
19. Jan 13, 2007
### sara_87
snowJT what does C mean?
it means any constant... it can be negative, but since we don't know, we just write + C
20. Jan 13, 2007
### snowJT
figured so, there were some examples I just found in my book
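(An aside, not from the thread:) a good habit after this kind of constant-juggling is to substitute a candidate solution back into the ODE. The sketch below (plain Python; the constant C = 1 is an arbitrary choice) checks numerically that y = -1/(3 ln x + C) satisfies dy/dx = 3y²/x for x > 0:

```python
import math

C = 1.0                                     # arbitrary constant of integration
y = lambda x: -1.0 / (3 * math.log(x) + C)

def residual(x, h=1e-6):
    # |numerical dy/dx  -  3 y^2 / x| at the point x
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return abs(dydx - 3 * y(x) ** 2 / x)

for x in (1.0, 2.0, 5.0):
    assert residual(x) < 1e-6
print("y = -1/(3 ln x + C) satisfies dy/dx = 3y^2/x")
```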
https://docs.flexsim.com/en/21.1/Reference/PropertiesPanels/DashboardPanels/Text/Text.html | # The Text Panel
The Text panel contains options for displaying text on charts.
The following properties are on the Text panel:
### Precision
Specify the number of decimals to use on the chart. Note that this will only apply to text for floating point values.
### Font Size
Set the font size for general text on this chart in pixels.
### Title Font Size
Set the font size for the chart title in pixels.
### Axis Title Font Size
Set the font size for the axis titles in pixels.
### X Axis Title
Optional. Specify the text for the x-axis title.
### Y Axis Title
Optional. Specify the text for the y-axis title.
### Custom Title
If checked, the chart will show this property's value as the chart title. You can use this property to specify a title with special characters, or to specify that no title should be shown.
### Show Column Headers
This list allows you to specify which values, if that value is written as text on the chart, should include the column header for that value. For example, if you have a Type column, you might want the chart to show "Type: " before any type values are written. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8909170031547546, "perplexity": 2248.1757922511133}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662530553.34/warc/CC-MAIN-20220519235259-20220520025259-00791.warc.gz"} |
https://math.stackexchange.com/questions/1782294/what-about-dimension-of-w-1-cap-w-2 | # What about dimension of $W_1 \cap W_2$?
Let $$V$$ be a vector space of dimension 6 over the field $$Z/7Z$$. Let $$W_1$$ and $$W_2$$ be two subspaces of $$V$$ with $$\dim(W_1)=4$$ and $$\dim(W_2)=3$$. What can be said about the dimension of $$W_1 \cap W_2$$? I know the formula $$\dim(W_1 + W_2) = \dim(W_1) + \dim(W_2) - \dim(W_1 \cap W_2)$$. I tried this formula but could not find the proper answer. I am confused: is $$\dim(W_1 \cap W_2)$$ equal to one, greater than or equal to one, or something else?
• Of course, the dimension can be as big as 3, if $W_2\subset W_1$. It can't be bigger; it can be as small as 1, but no smaller; and it can be anything in between, that is, it can be 2. – Gerry Myerson May 12 '16 at 12:32
• @ Gerry Myerson what is the use of field Z/6Z here? – Arun Sharma May 12 '16 at 12:38
• Is $W_1 \bigcup W_2 = V$ true? – Arun Sharma May 12 '16 at 12:47
• Z / 6 Z isn't a field, nor does it appear in the question, Arun, and the union of two subspaces is never a vector space, unless one of the subspaces contains the other. – Gerry Myerson May 15 '16 at 12:46
Note that $$W_1 \subseteq W_1 + W_2 \subseteq V$$. So $$4 = \dim(W_1) \leq \dim(W_1 + W_2) \leq \dim(V) = 6$$. Substituting the information we know into your formula gives:
$$\dim(W_1 + W_2) = 7 - \dim(W_1 \cap W_2).$$
Using our inequality with this gives:
$$4 \leq 7 - \dim(W_1\cap W_2) \leq 6.$$
This inequality simplifies to:
$$1 \leq \dim(W_1 \cap W_2) \leq 3.$$
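These bounds are easy to confirm computationally. The sketch below (Python; the concrete subspaces and the mod-7 row-reduction helper are illustrative choices, not from the question) computes $$\dim(W_1\cap W_2) = \dim W_1 + \dim W_2 - \dim(W_1+W_2)$$ for two pairs of subspaces of $$(Z/7Z)^6$$, hitting the extremes 1 and 3:

```python
def rank_mod_p(rows, p=7):
    # row-reduce over Z/pZ and return the rank
    m = [row[:] for row in rows]
    rank = 0
    for col in range(len(m[0])):
        piv = next((r for r in range(rank, len(m)) if m[r][col] % p), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], p - 2, p)        # modular inverse via Fermat
        m[rank] = [x * inv % p for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col] % p:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

e = lambda i: [int(j == i) for j in range(6)]    # standard basis of (Z/7Z)^6
W1 = [e(0), e(1), e(2), e(3)]                    # dim W1 = 4
cases = [
    [e(3), e(4), e(5)],   # overlaps W1 only in span{e3}  -> expect dim 1
    [e(0), e(1), e(2)],   # contained in W1               -> expect dim 3
]
dims = [4 + 3 - rank_mod_p(W1 + W2) for W2 in cases]
print(dims)  # → [1, 3]
```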
• can we say $(W_1 \bigcup W_2)= V$ by the above information? – Arun Sharma May 12 '16 at 17:10
• $W_1 \cup W_2$ is not a subspace unless either $W_1 \subseteq W_2$ or $W_2 \subseteq W_1$. – Ken Duna May 12 '16 at 17:11
• So in fact, $W_1 \cup W_2$ cannot be $V$. – Ken Duna May 12 '16 at 17:12
http://www.mathematik.uni-kl.de/~zca/Reports_on_ca/14/paper_html/node4.html | Next: 5. Relating the Generalized Up: Relating Rewriting Techniques on Previous: 3. Relating the Word
# 4. Relating the Word and Ideal Membership Problems in Groups and Free Group Rings
In this section we want to point out how the Gröbner basis methods as introduced in [MaRe93,Re95] for general monoid rings, when applied to group rings, are related to the word problem. First we state that, similar to Theorem 1, the word problem for groups is equivalent to a restricted version of the membership problem for ideals in a free group ring. Let the group $\mathcal{G}$ be presented by a string rewriting system $(\Sigma, T)$ such that there exists an involution $\bar{\ } : \Sigma \to \Sigma$, i.e., for all $a \in \Sigma$ we have $\bar{a} \in \Sigma$, $\bar{\bar{a}} = a$, and the trivial rules $T_I = \{ a\bar{a} \to \lambda \mid a \in \Sigma \}$ belong to $T$. Every group has such a presentation. Notice that the set of rules $T_I$ is confluent with respect to any admissible ordering on $\Sigma^*$. By $\mathcal{F}$ we will denote the free group with presentation $(\Sigma, T_I)$. The elements of $\mathcal{F}$ will be represented by freely reduced words, i.e., we assume that the words do not contain any subwords of the form $a\bar{a}$.
Theorem 6 ([Re95,MaRe95]) Let $(\Sigma, T)$ be a finite string rewriting system presenting a group $\mathcal{G}$, and without loss of generality for all $(l, r) \in T$ we assume that $l$ and $r$ are freely reduced words. We associate with $T$ the set of polynomials $P_T = \{ l - r \mid (l, r) \in T \}$ in the free group ring $\mathbb{K}[\mathcal{F}]$.
Then for $u, v \in \mathcal{F}$ the following statements are equivalent:
(1)
$u \leftrightarrow_T^* v$, i.e., $u$ and $v$ represent the same element of $\mathcal{G}$.
(2)
$u - v \in \mathsf{ideal}(P_T)$.
https://brilliant.org/problems/more-soviet-limits/ | # More soviet limits
Calculus Level 2
If $$L=\displaystyle \lim_{x \rightarrow 1} \dfrac{\sqrt[3]{x}-1}{\sqrt{x}-1}$$, and $$L$$ can be represented in the form $$\dfrac{a}{b}$$, find $$a-b$$. Ensure that $$a$$ and $$b$$ are relatively prime.
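(A numeric sanity check, not a proof:) evaluating the ratio near $$x = 1$$ shows it tending to 2/3, consistent with L'Hôpital's rule, so $$a=2$$, $$b=3$$, and $$a-b=-1$$:

```python
import math

def f(x):
    # the ratio (x^(1/3) - 1) / (x^(1/2) - 1), defined for x != 1
    return (x ** (1 / 3) - 1) / (math.sqrt(x) - 1)

for h in (1e-2, 1e-4, 1e-6):
    print(f(1 + h), f(1 - h))  # both sides approach 2/3
```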
https://baas.aas.org/pub/2021n1i125p01/release/1 | # Updates on Galactic Gamma-ray Source Population Studies
Presentation #125.01 in the session “GW and MMA. Strong/Weak Gravitational Lensing, Relativistic Astrophysics”.
Published on Jan 11, 2021
The Fermi Large Area Telescope (LAT) has detected hundreds of Galactic gamma-ray sources, most of them pulsars. But the Galaxy contains tens of thousands of such sources which are still undetected due to their low flux, or because of conflation of the foreground with sources. Characterizing the general properties of detected sources would allow us to estimate the contribution to the diffuse Galactic emission from these undetected sources, and in turn it would help detection of new sources and even searches for dark matter. We present updates on our long-term effort to characterize the general properties of Galactic gamma-ray sources with source population studies and to estimate the number of sources below the Fermi LAT flux sensitivity threshold.
Here we show results after adjusting a model of the detected and undetected pulsars for best fit parameters, by comparing it with the Fermi 4FGL-DR2 catalog. We identify preliminary best fit luminosity function slope, minimum luminosity, and source density for a given source distribution. We then use the model with the best fit parameters to determine the number of pulsars as a function of flux, including those below the sensitivity threshold.
http://qualitymathproducts.com/dividing-with-fraction-bars/ | # Dividing with Fraction Bars
The Teacher’s Guides cover this in more depth; this is a good introduction that shows how simple the normally complex "concept" of fraction division can be!
Before showing how to use Fraction Bars to divide ¼ by 2/3, let’s look at dividing 5/6 by 1/3. It looks tricky, but it can be so easy!
Using the idea of fitting one amount into another, we can see that 1/3 “fits into” 5/6 twice, with 1/6 remaining. By comparing the remaining 1/6 to the divisor 1/3, we see 1/6 is half of the divisor 1/3. So 5/6 divided by 1/3 is 2 and 1/2.
This is similar to the reasoning when dividing one whole number by another. For example, 17 divided by 5 is 3 with a remainder of 2. So the quotient is 3 2/5. In this example, we compare the remainder, 2, to the divisor, 5, and obtain the ratio 2/5.
Now let’s look at ¼ divided by 2/3. Since 2/3 is greater than ¼, it “fits into” ¼ zero times with a remainder of ¼. So we compare the remainder ¼ to the divisor 2/3. To make this comparison, it is convenient to replace the first two bars by bars with parts of the same size. Now if we compare 3 shaded parts to 8 shaded parts, the ratio is 3/8.
¼ ÷ 2/3 = 3/12 ÷ 8/12 = 3/8
Starting with examples where one shaded amount fits into a second shaded amount a whole number of times, students will be able to see that division of fractions is comparing two amounts, just like division of whole numbers. In this way, division of fractions makes sense. An initial example like the one above for 5/6 divided by 1/3, where students can see that 1/3 fits into 5/6 two and one-half times, is a good start. Later, bring in the “invert and multiply” rule to show that it gives the same answers that students can see make sense from a few simple examples with Fraction Bars. Viewing division as comparing two amounts to see how many times greater one amount is than another works whether the numbers being used are whole numbers or fractions. And once we obtain bars with parts of the same size (i.e., common denominators), finding the quotient of two fractions is just a matter of finding the quotient of whole numbers of parts of the same size.
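As a quick check, the “invert and multiply” rule mentioned above gives the same answer as comparing parts of the same size:

```latex
\frac{1}{4} \div \frac{2}{3} \;=\; \frac{1}{4} \times \frac{3}{2} \;=\; \frac{3}{8}
```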
https://lectures.quantecon.org/jl/wald_friedman.html | Code should execute sequentially if run in a Jupyter notebook
# A Problem that Stumped Milton Friedman¶
(and that Abraham Wald solved by inventing sequential analysis)
Co-authors: Chase Coleman
## Overview¶
This lecture describes a statistical decision problem encountered by Milton Friedman and W. Allen Wallis during World War II when they were analysts at the U.S. Government’s Statistical Research Group at Columbia University
This problem led Abraham Wald [Wal47] to formulate sequential analysis, an approach to statistical decision problems intimately related to dynamic programming
In this lecture, we apply dynamic programming algorithms to Friedman and Wallis and Wald’s problem
Key ideas in play will be:
• Bayes’ Law
• Dynamic programming
• Type I and type II statistical errors
• a type I error occurs when you reject a null hypothesis that is true
• a type II error is when you accept a null hypothesis that is false
• Abraham Wald’s sequential probability ratio test
• The power of a statistical test
• The critical region of a statistical test
• A uniformly most powerful test
## Origin of the problem¶
On pages 137-139 of his 1998 book Two Lucky People with Rose Friedman [FF98], Milton Friedman described a problem presented to him and Allen Wallis during World War II, when they worked at the US Government’s Statistical Research Group at Columbia University
Let’s listen to Milton Friedman tell us what happened
“In order to understand the story, it is necessary to have an idea of a simple statistical problem, and of the standard procedure for dealing with it. The actual problem out of which sequential analysis grew will serve. The Navy has two alternative designs (say A and B) for a projectile. It wants to determine which is superior. To do so it undertakes a series of paired firings. On each round it assigns the value 1 or 0 to A accordingly as its performance is superior or inferior to that of B and conversely 0 or 1 to B. The Navy asks the statistician how to conduct the test and how to analyze the results.
“The standard statistical answer was to specify a number of firings (say 1,000) and a pair of percentages (e.g., 53% and 47%) and tell the client that if A receives a 1 in more than 53% of the firings, it can be regarded as superior; if it receives a 1 in fewer than 47%, B can be regarded as superior; if the percentage is between 47% and 53%, neither can be so regarded.
“When Allen Wallis was discussing such a problem with (Navy) Captain Garret L. Schyler, the captain objected that such a test, to quote from Allen’s account, may prove wasteful. If a wise and seasoned ordnance officer like Schyler were on the premises, he would see after the first few thousand or even few hundred [rounds] that the experiment need not be completed either because the new method is obviously inferior or because it is obviously superior beyond what was hoped for $$\ldots$$
Friedman and Wallis struggled with the problem but, after realizing that they were not able to solve it, described the problem to Abraham Wald
That started Wald on the path that led him to Sequential Analysis [Wal47]
We’ll formulate the problem using dynamic programming
## A dynamic programming approach¶
The following presentation of the problem closely follows Dimitri Bertsekas’s treatment in Dynamic Programming and Stochastic Control [Ber75]
A decision maker observes iid draws of a random variable $$z$$
He (or she) wants to know which of two probability distributions $$f_0$$ or $$f_1$$ governs $$z$$
After a number of draws, also to be determined, he makes a decision as to which of the distributions is generating the draws he observes
To help formalize the problem, let $$x \in \{x_0, x_1\}$$ be a hidden state that indexes the two distributions:
$\begin{split}\mathbb P\{z = v \mid x \} = \begin{cases} f_0(v) & \mbox{if } x = x_0, \\ f_1(v) & \mbox{if } x = x_1 \end{cases}\end{split}$
Before observing any outcomes, the decision maker believes that the probability that $$x = x_0$$ is
$p_{-1} = \mathbb P \{ x=x_0 \mid \textrm{ no observations} \} \in (0, 1)$
After observing $$k+1$$ observations $$z_k, z_{k-1}, \ldots, z_0$$, he updates this value to
$p_k = \mathbb P \{ x = x_0 \mid z_k, z_{k-1}, \ldots, z_0 \},$
which is calculated recursively by applying Bayes’ law:
$p_{k+1} = \frac{ p_k f_0(z_{k+1})}{ p_k f_0(z_{k+1}) + (1-p_k) f_1 (z_{k+1}) }, \quad k = -1, 0, 1, \ldots$
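To see one step of this recursion in numbers (a hypothetical draw, not one taken from the lecture’s distributions): suppose $$p_k = 0.5$$ and the realized $$z_{k+1}$$ satisfies $$f_0(z_{k+1}) = 0.2$$ and $$f_1(z_{k+1}) = 0.1$$; then

```latex
p_{k+1} = \frac{0.5 \times 0.2}{0.5 \times 0.2 + 0.5 \times 0.1} = \frac{0.10}{0.15} = \frac{2}{3}
```

The draw was twice as likely under $$f_0$$, so the belief in $$x = x_0$$ rises from $$1/2$$ to $$2/3$$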
After observing $$z_k, z_{k-1}, \ldots, z_0$$, the decision maker believes that $$z_{k+1}$$ has probability distribution
$f(v) = p_k f_0(v) + (1-p_k) f_1 (v)$
This is a mixture of distributions $$f_0$$ and $$f_1$$, with the weight on $$f_0$$ being the posterior probability that $$x = x_0$$ [1]
To help illustrate this kind of distribution, let’s inspect some mixtures of beta distributions
The density of a beta probability distribution with parameters $$a$$ and $$b$$ is
$f(z; a, b) = \frac{\Gamma(a+b) z^{a-1} (1-z)^{b-1}}{\Gamma(a) \Gamma(b)} \quad \text{where} \quad \Gamma(t) := \int_{0}^{\infty} x^{t-1} e^{-x} dx$
We’ll discretize this distribution to make it more straightforward to work with
The next figure shows two discretized beta distributions in the top panel
The bottom panel presents mixtures of these distributions, with various mixing probabilities $$p_k$$
The code to generate this figure can be found in beta_plots.jl
### Losses and costs¶
After observing $$z_k, z_{k-1}, \ldots, z_0$$, the decision maker chooses among three distinct actions:
• He decides that $$x = x_0$$ and draws no more $$z$$‘s
• He decides that $$x = x_1$$ and draws no more $$z$$‘s
• He postpones deciding now and instead chooses to draw a $$z_{k+1}$$
Associated with these three actions, the decision maker can suffer three kinds of losses:
• A loss $$L_0$$ if he decides $$x = x_0$$ when actually $$x=x_1$$
• A loss $$L_1$$ if he decides $$x = x_1$$ when actually $$x=x_0$$
• A cost $$c$$ if he postpones deciding and chooses instead to draw another $$z$$
### Digression on type I and type II errors¶
If we regard $$x=x_0$$ as a null hypothesis and $$x=x_1$$ as an alternative hypothesis, then $$L_1$$ and $$L_0$$ are losses associated with two types of statistical errors.
• a type I error is an incorrect rejection of a true null hypothesis (a “false positive”)
• a type II error is a failure to reject a false null hypothesis (a “false negative”)
So when we treat $$x=x_0$$ as the null hypothesis
• We can think of $$L_1$$ as the loss associated with a type I error
• We can think of $$L_0$$ as the loss associated with a type II error
### Intuition¶
Let’s try to guess what an optimal decision rule might look like before we go further
Suppose at some given point in time that $$p$$ is close to 1
Then our prior beliefs and the evidence so far point strongly to $$x = x_0$$
If, on the other hand, $$p$$ is close to 0, then $$x = x_1$$ is strongly favored
Finally, if $$p$$ is in the middle of the interval $$[0, 1]$$, then we have little information in either direction
This reasoning suggests a decision rule such as the one shown in the figure
As we’ll see, this is indeed the correct form of the decision rule
The key problem is to determine the threshold values $$\alpha, \beta$$, which will depend on the parameters listed above
You might like to pause at this point and try to predict the impact of a parameter such as $$c$$ or $$L_0$$ on $$\alpha$$ or $$\beta$$
### A Bellman equation¶
Let $$J(p)$$ be the total loss for a decision maker with current belief $$p$$ who chooses optimally
With some thought, you will agree that $$J$$ should satisfy the Bellman equation
(1)$J(p) = \min \left\{ (1-p) L_0, \; p L_1, \; c + \mathbb E [ J (p') ] \right\}$
where $$p'$$ is the random variable defined by
$p' = \frac{ p f_0(z)}{ p f_0(z) + (1-p) f_1 (z) }$
when $$p$$ is fixed and $$z$$ is drawn from the current best guess, which is the distribution $$f$$ defined by
$f(v) = p f_0(v) + (1-p) f_1 (v)$
In the Bellman equation, minimization is over three actions:
1. accept $$x_0$$
2. accept $$x_1$$
3. postpone deciding and draw again
Let
$A(p) := \mathbb E [ J (p') ]$
Then we can represent the Bellman equation as
$J(p) = \min \left\{ (1-p) L_0, \; p L_1, \; c + A(p) \right\}$
where $$p \in [0,1]$$
Here
• $$(1-p) L_0$$ is the expected loss associated with accepting $$x_0$$ (i.e., the cost of making a type II error)
• $$p L_1$$ is the expected loss associated with accepting $$x_1$$ (i.e., the cost of making a type I error)
• $$c + A(p)$$ is the expected cost associated with drawing one more $$z$$
The optimal decision rule is characterized by two numbers $$\alpha, \beta \in (0,1) \times (0,1)$$ that satisfy
$(1- p) L_0 < \min \{ p L_1, c + A(p) \} \textrm { if } p \geq \alpha$
and
$p L_1 < \min \{ (1-p) L_0, c + A(p) \} \textrm { if } p \leq \beta$
The optimal decision rule is then
$\begin{split}\textrm { accept } x=x_0 \textrm{ if } p \geq \alpha \\ \textrm { accept } x=x_1 \textrm{ if } p \leq \beta \\ \textrm { draw another } z \textrm{ if } \beta \leq p \leq \alpha\end{split}$
Our aim is to compute the value function $$J$$, and from it the associated cutoffs $$\alpha$$ and $$\beta$$
One sensible approach is to write the three components of $$J$$ that appear on the right side of the Bellman equation as separate functions
Later, doing this will help us obey the don’t repeat yourself (DRY) golden rule of coding
## Implementation¶
Let’s code this problem up and solve it
To approximate the value function that solves Bellman equation (1), we use value function iteration
As in the optimal growth lecture, to approximate a continuous value function
• We iterate at a finite grid of possible values of $$p$$
• When we evaluate $$A(p)$$ between grid points, we use linear interpolation
This means that to evaluate $$J(p)$$ where $$p$$ is not a grid point, we must use two points:
• First, we use the largest of all the grid points smaller than $$p$$, and call it $$p_i$$
• Second, we use the grid point immediately after $$p$$, named $$p_{i+1}$$, to approximate the function value as
$J(p) = J(p_i) + (p - p_i) \frac{J(p_{i+1}) - J(p_i)}{p_{i+1} - p_{i}}$
In one dimension, you can think of this as simply drawing a line between each pair of points on the grid
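A stripped-down version of this interpolation step — an illustrative stand-in for QuantEcon’s `LinInterp`, assuming a sorted grid, not a drop-in replacement — looks like:

```julia
# Minimal linear interpolation on a sorted grid.
function lin_interp(grid, vals, p)
    p <= grid[1]   && return vals[1]      # clamp below the grid
    p >= grid[end] && return vals[end]    # clamp above the grid
    i = searchsortedlast(grid, p)         # largest grid point <= p
    slope = (vals[i + 1] - vals[i]) / (grid[i + 1] - grid[i])
    return vals[i] + (p - grid[i]) * slope
end

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
vals = grid.^2                            # stand-in values for J at the grid points
lin_interp(grid, vals, 0.3)               # ≈ 0.1
```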
Here’s the code
using Distributions
using QuantEcon.compute_fixed_point, QuantEcon.DiscreteRV, QuantEcon.draw, QuantEcon.LinInterp
using Plots
pyplot()
using LaTeXStrings
"""
For a given probability return expected loss of choosing model 0
"""
expect_loss_choose_0(p::Real, L0::Real) = (1-p)*L0
"""
For a given probability return expected loss of choosing model 1
"""
expect_loss_choose_1(p::Real, L1::Real) = p*L1
"""
We will need to be able to evaluate the expectation of our Bellman
equation J. In order to do this, we need the current probability
that model 0 is correct (p), the distributions (f0, f1), and a
function that can evaluate the Bellman equation
"""
function EJ(p::Real, f0::AbstractVector, f1::AbstractVector, J::LinInterp)
# Get the current distribution we believe (p*f0 + (1-p)*f1)
curr_dist = p*f0 + (1-p)*f1
# Get tomorrow's expected distribution through Bayes law
tp1_dist = clamp.((p*f0) ./ (p*f0 + (1-p)*f1), 0, 1)
# Evaluate the expectation
EJ = dot(curr_dist, J.(tp1_dist))
return EJ
end
expect_loss_cont(p::Real, c::Real,
f0::AbstractVector, f1::AbstractVector, J::LinInterp) =
c + EJ(p, f0, f1, J)
"""
Evaluates the value function for a given continuation value
function; that is, evaluates
J(p) = min(pL0, (1-p)L1, c + E[J(p')])
Uses linear interpolation between points
"""
function bellman_operator(pgrid::AbstractVector, c::Real,
f0::AbstractVector, f1::AbstractVector,
L0::Real, L1::Real, J::AbstractVector)
m = length(pgrid)
@assert m == length(J)
J_out = zeros(m)
J_interp = LinInterp(pgrid, J)
for (p_ind, p) in enumerate(pgrid)
# Payoff of choosing model 0
p_c_0 = expect_loss_choose_0(p, L0)
p_c_1 = expect_loss_choose_1(p, L1)
p_con = expect_loss_cont(p, c, f0, f1, J_interp)
J_out[p_ind] = min(p_c_0, p_c_1, p_con)
end
return J_out
end
# Create two distributions over 50 values for k
# We are using a discretized beta distribution
p_m1 = linspace(0, 1, 50)
f0 = clamp.(pdf(Beta(1, 1), p_m1), 1e-8, Inf)
f0 = f0 / sum(f0)
f1 = clamp.(pdf(Beta(9, 9), p_m1), 1e-8, Inf)
f1 = f1 / sum(f1)
# To solve
pg = linspace(0, 1, 251)
J1 = compute_fixed_point(x -> bellman_operator(pg, 0.5, f0, f1, 5.0, 5.0, x),
zeros(length(pg)), err_tol=1e-6, print_skip=5);
Running it produces the following output on our machine
Compute iterate 5 with error 0.08552607733051265
Compute iterate 10 with error 0.00038782894418165625
Compute iterate 15 with error 1.6097835344730527e-6
Converged in 16 steps
The reported error is the maximal distance between successive iterates
This converges to zero quickly, indicating a successful iterative procedure
Iteration terminates when the distance falls below some threshold
### A more sophisticated implementation¶
Now for some gentle criticisms of the preceding code
By writing the code in terms of functions, we have to pass around some things that are constant throughout the problem
• $$c$$, $$f_0$$, $$f_1$$, $$L_0$$, and $$L_1$$
So now let’s turn our simple script into a type
This will allow us to simplify the function calls and make the code more reusable
We shall construct two types that
• store all of our parameters for us internally
• represent the solution to our Bellman equation alongside the $$\alpha$$ and $$\beta$$ decision cutoffs
• accompany many of the same functions used above which now act on the type directly
• allow us, in addition, to simulate draws and the decision process under different prior beliefs
#=
Author: Shunsuke Hori
=#
"""
This type is used to store the solution to the problem presented
in the "Wald Friedman" notebook presented on the QuantEcon website.
Solution
----------
J : AbstractVector
Discretized value function that solves the Bellman equation
lb : Real
Lower cutoff for continuation decision
ub : Real
Upper cutoff for continuation decision
"""
mutable struct WFSolution{TAV <: AbstractVector, TR<:Real}
J::TAV
lb::TR
ub::TR
end
"""
This type is used to solve the problem presented in the "Wald Friedman"
notebook presented on the QuantEcon website.
Parameters
----------
c : Real
Cost of postponing decision
L0 : Real
Cost of choosing model 0 when the truth is model 1
L1 : Real
Cost of choosing model 1 when the truth is model 0
f0 : AbstractVector
A finite state probability distribution
f1 : AbstractVector
A finite state probability distribution
m : Integer
Number of points to use in function approximation
"""
struct WaldFriedman{TR <: Real, TI <: Integer,
TAV1 <: AbstractVector, TAV2 <: AbstractVector}
c::TR
L0::TR
L1::TR
f0::TAV1
f1::TAV1
m::TI
pgrid::TAV2
sol::WFSolution
end
function WaldFriedman(c::Real, L0::Real, L1::Real,
f0::AbstractVector, f1::AbstractVector; m::Integer=25)
pgrid = linspace(0.0, 1.0, m)
# Renormalize distributions so nothing is "too" small
f0 = clamp.(f0, 1e-8, 1-1e-8)
f1 = clamp.(f1, 1e-8, 1-1e-8)
f0 = f0 / sum(f0)
f1 = f1 / sum(f1)
J = zeros(m)
lb = 0.
ub = 0.
WaldFriedman(c, L0, L1, f0, f1, m, pgrid, WFSolution(J, lb, ub))
end
"""
This function takes a value for the probability with which
the correct model is model 0 and returns the mixed
distribution that corresponds with that belief.
"""
current_distribution(wf::WaldFriedman, p::Real) = p*wf.f0 + (1-p)*wf.f1
"""
This function takes a value for p, and a realization of the
random variable and calculates the value for p tomorrow.
"""
function bayes_update_k(wf::WaldFriedman, p::Real, k::Integer)
f0_k = wf.f0[k]
f1_k = wf.f1[k]
p_tp1 = p*f0_k / (p*f0_k + (1-p)*f1_k)
return clamp(p_tp1, 0, 1)
end
"""
This is similar to bayes_update_k except it returns a
new value for p for each realization of the random variable
"""
bayes_update_all(wf::WaldFriedman, p::Real) =
clamp.(p*wf.f0 ./ (p*wf.f0 + (1-p)*wf.f1), 0, 1)
"""
For a given probability specify the cost of accepting model 0
"""
payoff_choose_f0(wf::WaldFriedman, p::Real) = (1-p)*wf.L0
"""
For a given probability specify the cost of accepting model 1
"""
payoff_choose_f1(wf::WaldFriedman, p::Real) = p*wf.L1
"""
This function evaluates the expectation of the value function
at period t+1. It does so by taking the current probability
distribution over outcomes:
p(z_{k+1}) = p_k f_0(z_{k+1}) + (1-p_k) f_1(z_{k+1})
and evaluating the value function at the possible states
tomorrow J(p_{t+1}) where
p_{t+1} = p f0 / ( p f0 + (1-p) f1)
Parameters
----------
p : Real
The current believed probability that model 0 is the true
model.
J : LinInterp
The current value function for a decision to continue
Returns
-------
EJ : scalar
The expected value of the value function tomorrow
"""
function EJ(wf::WaldFriedman, p::Real, J::LinInterp)
# Pull out information
f0, f1 = wf.f0, wf.f1
# Get the current believed distribution and tomorrows possible dists
# Need to clip to make sure things don't blow up (go to infinity)
curr_dist = current_distribution(wf, p)
tp1_dist = bayes_update_all(wf, p)
# Evaluate the expectation
EJ = dot(curr_dist, J.(tp1_dist))
return EJ
end
"""
For a given probability distribution and value function give
cost of continuing the search for correct model
"""
payoff_continue(wf::WaldFriedman, p::Real, J::LinInterp) = wf.c + EJ(wf, p, J)
"""
Evaluates the value function for a given continuation value
function; that is, evaluates
J(p) = min( (1-p)L0, pL1, c + E[J(p')])
Uses linear interpolation between points
"""
function bellman_operator(wf::WaldFriedman, J::AbstractVector)
c, L0, L1, f0, f1 = wf.c, wf.L0, wf.L1, wf.f0, wf.f1
m, pgrid = wf.m, wf.pgrid
J_out = similar(J)
J_interp = LinInterp(pgrid, J)
for (p_ind, p) in enumerate(pgrid)
# Payoff of choosing model 0
p_c_0 = payoff_choose_f0(wf, p)
p_c_1 = payoff_choose_f1(wf, p)
p_con = payoff_continue(wf, p, J_interp)
J_out[p_ind] = min(p_c_0, p_c_1, p_con)
end
return J_out
end
"""
This function takes a value function and returns the corresponding
cutoffs of where you transition between continue and choosing a
specific model
"""
function find_cutoff_rule(wf::WaldFriedman, J::AbstractVector)
m, pgrid = wf.m, wf.pgrid
# Evaluate cost at all points on grid for choosing a model
p_c_0 = payoff_choose_f0.(wf, pgrid)
p_c_1 = payoff_choose_f1.(wf, pgrid)
# The cutoff points can be found by differencing these costs with
# the Bellman equation (J is always less than or equal to p_c_i)
lb = pgrid[searchsortedlast(p_c_1 - J, 1e-10)]
ub = pgrid[searchsortedlast(J - p_c_0, -1e-10)]
return lb, ub
end
function solve_model!(wf::WaldFriedman; tol::AbstractFloat=1e-7)
bell_op(x) = bellman_operator(wf, x)
J = compute_fixed_point(bell_op, zeros(wf.m), err_tol=tol, print_skip=5)
wf.sol.J = J
wf.sol.lb, wf.sol.ub = find_cutoff_rule(wf, J)
return J
end
"""
This function takes an initial condition and simulates until it
stops (when a decision is made).
"""
function simulate(wf::WaldFriedman, f::AbstractVector; p0::Real=0.5)
# Check whether vf is computed
if sum(abs, wf.sol.J) < 1e-8
solve_model!(wf)
end
# Unpack useful info
lb, ub = wf.sol.lb, wf.sol.ub
drv = DiscreteRV(f)
# Initialize a couple useful variables
decision = 0
p = p0
t = 0
while true
# Maybe should specify which distribution is correct one so that
# the draws come from the "right" distribution
k = rand(drv)
t = t+1
p = bayes_update_k(wf, p, k)
if p < lb
decision = 1
break
elseif p > ub
decision = 0
break
end
end
return decision, p, t
end
abstract type HiddenDistribution end
struct F0 <: HiddenDistribution end
struct F1 <: HiddenDistribution end
"""
Uses the distribution f0 as the true data generating
process
"""
function simulate_tdgp(wf::WaldFriedman, f::F0; p0::Real=0.5)
decision, p, t = simulate(wf, wf.f0; p0=p0)
correct = (decision == 0)
return correct, p, t
end
"""
Uses the distribution f1 as the true data generating
process
"""
function simulate_tdgp(wf::WaldFriedman, f::F1; p0::Real=0.5)
decision, p, t = simulate(wf, wf.f1; p0=p0)
correct = (decision == 1)
return correct, p, t
end
"""
Simulates repeatedly to get distributions of time needed to make a
decision and how often they are correct.
"""
function stopping_dist(wf::WaldFriedman;
ndraws::Integer=250, f::HiddenDistribution=F0())
# Allocate space
tdist = Vector{Int64}(ndraws)
cdist = Vector{Bool}(ndraws)
for i in 1:ndraws
correct, p, t = simulate_tdgp(wf, f)
tdist[i] = t
cdist[i] = correct
end
return cdist, tdist
end
Now let’s use our type to solve Bellman equation (1) and verify that it gives similar output
# Create two distributions over 50 values for k
# We are using a discretized beta distribution
p_m1 = linspace(0, 1, 50)
f0 = clamp.(pdf(Beta(1, 1), p_m1), 1e-8, Inf)
f0 = f0 / sum(f0)
f1 = clamp.(pdf(Beta(9, 9), p_m1), 1e-8, Inf)
f1 = f1 / sum(f1);
wf = WaldFriedman(0.5, 5.0, 5.0, f0, f1; m=251)
J2 = compute_fixed_point(x -> bellman_operator(wf, x), zeros(wf.m), err_tol=1e-6, print_skip=5)
@printf("If this is true then both approaches gave same answer:\n")
print(isapprox(J1, J2; atol=1e-5))
We get the same output in terms of distance
Compute iterate 5 with error 0.0855260926408965
Compute iterate 10 with error 0.00038782882545862485
Compute iterate 15 with error 1.609783120581909e-6
Converged in 16 steps
If this is true then both approaches gave same answer:
true
The approximate value functions produced are also the same
Rather than discuss this further, let’s go ahead and use our code to generate some results
## Analysis¶
Now that our routines are working, let’s inspect the solutions
# Choose parameters
c = 1.25
L0 = 27.0
L1 = 27.0
Here’s a plot of some objects we’ll discuss one by one
The code to generate this figure can be found in wald_solution_plots.jl
### Value Function¶
In the top left subfigure we have the two beta distributions, $$f_0$$ and $$f_1$$
In the top right we have corresponding value function $$J$$
It equals $$p L_1$$ for $$p \leq \beta$$, and $$(1-p) L_0$$ for $$p \geq \alpha$$
The slopes of the two linear pieces of the value function are determined by $$L_1$$ and $$- L_0$$
The value function is smooth in the interior region, where the posterior probability assigned to $$f_0$$ is in the indecisive region $$p \in (\beta, \alpha)$$
The decision maker continues to sample until the probability that he attaches to model $$f_0$$ falls below $$\beta$$ or above $$\alpha$$
### Simulations¶
The bottom two subfigures show the outcomes of 500 simulations of the decision process
On the left is a histogram of the stopping times, which equal the number of draws of $$z_k$$ required to make a decision
The average number of draws is around 6.6
On the right is the fraction of correct decisions at the stopping time
In this case the decision maker is correct 80% of the time
### Comparative statics¶
Now let’s consider the following exercise
We double the cost of drawing an additional observation
Before you look, think about what will happen:
• Will the decision maker be correct more or less often?
• Will he make decisions sooner or later?
Here’s the figure
Notice what happens
The stopping times dropped dramatically!
The increased cost per draw has induced the decision maker to take only 1 or 2 draws before deciding in most cases

Because he decides on the basis of less information, the percentage of time he is correct drops
This leads to him having a higher expected loss when he puts equal weight on both models
### A notebook implementation¶
To facilitate comparative statics, we provide a Jupyter notebook that generates the same plots, but with sliders
With these sliders you can adjust parameters and immediately observe
• effects on the smoothness of the value function in the indecisive middle range as we increase the number of grid points in the piecewise linear approximation.
• effects of different settings for the cost parameters $$L_0, L_1, c$$, the parameters of two beta distributions $$f_0$$ and $$f_1$$, and the number of points and linear functions $$m$$ to use in the piece-wise continuous approximation to the value function.
• various simulations from $$f_0$$ and associated distributions of waiting times to making a decision
• associated histograms of correct and incorrect decisions
## Comparison with Neyman-Pearson formulation¶
For several reasons, it is useful to describe the theory underlying the test that Navy Captain G. S. Schuyler had been told to use and that led him to approach Milton Friedman and Allen Wallis to convey his conjecture that superior practical procedures existed
Evidently, the Navy had told Captain Schuyler to use what it knew to be a state-of-the-art Neyman-Pearson test
We’ll rely on Abraham Wald’s [Wal47] elegant summary of Neyman-Pearson theory
For our purposes, watch for these features of the setup:
• the assumption of a fixed sample size $$n$$
• the application of laws of large numbers, conditioned on alternative probability models, to interpret the probabilities $$\alpha$$ and $$\beta$$ defined in the Neyman-Pearson theory
Recall that in the sequential analytic formulation above
• The sample size $$n$$ is not fixed but rather an object to be chosen; technically $$n$$ is a random variable
• The parameters $$\beta$$ and $$\alpha$$ characterize cut-off rules used to determine $$n$$ as a random variable
• Laws of large numbers make no appearances in the sequential construction
In chapter 1 of Sequential Analysis [Wal47] Abraham Wald summarizes the Neyman-Pearson approach to hypothesis testing
Wald frames the problem as making a decision about a probability distribution that is partially known
(You have to assume that something is already known in order to state a well posed problem. Usually, something means a lot.)
By limiting what is unknown, Wald uses the following simple structure to illustrate the main ideas.
• a decision maker wants to decide which of two distributions $$f_0$$, $$f_1$$ governs an i.i.d. random variable $$z$$
• The null hypothesis $$H_0$$ is the statement that $$f_0$$ governs the data.
• The alternative hypothesis $$H_1$$ is the statement that $$f_1$$ governs the data.
• The problem is to devise and analyze a test of hypothesis $$H_0$$ against the alternative hypothesis $$H_1$$ on the basis of a sample of a fixed number $$n$$ of independent observations $$z_1, z_2, \ldots, z_n$$ of the random variable $$z$$.
To quote Abraham Wald,
• A test procedure leading to the acceptance or rejection of the hypothesis in question is simply a rule specifying, for each possible sample of size $$n$$, whether the hypothesis should be accepted or rejected on the basis of the sample. This may also be expressed as follows: A test procedure is simply a subdivision of the totality of all possible samples of size $$n$$ into two mutually exclusive parts, say part 1 and part 2, together with the application of the rule that the hypothesis be accepted if the observed sample is contained in part 2. Part 1 is also called the critical region. Since part 2 is the totality of all samples of size $$n$$ which are not included in part 1, part 2 is uniquely determined by part 1. Thus, choosing a test procedure is equivalent to determining a critical region.
Let’s listen to Wald longer:
• As a basis for choosing among critical regions the following considerations have been advanced by Neyman and Pearson: In accepting or rejecting $$H_0$$ we may commit errors of two kinds. We commit an error of the first kind if we reject $$H_0$$ when it is true; we commit an error of the second kind if we accept $$H_0$$ when $$H_1$$ is true. After a particular critical region $$W$$ has been chosen, the probability of committing an error of the first kind, as well as the probability of committing an error of the second kind is uniquely determined. The probability of committing an error of the first kind is equal to the probability, determined by the assumption that $$H_0$$ is true, that the observed sample will be included in the critical region $$W$$. The probability of committing an error of the second kind is equal to the probability, determined on the assumption that $$H_1$$ is true, that the observed sample will fall outside the critical region $$W$$. For any given critical region $$W$$ we shall denote the probability of an error of the first kind by $$\alpha$$ and the probability of an error of the second kind by $$\beta$$.
Let’s listen carefully to how Wald applies a law of large numbers to interpret $$\alpha$$ and $$\beta$$:
• The probabilities $$\alpha$$ and $$\beta$$ have the following important practical interpretation: Suppose that we draw a large number of samples of size $$n$$. Let $$M$$ be the number of such samples drawn. Suppose that for each of these $$M$$ samples we reject $$H_0$$ if the sample is included in $$W$$ and accept $$H_0$$ if the sample lies outside $$W$$. In this way we make $$M$$ statements of rejection or acceptance. Some of these statements will in general be wrong. If $$H_0$$ is true and if $$M$$ is large, the probability is nearly $$1$$ (i.e., it is practically certain) that the proportion of wrong statements (i.e., the number of wrong statements divided by $$M$$) will be approximately $$\alpha$$. If $$H_1$$ is true, the probability is nearly $$1$$ that the proportion of wrong statements will be approximately $$\beta$$. Thus, we can say that in the long run [ here Wald applies a law of large numbers by driving $$M \rightarrow \infty$$ (our comment, not Wald’s) ] the proportion of wrong statements will be $$\alpha$$ if $$H_0$$ is true and $$\beta$$ if $$H_1$$ is true.
The quantity $$\alpha$$ is called the size of the critical region, and the quantity $$1-\beta$$ is called the power of the critical region.
Wald notes that
• one critical region $$W$$ is more desirable than another if it has smaller values of $$\alpha$$ and $$\beta$$. Although either $$\alpha$$ or $$\beta$$ can be made arbitrarily small by a proper choice of the critical region $$W$$, it is impossible to make both $$\alpha$$ and $$\beta$$ arbitrarily small for a fixed value of $$n$$, i.e., a fixed sample size.
Wald summarizes Neyman and Pearson’s setup as follows:
• Neyman and Pearson show that a region consisting of all samples $$(z_1, z_2, \ldots, z_n)$$ which satisfy the inequality
$\frac{f_1(z_1) \cdots f_1(z_n)}{f_0(z_1) \cdots f_0(z_n)} \geq k$
is a most powerful critical region for testing the hypothesis $$H_0$$ against the alternative hypothesis $$H_1$$. The term $$k$$ on the right side is a constant chosen so that the region will have the required size $$\alpha$$.
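As a concrete sketch of this construction (the densities below are assumptions for illustration, not taken from Wald: $$f_0$$ a standard normal, $$f_1$$ a unit-variance normal with mean 1), the error probabilities $$\alpha$$ and $$\beta$$ of a likelihood-ratio critical region can be estimated by repeated sampling, exactly in the spirit of the long-run interpretation quoted above:

```python
import random

random.seed(0)

def log_likelihood_ratio(sample, mu0=0.0, mu1=1.0):
    # log of f1(z1)...f1(zn) / f0(z1)...f0(zn) for unit-variance normals;
    # the normalizing constants cancel in the ratio
    return sum(-0.5 * (z - mu1) ** 2 + 0.5 * (z - mu0) ** 2 for z in sample)

def reject(sample, log_k):
    # critical region W: samples whose likelihood ratio is at least k
    return log_likelihood_ratio(sample) >= log_k

n, M, log_k = 10, 20_000, 2.0   # log_k fixes the size alpha (assumed value)

# alpha: proportion of (wrong) rejections when H0 (mu = 0) is true
alpha_hat = sum(reject([random.gauss(0.0, 1.0) for _ in range(n)], log_k)
                for _ in range(M)) / M
# beta: proportion of (wrong) acceptances when H1 (mu = 1) is true
beta_hat = sum(not reject([random.gauss(1.0, 1.0) for _ in range(n)], log_k)
               for _ in range(M)) / M
print(alpha_hat, beta_hat)
```

Raising `log_k` shrinks $$\alpha$$ while inflating $$\beta$$, which is exactly the trade-off Wald notes above.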
Wald goes on to discuss Neyman and Pearson’s concept of uniformly most powerful test.
Here is how Wald introduces the notion of a sequential test:
• A rule is given for making one of the following three decisions at any stage of the experiment (at the $$m$$th trial for each integral value of $$m$$): (1) to accept the hypothesis $$H$$, (2) to reject the hypothesis $$H$$, (3) to continue the experiment by making an additional observation. Thus, such a test procedure is carried out sequentially. On the basis of the first observation one of the aforementioned decisions is made. If the first or second decision is made, the process is terminated. If the third decision is made, a second trial is performed. Again, on the basis of the first two observations one of the three decisions is made. If the third decision is made, a third trial is performed, and so on. The process is continued until either the first or the second decision is made. The number $$n$$ of observations required by such a test procedure is a random variable, since the value of $$n$$ depends on the outcome of the observations.
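Wald's three-decision rule can be sketched as his sequential probability ratio test. The boundary approximations $$A \approx \beta/(1-\alpha)$$ and $$B \approx (1-\beta)/\alpha$$ are Wald's standard choices; the unit-variance normal densities are an assumption for illustration:

```python
import math
import random

random.seed(1)

def sprt(draw, mu0=0.0, mu1=1.0, alpha=0.05, beta=0.05, max_n=10_000):
    # Wald's sequential probability ratio test: keep sampling while the
    # cumulative log likelihood ratio stays between the two boundaries
    log_A = math.log(beta / (1.0 - alpha))    # accept-H0 boundary
    log_B = math.log((1.0 - beta) / alpha)    # reject-H0 boundary
    log_lr, n = 0.0, 0
    while n < max_n:
        z = draw()
        n += 1
        log_lr += -0.5 * (z - mu1) ** 2 + 0.5 * (z - mu0) ** 2
        if log_lr <= log_A:
            return "accept H0", n
        if log_lr >= log_B:
            return "reject H0", n
    return "continue", n

decision, n_used = sprt(lambda: random.gauss(0.0, 1.0))  # data truly from H0
print(decision, n_used)
```

Note that `n_used` is random, as Wald emphasizes: the sample size is determined by the data themselves.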
Footnotes
[1] Because the decision maker believes that $$z_{k+1}$$ is drawn from a mixture of two i.i.d. distributions, he does not believe that the sequence $$[z_{k+1}, z_{k+2}, \ldots]$$ is i.i.d. Instead, he believes that it is exchangeable. See [Kre88] chapter 11, for a discussion of exchangeability.
https://brilliant.org/problems/a-problem-by-akash-vm-3/ | # An electricity and magnetism problem by Akash VM
An electron is placed at one of the corners of a cube of side 2 cm. Calculate the flux through all the faces if the electric permittivity is E.
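A sketch of the standard Gauss's-law symmetry argument (the numerical permittivity below is the vacuum value, an assumption standing in for the problem's "E"; note the 2 cm side length drops out entirely):

```python
# A point charge at a cube corner is shared symmetrically by 8 identical
# cubes, so this cube intercepts 1/8 of the total flux q/eps. The field is
# parallel to the 3 faces meeting at the charge, so they carry zero flux,
# and each of the 3 remaining faces carries an equal third of the rest.
e_charge = 1.602e-19       # magnitude of the electron charge, C
eps = 8.854e-12            # vacuum permittivity, F/m (assumed value for "E")

total_flux_cube = e_charge / (8 * eps)   # net flux through the whole cube
flux_touching_face = 0.0                 # each face containing the charge
flux_far_face = total_flux_cube / 3      # each of the 3 opposite faces
print(total_flux_cube, flux_far_face)
```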
https://control.com/textbook/closed-loop-control/p-i-and-d-responses-graphed/ | # P, I, and D Responses Graphed
## Chapter 32 - Closed-loop Control Systems
A very helpful method for understanding the operation of proportional, integral, and derivative control terms is to analyze their respective responses to the same input conditions over time. This section is divided into subsections showing P, I, and D responses for several different input conditions, in the form of graphs. In each graph, the controller is assumed to be direct-acting (i.e. an increase in process variable results in an increase in output).
It should be noted that these graphic illustrations are all qualitative, not quantitative. There is too little information given in each case to plot exact responses. The illustrations of P, I, and D actions focus only on the shapes of the responses, not their exact numerical values.
In order to quantitatively predict PID controller responses, one would have to know the values of all PID settings, as well as the original starting value of the output before an input change occurred and a time index of when the change(s) occurred.
### Responses to a single step-change
Proportional action directly mimics the shape of the input change (a step). Integral action ramps at a rate proportional to the magnitude of the input step. Since the input step holds a constant value, the integral action ramps at a constant rate (a constant slope). Derivative action interprets the step as an infinite rate of change, and so generates a “spike” driving the output to saturation.
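The step responses described above can be reproduced with a minimal discrete-time simulation; the ideal (non-interacting) PID form and the tuning values below are assumptions for illustration, not values taken from the text:

```python
def pid_responses(pv, dt=1.0, Kp=1.0, Ti=10.0, Td=2.0, sp=0.0):
    # separate P, I, and D contributions of a direct-acting ideal PID
    p_out, i_out, d_out = [], [], []
    integral = 0.0
    prev_err = pv[0] - sp
    for x in pv:
        err = x - sp                          # direct-acting: PV up -> out up
        integral += err * dt
        p_out.append(Kp * err)                # mimics the input's shape
        i_out.append(Kp * integral / Ti)      # ramps while error is nonzero
        d_out.append(Kp * Td * (err - prev_err) / dt)  # spikes on steps
        prev_err = err
    return p_out, i_out, d_out

pv = [0.0] * 5 + [10.0] * 15      # a 10% step in PV at sample 5
p, i, d = pid_responses(pv)
print(p[-1], i[-1], d[5], d[6])   # step mimic, steady ramp, spike, return
```

With a true (continuous) step the derivative spike would be unbounded, saturating the output as the text describes; in discrete time it is finite, at $$K_p T_d \,\Delta\mathrm{PV}/\Delta t$$.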
When combined into one PID output, the three actions produce this response:
### Responses to a momentary step-and-return
Proportional action directly mimics the shape of the input change (an up-and-down step). Integral action ramps at a rate proportional to the magnitude of the input step, for as long as the PV is unequal to the SP. Once PV = SP again, integral action stops ramping and simply holds the last value. Derivative action interprets both steps as infinite rates of change, and so generates “spikes” at the leading and at the trailing edges of the step. Note how the leading (rising) edge causes derivative action to saturate high, while the trailing (falling) edge causes it to saturate low.
When combined into one PID output, the three actions produce this response:
### Responses to two momentary steps-and-returns
Proportional action directly mimics the shape of all input changes. Integral action ramps at a rate proportional to the magnitude of the input step, for as long as the PV is unequal to the SP. Once PV = SP again, integral action stops ramping and simply holds the last value. Derivative action interprets each step as an infinite rate of change, and so generates a “spike” at the leading and at the trailing edges of each step. Note how a leading (rising) edge causes derivative action to saturate high, while a trailing (falling) edge causes it to saturate low.
When combined into one PID output, the three actions produce this response:
### Responses to a ramp-and-hold
Proportional action directly mimics the ramp-and-hold shape of the input. Integral action ramps slowly at first (when the error is small) but increases ramping rate as error increases. When error stabilizes, integral rate likewise stabilizes. Derivative action offsets the output according to the input’s ramping rate.
When combined into one PID output, the three actions produce this response:
### Responses to an up-and-down ramp
Proportional action directly mimics the up-and-down ramp shape of the input. Integral action ramps slowly at first (when the error is small) but increases ramping rate as error increases, then ramps slower as error decreases back to zero. Once PV = SP again, integral action stops ramping and simply holds the last value. Derivative action offsets the output according to the input’s ramping rate: first positive then negative.
When combined into one PID output, the three actions produce this response:
### Responses to a multi-slope ramp
Proportional action directly mimics the ramp shape of the input. Integral action ramps slowly at first (when the error is small) but increases ramping rate as error increases, then accelerates its increase as the PV ramps even steeper. Once PV = SP again, integral action stops ramping and simply holds the last value. Derivative action offsets the output according to the input’s ramping rate: first positive, then more positive, then it spikes negative when the PV suddenly returns to SP.
When combined into one PID output, the three actions produce this response:
### Responses to a multiple ramps and steps
Proportional action directly mimics the ramp-and-step shape of the input. Integral action ramps slowly at first (when the error is small) but increases ramping rate as error increases. With each higher ramp-and-step in PV, integral action winds up at an ever-increasing rate. Since PV never equals SP again, integral action never stops ramping upward. Derivative action steps with each ramp of the PV.
When combined into one PID output, the three actions produce this response:
### Responses to a sine wavelet
As always, proportional action directly mimics the shape of the input. The 90$$^{o}$$ phase shift seen in the integral and derivative responses, compared to the PV wavelet, is no accident or coincidence. The derivative of a sinusoidal function is always a cosine function, which is mathematically identical to a sine function with the angle advanced by 90$$^{o}$$:
${d \over dx} (\sin x) = \cos x = \sin (x + 90^o)$
Conversely, the integral of a sine function is always a negative cosine function, which is mathematically identical to a sine function with the angle retarded by 90$$^{o}$$:
$\int \sin x \> dx = - \cos x = \sin (x - 90^o)$
In summary, the derivative operation always adds a positive (leading) phase shift to a sinusoidal input waveform, while the integral operation always adds a negative (lagging) phase shift to a sinusoidal input waveform.
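Both identities are easy to confirm numerically, sketched here with a central difference for the derivative and a midpoint rule for the integral:

```python
import math

x, h = 0.7, 1e-6

# derivative of sin is cos, i.e. sin advanced by 90 degrees
deriv = (math.sin(x + h) - math.sin(x - h)) / (2 * h)   # central difference
print(deriv, math.cos(x), math.sin(x + math.pi / 2))

# integral of sin from 0 to x is 1 - cos(x); up to the constant of
# integration this is -cos(x) = sin(x - 90 deg), i.e. sin retarded by 90
n = 100_000
dx = x / n
integral = sum(math.sin((k + 0.5) * dx) for k in range(n)) * dx  # midpoint
print(integral, 1 - math.cos(x), math.sin(x - math.pi / 2) + 1)
```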
When combined into one PID output, these particular integral and derivative actions mostly cancel, since they happen to be sinusoidal wavelets of equal amplitude and opposite phase. Thus, the only way that the final (PID) output differs from proportional-only action in this particular case is the “steps” caused by derivative action responding to the input’s sudden rise at the beginning and end of the wavelet:
If the I and D tuning parameters were such that the integral and derivative responses were not equal in amplitude, their effects would not completely cancel. Rather, the resultant of P, I, and D actions would be a sine wavelet having a phase shift somewhere between $$-90^{o}$$ and $$+90^{o}$$ exclusive, depending on the relative strengths of the P, I, and D actions.
The 90 degree phase shifts associated with the integral and derivative operations are useful to understand when tuning PID controllers. If one is familiar with these phase shift relationships, it is relatively easy to analyze the response of a PID controller to a sinusoidal input (such as when a process oscillates following a sudden load or setpoint change) to determine if the controller’s response is dominated by any one of the three actions. This may be helpful in “de-tuning” an over-tuned (overly aggressive) PID controller, if an excess of P, I, or D action may be identified from a phase comparison of PV and output waveforms.
### Note to students regarding quantitative graphing
A common exercise for students learning the function of PID controllers is to practice graphing a controller’s output given input (PV and SP) conditions, either qualitatively or quantitatively. This can be a frustrating experience for some students, as they struggle to accurately combine the effects of P, I, and/or D responses into a single output trend. Here, I will present a way to ease the pain.
Suppose for example you were tasked with graphing the response of a PD (proportional + derivative) controller to the following PV and SP inputs over time. You are told the controller has a gain of 1, a derivative time constant of 0.3 minutes, and is reverse-acting:
My first recommendation is to qualitatively sketch the individual P and D responses. Simply draw two different trends, each one right above or below the given PV/SP trends, showing the shapes of each response over time. You might even find it easier to do if you re-draw the original PV and SP trends on a piece of non-graph paper with the qualitative P and D trends also sketched on the same piece of non-graph paper. The purpose of the qualitative sketches is to separate the task of determining shapes from the task of determining numerical values, in order to simplify the process.
After sketching the separate P and D trends, label each one of the “features” (changes either up or down) in these qualitative trends. This will allow you to more easily combine the effects into one output trend later:
Now, you may qualitatively sketch an output trend combining each of these “features” into one graph. Be sure to label each ramp or step originating with the separate P or D trends, so you know where each “feature” of the combined output graph originates from:
Once the general shape of the output has been qualitatively determined, you may go back to the separate P and D trends to calculate numerical values for each of the labeled “features.”
Note that each of the PV ramps is 15% in height, over a time of 15 seconds (one-quarter of a minute). With a controller gain of 1, the proportional response to each of these ramps will also be a ramp that is 15% in height.
Taking our given derivative time constant of 0.3 minutes and multiplying that by the PV’s rate-of-change ($$d\hbox{PV} \over dt$$) during each of its ramping periods (15% per one-quarter minute, or 60% per minute) yields a derivative response of 18% during each of the ramping periods. Thus, each derivative response “step” will be 18% in height.
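The arithmetic behind these two numbers can be stated compactly (values taken from the worked example above):

```python
gain = 1.0                            # controller gain (given)
Td = 0.3                              # derivative time constant, minutes
ramp_height = 15.0                    # each PV ramp, in %
ramp_time = 15.0 / 60.0               # 15 seconds, in minutes

p_ramp = gain * ramp_height           # proportional ramp height: 15 %
dpv_dt = ramp_height / ramp_time      # PV rate of change: 60 %/min
d_step = gain * Td * dpv_dt           # derivative step height: 18 %
print(p_ramp, dpv_dt, d_step)
```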
Going back to the qualitative sketches of P and D actions, and to the combined (qualitative) output sketch, we may apply the calculated values of 15% for each proportional ramp and 18% for each derivative step to the labeled “features.” We may also label the starting value of the output trend as given in the original problem (35%), to calculate actual output values at different points in time. Calculating output values at specific points in the graph becomes as easy as cumulatively adding and subtracting the P and D “feature” values to the starting output value:
Now that we know the output values at all the critical points, we may quantitatively sketch the output trend on the original graph:
Published under the terms and conditions of the Creative Commons Attribution 4.0 International Public License
https://de.zxc.wiki/wiki/Topologie_(Mathematik) | # Topology (mathematics)
Cup and full torus are homeomorphic to one another.
Note: A homeomorphism is a direct mapping between the points of the cup and the full torus; the intermediate stages over time serve only to illustrate the continuity of this mapping.
Topology (from Greek τόπος tópos, 'place, location', and -logy) is a fundamental part of mathematics. It deals with the properties of mathematical structures that are preserved under continuous deformations, where the concept of continuity is defined in a very general way by topology. Topology emerged from the concepts of geometry and set theory.
Towards the end of the 19th century, topology emerged as a separate discipline, known in Latin as geometria situs ('geometry of position') or analysis situs (Greco-Latin, 'analysis of position').
Topology has been recognized as a fundamental discipline for decades. Accordingly, alongside algebra, it can be seen as a second pillar for a large number of other fields of mathematics. It is particularly important for geometry, analysis, functional analysis and the theory of Lie groups. For its part, it has also fertilized set theory and category theory.
The basic concept of topology is that of the topological space, which represents a far-reaching abstraction of the notion of “proximity” and thus allows far-reaching generalizations of mathematical concepts such as continuity and limit. Many mathematical structures can be understood as topological spaces. Topological properties of a structure are those that depend only on the structure of the underlying topological space; these are precisely the properties that are not changed by “deformations” or by homeomorphisms. Intuitively, this includes stretching, compressing, bending, distorting and twisting a geometric figure. For example, a sphere and a cube are indistinguishable from the perspective of topology; they are homeomorphic. Likewise, a doughnut (whose shape is called a solid torus in mathematics) and a one-handled cup are homeomorphic, as one can be transformed into the other without cutting (see the animation on the right). In contrast, the surface of the torus is topologically different from the spherical surface: on the sphere, every closed curve can be continuously contracted to a point (this informal language can be made precise), but not every curve on the torus can.
Topology is divided into subfields. These include algebraic topology, geometric topology, and topological graph theory and knot theory. Set-theoretic topology can be seen as the foundation of all these subdisciplines. In it, one also considers topological spaces whose properties differ particularly widely from those of geometric figures.

An important concept in topology is continuity. Continuous mappings correspond in topology to what is usually called homomorphisms in other mathematical categories. A reversible mapping between topological spaces that is continuous in both directions is called a homeomorphism and corresponds to what is usually called an isomorphism in other categories: homeomorphic spaces cannot be distinguished by topological means. A fundamental problem of this discipline is to decide whether two spaces are homeomorphic or, more generally, whether continuous mappings with certain properties exist.
## History
The term “topology” was first used around 1840 by Johann Benedict Listing; the older term analysis situs (roughly, 'investigation of position') remained common for a long time, with a meaning that went beyond the newer, “set-theoretic” topology.
The solution of the Seven Bridges of Königsberg problem by Leonhard Euler in 1736 is considered the first topological and, at the same time, the first graph-theoretical work in the history of mathematics. Another contribution by Euler to the so-called analysis situs is the polyhedron formula of 1750 named after him: if $$e$$ denotes the number of vertices, $$k$$ the number of edges, and $$f$$ the number of faces of a polyhedron (satisfying conditions to be specified), then $$e - k + f = 2$$. It was not until 1860 that it became known, through a copy (made by Gottfried Wilhelm Leibniz) of a lost manuscript by René Descartes, that Descartes had already known the formula.
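Euler's polyhedron formula is easy to verify for the five Platonic solids (vertex, edge and face counts from standard tables):

```python
# (e, k, f) = (vertices, edges, faces), as in Euler's formula e - k + f = 2
polyhedra = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron": (12, 30, 20),
}
for name, (e, k, f) in polyhedra.items():
    print(name, e - k + f)   # 2 in every case
```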
Maurice Fréchet introduced the metric space in 1906. Georg Cantor dealt with the properties of open and closed intervals, examined limit processes, and in doing so founded both modern topology and set theory. Topology is the first branch of mathematics that was consistently formulated in terms of set theory, and, conversely, it gave impetus to the development of set theory.
A definition of topological space was first given by Felix Hausdorff in 1914. In today's terms, he defined an open neighbourhood basis there, but not a topology, which was only introduced by Kazimierz Kuratowski and Heinrich Tietze around 1922. In this form, the axioms were popularized by the textbooks of Kuratowski (1933), Alexandroff/Hopf (1935), Bourbaki (1940) and Kelley (1955). It turned out that a great deal of mathematical knowledge can be transferred to this conceptual basis. For example, it was recognized that different metrics on a fixed base set can lead to the same topological structure, but also that different topologies are possible on the same base set. On this basis, set-theoretic topology developed into an independent research area that is in a certain sense separated from geometry, or rather closer to analysis than to geometry proper.
One goal of topology is the development of invariants of topological spaces. With these invariants, topological spaces can be distinguished. For example, the genus of a compact, connected, orientable surface is such an invariant. The sphere, of genus zero, and the torus, of genus one, are different topological spaces. Algebraic topology emerged from Henri Poincaré's considerations of the fundamental group, which is also a topological invariant. Over time, topological invariants such as the Betti numbers studied by Henri Poincaré have been replaced by algebraic objects such as homology and cohomology groups.
## Basic concepts
### Topological space
Topology (as a branch of mathematics) deals with properties of topological spaces. If an arbitrary underlying set is equipped with a topology (a topological structure), it becomes a topological space, and its elements are understood as points. The topology of the space is then determined by designating certain subsets as open. The same topological structure can equivalently be specified via their complements, which are the closed subsets. Usually, topological spaces are defined in textbooks via their open sets; more precisely, the set $$\mathcal{O}$$ of open sets is called the topology of the topological space $$(X, \mathcal{O})$$.
Based on open or closed sets, numerous topological terms can be defined, such as neighbourhood, continuity, point of contact and convergence.
#### Open sets
Topology (via open sets): A topological space is a set $$X$$ of points together with a set $$\mathcal{O} \subset \mathcal{P}(X)$$ of subsets (the open sets) that satisfies the following conditions:

1. $$X \in \mathcal{O}$$ and $$\emptyset \in \mathcal{O}$$.
2. For arbitrary index sets $$I$$ with $$O_i \in \mathcal{O}$$ for all $$i \in I$$, the union satisfies $$\bigcup_{i \in I} O_i \in \mathcal{O}$$.
3. For finite index sets $$I$$ with $$O_i \in \mathcal{O}$$ for all $$i \in I$$, the intersection satisfies $$\bigcap_{i \in I} O_i \in \mathcal{O}$$.

The pair $$(X, \mathcal{O})$$ is called a topological space, and $$\mathcal{O}$$ the topology of this topological space.
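For a finite point set, the three axioms can be checked mechanically: since the collection itself is finite, closure under pairwise unions and intersections already implies closure under arbitrary unions and finite intersections (by induction). A minimal sketch:

```python
def is_topology(X, candidate):
    # check the open-set axioms for a candidate topology on a finite set X
    opens = {frozenset(s) for s in candidate}
    if frozenset() not in opens or frozenset(X) not in opens:
        return False                       # axiom 1
    for A in opens:
        for B in opens:
            if A | B not in opens:         # axiom 2 (pairwise suffices here)
                return False
            if A & B not in opens:         # axiom 3
                return False
    return True

X = {1, 2, 3}
good = [set(), {1}, {1, 2}, X]     # a chain of sets: always a topology
bad = [set(), {1}, {2}, X]         # missing the union {1} | {2} = {1, 2}
print(is_topology(X, good), is_topology(X, bad))
```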
The most important concept defined via open sets is that of a neighbourhood: a set is a neighbourhood of a point if it contains an open set containing that point. Another important concept is continuity: a mapping

$$f \colon X \to Y$$

between topological spaces $$(X, T_X)$$ and $$(Y, T_Y)$$ is continuous if and only if the preimages of open sets are open, i.e. $$f^{-1}(O_Y) \in T_X$$ for every $$O_Y \in T_Y$$.
#### Closed sets
Starting from the open sets, the closed sets can be defined as those subsets of the space whose complements are open: for every open set $$O$$, the points not contained in it form a closed set $$A := X \setminus O$$.
This immediately yields the equivalent definition:

Topology (via closed sets): A topological space is a set $$X$$ of points together with a set $$\mathcal{A} \subset \mathcal{P}(X)$$ of subsets of $$X$$ (the closed sets; $$\mathcal{P}(X)$$ is the power set of $$X$$) that satisfies the following conditions:

1. $$X \in \mathcal{A}$$ and $$\emptyset \in \mathcal{A}$$.
2. For arbitrary index sets $$I$$ with $$A_i \in \mathcal{A}$$ for all $$i \in I$$, the intersection satisfies $$\bigcap_{i \in I} A_i \in \mathcal{A}$$.
3. For finite index sets $$I$$ with $$A_i \in \mathcal{A}$$ for all $$i \in I$$, the union satisfies $$\bigcup_{i \in I} A_i \in \mathcal{A}$$.

The equivalence with the previous definition via open sets follows directly from De Morgan's laws: each $$\bigcap$$ becomes a $$\bigcup$$ and vice versa.
Closed sets can be imagined as sets of points that contain their boundary. In other words: whenever there are points in the closed set that come arbitrarily close to another point (a point of contact), that point is also contained in the closed set. One asks which basic properties the concept of a closed set should have and then, abstracting from specific definitions of closedness, e.g. from analysis, calls every set together with closed subsets satisfying these conditions a topological space. First of all, the empty set should be closed, since it contains no points that others could touch. Likewise, the set of all points should be closed, since it already contains every possible point of contact. Given any collection of closed sets, their intersection (the set of points contained in all of them) should also be closed, for if the intersection had points of contact lying outside of it, one of the intersected sets would fail to contain this point of contact and so could not be closed. In addition, the union of two (or finitely many) closed sets should again be closed; uniting two closed sets creates no additional points of contact. The union of infinitely many closed sets, on the other hand, is not required to be closed, since these sets could come “ever closer” to some further point and thus touch it.
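The complement construction and the De Morgan duality can be illustrated on the same kind of small finite example:

```python
def closed_sets(X, opens):
    # closed sets are exactly the complements of open sets
    X = frozenset(X)
    return {X - frozenset(O) for O in opens}

X = {1, 2, 3}
opens = [set(), {1}, {1, 2}, X]         # a topology on X
closeds = closed_sets(X, opens)

# De Morgan: complements turn unions of opens into intersections of closeds,
# so the closed sets are closed under intersection and finite union
for A in closeds:
    for B in closeds:
        assert A & B in closeds and A | B in closeds
print(sorted(sorted(c) for c in closeds))
```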
### Homeomorphism
A homeomorphism is a bijective mapping between two topological spaces such that transferring the open sets pointwise also yields a bijection between the topologies of the two spaces. Two topological spaces between which a homeomorphism exists are called homeomorphic. Homeomorphic spaces do not differ with respect to topological properties in the narrower sense. Homeomorphisms can be understood as the isomorphisms in the category of topological spaces.
### Terms not related to topological spaces
Topological spaces can be equipped with additional structures; for example, uniform spaces, metric spaces, topological groups or topological algebras are studied. Properties that make use of additional structures of this kind are no longer necessarily preserved under homeomorphisms, but are sometimes also the subject of investigation in various subfields of topology.
There are also generalizations of the concept of a topological space: in point-free topology, instead of a set of points with distinguished open sets, only the structure of the open sets as a lattice is considered. Convergence structures specify, on an underlying set of points, the values to which each filter converges. Under the catchphrase convenient topology, one seeks classes of spaces similar to topological or uniform spaces but with “more pleasant” category-theoretic properties.
## Subfields of topology
Modern topology is roughly divided into the three areas of set-theoretic topology, algebraic topology and geometric topology. There is also differential topology, which is the basis of modern differential geometry and, despite its extensive use of topological methods, is mostly regarded as a subfield of differential geometry.
### Set-theoretic or general topology
Set-theoretic topology, like the other subfields of topology, comprises the study of topological spaces and the continuous mappings between them. In particular, the concepts of continuity and convergence, which are fundamental to analysis, become fully transparent only in the terminology of set-theoretic topology. The concepts of set-theoretic topology are also used in many other mathematical subfields. In addition, many concepts and theorems of set-theoretic topology are valid and important for the more specific subfields of topology. Examples:
For example, the compactness of a space is an abstraction of the Heine–Borel principle. In the general terminology of set-theoretic topology, the product of two compact spaces is again compact, which generalizes the statement that a closed finite-dimensional cube is compact. In addition, a continuous function from a compact set into the real numbers is bounded and attains its maximum and minimum. This is a generalization of the principle of minimum and maximum.
In general, topological spaces can violate many properties that are familiar from the topology of the real numbers but frequently encountered in commonly used spaces. Therefore, one often considers topological spaces that satisfy certain separation properties, which are minimal requirements for many further-reaching theorems and enable deeper characterizations of the structure of the spaces. Compactness is another example of such “beneficial” properties. In addition, one also considers spaces on which certain additional structures are defined, such as uniform spaces or even topological groups and metric spaces, whose structure permits additional concepts such as completeness.
Another central concept of this subfield is connectedness, in its various forms.
### Algebraic topology
Algebraic topology (also called “combinatorial topology”, especially in older publications) examines questions about topological spaces by reducing the problems to questions in algebra, where they are often easier to answer. A central problem within topology is, for example, the investigation of topological spaces for invariants. Using the theory of homology and cohomology, one searches for such invariants in algebraic topology.
### Geometric topology
Geometric topology deals with two-, three- and four-dimensional manifolds. A two-dimensional manifold is the same thing as a surface, and three- and four-dimensional manifolds are the corresponding generalizations. In geometric topology, one is interested in how manifolds behave under continuous transformations. Typical geometric quantities such as angle, length and curvature vary under continuous maps. A geometric quantity that does not vary, and is therefore of interest, is the number of holes in a surface. Since one deals almost exclusively with manifolds of dimension less than five, this subfield of topology is also called low-dimensional topology. In addition, knot theory, as part of the theory of three-dimensional manifolds, belongs to geometric topology.
## Applications
Since the field of topology is very broad, aspects of it can be found in almost every branch of mathematics. The study of the respective topology therefore often forms an integral part of a deeper theory. Topological methods and concepts have become an integral part of mathematics. A few examples are given here:
Differential geometry
Manifolds
In differential geometry, the study of manifolds plays a central role. These are special topological spaces, i.e. sets that carry a certain topological structure. Often they are also called topological manifolds. Fundamental properties are proven with topological means before the manifolds are endowed with further structures, which then form independent (and non-equivalent) subclasses (e.g. differentiable manifolds, PL manifolds, etc.).
Example of a result of geometric topology: the classification of surfaces
Closed surfaces are special types of 2-dimensional manifolds. With the help of algebraic topology it can be shown that every surface consists of a finite number of embedded 2-polytopes that are glued together along their edges. In particular, this allows all closed surfaces to be classified into 3 classes, which is why one can always assume that a closed surface is in a "normal form".
Functional Analysis Functional analysis arose from the study of function spaces, which were first abstracted as Banach and Hilbert spaces. Today, functional analysis deals more generally with infinite-dimensional topological vector spaces. These are vector spaces equipped with a topology such that the basic algebraic operations of the vector space are continuous ("compatible" with the topology). Many of the concepts examined in functional analysis can be traced back solely to the structure of topological vector spaces (Hilbert and Banach spaces in particular can be understood as such), so that these can be viewed as the central objects of investigation in functional analysis.
Descriptive Set Theory Descriptive set theory deals with certain "constructible" and "well-formed" subsets of Polish spaces. Polish spaces are special topological spaces (without any further structure), and many of the central concepts examined are purely topological in nature. These topological notions are related to concepts of "definability" and "computability" from mathematical logic, about which statements can be made using topological methods.
Harmonic Analysis The central objects of investigation in harmonic analysis are locally compact groups, that is, groups equipped with a compatible locally compact topological structure. These represent a generalization of the Lie groups and thus of the idea of "continuous symmetries".
### Application in economics
Topological concepts are used in economics mainly in the field of welfare economics. Topology is also used in general equilibrium models.
http://hal.in2p3.fr/view_by_stamp.php?label=IN2P3&langue=fr&action_todo=view&id=in2p3-00285473&version=1&view=extended_view | HAL : in2p3-00285473, version 1
arXiv : 0805.4779
Implications of the cosmic ray spectrum for the mass composition at the highest energies. Allard D., Busca N.G., Decerprit G., Olinto A., Parizot E. Journal of Cosmology and Astroparticle Physics 10 (2008) 033 - http://hal.in2p3.fr/in2p3-00285473
Physics/Astrophysics/Cosmology and extra-galactic astrophysics; Planet and Universe/Astrophysics/Cosmology and extra-galactic astrophysics
Implications of the cosmic ray spectrum for the mass composition at the highest energies
D. Allard1, N.G. Busca1, G. Decerprit1, A. Olinto1, 2, E. Parizot1
1: APC - UMR 7164 - AstroParticule et Cosmologie (http://www.apc.univ-paris7.fr/), CNRS: UMR7164 – IN2P3 – Observatoire de Paris – Université Paris VII - Paris Diderot – CEA: DSM/IRFU. APC - UMR 7164, Université Paris Diderot, 10 rue Alice Domon et Léonie Duquet, case postale 7020, F-75205 Paris Cedex 13, France. 2: University of Chicago (http://www.uchicago.edu/), Edward H. Levi Hall, 5801 South Ellis Avenue, Chicago, Illinois 60637, United States
APC - AHE
The significant attenuation of the cosmic ray flux above ~5 × 10^19 eV suggests that the observed high energy spectrum is shaped by the so-called GZK effect (GZK: Greisen-Zatsepin-Kuzmin). This interaction of ultra-high energy cosmic rays (UHECRs) with the ambient radiation fields also affects their composition. We review the effect of photodissociation interactions on different nuclear species and analyze the phenomenology of secondary-proton production as a function of energy. We show that, by itself, the UHECR spectrum does not constrain the composition of cosmic rays at their extragalactic sources. While the propagated composition (i.e., as observed at Earth) cannot contain significant amounts of intermediate mass nuclei (say between He and Si), whatever the source composition, and while it is vastly proton dominated when protons are able to reach energies above 10^20 eV at the source, we show that the propagated composition can be dominated by Fe and sub-Fe nuclei at the highest energies, either if the sources are very strongly enriched in Fe nuclei (a rather improbable situation), or if the accelerated protons have a maximum energy of a few 10^19 eV at the sources. We also show that in the latter cases, the expected flux above 3 × 10^20 eV is very much reduced as compared to the case when protons dominate in this energy range, both at the sources and at Earth.
Articles in peer-reviewed journals
2008
Journal of Cosmology and Astroparticle Physics Publisher Institute of Physics (IOP) ISSN 1475-7508 (eISSN : 1475-7516)
10
033
16 pages, 7 figures
Link to full text: http://fr.arXiv.org/abs/0805.4779
in2p3-00285473, version 1 http://hal.in2p3.fr/in2p3-00285473 oai:hal.in2p3.fr:in2p3-00285473 Contributor: Simone Lantz <> Submitted: Thursday, 5 June 2008, 15:49:52. Last modified: Friday, 21 September 2012, 14:56:46
http://mathhelpforum.com/number-theory/72378-recurrence-relation.html | 1. ## A Recurrence Relation
Define the function $f: \mathbb{N} \longrightarrow \mathbb{N}$ recursively by: $\sum_{d \mid n}f(d)=2^n, \ \forall n \in \mathbb{N}.$ Prove that $f(n)$ is divisible by $n$ for all $n \in \mathbb{N}.$
2. Originally Posted by NonCommAlg
Define the function $f: \mathbb{N} \longrightarrow \mathbb{N}$ recursively by: $\sum_{d \mid n}f(d)=2^n, \ \forall n \in \mathbb{N}.$ Prove that $f(n)$ is divisible by $n$ for all $n \in \mathbb{N}.$
A very small part of the answer
$\sum_{d \mid 1}f(d)=f(1)=2$
If $n$ is prime, $\sum_{d \mid n}f(d)=f(1)+f(n)=2^n$, therefore $f(n)=2^n-2$, which is divisible by $n$ according to Fermat's little theorem
Fermat's little theorem - Wikipedia, the free encyclopedia
3. First, that recurrence relation completely determines the numbers $f(n)$, and they are in fact given by: $f\left( n \right) = \sum\limits_{d \mid n} \mu \left( \tfrac{n}{d} \right) \cdot 2^d$, by Möbius' inversion formula.
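A quick numerical sanity check (an editor's sketch, not part of the original post): the recurrence can be unrolled directly in Python, verifying the divisibility claim $n \mid f(n)$ for small $n$.

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def f(n, memo={}):
    # Defined by sum_{d|n} f(d) = 2^n, so f(n) = 2^n minus the sum over proper divisors.
    if n not in memo:
        memo[n] = 2**n - sum(f(d) for d in divisors(n) if d < n)
    return memo[n]

# Verify the claim n | f(n), and the prime case f(p) = 2^p - 2:
assert all(f(n) % n == 0 for n in range(1, 30))
assert f(7) == 2**7 - 2
print([f(n) for n in range(1, 7)])  # [2, 2, 6, 12, 30, 54]
```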
Okay, but the path we will take is rather different. Consider the primitive strings of bits of length (*) $n$; the number of these happens to satisfy the same recurrence relation, hence the two quantities must be equal — from what I said above.
(*) A string is not primitive if it is a concatenation of smaller (all of them equal) strings. For example $1\,0\,1\,0$ is not a primitive string, since it's made out of $1\,0$.
Take $1\,1\,0$ for instance; this is a primitive string of bits of length 3. Next note that if we 'move' the places by 1 to the right (and the last one goes to the first, a sort of circular permutation) we get $0\,1\,1$, which is another primitive string of length 3, and if we do it again we get $1\,0\,1$, which is again a primitive string. Note also that if we do it once more we get back to the initial one.
So this suggests that we can partition the set of primitive strings of length $n$ into sets of $n$ elements, in which each of the primitive strings is obtained by applying the operation we have just used repeatedly to a certain primitive string (the operation has a name but it's on the tip of my tongue; oh well, I'll call it circular permutation). This is clearly an equivalence relation.
First, it's easy to see that this operation applied to a primitive string generates another (suppose the contrary and you'll get a contradiction by using a method similar to the one below).
We have to show that indeed we have sets of $n$ different strings, always.
So, suppose we have a primitive string of length $n$, which we will associate with a function $\delta : \mathbb{Z} \to \{0,1\}$ where: $\delta(i) = \begin{cases} \text{what's in place } i \pmod n & \text{if } i \not\equiv 0 \pmod n \\ \text{what's in place } n & \text{if } i \equiv 0 \pmod n \end{cases}$
Suppose that applying our operation repeatedly $s$ times we get back to the same initial string, which is equivalent to the condition $\delta(i+s) = \delta(i)$; then $\delta(i + s \cdot m) = \delta(i)\ \forall m \in \mathbb{Z}$.
Now consider $M = \min\left\{ k \in \mathbb{Z}^+ : \delta(i) = \delta(i+k)\ \forall i \in \mathbb{Z} \right\}$, and the integer division $n = M \cdot q + r$ with $q, r \in \mathbb{Z},\ 0 \leqslant r < M$. Since $n \in \left\{ k \in \mathbb{Z}^+ : \delta(i) = \delta(i+k)\ \forall i \in \mathbb{Z} \right\}$, we get $\delta(i) = \delta\left[ i + \underbrace{(M \cdot q + r)}_{n} \cdot h \right] = \delta(i + r \cdot h)\ \forall h \in \mathbb{Z}$. Hence, if $r$ is not 0 we have $r \in \left\{ k \in \mathbb{Z}^+ : \delta(i) = \delta(i+k)\ \forall i \in \mathbb{Z} \right\}$ and $r < M$, which is a contradiction; hence $r = 0$ and $M \mid n$.
But this implies that our primitive string is a concatenation of $\tfrac{n}{M}$ smaller, and equal, strings of length $M$, thus it's not a primitive string: CONTRADICTION
Now if $\xi$ represents a string of length $n$, let $D \xi$ be the string of length $n$ resulting from doing the operation on $\xi$. We've just seen that, for any primitive string $\varepsilon$ of length $n$, $D^i \varepsilon \ne \varepsilon$ for all $0 < i < n$. (1)
So suppose that for a primitive string $\xi$ the strings $D^0 \xi, \dots, D^{n-1} \xi$ are not all different, that is, $D^i \xi = D^j \xi$ for some $n > j > i \geq 0$; then $D^i \xi = D^j \xi = D^{j-i}\left( D^i \xi \right)$ and we get a contradiction by using (1), since we know that $D^i \xi$ is also a primitive bit string of length $n$.

Therefore the claim is proven, since we have $f(n) = n \cdot S$ where $S$ is the number of equivalence classes.
4. that's an interesting combinatorial approach PaulRS. the problem also has an algebraic solution:
Fact: the number of monic irreducible polynomials of degree $n$ over a finite field with $p$ elements, where $p$ is prime, is: $\frac{1}{n}\sum_{d \mid n} \mu \left(\frac{n}{d} \right)p^d.$
i encourage anyone who's new to algebra (and number theory) to think about this beautiful result and see if they can prove it!
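(An editor's sanity check, not from the original thread: rather than testing polynomial irreducibility directly, the sketch below uses the equivalence with the primitive strings from post 3 — the number of primitive, i.e. aperiodic, strings of length $n$ over a $p$-letter alphabet equals $n$ times the count given by the formula.)

```python
from itertools import product

def mobius(m):
    # Naive Möbius function: 0 if m has a squared prime factor, else (-1)^(number of prime factors).
    if m == 1:
        return 1
    result, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            m //= d
            if m % d == 0:
                return 0
            result = -result
        d += 1
    if m > 1:
        result = -result
    return result

def count_by_formula(p, n):
    # (1/n) * sum over d | n of mu(n/d) * p^d
    return sum(mobius(n // d) * p**d for d in range(1, n + 1) if n % d == 0) // n

def count_primitive_strings(p, n):
    # Strings over {0..p-1} fixed by no nontrivial rotation (PaulRS's primitive strings).
    return sum(
        1
        for s in product(range(p), repeat=n)
        if all(s != s[k:] + s[:k] for k in range(1, n))
    )

for p in (2, 3):
    for n in range(1, 6):
        assert count_primitive_strings(p, n) == n * count_by_formula(p, n)
print(count_by_formula(2, 3))  # 2: the polynomials x^3 + x + 1 and x^3 + x^2 + 1
```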
5. Originally Posted by NonCommAlg
that's an interesting combinatorial approach PaulRS. the problem has also an algebraic solution:
Fact: the number of irreducible polynomials of degree $n$ over a finite field with $p$ elements, where $p$ is prime, is: $\frac{1}{n}\sum_{d \mid n} \mu \left(\frac{n}{d} \right)p^d.$
i encourage anyone who's new to algebra (and number theory) to think about this beautiful result and see if they can prove it!
Here is a hint to this: Show $x^{p^n} - x = \prod_j p_j(x)$, then compare degrees.
Where the product is over all monic irreducible polynomials of degree dividing $n$.
https://en.wikipedia.org/wiki/Twomey_effect | # Twomey effect
The Twomey effect describes how additional cloud condensation nuclei (CCN), possibly from anthropogenic pollution, may increase the amount of solar radiation reflected by clouds. This is an indirect effect (or radiative forcing) by such particles, as distinguished from direct effects (forcing) due to enhanced scattering or absorbing radiation by such particles not in clouds.
Cloud droplets normally form on aerosol particles that serve as CCN. Increasing the number concentration of CCN can lead to formation of more cloud droplets, which, in turn, have smaller size. The increase in number concentration increases the optical depth of the cloud, which results in an increase in the cloud albedo, making clouds appear whiter. Satellite imagery often shows trails of cloud or of enhanced brightness of cloud behind ocean-going ships due to this effect. The decrease in global mean absorption of solar radiation due to increases in CCN concentrations exerts a cooling influence on climate; the global average magnitude of this effect over the industrial era is estimated as between -0.3 and -1.8 Wm−2.[1]
## Derivation
Assume a uniform cloud that extends infinitely in the horizontal plane, and assume that the particle size distribution peaks near an average value $\bar{r}$.

The formula for the optical depth of a cloud:

$\tau = 2\pi h \bar{r}^{2} N$

where $\tau$ is the optical depth, $h$ is the cloud thickness, $\bar{r}$ is the average particle radius, and $N$ is the total particle number density.

The formula for the liquid water content of a cloud is:

$LWC = \tfrac{4}{3}\pi \bar{r}^{3} \rho_{L} h N$

where $\rho_{L}$ is the density of liquid water.

Taking our assumptions into account, we can combine the two to derive this expression:

$\tau = \tfrac{3}{2} \dfrac{LWC}{\rho_{L} \bar{r}}$

If we assume the liquid water content ($LWC$) is equal for the cloud before and after altering the particle density, we obtain:

$\bar{r}_{2} = \bar{r}_{1} \left( \dfrac{N_{1}}{N_{2}} \right)^{1/3}$

Now assume the total particle density $N$ is increased by a factor of 2, and solve for how $\bar{r}_{1}$ changes when $N$ is doubled:

$\bar{r}_{2} = \bar{r}_{1} \left( \dfrac{N_{1}}{2N_{1}} \right)^{1/3} \approx 0.79\, \bar{r}_{1}$

We can now take the equation that relates $\tau$ to $LWC$ to solve for the change in optical depth when the particle size is reduced:

$\tau_{2} = \dfrac{\tau_{1}}{0.79} \approx 1.26\, \tau_{1}$

In more general terms, the Twomey effect states that for a fixed liquid water content $LWC$ and cloud depth, the optical thickness can be represented by:

$\tau \propto N^{1/3}$
This brings us to the conclusion that increasing the total particle density also increases the optical depth, illustrating the Twomey effect mathematically.
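A quick numerical illustration of this $N^{1/3}$ scaling (a sketch with made-up cloud parameters; the values of $h$, $\bar{r}_1$ and $N_1$ below are purely illustrative):

```python
import math

def optical_depth(h, r, N):
    # tau = 2*pi*h*r^2*N, the geometric-optics expression used above
    return 2 * math.pi * h * r**2 * N

h, r1, N1 = 500.0, 10e-6, 1e8        # cloud thickness [m], droplet radius [m], number density [m^-3]
tau1 = optical_depth(h, r1, N1)

N2 = 2 * N1                          # double the droplet number...
r2 = r1 * (N1 / N2) ** (1 / 3)       # ...at fixed liquid water content, so droplets shrink
tau2 = optical_depth(h, r2, N2)

print(tau2 / tau1)                   # 2**(1/3), about 1.26, i.e. tau grows as N^(1/3)
```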
https://www.gerad.ca/fr/events/1234 | Groupe d’études et de recherche en analyse des décisions
# Topological recursion
## Bertrand Eynard – CPT, CEA Saclay, France
Topological recursion is a ubiquitous and universal recursive relationship that has appeared in various domains of mathematics and physics: volumes of moduli spaces, coefficients of asymptotic expansions in random matrix theory, Hurwitz numbers, Jones polynomials, Gromov-Witten invariants, and many other combinatorial objects, all mysteriously satisfy the same relation. Moreover, this recursion relation is effective: it allows an actual computation. This recursion has been axiomatized into a definition of some "new invariants" of curves. In this lecture we shall introduce the topological recursion, illustrate it on examples and mention its beautiful properties.
https://pos.sissa.it/350/107/ | Volume 350 - 7th Annual Conference on Large Hadron Collider Physics (LHCP2019) - Plenaries
Studies of rare electroweak multiboson interactions at the LHC
P. Chang* on behalf of the ATLAS and CMS Collaborations
*corresponding author
Full text: pdf
Pre-published on: 2019 September 27
Published on: 2019 December 04
Abstract
We present a summary of the current status of measurements in multiboson final states at the LHC from the ATLAS and CMS experiments. Studying the rare productions of electroweak multibosons at the LHC can probe new physics beyond the energy reach of the LHC. Various searches for rare processes involving multiboson interactions are presented and their impacts on the constraints on new physics in the framework of Standard Model Effective Field Theory are also discussed. A brief highlight of the high luminosity LHC projection study is discussed as well.
DOI: https://doi.org/10.22323/1.350.0107
Open Access
https://ai.stackexchange.com/questions/5359/does-k-consistency-always-imply-k-1-consistency | # Does k consistency always imply (k - 1) consistency?
From Russell-Norvig:
A CSP is strongly k-consistent if it is k-consistent and is also (k − 1)-consistent, (k − 2)-consistent, . . . all the way down to 1-consistent.
How can a CSP be k-consistent without being (k - 1)-consistent? I can't think of any counterexample for this case. Any help would be appreciated.
http://math.stackexchange.com/questions/182916/how-do-i-integrate-this-expression-int2x-dx-over-x3x2-3 | # How do I integrate this expression: $\int{2x\,dx\over x^3+x^{2/3}}$?
How do I integrate this expression: $\displaystyle\int{2x\,dx\over x^3+x^{2/3}}$?
Let $u=x^{1/3}$, then $u^3=x$ so $dx=3u^2 du$ and now your integrand is a rational function.
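A quick SymPy check (an editor's sketch, not part of the original answer) that the suggested substitution really does produce a rational integrand:

```python
from sympy import symbols, simplify, Rational

x, u = symbols('x u', positive=True)
integrand = 2*x / (x**3 + x**Rational(2, 3))

# Substitute x = u**3 (so dx = 3*u**2 du):
transformed = integrand.subs(x, u**3) * 3*u**2

# The integrand collapses to 6*u**3/(u**7 + 1), a rational function of u:
assert simplify(transformed - 6*u**3 / (u**7 + 1)) == 0
```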
On that note: $u^7+1=(u+1)(u^6-u^5+u^4-u^3+u^2-u+1)$. I wish the OP luck with his integration... – Guess who it is. Aug 15 '12 at 17:15
http://dimacs.rutgers.edu/~graham/pubs/html/CormodeGolabKornMcGregorSrivastavaZhang09.html | ## G. Cormode, L. Golab, F. Korn, A. McGregor, D. Srivastava, and X. Zhang. Estimating the confidence of conditional functional dependencies. In ACM SIGMOD International Conference on Management of Data (SIGMOD), 2009.
Conditional functional dependencies (CFDs) have recently been proposed as extensions of classical functional dependencies that apply to a certain subset of the relation, as specified by a pattern tableau. Calculating the support and confidence of a CFD (i.e., the size of the applicable subset and the extent to which it satisfies the CFD) gives valuable information about data semantics and data quality. While computing the support is easier, computing the confidence exactly is expensive if the relation is large, and estimating it from a random sample of the relation is unreliable unless the sample is large. We study how to efficiently estimate the confidence of a CFD with a small number of passes (one or two) over the input using small space. Our solutions are based on a variety of sampling and sketching techniques, and apply when the pattern tableau is known in advance, and also the harder case when this is given after the data have been seen. We analyze our algorithms, and show that they can guarantee a small additive error; we also show that relative error guarantees are not possible. We demonstrate the power of these methods empirically, with a detailed study over a mixture of real and synthetic data. These experiments show that it is possible to estimate the CFD confidence very accurately with summaries which are much smaller than the size of the data they represent.
This file was generated by bibtex2html 1.92.
https://www.physicsforums.com/threads/about-stokes-thm.653596/ | # About stoke's thm.
1. Nov 20, 2012
### sigh1342
1. The problem statement, all variables and given/known data
if my curve is an ellipse cut out by a cylinder $$(x^2+y^2=a^2)$$ and a plane $$ax+by+cz=d$$, and $$\operatorname{curl} F=\langle 0,0,f(x,y)\rangle,$$
and the question is about finding the line integral $$\oint F\cdot dr,$$
then I apply Stokes' theorem to the surface $$S_{1}$$, which is the projection of the ellipse onto the x-y plane
$$(x^2+y^2 \le a^2,\ z=0),$$ plus $$S_{2}$$, the lateral surface of the cylinder between the ellipse and the circle. Since $$\iint_{S_2} \operatorname{curl} F \cdot dS$$ is 0 (the curl points in the $$k$$ direction while the lateral surface's normal is horizontal),
what I need to do is compute $$\iint_{S_1} \operatorname{curl} F \cdot dS = \iint_{x^2+y^2 \le a^2} f(x,y)\,dx\,dy$$
2. Relevant equations
3. The attempt at a solution
Just want to confirm whether the approach is correct. Sorry for the ugly typesetting and poor English.
2. Nov 21, 2012
### LCKurtz
Presumably, those are not the same $a$.
You haven't told us what F is. Do you have an F whose curl is that form? (And you would normally write it as curl F without the dot.)
It isn't the way I would have analyzed the problem, but if I understand what you are doing I think the answer is yes. And I would work the last integral in polar coordinates. Don't forget about the correct orientation.
3. Nov 22, 2012
### sigh1342
Ya, not the same $a$. I mean that $$curl.F$$ only contains a $$k$$ component,
e.g. $$F=<-y^3,x^3,z^3>$$, so that $$curl.F = <0,0,3x^2+3y^2>.$$
can you tell me about your approach for these questions?
4. Nov 22, 2012
### LCKurtz
You have $$\int_C\vec F\cdot d\vec r = \iint_S\nabla \times \vec F\cdot d\vec S =\iint_S \langle 0,0,f(x,y)\rangle\cdot d\vec S$$where $S$ is the portion of the plane $ax+by+cz = d$. I would then parameterize the surface using$$x=x,\, y = y,\, z =\frac{d-ax-by} c$$so$$\vec R(x,y) = \langle x,y, \frac{d-ax-by} c\rangle$$and use$$\iint_S \langle 0,0,f(x,y)\rangle\cdot d\vec S= \pm\iint_{(x,y)}\langle 0,0,f(x,y)\rangle\cdot\vec R_x\times \vec R_y\, dydx$$
That last integral is over the circle in the $xy$ plane and the sign is chosen to agree with the orientation around the curve using the right hand rule. You would probably want to change that last integral to polar coordinates to evaluate it because of the circle.
You don't have to think about the lateral surface of the cylinder and this method would work whether or not your curl had zeroes in the first two components.
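For a concrete sanity check (my own numeric sketch, not from the thread), take post 3's example $F=\langle -y^3, x^3, z^3\rangle$: the recipe above predicts $\oint_C\vec F\cdot d\vec r = \iint_{x^2+y^2\le R^2} 3(x^2+y^2)\,dA = \tfrac{3\pi R^4}{2}$ for the counterclockwise-from-above orientation, independent of the plane coefficients (here $R$ is the cylinder radius, deliberately a different letter from the plane coefficient $a$):

```python
import math

def line_integral(R, a, b, c, d, n=20000):
    """Numerically integrate F . dr for F = <-y^3, x^3, z^3> around the
    curve where the cylinder x^2 + y^2 = R^2 meets a*x + b*y + c*z = d,
    traversed counterclockwise as seen from above (assumes c > 0)."""
    total = 0.0
    dt = 2 * math.pi / n
    for i in range(n):
        t = i * dt
        x, y = R * math.cos(t), R * math.sin(t)
        z = (d - a * x - b * y) / c          # curve stays on the plane
        dx = -R * math.sin(t) * dt
        dy = R * math.cos(t) * dt
        dz = (-a * dx - b * dy) / c          # dz follows from the plane
        total += (-y**3) * dx + x**3 * dy + z**3 * dz
    return total

R, a, b, c, d = 1.0, 2.0, -1.0, 3.0, 5.0     # arbitrary plane, radius 1
approx = line_integral(R, a, b, c, d)
exact = 1.5 * math.pi * R**4                 # 3*pi*R^4/2 from Stokes
```

The $z^3\,dz$ term contributes nothing around a closed curve (it is an exact differential), so the numeric answer matches $3\pi R^4/2$ no matter which plane is chosen.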
5. Nov 22, 2012
### sigh1342
thank you guys!!
you are so helpful
https://infoscience.epfl.ch/record/110714 | Infoscience
Conference paper
# Recent Advances in Multi-view Distributed Video Coding
We consider dense networks of surveillance cameras capturing overlapping images of the same scene from different viewing directions, a scenario referred to as multi-view. Data compression is paramount in such a system due to the large amount of captured data. In this paper, we propose a Multi-view Distributed Video Coding approach. It allows for low complexity and low power consumption at the encoder side, and for the exploitation of inter-view correlation without communication among the cameras. We introduce a combination of temporal intra-view side information and homography-based inter-view side information. Simulation results show both the improvement of the side information and a significant gain in terms of coding efficiency.
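The geometric core of homography-based inter-view side information is warping pixel coordinates from one camera's view into another's through a 3x3 matrix. A generic sketch of that step (illustrative only, not the authors' implementation; the example matrix is an assumed camera-to-camera mapping):

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 homography H (nested lists,
    row-major) and dehomogenize, returning (x', y')."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# Example: an inter-view mapping that is a pure shift by (5, -2) pixels.
T = [[1.0, 0.0, 5.0],
     [0.0, 1.0, -2.0],
     [0.0, 0.0, 1.0]]
```

In practice the homography would be estimated from matched features between the overlapping views, and the warped neighbouring frame serves as side information for Wyner-Ziv decoding.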
#### Reference
• LTS-CONF-2007-034
Record created on 2007-08-28, modified on 2016-08-08
http://mathhelpforum.com/algebra/92559-exponent-multiplication-division-again.html | # Thread: Exponent Multiplication and Division (again)
1. ## Exponent Multiplication and Division (again)
For the following expression I am having difficulty understanding why one cannot proceed with the rule $a^na^m=a^{n+m}$ using a $-9$ from $(3xy^4)^{-2}$, instead of first converting it to a fraction, e.g. $(\frac{1}{9}x^{-2}y^{-8})(144x^4y^2)$
Expression
$(3xy^4)^{-2}(12x^2y)^2$
= $(\frac{1}{9}x^{-2}y^{-8})(144x^4y^2)= 16x^2y^{-6}$
That's as far as I will simplify.
Why cannot this read $(-9x^{-2}y^{-8})(144x^4y^2) ?$
2. I think the technique that you are trying to apply is moving a negative power from the denominator, and while you are on the right track, what you have done is incorrect.
$\frac{1}{9}=9^{-1}$
$\frac{1}{9}\ne-9$
It is easy to get tricked into this, as it is customary to change, for example, $2y^{-2}$ to $\frac{2}{y^2}$. But it is extremely important to remember that if you are going to do this with a constant with no visible exponent, it really looks like this:
$\frac{1}{9}=\frac{1}{9^1}=9^{-1}$
So if you bring that 9 up, its exponent value of 1 becomes negative, not the 9 itself.
Also, one other thing to keep in mind if you are unsure of a method of simplification, or want to double-check your answer: remember that simplifications are aesthetic and never change the actual value of the function. If you simplify A to B to C, the following must ALWAYS be true: $A=B=C$.
With that in mind, you can check the value of any given equation relative to others by picking arbitrary values and plugging them in to get a value. This value should be equal to that of any equations preceding it in the simplification process and any coming after it.
For the sake of example, I am choosing $x=5$ and $y=10$.
First equation - The $\frac{1}{9}$ equation.
$(\frac{1}{9}x^{-2}y^{-8})(144x^4y^2)$
= $(\frac{1}{9}5^{-2}10^{-8})(144(5)^4(10)^2)$
= $0.0004$
Second Equation - The $9^{-1}$ equation.
$(9^{-1}x^{-2}y^{-8})(144x^{4}y^{2})$
= $(9^{-1}(5)^{-2}(10)^{-8})(144(5)^{4}(10)^{2})$
= $0.0004$
Third equation - Comparing the above to the final simplified $16x^2y^{-6}$
$16x^2y^{-6}$
= $16(5)^2(10)^{-6}$
= $0.0004$
Finally, we will plug these values into the $-9$ equation and see whats comes out.
$(-9x^{-2}y^{-8})(144x^{4}y^2)$
= $(-9(5)^{-2}(10)^{-8})(144(5)^{4}(10)^2)$
= $-0.0324$
As you can see the method of simplification you used is not valid as it causes the value of the equation with the same x and y values to change. Does this help?
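The substitution check above can also be scripted (my own sketch, not from the thread): genuinely equivalent simplifications agree for *every* choice of x and y, while the erroneous $-9$ version does not.

```python
import random

# The three candidate forms from the thread.
def original(x, y):   return (3 * x * y**4)**-2 * (12 * x**2 * y)**2
def simplified(x, y): return 16 * x**2 * y**-6
def wrong(x, y):      return (-9 * x**-2 * y**-8) * (144 * x**4 * y**2)

rnd = random.Random(1)
for _ in range(100):
    x, y = rnd.uniform(0.5, 5), rnd.uniform(0.5, 5)
    # equivalent forms match to floating-point precision...
    assert abs(original(x, y) - simplified(x, y)) <= 1e-9 * abs(simplified(x, y))
    # ...while the -9 version is always far off (it even flips the sign)
    assert abs(original(x, y) - wrong(x, y)) > 1e-6

# The thread's spot check, x = 5 and y = 10:
value = simplified(5, 10)   # 0.0004
```

One random pair would usually suffice to expose the error, but looping over many values guards against an unlucky coincidence.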
Unfortunately Kasper, 1/9 is correct according to the book in this case.
They solve it as follows
$(3xy^4)^{-2}(12x^2y)^2$
$(\frac{1}{9}x^{-2}y^{-8})(144x^4y^2)= 16x^2y^{-6}=\frac{16x^2}{y^6}$
Again, I'm not sure why the 1/9 is not a -9.
5. Originally Posted by allyourbass2212
Unfortunately Kasper 1/9 is correct according to the book in this case.
They solve it as follows
$(3xy^4)^{-2}(12x^2y)^2$
$(\frac{1}{9}x^{-2}y^{-8})(144x^4y^2)= 16x^2y^{-6}=\frac{16x^2}{y^6}$
Again, I'm not sure why the 1/9 is not a -9.
No, you read my post wrong, I am saying that $\frac{1}{9}$ is correct and that $-9$ is not correct. My first post correlates to the solution you provided by the following statement:
$3^{-2}=9^{-1}=\frac{1}{9}\ne-9$
The $\frac{1}{9}$ never was a $-9$, it could not have been because the two are not equivalent.
Read over my first post about how negative exponents work, let me know if you still don't understand.
6. Ive read over your post but I am still a little confused.
For instance
$3^{-2}=9^{-1}=\frac{1}{9}\ne-9$
So it appears any number to a negative exponent you can rewrite as $n^{-1}$ rather than $-n$.
For instance
$4^{-2}=16^{-1}$
$7^{-2}=49^{-1}$
Is this correct?
If so that would allow us to use $a^{-n} = \frac{1}{a^n}$, in which case the $16^{-1}$would equal $\frac{1}{16}$ and $49^{-1}$ would equal $\frac{1}{49}$.
Is my logic correct?
7. Originally Posted by allyourbass2212
Ive read over your post but I am still a little confused.
For instance
$3^{-2}=9^{-1}=\frac{1}{9}\ne-9$
So it appears any number to a negative exponent you can rewrite as $n^{-1}$ rather than $-n$.
For instance
$4^{-2}=16^{-1}$
$7^{-2}=49^{-1}$
Is this correct?
If so that would allow us to use $a^{-n} = \frac{1}{a^n}$, in which case the $16^{-1}$would equal $\frac{1}{16}$ and $49^{-1}$ would equal $\frac{1}{49}$.
Is my logic correct?
Very correct
8. Originally Posted by allyourbass2212
Ive read over your post but I am still a little confused.
For instance
$3^{-2}=9^{-1}=\frac{1}{9}\ne-9$
So it appears any number to a negative exponent you can rewrite as $n^{-1}$ rather than $-n$.
For instance
$4^{-2}=16^{-1}$
$7^{-2}=49^{-1}$
Is this correct?
If so that would allow us to use $a^{-n} = \frac{1}{a^n}$, in which case the $16^{-1}$would equal $\frac{1}{16}$ and $49^{-1}$ would equal $\frac{1}{49}$.
Is my logic correct?
Exactly, and another thing to try to keep these rules fresh in your mind. Just stick em in your calculator. Expressions with equivalent values are just that, equivalent, and therefore simplifications of each other.
For example, let's consider the statement:
$4^{-2}=16^{-1}=\frac{1}{16}\ne-16$
Just plug each one into your calculator to find what is equivalent and what isn't. Like a "One of these does not belong" game.
i.e.
$4^{-2}=0.0625$
$16^{-1}=0.0625$
$\frac{1}{16}=0.0625$
$-16=-16$
Also, just to clarify one piece of your post.
So it appears any number to a negative exponent you can rewrite as $n^{-1}$ rather than $-n$.
This is correct, but I would like to reinforce that it is not a *can*. If you are simplifying it this way, that is exactly what needs to be done because it is never equal to -n. Any number to a negative exponent is never equal to just ignoring the negative, applying the power, and then bringing the negative down to make the entire expression negative.
In other words, to reapply the concepts above, $2^{-3}$ is never equal to $-8$.
9. Thanks again guys! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 78, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8402421474456787, "perplexity": 347.5510029174901}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660706.30/warc/CC-MAIN-20160924173740-00142-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://abdn.pure.elsevier.com/en/publications/morita-invariance-of-equivariant-lusternik-schnirelmann-category- | # Morita Invariance of Equivariant Lusternik-Schnirelmann Category and Invariant Topological Complexity
Andrés Angel* (Corresponding Author), Hellen Colman, Mark Grant, John Oprea
*Corresponding author for this work
Research output: Contribution to journal › Article › peer-review
## Abstract
We use the homotopy invariance of equivariant principal bundles to prove that the equivariant ${\mathcal A}$-category of Clapp and Puppe is invariant under Morita equivalence. As a corollary, we obtain that both the equivariant Lusternik-Schnirelmann category of a group action and the invariant topological complexity are invariant under Morita equivalence. This allows a definition of topological complexity for orbifolds.
Original language: English
Pages (from-to): 179-195
Number of pages: 14
Journal: Theory and Applications of Categories
Volume: 35
Issue number: 7
Publication status: Published - 18 Feb 2020
## Keywords
• math.AT
• math.CT
• Lusternik-Schnirelmann category
• Topological complexity
https://www.ideals.illinois.edu/browse?rpp=20&order=ASC&sort_by=-1&value=Mathematics&etal=-1&type=subject&starts_with=L | # Browse by Subject "Mathematics"
• (1969)
• (1971)
• (1988)
Since Lorentz introduced Lorentz space, there have been several generalizations of this space. Hunt and Cwikel studied Lorentz $L_{p,q}$ spaces and showed some basic properties such as the characterization of the ...
• (1974)
• (1970)
• (2008)
Instead of focusing on the application, we are also interested in more theoretical setting. We discuss the conditions on density of the set of invertible (resp.: noninvertible) N x P matrices. Lastly we study the generalized ...
• (2008)
For free random variables, we obtain an even stronger version of the Law of the Iterated Logarithm: $\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}x_i\le\ldots$
• (1971)
• (1989)
This thesis describes a new class of Intelligent Tutoring Systems (ITS) which I call the Learning Companion Systems (LCS). In the learning environment of such a system, there are three agents involved, namely, the human ...
• (1992)
This research addresses algorithmic approaches for solving two different, but related, types of optimization problems. Firstly, the research considers the solution of a specific type of assignment problem using continuous ...
• (1999)
The thesis obtains the Lefschetz-Reidemeister trace of a self-map as the image under a map of spaces whose domain is the K-theory of a ring with a bimodule and whose range is the Hochschild homology of the ring with the ...
• (1973)
• (1994)
Finite difference techniques are used to solve a variety of differential equations. For the neutron diffusion equation, the typical local truncation error for standard finite difference approximation is on the order of the ...
• (1980)
A central extension of finite groups $e\colon 0 \to A \to E \to G \to 1$ is said to be a stem extension of $G$ if $A$ is contained in the commutator subgroup $E'$ of $E$. Schur showed that $A$ must be isomorphic to ...
• (1975)
• (1982)
In Chapter I we improve upon results on the almost sure approximation of the empirical process of weakly dependent random vectors, recently obtained by Berkes and Philipp and Philipp and Pinzur. For strongly mixing sequences ...
• (1962)
• (1954)
• (1993)
The topic of my thesis is the geometry of projective homogeneous spaces $G/H$ for a semisimple algebraic group $G$ in characteristic $p > 0$, where $H$ is a subgroup scheme containing a Borel subgroup $B$. In characteristic $p$ ...
• (1967)
https://www.khanacademy.org/math/precalculus/x9e81a4f98389efdf:rational-functions/x9e81a4f98389efdf:adding-and-subtracting-rational-expressions/v/adding-and-subtracting-rational-expressions-with-like-denominators | If you're seeing this message, it means we're having trouble loading external resources on our website.
If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked.
# Adding & subtracting rational expressions: like denominators
## Video transcript
- [Voiceover] So let's add six over two X squared minus seven to negative 3 X minus eight over two X squared minus seven. And like always, pause the video and try to work it out before I do. When you look at this, we have these two rational expressions and we have the same denominator, two X squared minus seven. So you could say, we have six two X squared minus sevenths and then we have negative three X minus eight two X squared minus sevenths is one way to think about it. So if you have the same denominator, this is going to be equal to, this is going to be equal to... our denominator is going to be two X squared minus seven, two X squared minus seven, and then we just add the numerators. So it's going to be six plus negative three X, negative three X minus eight. So if we want to simplify this a little bit, we'd recognize that we can add these two constant terms, the six and the negative eight. Six plus negative eight is going to be negative two, so it's going to be negative two and then adding a negative three X, that's the same thing as subtracting three X, so negative two minus three X, all of that over, all of that with that same blue color, all of that over two X squared minus seven. And we're done. We've just added these two rational expressions. Let's do another example. So here, we want to subtract one rational expression from another. So see if you can figure that out. Well, once again, both of these rational expressions have the exact same denominator, the denominator for both of them is 14 X squared minus nine, 14 X squared minus nine. So the denominator of the difference, I guess we can call it that, is going to be 14 X squared minus nine. So 14 X squared minus nine. Did I say four X squared before? 14 X squared minus nine, that's the denominator of both of them, so that's going to be the denominator of our answer right over here. And so, we can just subtract the numerators. 
So we're gonna have nine X squared plus three minus all of this business, minus negative three X squared plus five. And so we can distribute the negative sign. This is going to be equal to nine X squared plus three, and then, if you distribute the negative sign, the negative of negative three X squared is going to be plus three X squared and then the negative of positive five is going to be negative five, so we're gonna subtract five from that, and all of that is going to be over 14 X squared minus nine. 14 X squared minus nine. And so in the numerator we can do some simplification. We have nine X squared plus three X squared, so that's going to be equal to 12 X squared. And then, we have... we have three plus negative five, or we can say three minus five, so that's going to be negative two, and all of that is going to be over 14 X squared minus nine. 14 X squared minus nine. And we're all done. We have just subtracted. And we can think about it, is there any way we can simplify this more, are there any common factors, but these both could be considered differences of squares, but they're going to be differences of squares of different things, so they're not going to have common factors. So this is about as simple as we can get. 
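A quick numeric spot check of the transcript's two results (my own sketch, not part of the video): each simplified form must agree with the original sum or difference at every point where the shared denominator is nonzero.

```python
# First example: 6/(2x^2-7) + (-3x-8)/(2x^2-7) = (-2-3x)/(2x^2-7)
def lhs1(x): return 6 / (2*x**2 - 7) + (-3*x - 8) / (2*x**2 - 7)
def rhs1(x): return (-2 - 3*x) / (2*x**2 - 7)

# Second example: (9x^2+3)/(14x^2-9) - (-3x^2+5)/(14x^2-9) = (12x^2-2)/(14x^2-9)
def lhs2(x): return (9*x**2 + 3) / (14*x**2 - 9) - (-3*x**2 + 5) / (14*x**2 - 9)
def rhs2(x): return (12*x**2 - 2) / (14*x**2 - 9)

for x in (0.0, 1.0, 2.5, -3.0, 10.0):   # chosen to avoid denominator zeros
    assert abs(lhs1(x) - rhs1(x)) < 1e-12
    assert abs(lhs2(x) - rhs2(x)) < 1e-12
```

Agreement at a handful of sample points is strong evidence the algebra was done correctly, since two different rational functions can only coincide at finitely many points.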
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9337431192398071, "perplexity": 425.0721633078855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711064.71/warc/CC-MAIN-20221205232822-20221206022822-00511.warc.gz"} |
http://physics.stackexchange.com/questions/25131/how-many-stars-are-in-the-milky-way-galaxy-and-how-can-we-determine-this/25132 | # How many stars are in the Milky Way galaxy, and how can we determine this?
I have heard multiple estimates on the quantity of stars within our galaxy, anything from 100 to 400 billion of them. The estimates seem to be increasing for the time being. What are the main methods that are used to make these estimates, and why are there such large discrepancies between them?
The estimates I've read are similar to yours: 200 to 400 billion stars. Counting the stars in the galaxy is inherently difficult because, well, we can't see all of them.
We don't really count the stars, though. That would take ages: instead we measure the orbit of the stars we can see. By doing this, we find the angular velocity of the stars and can determine the mass of the Milky Way.
But the mass isn't all stars. It's also dust, gas, planets, Volvos, and most overwhelmingly: dark matter. By observing the angular momentum and density of stars in other galaxies, we can estimate just how much of our own galaxy's mass is dark matter. That number is close to 90%. So we subtract that away from the mass, and the rest is stars (other objects are more-or-less insignificant at this level).
The mass alone doesn't give us a count though. We have to know about how much each star weighs, and that varies a lot. So we have to class different types of stars, and figure out how many of each are around us. We can extrapolate that number and turn the mass into the number of stars.
Obviously, there's a lot of error in this method: it's hard to measure the orbit of stars around the galactic center because they move really, really slowly. So we don't know exactly how much the Milky Way weighs, and figuring out how much of that is dark matter is even worse. We can't even see dark matter, and we don't really understand it either. Extrapolating the concentrations of different classes of stars is inexact, and at best we can look at other galaxies to confirm that the far side of the Milky Way is probably the same as this one. Multiply all those inaccuracies together and you get a range on the order of 200 billion.
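To make the mass-to-count step concrete, here is the arithmetic as a tiny script. Every number in it is an assumed round figure chosen to sit inside the ranges quoted in this thread, not a measurement:

```python
# Back-of-envelope star count from the recipe above (illustrative only).
M_total = 1.0e12        # dynamical mass of the Milky Way, in solar masses
dark_fraction = 0.90    # share of that mass attributed to dark matter
mean_star_mass = 0.5    # rough average mass per star, in solar masses

stellar_mass = M_total * (1 - dark_fraction)   # ~1e11 M_sun left in stars
n_stars = stellar_mass / mean_star_mass        # ~2e11 stars

print(f"roughly {n_stars:.0e} stars")
```

With these inputs the estimate lands at about 200 billion, inside the 100-400 billion range, and nudging any one assumption by a factor of two moves the answer across most of that range, which is exactly why the published estimates disagree so widely.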
Good answer. In short: we look at a small sample of the Milky Way near us and figure out how many stars it has and how much it weighs, then we figure out how much the whole galaxy weighs, and we estimate from there. – Wedge Jun 5 '11 at 10:15
The estimate of the dark matter contribution is only necessary in so far that you need a model for Galactic potential in order to come up with a self consistent model for the density distribution of the stars. – Rob Jeffries Dec 3 '14 at 10:53
I've added this because I don't think the accepted answer is very clear.
Estimating the number of stars in the Galaxy relies mostly on two things.
1. We estimate the present day mass function (that is the number of stars that exist per unit mass per unit volume) in the solar neighbourhood.
2. We construct a model for the overall density distribution of the stars in our Galaxy and assume that this is governed by the same mass function.
The presence or not of dark matter is almost totally irrelevant, except that it can be of help in constructing self-consistent density models of the Galaxy that match the dynamics of stars. What is much more helpful is very detailed censuses of stars carried out in different directions that have good estimates for the distances of the stars that are counted and can take account of the ubiquitous obscuration by dust that is a problem in almost every direction except straight out of the Galaxy plane. A further problem is that it turns out that the stars that dominate the Galactic population are very faint stars of about $0.25M_{\odot}$. These cannot be seen beyond a few hundred pc, so estimates of stellar numbers are a vast extrapolation based on what we observe in a very small volume around the Sun. We are crucially reliant on the assumption that the low-mass stellar mass function is invariant - and this is very hard to test.
https://searxiv.org/search?author=Regina%20Caputo | Results for "Regina Caputo"
total 565, took 0.12s
Cosmic Rays and Interstellar Medium with Gamma-Ray Observations at MeV EnergiesMar 13 2019Latest precise cosmic-ray (CR) measurements and present gamma-ray observations have started challenging our understanding of CR transport and interaction in the Galaxy. Moreover, because the density of CRs is similar to the density of the magnetic field, ... More
Minimum Bias Triggers at ATLAS, LHCDec 02 2008In the first phase of LHC data-taking ATLAS will measure the charged-particle density at the initial center-of-mass energy of 10 TeV and then at 14 TeV. This will allow us improve our knowledge of soft QCD models and pin-down cross-sections of different ... More
Supersymmetry searches at the TevatronJun 30 2004CDF and D0 collaborations analyzed up to 200 pb-1 of the delivered data in search for different supersymmetry signatures, so far with negative results. We present results on searches for chargino and neutralino associated production, squarks and gluinos, ... More
Uniform Poincare inequalities for unbounded conservative spin systems: The non-interacting caseFeb 04 2002Mar 17 2003We prove a uniform Poincare' inequality for non-interacting unbounded spin systems with a conservation law, when the single-site potential is a bounded perturbation of a convex function. The result is then applied to Ginzburg-Landau processes to show ... More
Radiative Axion InflationFeb 07 2019Planck data robustly exclude the simple $\lambda\phi^4$ scenario for inflation. This is also the case for models of Axion Inflation in which the inflaton field is the radial part of the Peccei-Quinn complex scalar field. In this letter we show that for ... More
The Brauer-Kuroda formula for higher S-class numbers in dihedral extensions of number fieldsApr 17 2010Apr 20 2010Let p be an odd prime and let L/k be a Galois extension of number fields whose Galois group is isomorphic to the dihedral group of order 2p. Let S be a finite set of primes of L which is stable under the action of Gal(L/k). The Lichtenbaum conjecture ... More
Blue-blocking spectacles lenses for retinal damage protection and circadian rhythm: evaluation parametersJun 11 2018There is evidence for the effect of blue light on circadian cycle and ocular pathologies. Moreover, the introduction of LED lamps has increased the presence of blue light. In the last two years, many different blue blocking ophthalmic lenses have been ... More
A human centered perspective of E-maintenanceJul 10 2014E-maintenance is a technology aiming to organize and structure the ICT during the whole life cycle of the product, to develop a maintenance support system that is effective and efficient. A current challenge of E-maintenance is the development of generic ... More
Decaying hadrons within constituent-quark modelsJun 19 2012Within conventional constituent-quark models hadrons come out as stable bound states of the valence (anti)quarks. Thereby the resonance character of hadronic excitations is completely ignored. A more realistic description of hadron spectra can be achieved ... More
Geometrical phases for the G(4,2) Grassmannian manifoldJan 24 2003Mar 13 2003We generalize the usual abelian Berry phase generated for example in a system with two non-degenerate states to the case of a system with two doubly degenerate energy eigenspaces. The parametric manifold describing the space of states of the first case ... More
Bootstrapping Lexical Choice via Multiple-Sequence AlignmentMay 25 2002An important component of any generation system is the mapping dictionary, a lexicon of elementary semantic expressions and corresponding natural language realizations. Typically, labor-intensive knowledge-based methods are used to construct the dictionary. ... More
Catching the Drift: Probabilistic Content Models, with Applications to Generation and SummarizationMay 12 2004We consider the problem of modeling the content structure of texts within a specific domain, in terms of the topics the texts address and the order in which these topics appear. We first present an effective knowledge-lean method for learning content ... More
Search for gamma-ray emission from $p$-wave dark matter annihilation in the Galactic CenterApr 12 2019Indirect searches for dark matter through Standard Model products of its annihilation generally assume a cross-section which is dominated by a term independent of velocity ($s$-wave annihilation). However, in many DM models an $s$-wave annihilation cross-section ... More
New meteor showers identified in the CAMS and SonotaCo meteoroid orbit surveysMay 07 2014A cluster analysis was applied to the combined meteoroid orbit database derived from low-light level video observations by the SonotaCo consortium in Japan (64,650 meteors observed between 2007 and 2009) and by the Cameras for All-sky Meteor Surveillance ... More
Mixing time of PageRank surfers on sparse random digraphsMay 13 2019Given a digraph $G$, a parameter $\alpha\in(0,1)$ and a distribution $\lambda$ over the vertices of $G$, the generalised PageRank surf on $G$ with parameters $\alpha$ and $\lambda$ is the Markov chain on the vertices of $G$ such that at each step with ... More
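The truncated entry above appears to describe the standard generalised-PageRank chain; the sketch below is a hedged illustration (the exact convention for $\alpha$ is an assumption, since the definition is cut off): from each vertex, with probability alpha the surfer restarts at a vertex drawn from lambda, and otherwise moves to a uniformly chosen out-neighbour.

```python
import numpy as np

def pagerank_matrix(adj, alpha, lam):
    """Transition matrix of a generalised-PageRank surf on a digraph.

    Assumes every vertex has at least one out-edge, so the uniform
    out-neighbour step is well defined.
    """
    out_deg = adj.sum(axis=1, keepdims=True)
    P = adj / out_deg                       # uniform walk on out-edges
    restart = np.outer(np.ones(len(lam)), lam)  # every row equals lambda
    return alpha * restart + (1 - alpha) * P

# Tiny 3-vertex directed cycle as an illustrative digraph.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
lam = np.array([0.5, 0.25, 0.25])
M = pagerank_matrix(A, alpha=0.2, lam=lam)

# Each row of M is a probability vector, so M is a Markov matrix.
assert np.allclose(M.sum(axis=1), 1.0)
```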
A Note on Wetting Transition for Gradient FieldsSep 10 1999We prove existence of a wetting transition for two types of gradient fields: 1) Continuous SOS models in any dimension and 2) Massless Gaussian model in two dimensions. Combined with a recent result showing the absence of such a transition for Gaussian ... More
An explicit candidate for the set of Steinitz classes of tame Galois extensions with fixed Galois group of odd orderNov 08 2011Mar 04 2012Given a finite group G and a number field k, a well-known conjecture asserts that the set R_t(k,G) of Steinitz classes of tame G-Galois extensions of k is a subgroup of the ideal class group of k. In this paper we investigate an explicit candidate for ... More
Diffusivity in one-dimensional generalized Mott variable-range hopping modelsJan 09 2007Aug 31 2009We consider random walks in a random environment which are generalized versions of well-known effective models for Mott variable-range hopping. We study the homogenized diffusion constant of the random walk in the one-dimensional case. We prove various ... More
Global metallicity of globular cluster stars from colour-magnitude diagramsFeb 27 2002We have developed a homogeneous evolutionary scenario for H- and He-burning low-mass stars by computing updated stellar models for a wide metallicity and age range ($0.0002 \le Z \le 0.004$ and $9 \le t(\mathrm{Gyr}) \le 15$, respectively) suitable to study globular ... More
Asymmetric diffusion and the energy gap above the 111 ground state of the quantum XXZ modelJun 26 2001We consider the anisotropic three dimensional XXZ Heisenberg ferromagnet in a cylinder with axis along the 111 direction and boundary conditions that induce ground states describing an interface orthogonal to the cylinder axis. Let $L$ be the linear size ... More
Towards Learning free Naive Bayes Nearest Neighbor-based Domain AdaptationMar 26 2015As of today, object categorization algorithms are not able to achieve the level of robustness and generality necessary to work reliably in the real world. Even the most powerful convolutional neural network we can train fails to perform satisfactorily ... More
Time scales: from Nabla calculus to Delta calculus and vice versa via dualityOct 01 2009Jan 17 2010In this note we show how one can obtain results from the nabla calculus from results on the delta calculus and vice versa via a duality argument. We provide applications of the main results to the calculus of variations on time scales.
Phase ordering after a deep quench: the stochastic Ising and hard core gas models on a treeDec 22 2004Consider a low temperature stochastic Ising model in the phase coexistence regime with Markov semigroup $P_t$. A fundamental and still largely open problem is the understanding of the long time behavior of $\delta_\eta P_t$ when the initial configuration $\eta$ ... More
Entropy dissipation estimates in a Zero-Range dynamicsMay 24 2004Sep 26 2006We study the exponential decay of relative entropy functionals for zero-range processes on the complete graph. For the standard model with rates increasing at infinity we prove entropy dissipation estimates, uniformly over the number of particles and ... More
Gauss sums, Jacobi sums and cyclotomic units related to torsion Galois modulesFeb 16 2014Let $G$ be a finite group and let $N/E$ be a tamely ramified $G$-Galois extension of number fields. We show how Stickelberger's factorization of Gauss sums can be used to determine the stable isomorphism class of various arithmetic $\mathbb{Z}[G]$-modules ... More
Entropy production in nonlinear recombination modelsSep 22 2016We study the convergence to equilibrium of a class of nonlinear recombination models. In analogy with Boltzmann's H theorem from kinetic theory, and in contrast with previous analysis of these models, convergence is measured in terms of relative entropy. ... More
A large deviation principle for Wigner matrices without Gaussian tailsJul 24 2012Oct 28 2014We consider $n\times n$ Hermitian matrices with i.i.d. entries $X_{ij}$ whose tail probabilities $\mathbb {P}(|X_{ij}|\geq t)$ behave like $e^{-at^{\alpha}}$ for some $a>0$ and $\alpha \in(0,2)$. We establish a large deviation principle for the empirical ... More
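The matrix ensemble in the entry above can be sampled directly; the sketch below is illustrative only (the parameter values are arbitrary choices). An entry magnitude with tail $\mathbb{P}(|X|\geq t)=e^{-at^\alpha}$ is obtained by inverse-transform sampling from an exponential variable.

```python
import numpy as np

rng = np.random.default_rng(2)
n, a, alpha = 300, 1.0, 1.0       # alpha in (0, 2), as in the abstract

# |X| with tail P(|X| >= t) = exp(-a t^alpha): if E ~ Exp(1) then
# (E / a)^(1/alpha) has exactly this tail.
mag = (rng.exponential(1.0, size=(n, n)) / a) ** (1.0 / alpha)
sign = rng.choice([-1.0, 1.0], size=(n, n))
X = mag * sign

# Symmetrize to obtain a real Wigner-type matrix.
H = np.triu(X) + np.triu(X, 1).T

# Empirical eigenvalues with the usual sqrt(n) Wigner scaling.
eigs = np.linalg.eigvalsh(H) / np.sqrt(n)
```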
On fake Z_p extensions of number fieldsJul 07 2008May 10 2009After providing a general result for dihedral extensions, we study the growth of the $p$-part of the class group of the non-normal subfields of the anticyclotomic extension of an imaginary quadratic field, providing a formula of Iwasawa type. Furthermore, ... More
Relaxation time of anisotropic simple exclusion processes and quantum Heisenberg modelsFeb 04 2002Motivated by an exact mapping between anisotropic half integer spin quantum Heisenberg models and asymmetric diffusions on the lattice, we consider an anisotropic simple exclusion process with $N$ particles in a rectangle of $\mathbb{Z}^2$. Every particle at ... More
Preliminary Results of the Fermi High-Latitude Extended Source CatalogSep 19 2017We report on preliminary results from the Fermi High-Latitude Extended Sources Catalog (FHES), a comprehensive search for spatially extended gamma-ray sources at high Galactic latitudes ($|b|>5^\circ$) based on data from the Fermi Large Area Telescope ... More
Isoperimetric inequalities and mixing time for a random walk on a random point processJul 31 2006Oct 31 2007We consider the random walk on a simple point process on $\Bbb{R}^d$, $d\geq2$, whose jump rates decay exponentially in the $\alpha$-power of jump length. The case $\alpha =1$ corresponds to the phonon-induced variable-range hopping in disordered solids ... More
Multivariate spacings based on data depth: I. Construction of nonparametric multivariate tolerance regionsJun 18 2008This paper introduces and studies multivariate spacings. The spacings are developed using the order statistics derived from data depth. Specifically, the spacing between two consecutive order statistics is the region which bridges the two order statistics, ... More
Automatic Aggregation by Joint Modeling of Aspects and ValuesJan 23 2014We present a model for aggregation of product review snippets by joint aspect identification and sentiment analysis. Our model simultaneously identifies an underlying set of ratable aspects presented in the reviews of a product (e.g., sushi and miso for ... More
Learning to Paraphrase: An Unsupervised Approach Using Multiple-Sequence AlignmentApr 02 2003We address the text-to-text generation problem of sentence-level paraphrasing -- a phenomenon distinct from and more difficult than word- or phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated ... More
Monotone homotopies and contracting discs on Riemannian surfacesNov 13 2013Oct 05 2016We prove a "gluing" theorem for monotone homotopies; a monotone homotopy is a homotopy through simple contractible closed curves which themselves are pairwise disjoint. We show that two monotone homotopies which have appropriate overlap can be replaced ... More
Fermipy: An open-source Python package for analysis of Fermi-LAT DataJul 29 2017Fermipy is an open-source python framework that facilitates analysis of data collected by the Fermi Large Area Telescope (LAT). Fermipy is built on the Fermi Science Tools, the publicly available software suite provided by NASA for the LAT mission. Fermipy ... More
Looking Under a Better Lamppost: MeV-scale Dark Matter CandidatesMar 14 2019The era of precision cosmology has revealed that about 85% of the matter in the universe is dark matter. Two well-motivated candidates are weakly interacting massive particles (WIMPs) and weakly interacting sub-eV particles (WISPs) (e.g. axions). Both ... More
Positron Annihilation in the GalaxyMar 13 2019The 511 keV line from positron annihilation in the Galaxy was the first $\gamma$-ray line detected to originate from outside our solar system. Going into the fifth decade since the discovery, the source of positrons is still unconfirmed and remains one ... More
Dynamics of point Josephson junctions in a microstrip lineApr 03 2010We analyze a new long wave model describing the electrodynamics of an array of point Josephson junctions in a superconducting cavity. It consists in a wave equation with Dirac delta function sine nonlinearities. We introduce an adapted spectral problem ... More
Unidirectional Propagation of an Ultra-Short Electromagnetic Pulse in a Resonant Medium with High Frequency Stark ShiftJul 18 2001We consider in the unidirectional approximation the propagation of an ultra short electromagnetic pulse in a resonant medium consisting of molecules characterized by a transition operator with both diagonal and non-diagonal matrix elements. We find the ... More
Dynamics of point Josephson junctions in a microstrip lineJan 27 2005We model the dynamics of point Josephson junctions in a 1D microstrip line using a wave equation with delta distributed sine nonlinearities. The model is suitable for both low T$_c$ and high T$_c$ systems (0 and $\pi$ junctions). For a single junction ... More
Statics of point Josephson junctions in a micro strip lineJul 11 2006We model the static behavior of point Josephson junctions in a micro strip line using a 1D linear differential equation with delta distributed sine non-linearities. We analyze the maximum current $\gamma_{max}$ crossing the micro strip for a given magnetic ... More
Influence of the passive region on Zero Field Steps for window Josephson junctionsMar 31 2002We present a numerical and analytic study of the influence of the passive region on fluxon dynamics in a window junction. We examine the effect of the extension of the passive region and its electromagnetic characteristics, its surface inductance and ... More
Reaction-diffusion front crossing a local defectJun 14 2011The interaction of a Zeldovich reaction-diffusion front with a localized defect is studied numerically and analytically. For the analysis, we start from conservation laws and develop simple collective variable ordinary differential equations for the front ... More
Discrete sine-Gordon dynamics on networksJun 08 2015In this study we consider the sine-Gordon equation formulated on domains which are not locally homeomorphic to any subset of the Euclidean space. More precisely, we formulate the discrete dynamics on trees and graphs. Each edge is assumed to be a 1D uniform ... More
Highly Degenerate Harmonic Mean Curvature FlowApr 24 2008We study the evolution of a weakly convex surface $\Sigma_0$ in $\R^3$ with flat sides by the Harmonic Mean Curvature flow. We establish the short time existence as well as the optimal regularity of the surface and we show that the boundaries of the flat ... More
Search for Gamma-ray Emission from Dark Matter Annihilation in the Small Magellanic Cloud with the Fermi Large Area TelescopeMar 03 2016Mar 29 2016The Small Magellanic Cloud (SMC) is the second-largest satellite galaxy of the Milky Way and is only 60 kpc away. As a nearby, massive, and dense object with relatively low astrophysical backgrounds, it is a natural target for dark matter indirect detection ... More
Synchronization in fiber lasers arraysApr 25 2013We consider an array of fiber lasers coupled through the nearest neighbors. The model is a generalized nonlinear Schroedinger equation where the usual Laplacian is replaced by the graph Laplacian. For a graph with no symmetries, we show that there is ... More
Designing arrays of Josephson junctions for specific static responsesMar 12 2007We consider the inverse problem of designing an array of superconducting Josephson junctions that has a given maximum static current pattern as function of the applied magnetic field. Such devices are used for magnetometry and as Terahertz oscillators. ... More
HB Morphology and Age Indicators for Metal-Poor Stellar Systems with Age in the Range of 1 to 20 GyrJul 20 1994Isochrone computations and horizontal branch (HB) models for Y(MS)=0.23 and two values of Z (0.0001, 0.0004) are used to derive constraints on the age indicators and HB morphology of metal-poor clusters with age t (in Gyr) in the range $1 < t < 20$. It ... More
A simple theory for the Raman spikeSep 11 2003The classical stimulated Raman scattering system describing resonant interaction between two electromagnetic waves and a fast relaxing medium wave is studied by introducting a systematic perturbation approach in powers of the relaxation time. We separate ... More
Global regularity of solutions to systems of reaction-diffusion with Sub-Quadratic Growth in any dimensionJan 28 2009This paper is devoted to the study of the regularity of solutions to some systems of reaction-diffusion equations, with reaction terms having a subquadratic growth. We show the global boundedness and regularity of solutions, without smallness assumptions, ... More
Nonlinear waves in networks: a simple approach using the sine-Gordon equationFeb 26 2014To study the propagation of nonlinear waves across Y- and T-type junctions, we consider the 2D sine-Gordon equation as a model and study the dynamics of kinks and breathers in such geometries. The comparison of the energies reveals that the angle of ... More
Regularity for non-local almost minimal boundaries and applicationsMar 12 2010Jun 09 2011We introduce a notion of non-local almost minimal boundaries similar to that introduced by Almgren in geometric measure theory. Extending methods developed recently for non-local minimal surfaces we prove that flat non-local almost minimal boundaries ... More
Domain Generalization with Domain-Specific Aggregation ModulesSep 28 2018Visual recognition systems are meant to work in the real world. For this to happen, they must work robustly in any visual domain, and not only on the data used during training. Within this context, a very realistic scenario deals with domain generalization, ... More
Growth-Driven Percolations: The Dynamics of Community Formation in Neuronal SystemsNov 02 2004The quintessential property of neuronal systems is their intensive patterns of selective synaptic connections. The current work describes a physics-based approach to neuronal shape modeling and synthesis and its consideration for the simulation of neuronal ... More
Contracting the boundary of a Riemannian 2-discMay 24 2012Dec 03 2014Let $D$ be a Riemannian 2-disc of area $A$, diameter $d$ and length of the boundary $L$. We prove that it is possible to contract the boundary of $D$ through curves of length $\leq L + 200d\max\{1, \ln(\sqrt{A}/d)\}$. This answers a twenty-year ... More
A projection algorithm for non-monotone variational inequalitiesSep 30 2016We introduce and study the convergence properties of a projection-type algorithm for solving the variational inequality problem for point-to-set operators. No monotonicity assumption is used in our analysis. The operator defining the problem is only ... More
Spatially Resolved Emission of a High Redshift DLA Galaxy with the Keck/OSIRIS IFUOct 31 2013We present the first Keck/OSIRIS infrared IFU observations of a high redshift damped Lyman-alpha (DLA) galaxy detected in the line of sight to a background quasar. By utilizing the Laser Guide Star Adaptive Optics (LGSAO) to reduce the quasar PSF to FWHM~0.15 ... More
Rationalizing Neural PredictionsJun 13 2016Prediction without justification has limited applicability. As a remedy, we learn to extract pieces of input text as justifications -- rationales -- that are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach ... More
An Unsupervised Method for Uncovering Morphological ChainsMar 08 2015Most state-of-the-art systems today produce morphological analysis based only on orthographic patterns. In contrast, we propose a model for unsupervised morphological analysis that integrates orthographic and semantic views of words. We model word formation ... More
Reasoning for ALCQ extended with a flexible meta-modelling hierarchyOct 29 2014This work is motivated by a real-world case study where it is necessary to integrate and relate existing ontologies through meta-modelling. For this, we introduce the Description Logic ALCQM which is obtained from ALCQ by adding statements that equate ... More
STM-induced surface aggregates on metals and oxidized siliconAug 27 2012We have observed an aggregation of carbon or carbon derivatives on platinum and natively oxidized silicon surfaces during STM measurements in ultra-high vacuum on solvent-cleaned samples previously structured by e-beam lithography. We have imaged the ... More
How can the Odderon be detected at RHIC and LHCJul 08 2006Dec 21 2006The Odderon remains an elusive object, 33 years after its invention. The Odderon is now a fundamental object in QCD and CGC and it has to be found experimentally if QCD and CGC are right. In the present paper, we show how to find it at RHIC and LHC. The ... More
Modeling heterogeneity in ranked responses by nonparametric maximum likelihood: How do Europeans get their scientific knowledge?Jan 07 2011This paper is motivated by a Eurobarometer survey on science knowledge. As part of the survey, respondents were asked to rank sources of science information in order of importance. The official statistical analysis of these data however failed to use ... More
Supersymmetric variational energies for the confined Coulomb systemFeb 26 2002Jul 05 2002The methodology based on the association of the Variational Method with Supersymmetric Quantum Mechanics is used to evaluate the energy states of the confined hydrogen atom.
Ladder operators for subtle hidden shape invariant potentialsMay 03 2004Ladder operators can be constructed for all potentials that present the integrability condition known as shape invariance, satisfied by most of the exactly solvable potentials. Using the superalgebra of supersymmetric quantum mechanics we construct the ... More
Quantum Computation with Vibrationally Excited MoleculesAug 05 2002A new physical implementation for quantum computation is proposed. The vibrational modes of molecules are used to encode qubit systems. Global quantum logic gates are realized using shaped femtosecond laser pulses which are calculated applying optimal ... More
Heralded orthogonalisation of coherent states and their conversion to discrete-variable superpositionsFeb 01 2017The nonorthogonality of coherent states is a fundamental property which prevents them from being perfectly and deterministically discriminated. To circumvent this problem, we present an experimentally feasible protocol for the probabilistic orthogonalisation ... More
HI Observations of the Ca II absorbing galaxies Mrk 1456 and SDSS J211701.26-002633.7Sep 30 2009In an effort to study Damped Lyman Alpha galaxies at low redshift, we have been using the Sloan Digital Sky Survey to identify galaxies projected onto QSO sightlines and to characterize their optical properties. For low redshift galaxies, the HI 21cm ... More
Monotone Operators without EnlargementsOct 14 2011Enlargements have proven to be useful tools for studying maximally monotone mappings. It is therefore natural to ask in which cases the enlargement does not change the original mapping. Svaiter has recently characterized non-enlargeable operators in reflexive ... More
Rationalizing Neural PredictionsJun 13 2016Nov 02 2016Prediction without justification has limited applicability. As a remedy, we learn to extract pieces of input text as justifications -- rationales -- that are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach ... More
Aspect-augmented Adversarial Networks for Domain AdaptationJan 01 2017Sep 25 2017We introduce a neural method for transfer learning between two (source and target) classification tasks or aspects over the same domain. Rather than training on target labels, we use a few keywords pertaining to source and target aspects indicating sentence ... More
The most metal-poor damped Lyman alpha systems: An insight into dwarf galaxies at high redshiftJun 26 2014Jul 31 2015In this paper we analyze the kinematics, chemistry, and physical properties of a sample of the most metal-poor damped Lyman-alpha systems (DLAs), to uncover their links to modern-day galaxies. We present evidence that the DLA population as a whole exhibits ... More
A new exactly solvable Eckart-type potentialApr 30 2001A new exact analytically solvable Eckart-type potential is presented, a generalisation of the Hulthen potential. The study through Supersymmetric Quantum Mechanics is presented together with the hierarchy of Hamiltonians and the shape invariance property. ... More
The stellar content of 10 dwarf irregular galaxiesSep 15 1998We examine the stellar content of 10 dwarf irregular galaxies of which broad-band CCD photometry was published in Hopp & Schulte-Ladbeck (1995). We also present Halpha images for several of these galaxies. The galaxies in the sample are located outside ... More
Junction Tree Variational Autoencoder for Molecular Graph GenerationFeb 12 2018Mar 29 2019We seek to automate the design of molecules based on specific chemical properties. In computational terms, this task involves continuous embedding and generation of molecular graphs. Our primary contribution is the direct realization of molecular graphs, ... More
Cutting-Decimation Renormalization for diffusive and vibrational dynamics on fractalsOct 24 1997Recently, we pointed out that on a class of non exactly decimable fractals two different parameters are required to describe diffusive and vibrational dynamics. This phenomenon we call dynamical dimension splitting is related to the lack of exact decimation ... More
The window Josephson junction: a coupled linear nonlinear systemJun 05 2001We investigate the interface coupling between the 2D sine-Gordon equation and the 2D wave equation in the context of a Josephson window junction using a finite volume numerical method and soliton perturbation theory. The geometry of the domain as well ... More
RR Lyrae stars in Galactic globular clusters. III. Pulsational predictions for metal content Z=0.0001 to Z=0.006May 21 2004The results of nonlinear, convective models of RR Lyrae pulsators with metal content Z=0.0001 to 0.006 are discussed and several predicted relations connecting pulsational (period and amplitude of pulsation) and evolutionary parameters (mass, absolute ... More
Theoretical models for classical cepheids: V. Multiwavelength relationsNov 23 1999From a theoretical study based on nonlinear, nonlocal and time-dependent convective pulsating models at varying mass and chemical composition, we present the predicted Period-Luminosity, Period-Color, Color-Color and Period-Luminosity-Color relations ... More
Spectrum of large random reversible Markov chains: two examplesNov 07 2008May 31 2010We take on a Random Matrix theory viewpoint to study the spectrum of certain reversible Markov chains in random environment. As the number of states tends to infinity, we consider the global behavior of the spectrum, and the local behavior at the edge, ... More
Circular Law Theorem for Random Markov MatricesAug 11 2008Jun 09 2010Consider an $n \times n$ random matrix X with i.i.d. nonnegative entries with bounded density, mean $m$, and finite positive variance $\sigma^2$. Let M be the $n \times n$ random Markov matrix with i.i.d. rows obtained from X by dividing each row of X by its sum. In particular, ... More
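The row-normalisation construction summarized in the entry above is easy to reproduce; a minimal sketch (matrix size and the uniform entry distribution are illustrative choices satisfying the stated hypotheses of bounded density and finite variance):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# X: i.i.d. nonnegative entries with bounded density and finite variance.
X = rng.uniform(0.0, 1.0, size=(n, n))

# M: random Markov matrix obtained by dividing each row of X by its sum.
M = X / X.sum(axis=1, keepdims=True)

# Each row of M is a probability vector, so M is stochastic.
assert np.allclose(M.sum(axis=1), 1.0)

# The circular-law statement concerns the (rescaled) spectrum of M; here
# we only verify the trivial Perron eigenvalue 1 of a stochastic matrix.
eigvals = np.linalg.eigvals(M)
assert np.isclose(np.max(np.abs(eigvals)), 1.0)
```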
Oscillations of simple networksSep 14 2011Oct 24 2012To describe the flow of a miscible quantity on a network, we introduce the graph wave equation where the standard continuous Laplacian is replaced by the graph Laplacian. This is a natural description of an array of inductances and capacities, of fluid ... More
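The graph wave equation mentioned above (the continuous Laplacian of the wave equation replaced by the graph Laplacian) can be sketched numerically; the node count, time step, and integrator below are illustrative choices, not from the paper.

```python
import numpy as np

def graph_laplacian(adj):
    """Graph Laplacian L = D - A for a symmetric adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

# A 4-node path graph as the network.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = graph_laplacian(A)

# Semi-implicit Euler time stepping of u'' = -L u from a localized bump.
u = np.array([1.0, 0.0, 0.0, 0.0])
v = np.zeros(4)          # nodal velocities
dt = 0.01
for _ in range(1000):
    v -= dt * (L @ u)
    u += dt * v

# L has zero row sums, so the mean of u (the total miscible quantity
# flowing on the network) is conserved by the dynamics.
assert np.isclose(u.mean(), 0.25)
```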
Triple Scoring Using Paragraph Vector - The Gailan Triple Scorer at WSDM Cup 2017Dec 22 2017In this paper we describe our solution to the WSDM Cup 2017 Triple Scoring task. Our approach generates a relevance score based on the textual description of the triple's subject and value (Object). It measures how similar (related) the text description ... More
A Complete Method of Comparative Statics for Optimization Problems (Unabbreviated Version)Oct 27 2013A new method of deriving comparative statics information using generalized compensated derivatives is presented which yields constraint-free semidefiniteness results for any differentiable, constrained optimization problem. More generally, it applies ... More
Inverse source problem in a forced networkMar 13 2018Sep 17 2018We address the nonlinear inverse source problem of identifying a time-dependent source occurring in one node of a network governed by a wave equation. We prove that time records of the associated state taken at a strategic set of two nodes yield uniqueness ... More
Random walk on sparse random digraphsAug 26 2015Jan 22 2018A finite ergodic Markov chain exhibits cutoff if its distance to equilibrium remains close to its initial value over a certain number of iterations and then abruptly drops to near 0 on a much shorter time scale. Originally discovered in the context of ... More
Star luminosity function as an age indicator for the Dwarf spheroidal Leo IJul 23 1995The star luminosity function, already recognized as an age indicator for old galactic globular clusters, can be used to constrain the age of younger stellar systems like the nearby dwarf spheroidal Leo I. We compare the observed luminosity function of Leo ... More
Coupling conditions for the nonlinear shallow water equations in forksSep 30 2015Oct 11 2016We study numerically and analytically how nonlinear shallow water waves propagate in a fork. Using a homothetic reduction procedure, conservation laws and numerical analysis in a 2D domain, we obtain simple angle dependent coupling conditions for the ... More
Nonlinear Analysis of Experimental Noisy Time Series in Fluidized Bed SystemsAug 02 1994The paper describes the application of some numerical techniques to analyze and to characterize the observed dynamical behaviour of fluidized bed systems. The preliminary results showed clearly that the dynamics of the considered process can be nonrecurrent ... More
Entropic repulsion in $|\nabla \phi|^p$ surfaces: a large deviation bound for all $p\geq 1$Jan 12 2017We consider the $(2+1)$-dimensional generalized solid-on-solid (SOS) model, that is the random discrete surface with a gradient potential of the form $|\nabla\phi|^{p}$, where $p\in [1,+\infty]$. We show that at low temperature, for a square region $\Lambda$ ... More
Duality for the left and right fractional derivativesSep 18 2014We prove duality between the left and right fractional derivatives, independently on the type of fractional operator. Main result asserts that the right derivative of a function is the dual of the left derivative of the dual function or, equivalently, ... More
Adaptive Learning to Speed-Up Control of Prosthetic Hands: a Few Things Everybody Should KnowFeb 27 2017A number of studies have proposed to use domain adaptation to reduce the training efforts needed to control an upper-limb prosthesis exploiting pre-trained models from prior subjects. These studies generally reported impressive reductions in the required ... More
Proof of Aldous' spectral gap conjectureJun 06 2009Sep 28 2009Aldous' spectral gap conjecture asserts that on any graph the random walk process and the random transposition (or interchange) process have the same spectral gap. We prove the conjecture using a recursive strategy. The approach is a natural extension ... More
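The statement in the entry above can be checked numerically on a toy example; the sketch below (a 3-vertex path graph) is an illustration of the conjecture's content, not of the paper's recursive proof strategy.

```python
import itertools
import numpy as np

edges = [(0, 1), (1, 2)]   # 3-vertex path graph
n = 3

# Random-walk generator: the graph Laplacian L = D - A.
L_rw = np.zeros((n, n))
for i, j in edges:
    L_rw[i, i] += 1; L_rw[j, j] += 1
    L_rw[i, j] -= 1; L_rw[j, i] -= 1

# Interchange-process generator on the 6 permutations of {0,1,2}:
# each edge swaps the labels at its endpoints at rate 1.
perms = list(itertools.permutations(range(n)))
index = {p: k for k, p in enumerate(perms)}
L_ip = np.zeros((len(perms), len(perms)))
for k, p in enumerate(perms):
    for i, j in edges:
        q = list(p); q[i], q[j] = q[j], q[i]
        L_ip[k, k] += 1
        L_ip[k, index[tuple(q)]] -= 1

def spectral_gap(L):
    """Smallest nonzero eigenvalue of a symmetric Laplacian."""
    return np.sort(np.linalg.eigvalsh(L))[1]

# Aldous' conjecture: the two spectral gaps coincide (here both equal 1).
assert np.isclose(spectral_gap(L_rw), spectral_gap(L_ip))
```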
Radial sine-Gordon kinks as sources of fast breathersMar 18 2013We consider radial sine-Gordon kinks in two, three and higher dimensions. A full two dimensional simulation showing that azimuthal perturbations remain small allows to reduce the problem to the one dimensional radial sine-Gordon equation. We solve this ... More
RR Lyrae stars in Galactic globular clusters. VI. The Period-Amplitude relationSep 20 2007We compare theory and observations for fundamental RR Lyrae in the solar neighborhood and in both Oosterhoff type I (OoI) and type II (OoII) Galactic globular clusters (GGCs). The distribution of cluster RR_ab in the PA_V plane depends not only on the ... More
The Cepheid Period-Luminosity relation and the maser distance to NGC 4258Oct 24 2001In a recent paper describing HST observations of Cepheids in the spiral galaxy NGC 4258, Newman et al. (2001) report that the revised calibrations and methods for the Key Project on the Extragalactic Distance Scale yield that the true distance modulus ... More
On the second overtone stability among SMC CepheidsMar 07 2001We present a new set of Cepheid, full amplitude, nonlinear, convective models which are pulsationally unstable in the second overtone (SO). Hydrodynamical models were constructed by adopting a chemical composition typical for Cepheids in the Small Magellanic ... More
The Topology of RR Lyrae Instability Strip and the Oosterhoff DichotomyJul 11 1995Convective pulsating models with Y=0.24, stellar mass M=0.65Mo, 0.75Mo, under selected assumptions about luminosities and effective temperatures, are used together with stellar atmosphere computations to get the predicted dependence of RR Lyrae instability ... More | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8653884530067444, "perplexity": 2034.6280527482181}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256082.54/warc/CC-MAIN-20190520162024-20190520184024-00167.warc.gz"} |
https://www.authorea.com/doi/full/10.1002/essoar.10508672.1 | Intra-cloud Microphysical Variability Obtained from Large-eddy Simulations using the Super-droplet Method
• Toshiki Matsushima, RIKEN Center for Computational Science (corresponding author: [email protected])
• Seiya Nishizawa, RIKEN Center for Computational Science
• Shin-ichiro Shima, University of Hyogo
• Wojciech Grabowski, National Center for Atmospheric Research
## Abstract
In this study, the super-droplet method (SDM) is used in large-eddy simulations of an isolated cumulus congestus observed during the 1995 Small Cumulus Microphysics Study field project in order to investigate the intra-cloud variability associated with entrainment and mixing. The SDM is a Lagrangian particle-based method for cloud microphysics that provides droplet size distributions (DSD) coupled to the simulated cloud-scale dynamics. The authors show that sensitivity to the spatial resolution and the initial number of particles is larger, and sensitivity to the initial conditions is smaller, when the order of the DSD moment is smaller. Through the use of simulations with reliable statistics, microphysical variability is investigated at scales of ∼ 100 m that can be considered well resolved in both the numerical simulations and in-situ aircraft observations. Large spatial variability in cloudy volumes is shown to be strongly affected by entrainment. Mean values of the adiabatic fraction (AF), cloud droplet number concentration, and the cubed ratio of the mean volume radius and the effective radius (k) agree well with observations in the middle and upper cloud layers. Moreover, the AF and k values are found to be positively correlated, and the reduction of the mean volume radius scaled by its adiabatic value with the decrease of the mean droplet concentration scaled by its adiabatic value is found to be smaller than the theoretical prediction of homogeneous mixing. The latter supports the notion of inhomogeneous mixing due to entrainment.
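The "cubed ratio" parameter k mentioned in the abstract can be made concrete with a small sketch (my own illustration in Python, not the authors' code; the sample radii are arbitrary):

```python
import math

def mean_volume_radius(radii):
    # r_v: radius whose volume equals the mean droplet volume.
    return (sum(r ** 3 for r in radii) / len(radii)) ** (1 / 3)

def effective_radius(radii):
    # r_e: ratio of the third to the second moment of the DSD.
    return sum(r ** 3 for r in radii) / sum(r ** 2 for r in radii)

def k_parameter(radii):
    # k = (r_v / r_e)^3; equals 1 for a monodisperse distribution
    # and drops below 1 as the distribution broadens.
    return (mean_volume_radius(radii) / effective_radius(radii)) ** 3

assert math.isclose(k_parameter([10.0] * 5), 1.0)   # monodisperse cloud
assert k_parameter([5.0, 10.0, 15.0]) < 1.0         # broader DSD
```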
https://www.physicsforums.com/threads/caclulus-mapping-problem.656536/ | # Calculus Mapping problem
1. Dec 3, 2012
### mcafej
1. The problem statement, all variables and given/known data
Take the unit circle in the x-y plane with center at (0, 0), bisected by the x-axis. Take two
maps, the first MS from the circle minus the south pole S to the x-axis that take a point P on the circle to the intersection of the line from the south pole (0, −1) through P with the x-axis, and the second MN from the circle minus the north pole N to the x-axis that take a point P on the circle to the intersection of the line from the north pole (0, 1) through P with the x-axis.
(i). Under MS, what part of the x-axis corresponds to the upper half of the circle? Under MN , what part of the x-axis corresponds to the upper half of the circle?
(ii). If P = (cos θ, sin θ), show that MS(P) = cos θ/(1+sin θ) and MN(P) = cos θ/(1−sin θ).
(iii). If P is any point on the circle other than N or S, show that if MS(P) = x, then MN (P) = 1/x
3. The attempt at a solution
I'm really confused on what the actual map is. I tried drawing a circle and seeing what the points would get mapped to, the problem is, what happens to the upper half of the circle under MN, or the lower half of the circle under MS? The way I am reading it, the map works by making a circle and then taking a point P on the circle and drawing a line from P to either (0, -1) or (0, 1), depending on if you are using MS or MN, and then the point where that line intersects the x axis is the value that you assign to P. If I am correct about that transformation, that would mean that for the first part of the question, I would get.
i) The upper half of the circle under MS corresponds to the entire x axis between the points -1 and 1. Under MN however, the upper half of the circle does not correspond to anything on the x axis.
2. Dec 3, 2012
### micromass
Staff Emeritus
Correct. You might want to search information on "stereographic projections".
Yes.
No, that is not true. Points on the upper half of the circle certainly do correspond to certain points on the x-axis. For example, P in the picture below certainly does correspond to a certain point on the x-axis, namely Q.
3. Dec 3, 2012
### mcafej
Thank you, that makes a lot more sense. It also helps with part 2 of the problem (where you can take the limit as theta approaches 3π/2 for MS and you get x>1 or x<−1, and you can do the same with MN, but instead have theta approach π/2 to get the same thing).
I am a little confused on how to prove that if MS(P)=x, then MN(P)=1/x. Could anybody explain why this is true?
4. Dec 3, 2012
### micromass
Staff Emeritus
I think for that last question, it might be good to give an explicit formulation of the maps $M_S$ and $M_N$.
So given a point $(x,y)$ on the circle, what is $M_S(x,y)$ in terms of x and y??
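For what it's worth, the identities in (ii) and (iii) are easy to sanity-check numerically; a quick sketch in Python (the angle values are arbitrary, avoiding the poles at θ = π/2 and 3π/2):

```python
import math

def m_s(theta):
    # Line from the south pole (0, -1) through P = (cos t, sin t)
    # meets the x-axis at cos(t) / (1 + sin(t)).
    return math.cos(theta) / (1 + math.sin(theta))

def m_n(theta):
    # Line from the north pole (0, 1) through P meets the x-axis
    # at cos(t) / (1 - sin(t)).
    return math.cos(theta) / (1 - math.sin(theta))

for theta in (0.3, 1.0, 2.5, 4.0):
    x = m_s(theta)
    # Part (iii): M_N(P) = 1 / M_S(P), since the product is
    # cos^2(t) / (1 - sin^2(t)) = 1.
    assert abs(m_n(theta) - 1 / x) < 1e-12
```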
http://mathhelpforum.com/algebra/130315-properties-logarithms.html | Math Help - properties of logarithms
1. properties of logarithms
I have the expression $4log_{3}\sqrt[2]{x}-4log_{3}x$. In simplifying the first term I need to get rid of the square root and change the coefficient of 4 to an exponent. So if I get rid of the square root first that would be $x^{1/2}$ so when I now change the coefficient of 4 to an exponent, shouldn't the term be $log_{3}(x^{1/2})^4$? It seems that the actual answer is $log_{3}x^{1/2(4)}$ but according to the properties of logarithms I don't understand why. The properties say that $\sqrt[x]{y}=y^{1/x}$ and $mlog n=logn^m$ so why is the simplification of the term $log_{3}x^{1/2(4)}$ and not $log_{3}(x^{1/2})^4$?
In other words if $mlog n=logn^m$ then why am I simply multiplying the exponent of 1/2 in $x^{1/2}$ by 4 instead of raising it to a power of 4?
2.
Let $x>0$ and $a,b \in \mathbb{R}$, then:
$a \log x^b = \log \left( x^b \right)^a = \log x^{ab}$.
You should know that $(a^b)^c=a^{bc}$.
3. $log_{3}x^{(1/2)4}=log_{3}x^{4/2}=log_{3}x^{2}$
I think this could help you... I'm not sure if I am right, but I'm kind of having trouble with logs.
You're asking lots of questions here. Let me see if I can answer them one at a time.
I have the expression $4log_{3}\sqrt[2]{x}-4log_{3}x$. In simplifying the first term I need to get rid of the square root and change the coefficient of 4 to an exponent. So if I get rid of the square root first that would be $x^{1/2}$
Correct!
so when I now change the coefficient of 4 to an exponent, shouldnt the term be $log_{3}(x^{1/2})^4$?
Yes. It is.
It seems that the actual answer is $log_{3}x^{1/2(4)}$ but according to the properties of logarithms I don't understand why.
I think that at the heart of your confusion is that you're not sure what $(x^{\frac12})^4$ means. It means:
$(x^{\frac12})\times(x^{\frac12})\times(x^{\frac12})\times(x^{\frac12})$
which is the same as:
$x^{(\frac12+\frac12+\frac12+\frac12)}=x^{(\frac12\times 4)}$
Notice that it isn't:
$x^{(\frac12)^4}$
That would be:
$x^{(\frac12\times\frac12\times\frac12\times\frac12)}=x^{(\frac{1}{16})}$
which is something different altogether.
The properties say that $\sqrt[x]{y}=y^{1/x}$ and $mlog n=logn^m$ so why is the simplification of the term $log_{3}x^{1/2(4)}$ and not $log_{3}(x^{1/2})^4$?
Can you see now that these are the same thing?
In other words if $mlog n=logn^m$ then why am I simply multiplying the exponent of 1/2 in $x^{1/2}$ by 4 instead of raising it to a power of 4?
See above!
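The point of the whole thread, that $(x^{1/2})^4 = x^{(1/2)\times 4}$ and not $x^{(1/2)^4}$, can also be checked numerically; a small sketch in Python (the test value of x is arbitrary, any positive value works):

```python
import math

x = 7.3  # arbitrary positive test value

# 4 * log_3(sqrt(x)) = log_3((x^(1/2))^4) = log_3(x^(1/2 * 4)) = log_3(x^2)
lhs = 4 * math.log(math.sqrt(x), 3)
rhs = math.log(x ** 2, 3)
assert math.isclose(lhs, rhs)

# ...and it is NOT log_3(x^((1/2)^4)) = log_3(x^(1/16))
wrong = math.log(x ** (1 / 16), 3)
assert not math.isclose(lhs, wrong)
```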
http://www.maa.org/publications/periodicals/convergence/james-gregory-and-the-pappus-guldin-theorem-selections-from-the-gpu-2?device=mobile | James Gregory and the Pappus-Guldin Theorem - Selections from the GPU (2)
Author(s):
Andrew Leahy (Knox College)
Proposition Thirty-one.
If there are two figures symmetric around an axis which are rotated in such a way that the axes of rotation of each of the figures are normal to the axis of each figure, then the ratio of one solid arising from such a rotation to the other solid arising from the same rotation is compounded directly from the ratio of the first figure to the other figure and from the ratio of the segment between the center of gravity and the axis of rotation of the first figure to the similar segment of the other figure.
Let ABC and HIL be any two figures symmetric around the axes BF and IN, which are rotated around the lines EG and MO cutting the extended (if necessary) axes BF and IN normally at the points F and N. Let D and K be the centers of gravity of the figures ABC and HIL. I say that the ratio of the solid arising from the figure ABC rotated around the line EG to the solid arising from the figure HIL rotating around the line MO, is compounded from the ratio of the figure ABC to the figure HIL and from the ratio of DF to KN.
Above the figures ABC and HIL let right cylindrical figures of equal height be cut by planes passing through the lines EG and MO, each one into two trunks, namely, an upper and a lower trunk. The ratio of the solid of revolution arising from ABC to the solid of revolution arising from HIL is compounded from the ratio of the lower trunk of the cylinder above ABC to the lower trunk of the cylinder above HIL and from the ratio of the radius of rotation of the figure ABC to the radius of rotation of the figure HIL.
But the lower trunk of the cylinder above ABC is to the lower trunk of the cylinder above HIL is in a ratio compounded from the ratio of the lower trunk of the cylinder above ABC to the entire cylinder above ABC, from the ratio of the entire cylinder above ABC to the entire cylinder above HIL, and from the ratio of the entire cylinder above HIL to its lower trunk. But from the convertendo of the Consequence to Proposition 29 the lower trunk of the cylinder above ABC is to the entire cylinder as FD is to the radius of rotation of the figure AB. Also, the cylinder above ABC is to the cylinder above HIL as the figure ABC is to the figure HIL. Similarly, by the Consequence to Proposition 29, the cylinder above HIL is to its own lower trunk as the radius of rotation of the figure HIL is to KN. Consequently, the ratio of the lower trunk of the cylinder above ABC to the lower trunk of the cylinder above HIL is compounded from the ratio of the line DF to the radius of rotation of the figure ABC, from the ratio of the figure ABC to the figure HIL, and from the ratio of the radius of rotation of the figure HIL to the line KN. Therefore, the ratio of the solid arising from the rotation of the figure ABC to the solid arising from the rotation of the figure HIL is compounded from the ratio of the figure ABC to the figure HIL, from the ratio of the line DF to the radius of rotation of the figure ABC, from the ratio of the radius of rotation of the figure ABC to the radius of rotation of the figure HIL, and from the ratio of the radius of rotation of the figure HIL to the line KN. But the last three ratios compound to the ratio of DF to KN. 
Therefore, the ratio of the solid arising from the rotation of the figure ABC around EG to the solid arising from the rotation of the figure HIL around MO is compounded from the ratio of the figure ABC to the figure HIL and from the ratio of the segment between the center of gravity of the figure ABC and its axis of rotation -- namely, DF -- to the segment between the center of gravity of the figure HIL and that same axis of rotation -- namely, KN -- which ought to have been demonstrated.
Proposition Thirty-three.
If there are any two figures which are rotated around a given axis, the ratio of the one solid arising from such a rotation to the other solid arising from the same rotation will be compounded from the direct ratio of one figure to the other figure and from the direct ratio of the segment between the center of gravity and the axis of rotation of the one figure to the similar segment of the other figure.
Let ABC and NQP be any two figures which are rotated around the lines EF and Y4, and let their centers of gravity be D and R, which are sent down lines DG and RZ perpendicular to the axes of rotation EF and Y4. I say that the ratio of the solid arising from the figure ABC rotated around the line EF to the solid arising from the figure NQP rotated around the line Y4 is compounded from the ratio of the figure ABC to the figure NQP and from the ratio of DG to RZ.
Let the lines BH and Q2 touching the figures ABC and NQP at B and Q be drawn parallel to the lines DG and RZ. Let the figures ABC and QNP be conceived to be revolved around the lines BH and Q2, like axes, until, attaining the plane on the other part of the axes, they make the figures BLM and QTX, equal and similar to themselves and having exactly the same position toward the lines BH and EF, and Q2 and Y4. Let O and V be the centers of gravity of the figures BLM and QTX. Let the lines OI and V3 be drawn perpendicular to the lines EF and Y4. Also, let the lines DO and RV, intersecting the lines BH and Q2 in the points K and S, be joined.
It is manifest that the point K is the center of gravity of the figure BACBLM symmetric around the axis BH and likewise that the point S is the center of gravity of the entire figure QNPQTX around the axis Q2. It is also apparent that the line DG, KH, OI and also RZ, S2, V3, are equal among themselves. Since the figures BACBLM and QNPQTX are symmetric around the axes BH and Q2, which are normal to the axis of rotation, therefore the solid of revolution arising from the rotation of the figure BACBLM around the line EF is to the solid of revolution arising from the rotation of the figure QNPQTX in the ratio compounded from the ratio of the figure BACBLM to the figure QNPQTX and from the ratio of KH to S2 by Proposition 31. But the solid arising from the figure BACBLM rotated around EF is twice the solid arising from the figure BAC rotated around the same EF. Likewise, the solid arising from the figure QNPQTX rotated around the line Y4 is twice the solid arising from the figure QNP rotated around the same Y4. Also, the figure BACBLM is twice the figure BAC and the figure QNPQTX is twice the figure QNP. Since halves are in the same ratio with their own doubles, the solid of revolution arising from the figure ABC rotated around the line EF will be to the solid of revolution arising from the figure NQP rotated around the line Y4 in the ratio compounded from the ratio of the figure ABC to the figure NQP and from the ratio of KH to S2--or DG to RZ--which it was desired to demonstrate.
Consequence.
It follows that if the centers of gravity of the figures are equally distant from the axes of rotation, the solids of revolution arising from the rotation of figures are in a direct ratio to the figures themselves. If the figures themselves are equal, it follows that the solids of revolution arising from their rotation are in a direct ratio to the segments between the centers of gravity and the axes of rotation. If the segments and figures are equal, the solids of revolution arising from them will be equal even if the figures are dissimilar between themselves.
Scholium.
From these results, it is manifest that between any two figures there are three ratios--namely, of the one figure to the other figure, of the solid of revolution arising from the rotation of one figure to the solid of revolution arising from the rotation of the other figure, and of the segment between the center of gravity and the axis of rotation of the one figure to the similar segment of the second figure--giving two of which always discloses the unknown third.
All these things are demonstrated universally in the same manner for every curve or curves not enclosing a figure; thus, of all geometrical demonstrations, these are maximally universal.
Proposition Thirty-five.
Each solid of revolution is equal to a right cylindrical figure whose base is the figure out of the rotation of which the solid is produced and whose altitude is the circumference of a circle in which the center of gravity of the figure is revolved.
Let AB be a figure whose center of gravity is C. Let a solid of revolution be made from the rotation of the figure AB around the line DF. I say this solid of revolution is equal to the cylinder whose base is the figure AB and whose altitude is the circumference of the circle in which the center of gravity C is revolved.
Let HGKI be a rectangle whose center of gravity is L.
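Proposition 35 is the modern Pappus-Guldin theorem: the solid of revolution equals the area of the figure times the circumference traced by its centroid. As an aside (not part of Gregory's text), a quick numerical check for a torus, sketched in Python with arbitrary radii:

```python
import math

# Rotate a disc of radius r, centered at distance R > r from the axis,
# around that axis.  Pappus-Guldin predicts the torus volume.
R, r = 3.0, 1.0                      # arbitrary test values
area = math.pi * r ** 2              # area of the rotated figure
centroid_path = 2 * math.pi * R      # circle traced by the centroid
pappus_volume = area * centroid_path
standard_torus_volume = 2 * math.pi ** 2 * R * r ** 2
assert math.isclose(pappus_volume, standard_torus_volume)
```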
https://www.hive.co.uk/Product/Aurelien-Ecole-Centrale-de-Nantes-Nantes-France-Babarit/Ocean-Wave-Energy-Conversion--Resource-Technologies-and-Performance/21602992 | # Ocean Wave Energy Conversion : Resource, Technologies and Performance Hardback
#### Description
The waves that animate the surface of the oceans represent a deposit of renewable energy that for the most part is still unexploited today.
This is not for lack of effort, as for more than two hundred years inventors, researchers and engineers have struggled to develop processes and systems to recover the energy of the waves.
While all of these efforts have failed to converge towards a satisfactory technological solution, the result is a rich scientific and technical literature as well as extensive and varied feedback from experience. For the uninitiated, this abundance is an obstacle.
In order to facilitate familiarization with the subject, we propose in this work a summary of the state of knowledge on the potential of wave energy as well as on the processes and technologies of its recovery (wave energy converters).
In particular, we focus on the problem of positioning wave energy in the electricity market, the development of wave energy conversion technologies from a historical perspective, and finally the energy performance of the devices.
This work is aimed at students, researchers, developers, industry professionals and decision makers who wish to acquire a global perspective and the necessary tools to understand the field.
http://www.perlmonks.org/?node_id=657858 | Your skill will accomplishwhat the force of many cannot PerlMonks
### Re: Installing Perl on Windows XP
by inman (Curate)
on Dec 19, 2007 at 11:57 UTC
in reply to Installing Perl on Windows XP
Try running this from a command line rather than the editor. The diamond operator is shorthand for standard input which your editor may treat differently. Your script is fine although you should get into the habit of adding "use strict;" to enforce various checking routines.
Re^2: Installing Perl on Windows XP
by Ethen (Acolyte) on Dec 20, 2007 at 05:40 UTC
Ok... now I have written my script in notepad and saved it as "proj.pl". Now how do I run it in the command prompt? Please bear with me for my small queries as I am new to this... but your answers are really giving me a good picture of it. Thanks, Ethen
For a normal ActiveState install you should be able to run your Perl application by simply typing the filename on the command line and pressing enter. If that doesn't do it for you then you could try:
```
c:\perl\bin\perl.exe proj.pl
```
which runs the Perl interpreter from the default location used by the ActiveState install.
Perl is environmentally friendly - it saves trees
Welcome to the Monastery Ethen
"you should be able to run your Perl application by simply typing the filename on the command line"
You may find it useful when first learning to create a shortcut to cmd.exe in the Perl bin directory.
Then all that is needed is to type the file name "proj.pl" at the command prompt.
https://math.stackexchange.com/questions/3972124/liminf-of-pointwise-norms-of-a-weakly-convergent-sequence | # Liminf of Pointwise Norms of a Weakly Convergent Sequence
Let $$X_1, X_2, \cdots$$ be a sequence of $$p$$-integrable $$\mathbb{R^d}$$ valued random variables. Assume that $$X_n$$ converges to $$0$$ weakly; then can we say that $$r(\omega) = \liminf\{ |X_1 (\omega)|, |X_2 (\omega)|, \cdots\}$$ is $$0$$ for almost every $$\omega$$?
The classical example of a weakly convergent but not strongly convergent sequence is those of orthogonal basis, yet it is not a counterexample for the above claim. I could not prove the claim but intuitively I believe it holds, at least for $$\mathbb{R^d}$$ valued random vectors.
If needed one can assume that the sequence $$(X_n)_n$$ is bounded in $$p$$-norm, I do not feel like this is necessary.
Weak convergence together with boundedness of second moments implies convergence of expectations. By Fatou's Lemma we get $$E \lim \inf |X_n| \leq \lim \inf E|X_n|=0$$ since $$E|X_n| \to 0$$. This implies that $$\lim \inf |X_n|=0$$ almost surely.
• I don't see how weak convergence together with boundedness of second moments implies convergence of expectations. Are you using a Vitali-type theorem here? For every $Y$ $q$-integrable, we have $E[Y X_n] \rightarrow E[Y \, 0] = 0,$ and there is some $W$ $p$-integrable such that $|X_n| \leq W$ for all $n.$ I don't see how we proceed. Could you provide more details please? Jan 4 at 9:52
• @vekinpirna Boundedness of second moments implies uniform integrability. And $Y_n \to 0$ weakly, $(Y_n)$ uniformly integrable implies $EY_n \to 0$. (One quick way of proving this is to replace weak convergence by almost sure convergence using the Skorohod Representation Theorem, but you can do it without this theorem also). Jan 4 at 10:00
• We are only interested in the real random variables $|X_n|$. And $X_n \to 0$ weakly implies $|X_n| \to 0$ weakly, so the usual Skorohod Theorem applies. @vekinpirna Jan 4 at 10:15
• Is it obvious that $X_n \rightarrow 0$ weakly implies $|X_n| \rightarrow 0$ weakly? I will take my time working on that. Jan 4 at 10:20
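A toy simulation (my own illustration, not from the thread; the particular choice of $X_n$ is arbitrary) of the answer's conclusion that $E|X_n| \to 0$ forces $\liminf |X_n| = 0$ along almost every path. Here $X_n = n$ with probability $1/n^2$ and $0$ otherwise, so $E|X_n| = 1/n \to 0$:

```python
import random

random.seed(0)

def min_along_path(horizon=2000):
    # X_n = n with probability 1/n^2, else 0, independently in n.
    # Then E|X_n| = 1/n -> 0, and by Borel-Cantelli X_n = 0 for all
    # large n almost surely, so liminf |X_n| = 0 on almost every path.
    running_min = float("inf")
    for n in range(1, horizon + 1):
        x = n if random.random() < 1 / n ** 2 else 0
        running_min = min(running_min, abs(x))
    return running_min

# Every simulated path attains 0 well before the horizon.
assert all(min_along_path() == 0 for _ in range(200))
```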
https://mathhelpboards.com/threads/direct-products-and-quotient-groups.7765/ | # Direct Products and Quotient Groups
#### Peter
In Beachy and Blair: Abstract Algebra, Section 3.8 Cosets, Normal Subgroups and Factor Groups, Exercise 17 reads as follows:
----------------------------------------------------------------------------------------------------------------------
17. Compute the factor group [TEX] ( \mathbb{Z}_6 \times \mathbb{Z}_4 ) / \langle (2,2) \rangle [/TEX]
----------------------------------------------------------------------------------------------------------------------
Since I did not know the meaning of "Compute the factor group" I proceeded to try to list the members of [TEX] ( \mathbb{Z}_6 \times \mathbb{Z}_4 ) / \langle (2,2) \rangle [/TEX] but had some difficulties, when I realised that I was unsure of whether the group [TEX] ( \mathbb{Z}_6 \times \mathbb{Z}_4 ) [/TEX] was a group under multiplication or addition. So essentially I did not know how to carry out group operations in [TEX] ( \mathbb{Z}_6 \times \mathbb{Z}_4 ) / \langle (2,2) \rangle [/TEX].
Reading Beachy and Blair, Chapter 3 Groups, page 118 (see attachment) we find the following definition:
-------------------------------------------------------------------------------------------------------------
3.3.3 Definition. Let [TEX] G_1 [/TEX] and [TEX] G_2 [/TEX] be groups. The set of all ordered pairs [TEX] (x_1, x_2) [/TEX] such that [TEX] x_1 \in G_1 [/TEX] and [TEX] x_2 \in G_2 [/TEX] is called the direct product of [TEX] G_1 [/TEX] and [TEX] G_2 [/TEX], denoted by [TEX] G_1 \times G_2 [/TEX].
----------------------------------------------------------------------------------------------------------------
Then Proposition 3.3.4 reads as follows:
-----------------------------------------------------------------------------------------------------------------
3.3.4 Proposition. Let [TEX] G_1 [/TEX] and [TEX] G_2 [/TEX] be groups.
(a) The direct product [TEX] G_1 \times G_2 [/TEX] is a group under the operation defined for all [TEX] (a_1, a_2) , (b_1, b_2) \in G_1 \times G_2 [/TEX] by
[TEX] (a_1, a_2) (b_1, b_2) = (a_1b_1, a_2b_2 ) [/TEX].
(b) etc etc
------------------------------------------------------------------------------------------------------------------
However in Example 3.3.3 on page 119 we find the group [TEX] ( \mathbb{Z}_2 \times \mathbb{Z}_2 ) [/TEX] dealt with as having addition as its operation.
My question is - what is the convention on direct products of [TEX] ( \mathbb{Z}_n \times \mathbb{Z}_m ) [/TEX] - does one use addition or multiplication?
Presumably, since the operations involve integers the matter is more than one of notation?
Can someone please clarify this matter?
Would appreciate some help.
Peter
#### mathbalarka
##### Well-known member
MHB Math Helper
Mar 22, 2013
573
Peter said:
My question is - what is the convention on direct products of $\mathbb{Z}_n×\mathbb{Z}_m$ - does one use addition or multiplication?
Since direct products are taken pointwise, the operation is inherited componentwise from the factors. If there are two pairs $(a_1, b_1)$ and $(a_2, b_2)$ whose entries are as in the proposition, then their product in the direct product is $(a_1 *_{G_1} a_2,\; b_1 *_{G_2} b_2)$. In the case of two cyclic groups of orders $n$ and $m$ respectively, the first component is computed mod $n$ and the other mod $m$.
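To see the componentwise convention in action for the group in question, here is a small Python sketch (the helper name is mine, not from the thread) of addition in $\mathbb{Z}_6 \times \mathbb{Z}_4$: mod 6 in the first slot, mod 4 in the second.

```python
def add_z6xz4(p, q):
    """Componentwise addition in Z_6 x Z_4: mod 6 in slot 0, mod 4 in slot 1."""
    return ((p[0] + q[0]) % 6, (p[1] + q[1]) % 4)

print(add_z6xz4((2, 2), (2, 2)))  # (4, 0)
print(add_z6xz4((5, 3), (1, 1)))  # (0, 0) -- (1,1) is the inverse of (5,3)
```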
#### Deveno
##### Well-known member
MHB Math Scholar
Feb 15, 2012
1,967
Um...$\Bbb Z_n$ is only a group under addition modulo $n$, since the coset containing 0 never has a multiplicative inverse (one usually does not speak of the trivial group as "$\Bbb Z_0$").
To get a handle on what this quotient group might look like, it is helpful to know the ORDER of (2,2) in this group. It turns out that $\Bbb Z_6 \times \Bbb Z_4$ is of order 24, so we need only consider divisors of 24. In fact, it can be shown that the maximal possible order is lcm(6,4) = 12. We have:
$(2,2) \neq (0,0)$, so (2,2) is not of order 1.
$(2,2) + (2,2) = (4,0) \neq (0,0)$, so (2,2) is not of order 2.
$3(2,2) = (0,2) \neq (0,0)$, so (2,2) is not of order 3.
$4(2,2) = (2,0) \neq (0,0)$, so (2,2) is not of order 4.
5 does not divide 24.
$6(2,2) = (0,0)$, thus (2,2) is of order 6.
Thus our quotient group has order 24/6 = 4.
To avoid awkward notation, let's denote the subgroup generated by (2,2) as $H$. The identity of $\Bbb Z_6 \times \Bbb Z_4/H$ is, of course:
$H = \{(0,0),(2,2),(4,0),(0,2),(2,0),(4,2)\}$
Since our quotient group has order 4, we might hope that an element $a$ of order 4 in $\Bbb Z_6 \times \Bbb Z_4$ generates the cosets. Let's see if this is so:
Choose $a = (0,1)$, which is clearly of order 4 in our direct product group. Since $a \not\in H$, clearly, $a + H$ is a distinct element of $\Bbb Z_6 \times \Bbb Z_4/H$ from $H$. So we have as our second element of the quotient group:
$(0,1) + H = \{(0,1),(2,3),(4,1),(0,3),(2,1),(4,3)\}$.
Unfortunately, we see that $2a = 2(0,1) = (0,2) \in H$, which means that $(0,1) + H$ has order 2 in the quotient group.
However, $(1,0)$ clearly lies in neither $H$ nor $(0,1) + H$, so $(1,0) + H$ is a 3rd element of the quotient group:
$(1,0) + H = \{(1,0),(3,2),(5,0),(1,2),(3,0),(5,2)\}$.
Again, $2(1,0) = (2,0) \in H$, so $(1,0) + H$ is of order 2 in the quotient group. This tells us the quotient group is isomorphic to $\Bbb Z_2 \times \Bbb Z_2$, since it has more than one element of order 2.
For completeness' sake, we list the last element of the quotient group (which is the set of all the elements of the original direct product not yet listed in any coset) namely:
$(1,1) + H = \{(1,1),(3,3),(5,1),(1,3),(3,1),(5,3)\}$
It should be clear that the mapping $\phi:\Bbb Z_2 \times \Bbb Z_2 \to \Bbb Z_6 \times \Bbb Z_4/H$ given by:
$\phi(a,b) = (a,b) + H$
is an isomorphism.
Looking at it the other way, you should be able to come up with a surjective homomorphism from $\Bbb Z_6 \times \Bbb Z_4 \to \Bbb Z_2 \times \Bbb Z_2$ with kernel $H$. Can you?
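Deveno's computation can be replicated by brute force. The following Python sketch (all names are my own) generates the subgroup $H = \langle(2,2)\rangle$, partitions $\mathbb{Z}_6 \times \mathbb{Z}_4$ into its cosets, and computes the order of each coset in the quotient:

```python
from itertools import product

def add(p, q):
    """Componentwise addition in Z_6 x Z_4."""
    return ((p[0] + q[0]) % 6, (p[1] + q[1]) % 4)

G = list(product(range(6), range(4)))            # Z_6 x Z_4, order 24

# Cyclic subgroup H generated by g = (2,2)
H, g = {(0, 0)}, (2, 2)
x = g
while x != (0, 0):
    H.add(x)
    x = add(x, g)
assert len(H) == 6                               # so (2,2) has order 6

# Partition G into cosets of H (frozensets so the set dedupes them)
cosets = {frozenset(add(a, h) for h in H) for a in G}
assert len(cosets) == 4                          # quotient has order 24/6 = 4

# Order of the coset a+H in the quotient: least k with k*a landing in H
def coset_order(a):
    k, x = 1, a
    while x not in H:
        x = add(x, a)
        k += 1
    return k

print(sorted(coset_order(a) for a in [(0, 0), (0, 1), (1, 0), (1, 1)]))
# [1, 2, 2, 2] -> every non-trivial coset has order 2: the Klein four-group
```

This matches the hand computation: three cosets of order 2, so the quotient cannot be cyclic.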
#### Peter
Thanks for the help, Deveno.
However, I am having difficulties with the following issue.
I can see that we need to find an element that is not in H in order to generate the next element of the quotient group, and then look for an element that is in neither of the two discovered groups, and so on.
However, I do not understand your arguments regarding the order of such an element.
For example, you write:
"Since our quotient group has order 4, we might hope that an element $a$ of order 4 in $\Bbb Z_6 \times \Bbb Z_4$ generates the cosets. "
Why, exactly, are you hoping an element a of order 4 in $\Bbb Z_6 \times \Bbb Z_4$.
What exactly is the connection between the order of the element and the process of generating the cosets?
Can you clarify this issue?
Note that I am still reflecting on the issue of the isomorphism and the surjective homomorphism that you mention.
Peter
#### Deveno
There are basically two ways of looking at a quotient group, each of which has its attractions.
1) First way: as a group made out of $H$-sized "chunks" of $G$, where $H$ is a normal subgroup of $G$. We use $H$ to define an equivalence relation on $G$, which partitions $G$ into these "chunks", called "cosets of $H$" (the normality of $H$ ensures that right cosets and left cosets agree). Since we are lumping ELEMENTS of $G$ together into "equivalence classes", it is easy to see we wind up with a group with fewer things in it.
"Naming" the equivalence classes poses a bit of a problem, we have several choices for a name for each coset (picking a representative uniquely determines the COSET, but the representative itself is not unique, any element of the coset could have the coset named after it: there is some unavoidable redundancy).
2) Second way: a quotient group of $G$ is another group $G'$ together with a surjective homomorphism $\phi: G \to G'$. The normal subgroup of $G$ we are "modding out" in this view is the kernel of the homomorphism $\phi$.
The fundamental isomorphism theorem basically says these two ways are equivalent.
Now, given that we have a quotient group of order 4, our fondest wish would be for it to be the nicest possible kind of group of order 4, a cyclic one. Cyclic groups are very nice, and easy to understand, and have an almost transparent inner structure. Since $G$ has elements of order 4, if one of those generated the cosets, it would mean the quotient was cyclic. Unfortunately, that is not the case, here.
Now, given ANY homomorphism $\phi: G \to G'$ between 2 groups, it is ALWAYS the case that:
$|\phi(g)|$ divides $|g|$. So it is a natural question to ask, does:
$|\phi(g)| = |g|$?
When the order of the $g$ involved is small (in this example, we looked at $g$ with an order of 4), often the possibilities for the order of $\phi(g)$ are quite limited, and perhaps can provide useful information.
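The divisibility $|\phi(g)|$ divides $|g|$ can be spot-checked numerically for the canonical map $g \mapsto g + H$ in the running example. A sketch under the same setup as before (helper names are mine):

```python
from itertools import product

def add(p, q):
    return ((p[0] + q[0]) % 6, (p[1] + q[1]) % 4)

def order(a, in_subgroup):
    """Least k >= 1 with k*a in the given subgroup; use x == (0,0) for order in G."""
    k, x = 1, a
    while not in_subgroup(x):
        x = add(x, a)
        k += 1
    return k

H = {(0, 0), (2, 2), (4, 0), (0, 2), (2, 0), (4, 2)}   # subgroup generated by (2,2)

for g in product(range(6), range(4)):
    og = order(g, lambda x: x == (0, 0))   # order of g in Z_6 x Z_4
    oq = order(g, lambda x: x in H)        # order of the coset g+H in the quotient
    assert og % oq == 0                    # |phi(g)| divides |g|
print("checked all 24 elements: the order of g+H divides the order of g")
```

In particular $(0,1)$ has order 4 in $G$ but its coset has order 2, so the order can drop, but only to a divisor.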
#### Peter
Thanks for the help, Deveno.
However, I do not think I have fully understood all the implications regarding the order of elements in this context.
Indeed, I was just reading Beachy and Blair: Abstract Algebra, Example 3.8.12 (page 174) which reasons from the order of elements of a factor group to an isomorphism - can you please help me see their argument.
The example (which is extremely similar to the example we have been talking about) reads as follows: (See Attachment)
==============================================================
Example 3.8.12
Let $$\displaystyle G = \mathbb{Z}_4 \times \mathbb{Z}_4$$
and let $$\displaystyle N = \{ (0,0), (2,0), (0,2), (2,2) \}$$
(We have omitted the brackets denoting congruence classes because that makes the notation too cumbersome)
There are four cosets of this subgroup, which we can choose as follows:
$$\displaystyle N, \ \ (1,0) + N , \ \ (0,1) + N , \ \ (1,1) + N$$
The representatives of the cosets have been carefully chosen to show that each non-trivial element of the factor group has order 2, making the factor group $$\displaystyle G/N$$ isomorphic to $$\displaystyle \mathbb{Z}_2 \times \mathbb{Z}_2$$.
... ... etc etc (see attachment)
===============================================================
I do not follow the logic that allows us to argue that each non-trivial element of the factor group has order 2, making the factor group $$\displaystyle G/N$$ isomorphic to $$\displaystyle \mathbb{Z}_2 \times \mathbb{Z}_2$$.
Can you explain ... and further, can you formally and rigorously prove this implication.
By the way the example then goes on to another factor group and argues another isomorphism based on the cyclic nature of the factor group - see attachment - but I am puzzled by this reasoning as well.
Hoping that someone can help.
Peter
#### Deveno
Coset multiplication (here I use multiplication in the general sense of "group operation") is easy:
$(xH)\ast(yH) = (x\ast y)H$.
In other words, the product of the cosets $xH$ and $yH$ is the coset containing $x\ast y$.
Now in the example you have given, we have:
$[(1,0) + N] + [(1,0) + N] = [(1,0) + (1,0)] + N = (2,0) + N = N$
(since (2,0) is an element of $N$).
Thus in the quotient group $G/N$ we have an element $x \neq e$ with:
$x\ast x = x^2 = e$, showing $|x| = 2$ (remember $N$ is the identity of $G/N$).
Now for ANY element $g \in G$ with $|g| = k$, it is certainly true that:
$g^k = e$, so all the more so it will be true in $G/N$ that:
$(gN)^k = (g^k)N = eN = N$. But it may happen that for some smaller integer $t$ we have:
$(gN)^t = N$.
For any group, however, it is easy to show that if $x \in G$, and:
$x^m = e$, with $|x| = s$ we must have that $s$ is a divisor of $m$.
Why? because if we write:
$m = qs + r$, with either $r = 0$ or $r < s$, we get:
$e = x^m = x^{qs + r} = (x^{qs})(x^r) = (x^s)^q(x^r) = (e^q)(x^r) = ex^r = x^r$.
If $0 < r < s$, this contradicts the fact that $|x|$ is the least positive integer with:
$x^s = e$, forcing us to conclude that $m = qs$ that is: $s|m$.
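As a sanity check on Example 3.8.12, the following sketch (names are mine) verifies by brute force that every non-trivial coset of $N$ in $\mathbb{Z}_4 \times \mathbb{Z}_4$ has order 2, i.e., doubling any representative lands in $N$:

```python
def add(p, q):
    """Componentwise addition in Z_4 x Z_4."""
    return ((p[0] + q[0]) % 4, (p[1] + q[1]) % 4)

N = {(0, 0), (2, 0), (0, 2), (2, 2)}

def coset_order(a):
    """Least k >= 1 with k*a in N, i.e., the order of a+N in G/N."""
    k, x = 1, a
    while x not in N:
        x = add(x, a)
        k += 1
    return k

reps = [(0, 0), (1, 0), (0, 1), (1, 1)]
print([coset_order(r) for r in reps])   # [1, 2, 2, 2] -> G/N is the Klein four-group
```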
#### Peter
OK, thanks, I follow what you have said ... (forgive me if I am being slow ...) but I still need help with the link between the order of the elements in the factor group and the isomorphism.
Mind you, one can "see" the isomorphism $$\displaystyle G/N \cong \mathbb{Z}_2 \times \mathbb{Z}_2$$, for example
$$\displaystyle (1,0) + N \mapsto (1,0)$$ etc
BUT
... what exactly is the logic connecting the order of elements in the factor group and the isomorphism.
If we have two groups of the same order, with each of the elements having the same order, can we claim they are isomorphic, for example?
Can you help?
Peter
#### Deveno
Well, in general, no.
Two isomorphic groups will, of course, have elements of the same order(s), but it IS possible to have two non-isomorphic groups of the same order, with the same number of elements of each possible order.
The smallest example I can think of is these 2 groups of order 27:
$G = \Bbb Z_3 \times \Bbb Z_3 \times \Bbb Z_3$
$G' = \left\{\begin{bmatrix}1&a&b\\0&1&c\\0&0&1 \end{bmatrix}: a,b,c \in \Bbb Z_3\right\}$
each of which has 26 elements of order 3, but the first is abelian, while the second is not.
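Both claims about the order-27 example can be verified computationally. In the sketch below (my own encoding, not from the thread), a matrix with upper entries $a$, $b$, $c$ as in Deveno's display is stored as the triple $(a, b, c)$; the code checks that every non-identity element of each group has order 3, yet only the first group is abelian:

```python
from itertools import product

# G = Z_3 x Z_3 x Z_3 under componentwise addition
G1 = list(product(range(3), repeat=3))
add = lambda a, b: tuple((x + y) % 3 for x, y in zip(a, b))
assert all(add(add(g, g), g) == (0, 0, 0) for g in G1)      # every element cubes to e
assert sum(1 for g in G1 if g != (0, 0, 0)) == 26           # 26 elements of order exactly 3
assert all(add(a, b) == add(b, a) for a in G1 for b in G1)  # abelian

# G' = Heisenberg group over Z_3: (a,b,c) stands for [[1,a,b],[0,1,c],[0,0,1]]
def mul(p, q):
    a, b, c = p
    d, e, f = q
    return ((a + d) % 3, (b + e + a * f) % 3, (c + f) % 3)

G2 = list(product(range(3), repeat=3))
cube = lambda g: mul(mul(g, g), g)
assert all(cube(g) == (0, 0, 0) for g in G2)                 # again, all orders divide 3
assert any(mul(a, b) != mul(b, a) for a in G2 for b in G2)   # but NOT abelian
print("same order statistics (26 elements of order 3), only the first is abelian")
```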
#### Peter
Ok, thanks ... but the link between the order of the elements in the factor group and Beachy and Blair's argument in Example 3.8.12, page 174 (see attachment) remains a mystery to me ...
To restate my problem, I do not follow the logic that allows Beachy and Blair to argue that because each non-trivial element of the factor group has order 2, the factor group $$\displaystyle G/N$$ is isomorphic to $$\displaystyle \mathbb{Z}_2 \times \mathbb{Z}_2$$.
Can you please clarify the basis of their argument ... they seem to argue the existence of an isomorphism from the fact that each non-trivial member of the factor group has order 2 ... I'm perplexed ...
Peter
#### Deveno
This has to do with the rather limited possibilities for a group of order 4:
1) Any group of order 4 is abelian (can you prove this?).
2) If a group of order 4 has no element of order 4 (if it does, it is cyclic of order 4) then every non-identity element is of order 2. We can thus write it as:
$G = \{e,a,b,ab\}, a^2 = b^2 = (ab)^2 = e$.
The last equation:
$(ab)^2 = e$ together with $a^2 = b^2 = e$ gives:
$e = (ab)^2 = abab$
$a = a(abab) = a^2(bab) = e(bab) = bab$
$ab = (bab)b = (ba)b^2 = (ba)e = ba$.
If we define:
$\phi:G \to \Bbb Z_2 \times \Bbb Z_2$ by:
$\phi(e) = (0,0)$
$\phi(a) = (1,0)$
$\phi(b) = (0,1)$
$\phi(ab) = (1,1)$
It is easy to see that:
$\phi(ab) = (1,1) = (1,0) + (0,1) = \phi(a) + \phi(b)$
(the other 15 products are easy to verify: using the fact that both groups are abelian reduces the products we have to check to 10; we just checked 1, which leaves 9; 4 of these involve the identity and are almost immediate; of the remaining 5, 3 can be deduced from order arguments, leaving just:
$\phi((ab)a) = \phi(ab) + \phi(a)$
$\phi((ab)b) = \phi(ab) + \phi(b)$
that actually require any thought).
Having displayed a bijective homomorphism, we conclude $(G,\ast) \cong (\Bbb Z_2 \times \Bbb Z_2,+)$
It often turns out that investigating the elements of order 2 is quite illuminating in determining which group we have of a given order. This is always the case if $|G|$ is a power of 2. In general, if $p$ is prime and $p$ divides the order of a finite group $G$, looking at the number of elements of order $p$ can help determine which isomorphism class $G$ belongs to (but as the example I gave above shows, does NOT completely determine $G$).
Determining how many isomorphism classes of groups of order $n$ there are is, in general, a difficult problem (but not insurmountable... this problem HAS been solved, but the amount of calculation required is truly mind-boggling). Fortunately, you are rarely presented with a "bad" example (such as determining which group of order 1024 you have... there are almost 50 billion of them!!!... fortunately only around 400 million (!) of those are "common").
But for groups of order less than 32, this problem is simple enough that you could solve it yourself (you might struggle with the groups of order 16...there are more than you might think).
#### Peter
Thank you for that most informative post ... really helpful!
Will now work through it carefully.
Peter
#### Peter
Hi Deveno,
Can you provide some guidance on the approach to proving that any group of order 4 is abelian?
Would appreciate some help,
Peter
#### Deveno
Let $G$ be a finite group with 4 elements. One of these is, of course, the identity. We can write $G$ as:
$G = \{e,x,y,z\}$.
Some trial-and-error quickly establishes that the element $xy$ (which must lie in $G$, by closure) must either be $e$ or $z$.
Case 1) $xy = e$.
In this case, neither $x$ nor $y$ can be of order 2, since any element of order 2 is its own inverse. Since (by Lagrange) the order of each of these must divide 4, and 1 and 2 are not possible, they must both be of order 4, hence $G$ is cyclic, and thus abelian.
Case 2) $xy = z$. We can thus write:
$G = \{e,x,y,xy\}$.
Now consider: $yx$ must also lie in $G$.
$yx = e \implies x = y^{-1} \implies xy = e$, contradiction.
$yx = x \implies y = e$, contradiction.
$yx = y \implies x = e$, contradiction.
Hence the only possibility is $yx = xy$, thus:
$e$ commutes with everything,
$x$ commutes with everything,
$y$ commutes with everything, so all elements commute with all others. Thus $G$ is abelian. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.944037139415741, "perplexity": 318.6260984481938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300533.72/warc/CC-MAIN-20220117091246-20220117121246-00257.warc.gz"} |
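Deveno's case analysis can also be cross-checked exhaustively. The sketch below (my own, not from the thread) enumerates every Cayley table on the set {0,1,2,3} with 0 as the identity that satisfies the group axioms, and confirms that each one is commutative:

```python
from itertools import permutations

n = 4
count = 0
# Row 0 is the identity row [0,1,2,3]; each other row i must start with i
# (since x * e = x), and each row must be a permutation.
for r1 in permutations(range(n)):
    if r1[0] != 1:
        continue
    for r2 in permutations(range(n)):
        if r2[0] != 2:
            continue
        for r3 in permutations(range(n)):
            if r3[0] != 3:
                continue
            T = [list(range(n)), list(r1), list(r2), list(r3)]
            # Latin square: every column must also be a permutation
            if any(len({T[i][j] for i in range(n)}) != n for j in range(n)):
                continue
            # Associativity: (a*b)*c == a*(b*c) for all triples
            if any(T[T[a][b]][c] != T[a][T[b][c]]
                   for a in range(n) for b in range(n) for c in range(n)):
                continue
            count += 1
            # Every surviving table is a group table; check it is abelian
            assert all(T[a][b] == T[b][a] for a in range(n) for b in range(n))
print(count, "group tables of order 4, all abelian")
```

The count is 4: three labelled copies of $\Bbb Z_4$ and one of the Klein four-group, matching the fact that these are the only two groups of order 4, both abelian.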
http://umj.imath.kiev.ua/volumes/issues/?lang=en&year=2012&number=2 | 2017
Том 69
№ 7
# Volume 64, № 2, 2012
Article (English)
### Value-sharing problem for p-adic meromorphic functions and their difference operators and difference polynomials
Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 147-164
We discuss the value-sharing problem, versions of the Hayman conjecture, and the uniqueness problem for p-adic meromorphic functions and their difference operators and difference polynomials.
Article (English)
### A result on generalized derivations on right ideals of prime rings
Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 165-175
Let $R$ be a prime ring of characteristic not 2 and let $I$ be a nonzero right ideal of $R$. Let $U$ be the right Utumi quotient ring of $R$ and let $C$ be the center of $U$. If $G$ is a generalized derivation of $R$ such that $[[G(x), x], G(x)] = 0$ for all $x \in I$, then $R$ is commutative or there exist $a, b \in U$ such that $G(x) = ax + xb$ for all $x \in R$ and one of the following assertions is true: $$(1)\quad (a - \lambda)I = (0) = (b + \lambda)I \;\;\text{for some}\; \lambda \in C,$$ $$(2)\quad (a - \lambda)I = (0) \;\;\text{for some}\; \lambda \in C \;\;\text{and}\; b \in C.$$
Article (Ukrainian)
### Classification of finite commutative semigroups for which the inverse monoid of local automorphisms is permutable
Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 176-184
We give a classification of finite commutative semigroups for which the inverse monoid of local automorphisms is permutable.
Article (English)
### Vector bundles over noncommutative nodal curves
Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 185-199
We describe vector bundles over a class of noncommutative curves, namely, over noncommutative nodal curves of string type and of almost string type. We also prove that, in other cases, the classification of vector bundles over a noncommutative curve is a wild problem.
Article (English)
### On Agarwal - Pang-type integral inequalities
Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 199-209
We establish some new Agarwal–Pang-type inequalities involving second-order partial derivatives. In special cases, our results yield some interrelated known results and provide new estimates for inequalities of this type.
Article (English)
### Recognition of the groups $L_5(4)$ and $U_4(4)$ by the prime graph
Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 210-217
Let $G$ be a finite group. The prime graph of $G$ is the graph $\Gamma(G)$ whose vertex set is the set $\Pi(G)$ of all prime divisors of the order $|G|$ and two distinct vertices $p$ and $q$ of which are adjacent by an edge if $G$ has an element of order $pq$. We prove that if $S$ denotes one of the simple groups $L_5(4)$ and $U_4(4)$ and if $G$ is a finite group with $\Gamma(G) = \Gamma(S)$, then $G$ has a normal subgroup $N$ such that $\Pi(N) \subseteq \{2, 3, 5\}$ and $G/N \cong S$.
Article (English)
### On equalities involving integrals of the logarithm of the Riemann ζ-function and equivalent to the Riemann hypothesis
Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 218-228
Using the generalized Littlewood theorem about a contour integral involving the logarithm of an analytical function, we show how an infinite number of integral equalities involving integrals of the logarithm of the Riemann ζ-function and equivalent to the Riemann hypothesis can be established and present some of them as an example. It is shown that all earlier known equalities of this type, viz., the Wang equality, Volchkov equality, Balazard-Saias-Yor equality, and an equality established by one of the authors, are certain particular cases of our general approach.
Article (Ukrainian)
### Investigation of solutions of boundary-value problems with essentially infinite-dimensional elliptic operator
Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 229-236
We consider Dirichlet problems for the Poisson equation and linear and nonlinear equations with essentially infinite-dimensional elliptic operator (of the Laplace–Levy type). The continuous dependence of solutions on boundary values and sufficient conditions for increasing the smoothness of solutions are investigated.
Article (Russian)
### Boundary-value problems for a nonlinear hyperbolic equation with divergent part and Levy Laplacian
Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 237-244
We propose an algorithm for the solution of the boundary-value problem $U(0,x) = u_0,\;\; U(t, 0) = u_1$ and the external boundary-value problem $U(0, x) = v_0, \;\;U(t, x) |_{\Gamma} = v_1, \;\; \lim_{||x||_H \rightarrow \infty} U(t, x) = v_2$ for the nonlinear hyperbolic equation $$\frac{\partial}{\partial t}\left[k(U(t,x))\frac{\partial U(t,x)}{\partial t}\right] = \Delta_L U(t,x)$$ with divergent part and infinite-dimensional Levy Laplacian $\Delta_L$.
Article (English)
### On Shiba - Waterman space
Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 245-252
We give a necessary and sufficient condition for the inclusion of $\Lambda BV^{(p)}$ in the classes $H^q_{\omega}$.
Article (Ukrainian)
### Canonical form of polynomial matrices with all identical elementary divisors
Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 253-267
The problem of reducing polynomial matrices to the canonical form by using semiscalar equivalent transformations is studied. A class of polynomial matrices is singled out, for which the canonical form with respect to semiscalar equivalence is indicated. This form enables one to solve the classification problem for collections of matrices over a field up to similarity.
Brief Communications (Ukrainian)
### Control of linear dynamical systems by time transformations
Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 268-274
Necessary and sufficient conditions for the controllability of solutions of linear inhomogeneous integral equations are obtained.
Brief Communications (English)
### On the complexity of the ideal of absolute null sets
Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 275-276
Answering a question of Banakh and Lyaskovska, we prove that for an arbitrary countable infinite amenable group $G$ the ideal of sets having $\mu$-measure zero for every Banach measure $\mu$ on $G$ is an $F_{\sigma \delta}$ subset of $\{0,1\}^G$.
Brief Communications (Ukrainian)
### Spectral problem for discontinuous integro-differential operator
Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 277-282
A representation of solutions of a discontinuous integro-differential operator is obtained. The asymptotic behavior of the eigenvalues and eigenfunctions of this operator is described.
Brief Communications (Ukrainian)
### Diagonalizability of matrices over a principal ideal domain
Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 283-288
A square matrix is said to be diagonalizable if it is similar to a diagonal matrix. We establish necessary and sufficient conditions for the diagonalizability of matrices over a principal ideal domain.
https://infoscience.epfl.ch/record/176467 | ## Simple biset functors and double Burnside ring
Let G be a finite group and let k be a field. Our purpose is to investigate the simple modules for the double Burnside ring kB(G,G). It turns out that they are evaluations at G of simple biset functors. For a fixed finite group H, we introduce a suitable bilinear form on kB(G,H) and we prove that the quotient of kB(-,H) by the radical of the bilinear form is a semi-simple functor. This allows for a description of the evaluation of simple functors, hence of simple modules for the double Burnside ring.
Published in:
Journal of Pure and Applied Algebra, 217, 546-566
Year:
2013
Publisher:
Amsterdam, Elsevier
ISSN:
0022-4049
https://www.khanacademy.org/test-prep/gmat/problem-solving/v/gmat-math-53
# GMAT: Math 53
Video transcript
We're on problem 245. If x plus y is equal to a, and x minus y is equal to b, then what is-- they want to know what 2xy is equal to. Let's solve for x and y in terms of a and b, and then just figure out what this equals to. So we have two equations with two unknowns. Let's just add them together to solve for x. We get x plus x is 2x-- the y's cancel out-- is equal to a plus b. Or x is equal to a plus b over 2. Now if we had x plus y is equal to a-- I just re-wrote this. And now if we multiply both sides of this by negative 1, so we get minus x plus y is equal to minus b. And now add these two equations. So I'm essentially subtracting this equation from that one. That cancels out, so I get 2y is equal to a minus b, or y is equal to a minus b over 2. And now we can figure out what 2xy is equal to. 2xy is equal to 2 times a plus b over 2, times a minus b over 2. This 2 will cancel out with one of these 2's. And we're left with-- and what's a plus b times a minus b? It's a squared minus b squared over 2. Which is choice A. Problem 246. Let me do it in magenta. 246. A rectangular circuit board is to have width w inches-- let me draw it. So let's say it has width w inches, perimeter of p inches-- so let me just put that at the side right here-- perimeter of p inches, and area of k square inches. Which of the following equations must be true? And so they want us to relate this width to the area to the perimeter. Let me introduce another variable. Let's call this, right here, let's call this the height of the circuit board. So now we can do some interesting things. If that's the height, then that's also the height. If that's the width, then this is also the width. So let's see what the perimeter would be. It will be 2 times the width, plus 2 times the height is equal to the perimeter. And then we could also say width times height is equal to area. 
But if we can solve for height in terms of the perimeter and the width, then we could use that to get an expression that doesn't involve this variable. So let's do that. So if you divide both sides of this by 2, you get width plus height is equal to perimeter over 2. And then you get height is equal to perimeter over 2 minus width. And so the area, k, will be equal to the width times the height. Instead of writing an h there, let's write what we just figured out. p over 2 minus w. And then that is equal to pw/2 minus w squared. Let's see, when I look at the solution, they don't have any fractions in it, so let's multiple both sides of the equation by 2. We can ignore this. k is equal to pw/2 minus w squared. Multiply both sides by 2, you get 2k is equal to pw minus w squared. Let's add w squared the both sides. You get w squared plus 2k is equal to pw. Let's subtract, because all of the choices have them setting equal to 0. So then we could subtract pw from both sides, and you get-- oh wait, I made a mistake. If we multiply both sides of this by 2, 2 times k is 2k. 2 times pw/2 is pw. 2 times minus w squared is minus 2w squared. So in this step we have to add 2w squared to both sides. Sorry about that. So we have 2w squared plus 2k is equal to pw. Subtract pw from both sides, you get 2w squared minus pw, plus 2k is equal to 0. And that is choice E, right? 2w squared minus pw plus 2k. That is choice E. Next question. Problem 247. An arithmetic sequence is a sequence in which each term after the first is equal to the sum of the preceding term and a constant. If the list of letters shown above is an arithmetic sequence-- so they wrote p, r, s, t, u. So all they're saying is that, the difference between p and r is going to be some number. And the difference between r and s is going to be that same number. And the difference between s and t is going to be that same number. So an example of an arithmetic sequence, this could be 1, 2, 3, 4, 5. 
Because every number is one more than the one before it. So anyway, they say which of the following must also be an arithmetic sequence? So choice one, they write 2p, 2r, 2s, 2t, 2u. Well, let's just use our example. If this was p, r, s, t, and u, what is 2 times all of that? Well, then it'll be 2, 4-- no no, sorry-- it'll be, yeah, 2, 4, 6, 8, 10. So now instead of incrementing by one every time, we're incrementing by two every time. But it's still an arithmetic sequence because the difference between each number and the number before it is a constant. It's always equal to 2. So choice number one is definitely an arithmetic sequence. Choice two. And if you say, well, that works for that example, how do I know it'll work for all examples? So the other way to think about it is, whatever the difference is between p and r, now you're going to have twice the difference between 2p and 2r. And it was the same distance between r and s, now you're going to have twice that distance between 2r and 2s. So here in our particular example, we went from 1 to 2, but it could have been something else. OK, Statement two says, p minus 3, r minus 3-- so they're just shifting everything; I don't even have to write them all-- all the way to u minus 3. So they just took everything in this sequence and made them three less. But if r minus p is equal to some number, r minus 3, minus p minus 3 is going to be the exact same thing. I can even prove that to you, right? r minus p-- sorry, r minus 3, minus p minus 3. That's equal to r minus 3, minus p plus 3. And so these cancel out. So the difference between this and this ends up to be r minus p, the difference between that and that. And that should make sense intuitively, we're just shifting all the numbers down by three. So that shouldn't change the difference between the numbers. So two is still an arithmetic sequence. Statement three. p squared, r squared-- so we're just squaring all the numbers-- t squared, and u squared.
So let's just use our particular example. If p was 1, then p squared is 1, 2 squared is 4, s squared is 9-- right, 3 squared is nine, 4 squared is 16. Now what's the difference between the numbers? The difference here is three. The difference here is five. The difference here is seven. And this is interesting in and of itself, that-- well, first of all, let's just answer our question. The difference is now not constant. We have a different difference between each successive number, right? At the beginning, the difference is two every time. Here it's three, then it changes to five, then it changes to seven. So three is not an arithmetic sequence. So the answer is D, one and two. This is something interesting. And if you've never experimented with it, this is something to look at. I've always been fascinated by how the distance between the perfect squares increases by increasing odd numbers, which is just something to think about. Anyway, next problem. Actually, I have two problems left out of all of the problem-solving problems, so instead of just doing one problem and going over time, let me do two of them in the next video. See you soon.
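The three results worked through above can be sanity-checked numerically. Here is a quick sketch; the sample values plugged in are arbitrary choices of mine:

```python
# Problem 245: if x + y = a and x - y = b, then 2xy = (a^2 - b^2) / 2.
x, y = 7.0, 3.0
a, b = x + y, x - y
assert 2 * x * y == (a**2 - b**2) / 2

# Problem 246: width w, height h, perimeter p = 2w + 2h, area k = wh
# should satisfy 2w^2 - pw + 2k = 0 (choice E).
w, h = 4.0, 9.0
p, k = 2 * w + 2 * h, w * h
assert 2 * w**2 - p * w + 2 * k == 0

# Problem 247: doubling or shifting an arithmetic sequence keeps the
# differences constant; squaring does not (gaps become 3, 5, 7, ...).
seq = [1, 2, 3, 4, 5]

def diffs(s):
    """Consecutive differences of a sequence."""
    return [t - u for u, t in zip(s, s[1:])]

assert len(set(diffs([2 * t for t in seq]))) == 1   # I: still arithmetic
assert len(set(diffs([t - 3 for t in seq]))) == 1   # II: still arithmetic
assert diffs([t**2 for t in seq]) == [3, 5, 7, 9]   # III: not arithmetic
```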
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=49&t=56418&p=210045 | ## Approximation
ashwathinair
Posts: 113
Joined: Sat Aug 17, 2019 12:17 am
### Approximation
Can someone explain how and when we use approximation for ICE tables? Are there other situations you can use them as well? Is this only for acids and bases?
Caroline Beecher 2H
Posts: 51
Joined: Wed Nov 14, 2018 12:21 am
### Re: Approximation
You can use approximation for weak acids and bases problems when the K value is less than 10^-3. This means it is small enough to not make a huge difference when calculating concentrations, so you wouldn't need to use the quadratic formula. The 5% rule refers to evaluating whether the approximation was valid: getting a percent ionization of less than 5% means the approximation is good.
Posts: 100
Joined: Wed Sep 18, 2019 12:19 am
### Re: Approximation
ICE tables seem to be useful mainly for calculating the amounts of products and reactants present at equilibrium for a reaction.
Uisa_Manumaleuna_3E
Posts: 60
Joined: Wed Sep 21, 2016 2:56 pm
### Re: Approximation
In class Dr. Lavelle told us to use the approximation for sure when K is less than 10^-3. But it's a little tricky if we have 10^-4. Definitely use the approximation with 10^-5, but it's a little uncertain for 10^-4.
KBELTRAMI_1E
Posts: 108
Joined: Sat Jul 20, 2019 12:17 am
### Re: Approximation
Also, how do you know if you add or subtract x in the C section of the ICE table?
ng1D
Posts: 43
Joined: Wed Sep 18, 2019 12:17 am
### Re: Approximation
For a reaction proceeding in the forward direction, you would subtract x for the reactants and add x for the products (the signs are reversed if the reaction shifts in the reverse direction).
Sara Richmond 2K
Posts: 110
Joined: Fri Aug 30, 2019 12:16 am
### Re: Approximation
Approximation can be used for weak acids and bases problems when the K value is less than 10^-3. This means it is so small that it is not going to make a difference when calculating concentrations. After you use approximation, you should use the 5% rule to check if your calculation was valid/ if the change of concentration was really small enough to consider negligible. The 5% rule states that if the percent ionization is less than 5%, it is okay to use approximation.
MeeraBhagat
Posts: 95
Joined: Sat Aug 24, 2019 12:15 am
### Re: Approximation
If the K value is less than 10^-3, then the concentrations of products created are small enough in comparison to the original concentration of the reactant(s) that you can consider it insignificant enough to leave the “-x” term out of the reactants when calculating K.
Posts: 125
Joined: Sat Aug 17, 2019 12:17 am
### Re: Approximation
ashwathinair wrote:Can someone explain how and when we use approximation for ICE tables? Are there other situations you can use them as well? Is this only for acids and bases?
If the K value is less than 10^-3, then for the creation of the equation that is equivalent to the equilibrium constant, it is acceptable to remove the -x value from the equilibrium constant equation. This is because if the equilibrium constant is small, then the concentration of products is small, thus the concentration of reactants remains relatively large, so any change of x is basically nothing (but IT IS NOT 0).
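The approximation and the 5% check discussed in this thread can be sketched numerically. In this sketch the Ka value and initial concentration are made-up illustrative numbers, not from the thread:

```python
import math

# Weak acid HA <-> H+ + A-, initial concentration c0, acid constant Ka.
# (Both numbers are made up for illustration.)
Ka, c0 = 1.8e-5, 0.10

# Approximation: neglect x in (c0 - x), so Ka ~= x^2 / c0.
x_approx = math.sqrt(Ka * c0)

# Exact: solve x^2 + Ka*x - Ka*c0 = 0 with the quadratic formula.
x_exact = (-Ka + math.sqrt(Ka**2 + 4 * Ka * c0)) / 2

# 5% rule: the approximation is considered valid if percent ionization < 5%.
percent_ionization = 100 * x_approx / c0
assert percent_ionization < 5

# Here the approximate and exact answers agree to within about 1%.
assert abs(x_approx - x_exact) / x_exact < 0.01
```

Trying the same script with a large Ka (say 10^-1) makes the percent ionization exceed 5%, which is exactly when the thread says to fall back on the quadratic formula.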
http://math.stackexchange.com/questions/112075/how-to-interpret-square-brackets-and-valuations-in-propositional-logic/112098 | # How to interpret square brackets and valuations in propositional logic?
I am faced with this task:
"Let M be the set of all valuations. Let for each propositional formula $A: [A] = v \in M : v(A) = 1$. Show that:
$[A \land B] = [A] \cap [B]$
So, I get the idea what I'm supposed to prove, but the definitions escape me. I would like to know if
1) Does "Let M be the set of all valuations" mean that $M = \{0,1\}$ ?
2) What do the square brackets mean in logic? I suppose they are not matrices nor intervals.
2) How exactly am I supposed to read the $A: [A] = v \in M : v(A) = 1$ in English?
I understand that I'm supposed to show that when A is true and represented as a set $\{1\}$, its intersection with B gives a similar truth table to the boolean AND operator ($\{1\}\cap \{1\} = \{1\}$ and $\{1\} \cap \{0\} = \emptyset$). I'm just not getting there formally yet.
Any help in understanding the mathematical language is appreciated!
Suppose that our language has a certain collection of basic proposition letters. That set can be finite or infinite. But for simplicity let this set be finite, and (say) suppose that it consists of the letters $P_1,P_2,\dots,P_n$.
A valuation is an assignment of (truth) values to each of the proposition letters. Each value is $1$ or $0$ (or alternately T and F, there are plenty of other notations, but your course seems to use $1$ for true and $0$ for false).
Take such a valuation $v$. The valuation $v$ can be extended to all sentences of the language in the natural way. For example, if $P$ is a proposition letter, then we say that $v(\lnot P)=0$ if $v(P)=1$, and that $v(\lnot P)=1$ if $v(P)=0$. Here $\lnot$ is the logical "not": the notation in your course may be different. We continue to extend $v$ by saying what $v(X)$ is for more and more complex sentences. Since all sentences can be built up, starting from proposition letters, by using basic logical operations, it is enough to say what $v(\lnot A)$ is, if we know what $v(A)$ is, what $v(A\land B)$ is, if we know $v(A)$ and $v(B)$, and so on for the other logical operations.
For your problem, we review how one extends $v$ to $v(A\land B)$. By definition, $v(A\land B)=1$ if $v(A)$ and $v(B)$ are both equal to $1$, and $v(A\land B)=0$ otherwise. This captures the intuitive idea that $A\land B$ is true precisely if both $A$ and $B$ are true.
Let $A$ be a sentence. Your course uses the notation $[A]$ for the set of all valuations $v$ such that $v(A)=1$. In symbols, $$[A]=\{v\in M: v(A)=1\}.$$
That notation is specific to your course. There is no really standard notation for the concept. Square brackets are used in various other ways in mathematics. There is no connection with intervals, or matrices. The instructor, or textbook writer, just wanted a convenient abbreviation.
Suppose, for example, that the valuation $v$ assigns value $1$ to $P_1$, value $1$ to $P_2$, and value $0$ to $P_3$. Then $v(P_1\land P_2)=1$, so if $A=P_1\land P_2$, then $v(A)=1$, so $v\in [A]$. Similarly, $v(P_1\land P_3)=0$, so if $A=P_1\land P_3$, then $v\notin [A]$.
Informally, $[A]$ is the set of all valuations that make $A$ true. It is the collection of ways to assign truth/falsehood to the various proposition letters so that $A$ ends up being true.
Here is another example. Let $A$ be the sentence $P_1\lor \lnot P_1$. Whatever value we assign to the basic proposition letters $P_i$, the sentence $P_1\lor \lnot P_1$ will be true, so every valuation $v$ is an element of $[A]$. This is a fancy way of saying that $A$ is a tautology.
Finally, we get to the specific problem! You are asked to show that $$[A \land B] = [A] \cap [B].$$ Note that by definition, the left-hand side is a set, and the right-hand side is a set. We typically show that two sets $X$ and $Y$ are equal by showing that anything in $X$ is also in $Y$, and that anything in $Y$ is also in $X$. So there are two things that need to be shown. Sometimes they can be handled together, but it is safer to deal with them separately.
(i) We first show that if $v$ is in $[A \land B]$, then $v$ is in $[A] \cap [B]$. So suppose that $v$ is in $[A \land B]$. Then by definition, $v(A \land B)=1$. But by definition, this forces $v(A)=1$ and $v(B)=1$. Since $v(A)=1$, we have $v\in [A]$. Similarly, $v\in [B]$. So $v$ is in both $[A]$ and $[B]$, and therefore $v\in [A] \cap [B]$. With fewer symbols, because $v$ makes $A\land B$ true, it makes $A$ true, and also makes $B$ true, so $v$ in the intersection of $[A]$ and $[B]$.
(ii) We next show that if $v$ is in $[A] \cap [B]$, then $v$ is in $[A \land B]$. So suppose that $v$ is in $[A] \cap [B]$. Then $v\in [A]$ and $v\in [B]$. So $v(A)=1$ and $v(B)=1$. So $v(A\land B)=1$. So $v\in [A\land B]$.
Remark: I hope that you will see that once we figured out what we needed to show, actually showing it turned out to be pretty easy. The exercise you were given is basically a language lesson, much like being asked to use French irregular verbs in a sentence after learning about the basic grammar.
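For a finite set of proposition letters, the identity $[A \land B] = [A] \cap [B]$ can also be verified by brute force over all valuations. A small sketch — the encoding of sentences as functions on valuations is my own choice, not from the question:

```python
from itertools import product

letters = ["P1", "P2", "P3"]

# A valuation assigns 0 or 1 to each proposition letter;
# M is the set of all 2^3 = 8 valuations.
M = [dict(zip(letters, bits)) for bits in product([0, 1], repeat=3)]

# Represent sentences as functions from a valuation to {0, 1}.
def letter(name): return lambda v: v[name]
def NOT(A):       return lambda v: 1 - A(v)
def AND(A, B):    return lambda v: A(v) * B(v)
def OR(A, B):     return lambda v: max(A(v), B(v))

def bracket(A):
    """[A] = the set of valuations making A true (frozen to be hashable)."""
    return {frozenset(v.items()) for v in M if A(v) == 1}

A = letter("P1")
B = OR(letter("P2"), NOT(letter("P3")))

# The exercise: [A AND B] = [A] intersect [B].
assert bracket(AND(A, B)) == bracket(A) & bracket(B)

# A tautology such as P1 OR (NOT P1) is true under every valuation.
assert len(bracket(OR(letter("P1"), NOT(letter("P1"))))) == len(M)
```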
Thank you! This was exactly what I needed to understand the definition! The study material was very vague on this subject and I couldn't find anywhere a clear definition of the square brackets in this context. – Remolod Domelor Feb 22 '12 at 19:37
A valuation is a truth assignment to all propositions. All valuations are all possible truth assignments. $[A]$ means the set of all valuations, under which, the sentence in brackets evaluates to true. I think in the definition you are missing braces, it should be $[A] = \{ v \in M : v(A) = 1 \}$.
http://jwcn.eurasipjournals.com/content/2012/1/272 | Review
# Optimal resource allocation in wireless communication and networking
Alejandro Ribeiro
Author Affiliations
Department of Electrical and Systems Engineering, University of Pennsylvania, 200 S. 33rd St., Philadelphia, PA, 19096, USA
EURASIP Journal on Wireless Communications and Networking 2012, 2012:272 doi:10.1186/1687-1499-2012-272
Received: 20 February 2012 Accepted: 18 July 2012 Published: 23 August 2012
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
### Abstract
Optimal design of wireless systems in the presence of fading involves the instantaneous allocation of resources such as power and frequency with the ultimate goal of maximizing long term system properties such as ergodic capacities and average power consumptions. This yields a distinctive problem structure where long term average variables are determined by the expectation of a not necessarily concave functional of the resource allocation functions. Despite their lack of concavity it can be proven that these problems have null duality gap under mild conditions permitting their solution in the dual domain. This affords a significant reduction in complexity due to the simpler structure of the dual function. The article discusses the problem simplifications that arise by working in the dual domain and reviews algorithms that can determine optimal operating points with relatively lightweight computations. Throughout the article concepts are illustrated with the optimal design of a frequency division broadcast channel.
### Introduction
Operating variables of a wireless system can be separated in two types. Resource allocation variables p(h) determine instantaneous allocation of resources like frequencies and transmitted powers as a function of the fading coefficient h. Average variables x capture system’s performance over a large period of time and are related to instantaneous resource allocations via ergodic averages. A generic representation of the relationship between instantaneous and average variables is
$$x = \mathbb{E}\big[\,f_1\big(h, p(h)\big)\,\big] \qquad (1)$$
where f1(h,p(h)) is a vector function that maps channel h and resource allocation p(h) to instantaneous performance f1(h,p(h)). The system’s design goal is to select resource allocations p(h) to maximize ergodic variables x in some sense.
An example of a relationship having the form in (1) is a code division multiple access channel in which case h denotes the vector of channel coefficients, p(h) the instantaneous transmitted power, f1(h,p(h)) the instantaneous communication rate determined by the signal to interference plus noise ratio, and x the ergodic rates determined by the expectation of the instantaneous rates. The design goal is to allocate instantaneous power p(h) subject to a power constraint so as to maximize a utility of the ergodic rate vector x. This interplay of instantaneous actions to optimize long term performance is pervasive in wireless systems. A brief list of examples includes optimization of orthogonal frequency division multiplexing [1], beamforming [2,3], cognitive radio [4,5], random access [6,7], communication with imperfect channel state information (CSI) [8,9], and various flavors of wireless network optimization [10-18].
In many cases of interest the functions f1(h,p(h)) are nonconcave and as a consequence finding the resource allocation distribution p(h) that maximizes x requires solution of a nonconvex optimization problem. This is further complicated by the fact that since fading channels h take on a continuum of values there is an infinite number of p(h) variables to be determined. A simple escape to this problem is to allow for time sharing in order to make the range of the functions f1(h,p(h)) convex and permit solution in the dual domain without loss of optimality. While the nonconcave function f1(h,p(h)) still complicates matters, working in the dual domain makes solution, if not necessarily simple, at least substantially simpler. However, time sharing is not easy to implement in fading channels.
In this article, we review a general methodology that can be used to solve optimal resource allocation problems in wireless communications and networking without resorting to time sharing [19,20]. The fundamental observation is that the range of the expectation E[f1(h,p(h))] is convex if the probability distribution of the channel h contains no points of positive probability (Section “Duality in wireless systems optimization”). This observation can be leveraged to show lack of duality gap of general optimal resource allocation problems (Theorem 1) making primal and dual problems equivalent. The dual problem is simpler to solve and its solution can be used to recover primal variables (Section “Recovery of optimal primal variables”) with reduced computational complexity due to the inherently separable structure of the problem Lagrangians (Section “Separability”). We emphasize that this reduction in complexity, as in the case of time sharing, just means that the problem becomes simpler to solve. In many cases it also becomes simple to solve, but this is not necessarily the case.
We also discuss a stochastic optimization algorithm to determine optimal dual variables that can operate without knowledge of the channel probability distribution (Section “Dual descent algorithms”). This algorithm is known to almost surely converge to optimal operating points in an ergodic sense (Theorem 5). Throughout the article concepts are illustrated with the optimal design of a frequency division broadcast channel (Section “Frequency division broadcast channel” in “Optimal wireless system design”, Section “Frequency division broadcast channel” in “Recovery of optimal primal variables”, and Section “Frequency division broadcast channel” in “Dual descent algorithms”).
One of the best known resource allocation problems in wireless communications concerns the distribution of power on a block fading channel using capacity-achieving codes. The solution to this problem is easy to derive and is well known to reduce to waterfilling across the fading gain, e.g., [21, p. 245]. Since this article can be considered as an attempt to generalize this solution methodology to general wireless communication and networking problems it is instructive to close this introduction by reviewing the derivation of the waterfilling solution. This is pursued in the following section.
#### Power allocation in a point-to-point channel
Consider a transmitter having access to perfect CSI h that it uses to select a transmitted power p(h) to convey information to a receiver. Using a capacity achieving code the instantaneous channel rate for fading realization h is log(1 + h p(h)/N0), where N0 denotes the noise power at the receiver end. A common goal is to maximize the average rate with respect to the probability distribution mh(h) of the channel gain h—which is an accurate approximation of the long term average rate—subject to an average power constraint P0. We can formulate this problem as the optimization program
$$P = \max_{p(h)\geq 0}\; \mathbb{E}\left[\log\left(1+\frac{h\,p(h)}{N_0}\right)\right] \quad \text{subject to} \quad \mathbb{E}\big[p(h)\big]\leq P_0 \qquad (2)$$
In most cases the fading channel h takes on a continuum of values. Therefore, solving (2) requires the determination of a power allocation function that maps nonnegative fading coefficients to nonnegative power allocations. This means that (2) is an infinite dimensional optimization problem which in principle could be difficult to solve. Nevertheless, the solution to this program is easy to derive and given by waterfilling as we already mentioned. The widespread knowledge of the waterfilling solution masks the fact that it is rather remarkable that (2) is easy to solve and begs the question of what are the properties that make it so. Let us then review the derivation of the waterfilling solution in order to pinpoint these properties.
To solve (2) we work in the dual domain. To work in the dual domain we need to introduce the Lagrangian, the dual function, and the dual problem. Introduce then the nonnegative dual variable λ ≥ 0 and define the Lagrangian associated with the optimization problem in (2) as
$$\mathcal{L}(p,\lambda) = \mathbb{E}\left[\log\left(1+\frac{h\,p(h)}{N_0}\right)\right] + \lambda\left(P_0 - \mathbb{E}\big[p(h)\big]\right) \qquad (3)$$
The dual function is defined as the maximum value of the Lagrangian across all functions p, which upon defining 𝒫 as the set of nonnegative functions can be written as
$$g(\lambda) = \max_{p\in\mathcal{P}}\; \mathcal{L}(p,\lambda) \qquad (4)$$
The dual problem corresponds to the minimization of the dual function with respect to all nonnegative multipliers λ,
$$D = \min_{\lambda\geq 0}\; g(\lambda) \qquad (5)$$
Since the objective in (2) is concave with respect to variables p(h) and the constraint is linear in p(h) the optimization problem in (2) is convex and as such it has null duality gap in the sense that P=D.
An entity that is important for the upcoming discussion is the primal Lagrangian maximizer function p(λ) whose values for given h are denoted as p(h,λ). This function is defined as the one that maximizes the Lagrangian for given dual variable λ
$$p(\lambda) = \operatorname*{argmax}_{p\in\mathcal{P}}\; \mathcal{L}(p,\lambda) \qquad (6)$$
Using the definition of the Lagrangian maximizer function we can write the dual function as g(λ) = 𝓛(p(λ), λ).
Computing values p(h,λ) of the Lagrangian maximizer function p(λ) is easy. To see that this is true rewrite the Lagrangian in (3) so that there is only one expectation
(7)
With the Lagrangian written in this form we can see that the maximization required by (6) can be decomposed into maximizations for each individual channel realization,
(8)
That the equality in (8) is true is a consequence of the fact that the expectation operator is linear and that there are no constraints coupling the selection of values p(h1) and p(h2) for different channel realizations h1 ≠ h2. Functional values p(h) in both sides of (8) are required to be nonnegative, but other than that we can select p(h1) and p(h2) independently of each other as indicated in the right hand side of (8).
Since the right hand side of (8) states that to maximize the Lagrangian we can select functional values p(h) independently of each other, values p(h,λ) of the Lagrangian maximizer function p(λ) defined in (6) are given by
(9)
The similarity between (6) and (9) is deceiving, as the latter is a much easier problem that involves a single variable. To find the Lagrangian maximizer value p(h,λ) from (9) it suffices to solve for the null of the derivative with respect to p(h). Doing this yields the Lagrangian maximizer
(10)
where the operator [x]+ := max(x, 0) denotes projection onto the nonnegative reals, which is needed because of the constraint p(h) ≥ 0.
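The maximizer in (10) can be illustrated numerically. The sketch below assumes the per-state rate is log(1 + h p(h)/N0), in which case the maximizer is the classical waterfilling expression [1/λ − N0/h]+; the function name and normalization are illustrative, not taken from the text.

```python
import numpy as np

def waterfill(h, lam, N0=1.0):
    # Lagrangian maximizer p(h, lam) = [1/lam - N0/h]^+ of the per-state
    # problem max_{p >= 0} log(1 + h p / N0) - lam * p.
    return np.maximum(1.0 / lam - N0 / h, 0.0)

# Strong channels get power; channels below the water level get none.
h = np.array([0.1, 0.5, 1.0, 4.0])
p = waterfill(h, lam=0.8)
```

Note how the multiplier λ acts as a water level: only channels with h > λ N0 receive positive power.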
Of particular interest is the Lagrangian maximizer function p(λ⋆) corresponding to the optimal Lagrange multiplier λ⋆. Returning to the definition of the dual function in (4) we can bound D⋆ = g(λ⋆) as
(11)
Indeed, since D⋆ is given by a Lagrangian maximization it must equal or exceed the value of the Lagrangian for any function p, and for the optimal function p⋆ in particular. Considering the explicit Lagrangian expression in (3) we can write the right hand side of (11) as
(12)
Observe that since p⋆ is the optimal power allocation function it must satisfy the power constraint, so the constraint slack is nonnegative. Since the optimal dual variable satisfies λ⋆ ≥ 0, their product is also nonnegative, allowing us to transform (12) into the bound
(13)
where the equality is true because P⋆ and p⋆ are the optimal value and argument of the primal optimization problem (2). Combining the bounds in (11) and (13) yields
(14)
Using the equivalence P⋆ = D⋆ of primal and dual optimum values it follows that the inequalities in (13) must hold as equalities. One of these equalities, in particular, implies that the function p⋆ is the Lagrangian maximizing function corresponding to λ = λ⋆, i.e.,
(15)
The important consequence of (15) follows from the fact that the Lagrangian maximizer function p(λ) is easy to compute using the decomposition in (9). By extension, if the optimal multiplier λ⋆ is available, computation of the optimal power allocation function p⋆ also becomes easy. Indeed, setting λ = λ⋆ in (9) we can determine the values p⋆(h) of p⋆ as
(16)
which are explicitly given by (10) with λ = λ⋆.
To complete the problem solution we still need to determine the optimal multiplier λ⋆. We show a method for doing so in Section “Dual descent algorithms”, but it is important to recognize that this cannot be a difficult problem because the dual function is one dimensional and convex (dual functions of maximization problems are always convex). This has to be contrasted with the infinite dimensionality of the primal problem. By working in the dual domain we reduce the problem of determining the infinite dimensional optimal power allocation function to the determination of the one dimensional optimal Lagrange multiplier λ⋆.
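Because the expected power spent under waterfilling is monotone decreasing in λ, the optimal multiplier can already be found by a simple scalar search. The sketch below is a minimal illustration, assuming the waterfilling maximizer above, an illustrative exponential (Rayleigh power gain) fading model, and a Monte Carlo approximation of the expectation; it is not the dual descent method discussed later.

```python
import numpy as np

def expected_power(lam, h_samples, N0=1.0):
    # Monte Carlo estimate of E[p(h, lam)] under waterfilling.
    return np.mean(np.maximum(1.0 / lam - N0 / h_samples, 0.0))

def find_lambda(h_samples, q0, lo=1e-6, hi=1e6, iters=100):
    # E[p(h, lam)] decreases in lam, so bisection finds the multiplier
    # that meets the power budget q0 with equality.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if expected_power(mid, h_samples) > q0:
            lo = mid  # too much power spent: raise the price lam
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
h = rng.exponential(1.0, size=10_000)  # illustrative fading power gains
lam_star = find_lambda(h, q0=1.0)
```

The search is one dimensional regardless of how finely the fading continuum is sampled, which is precisely the dimensionality reduction afforded by the dual domain.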
Recapping the derivation of the optimal power allocation p⋆, we see that there are three conditions that make (2) simple:
(C1) Since the optimization problem is convex it is equivalent to its Lagrangian dual, implying that optimal primal variables can be determined as Lagrangian maximizers associated with the optimal multiplier λ⋆ [cf. (11)–(15)].
(C2) Due to the separable structure of the Lagrangian, determination of the optimal power allocation function is carried out through the solution of per-fading state subproblems [cf. (6)–(9)].
(C3) Because there is a finite number of constraints the dual function is finite dimensional even though there is an infinite number of primal variables.
Most optimization problems in wireless systems are separable in the sense of (C2) and have a finite number of constraints [cf. (C3)], but are typically not convex [cf. (C1)].
To illustrate this latter point consider a simple variation of (2) where instead of using capacity achieving codes we use adaptive modulation and coding (AMC) relying on a set of L communication modes. The l-th mode supports a rate αl and is used when the signal to noise ratio (SNR) at the receiver end is between βl−1 and βl. Letting γ be the received SNR and using indicator functions of the events βl−1 ≤ γ < βl, the communication rate function C(γ) for AMC can be written as
(17)
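A piecewise-constant rate function of the form in (17) can be sketched as follows; the mode rates and SNR thresholds below are made-up illustrative values, not taken from the text.

```python
import numpy as np

# Illustrative AMC modes: alphas[l] is the rate used when the SNR falls
# in [betas[l-1], betas[l]); mode 0 (rate 0) covers SNRs below betas[0].
alphas = np.array([0.0, 0.5, 1.0, 2.0])  # mode rates alpha_l
betas = np.array([1.0, 3.0, 7.0])        # SNR thresholds beta_l

def amc_rate(gamma):
    # Piecewise constant, hence discontinuous and nonconcave in gamma.
    return alphas[np.searchsorted(betas, gamma, side='right')]
```

The jumps at the thresholds βl are exactly what destroys concavity and, with it, condition (C1).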
The corresponding optimal power allocation problem subject to average power constraint q0 can now be formulated as
(18)
Similar to (2), the dual function of the optimization problem in (18) is one dimensional and its Lagrangian is separable in the sense that we can find Lagrangian maximizers by performing per-fading state maximizations. Alas, the problem is not convex because the AMC rate function C(γ) is not concave; in fact, it is not even continuous.
Since it does not satisfy (C1), solving (18) in the dual domain is not justified in principle. Nevertheless, the condition that allows determination of p⋆(h) as the Lagrangian maximizer p(h,λ⋆) [cf. (16)] is not the convexity of the primal problem but the lack of a duality gap. We’ll see in Section “Duality in wireless systems optimization” that this problem does have null duality gap as long as the probability distribution of the channel h contains no points of strictly positive probability. Thus, the solution methodology in (3)–(16) can be applied to solve (18) despite the discontinuous AMC rate function C(γ). This is actually a quite generic property that holds true for optimization problems where nonconcave functions appear inside expectations. We introduce this generic problem formulation in the next section.
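Even with a discontinuous AMC rate, the per-fading-state Lagrangian maximization stays tractable: between thresholds the rate is flat while the power cost grows, so the maximizer is either p = 0 or the minimum power reaching one of the mode thresholds, and a finite candidate search suffices. A sketch with illustrative mode parameters (not from the text):

```python
import numpy as np

alphas = np.array([0.0, 0.5, 1.0, 2.0])  # illustrative mode rates
betas = np.array([1.0, 3.0, 7.0])        # illustrative SNR thresholds

def amc_rate(gamma):
    return alphas[np.searchsorted(betas, gamma, side='right')]

def amc_maximizer(h, lam, N0=1.0):
    # Maximize amc_rate(h p / N0) - lam * p over p >= 0. Only p = 0 and
    # the threshold powers beta_l * N0 / h can be maximizers.
    candidates = [0.0] + [b * N0 / h for b in betas]
    values = [amc_rate(h * p / N0) - lam * p for p in candidates]
    return candidates[int(np.argmax(values))]
```

For a cheap power price λ the maximizer jumps to the highest mode; past a critical price it drops to zero, reflecting the discontinuity of the rate function.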
### Optimal wireless system design
Let us return to the relationship in (1) where h denotes the random fading state that ranges on a continuous space , p(h) the instantaneous resource allocation, f1(h,p(h)) a vector function mapping h and p(h) to instantaneous system performance, and x an ergodic average. The expectation in (1) is with respect to the joint probability distribution mh(h) of the vector channel h.
It is convenient to think of f1(h,p(h)) as a family of functions indexed by the parameter h with p(h) denoting the corresponding variables. Notice that there is one vector p(h) per fading state h, which translates into an infinite number of resource allocation variables if h takes on an infinite number of values. Consequently, it is adequate to refer to the set of all resource allocations as the resource allocation function. The number of ergodic limits x of interest, on the other hand, is assumed finite.
Instantaneous resource allocations p(h) are further constrained to a given bounded set . These restrictions define a set of admissible resource allocation functions that we denote as
(19)
Variables p(h) determine system performance over short periods of time. As such, they are of interest to the system designer but transparent to the end user except to the extent that they determine the value of the ergodic variables x. Therefore, we adopt as our design objective the maximization of a concave utility function f0(x) of the ergodic average x. Putting these preliminaries together we write the following program as an abstract formulation of optimal resource allocation problems in wireless systems
(20)
where we added further constraints on the set of ergodic averages x. These constraints are in the form of a bounded convex set inclusion and a concave function inequality f2(x)≥0. In the problem formulation in (20) the set is convex and the functions f0(x) and f2(x) are concave. The family of functions f1(h,p(h)) is not necessarily concave with respect to p(h) and the set is not necessarily convex. The sets and are assumed compact to guarantee that x and p(h) are finite. For the expectation in (20) to exist we need to have f1(h,p(h)) integrable with respect to the probability distribution mh(h) of the vector channel h. This imposes a (mild) restriction on the functions f1(h,p(h)) and the power allocation function p. Integrability is weaker than continuity.
For future reference define x⋆ and p⋆ as the arguments that solve (20)
(21)
The configuration pair (x⋆,p⋆) attains the optimum value P⋆ = f0(x⋆) and satisfies all the constraints of (20). Observe that the pair (x⋆,p⋆) need not be unique. It may be, and it is actually a common occurrence in practice, that more than one configuration is optimal. Thus, (21) does not define a pair of variables but a set of pairs of optimal configurations. As it does not lead to confusion we use (x⋆,p⋆) to represent both the set of optimal configurations and an arbitrary element of this set.
To write the Lagrangian we introduce a nonnegative Lagrange multiplier Λ, where the component λ1 ≥ 0 is associated with the ergodic average constraint and λ2 ≥ 0 with the constraint f2(x) ≥ 0. The Lagrangian of the primal optimization problem in (20) is then defined as
(22)
The corresponding dual function is the maximum of the Lagrangian with respect to the primal variables x and p
(23)
The dual problem is defined as the minimum value of the dual function over all nonnegative dual variables
(24)
and the optimal dual variables Λ⋆ are defined as the arguments that achieve the minimum in (24)
(25)
Notice that the optimal dual argument Λ⋆ is a set, as in the case of the primal optimal arguments, because there may be more than one vector that achieves the minimum in (24). As we do with the optimal primal variables (x⋆,p⋆), we use Λ⋆ to denote both the set of optimal dual variables and an arbitrary element of this set. A particular example of this generic problem formulation is presented next.
A common access point (AP) administers a power budget q0 to communicate with a group of J nodes. The physical layer uses frequency division so that at most one terminal can be active at any given point in time. The goal is to design an algorithm that allocates power and frequency to maximize a given ergodic rate utility metric while ensuring that rates are at least rmin and not more than rmax.
Denote as hi the channel to terminal i and define the vector h:=[h1,…,hJ]T grouping all channel realizations. In any time slot the AP observes the realization h of the fading channel and decides on suitable power allocations pi(h) and frequency assignments αi(h). Frequency assignments αi(h) are indicator variables αi(h)∈{0,1} that take value αi(h)=1 when information is transmitted to node i and αi(h)=0 otherwise. If αi(h)=1 communication towards i ensues at power pi(h) resulting in a communication rate C(hipi(h)/N0) determined by the SNR hipi(h)/N0. The specific form of the function C(hipi(h)/N0) mapping channels and powers to transmission rates depends on the type of modulation and codes used. One possibility is to select capacity achieving codes, in which case C is the logarithm of one plus the SNR. A more practical choice is to use AMC, in which case C(hipi(h)/N0)=CAMC(hipi(h)/N0) with CAMC(hipi(h)/N0) as given in (17).
Regardless of the specific form of C(hipi(h)/N0) we can write the ergodic rate of terminal i as
(26)
The factor C(hipi(h)/N0) is the instantaneous rate achieved if information is conveyed. The factor αi(h) indicates whether this information is indeed conveyed or not. The expectation weights the instantaneous capacity across fading states and is equivalent to the consideration of an infinite horizon time average.
Similarly, pi(h) denotes the power allocated for communication with node i, but this communication is attempted only if αi(h)=1. Thus, the instantaneous power utilized to communicate with i for channel realization h is αi(h)pi(h). The total instantaneous power is the sum of these products for all i and the long term average power consumption can be approximated as the expectation
(27)
that according to the problem statement cannot exceed the budget q0.
To avoid collisions between communication attempts the indicator variables αi(h) are restricted so that at most one of them is 1. Define the vector α(h):=[α1(h),…,αJ(h)]T corresponding to values of the frequency assignment function and introduce the set of vector functions
(28)
We can now express the frequency exclusion constraints as .
We still need to model the restriction that the achieved rate ri lies between rmin and rmax, but this is easily modeled as the constraint rmin ≤ ri ≤ rmax. Defining the vector r=[r1,…,rJ]T this constraint can be written as a set inclusion with the set defined as
(29)
We finally introduce a monotonic nondecreasing utility function U(ri) to measure the value of rate ri and formally state our design goal as the maximization of the aggregate utility summed over the J nodes. Using the definitions in (26)–(29) the operating point that maximizes this aggregate utility for a frequency division broadcast channel follows from the solution of the optimization problem
(30)
where we relaxed the rate expression in (26) to an inequality constraint, which we can do without loss of optimality because the utility U(ri) is monotonic nondecreasing.
The problem formulation in (30) is of the form in (20). The ergodic rates r in (30) are represented by the ergodic variables x in (20), whereas the power and frequency allocation functions p and α of (30) correspond to the resource allocation function p of (20). The set constraints of (30) map to their counterparts in (20) accordingly. There are no functions in (30) taking the place of the function f2(x) of (20). The function f1(h,p(h)) in (20) is a placeholder for the stacking of the functions αi(h)C(hipi(h)/N0) for different i and the negative of the power consumption in (30). The power constraint is not exactly of the form in (1) because q0 is a constant, not a variable, but this doesn’t alter the fundamentals of the problem. The functions αi(h)C(hipi(h)/N0) are not concave and the set of admissible frequency assignments is not convex. This makes the program in (30) nonconvex but is consistent with the restrictions imposed in (20).
To write the Lagrangian corresponding to this optimization problem introduce multipliers λ:=[λ1,…,λJ]T associated with the rate constraints and μ associated with the power constraint. The Lagrangian is then given by
(31)
The dual function, dual problem, and optimal dual arguments are defined as in (22)–(25) by replacing Λ with the pair (λ,μ). Since (30) is a nonconvex program we do not know a priori whether the dual problem is equivalent to the primal problem. We explore this issue in the following section.
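For intuition on the structure of the Lagrangian maximizers of (31), suppose capacity achieving codes are used so that the rate is log(1 + SNR). Then, for each fading realization, every user's candidate power is waterfilling at its own level λi/μ, and the frequency exclusion constraint activates the single user with the largest net benefit. The sketch below illustrates this structure only; the function names and normalization are assumptions, not the paper's algorithm.

```python
import numpy as np

def per_state_allocation(h, lam, mu, N0=1.0):
    # For one realization h (vector over users), maximize
    # sum_i alpha_i [lam_i * log(1 + h_i p_i / N0) - mu * p_i]
    # with at most one alpha_i = 1.
    p = np.maximum(lam / mu - N0 / h, 0.0)             # per-user waterfilling
    benefit = lam * np.log(1.0 + h * p / N0) - mu * p  # per-user net benefit
    i = int(np.argmax(benefit))
    alpha = np.zeros_like(h)
    if benefit[i] > 0:                                 # transmit only if worthwhile
        alpha[i] = 1.0
    return alpha, p * alpha

alpha, power = per_state_allocation(
    np.array([1.0, 4.0]), np.array([1.0, 1.0]), 1.0)
```

This is the familiar opportunistic scheduling structure: the multipliers λi and μ price rates and power, and the strongest (price-adjusted) user wins the slot.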
### Duality in wireless systems optimization
For any optimization problem the dual minimum D⋆ provides an upper bound for the primal optimum value P⋆. This is easy to see by combining the definitions of the dual function in (23) and the Lagrangian in (22) to write
(32)
Because the dual function value g(Λ) is obtained by maximizing the right hand side of (32), evaluating this expression at arbitrary primal variables yields a lower bound on g(Λ). Using a pair (x⋆,p⋆) of optimal primal arguments as this arbitrary selection yields the inequality
(33)
Since the pair (x⋆,p⋆) is optimal, it is feasible, which means the corresponding constraint slacks are nonnegative. Lagrange multipliers are also nonnegative by definition. Therefore, the last two summands in the right hand side of (33) are nonnegative, from which it follows that
(34)
The inequality in (34) is true for any Λ and therefore true for Λ = Λ⋆ in particular. It then follows that the dual optimum D⋆ upper bounds the primal optimum P⋆,
(35)
as we had claimed. The difference D⋆ − P⋆ is called the duality gap and provides an indication of the loss of optimality incurred by working in the dual domain.
For the problem in (20) the duality gap is null as long as the channel probability distribution mh(h) contains no point of positive probability, as we claim in the following theorem, which is a simple generalization of a similar result in [20].
#### Theorem 1
Let P⋆ denote the optimum value of the primal problem (20) and D⋆ that of its dual in (24), and assume there exists a strictly feasible point (x0,p0) that satisfies the constraints in (20) with strict inequality. If the channel probability distribution mh(h) contains no point of positive probability the duality gap is null, i.e.,
(36)
The condition on the channel distribution not having points of positive probability is a mild requirement satisfied by practical fading channel models including Rayleigh, Rice, and Nakagami. The existence of a strictly feasible point (x0,p0) is a standard constraint qualification requirement which is also not stringent in practice.
In order to prove Theorem 1 we take a detour in Section “Lyapunov’s convexity theorem” to define atomic and nonatomic measures along with the presentation of Lyapunov’s Convexity Theorem. The proof itself is presented in Section “Proof of Theorem 1”. The implications of Theorem 1 are discussed in Sections “Recovery of optimal primal variables” and “Dual descent algorithms”.
#### Lyapunov’s convexity theorem
The proof of Theorem 1 uses a theorem by Lyapunov concerning the range of nonatomic measures [22]. Measures assign nonnegative values to sets of a Borel field. When all points have zero measure the measure is called nonatomic, as we formally define next.
#### Definition 1 (Nonatomic measure)
Let w be a measure defined on the Borel field of subsets of a space. Measure w is nonatomic if for any measurable set E0 with w(E0)>0, there exists a subset E of E0, i.e., E ⊂ E0, such that w(E0)>w(E)>0.
Familiar measures are probability related, e.g., the probability of a set for a given channel distribution. To build intuition on the notion of nonatomic measure consider a random variable X taking values in [0,1] and [2,3]. The probability of landing in each of these intervals is 1/2 and X is uniformly distributed inside each of them; see Figure 1. The space is the real line, and the Borel field comprises all subsets of real numbers. For every subset define the measure of E as twice the integral of x, weighted by the probability distribution of X on the set E, i.e.,
(37)
Figure 1. Nonatomic measure. The random variable X is uniformly distributed in . The measure is nonatomic because all sets of nonzero probability include a smaller set of nonzero probability. Lyapunov’s convexity theorem (Theorem 2) states that the measure range is convex. The range of wXis the, indeed convex, interval [0,3].
Note that, except for the factor 2, the value of wX(E) represents the contribution of the set E to the expected value of X, and that when E is the whole space it holds wX(E)=2E[X]=3. According to Definition 1, wX(E) is a nonatomic measure of elements of the space. Every subset E0 with wX(E0)>0 includes at least an interval (a,b). The measure of the set E:=E0−((a + b)/2,b), formed by removing the upper half of (a,b) from E0, satisfies wX(E0)>wX(E)>0 as required for wX to be nonatomic.
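The measure in (37) can also be checked numerically. The Monte Carlo sketch below (sample size, seed, and names are arbitrary) estimates wX(E) = 2 E[X 1{X ∈ E}] for interval sets E:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
u = rng.random(n)
# X is uniform on [0,1] with probability 1/2 and uniform on [2,3] otherwise.
x = np.where(rng.random(n) < 0.5, u, 2.0 + u)

def w_X(lo, hi):
    # Monte Carlo estimate of w_X(E) = 2 E[X 1{X in E}] for E = [lo, hi].
    return 2.0 * np.mean(x * ((x >= lo) & (x <= hi)))
```

Evaluating `w_X` on the whole support recovers the total measure 3, and on [0,1] the value 1/2, matching the range [0,3] quoted in Figure 1.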
To contrast this with an example of an atomic measure consider a random variable Y landing equiprobably in [0,1] or at the point 5/2; see Figure 2. In this case, the measure is atomic because the set E0={5/2} has strictly positive measure. The only strict subset of E0 is the empty set, whose measure is null.
Figure 2. Atomic measure. The random variable Y lands with equal probability at Y=5/2 and uniformly in the interval [0,1]. The measure is atomic because the set {5/2} has strictly positive probability and no set other than the empty set is strictly included in {5/2}. Theorem 2 does not apply. The range of wY(E) is the nonconvex union of the intervals [0,1/2] and [5/2,3].
#### Theorem 2 (Lyapunov’s convexity theorem [[22]]).
Consider nonatomic measures w1,…,wn on the Borel field of subsets of a space and define the vector measure w(E):=[w1(E),…,wn(E)]T. The range of the measure w is convex. I.e., if w(E1)=w1 and w(E2)=w2, then for any α∈[0,1] there exists a set E0 such that w(E0)=αw1 + (1−α)w2.
The difference between the distributions of X and Y is that Y contains a point of strictly positive probability, i.e., an atom. This implies the presence of delta functions in the probability density function of Y. Or, in a cleaner statement, the cumulative distribution function (cdf) of X is continuous whereas the cdf of Y is not.
Lyapunov’s convexity theorem, stated above, refers to the range of values taken by (vector) nonatomic measures.
Returning to the probability measures defined in terms of the probability distributions of the random variables X and Y, Theorem 2 asserts that the range of wX(E), i.e., the set of all possible values taken by wX, is convex. In fact, it is not difficult to verify that the range of wX is the convex interval [0,3] as shown in Figure 1. Theorem 2 does not claim anything about wY. In this case, it is easy to see that the range of wY is the (nonconvex) union of the intervals [0,1/2] and [5/2,3]; see Figure 2.
#### Proof of Theorem 1
To establish zero duality gap we consider a perturbed version of (20), obtained by perturbing the constraints used to define the Lagrangian in (22). The perturbation function P(Δ) assigns to each perturbation parameter Δ the solution of the (perturbed) optimization problem
(38)
The perturbed problem in (38) can be interpreted as a modified version of (20) where we allow the constraints to be violated by the amounts in Δ. To prove that the duality gap is zero, it suffices to show that P(Δ) is a concave function of Δ; see, e.g., [23].
Let Δ′ and Δ″ be an arbitrary given pair of perturbations. Let (x′,p′) be a pair of ergodic limits and resource allocation variables achieving the optimum value P(Δ′) corresponding to perturbation Δ′. Likewise, denote as (x″,p″) a pair that achieves the optimum value P(Δ″) corresponding to perturbation Δ″. For arbitrary α∈[0,1], we are interested in the solution of (38) under perturbation Δα:=αΔ′ + (1−α)Δ″. In particular, to show that the perturbation function P(Δ) is concave we need to establish that
(39)
The roadblock to establishing concavity of the perturbation function is the ergodic constraint, and more specifically the ergodic limit that appears in it. Let us then isolate the challenge by defining the ergodic limit span
(40)
The set contains all the possible values that the expectation can take as the resource allocation function p varies over the admissible set. When the channel distribution mh(h) contains no points of positive probability, this set is convex, as we claim in the following theorem.
#### Theorem 3.
Let the ergodic limit span be the set defined in (40). If the channel probability distribution mh(h) contains no point of positive probability this set is convex.
Before proving Theorem 3 let us apply it to complete the proof of Theorem 1. For doing that consider two arbitrary points y′ and y″ in the ergodic limit span. Since these points belong to the span there exist respective resource allocation functions p′ and p″ such that
(41)
If the ergodic limit span is a convex set as claimed by Theorem 3, we must have that for any given α the point yα:=αy′ + (1−α)y″ also belongs to it. In turn, this means there exists a resource allocation function pα for which
(42)
Further define the ergodic limit convex combination xα:=αx′ + (1−α)x″. We first show that the pair (xα,pα) is feasible for the problem with perturbation Δα.
The convex combination xα satisfies the set constraint because the constraint set is convex. We also have f2(xα)≥δ2,α because the function f2(x) is concave. To see that this is true simply notice that concavity of f2(x) implies that f2(xα) ≥ αf2(x′) + (1−α)f2(x″). But f2(x′)≥δ2′ and f2(x″)≥δ2″ because x′ and x″ are feasible in (38) with perturbations Δ′ and Δ″. Substituting these latter two inequalities into the previous one yields f2(xα)≥δ2,α according to the definition of Δα.
We are left to show that the pair (xα,pα) satisfies the ergodic constraint. For doing so recall that since (x′,p′) is feasible for perturbation Δ′ and (x″,p″) is feasible for perturbation Δ″ we must have
(43)
Perform a convex combination of the inequalities in (43) and use the definitions of xα:=αx′ + (1−α)x″ and Δα to write
(44)
Combining (42) and (44) it follows that the ergodic constraint holds, completing the proof that the pair (xα,pα) is feasible for the problem with perturbation Δα.
The utility yield of this feasible pair is f0(xα), which we can bound as
(45)
Since (x′,p′) is optimal for perturbation Δ′ we have f0(x′)=P(Δ′) and, likewise, f0(x″)=P(Δ″). Further noting that the optimal yield P(Δα) for perturbation Δα must equal or exceed the yield f0(xα) of the feasible point xα, we conclude that
(46)
The expression in (46) implies that P(Δ) is a concave function of the perturbation vector Δ. The duality gap is therefore null as stated in (36).
We proceed now to the proof of Theorem 3.
#### Proof.
(Theorem 3) Consider two arbitrary points y′ and y″. If these points belong to the ergodic limit span there exist respective resource allocation functions p′ and p″ such that
(47)
To prove that the span is a convex set we need to show that for any given α the point yα:=αy′ + (1−α)y″ also belongs to it. In turn, this means we need to find a resource allocation function pα for which
(48)
For this we will use Theorem 2 (Lyapunov’s convexity theorem). Consider the space of all possible channel realizations and the Borel field of all possible subsets of this space. For every set E define the vector measure
(49)
where the integrals are over the set E with respect to the channel distribution mh(h). A vector of channel realizations h is a point in the space. The set E is a collection of vectors h. Each of these sets is assigned the vector measure w(E) defined in terms of the power distributions p′(h) and p″(h). The entries of w(E) represent the contribution of realizations h∈E to the ergodic limits in (48). The first group of entries measures these contributions when the resource allocation is p′(h). Likewise, the second group of entries of w(E) denotes the contributions to the ergodic limits of the resource distribution p″(h).
Two particular sets that are important for this proof are the empty set E=∅ and the entire space. For the entire space, the integrals in (49) coincide with the expected value operators in (47). We write this explicitly as
(50)
For E=∅, or any other zero-measure set for that matter, we have w(∅)=0.
The measure w(E) is nonatomic. This follows from the fact that the channel distribution contains no points of positive probability combined with the requirement that the resource allocation values p(h) belong to bounded sets. Combining these two properties it is easy to see that there are no channel realizations with positive measure, i.e., w({h})=0 for all h. This is sufficient to ensure that w(E) is a nonatomic measure.
Since w(E) is nonatomic, it follows from Theorem 2 that the range of w is convex. Hence, the vector
(51)
belongs to the range of possible measures. Therefore, there must exist a set E0 whose measure is the vector in (51). Focusing on the entries of w(E0) that correspond to the resource allocation function p′ it follows that
(52)
The analogous relation holds for the entries of w(E0) corresponding to p″, i.e., (52) is valid if p′(h) is replaced by p″(h), but this fact is inconsequential for this proof.
Consider now the complement set E0c containing all channel realizations not in E0. Given this definition and the additivity property of measures, the measure of E0c equals the measure of the entire space minus w(E0). Combining the latter with (51) yields
(53)
We mimic the reasoning leading from (51) to (52), but now restrict the focus of (53) to the entries corresponding to p″. It therefore follows that
(54)
Define now the power distribution pα(h) coinciding with p′(h) for channel realizations h∈E0 and with p″(h) when h∈E0c, i.e.,
(55)
The resource distribution pα satisfies the set constraint in (19). Indeed, note that p′(h) and p″(h) are feasible in their respective problems and as such take values in the admissible sets for all channels h. Because for a given channel realization h it holds that either pα(h)=p′(h) when h∈E0 or pα(h)=p″(h) when h∈E0c, it follows that pα(h) is admissible for all channel realizations h. According to the definition in (19) this implies that pα belongs to the set of admissible resource allocation functions.
Let us now ponder the ergodic limit associated with the power allocation pα(h).
Using (52) and (54), the ergodic limits for power allocation pα(h) can be expressed in terms of p′(h) and p″(h) as
(56)
The first equality in (56) holds because the space is divided into E0 and its complement E0c. The second equality is true because when restricted to E0, pα(h)=p′(h), and when restricted to E0c, pα(h)=p″(h). The third equality follows from (52) and (54).
Comparing (56) with (48) we see that the power allocation pα yields the ergodic limit yα. Therefore yα belongs to the ergodic limit span, implying convexity of the span as we wanted to show. □
### Recovery of optimal primal variables
Having null duality gap means that we can work in the dual domain without loss of optimality. In particular, instead of finding the primal maximum P⋆ we can find the dual minimum D⋆, which we know are the same according to Theorem 1. A not so simple matter is how to recover an optimal primal pair (x⋆,p⋆) given an optimal dual vector Λ⋆. Observe that recovering optimal variables is more important than knowing the maximum yield because optimal variables are needed for system implementation. In this section we study the recovery of optimal primal variables (x⋆,p⋆) from a given optimal multiplier Λ⋆.
Start with an arbitrary, not necessarily optimal, multiplier Λ and define the primal Lagrangian maximizer set as
(57)
The elements of (x(Λ),p(Λ)) yield the maximum possible Lagrangian value for given dual variable. Comparing the definition of the dual function in (23) and that of the Lagrangian maximizers in (57) it follows that we can write the dual function g(Λ) as
(58)
Particularly important pairs of Lagrangian maximizers are those associated with a given optimal dual variable Λ⋆. As we show in the following theorem, these variables are related to the optimal primal variables (x⋆,p⋆).
#### Theorem 4.
For an optimization problem of the form in (20) let (x(Λ⋆),p(Λ⋆)) denote the Lagrangian maximizer set as defined in (57) corresponding to a given optimal multiplier Λ⋆. The optimal argument set is included in this set of Lagrangian maximizers, i.e.,
(59)
#### Proof
Let (x⋆,p⋆) be a particular pair of optimal primal arguments. Start by noting that the value of the dual function g(Λ⋆) is bounded below by the Lagrangian evaluated at (x,p)=(x⋆,p⋆)
(60)
Indeed, the inequality is true for any x and p because that is what being the maximum means.
Consider now the Lagrangian, which according to (22) we can write explicitly as
(61)
Since the pair (x⋆,p⋆) is an optimal argument of (20) it must be feasible, so the ergodic constraint slack is nonnegative and f2(x⋆)≥0. The multipliers are also nonnegative by definition. Thus, the last two summands in (61) are nonnegative, from where it follows that
(62)
Combining (60) and (62) and using the definitions D⋆=g(Λ⋆) and P⋆=f0(x⋆) yields
(63)
But since according to Theorem 1 the duality gap is null, D⋆=P⋆, the inequalities must hold as equalities. In particular, we have
(64)
which according to (58) means the pair (x⋆,p⋆) is a Lagrangian maximizer associated with Λ=Λ⋆. Since this is true for any pair of optimal variables (x⋆,p⋆), it follows that the set of optimal primal pairs is included in the set of Lagrangian maximizers as stated in (59). □
According to (59) optimal arguments can be recovered from Lagrangian maximizers associated with optimal multipliers. One has to take care to interpret this set inclusion properly. Equation (59) does not mean that we can always compute (x⋆,p⋆) by finding Lagrangian maximizers, and as such it may or may not be a useful result. If the Lagrangian maximizer pair is unique then the set of maximizers is a singleton and by extension so is the (included) set of optimal primal variables. In this case Lagrangian maximizers can be used as proxies for optimal arguments. When the set is not a singleton this is not possible and recovering optimal primal variables from optimal multipliers Λ⋆ is somewhat more difficult.
In general, problems in optimal wireless networking are such that the Lagrangian maximizer resource allocation functions p(Λ) are unique to within a set of zero measure. The ergodic limit Lagrangian maximizers x(Λ), however, are not unique in many cases. This is more a nuisance than a fundamental problem. Algorithms that find primal optimal operating points regardless of the characteristics of the set of Lagrangian maximizers are studied in Section “Dual descent algorithms”.
#### Separability
To determine the primal Lagrangian maximizers in (57) it is convenient to reorder terms in the definition in (22) to write
(65)
We say that in (22) the Lagrangian is grouped by dual variables because each dual variable appears in only one summand. The expression in (65) is grouped by primal variables because each summand contains a single primal variable, if we interpret the function p as a single term and the expectation as a weighted sum (which is not strictly true, but close enough for an intuitive interpretation).
Writing the Lagrangian as in (65) simplifies computation of Lagrangian maximizers. Since there are no constraints coupling the selection of optimal x and p in (65) we can separate the determination of the pair (x(Λ),p(Λ)) in (57) into the determination of the ergodic limits
(66)
and the resource allocation function
(67)
The computation of the resource allocation function in (67) can be further separated. The set as defined in (19) constrains separate values p(h) of the function p but does not couple the selection of p(h1) and p(h2) for different channel realizations h1 ≠ h2. Further observing that expectation is a linear operation, we can separate (67) into per-fading state subproblems of the form
(68)
The absence of coupling constraints in the Lagrangian maximization, which permits separation into the ergodic limit maximization in (66) and the per-fading maximizations in (68), is the fundamental difference between (57) and the primal problem in (20). In the latter, the selection of optimal x and p, as well as the selection of p(h1) and p(h2) for different channel realizations h1 ≠ h2, are coupled by the expectation constraint.
The decomposition in (68) is akin to the decomposition in (16) for the particular case of a point to point channel with capacity achieving codes. It implies that to determine the optimal power allocation p* it is convenient to first determine the optimal multiplier Λ*. We then proceed to exploit the lack of duality gap and the separable structure of the Lagrangian to compute values p*(h) of the optimal resource allocation independently of each other. The computation of optimal ergodic averages is also separated as stated in (66). This separation reduces computational complexity because of the reduced dimensionality of (66) and (68) with respect to that of (20).
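A small numerical sketch can make the separation concrete. Assuming a hypothetical per-state integrand (a rate term minus a power price, with made-up constants), per-state maximization as in (68) attains at least the value of any joint search over the whole allocation function:

```python
import math

# Toy illustration of the separation in (67)-(68): with no constraints
# coupling p(h1) and p(h2), maximizing an expectation over the whole
# function p is the same as maximizing each term separately.
# The integrand below and all constants are hypothetical.

h_vals = [0.5, 1.0, 2.0]                      # channel states
probs = [0.3, 0.4, 0.3]                       # their probabilities
lam = 0.7                                     # a fixed dual variable
p_grid = [4.0 * i / 400 for i in range(401)]  # per-state power grid

def integrand(h, p):
    # Lagrangian term inside the expectation: rate minus power price
    return math.log2(1.0 + h * p) - lam * p

# Per-state maximization: one small problem per channel state (cf. (68))
p_sep = [max(p_grid, key=lambda p: integrand(h, p)) for h in h_vals]
val_sep = sum(w * integrand(h, p) for w, h, p in zip(probs, h_vals, p_sep))

# Joint maximization over the whole function p (brute force, coarser grid)
coarse = p_grid[::40]
val_joint = max(
    sum(w * integrand(h, p) for w, h, p in zip(probs, h_vals, (p0, p1, p2)))
    for p0 in coarse for p1 in coarse for p2 in coarse
)
```

The per-state solution upper bounds the joint grid search because each summand of the expectation is optimized independently, which is exactly the exchange of maximization and expectation described above.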
For the separation in (66) and (68) to be possible we just need to have a nonatomic channel distribution and ensure existence of a strictly feasible point as stated in the hypotheses of Theorem 1. These two properties are true for most optimal wireless communication and networking problems. A particular example is discussed in the following section.
Consider the optimal frequency division broadcast channel problem in (30), whose Lagrangian is given by the expression in (31). Terms in (31) can be rearranged to uncover the separable structure of the Lagrangian
(69)
This rearrangement is equivalent to the generic transformation leading from (22) to (65). As we observed after (65) the computation of Lagrangian maximizing ergodic limits and resource allocation functions can be separated as can the computation of resource allocation values corresponding to different fading channel realizations. This is the case in this particular example. In fact, there is more separability to be exploited in (69).
With regard to the primal ergodic variables r, notice that each Lagrangian maximizing rate ri(λ,μ) depends only on λi, so that we can compute each ri(λ,μ)=ri(λi) separately as
(70)
This can be easily computed because U(ri) is a one-dimensional concave function. As a particular case consider the identity utility U(ri)=ri. Since the Lagrangian becomes a linear function of ri, the maximum occurs at either rmax or rmin depending on the sign of 1−λi. When λi=1 the Lagrangian becomes independent of ri. In this case any value in the interval [rmin,rmax] is a Lagrangian maximizer. Putting these observations together we have
(71)
Notice that the Lagrangian maximizer ri(λi) is not unique if λi=1. Therefore, if λi*=1 it is not possible to recover the optimal rate from the Lagrangian maximizer corresponding to the optimal multiplier. In fact, an optimal multiplier λi*=1 is uninformative with regard to the optimal ergodic rate, as it just implies that the optimal rate lies in [rmin,rmax], which we know is true because this is the feasible range of ri. If you think this is an unlikely scenario because it is too much of a coincidence to have λi*=1, think again. Having λi*=1 is the most likely situation. If λi*≠1 the optimal rate is either rmin or rmax. However, the capacity bounds rmin and rmax are selected independently of the remaining system parameters. It is quite unlikely, indeed not true in most cases, that the optimal power and frequency allocation yields a rate determined by these arbitrarily selected parameters.
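The rule in (71) can be written down directly. A minimal sketch for the identity utility, with the rate bounds passed as illustrative parameters:

```python
def rate_maximizer(lam_i, r_min=0.0, r_max=2.0):
    """Lagrangian-maximizing ergodic rate for the identity utility U(r) = r,
    implementing the piecewise rule in (71).  Bounds r_min, r_max are
    illustrative defaults.

    The Lagrangian term (1 - lam_i) * r_i is linear in r_i, so the maximizer
    is an endpoint of [r_min, r_max] unless lam_i == 1, in which case every
    point of the interval is a maximizer."""
    if lam_i < 1.0:
        return r_max              # positive slope: push the rate up
    if lam_i > 1.0:
        return r_min              # negative slope: push the rate down
    return (r_min, r_max)         # lam_i == 1: any value in the interval
```

The returned interval in the lam_i == 1 branch mirrors the non-uniqueness discussed above: at the critical multiplier value, every feasible rate is a maximizer.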
As we observed in going from (67) to (68), determination of the optimal power and frequency allocation functions requires maximization of the terms inside the expectation. This implies solving the optimization problems
(72)
where we opened up the constraint into its per-fading state components.
The maximization in (72) can be further simplified. Begin by noting that, irrespective of the value of α(h), the best possible power allocation pi(h,λ,μ) of terminal i is the one that maximizes its potential contribution to the sum in (72), i.e.,
(73)
If αi(h)=1 this contribution is added to the sum in (72). If it is multiplied by αi(h)=0 it is not added to the sum in (72). Either way pi(h,λ,μ) as given by (73) is the optimal power allocation for terminal i.
To determine the frequency allocation α(h,λ,μ) define the discriminants
(74)
which we can use to rewrite (72) as
(75)
Since at most one αi(h)=1 in (75), the best we can do is to select the terminal with the largest discriminant when that discriminant is positive. If all discriminants are negative the best we can do is to make αi(h)=0 for all i.
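The selection rule just described can be sketched as a short helper (hypothetical function; the input is the list of per-terminal discriminants in (74)):

```python
def assign_frequency(discriminants):
    """Frequency assignment solving (75): set alpha_i = 1 for the terminal
    with the largest discriminant if that discriminant is positive, and
    leave the channel unassigned (all alpha_i = 0) otherwise.
    Hypothetical helper; input is a list of per-terminal discriminants d_i."""
    alphas = [0] * len(discriminants)
    i_best = max(range(len(discriminants)), key=lambda i: discriminants[i])
    if discriminants[i_best] > 0:
        alphas[i_best] = 1
    return alphas
```

At most one entry of the output is 1, which enforces the integer frequency-sharing constraint of the broadcast channel.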
The Lagrangian maximizers pi(h,λ,μ) and α(h,λ,μ) in (73) and (75) are almost surely unique for all values of λ and μ. In particular, optimal allocations pi*(h) and α*(h) can be obtained by setting λ=λ* and μ=μ* in (73)–(75).
### Dual descent algorithms
Determining optimal dual variables Λ* is easier than determining optimal primal pairs (x*,p*) because there is a finite number of multipliers and the dual function is convex. Since the dual function is convex, descent algorithms are guaranteed to converge towards the optimum value, which means we just need to determine descent directions for the dual function.
Descent directions for the dual function can be constructed from the constraint slacks associated with the Lagrangian maximizers. To do so, consider a given Λ and use the definition of the Lagrangian maximizer pair (x(Λ),p(Λ)) in (57) to write the dual function as
(76)
Further consider the Lagrangian evaluated at an arbitrary multiplier M and at the primal Lagrangian maximizers corresponding to the given Λ. This Lagrangian lower bounds the dual function value g(M), which allows us to write
(77)
Subtracting (76) from (77) yields
(78)
Defining the vector with components
(79)
and recalling the multiplier definitions we can write
(80)
If the dual function g(Λ) is differentiable, the expression in (80) implies that this vector is its gradient. If the dual function is nondifferentiable, the vector defines a subgradient of the dual function. In either case its negative is a descent direction of the dual function. This can be verified by substituting M=Λ* in (80) to conclude that for any Λ≠Λ* it must be
(81)
Since the inner product of this vector and (Λ*−Λ) is negative, its negative and (Λ*−Λ) form an angle smaller than π/2. This can be interpreted as meaning that, standing at Λ, the negative of the subgradient points in the direction of Λ*.
Having a descent direction available, we can introduce a time index t and a stepsize εt to define the dual subgradient descent algorithm as the one with iterates λ(t) obtained through recursive application of
(82)
This algorithm is known to converge to optimal dual variables if the stepsize vanishes at a nonsummable rate, and to approach Λ* if the stepsize is constant; see e.g., [20], Section 6.
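As a hedged one-dimensional sketch of (82), the snippet below runs subgradient descent with the vanishing, nonsummable stepsize εt = 1/(t+1) on a toy convex function standing in for the dual; it is not the dual of (23):

```python
def dual_subgradient_descent(subgrad, lam0=0.0, iters=5000):
    """Sketch of the dual subgradient iteration (82) in one dimension.

    Uses the vanishing, nonsummable stepsize eps_t = 1/(t+1) and projects
    the multiplier onto the nonnegative reals.  `subgrad` returns a
    subgradient of the convex dual function at lam."""
    lam = lam0
    for t in range(iters):
        eps = 1.0 / (t + 1)
        lam = max(0.0, lam - eps * subgrad(lam))
    return lam

# Hypothetical convex "dual" g(lam) = |lam - 3| + 1, minimized at lam = 3;
# a valid subgradient is +1 to the right of 3 and -1 to the left.
lam_star = dual_subgradient_descent(lambda lam: 1.0 if lam > 3.0 else -1.0)
```

Because the stepsizes are nonsummable the iterate reaches the minimizer, and because they vanish the oscillation around it shrinks to zero, which is the convergence behavior cited above.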
A problem in implementing (82) is that computing the subgradient component in (79) is costly. To compute it, we need to evaluate an expectation in which each of the resource allocations p(h,Λ) follows from the solution of the optimization problem in (68). Therefore, to approximate the expectation we need to determine p(h,Λ) for a grid of channel values, which becomes impractical if h has large dimension. A Monte Carlo approximation could be computed, but that is also costly. Furthermore, to compute the expectation we need to know the probability distribution mh(h), which needs to be estimated from channel observations. To overcome these difficulties we replace the gradient in (82) by a stochastic subgradient, as we discuss in the following section.
Consider a given channel realization h and given multiplier Λ and define the vector
(83)
This definition is made such that the expected value of s1(h,Λ) with respect to the channel distribution is the subgradient component in (79). Thus, if we define the vector s(h,Λ) with components s1(h,Λ) as in (83), we have
(84)
Formally, (84) implies that s(h,Λ) is a stochastic subgradient of the dual function. Intuitively, (84) implies that s(h,Λ) is an average descent direction of the dual function because its expectation is a descent direction. Thus, if we draw independent channel realizations h(t) and replace the subgradient with s(h(t),Λ(t)) in (82), we expect to observe some sort of convergence towards optimum multipliers.
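The unbiasedness in (84) is easy to check numerically. In the toy sketch below, f1, p, and the channel distribution are hypothetical stand-ins; the sample average of the stochastic subgradient approaches the exact subgradient component:

```python
import random

random.seed(1)

# Toy check of (84): the per-sample slack s1(h, lam) in (83) is an unbiased
# estimate of the deterministic subgradient component in (79),
# x(lam) - E[ f1(h, p(h, lam)) ].  Names x_lam, p and f1 are illustrative.

x_lam = 1.0                           # ergodic-limit maximizer (held fixed)
p = lambda h: min(2.0, 1.0 / h)       # hypothetical per-state allocation
f1 = lambda h, pw: h * pw             # hypothetical instantaneous rate

def s1(h):                            # stochastic subgradient sample, cf. (83)
    return x_lam - f1(h, p(h))

# For h ~ U(0.25, 2): f1 = 2h on [0.25, 0.5) and 1 on [0.5, 2], so
# E[f1] = (0.1875 + 1.5) / 1.75 and the exact subgradient follows.
exact_subgrad = x_lam - (0.1875 + 1.5) / 1.75

N = 200_000
mc_subgrad = sum(s1(random.uniform(0.25, 2.0)) for _ in range(N)) / N
```

The Monte Carlo average of the per-sample slacks matches the exact subgradient to within the sampling error, which is what makes a single channel draw usable as a descent direction on average.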
The advantage of this substitution is that to compute the stochastic subgradient s(h,Λ) we do not need to evaluate an expectation, as is the case for the subgradient in (79). As a consequence, using stochastic subgradients as descent directions results in an algorithm that is computationally lighter. Perhaps more important, we can operate without knowledge of the channel probability distribution if we use the current channel realization h(t) as our channel sample. These observations motivate the introduction of the following dual stochastic subgradient descent algorithm.

(S1) Primal iteration. Given multipliers λ(t), observe the current channel realization h(t) and determine the primal Lagrangian maximizers [cf. (66) and (68)]
(85)
(S2) Dual stochastic subgradient. With the Lagrangian maximizers determined by (85) compute the stochastic subgradient of the dual function with components [cf. (83) and (79)]
(86)
(S3) Dual iteration. With stochastic subgradients as in (86) and given step size ε descend in the dual domain along the direction −s(t) [cf. (82)]
(87)
The core of the dual stochastic subgradient descent algorithm is the dual iteration (S3). The purpose of the primal iteration (S1) is to compute the stochastic subgradients in (S2) that are needed to implement the dual descent update in (S3). We can think of the primal variables x(Λ(t)) and p(h(t),Λ(t)) as a byproduct of the descent implementation.
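As a concrete illustration, the loop below sketches (S1)–(S3) on a toy instance: maximize an ergodic rate x subject to x ≤ E[log2(1+h p(h))] and an average power budget E[p(h)] ≤ P0. The channel distribution, constants, and the grid search used for the per-state maximization are all illustrative assumptions, not part of the original formulation:

```python
import math
import random

random.seed(0)

# Toy instance: maximize x s.t. x <= E[log2(1 + h p(h))], E[p(h)] <= P0,
# with x in [0, x_max] and p(h) in [0, p_max].  All constants are made up.
P0, x_max, p_max, eps, T = 1.0, 4.0, 4.0, 0.01, 30_000
grid = [p_max * i / 100 for i in range(101)]   # per-state power grid

lam, mu = 1.0, 1.0                             # dual variables
sum_x = sum_c = sum_p = 0.0

for t in range(T):
    h = random.uniform(0.1, 2.0)               # current channel realization

    # (S1) primal iteration: Lagrangian maximizers for the current h
    x = x_max if lam < 1.0 else 0.0            # maximize (1 - lam) * x
    p = max(grid, key=lambda q: lam * math.log2(1 + h * q) - mu * q)
    c = math.log2(1 + h * p)

    # (S2) stochastic subgradients: constraint slacks at the current sample
    s_lam = c - x                              # slack of x <= E[c]
    s_mu = P0 - p                              # slack of E[p] <= P0

    # (S3) dual iteration: descend and project onto the nonnegative orthant
    lam = max(0.0, lam - eps * s_lam)
    mu = max(0.0, mu - eps * s_mu)

    sum_x += x; sum_c += c; sum_p += p

avg_x, avg_c, avg_p = sum_x / T, sum_c / T, sum_p / T
```

Because the dual updates telescope, the time averages of the rate variable, the instantaneous capacity, and the power end up near-feasible even though the per-slot iterates x(t) and p(t) never converge, mirroring the behavior described above.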
Convergence properties depend on whether constant or time varying step sizes are used. If the stepsizes εt form a nonsummable but square summable series, i.e., Σt εt = ∞ and Σt εt² < ∞, then using a simple supermartingale argument it can be shown that λ(t) converges to Λ* almost surely [24]. If constant stepsizes εt=ε for all t are used, λ(t) does not converge to Λ* but it can be shown that λ(t) visits a neighborhood of the optimal multiplier set [19, Appendix]. Excursions away from this set are possible, but the set is visited infinitely often. The suboptimality of this set is controlled by the step size ε.
If λ(t) approaches or converges to Λ*, it follows as a consequence of Theorem 4 that an optimal primal pair (x*,p*) can be computed from the Lagrangian maximizers if the latter are unique. Observe that this does not require a separate computation because the Lagrangian maximizers are computed in the primal iteration (S1). One may object that at time t we do not compute the Lagrangian maximizer function p(Λ(t)) but just the single value p(h(t),Λ(t)). However, h(t) is the channel realization at time t, which means that p(h(t),Λ(t)) is exactly the value we need to compute to adapt to the current channel realization.
This permits reinterpretation of (S1)–(S3) as a policy to determine wireless systems' operating points. At time t we observe the current channel realization h(t) and determine the resource allocation p(h(t),Λ(t)), which we proceed to implement in the current time slot. In this case the core of the algorithm is the primal iteration (S1), and the dual variable λ(t) is an internal state that determines the operating point. Steps (S2) and (S3) are implemented to update this internal state so that as time progresses λ(t) approaches Λ* and the policy becomes optimal, because it chooses the best possible resource allocation adapted to the current channel realization h(t).
The reinterpretation of (S1)–(S3) as a policy to determine resource allocations p(t)=p(h(t),Λ(t)) associated with observed channel realizations h(t) motivates a redefinition of the concept of solution to the wireless optimization problem in (20). In principle, solving (20) entails finding the optimal resource allocation p* and the optimal ergodic average x* such that the problem constraints are satisfied [cf. (21)]. Heeding the interpretation of dual stochastic subgradient descent as a policy, we are interested instead in the optimality of the sequences of power allocations p(t) and average variables x(t) generated by (S1)–(S3). Further notice that since (S1)–(S3) is a stochastic algorithm, the sequences generated in a particular run are instantiations of respective random processes. We are therefore interested in the optimality of these processes.
To be more specific, consider the channel stochastic process whose instances are sequences of channel realizations drawn independently from the channel probability distribution mh(h). Suppose we are also given a sequence of variables x(t) drawn from a stochastic process and a resource allocation function p(h) that dictates the allocation of resources p(h(t)) for channel realization h(t). Assuming that the process is ergodic, the ergodic constraint is equivalent to
(88)
Indeed, since the processes involved are ergodic, the first limit is a constant that we could denote by x and the second limit is equivalent to the corresponding expectation.
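A short simulation illustrates this equivalence between the ergodic limit and the expectation for an i.i.d. (hence ergodic) channel process; f1, p, and the uniform channel law below are hypothetical stand-ins:

```python
import math
import random

random.seed(2)

# For an i.i.d. channel process the time average of f1(h(t), p(h(t)))
# converges to E[f1(h, p(h))], so the ergodic-limit constraint in (88)
# coincides with the expected-value constraint.  f1 and p are made up.

p = lambda h: 1.0 / (1.0 + h)      # hypothetical resource allocation
f1 = lambda h, pw: h * pw          # hypothetical rate: f1(h, p(h)) = h/(1+h)

T = 100_000
samples = [random.uniform(0.0, 3.0) for _ in range(T)]
time_avg = sum(f1(h, p(h)) for h in samples) / T

# Exact expectation for h ~ U(0, 3): E[h/(1+h)] = (3 - ln 4) / 3
exact = (3.0 - math.log(4.0)) / 3.0
```

The empirical time average matches the closed-form expectation up to Monte Carlo error, which is the content of the ergodicity assumption used in (88).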
Writing the constraint in the more cumbersome form shown in (88) has the advantage that the latter can be generalized to cases in which the given stochastic processes are not necessarily ergodic and realizations p(t) are more general than just functions of the channel state h(t). This concept of solution is formally defined next.
#### Definition 2
Consider the channel stochastic process whose instances are sequences drawn independently from the channel probability distribution mh(h). We say that stochastic processes with realizations x(t) and p(t) solve the problem in (20) if: (i) Instantaneous feasibility. Sequence values satisfy the set constraints of (20) for all times t. (ii) Almost sure average feasibility. Ergodic limits of the sequences x(t) and p(t) are feasible with probability 1, i.e.,
(89)
(90)
(iii) Almost sure optimality. The utility yield of the ergodic limit of x(t) is almost surely optimal, i.e.,
(91)
If the stochastic process is ergodic and the realizations p(t)=p(h(t)) are functions of current channel states, Definition 2 is equivalent to (21). Definition 2 is more general because it allows correlation between values of the processes and lets p(t) be more complex than just a function of the current channel realization h(t). This added generality is needed because the processes defined as per (S1)–(S3) are correlated, with p(t) a function of the current channel realization h(t) and the current Lagrange multiplier λ(t). These processes are close to optimal in the sense of Definition 2, as we describe in the following theorem.
#### Theorem 5 (Ergodic stochastic optimization [[19]])
Consider the optimization problem in (20) as well as the processes generated by the stochastic dual descent algorithm (S1)–(S3). Assume a finite bound on the second moment of the norm of the stochastic subgradients s(h,Λ) and the same hypotheses of Theorem 1. The sequences x(t) and p(t) are such that: (i) Feasibility. Items (i) and (ii) of Definition 2 hold true. (ii) Near optimality. The ergodic average of x(t) almost surely converges to a value with optimality gap proportional to the stepsize ε, i.e.,
(92)
The sequences x(t) and p(t) satisfy the constraints in (89) and (90) almost surely, and the objective function evaluated at the ergodic limit is within a constant proportional to ε of optimal. Since the feasible sets are compact, the bound on the stochastic subgradients is finite. Therefore, by reducing ε it is possible to make the yield arbitrarily close to P* and, as a consequence, the sequences arbitrarily close to optimal. It follows that the processes generated by (S1)–(S3) are arbitrarily close to processes that are optimal in the sense of Definition 2.
Variables p* and x* optimal in the sense of (21) are not computed by (S1)–(S3). Rather, (89) implies that, asymptotically, (S1)–(S3) draws resource allocation realizations p(t)=p(h(t),Λ(t)) and variables x(t):=x(Λ(t)) that are close to optimal as per Definition 2. The important point here is that having a procedure to generate stochastic processes close to optimal in the sense of Definition 2 is sufficient for practical implementation.
An example application of the dual stochastic subgradient descent algorithm (S1)–(S3) is discussed in the next section.
To implement dual stochastic descent for the frequency division broadcast channel we need to specify the primal iteration (S1) and the dual iterations (S2)–(S3). To specify the primal iteration (S1) we need to compute Lagrangian maximizers, for which it suffices to recall the expressions in Section "Frequency division broadcast channel" of "Recovery of optimal primal variables". For the ergodic rate ri we make λi=λi(t) in (70) to conclude that the primal iterate ri(t)=ri(λi(t)) is
(93)
For the power allocations pi(t)=pi(h(t),λ(t),μ(t)) and the frequency assignments α(t)=α(h(t),λ(t),μ(t)) we need to set the multipliers to λ=λ(t) and μ=μ(t) and also set the value of the channel to its current state h=h(t). This substitution in (73) yields the power allocation
(94)
To determine the frequency assignments α(t) we first substitute λ=λ(t), μ=μ(t), and h=h(t) in (74) to compute the discriminants di(t)=di(h(t),λ(t),μ(t))
(95)
from where we conclude that the frequency assignment α(t)=α(h(t),λ(t),μ(t)) is given by the solution of [cf. (75)]
(96)
Recall that since at most one αi(h)=1 in (96), the optimal frequency allocation is to make αi(h)=1 for the terminal with the largest discriminant when that discriminant is positive. If all discriminants are negative we make αi(h)=0 for all i.
The ESO algorithm for optimal resource allocation in broadcast channels is completed with an iteration in the dual domain [cf. (83) and (87)]
(97)
As per Theorem 5, iterative application of (93)–(97) yields sequences ri(t), αi(t) and pi(t) such that: (i) the sum utility for the ergodic limits of ri(t) is almost surely within a small constant of optimal; (ii) the power constraint in (27) and the rate constraints in (26) are almost surely satisfied in an ergodic sense. This result holds despite the presence of the non-convex integer constraint on the frequency assignments α(h), the non-concave function C(hipi(t)/N0), the lack of access to the channel's probability distribution, and the infinite dimensionality of the optimization problem.
#### Numerical results
The dual stochastic subgradient descent algorithm for optimal resource allocation in frequency division broadcast channels defined by (93)–(97) is simulated for a system with J=16 nodes. Three AMC modes corresponding to capacities 1, 2 and 3 bits/s/Hz are used, with transitions at SINRs of 1, 3 and 7. Fading channels are generated as i.i.d. Rayleigh with average powers 1 for the first four nodes, i.e., j=1,…,4, and 2, 3 and 4 for subsequent groups of 4 nodes. Noise power is N0=1 and the average power available is q0=3. The rate of packet acceptance is constrained to 0≤ri(t)≤2 bits/s/Hz. The optimality criterion is proportional fair scheduling, i.e., U(ri)=log(ri) for all i. Step size is ε=0.1.
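The simulated fading model can be reproduced with a few lines. Under the stated i.i.d. Rayleigh assumption the channel power gains are exponentially distributed with the listed average powers; the sketch below (variable names illustrative) generates them and checks the empirical means:

```python
import random

random.seed(3)

# i.i.d. Rayleigh fading: the channel power gains h_j(t) are exponentially
# distributed, with means equal to the stated average powers (1 for nodes
# 1-4, then 2, 3, 4 for the subsequent groups of four nodes).

J = 16
avg_power = [1 + j // 4 for j in range(J)]   # [1]*4 + [2]*4 + [3]*4 + [4]*4

def draw_channels():
    # expovariate takes the rate, i.e., the inverse of the desired mean
    return [random.expovariate(1.0 / avg_power[j]) for j in range(J)]

T = 50_000
means = [0.0] * J
for _ in range(T):
    for j, v in enumerate(draw_channels()):
        means[j] += v / T
```

Drawing one such channel vector per slot and feeding it to (93)–(97) reproduces the simulation setup; the empirical means recover the configured average powers.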
Figure 3 shows the evolution of dual variables λi(t) and corresponding rates ri(t) for representative nodes i=1, with average channel power 1, and i=9, with average channel power 3. The time average rate is also shown. Neither multipliers λi(t) nor rates ri(t) converge, but ergodic rates do converge. Multiplier λ1(t) associated with node 1 is larger than multiplier λ9(t) of node 9. This improves fairness of resource allocation by increasing the chances of allocating user 1 even when the channel h1(t) is smaller than h9(t) (recall that channel h9(t) is stronger on average). Convergence of the algorithm is ratified by Figures 4 and 5. Figure 4 shows the evolution of the objective and the dual function value g(t):=g(λ(t),μ(t)). Notice that the objective value is decreasing towards the maximum objective. This is not a contradiction, because the variables are infeasible but approach feasibility as t grows. The dual function's value is an upper bound on the maximum utility and it can be observed to approach the objective as t grows. Eventually, the objective value becomes smaller than the dual value as expected. Figure 5 corroborates satisfaction of the power constraint in (27) and the rate constraints in (26). The amount by which the power constraint (27) is violated is shown at the top. At the bottom we show the corresponding figure for the rate constraint in (26). Since there are J of these constraints we show the minimum and maximum violation. All constraints are satisfied as t grows. The resulting power allocations appear in Figure 6 for channels with different average powers. Power allocation is opportunistic in that power is allocated only when channel realizations are above average.
Figure 3. Primal and dual iterates in dual stochastic gradient descent. Evolution of dual variables λi(t) and rates ri(t) for representative nodes with different average channel powers for the algorithm in (93)–(97) is shown. Multipliers λi(t) and rates ri(t) do not converge, but ergodic rates do.
Figure 4. Optimal frequency division broadcast channel. Objective value and dual function’s value g(t):=g(λ(t),μ(t)) for the algorithm in (93)–(97) are shown along with lines marking optimal utility and 90% of optimal yield. Utility yield becomes optimal as time grows.
Figure 5. Power and capacity constraints. Feasibility as time grows is corroborated for the power constraint in (27) (top) and rate constraints in (26) (bottom). For the rate constraint we show the maximum and minimum value of constraint violation.
Figure 6. Power allocations. Power allocated as a function of channel realization is shown for channels with average power (top) and (bottom). The resulting power allocation is opportunistic in that power is allocated only when channel realizations are above average.
### Conclusions
This article reviews recent results which state that problems of the form in (20) in which nonconcave functions appear inside expectations have null duality gap as long as the probability distribution of the fading coefficient h contains no points of strictly positive probability. Lack of duality gap permits solution in the dual domain leading to a substantial reduction in the computational cost of determining optimal operating points of the wireless system. Working in the dual domain leads to a solution methodology that can be interpreted as a generalization of the derivation of the waterfilling power allocation in point to point channels reviewed in Section “Power allocation in a point-to-point channel”.
Specifically, the problem of determining the optimal resource allocation function p* in (20) is challenging due to its infinite dimensionality and lack of convexity. However, in the dual domain we need to determine the optimal multiplier Λ* that minimizes the dual function in (23). This is simpler because the dual function is convex and finite dimensional. Once we have found an optimal dual variable we can determine optimal operating points as Lagrangian maximizers. In doing so we can exploit the separable structure of the Lagrangian to decompose the optimization problem into the per fading state subproblems in (68). We emphasize that solving the optimization programs in (68) is not necessarily easy if the dimensionality of h is large. Nevertheless, solving (68) is always simpler than solving (20), and in some cases plain simple. Lack of duality gap and Lagrangian separability are further exploited to propose the dual stochastic subgradient descent algorithm (S1)–(S3), which converges to an optimal operating point with probability 1 in an ergodic sense.
There are three key points that permit the development of the solution methodology outlined in the previous paragraph:

Nonatomic fading distribution. A nonatomic fading distribution leads to the lack of duality gap. The fact that P*=D*, i.e., that primal and dual optimal values are the same, is what allows us to work in the dual domain without loss of optimality. In formal terms, lack of duality gap is the tool that we used to recover the optimal primal variables (x*,p*) from the optimal dual variable Λ* by determining the primal Lagrangian maximizers [cf. Theorem 4]. It is important to distinguish between convexity of the optimization problem and lack of duality gap. Null duality gap may follow from convexity, but convexity is rare in wireless communications systems. Lack of duality gap can also follow from a nonatomic fading distribution, which is a common occurrence in wireless systems.

Lagrangian separability. According to Theorem 4, null duality gap permits computation of the optimal pair (x*,p*) as the Lagrangian maximizers (x(Λ*),p(Λ*)). This is not a simplification per se, but it leads to one because the computation of the Lagrangian maximizer function p(Λ) can be separated into per fading state problems whose solution determines values p(h,Λ) of this function [cf. (66)–(68)]. The Lagrangian is separable in this sense because neither the constraints nor the objective function involve a nonlinear function coupling the selection of values p(h1) and p(h2) for different channel realizations h1 ≠ h2. Whenever p(h1) and p(h2) appear as part of the same constraint, they appear as different terms of an expectation operation. This absence of coupling is what permits exchanging the order of maximization and expectation in going from (67) to (68).

Finite number of constraints. Working in the dual domain is simpler than working in the primal domain because the dual function is finite dimensional whereas the primal problem is infinite dimensional. We have a finite dimensional dual function as long as the original optimization problem has a finite number of constraints.
Nonatomic fading distributions, Lagrangian separability, and having a finite number of constraints are properties that appear in many, indeed most, problems in optimal design of wireless systems. In such cases the methodology described in this article can be applied to their solution.
The use of dual problems as a shortcut to solve optimization problems in communications has a rich history [25-28]; see also [29] for a comprehensive treatment. Lack of duality gap in non-convex optimization problems has also been observed in the context of asymmetric digital subscriber lines [30,31]. In network optimization, lack of duality gap leads to the optimality of layered architectures, which renders the complexity of wireless networking essentially identical to the complexity of physical layer optimization [32-35]. For the use of techniques discussed here in the solution of specific problems we refer the reader to [36-42]. For further details on dual stochastic subgradient descent, the literature on convergence of subgradient descent algorithms [43-45] and stochastic subgradient descent [46-49] is of interest.
### Competing interests
The author declares that he has no competing interests.
### Acknowledgements
Work in this article is supported by the Army Research Office grant W911NF-10-1-0388 and the National Science Foundation award CAREER CCF-0952867. Part of the results in this article were derived while the author was at the University of Minnesota. The work presented here has benefited from discussions with Yichuan Hu, Dr. Nikolaos Gatsis, and Prof. Georgios B. Giannakis. The Associate Editor, Dr. Deniz Gunduz, provided valuable corrections to a draft version of this article.
### References
1. X Wang, GB Giannakis, Resource allocation for wireless multiuser OFDM networks. IEEE Trans. Inf. Theory 57(7), 4359–4372 (2011)
2. V Ntranos, N Sidiropoulos, L Tassiulas, On multicast beamforming for minimum outage. IEEE Trans. Wirel. Commun 8(6), 3172–3181 (2009)
3. ND Sidiropoulos, TN Davidson, ZQ Luo, Transmit beamforming for physical-layer multicasting. IEEE Trans. Signal Process 54(6), 2239–2251 (2006)
4. JA Bazerque, GB Giannakis, Distributed scheduling and resource allocation for cognitive OFDMA radios. Mobile Nets. Apps 13(5), 452–462 (2008)
5. Z Quan, S Cui, AH Sayed, Optimal linear cooperation for spectrum sensing in cognitive radio networks. IEEE J. Sel. Topics Signal Process 2, 28–40 (2008)
6. Y Hu, A Ribeiro, Optimal wireless networks based on local channel state information. IEEE Trans. Signal Process 60(9), 4913–4929 (September 2012)
7. Y Hu, A Ribeiro, Adaptive distributed algorithms for optimal random access channels. IEEE Trans. Wirel. Commun 10(8), 2703–2715 (2011)
8. Y Hu, A Ribeiro, Optimal wireless multiuser channels with imperfect channel state information. Proc. Int. Conf. Acoustics Speech Signal Process, vol. 1 (Kyoto, Japan, 2012), pp. 1–4
9. Y Hu, A Ribeiro, Optimal transmission over a fading channel with imperfect channel state information. Global Telecommun. Conf., vol. 1 (Houston, TX, 2011), pp. 1–5
10. L Chen, SH Low, M Chiang, JC Doyle, Cross-layer congestion control, routing and scheduling design in ad hoc wireless networks. Proc. IEEE INFOCOM, vol. 1 (Barcelona, Spain, 23–29 April 2005), pp. 1–13
11. M Chiang, SH Low, RA Calderbank, JC Doyle, Layering as optimization decomposition. Proc IEEE 95, 255–312 (2007)
12. A Eryilmaz, R Srikant, Joint congestion control, routing, and MAC for stability and fairness in wireless networks. IEEE J. Sel. Areas Commun 24(8), 1514–1524 (2006)
13. L Georgiadis, MJ Neely, Resource allocation and cross-layer control in wireless networks. Found Trends Netw 1, 1–144 (2006)
14. JW Lee, RR Mazumdar, NB Shroff, Opportunistic power scheduling for dynamic multi-server wireless systems. IEEE Trans. Wirel. Commun 5(6), 1506–1515 (2006)
15. X Lin, NB Shroff, R Srikant, A tutorial on cross-layer optimization in wireless networks. IEEE J. Sel. Areas Commun 24(8), 1452–1463 (2006)
16. MJ Neely, E Modiano, CE Rohrs, Dynamic power allocation and routing for time-varying wireless networks. IEEE J. Sel. Areas Commun 23, 89–103 (2005)
17. X Wang, K Kar, Cross-layer rate optimization for proportional fairness in multihop wireless networks with random access. IEEE J. Sel. Areas Commun 24(8), 1548–1559 (2006)
18. Y Yi, S Shakkottai, Hop-by-hop congestion control over a wireless multi-hop network. IEEE/ACM Trans. Netw 15(1), 133–144 (2007)
19. A Ribeiro, Ergodic stochastic optimization algorithms for wireless communication and networking. IEEE Trans. Signal Process 58(12), 6369–6386 (2010)
20. A Ribeiro, G Giannakis, Separation principles in wireless networking. IEEE Trans. Inf. Theory 56(9), 4488–4505 (2010)
21. S Boyd, L Vandenberghe, in Convex Optimization (Cambridge University Press, Cambridge, 2004)
22. AA Lyapunov, Sur les fonctions-vecteur complètement additives. Bull. Acad. Sci. URSS Sér. Math 4, 465–478 (1940)
23. RT Rockafellar, in Convex Analysis (Princeton University Press, Princeton, NJ, 1970)
24. NZ Shor, in Minimization Methods for Non-Differentiable Functions (Springer, Berlin, 1985)
25. FP Kelly, A Maulloo, D Tan, Rate control for communication networks: shadow prices, proportional fairness and stability. J. Oper. Res. Soc 49(3), 237–252 (1998)
26. SH Low, DE Lapsley, Optimization flow control, I: basic algorithm and convergence. IEEE/ACM Trans. Netw 7(6), 861–874 (1998)
27. SH Low, A duality model of TCP and queue management algorithms. IEEE/ACM Trans. Netw 11(4), 525–536 (2003). Publisher Full Text
28. SH Low, F Paganini, JC Doyle, Internet congestion control. IEEE Control Syst. Mag 22, 28–43 (2002)
29. R Srikant, in The Mathematics of Internet Congestion Control (Birkhauser, 2004)
30. ZQ Luo, S Zhang, Dynamic spectrum management: complexity and duality. IEEE J. Sel. Topics Signal Process 1(2), 57–73 (2008)
31. W Yu, R Lui, Dual methods for nonconvex spectrum optimization of multicarrier systems. IEEE Trans. Commun 54(7), 1310–1322 (2006)
32. RA Berry, EM Yeh, Cross-layer wireless resource allocation. IEEE Signal Process. Mag 21(5), 59–68 (2004). Publisher Full Text
33. MJ Neely, Energy optimal control for time-varying wireless networks. IEEE Trans. Inf. Theory 52(7), 2915–2934 (2006)
34. MJ Neely, E Modiano, CP Li, Fairness and optimal stochastic control for heterogeneous networks. Proc. IEEE INFOCOM, vol 3 ((Miami, FL 13–17, March 2005), pp), . 1723–1734
35. A Ribeiro, G Giannakis, Layer separability of wireless networks. Proc. Conf. on Info. Sciences and Systems, vol. 1 ((Princeton Univ), . Princeton, NJ, 2008), pp. 821–826
36. N Gatsis, A Ribeiro, G Giannakis, A class of convergent algorithms for resource allocation in wireless fading networks. IEEE Trans. Wirel. Commun 9(5), 1808–1823 (2010)
37. N Gatsis, A Ribeiro, G Giannakis, Cross-layer optimization of wireless fading ad-hoc networks. Proc. Int. Conf. Acoustics Speech Signal Process, vol. 1 ((Taipei, Taiwan, 2009), pp), . 2353–2356
38. Y Hu, A Ribeiro, Optimal wireless networks based on local channel state information. Proc. Int. Conf. Acoustics Speech Signal Process, vol. 1 ((Prague Czech Republic, 2011), pp), . 3124–3127
39. Y Hu, A Ribeiro, Adaptive distributed algorithms for optimal random access channels. Proc. Allerton Conf. on Commun. Control Computing, vol. 1 ((Monticello, 2010), pp), . 1474–1481
40. A Ribeiro, G Giannakis, Optimal FDMA over wireless fading mobile ad-hoc networks. Proc. Int. Conf. Acoustics Speech Signal Process, vol. 1 ((Las Vegas, NV, 2008), pp), . 2765–2768
41. A Ribeiro, T Luo, N Sidiropoulos, G Giannakis, Modelling and optimization of stochastic routing for wireless multihop networks. Proc. IEEE Int. Conf. on Computer Commun, vol. 1 ((Anchorage, AK, 2007), pp), . 1748–1756
42. A Ribeiro, N Sidiropoulos, G Giannakis, Optimal distributed stochastic routing algorithms for wireless multihop networks. IEEE Trans. Wirel. Commun 7(11), 4261–4272 (2008)
43. A Juditsky, G Lan, A Nemirovski, A Shapiro, Stochastic approximation approach to stochastic programming. SIAM J. Optim 19(4), 1574–1609 (2009). Publisher Full Text
44. T Larsson, M Patriksson, A Str omberg, Ergodic primal convergence in dual subgradient schemes for convex programming. Math. Progr 86(2), 283–312 (1999). Publisher Full Text
45. A Nedic, A Ozdaglar, Approximate primal solutions and rate analysis for dual subgradient methods. SIAM J. Optim 19(4), 1757–1780 (2009). Publisher Full Text
46. BT Polyak, Newstochasticapproximationtypeprocedures Autom, Remote Control 51, 937–946 (1990)
47. BT Polyak, AB Juditsky, Acceleration of stochastic approximation by averaging. SIAM J. Control Optim 30(4), 838–855 (1992). Publisher Full Text
48. A Ribeiro, Stochastic learning algorithms for optimal design of wireless fading networks. Proc. IEEE Workshop on Signal Process. Advances in Wireless Commun vol. 1 ((Marakech, Morocco, 2010), pp), . 1–5
49. A Ribeiro, Ergodic stochastic optimization algorithms for wireless communication and networking. Proc. Int. Conf. Acoustics Speech Signal Process, vol. 1, ((Dallas, TX, 2010), pp), . 3326–3329 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9367709755897522, "perplexity": 783.4198307620701}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637899702.24/warc/CC-MAIN-20141030025819-00153-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-r-section-r-2-fractions-exercise-set-page-r-15/5 | ## Algebra: A Combined Approach (4th Edition)
$\frac{13}{1}$ = 13
Simplify by dividing the numerator and the denominator by their greatest common divisor, $1$: $\frac{13}{1} = \frac{13 \div 1}{1 \div 1} = \frac{13}{1} = 13$. Result: 13
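As a quick sanity check (my addition, not part of the original answer), Python's `fractions` module agrees that an integer over $1$ is just the integer itself:

```python
from fractions import Fraction

# An integer numerator over a denominator of 1 simplifies to the integer itself.
assert Fraction(13, 1) == 13
print(Fraction(13, 1))  # → 13
```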
https://math.eretrandre.org/tetrationforum/showthread.php?tid=480&pid=5047
True or False Logarithm

bo198214 (Administrator) wrote on 07/27/2010, 04:53 AM (last modified 07/27/2010, 08:02 AM):

Hey, the following function $f_n$ approaches, if the limit for $n\to\infty$ exists, the intuitive logarithm to base $b$, i.e. the intuitive Abel function of $bx$ developed at 1:

$f_n(x)=-\sum_{k=1}^n \binom{n}{k}(-1)^{k}\frac{1-x^k}{1-b^k}$

The question is whether this is indeed the logarithm, i.e. whether $\lim_{n\to\infty} f_n(x) = \log_b(x)$ for $\left|1-\frac{x}{b}\right|<1$, provided that the limit exists at all. It has a certain similarity to Euler's false logarithm series (pointed out by Gottfried here), as it can indeed be proven that $f(b^m) = m$ for natural numbers $m$ (even for $m=0$, in contrast to Euler's series):

$f_n(b^m) = -\sum_{k=1}^n \binom{n}{k}(-1)^{k}\frac{1-b^{mk}}{1-b^k}$

If we now use that $\frac{1-y^m}{1-y}=\sum_{i=0}^{m-1} y^i$ for $y=b^k$, then we get

$f_n(b^m) = -\sum_{k=1}^n \binom{n}{k} (-1)^{k}\sum_{i=0}^{m-1} b^{ki} = \sum_{i=0}^{m-1}\left(1-\sum_{k=0}^n \binom{n}{k}(-1)^{k} b^{ki}\right) = \sum_{i=0}^{m-1} \left(1-(1-b^i)^n\right)$

Hence $\lim_{n\to\infty} f_n(b^m) = m$. But is this true also for non-integer $m$? Do we have some rules like $\lim_{n\to\infty} f_n(x^k)=k \lim_{n\to\infty} f_n(x)$, or even $\lim_{n\to\infty} f_n(xy)=\lim_{n\to\infty} f_n(x) + \lim_{n\to\infty} f_n(y)$?

Gottfried (Ultimate Fellow) wrote on 07/27/2010, 01:12 PM (last modified 07/27/2010, 01:31 PM):

Here is another view of a polynomial interpolation for the logarithm which does not provide the true Mercator series for the logarithm. It is just a quick-and-dirty reproduction of some analysis I've done recently, which I didn't yet document properly. It deals much with those $(b-1)^m$ and $(b^m-1)$ terms, so maybe you can see a relation to your own procedure...
We want, with some vector of coefficients X, a quasi-logarithmic solution for a base b:

V(b^m)~ * X = m

We generate a set of interpolation points for the b^m parameters, V(b^0)~, V(b^1)~, V(b^2)~, ..., and make this a matrix, call it VV. Note that this matrix is symmetric:

VV = matrix{r,c=0..inf}( b^(r*c) )

We generate a vector for the m-results, Z = [0,1,2,3,...]. Then the classical ansatz to find the coefficient vector X by Vandermonde interpolation is

VV * X = Z, hence X = VV^-1 * Z

But VV cannot be inverted in the case of infinite size. So we factorize VV into triangular and diagonal factors and invert those factors separately:

[L, D, U] = LU(VV)

Here VV is symmetric, thus U is the transpose of L, so actually [L, D, L~] = LU(VV). Moreover, we have the remarkable property that L is simply the q-analogue of the binomial matrix to base b, and D contains q-factorials. Thus we neither have to actually calculate the LU factorization nor the inversion; the entries of the inverted factors can be set directly. So we have LI = L^-1 and DI = D^-1 just by inserting the known values for the inverse q-binomial matrix (see the description of the entries at the end). Then, formally, the coefficient vector X could be computed by the product

X = (LI~ * DI * LI) * Z

But LI~ * DI * LI could imply divergent dot products (I didn't actually test this here), so we leave it with two separate factors:

W = LI~ * DI // upper triangular
WI = LI * Z // column vector; see the explicit description of the entries at the end

At this point, since we know explicitly the entries of W and WI, we could dismiss all the matrix stuff and proceed to the usual notation with the summation symbol and the known coefficients, and have very simple formulae... But well, since we are just here, let's proceed that way... :-)

I'll denote a formal matrix product, which cannot be evaluated, by the operator <*>.
Then we expect (at least for *integer m*) this to be a correct formula:

V(b^m)~ * W <*> WI = m

We compute the leftmost product first, and the result vector Y~ in V(b^m)~ * W = Y~ actually becomes row-finite: it just contains the q-binomials (m:k)_b for k=0..m. So in the formula (V(b^m)~ * W) <*> WI = m we actually have

[1, (m:1)_b, (m:2)_b, ..., 1, 0, 0, 0, ...] * WI = m

thus the product with WI can be done, and we get an exact (and correct) solution for integer m. So far, so good. However, this does not apply for fractional m: the vector Y is no longer finite, and approximations suggest that for all fractional values the formula is false.

Additional remark: because through the factor DI we get the same denominators as shown in the article on Euler's false logarithms, and the overall structure is very similar, I assume that this procedure simply provides the Taylor coefficients of that Eulerian series.

Code:
// description of entries in LI, DI and WI
// A quick inspection of an actual example gives the following (please crosscheck this!);
// the symbol (r:c) means the binomial r over c;
// the symbols x!_b and (r:c)_b denote the corresponding q-analogues to base b
LI = matrix{r=0..inf, c=0..r}( (-1)^(r-c) * b^((r-c):2) * (r:c)_b )
DI = diagonal( vector{r=0..inf}( 1/( r!_b * (b-1)^r * b^(r:2) ) ) )
WI = vector{r=0..inf}( if(r==0): 0; if(r>0): (-1)^(r-1) * (r-1)!_b * (b-1)^(r-1) )

Gottfried Helms, Kassel

Gottfried (Ultimate Fellow) wrote on 07/27/2010, 02:41 PM, quoting his previous post ("Then, formally, the coefficient vector X could be computed by the product X = (LI~ * DI * LI) * Z, but LI~ * DI * LI could imply divergent dot products..."):

Hmm, I just tried this with base 2 and 3, and actually the entries of X seem to be computable.
I get for the row $r=0$

$X[0] = - \sum_{k=1}^{\infty} \frac1{b^k - 1}$

and for a row $r>0$

$X[r] = - (-1)^r \frac{b^r }{b^r -1} \prod_{k=1}^r \frac 1{b^k-1}$

Of course the product expression can be rewritten as a q-factorial multiplied by powers of $(b-1)$:

$X[r] = - (-1)^r \frac{b^r}{(b^r-1)\,(b -1)^r \, r!_b}$

and then

$falselog(b^m) = -\sum_{k=1}^{\infty}\frac1{b^k-1} - \sum_{r=1}^{\infty} (-1)^r \frac{b^r}{(b^r-1) \, r!_b \, (b-1)^r}\,(b^m)^r$

which is correct for positive integer $m$ and wrong for other $m$. (But now it seems that I drifted far away from Henryk's formula, sorry.) (I forgot it, but we had this already: here.)

Gottfried Helms, Kassel

bo198214 (Administrator) wrote on 08/11/2010, 02:37 AM:

Now it's out: it is the *true* logarithm. Proof here.

andydude (Long Time Fellow) wrote on 04/25/2012, 09:37 PM, quoting the above:

For some reason, it took me over a year to find this post/paper. It is brilliantly written. Good job Henryk.

Regards, Andrew Robbins
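Both series in this thread can be checked numerically. The sketch below is my own (function names, the base, and the truncation depths are arbitrary choices, not from the thread): the partial sums $f_n$ from the opening post do approach $\log_b$ inside the stated region, while Gottfried's interpolation series hits the integers $m$ at $x=b^m$ but misses at non-integer arguments, exactly as claimed.

```python
from fractions import Fraction
from math import comb, log, sqrt

def f(n, x, b):
    """Partial sum f_n(x) = -sum_{k=1}^n C(n,k) (-1)^k (1 - x^k)/(1 - b^k)."""
    return -sum(comb(n, k) * (-1) ** k * (1 - x ** k) / (1 - b ** k)
                for k in range(1, n + 1))

def falselog(y, b=2.0, depth=40):
    """Truncation of -sum_k 1/(b^k-1) - sum_r (-1)^r b^r y^r / ((b^r-1) prod_{j<=r}(b^j-1))."""
    const = sum(1.0 / (b ** k - 1) for k in range(1, 60))
    total, prod = -const, 1.0
    for r in range(1, depth):
        prod *= b ** r - 1          # running product prod_{j=1}^r (b^j - 1)
        total -= (-1) ** r * b ** r * y ** r / ((b ** r - 1) * prod)
    return total

b = 2
print(float(f(30, Fraction(3, 2), b)), log(1.5, 2))  # both ~0.585: the true logarithm
print(f(30, Fraction(2), b))                         # exactly 1 for every n >= 1
print(falselog(2.0), falselog(4.0))                  # ~1 and ~2: exact at integer points
print(falselog(sqrt(2.0)))                           # ~0.47, not 0.5: false in between
```

Exact rational arithmetic (`Fraction`) is used for $f_n$ to avoid the heavy cancellation among the large binomial terms.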
https://www.physicsforums.com/threads/qed-explanation-for-charge.742297/ | # QED explanation for charge?
1. ### Zak
I have read some fairly vague descriptions of charge that say it can be looked at as the amplitude for a particle to exchange a photon.
For example, when two electrons repel, it is because a photon is emitted from one to the other, which would change the direction of both equally and oppositely due to the conservation of momentum. I believe this photon is known as a 'virtual photon'.
Firstly, could anyone confirm whether this is at all correct and, if so, how could the same concept be applied to the attraction between two oppositely charged particles?
Additionally, if charge is the result of photon emission, does that mean that all charged particles are constantly emitting photons in order to always be 'charged'? Would this explain why the electromagnetic force has infinite range?
2. ### WannabeNewton
Could you give some references? Paraphrasing often goes awry.
Charge is simply the conserved quantity resulting from the local (gauge) invariance of the Dirac Lagrangian under phase transformations, when the electromagnetic interaction is included in the Lagrangian.
4. ### MikeGomez
I think their explanation is bogus. You can’t use increased uncertainty in position to exactly position a photon, as they have done.
I believe that is correct.
I don’t think you could say that, because now you are talking about real photons. Accelerated charges do emit radiation, and there is still controversy about whether it is acceleration or jerk (change in acceleration) which causes this.
Look up synchrotron radiation, and the Larmor formula.
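For reference, the Larmor formula mentioned here gives the power radiated by an accelerating point charge, $P = q^2 a^2 / (6\pi\varepsilon_0 c^3)$ in SI units. A small sketch of my own (the acceleration value plugged in is arbitrary, chosen only for illustration):

```python
import math

def larmor_power(q, a, eps0=8.854e-12, c=2.998e8):
    """Larmor formula (SI): power radiated by a point charge q with acceleration a."""
    return q ** 2 * a ** 2 / (6 * math.pi * eps0 * c ** 3)

e = 1.602e-19                 # elementary charge [C]
print(larmor_power(e, 1e20))  # ~5.7e-14 W for a = 1e20 m/s^2
```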
5. ### Zak
my bad, I meant to say the description wasn't very good.
So does anybody have a better explanation for why two charges would attract each other, in relation to the photon exchanges between them?
6. ### MikeGomez
WannaBe’s explanation in post #2 is excellent. That might not be very satisfying to you if you are searching for a deep insight into the cause or origin of charge. The fact is that no one really knows.
Are you comfortable with that explanation? If so, reflect on what the mechanism might be, by which a photon is emitted from an electron.
We have a natural tendency to transfer what we think we understand about the macro-world into the micro-world, and the micro-world quite often punishes us for that. For example, electrons don't fly around the nucleus of the atom like planets in orbit, as was originally thought. In the case of charges "exchanging" photons, that seems to make sense to us for the case of two like charges repelling, because that is similar to our everyday experience of contact forces. If we throw a ball at a bottle and knock the bottle over, we observe that method of transfer of momentum and we think it applies to the micro-world of electrons and photons. But we really have no justification for that. The micro-world has an entirely new set of rules, and the way in which light interacts with matter is the realm of QED.
For now, I think a good way to think of it is that charges influence the field, and it is the polarity of the electromagnetic field which determines the direction of the emitting/absorbing photons. In that way of thinking, at least to me, the example that you have given makes just as much sense for attraction as for repulsion.
http://math.stackexchange.com/questions/216395/plotting-a-joint-probability-density-function | # Plotting a Joint Probability Density function
I have a problem where I have two independent variables each having a probability density function given by:
$p(s_1) = \frac{1}{2}\sqrt{3}$, when $s_1\leq\sqrt{3}$
and $0$, otherwise
And the probability density function is the same for the other variable.
When the joint probability density function is graphed, it is said that it will be a square. How?
Thanks for any help...
Is it the probability density you mean? – Golbez Oct 18 '12 at 16:05
Yeah... That's what I mean – shaunshd Oct 18 '12 at 16:09
That's not a probability density function, its integral is not $1$. Probably what is intended is $\frac{1}{2\sqrt{3}}$ when $|s_1|\le \sqrt{3}$. – André Nicolas Oct 18 '12 at 16:34
At the time I am writing this, the claimed probability density function is not a pdf, since its integral is not $1$.
Probably what is intended is $\dfrac{1}{2\sqrt{3}}$ when $|s_1|\le \sqrt{3}$.
We will change notation a little, and assume that we have two independent random variables $X$ and $Y$. Random variable $X$ has pdf $\dfrac{1}{2\sqrt{3}}$ when $|x|\le \sqrt{3}$, and $0$ when $|x|\gt \sqrt{3}$. Random variable $Y$ has pdf $\dfrac{1}{2\sqrt{3}}$ when $|y|\le \sqrt{3}$, and $0$ when $|y|\gt \sqrt{3}$.
Since $X$ and $Y$ are independent, their joint pdf is the product of the individual pdf.
Thus the joint pdf $f(x,y)$ is equal to $\dfrac{1}{12}$ when both $|x|$ and $|y|$ are $\le \sqrt{3}$, and $0$ elsewhere.
So the joint pdf is the constant $\dfrac{1}{12}$ on and inside the square with corners $(\sqrt{3},\sqrt{3})$, $(-\sqrt{3},\sqrt{3})$, $(-\sqrt{3},-\sqrt{3})$, and $(\sqrt{3},-\sqrt{3})$.
If we decide to ignore the parts of the world where the joint pdf is $0$, we have a constant density function on a square. A constant density function on a square is not the same thing as a square, but when we graph $z=f(x,y)$ in space, we will get a square "table" of constant height $\dfrac{1}{12}$.
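A quick numerical sketch (my addition, not part of the original answer) confirming that this constant height over the square really is a probability density, i.e., that the height $\frac{1}{12}$ times the area $(2\sqrt{3})^2 = 12$ gives total probability $1$:

```python
import math

s3 = math.sqrt(3)

def joint_pdf(x, y):
    """Product of the two independent uniform densities on [-sqrt(3), sqrt(3)]."""
    return 1.0 / 12.0 if abs(x) <= s3 and abs(y) <= s3 else 0.0

# Midpoint Riemann sum of the joint pdf over the square: should be ~1.
n = 400
h = 2 * s3 / n
total = sum(joint_pdf(-s3 + (i + 0.5) * h, -s3 + (j + 0.5) * h) * h * h
            for i in range(n) for j in range(n))
print(total)  # ~1.0
```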
Thank you...... – shaunshd Jan 3 '13 at 3:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9817284345626831, "perplexity": 87.96140558079355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397562.76/warc/CC-MAIN-20160624154957-00141-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/any-calculi-without-limits.320251/ | # Any calculi without limits?
1. Jun 16, 2009
### lolgarithms
Do you know of any nonstandard calculi that do not have to use limits or the standard part function? I don't really care how weird it is, as long as you can define the derivative and the integral.
Last edited: Jun 16, 2009
2. Jun 16, 2009
### Civilized
Smooth infinitesimal analysis might be what you're looking for: a nonstandard model, in the logician's sense, with true infinitesimals along with derivatives and integrals, but without limits and without any standard part function à la Robinson. The price to pay is that the law of the excluded middle does not hold in SIA, and in general it is not equivalent to standard analysis.
https://matheuscmss.wordpress.com/2019/02/08/breuillard-serts-joint-spectrum-i/ | Posted by: matheuscmss | February 8, 2019
## Breuillard-Sert’s joint spectrum (I)
Last November 2018, Romain Dujardin, Charles Favre, Thomas Gauthier, Rodolfo Gutiérrez-Romo and I started a groupe de travail around the preprint The joint spectrum by Emmanuel Breuillard and Cagri Sert.
My plan is to transcribe my notes from this groupe de travail in a series of posts, starting today with a summary of the first meeting, where an overview of the whole article was provided. As usual, all mistakes/errors in the sequel are my sole responsibility.
1. Introduction
Let ${M_d(\mathbb{C})}$ be the set of ${d\times d}$ matrices with complex entries. Given ${A\in M_d(\mathbb{C})}$, recall that its spectral radius ${r(A)}$ is given by Gelfand’s formula
$\displaystyle r(A) = \lim\limits_{n\rightarrow\infty}\|A^n\|^{1/n}$
More generally, given a compact subset ${S\subset M_d(\mathbb{C})}$, recall that its joint spectral radius of ${S}$ (introduced by Rota–Strang in 1960) is the quantity
$\displaystyle R(S) := \lim\limits_{n\rightarrow\infty} \sup\limits_{g_1,\dots, g_n\in S} \|g_1\dots g_n\|^{1/n} = \lim\limits_{n\rightarrow\infty} \sup\limits_{g\in S^n} \|g\|^{1/n}$
where ${S^n:=\{g_1\dots g_n: g_1,\dots, g_n\in S\}}$.
Remark 1 By submultiplicativity (or, more precisely, Fekete’s lemma), the limit defining ${R(S)}$ always exists.
Remark 2 ${R(S)}$ is independent of the choice of ${\|.\|}$. In particular, ${R(S) = R(g S g^{-1})}$ for all ${g\in GL_d(\mathbb{C})}$.
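As a concrete illustration (my own sketch, with arbitrarily chosen ${2\times 2}$ matrices, not from the paper), for small ${n}$ one can evaluate the quantity ${\sup_{g\in S^n}\|g\|^{1/n}}$ from the definition directly:

```python
import itertools
import math
from functools import reduce

def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def opnorm(M):
    """Operator 2-norm of a 2x2 matrix: sqrt of the top eigenvalue of M^T M."""
    a = M[0][0] ** 2 + M[1][0] ** 2
    b = M[0][0] * M[0][1] + M[1][0] * M[1][1]
    c = M[0][1] ** 2 + M[1][1] ** 2
    lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    return math.sqrt(lam)

def jsr_estimate(S, n):
    """sup_{g in S^n} ||g||^(1/n), the quantity whose limit defines R(S)."""
    return max(opnorm(reduce(matmul, w)) ** (1.0 / n)
               for w in itertools.product(S, repeat=n))

A = [[2.0, 0.0], [0.0, 0.5]]   # R({A, B}) = 2 here, attained along powers of A
B = [[0.0, 1.0], [1.0, 0.0]]   # a swap, of norm 1
print([round(jsr_estimate([A, B], n), 6) for n in (1, 2, 4, 8)])  # → [2.0, 2.0, 2.0, 2.0]
```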
The joint spectral radius appears naturally in several areas of Mathematics (such as wavelets and control theory), and my first contact with this notion occurred through a subfield of Dynamical Systems called ergodic optimization (where one considers an observable ${f}$ and one seeks to maximize ${\int f d\mu}$ among all invariant probability measures ${\mu}$ of a given dynamical system).
The goal of the Breuillard–Sert article is two-fold: they introduce a notion of joint spectrum of ${S}$ and they show that it vastly refines previous related concepts such as the joint spectral radius, the Benoist cone, etc.
Today, our plan is to provide an overview of some of the main results obtained by Breuillard–Sert. For this sake, we divide this post into two sections: the first one contains a potpourri of prototypical versions of Breuillard–Sert's theorems, and the last section provides the precise statements whose proofs will be discussed in subsequent posts in this series.
2. A potpourri of results about the joint spectrum
2.1. Existence of the joint spectrum
Before introducing the definition of joint spectrum, we need the following notations. Given ${g\in GL_d(\mathbb{C})}$, its Cartan vector ${\kappa(g)\in\mathbb{R}^d}$ is
$\displaystyle \kappa(g) = (\log a_1(g),\dots,\log a_d(g))$
where ${a_1(g)\geq\dots\geq a_d(g)>0}$ are the singular values of ${g}$. In particular, if ${S\subset GL_d(\mathbb{C})}$ is compact, then
$\displaystyle \kappa(S):=\{\kappa(g): g\in S\}$
is a compact subset of ${\mathbb{R}^d}$. Also, we denote by ${S^n:=\{g_1\dots g_n: g_i\in S \,\forall\,i=1,\dots,n\}}$.
Theorem 1 (Breuillard–Sert) If the monoid ${\Gamma = \langle S \rangle}$ generated by ${S}$ acts irreducibly on ${\mathbb{C}^d}$, then ${\frac{1}{n}\kappa(S^n)}$ converges in the Hausdorff topology to a compact subset ${J(S)}$ called the joint spectrum of ${S}$.
Remark 3 The technical “irreducibility” assumption on ${\Gamma}$ is not strictly necessary: what is important here is the reductiveness of the Zariski closure ${G}$ of ${\Gamma}$ (i.e., ${G}$ contains no non-trivial unipotent normal subgroup) and the irreducibility assumption is just a simple condition ensuring that the Zariski closure is reductive.
Remark 4 Similarly to Remark 2 on the joint spectral radius ${R(S)}$, the joint spectrum ${J(S)}$ is invariant under conjugation, i.e., ${J(S)=J(gSg^{-1})}$ for all ${g\in GL_d(\mathbb{C})}$.
Remark 5 The geometry of the joint spectrum ${J(S)}$ allows one to recover several classical quantities associated to ${S}$. For instance, ${\log R(S) = \max\{x_1: (x_1,\dots,x_d)\in J(S)\}}$ and the lower joint spectral radius ${\lim\limits_{n\rightarrow\infty} \min\limits_{g\in S^n} \|g\|^{1/n}}$ verifies
$\displaystyle \log \left(\lim\limits_{n\rightarrow\infty} \min\limits_{g\in S^n} \|g\|^{1/n}\right) = \min\{x_1: (x_1,\dots,x_d)\in J(S)\}.$
More generally, if ${\rho}$ is a representation of ${GL_d(\mathbb{C})}$ (e.g., the ${k}$-th exterior power), then
$\displaystyle R(\rho(S)) = \max\{n_1x_1+\dots+n_dx_d: (x_1,\dots,x_d)\in J(S)\}$
where ${(n_1,\dots,n_d)\in\mathbb{N}^d}$ is the highest weight of ${\rho}$ (e.g., ${(\underbrace{1,\dots, 1}_{k},0, \dots, 0)}$ in the case of the ${k}$-th exterior power).
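To see the objects of Theorem 1 and Remark 5 numerically, here is a small sketch of my own (the generating set is an arbitrary choice, not an example from the paper). It computes the finite sets ${\frac{1}{n}\kappa(S^n)}$ from singular values; the maximum of the first coordinates recovers ${\log R(S)}$, and since both generators below have ${|\det|=1}$, every Cartan vector lies on the line ${x_1+x_2=0}$.

```python
import itertools
import math
from functools import reduce

def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def cartan(M):
    """Cartan vector (log a_1(M), log a_2(M)) of an invertible 2x2 matrix."""
    a = M[0][0] ** 2 + M[1][0] ** 2
    b = M[0][0] * M[0][1] + M[1][0] * M[1][1]
    c = M[0][1] ** 2 + M[1][1] ** 2
    s1 = math.sqrt((a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2))
    s2 = abs(M[0][0] * M[1][1] - M[0][1] * M[1][0]) / s1   # a_1 a_2 = |det M|
    return (math.log(s1), math.log(s2))

def kappa_Sn(S, n):
    """The finite set (1/n) kappa(S^n), as rounded points of R^2."""
    pts = set()
    for w in itertools.product(S, repeat=n):
        x1, x2 = cartan(reduce(matmul, w))
        pts.add((round(x1 / n, 6), round(x2 / n, 6)))
    return pts

S = [[[2.0, 0.0], [0.0, 0.5]], [[0.0, 1.0], [1.0, 0.0]]]
for n in (2, 4, 6):
    pts = kappa_Sn(S, n)
    print(n, max(p[0] for p in pts))   # max first coordinate, here log 2 ~ 0.693147
```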
The proof of Theorem 1 uses the notion of proximal elements (also appearing in the proof of the Tits alternative via a ping-pong argument). More concretely, recall that a matrix is proximal when it has a unique eigenvalue of maximal modulus. In this context, the idea behind Theorem 1 when ${\Gamma}$ is Zariski dense in ${GL_d(\mathbb{C})}$ can be explained as follows. An important theorem of Abels–Margulis–Soifer ensures that there exists a finite subset ${F\subset \Gamma}$ such that for each ${g\in GL_d(\mathbb{C})}$ one can find ${f\in F}$ so that the matrix ${gf}$ has a simple spectrum, i.e., ${gf}$ induces a proximal element in all exterior power representations of ${GL_d(\mathbb{C})}$. In fact, the finiteness of ${F}$ implies that ${\frac{1}{n}\kappa(g)}$ is close to ${\frac{1}{n}\kappa(gf)}$ when ${n}$ is large and ${g\in S^n}$, and the simplicity of the spectrum of ${gf}$ guarantees that ${\frac{1}{n}\kappa(gf)}$ stays close to ${\frac{1}{nm}\kappa((gf)^m)}$ for all ${m\geq 1}$, because both of them are not far from the Jordan vector of ${gf}$, consisting of the ordered list of the logarithms of the moduli of its eigenvalues. As it turns out, this information can be used to show that ${\limsup\limits_{k\rightarrow\infty} d\left(\frac{1}{n}\kappa(g), \frac{1}{k}\kappa(S^k)\right)=O_S(1/n)}$ for any ${g\in S^n}$, and this gives the desired convergence thanks to the following elementary lemma about the Hausdorff topology (applied to ${K_n:=\frac{1}{n}\kappa(S^n)}$).
Lemma 2 Let ${(X,d)}$ be a compact metric space. A sequence ${(K_n)_{n\in\mathbb{N}}}$ of compact subsets of ${X}$ converges in Hausdorff topology to a compact subset of ${X}$ if and only if for all ${\delta>0}$ one can find ${n_0\in\mathbb{N}}$ such that
$\displaystyle \limsup\limits_{m\rightarrow\infty} d(x, K_m)\leq\delta$
for all ${x\in K_n}$, ${n\geq n_0}$.
Proof: If ${K_n}$ converges to ${K_{\infty}}$, then for each ${\delta>0}$ one can find ${n_0\in\mathbb{N}}$ so that ${d(K_n, K_{\infty})<\delta/2}$ for all ${n\geq n_0}$. Therefore, ${K_n}$ is contained in the ${(\delta/2)}$-neighborhood of ${K_{\infty}}$ and ${K_{\infty}}$ is contained in the ${(\delta/2)}$-neighborhood of ${K_m}$ for all ${n,m\geq n_0}$. In particular, ${K_n}$ is contained in the ${\delta}$-neighborhood of ${K_m}$ for all ${n,m\geq n_0}$, so that ${\limsup\limits_{m\rightarrow\infty} d(x, K_m)\leq\delta}$ for all ${x\in K_n}$, ${n\geq n_0}$.
Conversely, denote by ${K_{\infty}}$ the set of accumulation points of sequences ${(x_n)_{n\in\mathbb{N}}}$ with ${x_n\in K_n}$ for all ${n\in\mathbb{N}}$. Observe that ${K_{\infty}}$ is compact because it is a closed subset of the compact metric space ${(X,d)}$: indeed, given a sequence ${x_i^{\infty}\in K_{\infty}}$, ${i\in\mathbb{N}}$, converging to ${x_*\in X}$, we can select ${n_i\rightarrow\infty}$ as ${i\rightarrow\infty}$ and ${x_i^{n_i}\in K_{n_i}}$ with ${d(x_i^{n_i}, x_i^{\infty})<1/i}$ for all ${i\in\mathbb{N}}$; hence, ${x_*}$ is accumulated by any sequence ${(y_n)}$ with ${y_n\in K_n}$ for all ${n\in\mathbb{N}}$ and ${y_{n_i}=x_i^{n_i}}$ for all ${i\in\mathbb{N}}$.
We affirm that ${K_n}$ converges to ${K_{\infty}}$. Otherwise, one could find ${\delta>0}$ and ${m_i\rightarrow\infty}$ as ${i\rightarrow\infty}$ such that ${d(K_{\infty}, K_{m_i})>3\delta}$ for all ${i\in\mathbb{N}}$. In other terms, for each ${i\in\mathbb{N}}$, either there is ${y_i^{\infty}\in K_{\infty}}$ with ${d(y_i^{\infty}, K_{m_i})>3\delta}$ or there is ${z_{m_i}\in K_{m_i}}$ with ${d(z_{m_i}, K_{\infty})>3\delta}$. Note that the second possibility cannot occur infinitely many times because a certain accumulation point ${z_*}$ of ${z_{m_i}}$ would be a point ${z_*\in K_{\infty}}$ with ${d(z_*, K_{\infty})\geq 3\delta}$. Thus, there is no loss of generality in assuming that there is ${y_i^{\infty}\in K_{\infty}}$ with ${d(y_i^{\infty}, K_{m_i})>3\delta}$ for all ${i\geq 1}$. By compactness of ${K_{\infty}}$, we can extract a subsequence ${y_{i_k}^{\infty}\rightarrow y_*\in K_{\infty}}$ as ${k\rightarrow\infty}$. Therefore, there exists ${k_0\in\mathbb{N}}$ such that ${d(y_{i_k}^{\infty}, y_*)<\delta}$ and, a fortiori, ${d(y_*, K_{m_{i_k}})>2\delta}$ for all ${k\geq k_0}$. Moreover, ${y_*\in K_{\infty}}$ implies that there exists ${x_{n_l}\in K_{n_l}}$ with ${d(x_{n_l}, y_*)<\delta}$ for all ${l\in\mathbb{N}}$ and ${n_l\rightarrow \infty}$ as ${l\rightarrow\infty}$. This is a contradiction because it would follow that ${d(x_{n_l}, K_{m_{i_k}})>\delta}$ for each ${l\in\mathbb{N}}$ and ${k\geq k_0}$, so that
$\displaystyle \limsup\limits_{m\rightarrow\infty} d(x_{n_l}, K_m)\geq \delta$
for each ${l\in\mathbb{N}}$. $\Box$
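To see Lemma 2 in action, here is a small numerical sketch (an added illustration in Python, not part of the original post) that implements the Hausdorff distance for finite subsets of the interval [0, 1] and checks the criterion on the toy sequence K_n = {0, 1/n}, which converges to {0}:

```python
def hausdorff(A, B):
    """Hausdorff distance between two finite sets of reals."""
    d = lambda x, S: min(abs(x - s) for s in S)
    return max(max(d(a, B) for a in A), max(d(b, A) for b in B))

def K(n):
    """Toy sequence of compact sets K_n = {0, 1/n} inside X = [0, 1]."""
    return [0.0, 1.0 / n]

# Criterion of Lemma 2 with delta = 0.05: for n >= n_0 = 21, every x in K_n
# satisfies d(x, K_m) <= 1/n < delta for all m, so limsup_m d(x, K_m) <= delta.
delta, n0 = 0.05, 21
for n in range(n0, 50):
    for x in K(n):
        assert all(min(abs(x - y) for y in K(m)) <= delta for m in range(100, 120))

# And indeed K_n converges: the Hausdorff distance to the limit {0} is exactly 1/n.
dists = [hausdorff(K(n), [0.0]) for n in (1, 10, 100)]
print(dists)  # [1.0, 0.1, 0.01]
```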
Our discussion above of the idea of the proof of Theorem 1 indicates that the eigenvalues of matrices ${g\in GL_d(\mathbb{C})}$, or rather their Jordan vectors ${\lambda(g)=(\log|\lambda_1(g)|,\dots,\log|\lambda_d(g)|)}$, where ${|\lambda_1(g)|\geq\dots\geq|\lambda_d(g)|}$ are the moduli of the eigenvalues of ${g}$, play an important role in the construction of the joint spectrum. This intuition is reinforced by the following elementary proposition saying that it is not hard to establish the convergence of certain normalized collections of eigenvalues:
Proposition 3 Let ${S\subset GL_d(\mathbb{C})}$ be a compact subset containing the identity matrix ${\textrm{Id}}$. Then, ${\frac{1}{n}\lambda(S^n):=\{\frac{1}{n}\lambda(g):g\in S^n\}}$ converges in the Hausdorff topology.
Proof: By the previous lemma, it suffices to show that given ${\delta>0}$, there exists ${n_0\in\mathbb{N}}$ such that
$\displaystyle \limsup\limits_{m\rightarrow\infty} d(x,\frac{1}{m}\lambda(S^m))\leq \delta$
for all ${x\in\frac{1}{n}\lambda(S^n)}$ with ${n\geq n_0}$.
For this sake, let us observe that if ${x=\frac{1}{n}\lambda(g)}$ for some ${g\in S^n}$, then ${g^k\in S^m}$ where ${m=kn+j}$, ${0\leq j<n}$, thanks to our assumption that ${\textrm{Id}\in S}$. Since ${\lambda(g^k)=k\lambda(g)}$, it follows that
$\displaystyle d(x,\frac{1}{m}\lambda(S^m))\leq \|x-\frac{1}{m}\lambda(g^k)\| = |\frac{1}{n}-\frac{k}{m}| \cdot \|\lambda(g)\|$
and, hence, ${\limsup\limits_{m\rightarrow\infty}d(x,\frac{1}{m}\lambda(S^m)) = 0}$ for any ${x\in\frac{1}{n}\lambda(S^n)}$. $\Box$
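To see the mechanism of this proof in the simplest example (an added sketch, not from the original post): take S = {Id, g} with g of eigenvalues 2 and 1/2. Since Id and g commute, every word of length n equals a power g^k with k the number of g-factors, so the normalized top Jordan exponents form the mesh {(k/n) log 2 : 0 <= k <= n}, which converges in the Hausdorff topology to the segment [0, log 2]:

```python
import math
from itertools import product

log2 = math.log(2.0)

def normalized_top_jordan(n):
    """{lambda_1(w)/n : w a word of length n in S = {Id, g}}, where g has
    eigenvalues 2 and 1/2.  Id and g commute, so a word equals g^k with
    k = its number of g-factors, and lambda_1(g^k) = k log 2."""
    return sorted({(w.count("g") / n) * log2
                   for w in ("".join(t) for t in product("Ig", repeat=n))})

for n in (4, 8, 16):
    vals = normalized_top_jordan(n)
    assert vals[0] == 0.0 and vals[-1] == log2     # endpoints always realized
    gaps = [b - a for a, b in zip(vals, vals[1:])]
    assert max(gaps) <= log2 / n + 1e-12           # mesh shrinks: Hausdorff limit [0, log 2]
```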
2.2. Cartan vectors, Jordan vectors and Benoist cone
As it turns out, there is an intricate relationship between eigenvalues and joint spectrum. Indeed, Breuillard and Sert proved the following facts about the sets ${\frac{1}{n}\lambda(S^n)}$ of renormalized Jordan vectors, the joint spectrum ${J(S)}$, and the so-called Benoist cone ${BC(\Gamma)}$ consisting of the accumulation points of positive linear combinations of ${\lambda(g)}$'s for ${g}$ in the monoid ${\Gamma}$ spanned by ${S}$:
Theorem 4 (Breuillard–Sert) One has that ${\frac{1}{n}\lambda(S^n)\subset J(S)}$ for each ${n\in\mathbb{N}}$ and ${BC(\Gamma)}$ is the cone spanned by ${\overline{\bigcup\limits_{n\in\mathbb{N}^*} \frac{1}{n}\lambda(S^n)} = J(S)}$.
Remark 6 This theorem is the higher-dimensional analog of the Berger–Wang theorem asserting that the joint spectral radius ${R(S)}$ is given by the formula
$\displaystyle R(S)=\limsup\limits_{n\rightarrow\infty}\left(\sup\limits_{g\in S^n} |\lambda_1(g)|^{1/n}\right)$
Remark 7 In general, ${\frac{1}{n}\lambda(S^n)}$ does not converge to ${J(S)}$ (due to potential “periodicity” issues). For example, take ${\alpha>1}$ and let ${a=\left(\begin{array}{cc}\alpha & 0 \\ 0 & 1/\alpha\end{array}\right)}$, ${r=\left(\begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array}\right)}$ and ${S:=\left\{ ar, r \right\}\subset SL(2,\mathbb{R})}$. Denote by ${\lambda_+(g)}$ the logarithm of the spectral radius of ${g\in SL(2,\mathbb{R})}$. Then, ${\frac{1}{2n+1}\lambda_+(S^{2n+1})\rightarrow\{0\}}$ as ${n\rightarrow\infty}$ because ${g^2=-\textrm{Id}}$ for all ${g\in S^{2n+1}}$, ${n\in\mathbb{N}}$, but it is not hard to use the fact that ${\pm a^n\in S^{2n}}$ and ${\lambda_+(a^n)=n\log\alpha}$ to establish that
$\displaystyle \frac{1}{2n}\lambda_+(S^{2n})\rightarrow [0,\frac{1}{2}\log\alpha]$
as ${n\rightarrow\infty}$.
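The dichotomy in this example can be verified numerically with alpha = 2 (an added sketch, not part of the original post): every odd-length word in S = {ar, r} is an anti-diagonal matrix of determinant 1, hence has eigenvalues ±i and lambda_+ = 0, while even-length words realize normalized values up to (1/2) log alpha:

```python
import cmath, math
from itertools import product

alpha = 2.0
a = [[alpha, 0.0], [0.0, 1.0 / alpha]]
r = [[0.0, -1.0], [1.0, 0.0]]

def mul(g, h):
    return [[sum(g[i][k] * h[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

ar = mul(a, r)
S = [ar, r]

def lam_plus(g):
    """log of the spectral radius of a 2x2 matrix, via trace and determinant."""
    t = g[0][0] + g[1][1]
    d = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    disc = cmath.sqrt(t * t - 4 * d)
    return math.log(max(abs((t + disc) / 2), abs((t - disc) / 2)))

def normalized_jordan(n):
    """The set {lam_plus(g)/n : g in S^n}, over all 2^n words of length n."""
    vals = set()
    for word in product(S, repeat=n):
        g = [[1.0, 0.0], [0.0, 1.0]]
        for m in word:
            g = mul(g, m)
        vals.add(round(lam_plus(g) / n, 10))
    return sorted(vals)

odd = normalized_jordan(7)    # every odd-length product squares to -Id, so lambda_+ = 0
even = normalized_jordan(8)   # spreads out up to (1/2) log alpha
assert odd == [0.0]
assert abs(max(even) - 0.5 * math.log(alpha)) < 1e-9
```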
2.3. Convex bodies, polyhedra and joint spectra
It is also shown by Breuillard and Sert that (under the assumption of Theorem 1) the joint spectrum ${J(S)}$ is the “folding” ${\phi(K)}$ of a convex body ${K\subset \mathbb{R}^k}$, ${k\leq d}$, by a certain piecewise affine map ${\phi:\mathbb{R}^k\rightarrow\mathbb{R}^d}$. Moreover, any convex body inside the Weyl chamber ${\{x_1\geq\dots\geq x_d\}}$ is the joint spectrum of some ${S\subset GL_d(\mathbb{C})}$ (satisfying the hypothesis of Theorem 1) and any polyhedron in this Weyl chamber with finitely many vertices is the joint spectrum ${J(S)}$ of some finite subset ${S\subset GL_d(\mathbb{C})}$. However, there are finite subsets ${S\subset GL_d(\mathbb{C})}$ whose joint spectrum ${J(S)}$ is not polyhedral: these examples are related to the counterexamples to the Lagarias–Wang finiteness conjecture (asserting that for any finite ${S\subset GL_d(\mathbb{C})}$, there are ${g_1,\dots, g_n\in S}$ with ${R(S)=\|g_1\dots g_n\|^{1/n}}$) constructed by several authors, including Bousch–Mairesse, Morris–Sidorov, Jenkinson–Pollicott and Bochi–Sert.
2.4. Joint spectrum and random products of matrices
Sert proved the following large deviations principle for random products of matrices belonging to a compact subset ${S\subset GL_d(\mathbb{C})}$ spanning a monoid acting irreducibly on ${\mathbb{C}^d}$: for each probability measure ${\mu}$ on ${GL_d(\mathbb{C})}$ whose support is ${S}$, there is a function ${I_{\mu}:\mathbb{R}^d\rightarrow [0,+\infty]}$ such that for every open subset ${U\subset\mathbb{R}^d}$ with closure ${\overline{U}}$ one has
$\displaystyle -\inf\limits_{x\in U}I_{\mu}(x)\leq\liminf\limits_{n\rightarrow\infty} \frac{1}{n}\log\mu^{\mathbb{N}}\left(\left\{(g_1,\dots,g_k,\dots)\in S^{\mathbb{N}}: \frac{1}{n}\kappa(g_1\dots g_n)\in U\right\}\right)$
and
$\displaystyle \limsup\limits_{n\rightarrow\infty} \frac{1}{n}\log\mu^{\mathbb{N}}\left(\left\{(g_1,\dots,g_k,\dots)\in S^{\mathbb{N}}: \frac{1}{n}\kappa(g_1\dots g_n)\in U\right\}\right)\leq -\inf\limits_{x\in \overline{U}}I_{\mu}(x)$
In this setting, Breuillard and Sert showed that the joint spectrum is a sort of “essential support” in the sense that
$\displaystyle J(S) = \overline{\{x\in\mathbb{R}^d: I_{\mu}(x)<\infty\}}.$
In fact, ${I_{\mu}(x)>0}$ for all ${x\neq\lambda_{\mu^{\mathbb{N}}}}$, where ${\lambda_{\mu^{\mathbb{N}}}}$ is the Lyapunov vector of ${\mu^{\mathbb{N}}}$, i.e.,
$\displaystyle \lambda_{\mu^{\mathbb{N}}} := \lim\limits_{n\rightarrow\infty} \frac{1}{n} \kappa(g_1\dots g_n)\in J(S)$
for ${\mu^{\mathbb{N}}}$-almost every ${(g_1,\dots, g_n, \dots)\in S^{\mathbb{N}}}$.
As it turns out, the Lyapunov vectors ${\lambda_{\mu^{\mathbb{N}}}}$ might miss some points in ${J(S)}$, i.e., they are sometimes confined to a proper closed subset of ${J(S)}$. Nevertheless, any ${\lambda}$ in the interior of ${J(S)}$ is the Lyapunov vector of a certain ergodic shift-invariant probability measure on ${S^{\mathbb{N}}}$ (actually a Gibbs measure / equilibrium state), but, in general, this may fail for certain boundary points of ${J(S)}$.
Furthermore, the elements of the joint spectrum are always realized by fixed sequences. More precisely, Daubechies–Lagarias showed that the joint spectral radius ${R(S)=\lim\limits_{n\rightarrow\infty} \sup\limits_{g\in S^n} \|g\|^{1/n}}$ satisfies
$\displaystyle R(S)=\lim\limits_{n\rightarrow\infty} \|b_1\dots b_n\|^{1/n}$
for some fixed sequence ${(b_1,\dots, b_n,\dots)\in S^{\mathbb{N}}}$, and, more generally, Breuillard and Sert proved that any ${x\in J(S)}$ satisfies
$\displaystyle x=\lim\limits_{n\rightarrow\infty} \frac{1}{n}\kappa(b_1\dots b_n)$
for some fixed sequence ${(b_1,\dots, b_n,\dots)\in S^{\mathbb{N}}}$.
2.5. Domination and continuity
Breuillard and Sert also prove that the joint spectrum ${J(S)}$ varies continuously at a compact ${S\subset GL_d(\mathbb{C})}$ satisfying some domination assumption such as
$\displaystyle J(S)\subset\{x_1>\dots>x_d\}$
or, equivalently, one has an exponential separation of singular values in the sense that there exists ${\varepsilon>0}$ and ${n_0\in\mathbb{N}}$ such that
$\displaystyle a_{k+1}(g)/a_k(g)\leq (1-\varepsilon)^n$
for all ${k=1,\dots, d-1}$ and ${g\in S^n}$ with ${n\geq n_0}$.
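As a toy illustration of this exponential separation (an added sketch, not from the original post): for the singleton S = {g} with g upper triangular with eigenvalues 2 and 1/2, one has a_2(g^n)/a_1(g^n) = |det g^n| / a_1(g^n)^2 <= 4^{-n}, so the condition holds with, say, epsilon = 0.7:

```python
import math

def mul(g, h):
    return [[sum(g[i][k] * h[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def singular_values(g):
    """Singular values of a real 2x2 matrix: a1 from the top eigenvalue of
    g^T g, and a2 = |det g| / a1 (numerically stable when a2 is tiny)."""
    q = [g[0][0]**2 + g[1][0]**2,
         g[0][0]*g[0][1] + g[1][0]*g[1][1],
         g[0][1]**2 + g[1][1]**2]          # entries of the symmetric matrix g^T g
    t, d = q[0] + q[2], q[0] * q[2] - q[1]**2
    a1 = math.sqrt((t + math.sqrt(max(t * t - 4 * d, 0.0))) / 2)
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return a1, abs(det) / a1

g = [[2.0, 1.0], [0.0, 0.5]]   # eigenvalues 2 and 1/2, det = 1
eps = 0.7                      # a (non-optimal) domination constant for this g

gn, ratios = [[1.0, 0.0], [0.0, 1.0]], []
for n in range(1, 21):
    gn = mul(gn, g)
    a1, a2 = singular_values(gn)
    ratios.append(a2 / a1)
    assert a2 / a1 <= (1 - eps) ** n   # exponential separation of singular values
```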
3. General versions of main results
A systematic study of the joint spectrum of ${S\subset GL_d(\mathbb{C})}$ can be efficiently done by working as intrinsically as possible, i.e., replacing ${GL_d(\mathbb{C})}$ by the Zariski-closure of the monoid ${\Gamma}$ spanned by ${S}$ (namely, the smallest algebraic group containing ${\Gamma}$).
From now on, we assume that ${G}$ is a connected real Lie group which is reductive, i.e., it contains no non-trivial normal unipotent subgroup. Intuitively, this means that we are avoiding “Jordan blocks”. This hypothesis is adapted to our context (and, in particular, to Theorem 1) because of the following example:
Example 1 Let ${\mathbb{G}}$ be the Zariski closure of the monoid ${\Gamma}$ generated by ${S\subset GL_d(\mathbb{C})}$ and denote by ${G}$ the connected component of the identity in the subgroup ${\mathbb{G}(\mathbb{R})}$ of real points of ${\mathbb{G}}$. If ${\Gamma}$ acts irreducibly on ${\mathbb{C}^d}$, then ${G}$ is reductive: otherwise, the fixed subspace of a non-trivial unipotent radical would be invariant under ${\Gamma}$.
The notions of Cartan and Jordan vectors from the previous section admit the following intrinsic versions. A Cartan decomposition ${G=KAK}$, where ${K}$ is a maximal compact subgroup and ${A}$ is a maximal torus, allows one to define a Cartan projection ${\kappa:G\rightarrow\mathfrak{a}^+}$, where ${\mathfrak{a}^+}$ is a Weyl chamber of the Lie algebra ${\mathfrak{a}}$ of ${A}$ associated to a choice of simple roots in a root system.
Example 2 The group ${G=GL_d(\mathbb{R})}$ has a maximal torus ${A=\{a=\textrm{diag}(\lambda_1,\dots,\lambda_d)\in G: \lambda_j > 0\}}$ consisting of diagonal matrices with positive entries. A root system is given by the roots ${\alpha_{i,j}(a)=\log\lambda_i - \log\lambda_j}$ for ${1\leq i\neq j\leq d}$ and the (closed) Weyl chamber ${\mathfrak{a}^+=\{(\log\lambda_1,\dots,\log\lambda_d): \lambda_1\geq\dots\geq\lambda_d\}}$ is associated to the simple roots ${\alpha_{i,i+1}}$, ${1\leq i\leq d-1}$. In particular, the corresponding Cartan projection assigns to each ${g\in G}$ its Cartan vector ${\kappa(g)}$.
Similarly, an Iwasawa decomposition ${G=KAN}$ allows one to define a Jordan projection ${\lambda:G\rightarrow\mathfrak{a}^+}$ by requiring that ${\exp(\lambda(g))}$ is conjugate to the unique ${g_h\in A}$ with ${g=g_e g_h g_u\in KAN}$. [Update (February 11, 2019): As C. Sert pointed out to me (in private communication), strictly speaking one actually must replace ‘Iwasawa decomposition’ by Jordan–Chevalley decomposition in order to get the definition of the Jordan projection (because in general the elliptic, hyperbolic and unipotent terms in the Iwasawa decomposition do not commute).]
In this language, some of the main results of Breuillard and Sert can be summarized as follows.
Theorem 5 Let ${G}$ be a connected reductive real Lie group and consider a compact subset ${S\subset G}$ spanning a monoid ${\Gamma=\langle S \rangle}$ which is Zariski-dense in ${G}$. Then,
$\displaystyle \lim_{n\rightarrow\infty}\frac{1}{n}\kappa(S^n)= J(S) = \lim\limits_{n\rightarrow\infty}\frac{1}{n}\lambda(S^n)$
in the Hausdorff topology. The compact subset ${J(S)\subset\mathfrak{a}^+}$ is called (intrinsic) joint spectrum and any ${x\in J(S)}$ is given by ${x=\lim\limits_{n\rightarrow\infty}\frac{1}{n}\kappa(b_1\dots b_n)}$ for some fixed sequence ${(b_1,\dots, b_n,\dots)\in S^{\mathbb{N}}}$. Moreover, any ${x}$ in the relative interior ${int(J(S))}$ (of ${J(S)}$ with respect to the smallest affine subspace of ${\mathfrak{a}}$ containing it) satisfies ${x=\lim\limits_{n\rightarrow\infty}\frac{1}{n}\kappa(g_1\dots g_n)}$ for ${\nu}$-almost every sequence ${(g_1,\dots, g_n,\dots)\in S^{\mathbb{N}}}$, where ${\nu}$ is a certain ergodic shift-invariant probability measure on ${S^{\mathbb{N}}}$.
Furthermore, the Lyapunov spectrum of a random walk on ${G}$ with respect to any law ${\mu}$ with support ${\textrm{supp}(\mu)=S}$ is simple: the Lyapunov vector ${\lambda_{\mu^{\mathbb{N}}}}$ (i.e., the ${\mu^{\mathbb{N}}}$-almost sure limit of ${\frac{1}{n}\kappa(g_1\dots g_n)}$) belongs to the relative interior of ${J(S)}$.
Theorem 6 The (intrinsic) joint spectrum ${J(S)}$ is a closed convex subset of ${\mathfrak{a}^+}$. Moreover, if ${S}$ is not included in the coset of a closed connected proper Lie subgroup of ${G}$ containing ${[G,G]}$, then ${J(S)}$ has non-empty interior in ${\mathfrak{a}}$.
Remark 8 The previous results say that the Benoist cone ${BC(\Gamma)}$ (spanned by all positive linear combinations of ${\lambda(g)}$, ${g\in\Gamma=\langle S\rangle}$) is generated by ${J(S)\cup\{0\}}$. In particular, they allow one to recover a result of Benoist saying that ${BC(\Gamma)}$ is convex and its interior is not empty when ${G}$ is semi-simple.
Theorem 7 A convex body ${K\subset \mathfrak{a}^+}$ has the form ${K=J(S)}$ for some compact subset ${S}$ generating a Zariski dense monoid of ${G}$. Moreover, if ${K}$ is a polyhedron with a finite number of vertices, then ${S}$ can also be taken finite.
Remark 9 The converse in the second part of Theorem 7 is not true in general: Breuillard and Sert exhibit in their article an example of a finite subset ${T\subset SL_2(\mathbb{R})\times SL_2(\mathbb{R})}$ generating a Zariski dense monoid such that the boundary of ${J(T)}$ is not piecewise ${C^1}$.
As it turns out, the Zariski-denseness condition is not strictly necessary in order to develop the theory of the joint spectrum: indeed, Breuillard and Sert show (cf. Theorem 8 below) that one can replace Zariski-denseness by the assumption that ${S}$ is ${G}$-dominated, i.e., ${\frac{1}{n}\kappa(S^n)}$ is included in the interior ${\mathfrak{a}^{++}}$ of the (closed) Weyl chamber ${\mathfrak{a}^+}$.
Remark 10 If ${G=SL_d(\mathbb{R})}$, then ${S}$ is ${G}$-dominated if and only if there exists ${\varepsilon>0}$ such that
$\displaystyle \frac{a_{i+1}(g)}{a_i(g)}\leq (1-\varepsilon)^n$
for all ${i=1,\dots, d-1}$, ${g\in S^n}$ with ${n}$ sufficiently large.
Theorem 8 Let ${G}$ be a reductive, connected, real Lie group, and suppose that ${S}$ is a ${G}$-dominated compact subset. Then,
$\displaystyle \lim_{n\rightarrow\infty}\frac{1}{n}\kappa(S^n)= J(S) = \lim\limits_{n\rightarrow\infty}\frac{1}{n}\lambda(S^n)$
in the Hausdorff topology. The (intrinsic) joint spectrum ${J(S)}$ is a convex body in ${\mathfrak{a}^{++}}$ such that for each ${x\in J(S)}$, there exists ${(b_1,\dots, b_n,\dots)\in S^{\mathbb{N}}}$ with
$\displaystyle x = \lim\limits_{n\rightarrow\infty} \frac{1}{n} \kappa(b_1\dots b_n).$
Moreover, ${J(S)}$ varies continuously with ${S}$ in this setting: for every ${\varepsilon>0}$, there exists ${\delta>0}$ such that ${d(J(S), J(S'))<\varepsilon}$ whenever ${d(S, S')<\delta}$.
The next posts of this series are dedicated to the proof of some of these statements. For now, we close this post with the following list of open problems mentioned in Breuillard–Sert article:
• can one extend Theorems 7, 8 and the portions about ${\textrm{int}(J(S))}$ in Theorem 5 to the case of non-archimedean local fields?
• can one define a joint spectrum for more general cocycles and/or base dynamics?
• is there a multi-fractal analysis describing for each ${x\in J(S)}$ the Hausdorff dimension of the set of sequences ${b=(b_1,\dots, b_n,\dots)\in S^{\mathbb{N}}}$ with ${\frac{1}{n}\kappa(b_1\dots b_n)\rightarrow x}$? (the analogous question for ${\frac{1}{n}\lambda_1(b_1\dots b_n)\rightarrow y}$ was studied by Feng)
• can one describe the boundary of ${J(S)}$ using probability measures? if so, are these measures: Sturmian? zero entropy?
• is it true that ${J(S)}$ is a locally Lipschitz function of ${S}$? (recall that the joint spectral radius ${R(S)}$ is known to vary locally Lipschitz with ${S}$)
• given ${\varepsilon>0}$, can one give an effective upper bound on the smallest value ${n\in\mathbb{N}}$ such that ${d(x,\frac{1}{n}\kappa(S^n))<\varepsilon}$ and/or ${d(x,\frac{1}{n}\lambda(S^n))<\varepsilon}$? (the analogous question for the joint spectral radius was discussed by Morris and Bochi–Garibaldi) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 395, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9742177724838257, "perplexity": 435.8157467495305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249414450.79/warc/CC-MAIN-20190223001001-20190223023001-00533.warc.gz"} |
http://mathhelpforum.com/algebra/87037-question-regarding-polynomials.html | 1. ## Question regarding Polynomials
Use the fourth degree equation ax^4 + bx^3 + cx^2 + dx + e = 0 whose roots are A,B,C,D to verify that
1) the equation P(x/m) = 0 has roots m times those of P(x) = 0.
Thank you.
2. Originally Posted by noobonastick
Use the fourth degree equation ax^4 + bx^3 + cx^2 + dx + e = 0 whose roots are A,B,C,D to verify that
1) the equation P(x/m) = 0 has roots m times those of P(x) = 0.
Thank you.
Let x be a root of P(x)=0, put y=mx, then P(y/m)=P(x)=0, so mx is a root of P(x/m).
CB
3. Originally Posted by CaptainBlack
Let x be a root of P(x)=0, put y=mx, then P(y/m)=P(x)=0, so mx is a root of P(x/m).
sorry I don't understand the logic... can you elaborate?
4. Originally Posted by noobonastick
sorry I don't understand the logic... can you elaborate?
I have shown that if x is a root of P(x), then mx is a root of P(x/m).
What else do you need?
(note implicit assumption that m != 0)
CB | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9198567867279053, "perplexity": 1740.629371436959}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608956.34/warc/CC-MAIN-20170527152350-20170527172350-00343.warc.gz"} |
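A quick numeric sanity check of this argument (an added illustration, not part of the original thread): take P(x) = (x-1)(x-2)(x-3)(x-4), whose roots are 1, 2, 3, 4; with m = 5, the equation P(x/m) = 0 should then have roots 5, 10, 15, 20:

```python
def P(x):
    """Quartic with known roots 1, 2, 3, 4 (so a = 1, b = -10 in ax^4 + bx^3 + ...)."""
    return (x - 1) * (x - 2) * (x - 3) * (x - 4)

m = 5
Q = lambda x: P(x / m)               # Q(x) = P(x/m)

roots_P = [1, 2, 3, 4]
roots_Q = [m * r for r in roots_P]   # expect 5, 10, 15, 20

assert all(P(r) == 0 for r in roots_P)
assert all(Q(r) == 0 for r in roots_Q)   # roots of P(x/m) = 0 are m times those of P(x) = 0
assert Q(7) != 0                         # a non-multiple of a root stays a non-root
```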
https://www.physicsforums.com/threads/partial-derivative-properties.878907/ | # Partial Derivative Properties
1. Jul 14, 2016
### Kyle.Nemeth
1. The problem statement, all variables and given/known data
I would just like to know if this statement is true.
2. Relevant equations
$$\frac {\partial^2 f}{\partial x^2} \frac{\partial g}{\partial x}=\frac{\partial g}{\partial x} \frac {\partial^2 f}{\partial x^2}$$
3. The attempt at a solution
I've thought about this a bit and I haven't come to a conclusion. Thanks for the help!
Last edited: Jul 14, 2016
2. Jul 14, 2016
### Mr-R
Well, it depends on $f$ and $g$ and not so much on the partial derivative. If $f$ and $g$ are "normal" functions like $f(x)=x^2$ for example, then the statement is true. On the other hand, if they represent matrices then generally they wouldn't commute, i.e. $f\cdot g\neq g\cdot f$ because $g$ and $f$ do not commute in general.
3. Jul 14, 2016
### Ray Vickson
If you set $A = \partial g/\partial x$ and $B = \partial^2 f/\partial x^2$, you have written $A B = B A$, which is true for any two real numbers.
However, if what you really meant was to have
$$\frac{\partial}{\partial x} \left( g \frac{\partial^2 f}{\partial x^2} \right)$$
on one side and
$$\frac{\partial^2} {\partial x^2} \left( f \frac{\partial g}{\partial x} \right)$$
on the other, then that is a much different question.
Which did you mean?
4. Jul 19, 2016
### Kyle.Nemeth
I meant the original interpretation you answered, $$AB=BA$$ for any real numbers. I was assuming that the second derivative had acted on f and the first derivative had acted on g.
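To make the distinction between the two readings explicit (an added sketch, not part of the original thread): with f = x^3 and g = x^2, the plain product of the derivatives commutes, but the two operator interpretations give different polynomials. Here polynomials are coefficient lists [a0, a1, ...]:

```python
def deriv(p):
    """d/dx of a polynomial given by coefficients [a0, a1, a2, ...]."""
    return [i * c for i, c in enumerate(p)][1:] or [0]

def pmul(p, q):
    """Product of two coefficient-list polynomials."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

f = [0, 0, 0, 1]   # f(x) = x^3
g = [0, 0, 1]      # g(x) = x^2

fxx, gx = deriv(deriv(f)), deriv(g)    # f'' = 6x, g' = 2x

# As ordinary products of functions the two orders agree:
# f'' * g' == g' * f'' == 12x^2.
assert pmul(fxx, gx) == pmul(gx, fxx) == [0, 0, 12]

# But the two *operator* readings differ:
lhs = deriv(pmul(g, fxx))          # d/dx ( g * f'' )    = d/dx (6x^3)  = 18x^2
rhs = deriv(deriv(pmul(f, gx)))    # d^2/dx^2 ( f * g' ) = (2x^4)''     = 24x^2
assert lhs == [0, 0, 18] and rhs == [0, 0, 24]
```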
https://www.semanticscholar.org/author/Pedro-Montero/144821817 | • Publications
• Influence
Equivariant compactifications of vector groups with high index
• Mathematics
• 6 June 2018
In this note, we classify smooth equivariant compactifications of $\mathbb{G}_a^n$ which are Fano manifolds with index $\geq n-2$.
Geometry of singular Fano varieties and projective bundles over curves
This thesis is devoted to the geometry of Fano varieties and projective vector bundles over a smooth projective curve. In the first part we study the geometry of mildly singular Fano varieties on…
On singular Fano varieties with a divisor of Picard number one
In this paper we study the geometry of mildly singular Fano varieties on which there is an effective prime divisor of Picard number one. Afterwards, we address the case of toric varieties. Finally, …
A characterization of some Fano 4-folds through conic fibrations
• Mathematics
• 24 March 2018
Let $X$ be a complex projective Fano $4$-fold. Let $D\subset X$ be a prime divisor. Let us consider the image $\mathcal{N}_{1}(D,X)$ of $\mathcal{N}_{1}(D)$ in $\mathcal{N}_{1}(X)$ through the…
Newton–Okounkov bodies on projective bundles over curves
In this article, we study Newton–Okounkov bodies on projective vector bundles over curves. Inspired by Wolfe’s estimates used to compute the volume function on these varieties, we compute all…
Fano Threefolds as Equivariant Compactifications of the Vector Group
• Mathematics
• 22 February 2018
In this article, we determine all equivariant compactifications of the three-dimensional vector group $\mathbf{G}_a^3$ which are smooth Fano threefolds with Picard number greater or equal than two.
On the liftability of the automorphism group of smooth hypersurfaces of the projective space
• Mathematics
• 26 April 2020
Let $X$ be a smooth hypersurface of dimension $n\geq 1$ and degree $d\geq 3$ in the projective space given as the zero set of a homogeneous form $F$. If $(n,d)\neq (1,3), (2,4)$ it is well known that…
https://www.clutchprep.com/chemistry/practice-problems/104899/how-many-grams-of-zn-cn-2-s-117-44-g-mol-would-be-soluble-in-100-ml-of-h2o-inclu | Problem: How many grams of Zn(CN)2(s) (117.44 g/mol) would be soluble in 100 mL of H2O? Include the balanced reaction and the expression for Ksp in your answer. The Ksp value for Zn(CN)2(s) is 3.0 × 10–16.
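A worked numerical sketch (an added illustration, not Clutch Prep's official solution; it assumes simple dissociation Zn(CN)2(s) ⇌ Zn2+(aq) + 2 CN^-(aq) with Ksp = s(2s)^2 = 4s^3, ignoring hydrolysis of CN^- and zinc–cyanide complex formation):

```python
ksp = 3.0e-16          # Ksp for Zn(CN)2(s)
molar_mass = 117.44    # g/mol
volume_L = 0.100       # 100 mL of water

# Zn(CN)2(s) <=> Zn^2+ (aq) + 2 CN^- (aq)
# Ksp = [Zn^2+][CN^-]^2 = s * (2s)^2 = 4 s^3, with s the molar solubility.
s = (ksp / 4) ** (1 / 3)            # mol/L, about 4.2e-6 M
grams = s * volume_L * molar_mass   # grams dissolved in 100 mL, about 5.0e-5 g
print(f"s = {s:.2e} M, mass = {grams:.2e} g")
```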
https://www.clutchprep.com/organic-chemistry/practice-problems/16236/there-are-two-contributing-resonance-structures-for-an-anion-called-acetaldehyde | # Problem: There are two contributing resonance structures for an anion called acetaldehyde enolate, whose condensed molecular formula is CH2CHO-. Draw the two resonance contributors and the resonance hybrid, then consider the map of electrostatic potential (MEP) shown below for this anion. Comment on whether the MEP is consistent or not with predominance of the resonance contributor you would have predicted to be represented most strongly in the hybrid.
http://www.mathguru.com/level1/fractions-2007101600018087.aspx | If you like what you see in Mathguru
Subscribe Today
For 12 Months US Dollars 12 / Indian Rupees 600 Available in 20 more currencies if you pay with PayPal. Buy Now No questions asked full moneyback guarantee within 7 days of purchase, in case of Visa and Mastercard payment
Example: Comparing Unlike Fractions
Explanation:
Fraction (mathematics)
Fractions (from Latin: fractus, "broken") are numbers expressed as the ratio of two numbers, and are used primarily to express a comparison between parts and a whole.
The earliest fractions were reciprocals of integers: ancient symbols representing one part of two, one part of three, one part of four, and so on. A much later development were the common or "vulgar" fractions which are still used today (½, ⅝, ¾, etc.) and which consist of a numerator and a denominator, the numerator representing a number of equal parts and the denominator telling how many of those parts make up a whole. An example is 3/4, in which the numerator, 3, tells us that the fraction represents 3 equal parts, and the denominator, 4, tells us that 4 parts make up a whole.
A still later development was the decimal fraction, now called simply a decimal, in which the denominator is a power of ten, determined by the number of digits to the right of a decimal separator, the appearance of which (e.g., a period, a raised period (•), a comma) depends on the locale (for examples, see decimal separator). Thus for 0.75 the numerator is 75 and the denominator is 10 to the second power, viz. 100, because there are two digits to the right of the decimal separator.
A third kind of fraction still in common use is the percentage, in which the denominator is always 100. Thus 75% means 75/100.
Other uses for fractions are to represent ratios, and to represent division. Thus the fraction 3/4 is also used to represent the ratio 3:4 (three to four) and the division 3 ÷ 4 (three divided by four).
In mathematics, the set of all numbers which can be expressed as a fraction m/n, where m and n are integers and n is not zero is called the set of rational numbers. This set is represented by the symbol Q.
### Comparing fractions
Comparing fractions with the same denominator only requires comparing the numerators.
For example, 3/4 > 2/4, because 3 > 2.
One way to compare fractions with different denominators is to find a common denominator. To compare a/b and c/d, these are converted to ad/(bd) and bc/(bd). Then bd is a common denominator and the numerators ad and bc can be compared.

a/b ? c/d gives ad/(bd) ? bc/(bd).

As a short cut, known as "cross multiplying", you can just compare ad and bc, without computing the denominator.

4/17 ? 5/18

Multiply 17 by 5 and multiply 18 by 4. Since 85 is greater than 72, 4/17 < 5/18.
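The cross-multiplying short cut is easy to put into code. Here is a minimal sketch (the function name is ours, for illustration only):

```python
def compare_fractions(a, b, c, d):
    """Compare a/b and c/d (with b, d > 0) by cross-multiplying.

    Returns a negative value if a/b < c/d, zero if they are equal,
    and a positive value if a/b > c/d -- no common denominator needed.
    """
    return a * d - b * c

# 4/17 versus 5/18: 4*18 = 72 and 17*5 = 85, so 4/17 < 5/18.
print(compare_fractions(4, 17, 5, 18))  # -13
```

The sign of a*d - b*c carries the whole comparison, which is exactly why the short cut works.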
(Our solved example in mathguru.com uses this concept)
http://en.wikipedia.org/wiki/Fraction_(mathematics)
https://courses.lumenlearning.com/suny-microeconomics/chapter/video-price-elasticity-of-demand/ | ## Video: Price Elasticity of Demand
This video provides a nice overview of the concept of elasticity and how it can be used. You’ll learn how to calculate elasticities later in this module.
http://mathhelpforum.com/discrete-math/24184-5-discrete-math-problems-need-tomorrow-please-help.html | 1)
(a) Find a recurrence relation for the number of ways to completely cover a 2 x n checkerboard with 1 x 2 dominoes. [Hint: Consider separately the coverings where the position in the top right corner of the checkerboard is covered by a domino positioned horizontally and where it is covered by a domino positioned vertically.]
(b) What are the initial conditions for the recurrence relation in part (a)?
(c) How many ways are there to completely cover a 2 x 17 checkerboard with 1 x 2 dominoes?
2) Find f(n) when n = 2^k, where f satisfies the recurrence relation f(n) = f(n/2) + 1 with f(1)=1.
3) Suppose that there are n = 2^k teams in an elimination tournament, where there are n/2 games in the first round, with then n/2 = 2^(k-1) winners playing in the second round, and so on. Develop a recurrence relation for the number of rounds in the tournament.
4) Solve the recurrence relation for the number of rounds in the tournament described in Exercise 3.
5) Show that $f_0 - f_1 + f_2 - \dots - f_{2n-1} + f_{2n} = f_{2n-1} - 1$ when n is a positive integer.
Word Doc attached because I don't know how to format Superscript and subscript in this forum. Please refer to it.
Thank You
Anu
2. ## Recursive
On problem 1:
A. If we have 2 rows and n columns (2xn board) and we put a domino vertically in the upper right corner, then we have the rest of the board equivalent to a 2x(n-1) board. If we put the domino in the corner horizontally, we have to put another horizontal one below it, leaving the rest of the board equivalent to a 2x(n-2) board. This should give you the recurrence relation.
B. Consider n=1 and n=2 cases
C. Use your formula (creating a table may be fastest).
--Kevin C.
3. ## Recursion
Problems 2-4:
2) Consider $g(k)=f(2^k)$. As $f(n)=f(\frac{n}{2})+1$ with $f(1)=1$ and $\frac{2^k}{2}=2^{k-1}$, you can find the recurrence relation for g(k) which is easily solvable, and use that to find solution for $f(2^k)=g(k)$
3) Notice that after the first round (of n/2 games) we have n/2 winners left. The rest of the tournament is then equivalent to a tournament of n/2 players. Thus we can find a recurrence relation for the number of rounds f(n) in terms of f(n/2).
4) Use 2 and 3
I'm not sure on 5, because of the question of what f(n) is for odd n.
--Kevin C.
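A quick computational check of the closed form these hints lead to for problem 2, namely $f(2^k)=k+1$ — the code below is a sketch added for illustration, not part of the original thread:

```python
def f(n):
    """Problem 2: f(n) = f(n/2) + 1 with f(1) = 1, for n a power of 2."""
    return 1 if n == 1 else f(n // 2) + 1

# The closed form suggested by the hint is f(2^k) = k + 1:
for k in range(11):
    assert f(2 ** k) == k + 1
print(f(1024))  # 11, i.e. k + 1 with k = 10
```

The tournament recurrence of problems 3-4 is the same shape with base case 0, giving $k=\log_2 n$ rounds.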
4. Originally Posted by TwistedOne151
On problem 1:
A. If we have 2 rows and n columns (2xn board) and we put a domino vertically in the upper right corner, then we have the rest of the board equivalent to a 2x(n-1) board. If we put the domino in the corner horizontally, we have to put another horizontal one below it, leaving the rest of the board equivalent to a 2x(n-2) board. This should give you the recurrence relation.
B. Consider n=1 and n=2 cases
C. Use your formula (creating a table may be fastest).
--Kevin C.
Sorry, but I cannot understand...can u explain more?
5. ## A diagram
I'll try to draw a picture of sorts:
Let us call the number of ways of filling a 2xn board with dominoes F(n)
Consider a 2xn board (here 2x8):
_ _ _ _ _ _ _ _
| | | | | | | | |
_ _ _ _ _ _ _ _
| | | | | | | | |
_ _ _ _ _ _ _ _
In the upper right corner we could put a domino vertically
_ _ _ _ _ _ _ _
| | | | | | | |*|
_ _ _ _ _ _ _ _
| | | | | | | |*|
_ _ _ _ _ _ _ _
Leaving a space equal to a 2x(n-1) board (here 2X7), so there are F(n-1) ways to fill the board with a domino on the right like this.
We could instead put a domino horizontally in the corner:
_ _ _ _ _ _ _ _
| | | | | | |*|*|
_ _ _ _ _ _ _ _
| | | | | | | | |
_ _ _ _ _ _ _ _
We would then need to put another below this one (marked by @)
_ _ _ _ _ _ _ _
| | | | | | |*|*|
_ _ _ _ _ _ _ _
| | | | | | |@|@|
_ _ _ _ _ _ _ _
Leaving a 2x(n-2) space, so we have F(n-2) ways to fill the board with a pair of dominoes on the right like this.
Thus the total number of ways to fill a 2xn board is F(n)=F(n-1)+F(n-2)
And we have our recurrence relation for part (a).
--Kevin C.
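A quick check of the recurrence F(n) = F(n-1) + F(n-2) and of part (c) — this code is a sketch added for illustration, not from the original thread:

```python
def tilings(n):
    """Ways to tile a 2 x n board with 1 x 2 dominoes via F(n) = F(n-1) + F(n-2)."""
    if n <= 1:
        return 1  # one empty tiling for n = 0, one vertical domino for n = 1
    a, b = 1, 1   # F(0), F(1)
    for _ in range(n - 1):
        a, b = b, a + b  # slide the window one step along the recurrence
    return b

print(tilings(17))  # 2584 ways to cover the 2 x 17 board
```

The values 1, 1, 2, 3, 5, 8, ... are the Fibonacci numbers shifted by one index, which is what the recurrence predicts.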
6. Thank You for the 1 - 4 problems
7. Someone please help me on number 5. Do I need to do the proof by induction? If yes, I don't understand how. Is there any other way to do it?
https://byjus.com/solve-for-x-calculator/ | # Solve For X Calculator
Solve for X Calculator is a free online tool that displays the variable value x for the given equation. BYJU’S online solve for x calculator tool makes the calculation faster, and it displays the variable value x in a fraction of seconds.
## How to Use the Solve for X Calculator?
The procedure to use the solve for x calculator is as follows:
Step 1: Enter the coefficients of the equation in the respective input field
Step 2: Now click the button “Solve” to get the variable value
Step 3: Finally, the value of x will be displayed in the output field
### What is Meant by the Solve for X?
In Mathematics, an algebraic equation is defined as a mathematical statement in which two equal expressions are separated by the equal sign. An equation consists of variables, coefficients, constants, exponents, and terms separated by mathematical operators such as addition, subtraction, multiplication and division. The standard form of the equation here is given as Ax + B = C, where A, B, and C are numbers. There are different types of equations, such as linear equations, quadratic equations, cubic equations, radical equations, exponential equations, trigonometric equations, and so on.
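Under the hood, a tool like this only needs to rearrange Ax + B = C into x = (C − B)/A. A minimal sketch (our own illustration, not BYJU'S implementation):

```python
def solve_for_x(a, b, c):
    """Solve A*x + B = C for x; requires A != 0 for a unique solution."""
    if a == 0:
        raise ValueError("A must be nonzero")
    return (c - b) / a

print(solve_for_x(2, 3, 11))  # 4.0, since 2*4 + 3 = 11
```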
https://koreauniv.pure.elsevier.com/en/publications/measurement-of-differential-and-integrated-fiducial-cross-section | # Measurement of differential and integrated fiducial cross sections for Higgs boson production in the four-lepton decay channel in pp collisions at √s = 7 and 8 TeV
The CMS Collaboration
Research output: Contribution to journal › Article › peer-review
31 Citations (Scopus)
## Abstract
Integrated fiducial cross sections for the production of four leptons via the H → 4ℓ decays (ℓ = e, μ) are measured in pp collisions at √s = 7 and 8 TeV. Measurements are performed with data corresponding to integrated luminosities of 5.1 fb⁻¹ at 7 TeV, and 19.7 fb⁻¹ at 8 TeV, collected with the CMS experiment at the LHC. Differential cross sections are measured using the 8 TeV data, and are determined as functions of the transverse momentum and rapidity of the four-lepton system, accompanying jet multiplicity, transverse momentum of the leading jet, and difference in rapidity between the Higgs boson candidate and the leading jet. A measurement of the Z → 4ℓ cross section, and its ratio to the H → 4ℓ cross section is also performed. All cross sections are measured within a fiducial phase space defined by the requirements on lepton kinematics and event topology. The integrated H → 4ℓ fiducial cross section is measured to be 0.56 +0.67/−0.44 (stat) +0.21/−0.06 (syst) fb at 7 TeV, and 1.11 +0.41/−0.35 (stat) +0.14/−0.10 (syst) fb at 8 TeV. The measurements are found to be compatible with theoretical calculations based on the standard model.
Original language: English · Article number: 5 · Journal: Journal of High Energy Physics · Volume: 2016 · Issue: 4 · DOI: https://doi.org/10.1007/JHEP04(2016)005 · Published: 2016 Apr 1
## Keywords
• Hadron-Hadron scattering
• Higgs physics
## ASJC Scopus subject areas
• Nuclear and High Energy Physics
http://mathhelpforum.com/advanced-statistics/127828-normal-distribution-print.html | # Normal Distribution
• February 8th 2010, 01:04 PM
Dinkydoe
Normal Distribution
Given that $X,Y$ are independent, normally distributed variables, each with distribution $N(1,\sigma^2)$, $\sigma = \frac{1}{2}$:
How do I calculate $P(X+Y\leq t)$
I'm not quite sure how to calculate this since the cdf of the normal distribution doesn't have a closed form. And approximating with an error-function seems quite a bother. Any hints?
• February 8th 2010, 02:05 PM
matheagle
Let Z=X+Y, then $Z\sim N(E(Z), V(Z))$, where by independence $E(Z)=E(X)+E(Y)=2$ and $V(Z)=V(X)+V(Y)=\frac{1}{2}$.
you want $P(Z\le t)=\int_{-\infty}^t f_Z(z)dz$
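Numerically, with $\mu=1$ and $\sigma=\frac{1}{2}$ this gives $Z\sim N(2,\frac{1}{2})$, and the CDF can be evaluated with the error function. A sketch (not from the original thread):

```python
import math

def p_sum_leq(t, mu=1.0, sigma=0.5):
    """P(X + Y <= t) for independent X, Y ~ N(mu, sigma^2).

    The sum is Z ~ N(2*mu, 2*sigma^2), and the normal CDF is
    Phi(z) = (1 + erf(z / sqrt(2))) / 2.
    """
    z = (t - 2 * mu) / math.sqrt(2 * sigma ** 2)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(p_sum_leq(2.0))  # 0.5, by symmetry about the mean E(Z) = 2
```

This sidesteps the lack of a closed-form antiderivative: `math.erf` does the numerical work.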
http://clay6.com/qa/52037/a-solution-is-obtained-by-mixing-300-g-of-25-solution-and-400-g-of-40-solut | # A solution is obtained by mixing 300 g of 25% solution and 400 g of 40% solution by mass. Calculate the mass percentage of the resulting solution.
Options: 66.5%, 76.5%, 85.6%, 55.5%
300 g of 25% solution contains solute = 75 g
400 g of 40% solution contains solute = 160 g
Total mass of solute = 75 + 160
$\Rightarrow 235$ g
Total mass of solution = 300 + 400
$\Rightarrow 700$ g
% of solute in the final solution = $\large\frac{235}{700}\normalsize\times 100$
$\Rightarrow 33.5\%$
% of water in the final solution =100-33.5
$\Rightarrow 66.5\%$
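The same arithmetic, generalized to any number of mixed solutions (a sketch for illustration):

```python
def mass_percent_mixture(masses, percents):
    """Mass % of solute after mixing solutions.

    masses   -- total mass of each solution (g)
    percents -- mass % of solute in each solution
    """
    solute = sum(m * p / 100 for m, p in zip(masses, percents))
    return 100 * solute / sum(masses)

p = mass_percent_mixture([300, 400], [25, 40])
print(round(p, 2))        # 33.57 (truncated to 33.5% in the worked answer)
print(round(100 - p, 2))  # 66.43 (reported as 66.5% in the worked answer)
```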
https://cs.stackexchange.com/questions/60153/in-tensorflow-tutorials-why-do-they-use-only-the-first-term-of-cross-entropy-as | # In TensorFlow tutorials, why do they use only the first term of cross-entropy as the cost function?
The cross-entropy cost function is usually defined as
$$C = -\frac{1}{n} \sum_x \left[y \ln \hat{y} + (1-y ) \ln (1-\hat{y}) \right]$$
where $y$ is the expected output and $\hat{y}$ is the predicted output, for training example $x$.
But, in TensorFlow MNIST tutorial, they use
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
which, I suppose, is equivalent to
$$C = -\frac{1}{n} \sum_x y\ln\hat{y}$$
That means only the first term of the cross-entropy expression, $y\ln\hat{y}$, is being used. Why? Why isn't the second term, $(1 - y)\ln(1 - \hat{y})$, being used too?
Great question! Actually, there's no contradiction. The short version of the explanation is that those two equations look different because they are intended for slightly different scenarios -- but under the covers, they're actually much more similar than you might think. Let me walk you through it, and hopefully by the end of this explanation you'll see how everything is consistent and why both equations are correct, in the particular context where they're used.
## Cross-entropy loss for multi-class neural networks
When using neural networks for MNIST, we have 10 classes (one per digit). The neural net has 10 outputs (i.e., 10 neurons at the final layer). Call the outputs $\hat{y}_0,\hat{y}_1,\dots,\hat{y}_{9}$. If you feed in an image $x$, the intended interpretation is that $\hat{y}_d$ is supposed to represent the neural network's estimate of the "probability" that the image is an instance of the digit $d$.
For the training set, we know what the desired output is. Let's define $y_0,y_1,\dots,y_9$ to be the desired "probability distribution". In particular, $y_d$ should be $1$ for the correct digit $d$ and $0$ for all other values of $d$.
With these definitions, the cross-entropy loss for a single instance $x$ is defined to be
$$C_x = - \sum_{i=0}^9 y_i \log \hat{y}_i.$$
(Notice that if the correct digit is $d$, then this value simplifies to $-\log \hat{y}_d$, since we have $y_d=1$ and $y_i=0$ for all other $i$.)
The empirical cross-entropy loss for an entire training set is the average of these values, over all of the instances in the training set:
$$C = - {1 \over n} \sum_x \sum_{i=0}^9 y_i \log \hat{y}_i.$$
That's the cross-entropy loss. I think this is exactly what the Tensor Flow tutorial is computing. (Side note: this is different from the equation you presented. You were missing the inner sum over all 10 classes. I suspect you might have misinterpreted the Tensor Flow code. No biggy.)
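In plain Python (no TensorFlow), that computation looks like this — a sketch with made-up example values:

```python
import math

def cross_entropy(y_true, y_pred):
    """Mean over examples of -sum_i y_i * log(yhat_i), as in the TF line above."""
    losses = [-sum(y * math.log(p) for y, p in zip(ys, ps))
              for ys, ps in zip(y_true, y_pred)]
    return sum(losses) / len(losses)

# One training example whose correct class is index 2 (a one-hot target):
print(cross_entropy([[0, 0, 1]], [[0.1, 0.2, 0.7]]))  # -log(0.7), about 0.357
```

With a one-hot target, only the term for the correct class survives, exactly as the parenthetical above notes.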
You can also see how this generalizes to any number of classes: the sum over $i=0,1,\dots,9$ gets changed to a sum over all classes, however many of them there may be.
## Cross-entropy loss for two-class neural networks
As a special case, suppose we have two classes. In particular, suppose there are two classes and two outputs from the neural network (two neurons at the output layer). Then the cross-entropy loss for a single instance (the inner sum) becomes just
$$C_x = - y_0 \log \hat{y}_0 - y_1 \log \hat{y}_1.$$
Normally we normalize $y_0,y_1$ to be a probability distribution, so $y_0+y_1=1$, and similarly for $\hat{y}_0,\hat{y}_1$. As a result, we have $y_0 = 1-y_1$ and $\hat{y}_0 = 1-\hat{y}_1$. So, for a two-class neural network, we have
$$C_x = - y_1 \log \hat{y}_1 - (1-y_1) \log (1-\hat{y}_1),$$
and the empirical loss for an entire training set is
$$C = - {1 \over n} \sum_x [y_1 \log \hat{y}_1 + (1-y_1) \log (1-\hat{y}_1)].$$
So far, so good.
## Cross-entropy loss for two-class neural networks with a single output
Now if we have a two-class classification problem, it's not actually necessary for the network to produce two outputs. Alternatively, we could build a network with only a single output $\hat{y}$. We could interpret this single output value as "probability" that the input instance should be labelled as class 1. It follows that the "probability" that the instance should be labelled as class 0 is $1-\hat{y}$. So, if $\hat{y}>0.5$, we'll label the input as class 1; otherwise, we'll label it as class 0. This is the architecture used in the first web page you link to.
How should we measure the cross-entropy loss for this network? Well, just replace $\hat{y}_0,\hat{y}_1$ with $1-\hat{y},\hat{y}$ and everything goes through unchanged.
A slightly tricky thing is that we need to replace $y_0,y_1$ with something. What should we replace it with? I suggest we replace it with $1-y,y$, where $y$ is a value that indicates the desired output: if the correct label is class 1, then $y=1$, else $y=0$. Notice how this all works out nicely: if the correct label is class 1, then we get the distribution $0,1$; if the correct label is class 0, we get the distribution $1,0$.
Now plugging into the equations above, we see that the cross-entropy loss for a single instance $x$ is
$$C_x = - y \log \hat{y} - (1-y) \log (1-\hat{y})$$
(since we decided that instead of $\hat{y}_1$ we now have $\hat{y}$, and similarly for $y_1,y$).
As a result, the empirical loss for an entire training set is
$$C = - {1 \over n} \sum_x [y \log \hat{y} + (1-y) \log (1-\hat{y})].$$
This exactly matches the formula found in the first link you gave.
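As a quick numerical check that the two-output and single-output forms agree (a sketch with illustrative values, not part of the original answer):

```python
import math

def two_output_ce(y1, yhat1):
    """Cross-entropy with explicit distributions (y0, y1) = (1 - y1, y1)."""
    return -((1 - y1) * math.log(1 - yhat1) + y1 * math.log(yhat1))

def single_output_ce(y, yhat):
    """The single-output binary form: -y*log(yhat) - (1-y)*log(1-yhat)."""
    return -y * math.log(yhat) - (1 - y) * math.log(1 - yhat)

# The two forms coincide for any target/prediction pair:
for y in (0, 1):
    for yhat in (0.1, 0.5, 0.9):
        assert math.isclose(two_output_ce(y, yhat), single_output_ce(y, yhat))
print("two-output and single-output losses agree")
```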
## Bottom line
See how it all lines up and is consistent? Basically, the cross-entropy is a well-defined notion in information theory; there is only a single definition of the cross-entropy. In information theory, the cross-entropy is defined in terms of two probability distributions.
To use this idea to construct a loss function for a neural network, we construct two probability distributions (one based on the actual outputs from the neural network, the other based on the desired outputs), and then we apply the information-theoretic definition. The exact way of defining those two probability distributions will depend on the architecture of the neural network, and in particular, on the number of outputs from the neural network. When you have slightly different architectures, you'll get a slightly different equation for the cross-entropy loss function. But the underlying idea is exactly the same: we apply the same information-theoretic cross-entropy slightly differently, to distributions defined in a slightly different way, so the equations look different on the surface -- but they're actually not as different as they might seem.
I hope this helps!
• The expression $$C = - {1 \over n} \sum_x [y \log \hat{y} + (1-y) \log (1-\hat{y})]$$, which I've been thinking to be a "full form" of cross-entropy is, in fact, a special case of $$C = - {1 \over n} \sum_x \sum_{i=0}^9 y_i \log \hat{y}_i$$ for the particular situation when we have a single output unit. Thank you! Your answer was really clarifying! – mcrisc Jun 30 '16 at 20:42