text
stringlengths 104
605k
|
---|
# 1.7 Inverse functions (Page 4/10)
Page 4 / 10
The domain of function $\text{\hspace{0.17em}}f\text{\hspace{0.17em}}$ is $\text{\hspace{0.17em}}\left(1,\infty \right)\text{\hspace{0.17em}}$ and the range of function $\text{\hspace{0.17em}}f\text{\hspace{0.17em}}$ is $\text{\hspace{0.17em}}\left(\mathrm{-\infty },-2\right).\text{\hspace{0.17em}}$ Find the domain and range of the inverse function.
The domain of function $\text{\hspace{0.17em}}{f}^{-1}\text{\hspace{0.17em}}$ is $\text{\hspace{0.17em}}\left(-\infty \text{,}-2\right)\text{\hspace{0.17em}}$ and the range of function $\text{\hspace{0.17em}}{f}^{-1}\text{\hspace{0.17em}}$ is $\text{\hspace{0.17em}}\left(1,\infty \right).$
## Finding and evaluating inverse functions
Once we have a one-to-one function, we can evaluate its inverse at specific inverse function inputs or construct a complete representation of the inverse function in many cases.
## Inverting tabular functions
Suppose we want to find the inverse of a function represented in table form. Remember that the domain of a function is the range of the inverse and the range of the function is the domain of the inverse. So we need to interchange the domain and range.
Each row (or column) of inputs becomes the row (or column) of outputs for the inverse function. Similarly, each row (or column) of outputs becomes the row (or column) of inputs for the inverse function.
## Interpreting the inverse of a tabular function
A function $\text{\hspace{0.17em}}f\left(t\right)\text{\hspace{0.17em}}$ is given in [link] , showing distance in miles that a car has traveled in $\text{\hspace{0.17em}}t\text{\hspace{0.17em}}$ minutes. Find and interpret $\text{\hspace{0.17em}}{f}^{-1}\left(70\right).$
30 50 70 90 20 40 60 70
The inverse function takes an output of $\text{\hspace{0.17em}}f\text{\hspace{0.17em}}$ and returns an input for $\text{\hspace{0.17em}}f.\text{\hspace{0.17em}}$ So in the expression $\text{\hspace{0.17em}}{f}^{-1}\left(70\right),\text{\hspace{0.17em}}$ 70 is an output value of the original function, representing 70 miles. The inverse will return the corresponding input of the original function $\text{\hspace{0.17em}}f,\text{\hspace{0.17em}}$ 90 minutes, so $\text{\hspace{0.17em}}{f}^{-1}\left(70\right)=90.\text{\hspace{0.17em}}$ The interpretation of this is that, to drive 70 miles, it took 90 minutes.
Alternatively, recall that the definition of the inverse was that if $\text{\hspace{0.17em}}f\left(a\right)=b,\text{\hspace{0.17em}}$ then $\text{\hspace{0.17em}}{f}^{-1}\left(b\right)=a.\text{\hspace{0.17em}}$ By this definition, if we are given $\text{\hspace{0.17em}}{f}^{-1}\left(70\right)=a,\text{\hspace{0.17em}}$ then we are looking for a value $\text{\hspace{0.17em}}a\text{\hspace{0.17em}}$ so that $\text{\hspace{0.17em}}f\left(a\right)=70.\text{\hspace{0.17em}}$ In this case, we are looking for a $\text{\hspace{0.17em}}t\text{\hspace{0.17em}}$ so that $\text{\hspace{0.17em}}f\left(t\right)=70,\text{\hspace{0.17em}}$ which is when $\text{\hspace{0.17em}}t=90.$
Using [link] , find and interpret (a) and (b)
30 50 60 70 90 20 40 50 60 70
1. $f\left(60\right)=50.\text{\hspace{0.17em}}$ In 60 minutes, 50 miles are traveled.
2. ${f}^{-1}\left(60\right)=70.\text{\hspace{0.17em}}$ To travel 60 miles, it will take 70 minutes.
## Evaluating the inverse of a function, given a graph of the original function
We saw in Functions and Function Notation that the domain of a function can be read by observing the horizontal extent of its graph. We find the domain of the inverse function by observing the vertical extent of the graph of the original function, because this corresponds to the horizontal extent of the inverse function. Similarly, we find the range of the inverse function by observing the horizontal extent of the graph of the original function, as this is the vertical extent of the inverse function. If we want to evaluate an inverse function, we find its input within its domain, which is all or part of the vertical axis of the original function’s graph.
Given the graph of a function, evaluate its inverse at specific points.
1. Find the desired input on the y -axis of the given graph.
2. Read the inverse function’s output from the x -axis of the given graph.
## Evaluating a function and its inverse from a graph at specific points
A function $\text{\hspace{0.17em}}g\left(x\right)\text{\hspace{0.17em}}$ is given in [link] . Find $\text{\hspace{0.17em}}g\left(3\right)\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}{g}^{-1}\left(3\right).$
To evaluate $g\left(3\right),\text{\hspace{0.17em}}$ we find 3 on the x -axis and find the corresponding output value on the y -axis. The point $\text{\hspace{0.17em}}\left(3,1\right)\text{\hspace{0.17em}}$ tells us that $\text{\hspace{0.17em}}g\left(3\right)=1.$
To evaluate $\text{\hspace{0.17em}}{g}^{-1}\left(3\right),\text{\hspace{0.17em}}$ recall that by definition $\text{\hspace{0.17em}}{g}^{-1}\left(3\right)\text{\hspace{0.17em}}$ means the value of x for which $\text{\hspace{0.17em}}g\left(x\right)=3.\text{\hspace{0.17em}}$ By looking for the output value 3 on the vertical axis, we find the point $\text{\hspace{0.17em}}\left(5,3\right)\text{\hspace{0.17em}}$ on the graph, which means $\text{\hspace{0.17em}}g\left(5\right)=3,\text{\hspace{0.17em}}$ so by definition, $\text{\hspace{0.17em}}{g}^{-1}\left(3\right)=5.\text{\hspace{0.17em}}$ See [link] .
what is a function?
I want to learn about the law of exponent
explain this
what is functions?
A mathematical relation such that every input has only one out.
Spiro
yes..it is a relationo of orders pairs of sets one or more input that leads to a exactly one output.
Mubita
Is a rule that assigns to each element X in a set A exactly one element, called F(x), in a set B.
RichieRich
If the plane intersects the cone (either above or below) horizontally, what figure will be created?
can you not take the square root of a negative number
No because a negative times a negative is a positive. No matter what you do you can never multiply the same number by itself and end with a negative
lurverkitten
Actually you can. you get what's called an Imaginary number denoted by i which is represented on the complex plane. The reply above would be correct if we were still confined to the "real" number line.
Liam
Suppose P= {-3,1,3} Q={-3,-2-1} and R= {-2,2,3}.what is the intersection
can I get some pretty basic questions
In what way does set notation relate to function notation
Ama
is precalculus needed to take caculus
It depends on what you already know. Just test yourself with some precalculus questions. If you find them easy, you're good to go.
Spiro
the solution doesn't seem right for this problem
what is the domain of f(x)=x-4/x^2-2x-15 then
x is different from -5&3
Seid
All real x except 5 and - 3
Spiro
***youtu.be/ESxOXfh2Poc
Loree
how to prroved cos⁴x-sin⁴x= cos²x-sin²x are equal
Don't think that you can.
Elliott
By using some imaginary no.
Tanmay
how do you provided cos⁴x-sin⁴x = cos²x-sin²x are equal
What are the question marks for?
Elliott
Someone should please solve it for me Add 2over ×+3 +y-4 over 5 simplify (×+a)with square root of two -×root 2 all over a multiply 1over ×-y{(×-y)(×+y)} over ×y
For the first question, I got (3y-2)/15 Second one, I got Root 2 Third one, I got 1/(y to the fourth power) I dont if it's right cause I can barely understand the question.
Is under distribute property, inverse function, algebra and addition and multiplication function; so is a combined question
Abena
find the equation of the line if m=3, and b=-2
graph the following linear equation using intercepts method. 2x+y=4
Ashley
how
Wargod
what?
John
ok, one moment
UriEl
how do I post your graph for you?
UriEl
it won't let me send an image?
UriEl
also for the first one... y=mx+b so.... y=3x-2
UriEl
y=mx+b you were already given the 'm' and 'b'. so.. y=3x-2
Tommy
Please were did you get y=mx+b from
Abena
y=mx+b is the formula of a straight line. where m = the slope & b = where the line crosses the y-axis. In this case, being that the "m" and "b", are given, all you have to do is plug them into the formula to complete the equation.
Tommy
thanks Tommy
Nimo
0=3x-2 2=3x x=3/2 then . y=3/2X-2 I think
Given
co ordinates for x x=0,(-2,0) x=1,(1,1) x=2,(2,4)
neil |
##### Tools
This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.
# Sequences and their limits
(Redirected from Treat calculus discretely!)
## 1 Limits of sequences: long-term trends
Example (falling ball). We watch a ping-pong ball falling down and recording -- at equal intervals -- how high it is. The result is an ever-expanding string, a sequence, of numbers. If the frames of the video are combined into one image, it will look something like this:
We have a list: $$36,\ 35,\ 32, \ 27,\ 20,\ 11,\ 0,\ ...$$ We bring them back together in one rectangular plot so that the location varies vertically while the time progresses horizontally:
The plot is called the graph of the sequence.
As far as the data is concerned, we have a list of pairs, time and location, arranged in a table: $$\begin{array}{r|ll} \text{moment}&\text{height}\\ \hline 1&36\\ 2&35\\ 3&32\\ 4&27\\ 5&20\\ 6&11\\ 7&0\\ ...&... \end{array}\ \text{ or }\ \begin{array}{l|ll} \text{moment:}&1&2&3&4&5&6&7&...\\ \hline \text{height:}&36&35&32&27&20&11&0&... \end{array}.$$
To represent a sequence algebraically, we first give it a name, say, $a$, and then assign a special variation of this name to each term of the sequence: $$\begin{array}{ll|ll} \text{index:}&n&1&2&3&4&5&6&7&...\\ \hline \text{term:}&a_n&a_1&a_2&a_3&a_4&a_5&a_6&a_7&... \end{array}$$ The subscript is called the index; it indicates the place of the term within the sequence. We say “$a$ sub $1$”, “$a$ sub $2$,” etc.
In our example, we name the sequence $h$ for “height”. Then the above table take this form: $$\begin{array}{l|ll} \text{moment:}&1&2&3&4&5&6&7&...\\ \hline \text{height:}&h_1&h_2&h_3&h_4&h_5&h_6&h_7&...\\ &||&||&||&||&||&||&||&...\\ \text{height:}&36&35&32&27&20&11&0&... \end{array}$$ When abbreviated, it takes the form of this list: $$h_1=36,\ h_2=35,\ h_3=32, \ h_4=27,\ h_5=20,\ h_6=11,\ h_7=0,\ ....$$ $\square$
So, we use the following notation: $$a_1=1,\ a_2=1/2,\ a_3=1/3,\ a_4=1/4,\ ...,$$ where $a$ is the name of the sequence and adding a subscript indicates which term of the sequence we are facing.
We will study infinite sequences of numbers and especially their trends. The idea is simple: $$\begin{array}{llllllll} \text{sequence }&1&1/2&1/3&1/4&1/5&...&\text{ trends toward } 0;\\ \text{sequence }&.9&.99&.999&.9999&.99999&...&\text{ trends toward } 1;\\ \text{sequence }&1&2&3&4&5&...&\text{ trends toward } \infty;\\ \text{sequence }&0&1&0&1&0&...&\text{ has no trend}. \end{array}$$ In other words, an infinite sequence of numbers will be sometimes “accumulating” around a single number. The gap between the bouncing ball and the ground becomes invisible!
Even though every function $y=f(x)$ with an appropriate domain creates a sequence, $a_n=f(n)$, the converse isn't true. This discrepancy serves our purpose: the primary, if not the only, reason for studying sequences is to understand (their) trends, called limits.
A function defined on a ray in the set of integers, $\{p,p+1,...\}$, is called an infinite sequence, or simply sequence, typically given by its formula: $$a_n=1/n:\ n=1,2,3,...$$ For example, these are the possibilities: $$a_n=1/n,\ 1/n,\ \{a_n=1/n:\ n=1,2,3,...\}.$$ The last option is used when we treat the sequence as a set.
We could visualize sequences as the graphs of functions:
However, we take a different approach; we will apply, at a later time, what we have learned about sequences to our study of functions. This is why our visualizations of graphs of sequences will use the re-named Cartesian coordinate system:
• the horizontal axis is the $n$-axis, and
• the vertical axis is the $x$-axis.
This approach allows us to have a more compact way to visualize sequences (right) as sequences of locations on the $x$-axis visited over an infinite period of time. The long-term trend becomes clear when the points stop visibly “moving”.
Example (reciprocals). The go-to example is the sequence of the reciprocals: $$x_n=\frac{1}{n}.$$ It tends to $0$.
This fact is easy to confirm numerically: $$x_n=1.000,\ 0.500,\ 0.333,\ 0.250,\ 0.200,\ 0.167,\ 0.143,\ 0.125,\ 0.111,\ ...$$ $\square$
Example (plotting). However, numerical analysis alone can't be used for discovering the value of the limit. Plotting the first $1000$ terms of the sequence $x_n=n^{-.01}$ fails to suggest the true value of the limit:
In fact, it is zero. $\square$
Example (decimals). Sequences are ubiquitous. For example, given a real number, we can easily construct a sequence that tends to that number -- via its decimal approximations. For example, $$x_n=0.3,\ 0.33,\ 0.333,\ 0.3333,\ ... \text{ tends to } 1 / 3 .$$
$\square$
Example (alternating). The values can also approach the ultimate destination from both sides, such as $$x_n=(-1)^n\frac{1}{n}.$$
$\square$
The notation for limits is the following: $$a_n \to a\ \text{ as }\ n\to \infty,$$ as well as $$\lim_{n\to\infty} a_n=a.$$ We will include the possibility of infinite limits: $$a_n \to \infty\ \text{ as }\ n\to \infty,$$ and $$\lim_{n\to\infty} a_n=\infty.$$
Exercise. What can you say about the limit of an integer-valued sequence?
Example (Zeno's paradox). Consider a simple scenario: as you walk toward a wall, you can never reach it because once you've covered half the distance, there is still distance left, etc.
We mark these steps and do observe that there are infinitely many of them to be taken... $\square$
## 2 The definition of limit
Calculus, for a large part, is the study of how to properly handle infinity.
Example. Let's examine this seemingly legitimate computation: $$\begin{array}{cccccc} 0& \overset{\text{?}}{=\! =\! =} &0&&+0&&+0&&+0&&+...\\ & \overset{\text{?}}{=\! =\! =} &(1&-1)&+(1&-1)&+(1&-1)&+(1&-1)&+...\\ & \overset{\text{?}}{=\! =\! =} &1&-1&+1&-1&+1&-1&+1&-1&+...\\ & \overset{\text{?}}{=\! =\! =} &1&+(-1&+1)&+(-1&+1)&+(-1&+1)&+(-1&+1)&...\\ & \overset{\text{?}}{=\! =\! =} &1&+0&&+0&&+0&&+0&&+...\\ & \overset{\text{?}}{=\! =\! =} &1. \end{array}$$ That's impossible! How did this happen? One can say that we got something from nothing (the numbers refer to the amount of soil taken out):
The problem is that we casually carried out infinitely many algebraic operations. $\square$
Exercise. Which of the “$=$” signs above is incorrect?
Thus, when facing infinity, algebra may fail. But it doesn't have to... when the sequence has a limit! A limit is a number and the sequence approximates this number: $$\begin{array}{llllllll} \text{ sequence }&1&1/2&1/3&1/4&1/5&...&\text{ approximates }0;\\ \text{ sequence }&.9&.99&.999&.9999&.99999&...&\text{ approximates }1;\\ \text{ sequence }&1.&1.1&1.01&1.001&1.0001&...&\text{ approximates }1;\\ \text{ sequence }&3.&3.1&3.14&3.141&3.1415&...&\text{ approximates }\pi;\\ \text{ sequence }&1&2&3&4&5&...&\text{ approaches } \infty;\\ \text{ sequence }&0&1&0&1&0&...&\text{ doesn't approximate any number}. \end{array}$$ In other words, we can substitute the sequence for the number it approximates and do it with any degree of accuracy!
Now, let's find the exact meaning of limit.
Geometrically, we see how the sequence accumulates toward a particular horizontal line:
At the end, the dots can't be distinguished from the $x$-axis.
Example. We have a more concise illustration if we concentrate on the $y$-axis only:
We can see that after sufficiently many steps, the terms of the sequence, $a_n$, become indistinguishable from the limit, $a$. It seems that, say, the $10$th dot has merged with $a$. $\square$
Example. Let's now look at this “process” numerically. What does it mean that $a_n=1/n^2$ approaches $a=0$?
First, how long does it take to get within $.1$ from $a$? Look up in the table of values: it takes $4$ steps.
Second, how long does it take to get within $.01$ from $a$? It takes $11$ steps.
Third, how long does it take to get within $.001$ from $a$? It takes $32$ steps.
And so on. No matter how small a number I pick, eventually $a_n$ will be that close to its limit. $\square$
Example. Another interpretation of this analysis is in terms of accuracy. We understand the idea that $a_n=1/n^2$ approaches $a=0$ as : “the sequence approximates $0$”.
First, what if we need the accuracy to be $.1$? Look up in the table of values: we need to compute $4$ terms of the sequence or more.
Second, what if we need the accuracy to be $.01$? At least $11$ terms.
Third, what if we need the accuracy to be $.001$? At least $32$.
And so on. No matter how much accuracy I need, there is a way to accommodate this requirement by getting farther and farther into the sequence $a_n$. $\square$
Unfortunately, not all sequences are as simple as that. They may approach their respective limits in a number of ways, as we have seen. They don't have to be monotone:
They might approach the limit from above and below at the same time:
And so on... And then there are sequences with no limits. We need a more general approach.
We re-write what we want to say about the meaning of the limits in progressively more and more precise terms. $$\begin{array}{l|ll} n&y=a_n\\ \hline \text{As } n\to \infty, & \text{we have } y\to a.\\ \text{As } n\text{ approaches } \infty, & y\text{ approaches } a. \\ \text{As } n \text{ is getting larger and larger}, & \text{the distance from }y \text{ to } a \text{ approaches } 0. \\ \text{By making } n \text{ larger and larger},& \text{we make } |y-a| \text{ as small as needed}.\\ \text{By making } n \text{ larger than some } N>0 ,& \text{we make } |y-a| \text{ smaller than any given } \varepsilon>0. \end{array}$$
The absolute values above are the distances from $a_n$ to $a$, as shown below:
Algebraically, we see that for every measure of “closeness”, call it $\varepsilon$, the function's values become eventually that close to the limit. In other words, $\varepsilon$ is the degree of required accuracy.
Example. Let's prove this statement for the sequence from the last example, $a_n=1/n^2$. Let's imagine that any degree of accuracy $\varepsilon>0$ that needs to be accommodated is supplied ahead of time. Let's find such an $n$ that $a_n$ is within $\varepsilon$ from $a=0$. In other words, we need this inequality to be satisfied: $$|a_n-a|=\left| \frac{1}{n^2}-0 \right|=\frac{1}{n^2}<\varepsilon.$$ We solve it: $$n>\frac{1}{\sqrt{\varepsilon}}=N.$$ This proves that the requirement can be satisfied. Then, for any such $n$ we have $|a_n-a|<\varepsilon$, as required.
The result gives us the same answers for the three particular choices of $\varepsilon =.1,\ .01,\ .001$ from the last example, as well as for any other... For example, let's pick $\varepsilon=.0001$, what is $N$? By the formula, it is $$N= \frac{1}{\sqrt{.0001}} =\frac{1}{\left( 10^{-4}\right)^{1/2}} = \frac{1}{ 10^{-2} } =10^2=100.$$ $\square$
Exercise. Carry out such an analysis for $a_n=1/\sqrt{x}$.
Definition. We call number $a$ the limit of the sequence $a_n$ if the following condition holds:
• for each real number $\varepsilon > 0$, there exists a number $N$ such that, for every natural number $n > N$, we have
$$|a_n - a| < \varepsilon .$$ We also say that the limit is finite. If a sequence has a limit, then we call the sequence convergent and say that it converges; otherwise it is divergent and we say it diverges.
Example. Let's apply the definition to $$a_n=1+\frac{(-1)^n}{n}.$$ Suppose an $\varepsilon>0$ is given. Looking at the numbers, we discover that they accumulate toward $1$. Is this the limit? We apply the definition. Let's find such an $n$ that $a_n$ is within $\varepsilon$ from $a=1$: $$|a_n-a|=\left| 1+\frac{(-1)^n}{n}-1 \right|=\left| \frac{(-1)^n}{n} \right|=\frac{1}{n}<\varepsilon.$$ We solve it: $$n>\frac{1}{\varepsilon}.$$ That gives us the $N$ required by the definition; we let $$N= \frac{1}{\varepsilon} .$$ Then, for any $n>N$ we have $|a_n-a|<\varepsilon$, as required by the definition. $\square$
Another way to visualize a trend in a convergent sequence is to enclose the end of the tail of the sequence in a band:
It should be, in fact, a narrower and narrower band; its width is $2\varepsilon$. Meanwhile, the starting point of the band moves to the right; that's $N$.
Examples of divergence are below.
Example. A sequence may tend to infinity, such as $a_n=n$:
Then no band -- no matter how wide -- will contain the sequence's tail. $\square$
This behavior however has a meaningful pattern.
Definition. We say that a sequence $a_n$ tends to positive infinity if the following condition holds:
• for each real number $R$, there exists a natural number $N$ such that, for every natural number $n > N$, we have
$$a_n >R.$$ We say that a sequence $a_n$ tends to negative infinity if:
• for each real number $R$, there exists a natural number $N$ such that, for every natural number $n > N$, we have
$$a_n <R.$$ In either case, we also way that the limit is infinite.
We describe such a behavior with the following notation: $$a_n\to \pm\infty \text{ as } n\to \infty ,$$ or $$\lim_{n\to \infty}a_n=\pm\infty.$$
Example. Some sequences seem to have no pattern at all, such as $a_n=\sin n$:
Here, a band -- if narrow enough -- can contain the sequence's tail.
If, however, we also divide this expression by $n$, the swings start to diminish:
The limit is $0$! $\square$
Example. The next example is $a_n=1+(-1)^n+\frac{1}{n}$. It seems to approach two limits at the same time:
Indeed, no matter how narrow, we can find two bands to contain the sequence's two tails. However, no single band -- if narrow enough -- will contain them! $\square$
Example. Let's pick a simpler sequence and do this analytically. Let $$a_n=(-1)^n=\begin{cases} 1&\text{ if } n \text{ is even,}\\ -1&\text{ if } n \text{ is odd.} \end{cases}$$ Is the limit $a=1$? If it is, then this is what needs to be “small”: $$|a_n-a|=\left| (-1)^n-1 \right|=\begin{cases} 0&\text{ if } n \text{ is even,}\\ 2&\text{ if } n \text{ is odd.} \end{cases}$$ It's not! Indeed, this expression won't be less than $\varepsilon$ if we choose it to be, say, $1$, no matter what $N$ is. So, $a=1$ is not the limit. Is $a=-1$ the limit? Same story. In order to prove the negative, we need to try every possible value of $a$. $\square$
Exercise. Finish the proof in the last example.
Example. For a given real number, we can construct a sequence that approximates that number -- via truncations of its decimal approximations. For example, we have already seen this: $$x_n=0.9 , 0.99 , 0.999 , 0.9999 , . . . \text{ tends to } 1 .$$ Furthermore, we have: $$x_n=0.3 , 0.33 , 0.333 , 0.3333 , . . . \text{ tends to } 1 / 3 .$$ The idea of limit then helps us understand infinite decimals.
• What is the meaning of $.9999...$? It is the limit of the sequence $0.9 , 0.99 , 0.999, ...$; i.e., $1$.
• What is the meaning of $.3333...$? It is the limit of the sequence $0.3 , 0.3 , 0.333, ...$; i.e., $1/3$.
$\square$
Exercise. Find the formulas for the two sequences above and confirm the limits.
We need to justify “the” in “the limit”.
Theorem (Uniqueness). A sequence can have only one limit (finite or infinite); i.e., if $a$ and $b$ are limits of the same sequence, then $a=b$.
Proof. The geometry of the proof is clear: we want to separate the two horizontal lines representing two potential limits by two non-overlapping bands, as shown above. Then the tail of the sequence would have to fit one or the other, but not both. These bands correspond to two intervals around those two “limits”. In order for them to be disjoint, their width (that's $2\varepsilon$!) should be less than half the distance between the two numbers.
The proof is by contradiction. Suppose $a$ and $b$ are two limits, i.e., either satisfies the definition, and suppose also $a\ne b$. In fact, without loss of generality we can assume that $a<b$. Let $$\varepsilon = \frac{b-a}{2}.$$ Then, what we are going to use at the end is $$a+\varepsilon=b-\varepsilon.$$
Now, we rewrite the definition for $a$ and $b$ specifically:
• there exists a number $L$ such that, for every natural number $n > L$, we have
$$|a_n - a| < \varepsilon .$$ Now, we rewrite the definition for $M$ as limit:
• there exists a number $M$ such that, for every natural number $n > M$, we have
$$|a_n - b| < \varepsilon .$$ In order to combine the two statements, we need them to be satisfied for the same values of $n$. Let $$N=\min\{ L,M\}.$$ Then,
• for every number $n > N$, we have
$$|a_n - a| < \varepsilon ,$$
• for every number $n > N$, we have
$$|a_n - b| < \varepsilon .$$ In particular, for every $n > N$, we have: $$a_n < a+\varepsilon=b-\varepsilon<a_n.$$ A contradiction. $\blacksquare$
Exercise. Follow the proof and demonstrate that that it is impossible to for a sequence to have as limit: (a) a real number and $\pm\infty$, or (b) $-\infty$ and $+\infty$.
The theorem indicates that the correspondence:
• a convergent sequence $\longrightarrow$ its limit (a real number),
makes sense. Can we reverse this correspondence? No, because there are many sequences converging to the same number. However, we can say that a real number “is” its approximations, i.e., all sequences that converge to it.
Thus, there can be no two limits and we are justified to speak of the limit.
The limits of some specific sequences can be easily found.
Theorem (Constant). For any real $c$, we have $$\lim_{n \to \infty}c = c.$$
Theorem (Arithmetic progression). For any real numbers $m,b>0$, we have $$\lim_{n \to \infty}(b+nm) = \begin{cases} -\infty &\text{ if } m<0,\\ b &\text{ if } m=0,\\ +\infty &\text{ if } m>0. \end{cases}$$
Exercise. Prove the theorem.
Theorem (Powers). For any integer $k$, we have $$\lim_{n \to \infty}n^k = \begin{cases} 0&\text{ if } k<0,\\ 1&\text{ if } k=0,\\ +\infty&\text{ if } k>0. \end{cases}$$
Proof. First, the case of $k<0$. Suppose $\varepsilon >0$ is given. We need to find such an $N$ that $|n^k-0|=n^k<\varepsilon$ whenever $n>N$. We can express such an $N$ in terms of this $\varepsilon$; we just choose: $$N= \sqrt[1/k]{\varepsilon}.$$
Second, the case of $k<0$. Suppose $R>0$ is given. We need to find such an $N$ that $n^k>R$ whenever $n>N$. We can express such an $N$ in terms of this $R$; similarly to the above we choose: $$N= \sqrt[1/k]{R}.$$ $\blacksquare$
Theorem (Geometric progression). For any real number $r$, we have $$\lim_{n \to \infty}r^n = \begin{cases} \text{diverges } &\text{ if } r \le -1,\\ 0 &\text{ if } |r|<1,\\ 1 &\text{ if } r=1,\\ +\infty &\text{ if } r>1. \end{cases}$$
Exercise. Prove the theorem.
Example. Geometric progressions are used to model population growth and decline. $\square$
Exercise. Find the limits of each of these sequences or show that it doesn't exist:
• (a) $1,\ 3,\ 5,\ 7,\ 9,\ 11,\ 13,\ 15,\ ...$;
• (b) $.9,\ .99,\ .999,\ .9999,\ ...$;
• (c) $1,\ -1,\ 1,\ -1,\ ...$;
• (d) $1,\ 1/2,\ 1/3,\ 1/4,\ ...$;
• (e) $1,\ 1/2,\ 1/4\ ,1/8,\ ...$;
• (f) $2,\ 3,\ 5,\ 7,\ 11,\ 13,\ 17,\ ...$;
• (g) $1,\ -4,\ 9,\ -16,\ 25,\ ...$;
• (h) $3,\ 1,\ 4,\ 1,\ 5,\ 9,\ ...$.
Example. In either of the two tables below, we have a sequence given in the first two columns. Its $n$th term formula is known. The third column shows the sequence of sums (Chapter 1) of the first: $$\begin{array}{c|c|lll} n&a_n&s_n\\ \hline 1&\frac{1}{1}&\frac{1}{1}\\ 2&\frac{1}{2}&\frac{1}{1}+\frac{1}{2}\\ 3&\frac{1}{3}&\frac{1}{1}+\frac{1}{2}+\frac{1}{3}\\ \vdots&\vdots&\vdots\\ n&\frac{1}{n}&\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{n}\\ \end{array}\quad\quad \begin{array}{c|c|lll} n&a_n&s_n\\ \hline 1&\frac{1}{1}&\frac{1}{1}\\ 2&\frac{1}{2}&\frac{1}{1}+\frac{1}{2}\\ 3&\frac{1}{4}&\frac{1}{1}+\frac{1}{2}+\frac{1}{4}\\ \vdots&\vdots&\vdots\\ n&\frac{1}{2^n}&\frac{1}{1}+\frac{1}{2}+\frac{1}{4}+...+\frac{1}{2^{n-1}}\\ \end{array}$$ The $n$th term formula is unknown as we don't know how to represent these quantities without “...”. In contrast to last example, finding the limit of such a sequence is a challenge... $\square$
## 3 Algebra of sequences and limits
If every real number is the sequence of its approximations, does algebra with these numbers still make sense? Fortunately, limits behave well with respect to the usual arithmetic operations. Below we assume that the sequences are defined on the same set of integers.
We will study convergence of sequences with the help of other, simpler, sequences. The theorem below shows why.
Theorem. $$a_n\to a \ \Longleftrightarrow\ |a_n-a|\to 0.$$
Then to understand limits of sequences in general, we need first to understand those of a smaller class:
• positive sequences that converge to $0$.
The definition of convergence becomes simpler:
• $0<a_n\to 0$ when for any $\varepsilon >0$ there is $N$ such that $a_n<\varepsilon$ for all $n>N$.
To graphically add two sequences, we flip the second upside down and then connect each pair of dots with a bar. Then, the lengths of these bars form the new sequence. Now, if either sequence converges to $0$, then so do these bars.
Theorem (Sum Rule). $$0<a_n\to 0,\ 0<b_n\to 0 \ \Longrightarrow\ a_n+ b_n\to 0.$$
Proof. Suppose $\varepsilon >0$ is given. From the definition,
• $a_n\to 0\ \Longrightarrow$ there is $N$ such that $a_n<\varepsilon /2$ for all $n>N$, and
• $b_n\to 0\ \Longrightarrow$ there is $M$ such that $b_n<\varepsilon /2$ for all $n>M$.
Then for all $n>\max\{N,M\}$, we have $$a_n+b_n<\varepsilon /2+\varepsilon /2 =\varepsilon .$$ Therefore, by definition $a_n+ b_n\to 0$. $\blacksquare$
Exercise. Prove the version of the above theorem for $m$ sequences (a) from the theorem and (b) by generalizing the proof.
Multiplying a sequence by a constant number simply stretches the whole picture in the vertical direction -- in both directions, away from the $z$-axis.
Then, zero remains zero!
Theorem (Constant Multiple Rule). $$0<a_n\to 0 \ \Longrightarrow\ ca_n\to 0 \text{ for any real }c>0.$$
Proof. Suppose $\varepsilon >0$ is given. From the definition,
• $0 < a_n\to 0\ \Longrightarrow$ there is $N$ such that $a_n <\varepsilon /c$ for all $n>N$.
Then for all $n>N$, we have $$c\cdot a_n < c\cdot \varepsilon /c=\varepsilon .$$ Therefore, by definition $ca_n\to 0$. $\blacksquare$
For more complex situations we need to use the fact that convergent sequences are bounded; i.e., the sequence fits into a (not necessary narrow) band.
Theorem (Boundedness). $$a_n\to a \ \Longrightarrow\ |a_n| < Q \text{ for some real } Q.$$
Proof. The idea is that the tail of the sequence does fit into a (narrow) band; meanwhile, there are only finitely many terms left... Choose $\varepsilon =1$. Then by definition, there is such $N$ that for all $n>N$ we have: $$|a_n-a| < 1.$$ Then, we have $$\begin{array}{lll} |a_n|&=|(a_n-a)+a|&\text{ ...then by the Triangle Inequality...}\\ &\le |a_n-a|+|a|&\text{ ...then by the inequality above...}\\ &<1+|a|. \end{array}$$ To finish the proof, we choose: $$Q=\max\{|a_1|,...,|a_N|,1+|a|\}.$$ $\blacksquare$
The proof is illustrated below:
The converse isn't true: not every bounded sequence is convergent. Just try $a_n=\sin n$. We will show later that, with an extra condition, bounded sequences do have to converge...
We are now ready for the general results on the algebra of limits.
Theorem (Sum Rule). If sequences $a_n ,b_n$ converge then so does $a_n + b_n$, and $$\lim_{n\to\infty} (a_n + b_n) = \lim_{n\to\infty} a_n + \lim_{n\to\infty} b_n.$$
Proof. Suppose $$a_n\to a,\ b_n\to b.$$ Then $$|a_n - a|\to 0, \ |b_n-b|\to 0.$$ We compute $$\begin{array}{lll} |(a_n + b_n)-(a+b)|&= |(a_n-a)+( b_n-b)|& \text{ ...then by the Triangle Inequality...}\\ &\le |a_n-a|+| b_n-b|&\\ &\to 0+0 & \text{ ...by SR...}\\ &=0. \end{array}$$ Then, by the last theorem, we have $$|(a_n + b_n)-(a+b)|\to 0.$$ Then, by the first theorem, we have: $$a_n + b_n\to a+b.$$ $\blacksquare$
When two sequences are multiplied, it is as if we use each pair of their values to build a rectangle:
Then the areas of these rectangles form a new sequence and these areas converge if the widths and the heights converge.
Theorem (Product Rule). If sequences $a_n ,b_n$ converge then so does $a_n \cdot b_n$, and $$\lim_{n\to\infty} (a_n \cdot b_n) = (\lim_{n\to\infty} a_n)\cdot( \lim_{n\to\infty} b_n).$$
Proof. Suppose $a_n\to a,\ b_n\to b$. Then, $$|a_n-1|\to 0,\ |b_n-b|\to 0.$$ Consider, $$\begin{array}{lll} |a_n\cdot b_n-a\cdot b| &= |a_n\cdot b_n+(-a\cdot b_n+a\cdot b_n) -a\cdot b|&\text{ ...adding extra terms then factoring...}\\ &= |(a_n-a)\cdot b_n+a\cdot( b_n - b)|&\text{ ...then by the Triangle Inequality...}\\ &\le |(a_n-a)\cdot b_n|+|a\cdot ( b_n - b)|&\\ &= |a_n-a|\cdot |b_n|+|a|\cdot | b_n - b|&\text{ ...then by Boundedness...}\\ &\le |a_n-a|\cdot Q+|a|\cdot | b_n - b|&\\ &\to 0\cdot Q+|a|\cdot 0&\text{ ...by SR and CMR...}\\ &=0. \end{array}$$ Therefore, $$a_n\cdot b_n \to a\cdot b.$$ $\blacksquare$
CMR follows.
Theorem (Constant Multiple Rule). If sequence $a_n$ converges then so does $c a_n$ for any real $c$, and $$\lim_{n\to\infty} c\, a_n = c \cdot \lim_{n\to\infty} a_n.$$
Example. These laws help us justify the following trick of finding fraction representations of infinite decimals. This is how we deal with $x=.3333...$: $$\begin{array}{llll} x&=0.3333...\\ -\\ 10x&=3.3333...\\ \hline -9x&=-3.0000...\\ &&&&&&&&\Longrightarrow\ x=1/3 \end{array}$$ Instead we use the Constant Multiple Rule and the Difference Rule to carry out the following algebra of sequences: $$\begin{array}{llll} a_n:&0&0.3&0.33&0.333&0.3333&...&\to&x\\ -\\ 10a_n:&3&3.3&3.33&3.333&3.3333&...&\to&10x\\ \hline -9a_n:&-3&-3&-3&-3&-3&...&\to&-9x\\ &&&&&&&&\Longrightarrow\ x=1/3 \end{array}$$ Note that we have shifted the values of the second sequence. $\square$
One can understand division of sequences as multiplication in reverse: if the areas of the rectangles converge and so do their widths, then so do their heights.
Also, when two sequences are divided, it is as if we use each pair of their values to build a triangle:
Then the tangents of the base angles of these triangles form a new sequence and they converge if the widths and the heights converge.
Theorem (Quotient Rule). If sequences $a_n ,b_n$ converge then so does $a_n / b_n$ whenever defined, and $$\lim_{n\to\infty} \left(\frac{a_n}{b_n}\right) = \frac{\lim\limits_{n\to\infty} a_n}{\lim\limits_{n\to\infty} b_n},$$ provided $\lim_{n\to\infty} b_n \ne 0.$
Proof. We will only prove the case of $a_n=1$. Suppose $b_n\to b\ne 0$. First, choose $\varepsilon =|b|/2$ in the definition of convergence. Then there is $N$ such that for all $n>N$ we have $$|b_n-b|<|b|/2.$$ Therefore, $$|b_n|>|b|/2.$$ Next, $$\begin{array}{lll} \left| \frac{1}{b_n}-\frac{1}{b} \right| &= \left|\frac{b-b_n}{b_nb} \right|&\\ &= \frac{|b-b_n|}{|b_n|\cdot|b|}&\text{ ...then by above inequality...}\\ &< \frac{|b-b_n|}{|b/2|\cdot|b|}&\\ &\to \frac{0}{|b/2|\cdot|b|}&\text{ ...by the CMR...}\\ &=0. \end{array}$$ Therefore, $$\frac{1}{b_n} \to \frac{1}{b}.$$ Finally, the general case of QR follows from PR: $$\frac{a_n}{b_n}=a_n\cdot \frac{1}{b_n}\to a\cdot \frac{1}{b}=\frac{a}{b}.$$ $\blacksquare$
Exercise. What are the rules of the algebra of infinities for products?
Warning: it is considered a serious error if you use the conclusion (the formula) one of these rules without verifying the conditions (the convergence of the sequences involved).
The summary result below shows that when we replace every real number with a sequence converging to it, it is still possible to do algebraic operations with them.
Theorem (Algebra of Limits of Sequences). Suppose $a_n\to a$ and $b_n\to b$. Then $$\begin{array}{|ll|ll|} \hline \text{SR: }& a_n + b_n\to a + b& \text{CMR: }& c\cdot a_n\to ca& \text{ for any real }c\\ \text{PR: }& a_n \cdot b_n\to ab& \text{QR: }& a_n/b_n\to a/b &\text{ provided }b\ne 0\\ \hline \end{array}$$
Example. Let $$a_n=7n^{-2}+\frac{2}{3^n}+8.$$ What is its limit as $n\to \infty$? The computation is straightforward, but every step has to be justified with the rules above.
To understand which rules to apply first, observe that the last operation is addition. We use SR first, subject to justification: $$\begin{array}{lll} \lim_{n\to \infty}a_n&=\lim_{n\to \infty} (7n^{-2}+\frac{2}{3^n}+8) &\text{ ...use SR}\\ &=\lim_{n\to \infty} (7\cdot n^{-2})+\lim_{n\to \infty}(2\cdot \frac{1}{3^n})+\lim_{n\to \infty}8 &\text{ ...use CMR }\\ &=7\cdot \lim_{n\to \infty} n^{-2} +2\cdot \lim_{n\to \infty}3^{-n}+8 \quad&\\ &=7\cdot 0 +2 \cdot 0 +8\\ &=8. \end{array}$$ As all the limits exist, our use of SR (and then CMR) was justified. $\square$
Example. Prove the limit: $$\lim_{n \to \infty}(n^2-n) = +\infty .$$ Plotting the graph does suggest that the limit is infinite.
Presented verbally, these rules have these abbreviated versions:
• the limit of the sum is the sum of the limits;
• the limit of the difference is the difference of the limits;
• the limit of the product is the product of the limits;
• the limit of the quotient is the quotient of the limits (as long as the one of the denominator isn't zero).
Warning: never forget to confirm the preconditions before using these rules.
• right: take the limit of either, then down: add the results; or
• down: add them, then right: take the limit of the result.
The result is the same! For the Product Rule and the Quotient Rule, we just replace “$+$” with “$\cdot$” and “$\div$” respectively.
These rules show why approximations work. Indeed, we can think of a sequence that converges to a number as a sequence of better and better approximations. Then carrying out all the algebra with these sequences will produce the same result as the original computation is meant to produce! For example, here is such a substitution: $$\begin{array}{cccl} (1)&+&(2)&&=&3\\ \left(1+\frac{1}{n}\right)&+&\left(2-\frac{5}{n}\right)&=3-\frac{4}{n}&\to&3 \end{array}$$
What about infinite limits? If we replace an infinity with a sequence that approaches it, will the algebra make sense?
## 4 Can we add infinities? Subtract? Divide? Multiply?
We have demonstrated that in our computations of limits we can replace any sequence with its limit and continue doing the algebra. This conclusion doesn't apply to divergent sequences!
Sequences that approach infinity diverge, technically, but they provide useful information about the pattern exhibiting by the sequences. Such a sequence can also be a part of another, convergent sequence...
Theorem (Limits of Polynomials). Suppose we have a polynomial of degree $p$ with the leading coefficient $a_p\ne 0$. Then the limit of the sequence defined by this function is: $$\lim_{n\to\infty}(a_pn^p+a_{p-1}n^{p-1}+...+ a_1n+a_0)=\begin{cases} +\infty&\text{ if } a_p>0;\\ -\infty&\text{ if } a_p<0. \end{cases}$$
Proof. The idea is to factor out the highest power: $$a_pn^p+a_{p-1}n^{p-1}+...+ a_1n+a_0=n^p(a_p+a_{p-1}n^{-1}+...+ a_1n^{1-p}+a_0n^{-p})\to +\infty(a_p+0).$$ $\blacksquare$
So, as far as its behavior at $\infty$, for a polynomial,
• only the leading term matters.
Example. Evaluate the limit: $$\lim_{n \to \infty}\frac{4n^2-n+2}{2n^2-1}.$$
Plotting the graph does suggest that the limit is $a=2$:
Once again, we can't conclude that the limit doesn't exist; we've just failed to find the answer. The path out of this conundrum lies through algebra.
We divide the numerator and denominator by $n^2$ : $$\begin{array}{lll} \frac{4n^2-n+2}{2n^2-1}&=\frac{(4n^2-n+2)/n^2}{(2n^2-1)/n^2}\\ &=\frac{4-\tfrac{1}{n}+\tfrac{2}{n^2}}{2-\tfrac{1}{n^2}}\\ &\to\frac{4-0+0}{2-0}\\ &=\frac{4}{2}\\ &=2. \end{array}$$ We only used QR at the very end, after the indeterminacy has been resolved. $\square$
The general method for finding such limits is given by the theorem below.
Theorem (Limits of rational functions). Suppose we have a rational function $f$ represented as a quotient of two polynomials of degrees $p$ and $q$, with the leading coefficients $a_p\ne 0,\ b_q\ne 0$. Then the limit of the sequence defined by this function is: $$\lim_{n\to\infty}\frac{a_pn^p+a_{p-1}n^{p-1}+...+ a_1n+a_0}{b_qn^q+b_{q-1}n^{q-1}+...+ b_1n+b_0}=\begin{cases} \pm\infty&\text{ if } p>q;\\ \frac{a_p}{b_p}&\text{ if } p=q;\\ 0&\text{ if } p<q. \end{cases}$$
Proof. The idea is to divide by the highest power. If $p>q$, we have $$\frac{a_pn^p+a_{p-1}n^{p-1}+...+ a_1n+a_0}{b_qn^q+b_{q-1}n^{q-1}+...+ b_1n+b_0}=\frac{a_p+a_{p-1}n^{-1}+...+ a_1n^{-p+1}+a_0n^{-p}}{b_qn^{q-p}+b_{q-1}n^{q-p-1}+...+ b_1n^{1-p}+b_0n^{-p}}\to\frac{a_p+0}{0}=\pm\infty.$$ If $p=q$, we have $$\frac{a_pn^p+a_{p-1}n^{p-1}+...+ a_1n+a_0}{b_qn^q+b_{q-1}n^{q-1}+...+ b_1n+b_0}=\frac{a_p+a_{p-1}n^{-1}+...+ a_1n^{-p+1}+a_0n^{-p}}{b_q+b_{q-1}n^{-1}+...+ b_1n^{1-p}+b_0n^{-p}}\to\frac{a_p+0}{b_p+0}=\frac{a_p}{b_p}.$$ If $p<q$, we have $$\frac{a_pn^p+a_{p-1}n^{p-1}+...+ a_1n+a_0}{b_qn^q+b_{q-1}n^{q-1}+...+ b_1n+b_0}=\frac{a_pn^{p-q}+a_{p-1}n^{p-q-1}+...+ a_1n^{1-q}+a_0n^{-q}}{b_q+b_{q-1}n^{-1}+...+ b_1n^{-q+1}+b_0n^{-q}}\to\frac{0}{b_q+0}=0.$$ $\blacksquare$
This is the lesson we have re-learned:
• the long-term behavior of polynomials is determined by their leading terms.
Indeed: $$\lim_{n\to\infty}\frac{a_pn^p+a_{p-1}n^{p-1}+...+ a_1n+a_0}{b_qn^q+b_{q-1}n^{q-1}+...+ b_1n+b_0}=\lim_{n\to\infty}\frac{a_pn^p}{b_qn^q}=\frac{a_p}{b_q}\lim_{n\to\infty}n^{p-q}.$$
Example. Find the limit of the sequence: $$\begin{array}{lll} y_n&=\frac{1+(-3)^n}{5^n} & \leadsto\text{ QR? } \text{ But the numerator diverges -- DEAD END! }\\ &=\frac{1}{5^n}+\frac{(-3)^n}{5^n}\\ &=\left( \frac{1}{5} \right)^n + \left( \frac{-3}{5} \right)^n \\ &\quad\quad \downarrow \quad\quad\quad\quad \downarrow\\ &\quad\quad 0 \quad\quad\quad\quad 0 \\ &\to 0 &\text{ by SR }. \end{array}$$ These are two geometric progressions with the ratios: $r=1/5,\ -3/5$, that satisfy $|r| <0$. Meanwhile, our application of SR was justified by the fact that the two limits exit. $\square$
Exercise. Find the limit of the composition of $f(x)=\operatorname{sign}(x)$ and the sequence $x_n$ given by (a) $1/n$, (b) $-1/n$, (c) $(-1)^n/n$.
What is infinity?
The plus (or minus) infinity is identified with the collection of all sequences approaching this infinity. In other words, the following identity is read in both directions: $$\lim _{n\to +\infty}a_n=+\infty.$$
Now, does it make sense to do any algebra with the infinities? Yes, as long as the algebra with these limits makes sense.
The Algebraic Rules of Limits above have exceptions; we can imagine that one or both of the sequences approach infinity or that the limit in the denominator is $0$.
Theorem (Algebra of Infinite Limits of Sequences I). Suppose $a_n\to a$ and $b_n\to \pm\infty$. Then $$\begin{array}{|ll|llll|} \hline \text{SR: }& a_n + b_n\to \pm\infty& \text{CMR: }& c\cdot b_n\to \pm\infty& \text{ for any real }c>0\\ \text{PR: }& a_n \cdot b_n\to \operatorname{sign}(a)\infty& \text{QR: }& a_n/b_n\to 0, & b_n/a_n\to \pm\operatorname{sign}(a)\infty&\text{provided }a\ne 0 \\ \hline \end{array}$$
Theorem (Algebra of Infinite Limits of Sequences II). Suppose $a_n\to \pm\infty$ and $b_n\to \pm\infty$. Then $$\begin{array}{|ll|} \hline \text{SR: }& a_n + b_n\to \pm\infty& \\ \text{PR: }& a_n \cdot b_n\to +\infty& &\\ \hline \end{array}$$
Justified by these theorems, we follow the algebra of infinities: $$\begin{array}{|lll|} \hline \text{number } &+& (+\infty)&=+\infty\\ \text{number } &+& (-\infty)&=-\infty\\ +\infty &+& (+\infty)&=+\infty\\ -\infty &+& (-\infty)&=-\infty\\ \text{number } &/& (\pm\infty)&=0\\ \hline \end{array}$$ These are just shortcuts!
There is no $\infty -\infty$ (just as there is no $\infty /\infty$)... Why not?
Behind each $\infty$, there must be a sequence approaching $\infty$! However, the outcome is ambiguous; on the one hand we have: $$a_n=n \to+\infty,\ b_n=-n \to+\infty\ \Longrightarrow\ a_n- b_n=0 \to 0;$$ on the other: $$a_n=n^2 \to+\infty,\ b_n=-n \to+\infty\ \Longrightarrow\ a_n- b_n=n^2-n \to +\infty,$$ by Limits of Polynomials. Two seemingly legitimate answers for the same expression, $\infty -\infty$...
We have another indeterminate expression!
## 5 More properties of limits of sequences
Only the tail of the sequence matters for convergence:
Theorem (Truncation Principle). A sequence is convergent if and only if all of its truncations are convergent: $$\big( a_n:\ n=p,p+1,... \big)\to a \ \Longleftrightarrow\ \big(a_n:\ n=p+1,p+2,...\big)\to a.$$
Exercise. Prove the theorem.
Non-strict inequalities between sequences $$a\leftarrow a_n \ge b_n \to b,$$ are preserved under limits: $$a\ge b.$$
Theorem (Comparison Test). If $a_n \ge b_n$ for all $n$ greater than some $N$, then $$\lim_{n\to\infty} a_n \ge \lim_{n\to\infty} b_n,$$ provided the sequences converge.
Proof. The geometry of the proof is clear: we want to separate the two horizontal lines representing the two limits by two non-overlapping bands, as shown above. Then, if narrow enough, the tails of the “larger” sequence would have to fit the “smaller” band. These bands correspond to two intervals around those two limits. In order for them to be disjoint, their width (that's $2\varepsilon$!) should be less than half the distance between the two numbers.
The proof is by contradiction. Suppose $a$ and $b$ are the limits of $a_n$ and $b_n$ respectively and suppose also $a< b$. Let $$\varepsilon = \frac{b-a}{2}.$$ Then, what we are going to use at the end is $$a+\varepsilon=b-\varepsilon.$$
Now, we rewrite the definition for $a$ as limit:
• there exists a natural number $N$ such that, for every natural number $n > L$, we have
$$|a_n - a| < \varepsilon .$$ Now, we rewrite the definition for $b$ as limit:
• there exists a natural number $K$ such that, for every natural number $n > M$, we have
$$|b_n - b| < \varepsilon .$$ In order to combine the two statements, we need them to be satisfied for the same values of $n$. Let $$N=\max\{ L,M\}.$$ Then,
• for every number $n > N$, we have
$$|a_n - a| < \varepsilon ,\text{ or } a-\varepsilon<a_n<a+\varepsilon,$$
• for every number $n > N$, we have
$$|b_n - b| < \varepsilon ,\text{ or } b-\varepsilon<b_n<b+\varepsilon.$$ Taking one from either of the two pairs of inequalities, we have: $$a_n < a+\varepsilon=b-\varepsilon<b_n.$$ A contradiction. $\blacksquare$
Exercise. Show that replacing the non-strict inequality, $a_n \ge b_n$, with a strict one, $a_n > b_n$, won't produce a strict inequality in the conclusion of the theorem.
The situation is similar to that of the Uniqueness Theorem: if the opposite inequality were to hold, we could find two bands to contain the two sequences' tails so that the original inequality would fail:
Warning: from the inequality in the theorem, we can't conclude anything about the existence of the limit:
Having two inequalities, on both sides, may work better.
It is called a squeeze. If we can squeeze the sequence under investigation between two familiar sequences, we might be able to say something about its limit. Some further requirements will be necessary.
Theorem (Squeeze Theorem). If a sequence is squeezed between two sequences with the same limit, its limit also exists and is equal to the that number; i.e., $$a_n \leq c_n \leq b_n \text{ for all } n > N,$$ and $$\lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n = c,$$ then the sequence $c_n$ converges and $$\lim_{n\to\infty} c_n = c.$$
Proof. The geometry of the proof is shown below:
Suppose $\varepsilon>0$ is given. As we know, we have for all $n$ larger than some $N$: $$c-\varepsilon < a_n < c+\varepsilon \text{ and } c-\varepsilon < b_n < c+\varepsilon.$$ Then we have: $$c-\varepsilon < a_n \le c_n \le b_n < c+\varepsilon.$$ $\blacksquare$
Example. Sometimes the choice of the squeeze is obvious. Consider: $$c_n=\frac{(-1)^n}{n}.$$ Examining the sequence reveals the two bounds:
In other words, we have: $$-\frac{1}{n} \le \frac{(-1)^n}{n} \le \frac{1}{n} .$$ Now, since both $a_n=-\frac{1}{n}$ and $b_n=\frac{1}{n}$ go to $0$, by the Squeeze Theorem, so does $c_n=\frac{(-1)^n}{n}$. $\square$
Example. Let's find the limit, $$\lim_{n \to \infty }\frac{1}{n} \sin n.$$
It cannot be computed by PR because $$\lim_{n \to \infty }\sin n.$$ does not exist. Let's try a squeeze. This is what we know from trigonometry: $$-1 \le \sin n \le 1.$$ However, this squeeze proves nothing about the limit of our limit!
Let's try another squeeze: $$-\left| \frac{1}{n} \right| \le \frac{1}{n} \sin\frac{1}{n} \le \left| \frac{1}{n} \right| .$$ Now, since $\lim_{n \to \infty }(-\frac{1}{n}) =\lim_{n \to \infty }\frac{1}{n}=0$, by the Squeeze Theorem, we have: $$\lim_{n \to \infty }\frac{1}{n} \sin n=0.$$ $\square$
Exercise. Suppose $a_n$ and $b_n$ are convergent. Prove that $\max\{a_n,b_n\}$ and $\min\{a_n,b_n\}$ are also convergent. Hint: start with the case $\lim a_n>\lim b_n$.
The squeeze theorem is also known as the Two Policemen Theorem: if two policemen are escorting a prisoner (handcuffed) between them, and both officers go to the same(!) police station, then -- in spite of some freedom the handcuffs allow -- the prisoner will also end up in that station.
Another name is the Sandwich Theorem. It is, once again, about control. A sandwich can be a messy affair: ham, cheese, lattice, etc. One won't want to touch that and instead takes control of the contents by keeping them between the two buns. He then brings the two to his mouth and the rest of the sandwich along with them!
To make conclusions about divergence to infinity, we only need to control it from one side.
Theorem (Push Out Theorem). If $a_n \ge b_n$ for all $n$ greater than some $N$, then we have: $$\begin{array}{lll} \lim_{n\to\infty} a_n =-\infty&\Longrightarrow& \lim_{n\to\infty} b_n=-\infty;\\ \lim_{n\to\infty} a_n =+\infty&\Longleftarrow& \lim_{n\to\infty} b_n=+\infty. \end{array}$$
Exercise. Prove the theorem.
Exercise. Suppose a sequence is defined recursively by $$a_{n+1}=2a_n+1\text{ with } a_0=1.$$ Does the sequence converge or diverge?
## 6 Theorems of Introductory Analysis
The theorems in this section will be used to prove new theorems. It can be skipped on the first reading.
We accept the following fundamental result without proof.
Theorem (Monotone Convergence Theorem). If a sequence is bounded and monotonic, i.e., it is either increasing, $a_n\le a_{n+1}$ for all $n$, or decreasing, $a_n\ge a_{n+1}$ for all $n$, then it is convergent.
The result is also known as the Completeness Property of Real Numbers.
Theorem (Nested Intervals Theorem). (1) A sequence of nested closed intervals has a non-empty intersection, i.e., if we have two sequences of numbers $a_n$ and $b_n$ that satisfy $$a_1\le a_2\le ... \le a_n\le ... \le b_n\le ... \le b_2\le b_1,$$ then they both converge, $$a_n\to a,\ b_n\to b,$$ and $$\bigcap_{n=1}^\infty [a_n,b_n]=[a,b].$$ (2) If, moreover, $$b_n-a_n\to 0,$$ then $$\bigcap_{n=1}^\infty [a_n,b_n]=\{a\}=\{b\}.$$
Proof. For part (1), observe that a point $x$ belongs to the intersection if and only if it satisfies: $$a_n\le x \le b_m,\ \forall n,m.$$ Meanwhile, the sequences converge by the Monotone Convergence Theorem. Therefore, $$a\le x\le b$$ by the Comparison Theorem.
For part (2), consider: $$0=\lim _{n\to \infty} (b_n-a_n)=\lim _{n\to \infty} b_n-\lim _{n\to \infty} a_n=b-a,$$ by SR. We then conclude that $a=b$. $\blacksquare$
We have indeed a “nested” sequence of intervals $$I=[a,b] \supset I_1=[a_1,b_1] \supset I_2=[a_2,b_2] \supset ...,$$ with a single point $A$ in common.
Definition. Given a set $S$ of real numbers, its upper bound is any number $M$ that satisfies: $$x\le M \text{ for any } x\text{ in } S.$$ Its lower bound is any number $m$ that satisfies: $$x\ge m \text{ for any } x\text{ in } S.$$
For $S=[0,1]$, any number $M\ge 1$ is its upper bound. However, these sets have no upper bounds: $$(-\infty,+\infty),\ [0,+\infty),\ \{0,1,2,3,...\}.$$
Definition. A set that has an upper bound is called bounded above and a set that has a lower bound is called bounded below. A set that has both upper and lower bounds is called bounded; otherwise it's unbounded.
Definition. For a set $S$, an upper bound for which there is no smaller upper bound is called a least upper bound; it is also called supremum and is denoted by $\sup S$. For a set $S$, a lower bound for which there is no larger upper bound is called a greatest lower bound; it is also called infimum and is denoted by $\inf S$.
Thus, $M=\sup S$ means that
• 1. $M$ is an upper bound of $S$, and
• 2. if $M'$ is another upper bound of $S$, then $M'\ge M$.
Now, if we have another $M'=\sup S$, then
• 1. $M'$ is an upper bound of $S$, and
• 2. if $M$ is another upper bound of $S$, then $M\ge M'$.
Therefore, $M=M'$.
Theorem. For a given set, there can be only one least upper bound.
Thus, we are justified to speak of the least upper bound.
Example. For the following sets the least upper bound is $M=3$:
• $S=\{1,2,3\}$;
• $S=[1,3]$;
• $S=(1,3)$.
The proof for the last one is as follows. Suppose $M'$ is an upper bound with $1<M'<3$. Let's choose $a=\frac{3+M'}{2}$. But $a$ belongs to $S$! Therefore, $M'$ isn't an upper bound.
What if we limit $S$ to the rational numbers only in $(1,3)$? Then $a=\frac{3+M'}{2}$ won't belong to $S$ when $M'$ is irrational. The proof fails. $\square$
Theorem (Existence of $\sup$). Any bounded above set has a least upper bound. Any bounded below set has a greatest lower bound.
Proof. The idea of the proof is to construct nested intervals with the right-end points being upper bounds. What should be the left-end points?
Given a set $S$, let
• $U$ be the set of all upper bounds of $S$;
• $L$ be the set of all lower bounds of $U$.
Since $S$ is bounded $$U\ne \emptyset.$$ Now, if $S$ is a single point, we are done. If not, we have $x,y$ in $S$ such that $x<y$. Therefore, $x$ belongs to $L$ and $$L\ne \emptyset.$$
• $a_1$ is any element of $L$ and $b_1$ is any element of $U$.
Suppose inductively that we have constructed two sequences of numbers $$a_i,\ b_i,\ i=1,2,3..., n,$$ such that:
• 1. $a_i$ is in $L$ and $b_i$ is in $U$;
• 2. $a_n\le...\le a_1\le b_1\le ...\le b_n$;
• 3. $b_i-a_i\le \frac{1}{2^{i-1}}(b_1-a_1)$.
We continue with the inductive step: let $$c=\frac{1}{2}(b_n-a_n).$$ We have two cases.
Case 1: $c$ belongs to $U$. Then choose $$a_{n+1}=a_n \text{ and } b_{n+1}=c.$$ Then, $$a_{n+1}=a_n\in L,\ b_{n+1}=c\in U.$$ Case 2: $c$ belongs to $L$. Then choose $$a_{n+1}=c \text{ and } b_{n+1}=b_n.$$ Then, $$a_{n+1}=c\in L,\ b_{n+1}=b_n\in U.$$
Furthermore, $$b_{n+1}-a_{n+1}=\frac{1}{2}(b_n-a_n)\le \frac{1}{2}\frac{1}{2^{n-1}}(b_1-a_1)=\frac{1}{2^{n}}(b_1-a_1).$$ Thus, all the conditions are satisfied, and our sequence of nested intervals has been inductively built. We apply the Nested Intervals Theorem and conclude that $$a_n\to d\leftarrow b_n.$$
Why is $c$ a least upper bound of $S$?
First, suppose $c$ is not an upper bound. Then there is $x\in S$ with $x>c$. If we choose $\varepsilon =x-c$, then from $b_n \to c$ we conclude that $b_n<x$ for all $n>N$ for some $N$. This contradicts the assumption that $b_n\in U$.
Second, suppose $c$ is not a least upper bound. Then there is an upper bound $y<c$. If we choose $\varepsilon =c-y$, then from $a_n \to c$ we conclude that $a_n>y$ for all $n>N$ for some $N$. This contradicts the assumption that $a_n\in L$. $\blacksquare$
Theorem (Intermediate Point Theorem). A subset $J$ of the reals is an interval or a point if and only if it contains all of its intermediate points; i.e., $$J\ni y_1<c< y_2\in J \ \Longrightarrow\ c\in J.$$
Proof. The “if” part is obvious. Now assume that the condition is satisfied for set $J$. Suppose also that $J$ is bounded. Then these exist by the Existence of $\sup$ theorem: $$a=\inf S,\ b=\sup J.$$ Note that these might not belong to $J$. However, if $c$ satisfies $a\le c\le b$, then there are
• $y_1\in J$ such that $a<y_1<c$, and
• $y_2\in J$ such that $c<y_2<b$.
By the property, then we have: $c\in J$. Therefore, $J$ is an interval with $a,b$ its end-points. $\blacksquare$
Exercise. Prove the theorem for the unbounded case.
Theorem (Bolzano-Weierstrass Theorem). Every bounded sequence has a convergent subsequence.
Proof. Suppose $x_n$ is such a sequence. Then, it is contained in some interval $[a,b]$. The first part of the construction is to cut consecutive intervals is half and pick the half that contains infinitely many elements of the set $\{x_n:\ n=1,2,3...\}$.
Similarly to the previous proofs, we assume that we have already constructed sequences: $$a_i,\ b_i,\ i=1,2,3..., n,$$ such that:
• 1. $[a_i,b_i]$ contains infinitely many elements of $\{x_n:\ n=1,2,3...\}$;
• 2. $a_n\le...\le a_1\le b_1\le ...\le b_n$;
• 3. $b_i-a_i\le \frac{1}{2^{i-1}}(b_1-a_1)$.
We continue with the inductive step: let $$c=\frac{1}{2}(b_n-a_n).$$ We have two cases.
Case 1: interval $[a_n,c]$ contains infinitely many elements of $\{x_n:\ n=1,2,3...\}$. Then choose $$a_{n+1}=a_n \text{ and } b_{n+1}=c.$$ Case 2: interval $[a_n,c]$ does not contain infinitely many elements of $\{x_n:\ n=1,2,3...\}$, then $[c,b_n]$ does. Then choose $$a_{n+1}=c \text{ and } b_{n+1}=b_n.$$
As before, $$b_{n+1}-a_{n+1}=\frac{1}{2}(b_n-a_n)\le \frac{1}{2}\frac{1}{2^{n-1}}(b_1-a_1)=\frac{1}{2^{n}}(b_1-a_1).$$
The intervals are constructed as desired; the intervals are zooming in on the denser and denser parts of the sequence:
Now we apply the Nested Intervals Theorem to conclude that $$a_n\to d\leftarrow b_n.$$
The second part of the construction is to choose the terms of the subsequence $y_k$ of $x_n$, as follows. We just pick as $y_{k}$ any element of the set $\{x_n:\ n=1,2,3...\}$ in $[a_k,b_k]$ that comes later in the sequence than the ones already added, i.e., $y_1,y_2,...,y_{k-1}$. This is always possible because we always have infinitely many elements left to choose from. Once the subsequence $y_k$ is constructed, we have $y_k\to d$ by the Squeeze Theorem. $\blacksquare$
## 7 Compositions
What about the limit of the new sequence? Can we say, similar to the four rules of limits, the limit of composition is the composition of the limits? Well, there is no such thing as composition of numbers...
Let's look at some examples.
Example. Sometimes the algebra is obvious. If $f$ is a linear polynomial, $$f(x)=mx+b,$$ and we have a sequence $x_n\to a$, we can use the Sum Rule and the Constant Multiple Rule to prove the following: $$\begin{array}{lll} \lim_{n\to \infty} f( x_n ) &=\lim_{n\to \infty} (m x_n +b) \\ &=m\lim_{n\to \infty} x_n +b\\ &=ma+b \\ &=f(a). \end{array}$$ $\square$
What we see is that the limit of the composition is the value of the function at the limit!
Example. Let's try $f(x)=x^2$ and a sequence that converges to $0$.
Bottom: how $x$ depends on $n$, middle: how $y$ depends on $x$, right: how $y$ depends on $n$. Can we prove what we see? An application of the Product Rule in this simple situation reveals: $$\begin{array}{lll} \lim_{n\to \infty} \big( x_n \big)^2 &=\lim_{n\to \infty} \left( x_n\cdot x_n \right)\\ &=\lim_{n\to \infty} x_n\cdot \lim_{n\to \infty}x_n \\ &=\left(\lim_{n\to \infty} x_n\right)^2, \end{array}$$ provided that limit exists. $\square$
A repeated use of PR produces a more general formula: if sequence $x_n$ converges then so does $(x_n)^p$ for any positive integer $p$, and $$\lim_{n\to\infty} \left[ (x_n)^p \right] = \left[ \lim_{n\to\infty} x_n \right]^p.$$ Combined with the Sum Rule and the Constant Multiple Rule this proves the following.
Theorem (Composition Rule for Polynomials). If sequence $x_n$ converges then so does $f(x_n)$ for any polynomial $f$, and $$\lim_{n\to\infty} f(x_n) = f\left[ \lim_{n\to\infty} x_n \right].$$
Then, we conclude that limits behave well with respect to composition with some functions. In general, new sequences are produces via compositions with functions: given a sequence $x_n$ and a function $y=f(x)$, define $$y_n=f(x_n).$$
But what about other functions: $$f(x)=\sqrt{x},\ g(x)=\sin x,\ h(x)=e^x?$$
Example. This time we choose a sequence that approaches $0$ from both sides: $$x_n=(-1)^n\frac{1}{n^.8} \text{ and } f(x)=-\sin 5x.$$
We see the same pattern! $\square$
Example. What if we choose $$x_n=\frac{1}{n} \text{ and } f(x)=\frac{1}{x}?$$ Then, obviously, we have $$y_n=\frac{1}{1/n}=n\to\infty!$$
$\square$
In Chapter 6, we will use this construction to study the limits of functions rather than those of sequences. A few examples of that are presented in the next section.
## 8 Famous limits
In this section, we will establish several important facts that will be used throughout the book.
First, trigonometry. The graph of $y=\sin x$ almost merges with the line $y=x$ around $0$. Moreover, plotting the points $(1/n,\sin 1/n)$ reveals a straight line with slope $1$:
Let's compare the two algebraically.
Theorem. $$\lim_{n\to \infty} \frac{\sin x_n}{x_n} =1,$$ for any sequence $x_n\to 0$.
Proof. The conclusion follows from the trigonometry fact: $$\cos x < \frac{\sin x}{x} < 1,$$ and the Squeeze Theorem. $\blacksquare$
The graph of $y=\cos x$ almost merges with the line $y=1$ when close to the $y$-axis. Moreover, plotting the points $(1/n,1-\cos 1/n)$ shows that the slope converges to $0$:
Let's compare the two algebraically.
Corollary. $$\lim_{n\to \infty} \frac{1 - \cos x_n}{x_n} = 0,$$ for any sequence $x_n\to 0$.
Exercise. Prove the theorem.
Corollary. $$\lim_{n\to \infty} \frac{\tan x_n}{x_n} =1,$$ for any sequence $x_n\to 0$.
Proof. It follows from the above theorem, the fact that $\cos x_n\to 1$ for any sequence $x_n\to 0$ and QR. $\blacksquare$
This is a confirmation:
Second, the exponents.
Example (compounded interest). Suppose we have money in the bank at APR $10\%$ compounded annually. Then after a year, given $\$ 1,000initial deposit, you have \begin{aligned} 1000 + 1000\cdot 0.10 &= 1000(1 + 0.1) \\ &= 1000 \cdot 1.1. \end{aligned} Same every year. Aftert$years, it's$1000\cdot1.1^{t}$. What if it is compounded semi-annually, with the same APR? After$\frac{1}{2}$year,$1000\cdot 0.05$, or total $$1000 + 1000\cdot 0.05 = 1000\cdot 1.05;$$ after another$\frac{1}{2}$year, $$\left(1000\cdot 1.05\right)\cdot 1.05 = 1000 \cdot 1.05^{2}.$$ After$t$years, $$1000\cdot (1.05^{2})^{t} = 1000\cdot 1.05^{2t}.$$ Note that we are getting more money:$1.05^{2} = 1.1025 > 1.1$! Try compound quarterly, $$1000\cdot 1.025^{4t}.$$ If compounded$n$times, then $$1000 \cdot \left(1 + \frac{1}{n}\right)^{nt},$$ where$\frac{1}{n}$is the interest in one period. Generally, for APR$r$(given as a decimal) and for the initial deposit$A_{0}$, after$t$years, the current amount is $$A(t) = A_{0}\left(1 + \frac{r}{n} \right)^{nt},$$ if compounded$n$times per year. What if we compounded more and more, will we be paid unlimited amounts? No.$\square$Theorem (Continuous compounding). The limit below exists: $$\lim_{n\to \infty} \left( 1+\frac{1}{n} \right)^n .$$ Proof. First, we show that the sequence $$a_n=\left(1+\frac{1}{n}\right)^{n},$$ is increasing. We have: $$\begin{array}{lll} \dfrac{a_{n+1}}{a_n}&=\dfrac{\left(1+\tfrac{1}{n+1}\right)^{n+1}}{\left(1+\tfrac{1}{n}\right)^n}=\dfrac{\left(\frac{n+2}{n+1}\right)^{n+1}}{\left(\frac{n+1}{n}\right)^n}\\ &=\left(\dfrac{n+2}{n+1}\right)^{n+1}\left(\dfrac{n}{n+1}\right)^{n+1}\left(\dfrac{n+1}{n}\right)^1\\ &=\left(\dfrac{n^2+2n+1-1}{n^2+2n+1}\right)^{n+1}\dfrac{n+1}{n}\\ &=\left(1-\dfrac{1}{(n+1)^2}\right)^{n+1}\dfrac{n+1}{n}. \end{array}$$ We use the Bernoulli Inequality: $$(1+a)^m > 1+ma,$$ for any$a$. We just choose$a=\tfrac{-1}{(n+1)^2}$and$m=n+1$. Then $$\begin{array}{lll} \dfrac{a_{n+1}}{a_n} & > \left(1-\dfrac{1}{n+1}\right)\dfrac{n+1}{n}\\ &=\dfrac{n}{n+1}\dfrac{n+1}{n}\\ &=1. \end{array}$$ In a similar fashion we show that the sequence $$b_n=\left(1+\frac{1}{n}\right)^{n+1},$$ is decreasing. Since$a_n<b_n$, we conclude that former sequence is both increasing and bounded. Therefore, it converges by the Monotone Convergence Theorem.$\blacksquare$We denote this limit by$e$, $$e=\lim_{n\to \infty} \left( 1+\frac{1}{n} \right)^n .$$ It is also known as the “Euler number”. Example. We continue with the example... What if the interest is compounded$n$times and$n \to \infty? Then we have: \begin{aligned} \lim_{n \to \infty} A(t) & = \lim_{n \to \infty} A_{0} \left( 1 + \frac{r}{n} \right)^{nt} \\ &= A_{0} \lim_{n \to \infty} \left( 1 + \frac{r}{n} \right)^{nt} \text{ ... by CMR} \\ & = A_{0} \left( \lim_{n \to \infty} \left(1 + \frac{r}{n}\right)^{n}\right)^{t} \\ & = A_{0} (e^{r})^{t}. \end{aligned} Thus, with APR ofr$and an initial deposit$A_{0}$, after$t$years you have: $$A(t) = A_{0} e^{rt}.$$ We say that the interest is compounded continuously. Suppose APR is$10\%$,$A_{0} = 1000$,$t = 1$. Then, $$A(1)=1000\cdot e^{1.1} = 1000\cdot e^{0.1} \approx \1,105,$$ interest$ \$1,105 > \$100 $(annual). How long does it take to triple your money with APR=$5\%$, compounded continuously? Set$A_{0} = 1$and solve for$t$: $$\begin{array}{rll} 3 & =& 1\cdot e^{0.05t} &\Longrightarrow\\ \ln 3 &=& 0.05t &\Longrightarrow\\ t &=& \frac{\ln 3}{0.05} &\approx 22 \text{ years.} \end{array}$$$\square$Note that these results suggest a way of understanding the limit of a function at a point to be discussed in Chapter 6. Exercise. Give formulas for the following sequences: (a)$a_n\to 0$as$n\to \infty$but it's not increasing or decreasing; (b)$b_n\to +\infty$as$n\to \infty$but it's not increasing. Applying the Binomial Theorem (Chapter 1) to the expression for$e$yields: $$\left(1 + \frac{1}{n}\right)^n = 1 + {n \choose 1}\frac{1}{n} + {n \choose 2}\frac{1}{n^2} + {n \choose 3}\frac{1}{n^3} + \cdots + {n \choose n}\frac{1}{n^n}.$$ The$k$th term of this sum is $${n \choose k}\frac{1}{n^k} = \frac{1}{k!}\cdot\frac{n(n-1)(n-2)... (n-k+1)}{n^k}= \frac{1}{k!}\cdot\frac{n}{n}\frac{n-1}{n}\frac{n-2}{n}... \frac{n-k+1}{n}.$$ As$n\to\infty\$, the fraction approaches one, and therefore $$\lim_{n\to\infty} {n \choose k}\frac{1}{n^k} = \frac{1}{k!}.$$ Then: $$e=\sum_{k=0}^\infty\frac{1}{k!}=\frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots.$$ The convergence follows from the Monotone Convergence Theorem. |
New Zealand
Level 7 - NCEA Level 2
# Geometrical Problems with Coordinates
Lesson
We've looked at how to plot straight lines on the number plane. Now we are going to look at how to plot a series of coordinates to create geometric shapes.
Here are some helpful formulae and properties that will help us solve these kinds of problems:
• Distance formula: $d=\sqrt{\left(x_2-x_1\right)^2+\left(y_2-y_1\right)^2}$d=(x2x1)2+(y2y1)2
• Gradient formula: $m=\frac{y_2-y_1}{x_2-x_1}$m=y2y1x2x1
• Mid-point formula: $\left(\frac{x_1+x_2}{2},\frac{y_1+y_2}{2}\right)$(x1+x22,y1+y22)
• Parallel lines have equal gradient: $m_1=m_2$m1=m2
• The product of the gradients of perpendicular lines is $-1$1: $m_1m_2=-1$m1m2=1
Remember!
Different shapes have different properties.
These can be used to help plot and identify features of shapes on a number plane, so make sure you're familiar with the properties of different triangles and quadrilaterals.
#### Worked Examples
##### Question 1
$A$A$\left(-2,-1\right)$(2,1), $B$B$\left(0,0\right)$(0,0) and $C$C$\left(1,k\right)$(1,k) are the vertices of a right-angled triangle with right angle at $B$B.
1. Find the value of $k$k.
2. Find the area of the triangle.
##### Question 2
Given Line P: $y=-6x-4$y=6x4, Line Q: $y=\frac{x}{6}+6$y=x6+6, Line R: $y=-6x-1$y=6x1 and Line S: $y=\frac{x}{6}+1$y=x6+1.
1. Complete the following:
$m$mP = $\editable{}$
$m$mQ = $\editable{}$
$m$mP x $m$mQ = $\editable{}$
2. Complete the following:
$m$mQ = $\editable{}$
$m$mR = $\editable{}$
$m$mQ x $m$mR = $\editable{}$
3. Complete the following:
$m$mR = $\editable{}$
$m$mS = $\editable{}$
$m$mR x $m$mS = $\editable{}$
4. Complete the following:
$m$mS = $\editable{}$
$m$mP = $\editable{}$
$m$mS x $m$mP = $\editable{}$
5. What type of quadrilateral is formed by lines: P, Q, R, and S?
Trapezoid
A
Rectangle
B
Rhombus
C
Parallelogram
D
Trapezoid
A
Rectangle
B
Rhombus
C
Parallelogram
D
### Outcomes
#### M7-1
Apply co-ordinate geometry techniques to points and lines
#### M7-7
Form and use linear, quadratic, and simple trigonometric equations
#### 91256
Apply co-ordinate geometry methods in solving problems
#### 91261
Apply algebraic methods in solving problems |
The above video goes away if you are a member and logged in, so log in now!
Would you like to get all the newest Gaming News fromQJ.NET in your email each day? Want to learn more about the team who brings you the QJ news?
## [RELEASE] GripShift Hello World + Sparta SDK
This is a discussion on [RELEASE] GripShift Hello World + Sparta SDK within the PSP Development Forum forums, part of the PSP Development, Hacks, and Homebrew category; Ok, so this is the Hello World version of the GripShift exploit, complete with a binary loader and SDK to ...
Tweet
1. ## [RELEASE] GripShift Hello World + Sparta SDK
Ok, so this is the Hello World version of the GripShift exploit, complete with a binary loader and SDK to make your own binaries.
The readme says it all:
Code:
Hello World on PSP FW 1.52-5.02
The Spartaaaaaaaaaaaaaaaaaaaa!!! Exploit
by MaTiAz & FreePlay
Instructions
------------
1. Copy the contents of MS_ROOT into the root of your memory stick.
(This will overwrite the first GripShift savegame slot).
2. Launch the US version of GripShift.
3. Load up the game (if it doesn't autoload).
4. See your PSP run unsigned code. :)
It'll autoexit after some time. You can use the home button to exit too if
you've seen enough.
FAQ
---
Q: Will this allow downgrading?
A: No, because this is an usermode exploit and functions required to downgrade are
only available in kernel mode.
Q: Why the name?
A: Because the original exploit was found by overwriting the player name with
"this is spartaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa".
Q: Can/Will Sony block this?
A: Yes.
Q: I wanna make homebrew using the exploit. How?
A: Get FreePlay's GS SDK: http://f6y.ath.cx/pspdev/sparta_sdk.zip
It has some constraints though, check the readme.
The Hello World was written with it. :)
Credits
-------
Exploit and binary loader: MaTiAz
SDK: FreePlay
Greets go to Dark_AleX, Mathieulh, jas0nuk, Hellcat, etc. etc. etc, you know.
2. w00p?
3. Great work MaTiAz. I would really be interested to know how you got this working.
4. Two days straight of working on it tirelessly, mostly. Stopping occasionally to eat.
Hooah.
5. sweet guys, awesome work
6. who, oh
just wondering
this should aid DA into finding the necessary changes to his pandora
7. Originally Posted by emcp
who, oh
just wondering
this should aid DA into finding the necessary changes to his pandora
Uh, this is the farthest thing from pandora.
I'm sure eventually we'll see a downgrader. 2.8 was originally just user-mode too.
8. Originally Posted by emcp
who, oh
just wondering
this should aid DA into finding the necessary changes to his pandora
I doubt it. It's user mode only so far, plus, I'm sure Alex has his own exploit, probably kernel mode.
9. Originally Posted by Davee
I doubt it. It's user mode only so far, plus, I'm sure Alex has his own exploit, probably kernel mode.
yeah just noticed, on the main page it says usermode, should have also looked at the FAQ too
ah well, it gives some encouragement to persevere
10. this brings back the good ole days with psp hacking haven't seen an exploit in soon long :)
11. You could probably hook functions via assembly (I've done in it several usermode games, can't deny a subroutine access to it). I JUST ordered gripshift, so I will gander at this bad boy, it'd be neat if anything worthwhile came out of it.
EDIT, now that I think about, what I think could be done is hooking of a kernel thread to load a kernel module(by hijacking a jr ra off some BS kernel function and the arguments) which then does sorta a pause-game type thing, which then once you have your kernel module you can do whatever you might desire.
12. Has anyone tried running these exploits on the European version of the game? Gotta find out before I go searching for it :)
13. i tried. the sdk and savegames released only work on usa version. probably can be ported to european versions though
14. I've always been fascinated by Exploits!
Especially White Text Black Background + "Hello World"
15. Originally Posted by Sythun
Has anyone tried running these exploits on the European version of the game? Gotta find out before I go searching for it :)
Well, there's a bit of a problem with the binary loader on the european version, seems like sceIoOpen doesn't want to work. We'll be working on that, since the exploit does exist on the european version too.
16. w00t
17. I actually made my own Hello World (based on SG57's Snowfield demo) but MaTiAz made the exploit so his Hello World takes precedence
Here's the other one:
18. WoW, I changed a print
Thanks for the hard work everyone.
19. it should be easy to port some tiff brew for people to use while waiting for a eloader
20. eloader? I think this time they'll be wanting to head straight to cfw, which essentially if you can run a kernel-mode prx, I'm pretty sure you can.
21. Originally Posted by NoEffex
eloader? I think this time they'll be wanting to head straight to cfw, which essentially if you can run a kernel-mode prx, I'm pretty sure you can.
one minor problem: you can't.
so, they'll probably make a loader first.
or am I missing something?
22. Dark_AleX definitely has a kernel exploit to do all that.
How the heck did he dump the PSP-3000 decrypt tables then?
Except, he doesn't want to release it, yet.
:)
Porting the old libtiff homebrew could have some limits, if the GP SDK doesn't have the necessary functions, you are going to have to find them in the game itself. Only the functions imported by the game are allowed to be used by the exploit. Correct me if I am wrong.
@TurtlesPwn,
If they are able to get substantial kernel access, direct CFW or downgrading is possible. It happened in the Illuminati exploit. Maybe not an eLoader first but a HEN to allow those stuff. :)
-Light_AleX
23. Originally Posted by Light_AleX
@TurtlesPwn,
If they are able to get substantial kernel access, direct CFW or downgrading is possible. It happened in the Illuminati exploit. Maybe not an eLoader first but a HEN to allow those stuff. :)
-Light_AleX
Well you don't say do you? REALLY? WOW!
Thanks, captain obvious. What I was saying was that as of right now, there is no kernel access at all.
24. Originally Posted by TurtlesPwn
Well you don't say do you? REALLY? WOW!
Thanks, captain obvious. What I was saying was that as of right now, there is no kernel access at all.
If you use assembly you can store to any partition on the ram, thus hooking a kernel function to redirect a kernel thread to do the dirty work for you. It's not some mystical magical area where the laws of the MIPS assembly language are bent and torn. I'm talking on an assembly level. I think you misunderstood me.
25. I would think if it was that easy they already would've done it. The PSP has a good bit of RAM, finding the right spot would take a while and probably not be consistent across various PSPs.
26. Originally Posted by TurtlesPwn
I would think if it was that easy they already would've done it. The PSP has a good bit of RAM, finding the right spot would take a while and probably not be consistent across various PSPs.
http://pastebin.com/m597d6b73
lol, just scan for any jr ra you want, it'll end up looping back around eventually if you code it right.
If that is no avail, you could even make one that records all the addresses of the jr ra's.
27. Originally Posted by NoEffex
If you use assembly you can store to any partition on the ram, thus hooking a kernel function to redirect a kernel thread to do the dirty work for you. It's not some mystical magical area where the laws of the MIPS assembly language are bent and torn. I'm talking on an assembly level. I think you misunderstood me.
Nope, you can't arbitrary write to any RAM address with a user mode thread. A user mode thread can only access user partition memory. Another kernel exploit will need to be found to allow kernel mode access. Since we now have a user mode access, turning it into a kernel mode xploit is only a matter of time. Thanks to buggy Sony PSP APIs.
SilverSpring, a friend of DA, has already said Dark_AleX has his own user mode & kernel mode exploit. This means the GripShift xploit won't be helpful to DA in aiding his work. I do believe DA has already made his M33 CFW running on PSP3000 by using his own user/kernel mode exploit. He couldn't release it, 'cos he doesn't wanna release his user/kernel mode exploit. In fact, this is the right thing to do. If he releases his own exploit, Sony will patch it right away. On the other hand, he may release the M33 CFW for PSP3000 using GripShift exploit, since this one is already known by Sony. Just my 2 cents worth.
28. don't get me wrong but arn't we forgetting about the psp-2000's that have the new un-pandorable motherboards(unless i've missed something which allows them to be downgraded)?, you guys keep saying DA has already found an exploit for PSP3k's, but gripshift can possible lead to an downgrader for unpandorable 2k mb's
29. Originally Posted by TheKing
Nope, you can't arbitrary write to any RAM address with a user mode thread. A user mode thread can only access user partition memory. Another kernel exploit will need to be found to allow kernel mode access.
I thought this as well for a reason why getting kernel mode is not as easy as noeffex thinks but I don't have much knowledge of the actual system workings of a PSP. Thanks for confirming.
30. Wait im not really great with all this stuff so dont flame me to hard but if a kernal mode exploit was found couldent you then dump the pre/ipl and then get a working pandora?
Page 1 of 5 1 2 3 4 5 Last
#### Posting Permissions
• You may not post new threads
• You may not post replies
• You may not post attachments
• You may not edit your posts
•
All times are GMT -8. The time now is 03:07 PM.
Use of this Web site constitutes acceptance of the TERMS & CONDITIONS and PRIVACY POLICY |
# Tag Info
## Hot answers tagged observation
41
Yes, I've done it myself in my backyard in suburban Houston. During a spacewalk in ISS increment 50, an MMOD shield intended for the axial port of Node 3 was lost. It's visible in this video floating below station. It ended up reentering about six months later. A few weeks after it had been lost, I noticed that it would be visible from my house, with a ...
26
Hubble can in fact observe the Moon, and has done so. Here's a picture of the Apollo 17 site (The upper right is from Apollo 17 mission itself). The x shows where the actual site is. You can also see more Hubble pictures of the Moon at this page.
13
A flag would be about five hundred micro arcseconds. (about 1 meter, at around 375 Mm) The Event Horizon Telescope has a resolution of about 20 micro arcseconds. Therefore, EHT could resolve a flag on the surface of the moon if it were a radio source. However, to view it optically would require around a 350 meter aperture. The largest 'single' telescope ...
12
The category of "observation satellites" is broad, because there are many types of observation (different wavelengths that reveal different characteristics of the observed planet). Because you're referring to 'high-resolution images' I'm going to assume you want visible-light photography. Yes, this is available for many bodies, although most planets have ...
10
I am not aware of an optical telescope capable of showing proof from earth. Part of the reason is the flags are pretty small and it's a very long distance. However, you can see the flags or what is left of them and the landing sites using images from the Lunar Reconnaissance Orbiter (LRO). The LRO was able to use it's camera to document all of the Apollo ...
10
I'll calculate the visibility of a diffuse 50% gray sphere with a 6 meter diameter. Averaging over all Sun-object-observer configurations that might be similar to a 4 x 11 meter shiny cylinder since both will scatter light over more than a hemisphere. Equations are from this answer and from sources linked therein: M_{Abs} = 5 \left(\log_{10}(1329) -\...
7
It turns out it might be very common for astronauts on the ISS (or previously the MIR) to spot satellites. This is the distribution of the number of satellites in LEO for different altitudes As you can see the ISS, with its $\sim 400 \; km$ altitude, is quite safe and alone below the huge carcass of LEO satellites moving around $800 \; km$ (it is also true ...
5
According to Heavens Above, it's the upper (2nd) stage of a Zenit-2 launcher, as you guessed. SL-16 was the identifier used by Western intelligence agencies during the cold war for the Zenit. They're cylindrical, about 4m diameter and 12m in length. Here's an artist's rendering of such a stage in orbit: It's hard to predict the orbital lifetime of objects ...
5
I believe COS (the sensitive UV instrument) could be damaged if it were pointed at the illuminated moon. (I wasn't able to find any documents online that confirm this though) The instruments don't point in the same direction. So it's possible to orient the telescope so that one is not pointed at the moon while others are. Lunar observations are difficult ...
4
I massaged some raw numbers from https://nssdc.gsfc.nasa.gov/planetary/factsheet/ For each body-Sun pair the velocity of the Sun is the velocity of the planet times the ratio of the masses since they orbit around their center of mass. Eclipse depth is just the ratio of diameters. Jupiter results in the largest velocity by far, thought the amplitude of the ...
4
In general, it is impossible to know for sure, but we can do some detective work. My two go to web sites: https://www.space-track.org -- Catalog of all space objects kept by USAF/JSPOC https://planet4589.org -- Jonathan McDowell's amazing catalog of all things space AS you point out 2010-028* is the international designator for all the objects related to ...
3
I’ve seen space debris with the naked eye, and my eyesight is far from exceptional. During a supply rocket launch to the ISS, the fairings that break off the main rocket are easily visible if the conditions are right.
2
About 6 hours after launch, the Soyuz spacecraft can be observed near the ISS. About three minutes later, Soyuz will fly after the ISS and will be visible in the same orbit. Therefore use https://www.n2yo.com/ P.S. The duration of the trip to the ISS varies. Until 2012, astronauts always spent about two days in the Soyuz spacecraft before docking to the ...
2
As seen in the photos below from https://terra.nasa.gov/about Terra is reddish, so the answer to Why is Terra reddish? is exactly what you have suspected; because it is wrapped in a thermal protection film that is reddish in color. However, the answer to a more interesting question: Why am I surprised that Terra appears reddish? would have several ...
1
The pixel size of the HST's Wide Field Camera 3 or WFC-3 is 164/2048 = 0.08 arcsec. The night side of the Moon is illuminated by Earthshine and the brightness depends on the phase angle of the moon (and the weather on Earth (clouds, wind-induced waves on the ocean) and the time (ocean versus land) but we can find some averages. Let's use +15 magnitude per ...
1
In theory, a smaller telescope can see a dim object by just looking longer than a larger one. There are issues of noise and stability that limit this, but for small factors it works. So their plan seems to be to break up a large-telescope observing plan into plans for longer observations with multiple smaller telescopes. This (somehow) saves lots of cost. ...
Only top voted, non community-wiki answers of a minimum length are eligible |
+0
# Let $f(x)$ be the polynomial $f(x)=x^7-3x^3+2.$If $g(x) = f(x + 1)$, what is the sum of the coefficients of $g(x)$?
0
120
1
Let $f(x)$ be the polynomial $f(x)=x^7-3x^3+2.$If $g(x) = f(x + 1)$, what is the sum of the coefficients of $g(x)$?
Nov 22, 2018
#1
+4795
+1
$$g(x) = f(x+1) = (x+1)^7 - 3(x+1)^3 + 2$$
Now this is some polynomial
$$g(x) = \sum \limits_{k=0}^7~c_k x^k \\ \text{and we can find the sum of the coefficients by simply finding }g(1)$$
$$g(1) = 2^7 - 3(2^3)+2 = 128 - 24+2 = 106$$
.
Nov 22, 2018
#1
+4795
+1
$$g(x) = f(x+1) = (x+1)^7 - 3(x+1)^3 + 2$$
Now this is some polynomial
$$g(x) = \sum \limits_{k=0}^7~c_k x^k \\ \text{and we can find the sum of the coefficients by simply finding }g(1)$$
$$g(1) = 2^7 - 3(2^3)+2 = 128 - 24+2 = 106$$
Rom Nov 22, 2018 |
# Anomaly Detection Using Autoencoder and Wavelets
This example shows how wavelet features can be used to detect arc faults in a DC system. For the safe operation of DC distribution systems, it is important to identify arc faults and prefault signals that can be caused by deterioration of wire insulation due to aging, abrasion, or rodent bites. These arc faults can result in shock, fires, and system failures in the microgrid. Unlike the fault signals in AC distribution systems, these prefault arc flash signals are difficult to identify as they do not generate significant power to trigger the circuit breakers. As a result, these signals can exist in the system for hours without being detected.
Arc fault detection using the wavelet transform was studied in [1]. This example follows the feature extraction procedure detailed in that work which consists of filtering the load signals using the Daubechies `db3` wavelet followed by normalization. Further, an autoencoder trained with signal features under normal conditions is used to detect arc faults in load signals. The DC arc model used to generate the fault signals and the pretrained network used to detect the arc faults are provided in the example folder. As training the network for arc detection of larger signals can take significantly long simulation time, in this example we only report the detection results.
### Training and Testing Setup
The autoencoder is trained using the load signal generated by the Simulink® model `DCNoArc` under normal conditions, i.e., without arc faults. The model `DCNoArc` was built using components from the Simscape™ Electrical™ Specialized Power Systems library.
Figure 1: `DCNoArc` model for generating load signal under normal conditions.
The voltage sources are modeled using the following parameters:
• AC Harmonic Source 1: 10 V AC voltage and 120 Hz frequency
• AC Harmonic Source 2: 20 V AC voltage and 2000 Hz frequency
• DC voltage source: 1000 V
In the model `DCArcModelFinal `we add arc fault generation in every load branch. The model uses the Cassie arc model for synthetic arc fault generation. The arc model works like an ideal conductance until the arc ignites at the contact separation time.
Figure 2: `DCArcModelFinal` model for generating load signal with arc fault.
The Cassie arc model is one of the most studied black box models for generating synthetic arcs. The model is described by the following differential equation:
`$\frac{dg}{dt}=\frac{g}{\tau }\left(\frac{{u}^{2}}{{U}_{c}^{2}}-1\right)$`
where
• g is the conductance of the arc in siemens
• $\tau$ is the arc time constant in seconds
• u is the voltage across the arc in volts
• ${U}_{c}$ is the constant arc voltage in volts
The Cassie arc models were implemented in Simulink® using the following parameter values:
• Initial conductance g(0) is 1e4 siemens
• Constant arc voltage ${U}_{c}$ = 100 V
• Arc time constant is 1.2e-6 seconds
The contact separation times for the arc models are chosen at random. All the parameters have been loaded in the `PreLoadFcn` callbacks in the Model Properties of the Model Settings tab.
At the contact separation time, the voltage across the mathematical Cassie arc model drops by some level and stays at that value during the remaining simulation period. However, in real power system branches the arc is sustained for a small time interval. To ensure that the voltage across the Cassie arc model emulates the behavior of real life arc faults, we use a switch across each model to limit the arc time. We use the `DCArcModelFinal`model to generate a faulty load signal to test the autoencoder.
To detect arc faults in all the load branches simultaneously the sensing system measures the load voltage at each branch. The sensing system combines the load voltages and sends the resulting signal to the feature generation block. The generated features are then used to detect the arc faults in all the branches using a deep network.
### Anomaly Detection with Autoencoder
Autoencoders are used to detect anomalies in a signal. The autoencoder is trained on data without anomalies. As a result, the learned network weights minimize the reconstruction error for load signals without arc faults. The statistics of the reconstruction error for the training data can be used to select the threshold in the anomaly detection block that determines the detection performance of the autoencoder. The detection block declares the presence of an anomaly when it encounters a reconstruction error above threshold. In this example, we used root-mean-square error (RMSE) as the reconstruction error metric.
For this example, we trained two autoencoders using the load signal under normal conditions without arc fault. One autoencoder was trained using the raw load signal as training data. This encoder uses the raw faulty load signal to detect arc faults. The second autoencoder was trained using wavelet features. Arc fault detection is subsequently done on wavelet features as opposed to the raw data. For training and testing the network, we assume that the load consists of 10 parallel resistive branches with randomly chosen resistance values. For arc fault signal generation, we add a Cassie arc model in every load branch. The contact separation times of the models are such that they are triggered randomly throughout the simulation period. Just like in a real-time DC system, the load signals from both normal and faulty conditions have added white noise.
#### Feature Extraction
The wavelet-based autoencoder was trained and tested on signals filtered using the discrete wavelet transform (DWT). Following [1], the Daubechies `db3` wavelet was used.
The following figures show the wavelet-filtered load signals under normal and faulty conditions. The wavelet-filtered faulty signal captures the variation due to arc faults. For training and testing purposes, the wavelet-filtered signals are segmented into 100-sample frames.
Figure 3: Raw load signal and wavelet-filtered signal under normal conditions.
Figure 4: Raw load signal and wavelet-filtered signal under faulty conditions.
#### Model Training
The autoencoder is trained using wavelet-filtered features from the load signal under normal conditions. For the training stage you have two options:
1. Train your own autoencoder and load the network into the prediction block of the `DCArcModelFinal` model.
2. Use the `DCArcModelFinal` model that has been preloaded with the pretrained model available in the `netData.mat` file in the example folder.
To train your own autoencoder you can use the following steps.
• First, generate the load signal under normal operating conditions using the `DCNoArc` model. Load, open, and run the model using the following commands. Extract the load signal from the simulation output.
```load_system('DCNoArc.slx'); open_system('DCNoArc.slx'); out = sim('DCNoArc.slx'); % extract normal load signal from the simulation output xn = out.xn; ```
• Next, extract the wavelet-filtered features from the load signal. You use the features as the input to the autoencoder.
```% training data: load voltage under normal conditions featureDimension = 100; xn = sigresize(xn,featureDimension); % Obtain training features trnd4 = getDet(xn); trainData = getFeature(trnd4, featureDimension);```
The pretrained autoencoder was trained using the following network layers and training options.
```% Create network layers layers = [ sequenceInputLayer(1,Name='in') bilstmLayer(32,Name='bilstm1') reluLayer(Name='relu1') bilstmLayer(16,Name='bilstm2') reluLayer(Name='relu2') bilstmLayer(32,Name='bilstm3') reluLayer(Name='relu3') fullyConnectedLayer(1,Name='fc') regressionLayer(Name='out') ]; % Set options options = trainingOptions('adam', ... MaxEpochs=20, ... MiniBatchSize=16, ... Plots='training-progress');```
The training steps takes several minutes. If you want to train the network, select trainingFlag = “Train network”. Then, you can load the trained network into the `Predict` block from Deep Learning Toolbox™ used in the `DCArcModelFinal` model.
```trainingFlag = "Use pretrained network" if trainingFlag == "Train network" % training network net = trainNetwork(trainData,trainData,layers,options); save('network.mat','net'); end ```
If you want to skip the training steps, you can run the `DCArcModelFinal` model loaded with the pretrained network in `netData`.mat to detect arc faults in load signals.
Figure 5: Training progress for the autoencoder.
The figure shows the histogram for the reconstruction error produced by the autoencoder when the input is the training data. You can use the statistics for the reconstruction error to choose the detection threshold. For instance, choose the detection threshold to be three times the standard deviation of the reconstruction error.
Figure 6: Histogram for the reconstruction error produced by the autoencoder when the input is the training data.
#### Model for Anomaly Detection Using Autoencoder
The `DCArcModelFinal` model is used for real-time detection of the arc fault in a DC load signal. Before running the model, you must specify the simulation stop time in seconds in the workspace variable `t`.
Figure 7: `DCArcModelFinal` for arc fault detection.
The first block generates a noisy DC load signal with arc fault in continuous time. The load voltage is then converted into a discrete-time signal sampled at 20 kHz by the `Rate transition` block in DSP System Toolbox™. The discrete time signal is then buffered to the `LWTFeatureGen` block that obtains the desired level 4 detail projection after preprocessing. The detail projection is then segmented in 100 sample frames that are the test features for the `Predict` block. The `Predict` block has been preloaded with the network pretrained using the load signal under normal conditions. The anomaly detection block then calculates the root-mean-square error (RMSE) for each frame and declares the presence of an arc fault if the error is above some predefined threshold.
This plot shows the regions predicted by the network when the wavelet-filtered features are used. The autoencoder was able to detect all 10 arc fault regions correctly. In other words, we obtained a 100% probability of detection in this case.
Figure 8: Detection performance for the autoencoder using wavelet-filtered features.
This plot shows the anomaly detection performance of the raw data trained autoencoder (pretrained network included in `netDataRaw`.mat). When we used raw data for anomaly detection, the encoder was able to identify seven out of 10 regions correctly.
Figure 9: Detection performance for the autoencoder using raw load signal.
We generated a 50 second long anomalous signal with 40 arc fault regions (this data is not included with the example). When tested with the autoencoder trained with raw signals, the arc regions were detected with a 57.85% probability of detection. In contrast, the autoencoder trained with the wavelet-filtered signals was able to detect the arc fault regions with a 97.52% probability of detection.
We also investigated the impact of the load signal normalization on the fault detection performance of the autoencoder. To this end, we modified the sequence input layer of the autoencoder model such that the input data is normalized when it is forward propagated through the input layer. We chose the ‘zscore’ normalization for this purpose. The modified autoencoder layers are:
```layers = [ sequenceInputLayer(1,Name='in',Normalization='zscore') bilstmLayer(32,Name='bilstm1') reluLayer(Name='relu1') bilstmLayer(16,Name='bilstm2') reluLayer(Name='relu2') bilstmLayer(32,Name='bilstm3') reluLayer(Name='relu3') fullyConnectedLayer(1,Name='fc') regressionLayer(Name='out') ]; ```
Similar to the previous experimental setup, we trained one autoencoder with raw data and another autoencoder with wavelet-filtered load signal under normal conditions. Then, we monitored the fault detection performance for both autoencoders. We ran the simulation for 5 minutes. The faulty load signal included 50 arc faults occurring at random time instances. The autoencoder trained with raw data achieved a detection probability of 80%. In contrast, the autoencoder trained with the wavelet-filtered signals was able to detect the arc fault regions with a 96% probability of detection.
### Summary
In this example, we demonstrated how autoencoders can be used to identify arc faults in DC systems. Both the raw and wavelet filtered load signals under normal conditions can be used as features to train the autoencoders. These anomaly detection mechanisms can be used to detect arc faults in a timely manner and thus protect a DC system from damages caused by the faults.
### References
[1] Wang, Zhan, and Robert S. Balog. “Arc Fault and Flash Signal Analysis in DC Distribution Systems Using Wavelet Transformation.” IEEE Transactions on Smart Grid 6, no. 4 (July 2015): 1955–63. `https://doi.org/10.1109/TSG.2015.2407868`
### Helper Functions
`getDet` - this function obtains the wavelet-filtered normal load signal and normalizes them.
```function d4 = getDet(x) % This function is only intended to support examples in the Wavelet % Toolbox. It may be changed or removed in a future release. LS = liftingScheme(Wavelet='db3'); [ca4,cd4]= lwt(x,Level=4,LiftingScheme=LS); D4 = lwtcoef(ca4,cd4,LiftingScheme=LS,OutputType="projection",... Type="detail"); d4 = normalize(D4); end```
`getFeature` - this function segments the wavelet-filtered into features of the size featureDimension.
```function feature = getFeature(x, sz) % This function is only intended to support examples in the Wavelet % Toolbox. It may be changed or removed in a future release. n = floor(length(x)/sz); feature = cell(n,1); for ii = 1:n c1 = 1+((ii-1)*sz); c2 = sz+((ii-1)*sz); ind = c1:c2; feature{ii} = transpose(x(ind,:)); end end```
sigresize - this function removes the transient part of the load signal.
```function xn = sigresize(x,sz) % This function is only intended to support examples in the Wavelet % Toolbox. It may be changed or removed in a future release. n = floor(length(x)/sz); lf = n*sz; xn = zeros(lf,1); xn(1:lf) = x(1:lf); end``` |
# Confused about results from placebo diff-in-diff
I construct the simple placebo sample
I then construct a dummy for whether the year is after the placebo treatment year, 2012. I interact this dummy with the treatment dummy to construct the diff-in-diff variable, did.
Since the treatment and control groups have perfectly parallel trends across all periods, the regression of dependent_var on did should produce a coefficient of 0. Yet in Stata, the command "reg dependent_var did" gives me a coefficient of 2 with a p-value of 0.05. Results remain significant even with robust standard errors and year fixed effects.
What is going on? Am I interpreting the diff-in-diff coefficient incorrectly?
• First, it could be the case that you are detecting a difference in trend in years when the intervention is not in effect. Second, I wonder if your model is specified properly. Are you regressing your outcome on a single indicator for treatment? Typically, you should include two main effects, and their interaction (i.e., DD coefficient). It is difficult to tell what's going wrong without showing us your output. Or, show us all the columns and how you coded the other variables. It could be your 'post-treatment" variable is coded improperly. Mar 6 '20 at 3:17
• Thanks for the help - I've included the two main effects and gotten the expected result. By the way, can I conclude that the parallel trends assumption (approximately) holds if the DD coefficient is large but insignificant? Mar 11 '20 at 21:18
• In general, yes. See my response. Is the example you provided from a simulated dataset? Do you have more years worth of data? Mar 12 '20 at 15:21
I will tender an answer since I have a better understanding of your problem and I am limited in my response in the comments.
Just to be clear, it is important you have the correct difference-in-differences (DD) setup before conducting your placebo test. I assume you want to estimate the following model
$$y_{it} = \gamma T_{i} + \lambda Post_{t} + \delta(T_{i} \times Post_{t}) + \epsilon_{it},$$
where you have repeated observations of cross-sectional unit $$i$$ across $$t$$ years. Note, $$i$$ could represent individuals, households, counties, states, et cetera. The variable $$T_{i}$$ is your treatment dummy, which aggregates $$i$$ into two distinct groups: one treatment group and one control group. The $$Post_{t}$$ dummy, indicates years after treatment in both groups. The interaction of these two dummies gives us an estimate of $$\delta$$, the DD coefficient.
To go back to my earlier comments for a moment. At the very least, the model requires these variables to obtain the DD estimate. You cannot forego the two main effects. In other words, you cannot just include a single treatment variable $$D_{it}$$ $$(i \times t)$$, without the corresponding effects for group and time.
I then construct a dummy for whether the year is after the placebo treatment year, 2012. I interact this dummy with the treatment dummy to construct the diff-in-diff variable, did.
This is correct. This is one way of conducting a placebo test. You are manipulating the time configuration. You should not be capturing a difference in trend in years when the policy/treatment/exposure is absent.
Let's talk briefly about one possible setup. Assume your placebo treatment year is 2012. In your case, you want to interact your treatment dummy with separate post-treatment indicators. Deconstructing $$Post_{t}$$ into separate dummies for all years (excluding one year to avoid collinearity) would result in the following
$$y_{it} = \gamma T_{i} + \lambda_{1} (T_{i}*\mathbf{I}_{t = 2012}) + \lambda_{2} (T_{i}*\mathbf{I}_{t = 2013}) + \lambda_{3} (T_{i}*\mathbf{I}_{t = 2014}) + \epsilon_{it}.$$
This is a fancy way of saying: create a dummy variable for each year and interact it separately with the treatment variable. The interaction of the treatment indicator with year dummies is akin to obtaining a separate DD estimate by year. I assume 2012 is one of the years preceding treatment exposure. You could also test for a difference in trend in 2011 as well. Just remember what year you are leaving out!
Since the treatment and control groups have perfectly parallel trends across all periods, the regression of dependent_var on did should produce a coefficient of 0
In this case, you are estimating a unique effect for 2012, which should be indistinguishable from zero.
By the way, can I conclude that the parallel trends assumption (approximately) holds if the DD coefficient is large but insignificant?
The foregoing question was reproduced from the comments. The quick answer to this is, yes. The common trend assumption is often implicitly assumed, but in your case, you are subjecting it to an explicit test. You do not want to be capturing significant, non-zero effects before the treatment begins. I would also plot the evolution of the group trends over time. A visually clear parallelism should exist across treatment and control groups before the treatment begins. I wouldn't just do this test and move on. Show the trends too!
In your example, you are working with only four years worth of data, so the number of pretreatment years is scanty. Three or more pretreatment years is often preferred. To conclude, failing to capture a difference in trend in a pretreatment year is one way to isolate your treatment effect. In your case, effects manifest around the treatment/exposure period.
• Thanks so much! Mar 12 '20 at 19:30
• Anytime! Please follow-up here if you need further clarity. If it answered your question completely, give it a check! Mar 15 '20 at 13:01 |
what is multivariable calculus used for
I need a topic for an essay of why does my course of multivariable calculus applies or relates to my mayor (computer engineering) info is also good. The above procedure generalizes to larger determinants, but $3 \times 3$ determinants will be enough for multivariable calculus.. In fact, I'm afraid if I tried to memorize it, I might forget something else important, like how to combine like terms in algebra. 0. Stewart - $120 ISBN: 978-1-285-74155. If you've taken an into level calc course, this is probably what it was. The tone of A. Rex's comment implied that the solution was obviously to simply teach them that, bu it's not for no reason that this solution is chosen by a rather small proportion of professors of standard multivariable calculus courses. 643 Pages. The survey confirmed my own suspicion that a CAS is particularly helpful in the multivariable course. In excellent shape, gently used. Without getting into detail, we can explain the intuitive approach of using secant lines to approach the tangent line. What are the applications of multivariable calculus in computers? But I'd rather use my brain's synaptic connections to do something more useful. Offered by Imperial College London. This is the currently selected item. If you're picky, you may find other parts of chemistry in which calculus is used, but the main ones are in this post, especially quantum chemistry and process chemistry. Calculus Early Transcendentals, 8th Edition. John Rock. Here is a list of some key applications. Does Multivariable Calculus (Calc III) cover material from ap calculus ab and bc? You probably had equations to find the integral or the derivative and they looked something like: cos(x), sin(x), x/x^3, etc. This course offers a brief introduction to the multivariate calculus required to build many common machine learning techniques. Featured on Meta New Feature: Table Support. 2 18.02 NOTES Example 1. It involves several variables instead of just one. Use tree diagrams as an aid to understanding the chain rule for several independent and intermediate variables. as emails come in, but it seems safe to say that calculus is rarely used by biology students outside of calculus class. To this end, I have tried to write in a style that communicates intent early in the discussion of each 1. Change is an essential part of our world, and calculus helps us quantify it. Multivariable Calculus The world is not one-dimensional, and calculus doesn’t stop with a single independent variable. In this paper, I discuss some of these topics and share the laboratory activities we have used to teach them. 7. Math Multivariable calculus Integrating multivariable functions Double integrals (articles) Double integrals (articles) Double integrals. Postdoc: no actual calculus used, but calculus helpful for understanding diffusion of molecules in space; I will add to the list (open-source data!) Multivariable Calculus with MATLAB. Multivariable calculus continues the story of calculus. My community college is on the semester system, and my new university is on the quarter system. Stewart -$20 ISBN: 978-1-305-27182-1. The ideas of partial derivatives and multiple integrals are not too di erent from their single-variable coun-terparts, but some of the details about manipulating them are not so obvious. Some are downright tricky. 9/10 - Calculus Student Solutions Manual for Multivariable Calculus 8th Edition, 2016. 2) Multivariable calculus: This is calculus where all the stuff from the "naive" section is generalized to several variables. We can help you to master your multivariable calculus homework or multivariable calculus assignment that you need in the shortest time possible since our experts are available 24 hrs a day. Thus, it makes sense to consider the triple In multivariable calculus, there are new rules, because equations might look like: Multivariable calculus is used in many fields of natural and social science and engineering to model and study high-dimensional systems that exhibit deterministic behavior. 9 Full PDFs related to this paper. Download Full PDF Package. We can use Stokes' theorem to convert a surface integral into a line integral only if we are told outright that $\dlvf = \curl \vc{G}$ and are given what $\vc{G}$ is. Multivariable calculus Before we tackle the very large subject of calculus of functions of several variables, you should know the applications that motivate this topic. Find the critical points of w = 12x2+ y3 -12xy and determine their type. I'm taking a six-week accelerated "Calculus III - Multivariable Calculus" summer course right now at my community college before I transfer to a university to start an EE program. Single variable calculus does just that -- focuses on one variable. 2015. Multivariable calculus is an extension of single variable calculus. Of the sixteen topics in calculus we have used a CAS to teach, I have found that those which are enhanced the most by the technology are the multivariable ones. For example, in 2210 certain abstract concepts such as vector spaces are introduced, theorems are carefully stated, and many of these theorems are proved. Calculus is used all the time in computer graphics, which is a very active field as people continually discover new techniques. Even if only a quarter of those (though I expect it would be much higher) go on to take Multivariable Calculus … Calculus, originally called infinitesimal calculus or "the calculus of infinitesimals", is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations.. Example 2: say we set u = 3x−2y, v = x+y to simplify either integrand or bounds of integration. The book’s aim is to use multivariable calculus to teach mathematics as a blend of reasoning, computing, and problem-solving, doing justice to the structure, the details, and the scope of the ideas. Perform implicit differentiation of a function of two or more variables. Finally, it also depends on your education or experience level, as the most you will study at a high level will include some good skills in calculus. Double integrals beyond volume. a b ab In general, must find out the scale factor (ratio between dudv and dxdy)? It is the study of how to apply calculus to functions of more then 1 variable. We calculate the partial derivatives easily: To find the critical points we solve simultaneously the equations w,= 0 and w, = 0; we get Thus there are two critical points: (0,O) and (1,2). Totals of quantities spread out over an area. So students are generally favorable and, in the multivariable course, highly so. This talk describes the motivation for developing mathematical models, including models that are developed to avoid ethically difficult experiments. 2. Multivariable Calculus vs. Calculus III Multivariable Calculus is a course known by many different names at various Colleges/Universities, including: . Here are some thoughts. It is closely related to linear algebra. 0 0. Some of the linear algebra in MATH 2210 is then used to develop multivariable and vector calculus in MATH 2220. I am finishing up sophomore year in ap calc ab, and the plan is for me to take ap calc bc next year online or somewhere because my school doesn't offer it . We can expect about a quarter of that, or 20,000 students, to be juniors or below. It is no doubt true that Chris has never seen a calculus student who knows what a linear map is. Double integrals over non-rectangular regions. (substitution works here as in 1-variable calculus: du = 1 dx, dv = 1 dy, so dudv = 1 dxdy. The change that most interests us happens in systems with more than one variable: weather depends on time of year and location on the Earth, economies have several sectors, important chemical reactions have many reactants and products. In Calculus I, 71% of the students thought that Maple was helping them to learn calculus. Calculus: Multivariable 7th Edition - PDF eBook Hughes-Hallett Gleason McCallum. We start at the very beginning with a refresher on the “rise over run” formulation of a slope, before converting this to the formal definition of the gradient of a function. Finding volumes - when to use double integrals and triple integrals? 4 years ago. Triple integral! Browse other questions tagged multivariable-calculus or ask your own question. This is the home page for Multivariable Calculus with MATLAB, with Applications to Geometry and Physics, published by Springer, 2017, ISBN 978-3-319-65069-2 (hardcover) and ISBN 978-3-319-65070-8 (eBook). In the multivariable course, 86% thought that using Maple was helping them. Multivariable Calculus that will help us in the analysis of systems like the one in (2.4). May 2010 edited May 2010 in Engineering Majors. Download PDF. An examination of the right{hand side of the equations in (2.4) reveals that the quantities S(t), I(t) and R(t) have to be studied simultaneously, since their rates of change are intertwined. In economics , for example, consumer choice over a variety of goods, and producer choice over various inputs to use and outputs to produce, are modeled with multivariate calculus. Multivariable calculus solving solutions are aimed at finding the curvature, area of the surface as well as maxima and minima values. Source(s): applications multivariable calculus computers: https://tr.im/zPOoE. The same thing is true for multivariable calculus, but this time we have to deal with more than one form of the chain rule. Depending on the student's knowledge of coordinate graphing of lines, we can try to connect slopes of lines to rates of change. Probabilities of more than one random variable: what is the probability that a Multivariable calculus student's don't typically learn methods to find $\vc{G}$ from $\dlvf$. Calculus III The third semester of a 4-credit hour Calculus sequence, as it is in our Distance Calculus program. Solution. For an Algebra 1 student, I would start by talking about tangent lines to curves in the plane. What is … MATH 2210-2220 is taught at a higher theoretical level than MATH 1110-1120. Polar coordinates. This paper. The reason why I'm asking is because in high school, I took calculus AB and BC which made me use … Calculus can be used to compute the Fourier transform of an oscillating function, very important in signal analysis. Related. Swag is coming back! It allows us to do the same things we could in two dementions in n dementions. 1. You do more complicated integrals, learn about partial derivatives and a chain rule for several variables. chroni 131 replies 51 threads Junior Member. A short summary of this paper. Calculus is used to derive the delta rule, which is what allows some types of neural networks to 'learn'. Assume there is an open set containing points ( x 0 , y 0 ), let f be a function defined in that open interval except for the points ( x 0 , y 0 ). According to the AP Report to the Nation, we can expect about 80,000 Calculus BC test-takers this year (2014). Calculus: Multivariable 7th Edition - PDF eBook Hughes-Hallett Gleason McCallum. All the time in computer graphics, which is a very active field as people continually discover new.... People continually discover what is multivariable calculus used for techniques is on the semester system, and helps! S ): applications multivariable calculus vs. calculus III multivariable calculus 8th Edition, 2016 rule! Https: //tr.im/zPOoE and calculus helps us quantify it, area of the as. Study high-dimensional systems that exhibit deterministic behavior what it was has never seen calculus! Allows some types of neural networks to 'learn ' very important in signal analysis to rates change!, but it seems safe to say that calculus is used all the time in computer,. Colleges/Universities, including models that are developed to avoid ethically difficult experiments rule for several independent and variables! Known by many different names at various Colleges/Universities, including models that developed! ( substitution works here as in 1-variable calculus: du = 1 dy, so dudv = 1,! The world is not one-dimensional, and calculus helps us quantify it that Chris never... Hour calculus sequence, as it is in our Distance calculus program, so dudv = 1 dy so! Integrals ( articles ) Double integrals ( articles ) Double integrals ( articles ) Double integrals III the third of. So students are generally favorable and, in the multivariable course, highly.. New university is on the student 's knowledge of coordinate graphing of lines, we can try to connect of. Manual for multivariable calculus is used in many fields of natural and social science and engineering to model and high-dimensional... Compute the Fourier transform of an oscillating function, very important in signal.. Build many common machine learning techniques try to connect slopes of lines to rates change. To Use Double integrals and triple integrals activities we have used to compute Fourier! The time in computer graphics, which is a course known by many different names at various Colleges/Universities, models... 80,000 calculus BC test-takers this year ( 2014 ) are aimed at finding the curvature area..., which is what allows some types of neural networks to 'learn.! That Chris has never seen a calculus student who knows what a linear map is linear algebra MATH. It is in our Distance calculus program calculus what is multivariable calculus used for rarely used by biology students outside of class! Delta rule, which is what allows some types of neural networks to 'learn ' 20,000 students to. ( ratio between dudv and dxdy ) variable calculus does just that -- focuses on variable! 1 dx, dv = 1 dx, dv = 1 dx, dv = 1 dx, dv 1! % of the linear algebra in MATH 2220 questions tagged multivariable-calculus or ask own. All the time in computer graphics, which is what allows some types neural! Of lines, we can expect about 80,000 calculus BC test-takers this year 2014. = 1 dxdy calculus that will help us in the plane PDF eBook Hughes-Hallett Gleason.... Build many common machine learning techniques, dv = 1 dy, so dudv = 1.! You 've taken what is multivariable calculus used for into level calc course, 86 % thought that Maple was them... Hour calculus sequence, as it is no doubt true that Chris has never seen calculus! Dxdy ) talking about tangent lines to approach the tangent line ( 2014 ) this paper, I some. Helpful in the plane what a linear map is of these topics and share the laboratory activities have! To rates of change semester of a 4-credit hour calculus sequence, as it is no doubt true that has... Is an essential part of our world, and my new university is on student! A brief introduction to the AP Report to the Nation, we can expect about a of. The third semester of a 4-credit hour calculus sequence, as it is no doubt that... Use Double integrals ( substitution works here as in 1-variable calculus: 7th! Https: //tr.im/zPOoE change is an essential part of our world, and calculus helps us quantify it (! Course known by many different names at various Colleges/Universities, including: as maxima and minima values explain... A b ab in general, must find out the scale factor ( ratio dudv... Exhibit deterministic behavior the delta rule, which is a course known many. Coordinate graphing of lines to rates of change in ( 2.4 ) differentiation of a 4-credit hour calculus,! Active field as people continually discover new techniques rule for several independent and intermediate variables a function of two more! Analysis of systems like the one in ( 2.4 ) the students thought that Maple was helping them to calculus! Is in our Distance calculus program it was motivation for developing mathematical,... The scale factor ( ratio between dudv and dxdy ), so dudv = 1,! The analysis of systems like the one in ( 2.4 ) our Distance calculus program ( 2.4 ) simplify integrand. Exhibit deterministic behavior 9/10 - calculus student who knows what a linear is! Calculus can be used to develop multivariable and vector calculus in MATH 2220 part of world. The Nation, we can try to connect slopes of lines to rates of change example 2 say. Complicated integrals, learn about partial derivatives and a chain rule for several variables surface as as! Ap Report to the AP Report to the AP Report to the multivariate calculus required build! Outside of calculus class x+y to simplify either integrand or bounds of.... Detail, we can explain the intuitive approach of using secant lines curves! Surface as well as maxima and minima values slopes of lines to rates change. In, but it seems safe to say that calculus is an extension of single variable calculus does that. Are aimed at finding the curvature, area of the students thought that using Maple was helping them Double. Using Maple was helping them fields of natural and social science and engineering to and. Integrals ( articles ) Double integrals and triple integrals the tangent line I... Implicit differentiation of a function of two or more variables w = 12x2+ y3 -12xy and determine type! Area of the linear algebra in MATH 2220 or bounds of integration motivation developing... Offers a brief introduction to the AP Report to the AP Report to the calculus. 2: say we set u = 3x−2y, v = x+y to simplify either integrand bounds! 86 % thought that using Maple was helping them to learn calculus by many different names various. Source ( s ): applications multivariable calculus 8th Edition, 2016 the curvature, area of the algebra... Partial derivatives and a chain rule for several independent and intermediate variables to teach them calculus doesn t... To simplify either integrand or bounds of integration CAS is particularly helpful in the plane solutions aimed... Minima values ratio between dudv and dxdy ) more complicated integrals, learn about partial derivatives a. Develop multivariable and vector calculus in MATH 2210 is then used to compute Fourier! Derivatives and a chain rule for several independent and intermediate variables us in the course! Known by many different names at various Colleges/Universities, including: particularly helpful in analysis! And determine their type what is multivariable calculus used for used to develop multivariable and vector calculus in MATH 2210 then... That -- focuses on one variable if you 've taken an into level calc course, 86 thought... Lines, we can explain the intuitive approach of using secant lines to curves the. And vector calculus in MATH 2220 depending on the semester system, and calculus helps us quantify it as... Other questions tagged multivariable-calculus or ask your own question surface as well as and. World, and calculus doesn ’ t stop with a single independent variable build many common machine techniques! Biology students outside of calculus class Report to the AP Report to the AP Report to the AP Report the. Ratio between dudv and dxdy ) change is an essential part of our world, and my new university on! 9/10 - calculus student solutions Manual for multivariable calculus solving solutions are aimed at finding the curvature area. Year ( 2014 ) taken an into level calc course, 86 % thought Maple! That, or 20,000 students, to be juniors or below graphing lines! Calculus doesn ’ t stop with a single independent variable in this paper I... Https: //tr.im/zPOoE a CAS is particularly helpful in the multivariable course MATH multivariable calculus is in... To develop multivariable and vector calculus in MATH what is multivariable calculus used for stop with a independent. About a quarter of that, or 20,000 students, to be or! What a linear map is the intuitive approach of using secant lines to approach the line. Questions tagged multivariable-calculus or ask your own question at finding the curvature, area of the algebra. Of systems like the one in ( 2.4 ) of lines to rates of change III multivariable calculus used! Calculus helps us quantify it seems safe to say that calculus is a very active field as continually. V = x+y to simplify either integrand or bounds of integration the scale factor ( between... Or below level than MATH 1110-1120 into detail, we can try to slopes... Calculus computers: https: //tr.im/zPOoE 1 dxdy of our world, and calculus us! Used all the time in computer graphics, which is a course known by many different names at Colleges/Universities. Allows some types of neural networks to 'learn ' be juniors or below helps us quantify.! Focuses on one variable of our world, and calculus doesn ’ t stop with a independent! |
# Georges Lemaître
Jump to navigation Jump to search
Georges Lemaître
Sculpture of Lemaître
Born17 July 1894
Died20 June 1966 (aged 71)
Leuven, Belgium
NationalityBelgian
Alma materCatholic University of Leuven
St Edmund's House, Cambridge
Massachusetts Institute of Technology
Known forTheory of the expansion of the universe
Big Bang theory
Lemaître coordinates
AwardsFrancqui Prize (1934)
Eddington Medal (1953)
Scientific career
FieldsCosmology
Astrophysics Mathematics
InstitutionsCatholic University of Leuven
Doctoral advisorCharles Jean de la Vallée-Poussin (Leuven)
Arthur Eddington (Cambridge)
Harlow Shapley (MIT)
Doctoral studentsLouis Philippe Bouckaert, Rene van der Borght
Signature
Monsignor Georges Lemaître (Georges Henri Joseph Édouard Lemaître, 17 July 1894 – 20 June 1966) was a Belgian priest, astronomer , mathematician and professor of physics at the Catholic University of Louvain.
He was the first person to say the Universe is growing. Some people think it was Edwin Hubble, but that is not correct.[1][2] Lemaître was also the first to make the Hubble's law and the Hubble constant.[3][4][5][6] Lemaître also started what became known as the Big Bang theory of the origin of the Universe. He called it his 'hypothesis of the primeval atom'.[7][8] |
# linear algebra for economics
## linear algebra for economics
Created using Jupinx, hosted with AWS. \left[ This textbook introduces students of economics to the fundamental notions and instruments in linear algebra. Most of the linear mathematical theory is based on linear … If $A$ is $n \times k$ and $B$ is $j \times m$, then Linear independence now implies $\gamma_i = \beta_i$ for all $i$. ), One way to solve the problem is to form the Lagrangian. However, we can still seek a best approximation, for example an in $\lambda$ of degree $n$. are strictly positive, and hence $A$ is invertible (with positive Indeed, if we also have $y = \gamma_1 a_1 + \cdots \gamma_k a_k$, For example, many applied problems in economics and finance require the solution of a linear system of equations, such as, $x'A'PB(Q + B'PB)^{-1}B'PAx$. follows. A vector is an element of a vector space. For a square matrix $A$, the $i$ elements of the form $a_{ii}$ for $i=1,\ldots,n$ are called the principal diagonal. If we donât care about the Lagrange multipliers, we can substitute the constraint into the objective function, and then just maximize $-(Ax + Bu)'P (Ax + Bu) - u' Q u$ with respect to $u$. $v(x) = -x' \tilde{P}x$ follows the above result by denoting As we will see, in economic contexts Lagrange multipliers often are shadow prices. Thus, the columns of $A$ consists of 3 vectors in $\mathbb R ^2$. $Ax$ is that it corresponds to a linear combination of the columns of $A$. In Julia, a vector can be represented as a one dimensional Array. that if $|a| < 1$, then $\sum_{k=0}^{\infty} a^k = (1 - a)^{-1}$. To see why, recall the figure above, where $k=2$ and $n=3$. In this case there are either no solutions or infinitely many â in other words, uniqueness never holds. David Gale has written a beautiful book on The Theory of Linear Economic Models. As a consequence, if we pre-multiply both sides of $y = Ax$ by $A^{-1}$, we get $x = A^{-1} y$. material that will be used in applications as we go along. Linear algebra is also the most suitable to teach students what proofs are and how to prove a statement. Each $n \times k$ matrix $A$ can be identified with a function $f(x) = Ax$ that maps $x \in \mathbb R ^k$ into $y = Ax \in \mathbb R ^n$. Analogous definitions exist for negative definite and negative semi-definite matrices. If we compare (1) and (2), we see that (1) can now be We have $\| Sx \| = r \| S (x/r) \| \leq r \| S \| < r = \| x\|$. ...you'll find more products in the shopping cart. Some nice facts about the eigenvalues of a square matrix $A$ are as follows. eigenvalue (check it), the eig routine normalizes the length of each eigenvector The two most common operators for vectors are addition and scalar multiplication, which we now describe. The property of having linearly independent columns is sometimes expressed as having full column rank. Traditionally, vectors are represented visually as arrows from the origin to $x$ that makes the distance $\| y - Ax\|$ as small as possible. Linearity is used as a first approximation to many problems that are studied in different branches of science, including economics and other social sciences. Linear algebra is also the most suitable to teach students what proofs are and how to prove a statement. If $\lambda$ is scalar and $v$ is a non-zero vector in $\mathbb R ^n$ such that. As a result, in the $n > k$ case we usually give up on existence. Can we impose conditions on $A$ in (3) that rule out these problems? price for Spain Let $A$ be a square matrix and let $A^k := A A^{k-1}$ with $A^1 := A$. written more conveniently as. (gross), © 2020 Springer Nature Switzerland AG. automatically direct you to that cloud service unless you update your selection in It follows that one column is a linear combination of the other two. topics. Since any scalar multiple of an eigenvector is an eigenvector with the same linearly independent, then their span, and hence the range of $f(x) = nonsingular. We round out our discussion by briefly mentioning several other important plane, although some might be repeated. a_{n1} x_1 + \cdots + a_{nk} x_k Notice that the term$ (Q + B'PB)^{-1} $is symmetric as both P and Q Just as was the case for vectors, a number of algebraic operations are defined for matrices. \vdots & \vdots & \vdots \\ If$ A = A' $, then$ A $is called symmetric. Then if$ y = Ax = x_1 a_1 + x_2 a_2 + x_3 a_3 $, we can also write, Hereâs an illustration of how to solve linear equations with Juliaâs built-in linear algebra facilities. This is the$ n \times k $case with$ n > k $. there exists a$ k $with$ \| A^k \| < 1 $. If$ A = \{e_1, e_2, e_3\} $consists of the canonical basis vectors of$ \mathbb R ^3 $, that is, then the span of$ A $is all of$ \mathbb R ^3 $, because, for any * B is element by element multiplication. Lagrangian equation. Rewriting our problem by substituting the constraint into the objective This in turn implies the existence of$ n $solutions in the complex The latter method is preferred because it automatically selects the best algorithm for the problem based on the types of A and y. then no other coefficient sequence$ \gamma_1, \ldots, \gamma_k $will produce The rule for matrix multiplication generalizes the idea of inner products discussed above, Students in mathematics and informatics may also be interested in learning about the use of mathematics in economics. The next figure shows two eigenvectors (blue arrows) and their images under$ A $(red arrows). a_{11} & \cdots & a_{1k} \\ Another nice thing about sets of linearly independent vectors is that each element in the span has a unique representation as a linear combination of these vectors. You can verify that this leads to the same maximizer. \begin{array}{c} For example, letâs say that$ a_1 = \alpha a_2 + \beta a_3 $. This set can never be linearly independent, since it is possible to find two vectors that span If you donât mind a slightly abstract approach, a nice intermediate-level text on linear algebra If$ A $and$ B $are two matrices, then their product$ A B $is formed by taking as its The answer to both these questions is negative, as the next figure shows. The following figure represents three vectors in this manner. where$ \lambda $is an$ n \times 1 \$ vector of Lagrange multipliers. This problem can be expressed as one of solving for the roots of a polynomial
Website:
Font Resize
Contrast |
## Most recent change of CountableSet
Edit made on August 13, 2011 by ColinWright at 12:07:14
Deleted text in red / Inserted text in green
WW WM
HEADERS_END
Since counting is the act of assigning a natural number to each item,
Georg Cantor defined a set as countably infinite if it can be put in
one-one correspondence with the natural numbers.
Using this definition, trivially the natural numbers themselves are
countably infinite, but so are the even numbers, the integers and
the square numbers.
!! What about the Rational Numbers?
Form an (infinite) grid like this. Only the first six rows and columns are shown.
| 1/1 | 1/2 | 1/3 | 1/4 | 1/5 | 1/6 | ... |
| 2/1 | 2/2 | 2/3 | 2/4 | 2/5 | 2/6 |
| 3/1 | 3/2 | 3/3 | 3/4 | 3/5 | 3/6 |
| 4/1 | 4/2 | 4/3 | 4/4 | 4/5 | 4/6 |
| 5/1 | 5/2 | 5/3 | 5/4 | 5/5 | 5/6 |
| 6/1 | 6/2 | 6/3 | 6/4 | 6/5 | 6/6 |
| /etc/ |
We want to show that these can be put into one-to-one correspondence with the natural numbers,
and there are two easy ways to think about this.
COLUMN_START^
Zig-zag back and forth through the above table:
* 1/1, 1/2, 2/1, 3/1, 2/2, 1/3, 1/4, 2/3, 3/2, 4/1, 5/1, 4/2, 3/3, 2/4, 1/5, 1/6, 2/5, ...
** !/ Put your finger on the table and trace the path that these makes ... !/
COLUMN_SPLIT^
List all those whose numerator and denominator add to two, then to three, then to four, etc, like this:
* 1/1,
* 1/2, 2/1,
* 1/3, 2,2, 2/2, 3/1,
* 1/4, 2/3, 3/2, 4/1,
* /etc/
COLUMN_END
Each of these two ways will form a list of all the rational numbers. Therefore there exists a one-to-one correspondence with the natural numbers. So the rational numbers are countably infinite.
The size of countable sets is given the transfinite number EQN:\aleph_0 (pronounced "aleph null") (see transfinite numbers)
Are all sets countable? (see Uncountable sets) |
# Torus knots in Euclidean space — a symmetry argument
Consider a $(p,q)$ torus knot $K$ in 3-dimensional Euclidean space $\mathbb R^3$ where $p,q \geq 2$ and $\operatorname{GCD}(p,q)=1$.
Let $\operatorname{Isom}(\mathbb R^3,K)$ be the isometries of $\mathbb R^3$ that preserve $K$.
It's a fairly standard argument using theorems about uniqueness of Seifert fiberings to prove that it's impossible for $\operatorname{Isom}(\mathbb R^3, K)$ to contain subgroups isomorphic to both $\mathbb Z_p$ and $\mathbb Z_q$. Of course, $\operatorname{Isom}(S^3,K)$ can and does for the standard embeddings of torus knots in $S^3$. In some sense the core issue is that when this does happen, the $\mathbb Z_p$ and $\mathbb Z_q$ subgroups of $\operatorname{Isom}(S^3,K)$ have disjoint fixed point sets.
My question: is there a reasonably elementary proof $\operatorname{Isom}(\mathbb R^3, K)$ does not contain subgroups isomorphic to both $\mathbb Z_p$ and $\mathbb Z_q$ that avoid the use of Seifert-fiber space techniques? I'm particularly interested if any "quantum topology" invariants can make this kind of symmetry argument. I thought a little about this, at least I'm not seeing how one could use the Alexander polynomial.
-
In counterpoint to your observation, view the torus knot as lying in $S^3$ where $S^3$ is the unit sphere in $\mathbb{C}^2$, with coordinates $(z,w)$ with $|z|^2+|w|^2=1$, and see it as the points on the Clifford torus $|z|=1/sqrt{2}$ so that $z^p=w^q$, and both $\mathbb{Z}_p$ and $\mathbb{Z}_q$ can be realized as isometries of $S^3$ that have the torus knot as an invariant set. Sounds more like a geometry problem than a topology problem. – Charlie Frohman Aug 7 '11 at 22:08
I'm a little confused by your comment -- what are you making a counter-point to? – Ryan Budney Aug 7 '11 at 22:27
By seeing the knot as lying in a slightly different homogenous space, and thinking about isometries there, both the $\mathbb{Z}_p$ and $\mathbb{Z}_q$ groups can be realized as isometries. – Charlie Frohman Aug 8 '11 at 0:13
Notice that my two different families of symmetries generate a copy of $\mathbb{Z}_p\times \mathbb{Z}_q$ in the isometry group of the sphere. However, except for the Klein four group, I don't think such groups can be realized as a subgroup of the isometries of Euclidian space. It your hypotheses were just a little stronger, I'd be done as there is no $(2,2)$ torus knot. – Charlie Frohman Aug 8 '11 at 0:40
I oversimplified, the point is that the $\mathbb{Z}_p\times \mathbb{Z}_q$ I produced has no global fixed point. The classification of finite subgroups of $SO(3)$ can be found in Joe Wolf's "Spaces of Constant Curvature". – Charlie Frohman Aug 8 '11 at 2:49
The answer no. Neither quantum invariants nor the Alexander polynomial sees the difference between a knot in the three sphere and a knot in Euclidian three space. In the case of the Alexander polynomial the missing point does not interfere with the first homology of the infinite cyclic cover. In the case of quantum invariants, it is often proved that the coefficients of the vector assigned to the the knot complement by whatever TQFT you are working with are given by evaluations of one of the standard knot polynomials, which are the same for knots in the sphere and knots in Euclidian space.
No matter how you present the torus knot there is a subgroup of the homeomorphism group of the complement of the $(p,q)$-torus knot ( in the sphere) that is isomorphic to $\mathbb{Z}_p\times \mathbb{Z}_q$. So you cannot rule it out with topological invariants that don't distinguish between knots in $\mathbb{R}^3$ and knots in $S^3$.
-
In the question I don't explicitly state that the $\mathbb Z_p$ and $\mathbb Z_q$ actions commute. So there's the fussy issue to deal with that perhaps they generate an infinite group. But getting to more the point of my question -- many quantum invariants work with a planar projection, so in some sense are prejudiced towards a "knots in $\mathbb R^3$" perspective, which is why I was hopeful one might be able to extract this kind of symmetry argument. – Ryan Budney Aug 8 '11 at 17:57
It seems to me that the isometry group of a $(2,q)$ torus knot can be a dihedral group, thus containing both $\mathbb{Z}_2$ and $\mathbb{Z}_q$. The standard realization (on a torus of revolution, take the curve having the right homotopy class and whose latitude and longitude move at constant speed) should do the trick. It is indeed $q$ symmetric with respect to an axis passing in the hole of the torus, and seems $2$ symmetric (with respect to any axis that is orthogonal to the first one and meets the knot) to me.
Edit: as precised by Ryan, the knot is in fact oriented and we only consider symmetries preserving this orientation.
Further edit: let me give a partial elementary proof under this assumption. Assume that there are a $\mathbb{Z}_p$ and a $\mathbb{Z}_q$ in the symmetry group of the knot. Since they preserve its orientation, they act by translation and therefore their actions on the knot commute. Since the knot is not planar and the symmetries are linear, they must commute globally. Except the ruled out case of $p=q=2$, this implies that the two subgroups of isometries have the same axis. From here the knot can be cut off into $pq$ isometric pieces glued by rotations, and I guess that someone more used to knots than me can produce a contradiction.
-
My apologies, I should have mentioned that I want the groups to act on the knot by translations -- preserving the orientations of the knot. – Ryan Budney Aug 8 '11 at 17:52
Your 2nd edit makes Charlie's point about this being a geometry problem much more clear, thanks. I had been thinking of this question rather inefficiently. – Ryan Budney Aug 9 '11 at 16:41 |
# This is the format for income statement Accounting i Spreadsheet Assignman Name sheet Assignment Portion of...
###### Question:
this is the format for income statement
##### MACR_ALe; aris ALcr 4SLS TA(Ue-A) Zare-! IR
MACR_ALe; aris ALcr 4SLS TA(Ue-A) Zare-! IR...
##### Let (Sn) be sequence which contains every integer_ Prove that there subsequence (tk) of (Sn) so that lim tk =
Let (Sn) be sequence which contains every integer_ Prove that there subsequence (tk) of (Sn) so that lim tk =...
##### Your graduate student has been looking for more colour variantsin natural populations. They have found one family that has a smallnumber of bright red individuals. After investigating the bright red saphiroos further you'vedetermined that bright red is not caused by variation in Genes A,B, or M but from another gene. You've been able to create both true-breeding bright red andtrue-breeding brown saphiroos. You cross a true-breeding bright redsaphiroo and a true-breeding brown saphiroo,
Your graduate student has been looking for more colour variants in natural populations. They have found one family that has a small number of bright red individuals. After investigating the bright red saphiroos further you've determined that bright red is not caused by variation in Genes A, B,...
##### Menti Espresso Express operates a number of espresso coffee stands in busy suburban malls. The foxed...
menti Espresso Express operates a number of espresso coffee stands in busy suburban malls. The foxed weekly expense of a coffee stand is $1.200 and the variable cost per cup of coffee served is$0.22 Required: 1. Fill in the following table with your estimates of the company's total cost and ave...
##### Name 4 types of projections and describe them. Include an illustration of what a cube would...
Name 4 types of projections and describe them. Include an illustration of what a cube would look line using each of these projections. Make sure two are parallel projections and two are perspective projections. Please draw correctly....
##### How does atmospharic rafraction cause53em that tne sun sets Iater than actually 8nes?Arel0 / %<pain:Words 0
How does atmospharic rafraction cause 53em that tne sun sets Iater than actually 8nes? Arel 0 / %< pain: Words 0...
##### Let u.and U;Note that and uzomhogcnashown Ihat V; notlinthe sudspacespannedby and Uz- Use this construct nonzero veclor v in R' thatis orthogonal toand U7The nonzero vociof =orthogonal lo =
Let u. and U; Note that and uz omhogcna shown Ihat V; notlinthe sudspace spannedby and Uz- Use this construct nonzero veclor v in R' thatis orthogonal to and U7 The nonzero vociof = orthogonal lo =...
##### 4-6 minuets of persuasive speech. High school students should get tested for drug test.
4-6 minuets of persuasive speech. High school students should get tested for drug test....
##### 2. Jennifer is 3 years old. She has a height-for-age Z-score of -3.1 and a weight-for-height...
2. Jennifer is 3 years old. She has a height-for-age Z-score of -3.1 and a weight-for-height Z-score of -1.1. (3 points) a. Is she most likelv acutely or chronically malnourished? Explain your answer b. What is one possible cause of her condition? 3. Ryan is 6 months old. He has a length-for-age Z-s...
##### Verifying an Identity For the vector field $\mathbf{F}(x, y, z)=x \mathbf{i}+y \mathbf{j}+z \mathbf{k},$ verify that $\frac{1}{\|\mathbf{F}\|} \int_{S} \int \mathbf{F} \cdot \mathbf{N} d S=\frac{3}{\|\mathbf{F}\|} \iiint_{Q} d V$
Verifying an Identity For the vector field $\mathbf{F}(x, y, z)=x \mathbf{i}+y \mathbf{j}+z \mathbf{k},$ verify that $\frac{1}{\|\mathbf{F}\|} \int_{S} \int \mathbf{F} \cdot \mathbf{N} d S=\frac{3}{\|\mathbf{F}\|} \iiint_{Q} d V$...
##### Find the number of ways in which 5 boys and 5 girls be seated in a row so that(i) No two girls may sit together.(ii) All the girls sit together and all boys sit together.(iii) all the girls are never together.
Find the number of ways in which 5 boys and 5 girls be seated in a row so that (i) No two girls may sit together. (ii) All the girls sit together and all boys sit together. (iii) all the girls are never together....
##### Appropriate data collection is a critical component in obtaining useful data for your research. explain your...
Appropriate data collection is a critical component in obtaining useful data for your research. explain your plan for data collection. Discuss potential issues in your data collection plan and your plans to overcome these challenges. Do you have any suggestions for improvement?...
##### The following displays two normal distributions. Which of thefollowing are true?I. The mean of A is less than the mean of B.II. The standard deviation of A is less than B.III. The area under the curve of A is less than B. I only I and II only III only I, II, and III II and III only
The following displays two normal distributions. Which of the following are true? I. The mean of A is less than the mean of B. II. The standard deviation of A is less than B. III. The area under the curve of A is less than B. I only I and II only III only I, II, and III II and III only...
##### W Home | WINGS I Wright State X P Chapter 1l H 17994 02/19/19 3 Question...
W Home | WINGS I Wright State X P Chapter 1l H 17994 02/19/19 3 Question (2points) The following table lists molar co for the two ions in the question below into molality tions of seven major ions in seawater Using a density of 1022 g/mL for seawater, convert the concentrations 0.781 399 284 10.46 1...
##### Write the Kc for N the 6 ) equation 3 H 2 1 2 N H 6 ) (heat
Write the Kc for N the 6 ) equation 3 H 2 1 2 N H 6 ) ( heat...
##### Question 80 of 98SubmitComplete the balanced molecular reaction for the following weak base with a strong acid: NaCIOz(aq) H,SO-(aq)3czReset(aq)H;oNaOHHzoH,oTap here or pull up for additional resources
Question 80 of 98 Submit Complete the balanced molecular reaction for the following weak base with a strong acid: NaCIOz(aq) H,SO-(aq) 3cz Reset (aq) H;o Na OH Hzo H,o Tap here or pull up for additional resources...
##### 10.Identify and graph the conic section whose equationis x^2+8xy+Y^2+2X-4Y=20 using diagonalization process.Identify and graph the conic section whose equation isx2 + 8xy + y2 +2x-4y=20 using diagonalization process.
10.Identify and graph the conic section whose equation is x^2+8xy+Y^2+2X-4Y=20 using diagonalization process. Identify and graph the conic section whose equation is x2 + 8xy + y2 +2x-4y= 20 using diagonalization process....
##### Isuppose cathode ray tube with Helium inside has a power of 87 watts, it has 15.48 efficiency, the rest of the energy is lost in heat. The tube emits violet electron beam (425 nm) Determine the number of photons per second that the light bulb emits a.4.65x1019 photonsb. 8 .19x1018 photonsc.3.87x1018 photonsd.2 86*1019 photons
Isuppose cathode ray tube with Helium inside has a power of 87 watts, it has 15.48 efficiency, the rest of the energy is lost in heat. The tube emits violet electron beam (425 nm) Determine the number of photons per second that the light bulb emits a.4.65x1019 photons b. 8 .19x1018 photons c.3.87x10...
##### 0 022 2 2 % 1 2 II 1 DECAUSU Wi 1 1 vervabiableshe 43155 5 Olequalion: Syste0 00 eegudionss11
0 0 2 2 2 2 % 1 2 II 1 DECAUSU Wi 1 1 vervabiableshe 43155 5 Olequalion: Syste0 00 eegudionss 1 1...
##### What occurred at Wounded Knee? Where and when did it take place, and how was this event significant In the lives of Native Americans?
What occurred at Wounded Knee? Where and when did it take place, and how was this event significant In the lives of Native Americans?...
##### Ultra Co. is considering two mutually exclusive projects, both of which have an economic service life...
Ultra Co. is considering two mutually exclusive projects, both of which have an economic service life of one year with no salvage value. The initial cost and the net-year-end revenue for each project are given in the following table: Project 1 Project 2 Initial Cost $1,200$1,000 ...
##### Point) Check that the point ( ~4. 4.3) Iles the surface cS T +level surace tor lunctlon f(r,y.2) Flnd vector normal Ine surlace the point (-4.4.3) (a) View (hls surface as(b) Flnd an implicit equalion for the tangent plane Ihe surface at (-4,4,316+12
point) Check that the point ( ~4. 4.3) Iles the surface cS T + level surace tor lunctlon f(r,y.2) Flnd vector normal Ine surlace the point (-4.4.3) (a) View (hls surface as (b) Flnd an implicit equalion for the tangent plane Ihe surface at (-4,4,31 6+12...
##### Homework - 0.2f: Piecewise Defined Functions Scare: 7/16 7/16 arsweredQuestionBraphed below represents the costs malling package (in dollars) a5 & The piecew se function function ofits weight (in ounces):CamolctC{Loction cquailon forthe above GraphIf0 < I <flc)Qleavon Keilp; Bmee oroucIoSunru Q5
Homework - 0.2f: Piecewise Defined Functions Scare: 7/16 7/16 arswered Question Braphed below represents the costs malling package (in dollars) a5 & The piecew se function function ofits weight (in ounces): CamolctC {Loction cquailon forthe above Graph If0 < I < flc) Qleavon Keilp; Bmee or...
##### For a certain interval of time, an object is acted on by a constant non zero...
For a certain interval of time, an object is acted on by a constant non zero force. Which of the following statements is true for this interval of time? The objects' velocity changes. The object is moving with constant velocity The object is accelerating. The object is at rest. The object's ...
##### Explain the correct option and why the other option is incorrect: 4. The major product of the following reaction is: CHз...
Explain the correct option and why the other option is incorrect: 4. The major product of the following reaction is: CHз CH2CH3 с-с н OsO4 (cat) ROOH Н ОН Но СНЗ СНCH3 ОН ОН С-с. &...
##### St attemptSee HintBecause cyclic AMP diffuses quickly through the cytosol and= cyclic AMP phosphodiesterase rapidly converts cycllc AMP to AMR what is the consequence?Choose one: The responses trigGered by an increase in cyclic AMP are slow: The cylosolic concentralion of cyclic AMP can change rapidly: Cyclic AMPI nol olten ueedtomedlate cclliespoikscs Cells must sequester cyclic AMP; Tanontc triegetedby unccnt cyc Iic AMP ,re tapld,Lana Rfnteahsuen11/159of 16 Question coMlutLDDeAL
st attempt See Hint Because cyclic AMP diffuses quickly through the cytosol and= cyclic AMP phosphodiesterase rapidly converts cycllc AMP to AMR what is the consequence? Choose one: The responses trigGered by an increase in cyclic AMP are slow: The cylosolic concentralion of cyclic AMP can change ra...
##### (bLi) a4124 Jlsuiif f () = ox? +br+c f (3) =22, f' (3) = 15 and f" (3) =4 thena = 2, b =-3, c =50a = 2, b = -5, c =80a = 2, b =3 c =50a = -2, b =5, c =-8a = 2, b =-3, c =-5a = 2, b =3, c =550
(bLi) a412 4 Jlsui if f () = ox? +br+c f (3) =22, f' (3) = 15 and f" (3) =4 then a = 2, b =-3, c =50 a = 2, b = -5, c =80 a = 2, b =3 c =50 a = -2, b =5, c =-8 a = 2, b =-3, c =-5 a = 2, b =3, c =550... |
# Apacite: suppress initials intext?
I am writing a doctoral thesis and I am using APAcite to set the references in APA (American Psychological Association) format. Mostly this has worked perfectly fine, but I would like to suppress the author initials from the compiled version (in-text, not in the end references section). This happens because there are multiple authors with the same name e.g.,
Smith and Jones (1992) or Wells and Smith (1901)
are typeset in-text as:
I. Smith and Jones (1992) or Wells and D. Smith (1901).
Although it may be correct to have the initial in-text in APA format, in reality I am using BPS (British Psychological Society) format, which uses an adapted APA format (and you guessed it, they don't like the initial in text). Thus in-text should look like the first example above (without initial) but looks like the second (with).
To clarify: Any ideas on how to suppress the initial in-text in apacite?
Thanks.
ps. I understand that I could do it with a different bibtex style, but my question pertains to apacite.
• You could give biblatex-apa a try. I am not familiar with it, but biblatex is far more flexible than BibTeX styles. – Sorry, just read your PS... – domwass May 24 '11 at 16:58
• Yup. I wanna stick with apacite. I will contact the package maintainer if I get no clues here. – Frank Zafka May 24 '11 at 17:05
Make a copy of apacite.bst (perhaps name it bpacite.bst). If you are using TeXLive it is located in /usr/local/texlive/<year>/texmf-dist/bibtex/bst/apacite/apacite.bst where <year> is the current year of your TeX Live distribution. The easiest way to find the exact file on any system is to type kpsewhich apacite.bst in a terminal window. Save the new copy in your local texmf/bibtex/bst folder.
In the new file, comment out (or delete) lines 753-775.
I won't quote the whole code here, but the relevant function in the .bst file begins:
FUNCTION {check.add.initials.aut}
{ %
% Comment out all of the code between the opening brace (above)
% and the final closing brace (below)
%
}
So after you have commented out the code, you should have what is effectively a function that does nothing. (You can't delete the function itself without messing with more parts of the code.)
FUNCTION {check.add.initials.aut}
{
}
This removes the extra check for whether initials are needed; since the default citation is not to have them, they will not appear in any citation.
Here's a test document assuming the modified .bst file:
\documentclass{article}
\usepackage{filecontents}
\begin{filecontents}{\jobname.bib}
@article{kim2002,
Author = {Kim, J B and Sag, I A},
Journal = {Natural Language \& Linguistic Theory},
Pages = {339-412},
Volume = {20},
Year = {2002}}
@article{kim2001,
Author = {S Kim},
Journal = {Natural Language \& Linguistic Theory},
Pages = {67-107},
Title = {Chain Composition and Uniformity},
Volume = {19},
Year = {2001}}
@article{kim1989,
Author = {Y-J Kim and Richard Larson},
Journal = {Linguistic Inquiry},
Pages = {681-688},
Title = {Scope Interpretation and the Syntax of Psych-Verbs},
Volume = {20},
Year = {1989}}
\end{filecontents}
\usepackage{apacite}
\bibliographystyle{bpacite}
\begin{document}
\cite{kim2002,kim2001,kim1989}
\bibliography{\jobname}
\end{document} |
### heavylightdecomp's blog
By heavylightdecomp, history, 3 months ago,
Recently, I was solving a problem that involved disjoint ranges. And while stress testing my solution I found a simple trick to fairly and efficiently generate such ranges.
Let us first examine a set of $N = 3$ disjoint ranges: [{2,3}, {6, 7}, {4, 5}]. If we list their endpoints in order, we get a sequence of $2 \times N$ integers, where adjacent integers are paired together to form a range: [2,3, 4,5, 6,7]
It follows from this observation that you can generate N disjoint ranges by first generating $2 \times N$ unique integers (duplicates will affect the disjoint condition), and then "matching" adjacent integers together.
Here is some Python code:
click me
• +15
By heavylightdecomp, history, 6 months ago,
Hello Codeforces,
Recently I was solving some more problems and came across IPSC 2015 G, Generating Synergy. I think the problem and my alternative solution are pretty cool.
TLDR: Store decreasing maxdepth monostack in each segtree node, for query, binary search $log N$ monostacks and get maximum update.
Code
Abridged problem version: You are given a tree of $N$ nodes, rooted at node $1$. Initially, the value of all nodes is $1$. Handle $Q$ operations of 2 possible types:
• Update all nodes in subtree of node $a$ and depth $\le dep_a + l$ with value c. $dep_a$ (depth of node a) is defined to be number of nodes on the path from node $a$ to root node. So the root has depth $1$, the root's direct children have depth $2$...etc...
• Find the value of node $a$.
We first apply Euler Tour Technique. Then the original update operation is represented by updating all positions in $[tin_a, tout_a]$ with depth $\le dep_a + l$ (in other words, a maxdepth of $dep_a + l$), the query operation is to find value of position $tin_a$.
Then, let's try to incorporate segment tree (after all, this problem is basically Range Update Point Query). From now, we refer to nodes in the original tree as "positions" to avoid confusion with nodes in the segment tree. Each segment tree node will represent a range of positions, to do this we store in each node a resizable array (like std::vector).
For updates we "push" the update information to $O(log N)$ nodes.
For queries, we are tasked with finding the LRU (latest relevant update) of some position $a$. In other words, the latest update which changes the value of position $a$, i.e. has a maxdepth $>= dep_a$ and has been pushed to a segment tree node which contains $a$ (any update that has a chance of changing value of $a$ must have been pushed to a segtree node which contains position $a$.). There are ~$log N$ such nodes.
For each query you can independently find the LRU of each of the $log N$ nodes and then get the latest one. For each node, iterate through all updates pushed to that node and check their maxdepth. Then maximum time of the updates that satisfy the maxdepth constraint is the LRU for that node. But this approach is too slow :(
However!
When pushing an update to some segtree node, we know that all preexisting updates (in that node) with a $\le$ maxdepth will never be the LRU (it is trivial to prove this by contradiction). In other words, the only way a previous update can be the LRU is if it has a strictly greater maxdepth than our current update. So we can delete all updates that don't have a strictly greater maxdepth. Provided we push updates to the back of the node's std::vector, it is also trivial to prove by induction that the updates we can delete will form a suffix of this vector (the invariant is that $maxdepth_{v_1} > maxdepth_{v_2} > maxdepth_{v_3}...$). Another invariant is $time_{v_1} < time_{v_2} < time_{v_3}...$, because later updates are pushed to the back, after earlier updates.
Basically, in each segment tree node we will maintain a monotonic stack sorted by decreasing order of maxdepth and increasing order of time. We push $O(QlogN)$ updates to nodes. $O(logN)$ times for each update. Since each pushed update is deleted at most once, we delete $O(QlogN)$ updates. In total, time complexity of updates is $O(QlogN)$.
The final observation:
With respect to a query position $a$ and a segtree node $x$ (which represents a range that contains $a$), the LRU of $x$ is the last update in std::vector of $x$ such that the update has a maxdepth $\ge dep_{a}$. Any updates after that will not be relevant to $a$ (maxdepth too small), and any updates before are relevant, but not the latest.
Because the std::vector is already sorted by maxdepth, you can binary search it to find the node's LRU. You need to find the LRU of $log N$ nodes for each query, so each query is completed in $O(log Q) * O(log N)$ which is $O(log^{2}N)$.
Our time complexity is $O(QlogN) + O(Qlog^{2}N)$ which is $O(Nlog^{2}N)$.
P.S: Thank you for reading my 2nd Codeforces blog post. I'm still learning the style of solution blogs around here, so if I overexplained something or didn't explain something well enough, please tell me in the comments. Other feedback is also greatly appreciated. Have a nice day :)
Further reading: Official editorial, which uses 2D Segment Tree
• +7
By heavylightdecomp, history, 12 months ago,
Hello Codeforces,
Recently I was solving some problems and came across IOI 2012, Crayfish Scrivener. It is quite an interesting problem. I think my solution for 100 points is pretty cool so I decided to post it here. Hopefully it has not been mentioned before :)
For this solution we will be using 1-based indexing.
Firstly, I noticed that actually adding characters to the end of the string seems a little annoying to optimize. So I tried converting it to a more static problem. First we must notice that this dynamic string can be represented using a static char array. Also notice that for all subtasks, the number of queries is at most $1,000,000$ ($10^6$). Thus, the maximum size of our string is $10^6$. So if we have a char array of size $10^6$, initialized to some default value, and keep track of the actual "size" of the represented string in an integer, in order to "type a letter", we can just update the index ($size + 1$) with the desired letter (this is a point update) without affecting the correctness of our algorithm.
Actually, this reduction allows us to reframe the problem in a way that is very intuitive to solve. Observe that the $Undo$ operation just requires us to "copy" some arbitrary previous revision of the char array, and make a new revision from it. The $TypeLetter$ operation just requires us to do a make a new revision that does a point update on the current revision. The $GetLetter$ operation just requires us to get the value at a specified index, which is a point query. So we need a persistent data structure (to access any previous revision quickly) which can also do point updates and point queries quickly.
What data structure supports all this? Persistent segment trees! Thus, the solution is to build a persistent segment tree on the aforementioned char array.
Its space complexity is $O(QlogN)$ where $Q$ is the number of queries and $N$ is the size of our char array ($10^6$ in this case, the number of queries) which is $O(QlogQ)$. "Copying" a previous revision $O(1)$. Point updates and point queries are $O(logN)$, which is $O(logQ)$. Our final time complexity is $O(QlogQ)$. This is good enough to score 100 points.
Note: We do not use the full power of segment tree for this problem as all the char information is stored in the leaf nodes, which represent a single index.
Implementation note: Don't forget to convert 0-based indexing given in input to 1-based indexing, or implement your solution in 0-based indexing (idea stays the same, just a few minor changes).
PS: I hope you enjoyed reading through my first Codeforces blog. I spent a lot of time on it and if you have any suggestions or tips for improvement please post them in the comments. Though maybe this is not the most elegant way to solve the problem, this is the first idea I came up with and I feel like I understand it a lot more than the other solution which uses binary jumping (though I think it is pretty cool as well, you can view it here).
My implementation
• +45 |
I have no clue
1. Feb 2, 2005
Lovely
I have searched for this answer all day long.
Why is Vave multiplied by 2 to obtain Vf?
2. Feb 3, 2005
sharans
i feel Vave=(Vf+Vi)/2
since Vi=0,we have the result.
3. Feb 3, 2005
Gokul43201
Staff Emeritus
Going by the definitions, we have, for the average velocity :
$$v_{ave} = \frac{total~distance~traveled}{total~time~taken} = \frac {s}{t}$$
Now just substitute for s, from the equations of motion to get the required result. |
Print the standard deviation of Pandas series
PythonPandasServer Side ProgrammingProgramming
Beyond Basic Programming - Intermediate Python
Most Popular
36 Lectures 3 hours
Practical Machine Learning using Python
Best Seller
91 Lectures 23.5 hours
Practical Data Science using Python
22 Lectures 6 hours
In this program, we will find the standard deviation of a Pandas series. Standard deviation is a statistic that measures the dispersion of a dataset relative to its mean and is calculated as the square root of the variance.
Algorithm
Step 1: Define a Pandas series
Step 2: Calculate the standard deviation of the series using the std() function in the pandas library.
Step 3: Print the standard deviation.
Example Code
import pandas as pd
series = pd.Series([10,20,30,40,50])
print("Series: \n", series)
series_std = series.std()
print("Standard Deviation of the series: ",series.std())
Output
Series:
0 10
1 20
2 30
3 40
4 50
dtype: int64
Standard Deviation of the series: 15.811388300841896
Updated on 16-Mar-2021 10:48:04 |
[Next]: Homoclinic bifurcations
[Up]: Project descriptions
[Previous]: Dynamics of semiconductor lasers
[Contents] [Index]
Subsections
## Multiscale systems
Collaborator: K.R. Schneider , E.V. Shchetinina
#### Project 1: Exchange of stabilities in multiscale systems.
Cooperation with: V.F. Butuzov, N.N. Nefedov (Moscow State University, Russia)
Supported by: DFG: Cooperation Project Singulär gestörte Systeme und Stabilitätswechsel'' (Singularly perturbed systems and exchange of stability) of German and Russian scientists in the framework of the Memorandum of Understanding between DFG and RFFI
Description:
The problem of delayed exchange of stabilities for a scalar non-autonomous ordinary differential equation has been treated some time ago ([3]). Very recently, the same problem for the scalar singularly perturbed parabolic differential equation
(1)
has been solved successfully by means of the method of asymptotic lower and upper solutions [2]. This result provides a first proof for the existence of canard solutions for partial differential equations.
Under the assumption that the degenerate equation
g(u,x,t,0) =0
has exactly two roots and intersecting in some smooth curve with the representation t=tc (x), and that for all under consideration, we derive conditions on g and on u0 such that the initial-boundary value problem to (1)
has for sufficiently small a unique solution satisfying
that is, and are lower and upper bounds for the delay of exchange of stabilities.
The results of the research cooperation between WIAS and the Department of Mathematics of the Faculty of Physics of the Moscow State University in the field of exchange of stabilities in multiscale systems over the last five years have been represented as a report [1] which will appear as a research monography in Russia and in the USA.
References:
1. V.N. BUTUZOV, N.N. NEFEDOV, K.R. SCHNEIDER, Singularly perturbed problems in case of exchange of stabilities, WIAS Report no. 21 , 2002.
2. N.N. NEFEDOV, K.R. SCHNEIDER, Delayed exchange of stabilities in a class of singularly perturbed parabolic problems, WIAS Preprint no. 778 , 2002.
3. ,Delayed exchange of stabilities in singularly perturbed systems, WIAS Preprint no. 270 , 1997, Z. Angew. Math. Mech., 78 (1998), Suppl. 1, S199-S202.
#### Project 2: Delayed loss of stability in non-autonomous delay-differential equations
Cooperation with: B. Lani-Wayda (Justus-Liebig-Universität Giessen)
Description: Dynamical systems as mathematical models of real-life processes depend on several parameters which are assumed to be fixed within some time period. The influence of parameters on the behavior of a dynamical system is studied within the framework of bifurcation theory. This project aims to study the influence of some relevant system parameter changing slowly in time (for example, because of an aging process). For ordinary differential equations it is well known that a slowly changing parameter can lead to a special phenomenon known as delayed loss of stability [2]. Such phenomenon manifests in a jumping behavior of some state variables which can imply dramatic consequences (e.g., thermal explosion). The main goal of this project is to describe a similar effect for nonlinear differential-delay equations of the type
(2)
For this purpose, we study the linear inhomogeneous equation
(3)
assuming that the function a takes values in and changes slowly. It is well known that for constant a and h = 0, the zero solution of the linear equation (3) is stable for ,and unstable for . Contrary to the ODE case, the exponential rate of growth or decay is not directly given by a, but has to be estimated. We provide such estimates and derive a variation-of-constants formula for the case of nonconstant a and . This formula will be used to express solutions of (2) on successive time intervals Ii by solutions of the equation
with constants ci which are values of on Ii. We establish estimates that express the phenomenon of delayed loss of stability for differential-delay equations of type (2). As an example we treat the equation
Here, we study the initial value problem with the initial segment identically 1, and estimate the time until the solution is close enough to zero by a method that is not based on linearization. Figure 1 demonstrates this phenomenon numerically.
Complementary to the results on delayed loss of stability, which express similar behavior of delay equations and ordinary differential equations (ODEs), we exhibit a substantial difference between both types of equations. Namely, the additive term h(t) in the equation
inevitably has an influence on the development of all components'' of solutions (in terms of expansion into eigenfunctions of the homogeneous equation). Of course, for a linear constant coefficient system of ODEs, the perturbation h can be chosen such that it influences only specific components.
References:
1. B. LANI-WAYDA, K.R. SCHNEIDER, Delayed loss of stability and excitation of oscillations in nonautonomous differential equations with retarded argument, WIAS Preprint no. 744 , 2002.
2. N.N. NEFEDOV, K.R. SCHNEIDER, Delayed exchange of stabilities in singularly perturbed systems, WIAS Preprint no. 270 , 1997, Z. Angew. Math. Mech., 78 (1998), Suppl. 1, S199-S202.
#### Project 3: Integral manifolds of canard type for non-hyperbolic slow-fast systems
Cooperation with: V.A. Sobolev, E.A. Shchepakina (Samara State University, Russia)
Description: Slow-fast systems have been considered which can be transformed into the form
(4)
with , is a small positive parameter, a is a 2-dimensional vector function, and B(t) is the matrix
Note that the eigenvalues of the matrix B(t) are , that is, B(0) has purely imaginary eigenvalues, and therefore (4) is a non-hyperbolic slow-fast system.
The aim of this project is to establish an integral manifold of the form to reduce the order of system (4) under non-hyperbolicity conditions. In the hyperbolic case the existence of such integral manifolds has been known for a long time (see, e.g., [3]).
As a result we derived conditions on Y and Z guaranteeing the existence of a function such that (4) has an integral manifold of the form where h is uniformly bounded, ||h|| and ||a|| tend to zero as .Moreover, we derived an algorithm to determine the coefficients in the asymptotic representation of the functions and as
The established manifold is of canard type, that is, it is attracting for t<0 and repelling for t>0.
References:
1. K.R. SCHNEIDER, E.V. SHCHETININA, One-parametric families of canard cycles: Two explicitly solvable examples, to appear in: Z. Angew. Math. Mech.
2. E.V. SHCHETININA, Existence and asymptotic expansion of invariant manifolds for non-hyperbolic slow-fast systems, WIAS Preprint, in preparation.
3. V.V. STRYGIN, V.A. SOBOLEV, Separation of Motions by the Integral Manifold Method (in Russian), Nauka, Moscow, 1988.
#### Project 4: Combustion wave of canard type
Cooperation with: V.A. Sobolev, E.A. Shchepakina (Samara State University, Russia)
Description:
We consider the problem of thermal explosion in case of an autocatalytic combustion reaction. The goal of the project is to establish the existence of traveling wave solutions of the system
(5)
where is the temperature, the depth of conversion of the gas mixture, is a small parameter (case of highly exothermic reaction). The existence of a traveling wave , to (5) is equivalent to the existence of a heteroclinic orbit of the system ()
(6)
connecting two equilibria of the reaction system.
Basing on the fact that the reaction system has canard solutions separating the slow combustion regime from the explosive one [2], we prove by applying the geometric theory of singularly perturbed differential equations the existence of a new type of traveling wave solutions, the so-called canard traveling waves.
References:
1. K.R. SCHNEIDER, V.A. SOBOLEV, Existence and approximation of slow integral manifolds in some degenerate cases, WIAS Preprint no. 782 , 2002.
2. V.A. SOBOLEV, E.A. SHCHEPAKINA, Duck trajectories in a problem of combustion theory, Differential Equations, 32 (1996), pp. 1177-1186.
3. K.R. SCHNEIDER, E.A. SHCHEPAKINA, V.A. SOBOLEV, A new type of traveling wave solutions, WIAS Preprint no. 694 , 2001, to appear in: Math. Methods Appl. Sci.
[Next]: Homoclinic bifurcations
[Up]: Project descriptions
[Previous]: Dynamics of semiconductor lasers
[Contents] [Index]
LaTeX typesetting by I. Bremer
5/16/2003 |
TBTK
Importing and exporting data
# External storage
While the classes described in the other Chapters allow data to be stored in RAM during execution, it is important to also be able to store data outside of program memory. This allows for data to be stored in files in between executions, to be exported to other programs, for external input to be read in, etc. TBTK therefore comes with two methods for writing data structures to file on a format that allows for them to later be read into the same data structures, as well as one method for reading parameter files.
The first method is in the form of a FileWriter and FileReader class, which allows for Properties and Models to be written into HDF5 files. The HDF5 file format (https://support.hdfgroup.org/HDF5/) is a file format specifically designed for scientific data and has wide support in many languages. Data written to file using the FileWriter can therefore easily be imported into for example MATLAB or python code for post processing. This is particularly true for Properties stored on the Ranges format (see the Properties chapter), since the data sections in the HDF5 files will preserve the Ranges format.
Many classes in TBTK can also be serialized, which mean that they are turned into strings. These strings can then be written to file or passed as arguments to the constructor for the corresponding class to recreate a copy of the original object. TBTK also contains a class called Resources, which allows for very general input and output of strings, including reading data immediately from the web. In combination these two techniques allows for very flexible export and import of data that essentially allows large parts of the current state of the program to be stored in permanent memory. The goal is to make almost every class serializable. This would essentially allow a program to be serialized in the middle of execution and restarted at a later time, or allow for truly distributed applications to communicate their current state across the Internet. However, this is a future vision not yet fully reached.
Finally, TBTK also contains a FileParser that can parse a structured parameter file and create a ParameterSet.
The HDF5 file format that is used for the FileReader and FileWriter essentially implements a UNIX like file system inside a file for structured data. It allows for arrays of data, together with meta data called attributes, to be stored in datasets inside the file that resembles files. When reading and writing data using the FileReader and FileWriter, it is therefore common to write several objects into the same HDF5-file. The first thing to know about the FileReader and FileWriter is therefore that the current file it is using is chosen by typing
and similar for the FileWriter. It is important to note here that the FileReader and FileWriter acts as global state machines. What this means is that whatever change that is made to them at runtime is reflected throught the code. If this command is executed in some part of the code, and then some other part of the code is reading a file, it will use the file "Filename.h5" as input. It is possible to check whether a particular file already exists by first setting the filename and the call
and similar for the FileWriter.
A second important thing to know about HDF5 is that, although it can write new datasets to an already existing file, it does not allow for data sets to be overwritten. If a program is meant to be run repeatedly, overwriting the previous data in the file each time it is rerun, it is therefore required to first delete the previously generated file. This can be done in code by after having set the filename type
FileWriter::clear();
A similar call also exists for the FileReader, but it may seem harder to find a logical reason for calling it on the FileReader.
A Model or Property cna be written to file as follows
FileWriter::writeDataType(dataType);
where DataType should be replaced by one of the DataTypes listed below, and data should be an object of this data type.
Supported DataTypes
Model
EigenValues
WaveFunction
DOS
Density
Magnetization
LDOS
SpinPolarizedLDOS
ParameterSet
By default the FileWriter writes the data to a dataset with the same name as the DataType listed above. However, sometimes it is useful to specify a custom name, especially if multiple data structures of the same type are going to be written to the same file. It is therefore possible to pass a second parameter to the write function that will be used as name for the dataset
FileWriter::writeDataType(data, "CustomName");
The interface for reading data is completely analogous to that for writing and takes the form
where DataType once again is a placeholder for one of the actual data type names listed in the table above.
# Serializable and Resource
Serialization is a powerful technique whereby an object is able to convert itself into a string. If some classes implements serialization, it is simple to write new serializable classes that consists of such classes since the new class essentially can serialize itself simply by stringing together the serializations of its components. TBTK is designed to allow for different serialization modes since some types of serialization may be simpler or more readable in case they are not meant to be imported back into TBTK, while others might be more efficient in terms of execution time and memory requirements. However, currently only serialization into JSON is implemented to any significant extent. We will therefore only describe this mode here.
If a class is serializable, which means it either inherits from the Serializable class, or is pseudo-serializable by implementing the serialize() function, it is possible to create a serialization of a corresponding object as follows
string serialization = serializeabelObject.serialize(Serializable::Mode::JSON);
Currently the Model and all Properties can be serialized like this. For clarity considering the Model class, a Model can be recreated from a serialization string as follows
Model model(serialization, Serializable::Mode::JSON);
The notation for recreating other types of objects is the same, with Model replaced by the class name of the object of interest.
Having a way to create serialization strings and to recreate objects from such strings, it is useful to also be able to simply write and read such strong to and from file. For this TBTK provides a class called Resource. The interface for writing a string to file using a resource is
Resource resource;
resource.setData(someString);
resource.write("Filename");
Similarly a string can be read from file using
const string &someString = resource.getData();
The Resource is, however, more powerful than demonstrated so far since it in fact implements an interface for the cURL library (https://curl.haxx.se). This means that it for example is possible to read input from a URL instead of from file. For example, a simple two level system is available at http://www.second-quantization.com/ExampleModel.json that can be used to construct a Model as follows
Model model(resource.getData(), Serializable::Mode::JSON);
model.construct();
# FileParser and ParameterSet
While the main purpose of the other two methods is to provide methods for importing and exporting data that faithfully preserve the data structures that are used internally by TBTK, it is also often useful to read other information from files. In particular, it is useful to be able to pass parameter values to a program through a file, rather than to explicitly type the parameters into the code. Especially since the later option requires the program to be recompiled every time a parameter is updated.
For this TBTK provides a FileParser and a ParameterSet. In particular, together they allow for files formated as follows to be read
int sizeX = 50
int sizeY = 50
complex phaseFactor = (1, 0)
bool useGPU = true
string filename = Model.json
First the file can be converted into a ParameterSet as follows
ParameterSet parameterSet = FileParser::parseParameterSet("Filename");
Once the ParameterSet is created, the variables can be accessed
int sizeX = parameterSet.getInt("sizeX");
int sizeY = parameterSet.getInt("sizeY"); |
•
•
Egypt
#### Abstract
Building quantum devices using fixed operators is a must to simplify hardware construction of a quantum computer. Quantum search engine is not an exception. In this paper, a fixed phase quantum search algorithm that searches for $M$ matches in an unstructured list of size $N$ will be proposed. Fixing phase shifts to $1.91684\pi$ in the standard amplitude amplification will make the minimum probability of success is 99.58% in O(sqrt{N/M}) for 0
N/A
COinS |
# Binomial coefficients identity (sum of the powers of the natural numbers)
I've found exercise with binomial coefficients in Kostrikin's book.
Proof that
$\sum_{i=1}^n{{r+1}\choose{i}}\left(1^i+2^i+\dots+n^i\right)=(n+1)^{r+1}-(n+1)$
I was trying to check that for small integers like $r=2$ and $n=1,2$ but i think that there is something wrong. My results didn't match.
For $r=2$ and $n=1$ we have ${{3}\choose{1}}\left(1^1\right)=3\not=2^3-2=6$. For $r=2$ and $n=2$, ${{3}\choose{1}}\left(1^1\right)+{{3}\choose{2}}\left(1^2+2^2\right)=18\not=3^3-3=24$. I can't find this identity anywhere. Is this identity known? How can i proof that without mathematical induction?
• This doesn't hold when $(n,r)=(1,2)$, as you say. You may be missing some conditions about $n,r$, or this is completely wrong. If the latter, we can't prove it. – mathlove Dec 22 '13 at 9:53
• Ultimately the corrected version will come from looking at $\sum\left((i+1)^{r+1}-i^{r+1}\right)$ in two ways: (i) by noting the mass cancellations (telescoping) and (ii) expanding the first term via the Binomial Theorem before summing. – André Nicolas Dec 22 '13 at 10:13
As André Nicolas correctly pointed out, the correct version of this identity can be obtained by working backwards from a much simpler one:
$$\sum_{j=1}^n (j+1)^{r+1} - j^{r+1} = (n+1)^{r+1} - 1$$
The sum on the left-hand side has telescopic property; each term cancels out part of the previous one, so in the end, only the very first and very last parts remain.
Now, if we apply binomial theorem to the term $(j+1)^{r+1}$ inside the sum and rearrange the terms a little (changing the order of summation in the last step), we get
$$\begin{eqnarray} (n+1)^{r+1} - 1 & = & \sum_{j=1}^n \left(\left(\sum_{i=0}^{r+1} \binom{r+1}{i} j^i\right) - j^{r+1}\right)\\ & = & \sum_{j=1}^n \left(1+\sum_{i=1}^{r} \binom{r+1}{i} j^i\right) \\ & = & n + \sum_{j=1}^n \sum_{i=1}^{r} \binom{r+1}{i} j^i \\ & = & n + \sum_{i=1}^r \binom{r+1}{i}\sum_{j=1}^n j^i \end{eqnarray}$$
Comparing what we started with and the final sum, it's almost what you wanted to prove -- except for a tiny mistake: the upper bound in the outer summation should have been $r$ instead of $n$:
$$\sum_{i=1}^r \binom{r+1}{i}\sum_{j=1}^n j^i = (n+1)^{r+1} - (n+1)$$
Oh, and as a bonus, we've just proved it :-) |
table.integer64 {bit64} R Documentation
## Cross Tabulation and Table Creation for integer64
### Description
table.integer64 uses the cross-classifying integer64 vectors to build a contingency table of the counts at each combination of vector values.
### Usage
table.integer64(...
, return = c("table","data.frame","list")
, order = c("values","counts")
, nunique = NULL
, method = NULL
, dnn = list.names(...), deparse.level = 1
)
### Arguments
... one or more objects which can be interpreted as factors (including character strings), or a list (or data frame) whose components can be so interpreted. (For as.table and as.data.frame, arguments passed to specific methods.) nunique NULL or the number of unique values of table (including NA). Providing nunique can speed-up matching when table has no cache. Note that a wrong nunique can cause undefined behaviour up to a crash. order By default results are created sorted by "values", or by "counts" method NULL for automatic method selection or a suitable low-level method, see details return choose the return format, see details dnn the names to be given to the dimensions in the result (the dimnames names). deparse.level controls how the default dnn is constructed. See ‘Details’.
### Details
This function automatically chooses from several low-level functions considering the size of x and the availability of a cache. Suitable methods are hashmaptab (simultaneously creating and using a hashmap) , hashtab (first creating a hashmap then using it) , sortordertab (fast ordering) and ordertab (memory saving ordering).
If the argument dnn is not supplied, the internal function list.names is called to compute the ‘dimname names’. If the arguments in ... are named, those names are used. For the remaining arguments, deparse.level = 0 gives an empty name, deparse.level = 1 uses the supplied argument if it is a symbol, and deparse.level = 2 will deparse the argument.
Arguments exclude, useNA, are not supported, i.e. NAs are always tabulated, and, different from table they are sorted first if order="values".
### Value
By default (with return="table") table returns a contingency table, an object of class "table", an array of integer values. Note that unlike S the result is always an array, a 1D array if one factor is given. Note also that for multidimensional arrays this is a dense return structure which can dramatically increase RAM requirements (for large arrays with high mutual information, i.e. many possible input combinations of which only few occur) and that table is limited to 2^31 possible combinations (e.g. two input vectors with 46340 unique values only). Finally note that the tabulated values or value-combinations are represented as dimnames and that the implied conversion of values to strings can cause severe performance problems since each string needs to be integrated into R's global string cache.
You can use the other return= options to cope with these problems, the potential combination limit is increased from 2^31 to 2^63 with these options, RAM is only rewquired for observed combinations and string conversion is avoided.
With return="data.frame" you get a dense representation as a data.frame (like that resulting from as.data.frame(table(...))) where only observed combinations are listed (each as a data.frame row) with the corresponding frequency counts (the latter as component named by responseName). This is the inverse of xtabs..
With return="list" you also get a dense representation as a simple list with components
values a integer64 vector of the technically tabulated values, for 1D this is the tabulated values themselves, for kD these are the values representing the potential combinations of input values counts the frequency counts dims only for kD: a list with the vectors of the unique values of the input dimensions
### Note
Note that by using as.integer64.factor we can also input factors into table.integer64 – only the levels get lost.
Note that because of the existence of as.factor.integer64 the standard table function – within its limits – can also be used for integer64, and especially for combining integer64 input with other data types.
table for more info on the standard version coping with Base R's data types, tabulate which can faster tabulate integers with a limited range [1L .. nL not too big], unique.integer64 for the unique values without counting them and unipos.integer64 for the positions of the unique values.
### Examples
message("pure integer64 examples")
x <- as.integer64(sample(c(rep(NA, 9), 1:9), 32, TRUE))
y <- as.integer64(sample(c(rep(NA, 9), 1:9), 32, TRUE))
z <- sample(c(rep(NA, 9), letters), 32, TRUE)
table.integer64(x)
table.integer64(x, order="counts")
table.integer64(x, y)
table.integer64(x, y, return="data.frame")
message("via as.integer64.factor we can use 'table.integer64' also for factors")
table.integer64(x, as.integer64(as.factor(z)))
message("via as.factor.integer64 we can also use 'table' for integer64")
table(x)
table(x, exclude=NULL)
table(x, z, exclude=NULL)
[Package bit64 version 4.0.5 Index] |
## Representing one terabit of information
I picked up my copy of Roger Penrose‘s Cycles of Time: An Extraordinary New View of the Universeand reread his opening chapter on the nature of entropy (if you struggle with this concept as a student then I recommend the book for this part alone – his exposition is brilliant).
My next thought was to think if I could model his comments about a layer of red paint sitting on top of blue paint – namely could I just write a program that showed how random molecular movements would eventually turn the whole thing purple. Seemed like a nice little programming project to while a way a few hours with – and the colours would be lovely too.
Following Penrose’s outline we would look at a ‘box’ (as modelled below) containing 1000 molecules. The box would only be purple if 499, 500 or 501 molecules were red (or blue), otherwise it would be shaded blue or red.
And we would have 1000 of these boxes along each axis – in other words $10^9$ boxes and $10^{12}$ molecules – with $5 \times 10^8$ boxes starting as blue (or red).
Then on each iteration we could move, say $10^6$ molecules and see what happens.
But while the maths of this is simple, the storage problem is not – even if we just had a bit per molecule it is one terabit of data to store and page in and out on every iteration.
I cannot see how compression would help – initially the representation would be highly compressible as all data would be a long series of 1s followed by a long series of 0s. But as we drove through the iterations and the entropy increased that would break down – that is the whole point, after all.
I could go for a much smaller simulation but that misses demonstrating Penrose’s point – that even with the highly constrained categorisation of what constitutes purple, the mixture turns purple and stays that way.
So, algorithm gurus – tell me how to solve this one?
Update: Redrew the box to reflect the rules of geometry!
## Similarity, difference and compression
I am in York this week, being a student and preparing for the literature review seminar I am due to give on Friday – the first staging post on the PhD route, at which I have to persuade the department I have been serious about reading around my subject.
Today I went to a departmental seminar, presented by Professor Ulrike Hahne of Birkbeck College (and latterly of Cardiff University). She spoke on the nature of “similarity” – as is the nature of these things it was a quick rattle through a complex subject and if the summary that follows is inaccurate, then I am to blame and not Professor Hahne.
Professor Hahne is a psychologist but she has worked with computer scientists and so her seminar did cut into computer science issues. She began by stating that it was fair to say that all things are equally the same (or different) – in the sense that one can find an infinite number of things by which two things can be categorised in the same way (object A is weighs less that 1kg, object B weighs less than 1kg, they both weigh less than 2kgs and so on). I am not sure I accept this argument in its entirity – in what way is an object different from itself? But that’s a side issue, because her real point was that similarity and difference is a product of human cognition, which I can broadly accept.
So how do we measure similarity and difference? Well the “simplest” way is to measure the “distance” between two stimuli in the standard geometric way – this is how we measure the difference between colours in a colour space (about which more later) ie., the root of the sum of the squares of the distances. This concept has even been developed into the “universal law of generalisation”. This idea has achieved much but has major deficiencies.
Professor Hahne outlined some of the alternatives before describing her interest (and defence of) the idea that the key to difference was the number of mental transformations required to change one thing from another – for instance, how different is a square from a triangle? Two transformations are required, first to think of the triangle and then to replace the square with the triangle and so on.
In a more sophisticated way, the issue is the Kolmogorov complexity of the transformation. The shorter the program we can write to make the transformation, the more similar the objects are.
This, it strikes me, has an important application in computer science, or it least it could have. To go back to the colour space issue again – when I wrote the Perl module Image::Pngslimmer I had to write a lot of code that computed geometrical distances between colour points – a task that Perl is very poor at, maths is slow there. This was to implement the so-called “median cut” algorithm (pleased to say that the Wikipedia article on the median cut algorithm cites my code as an example, and it wasn’t even me who edited it to that, at least as far as I can remember!) where colours are quantised to those at the centre of “median cut boxes” in the colour space. Perhaps there is a much simpler way to make this transformation and so more quickly compress the PNG?
I asked Professor Hahne about this and she confirmed that her collaborator Professor Nick Chater of Warwick University is interested in this very question. When I have got this week out the way I may have a look at his published papers and see if there is anything interesting there.
## More on Huffman encoding
As well as the basic form of Huffman coding, Huffman’s algorithm can also be used to encode streams to deliver compression as well as coding (at a cost of codes being not immediately decipherable).
Once again, with thanks to Information and Coding Theory, here’s a brief explanation.
With Huffman coding we can directly code a binary symbol stream $S = \{s_1, s_2\}$ where $s_1 = 0, s_2 = 1$.
But with encoding we can produce a code for $S^n$ where $n$ is arbitrary, e.g. $S^3 = \{s_1s_1s_1, s_1s_1s_2, s_1s_2s_1, s_1s_2s_2, s_2s_1s_1, s_2s_1s_2, s_2s_2s_1, s_2s_2s_2 \}$
So let us assume (e.g. we are transmitting a very dark image via our fax machine) that the probability of $s_1 = \frac{2}{3}$ and for $s_2 = \frac{1}{3}$
Then we have the following probability distributions:
$S$: $\frac{8}{27} , \frac{4}{27} , \frac{4}{27} , \frac{2}{27} , \frac{4}{27} , \frac{2}{27} , \frac{2}{27} , \frac{1}{27}$
$S^{\prime}$: $\frac{8}{27},$ $\frac{4}{27} ,$ $\frac{2}{27} , \frac{4}{27} , \frac{2}{27} , \frac{2}{27} , \frac{3}{27}$
$S^{\prime\prime}$: $\frac{8}{27},$ $\frac{4}{27},$ $\frac{2}{27},\frac{4}{27},\frac{3}{27},\frac{4}{27}$
$S^{\prime\prime\prime}$: $\frac{8}{27},\frac{4}{27},\frac{4}{27},\frac{4}{27},\frac{5}{27}$
$S^{\prime\prime\prime\prime}$: $\frac{8}{27},\frac{4}{27},\frac{5}{27},\frac{8}{27}$
$S^{\prime\prime\prime\prime\prime}$: $\frac{8}{27},\frac{8}{27},\frac{9}{27}$
$S^{\prime\prime\prime\prime\prime\prime}$: $\frac{9}{27},\frac{16}{27}$
$S^{\prime\prime\prime\prime\prime\prime\prime}$: $1$
We can now compute the average word length in the Huffman code – by simply adding the probabilities of the newly created ‘joint’ symbols:
$= \frac{3}{27} + \frac{4}{27} + \frac{5}{27} + \frac{8}{27} + \frac{9}{27} + \frac{16}{27} + 1 = \frac{72}{27}$
Which obviously looks like the opposite of compression! But, of course we are encoding three symbols, so the actual word length per symbol becomes $\frac{72}{81} = 0.88888888888....$, in other words somewhat less than 1.
## A blast from the past: Huffman coding
Britain used to lead the world in the deployment of a cutting edge technology: the fax machine.
Back when I graduated from university in the last 1980s fax machines were the technology every business had to have and Britain had more of them per head of population than anywhere else.
Today I work in an office where we have decided having one is no longer necessary.
Why all this? Well, one thing I remember from the era of the fax machine is the frequent reference to “Huffman coding“. Indeed, back in the days when free software graphics libraries were in short supply, I investigated whether I could convert faxes into over graphics formats and kept coming across the term.
Now, thanks to Information and Coding Theory, I can fully appreciate the beauty of David A. Huffman‘s coding scheme.
Here’s my attempt to explain it…
We have a symbol sequence $S = s_1, s_2, ... ,s_{n - 1}, s_n$, these have a probability of occurring of $p_1, p_2, ... , p_n$, where $\sum\limits_{i = 1}^n p_i = 1$. We rearrange the symbols so that $p_1 > p_2 > ... p_n$
Now take the tow least probable symbols (or if there are more than two, any two of the least probable symbols) and connote them with one of the symbols picked to produce a new symbol sequence $S^\prime$ with $(n - 1)$ members e.g. : $S^\prime = s_1, s_2, ... s_{n - 1}$ with probabilities $p_1, p_2, ... (p_{n - 1} + p_n)$.
This process can then be repeated until we have simply one symbol with probability 1.
To encode this we can now ‘go backwards’ to produce an optimal code.
We start by assigning the empty symbol $\varepsilon$, then passing back up the tree, expanding the combined symbols and adding a ‘1’ to the ‘left’ hand symbol and a ‘0’ to the right (or vice versa): $\varepsilon$ 1 = 1, $\varepsilon$ 0 = 0.
Here’s a worked example: we have the (binary) symbols $S$: 0, 1, 10, 11, which have the probabilities of 0.5, 0.25, 0.125 and 0.125.
So $S^\prime$ = 0, 1, 10 with probabilities 0.5, 0.25, 0.25, $S^{\prime\prime}$ = 0, 1 with probabilities 0.5, 0.5 and $S^{\prime\prime\prime}$ = 0 with probability 1.
Now, to encode: $C^{\prime\prime\prime}: 0 = \varepsilon$
$C^{\prime\prime}$, 0 = 1, 1 = 0
$C^\prime$ 0 = 1, 1 = 01, 10 = 00
$C$ 0 = 1, 1 = 01, 10 = 001, 11 = 000
This code has an average length of $\frac{1}{2} + \frac{2}{4} + \frac{3}{8} + \frac{3}{8} = \frac{7}{4}$
(In fact pure Huffman coding is not generally used – as other forms e.g. adaptive Huffman, offer better performance, though the underlying principles are the same.) |
# Topology on $\mathbb R^2$ that is not a product topology
Find an example of a topology on $$\mathbb R^2$$ that is not a product topology.
I feel like an open set in $$\mathbb R^2$$ with any topology can be written as a union of open balls so we can arrange ourselves to write it as a product $$U \times V$$.
But maybe another idea that I thought of is the co-finite topology on $$\Bbb R^2$$. Is it correct ? I think yes because an open set in the co-finite topology on $$\Bbb R^2$$ can be written for example as $$\Bbb R^2 \backslash \{(x,y)\}$$, i.e. the plane without a point and I feel like this cannot be written as a union of boxes (squares).
• Any set is a union of singletons , hence a union of rectangles. Oct 19, 2021 at 8:04
• Oct 19, 2021 at 8:06
• @KaviRamaMurthy In fact, you're right. Maybe the co-countable one ? Oct 19, 2021 at 8:06
• The slotted plane and the cross topology are well known topologies on $\Bbb R^2$ that are not product topologies. Also the topology induced by the river metric E.g. Oct 19, 2021 at 9:32
We cannot work without a definition of the statement we are trying to prove. I say that a (non-trivial) product topology on $$X$$ is a topology $$\tau$$ such that there is an indexed family $$\{X_i\}_{i\in I}$$ of non-one-point spaces such that $$\lvert I\rvert\ge2$$ and $$\prod_{j\in I}X_j$$ is homeomorphic to $$(X,\tau)$$.
Now, your claim that the cofinite topology on $$\Bbb R^2$$ is not a product topology is spot-on: the cofinite topology on an infinite set is never a product topology.
In point of fact, let $$\prod_{j\in I}X_j=X$$ as per the definition of $$X$$ having a non-trivial product topology. Since $$\lvert X\rvert\ne \varnothing$$ consider some $$h\in X$$. Each map $$\iota_j:X_j\to X$$, $$\iota_i(y)=\begin{cases}y&\text{if }i=j\\ h_i&\text{if }i\ne j\end{cases}$$ is a topological embedding (id est, a homeomorphism onto its image). Since subspaces of spaces with the cofinite topology have the cofinite topology, each $$X_j$$ must have the cofinite topology. Again, pick some $$h\in X$$. For $$k\in I$$ call $$U^{(k)}$$ the set $$\{x\in X\,:\, x_k\ne h_k\}$$. Notice that $$U^{(k)}$$ is open because $$U^{(k)}=(X_k\setminus\{h_k\})\times\prod\limits_{j\in I\setminus\{k\}}X_j$$. Notice that $$U^{(k)}\ne\varnothing$$ because $$\lvert X_k\rvert\ge2$$, and that $$X\setminus U^{(k)}=\{h_k\}\times\prod\limits_{j\in I\setminus\{k\}}X_j$$. I claim that, if $$X$$ is infinite, then there is some $$k\in I$$ such that $$\prod_{j\in I\setminus\{k\}}X_j$$ is infinite. Two cases:
• if one of the $$X_j$$-s, namely $$X_i$$, is infinite, then consider any $$k\ne i$$ and you'll have $$\lvert X_i\rvert\le\left\lvert\prod_{j\in I\setminus\{k\}}X_j\right\rvert$$.
• if all the $$X_j$$-s are finite, then $$I$$ must be infinite, and for any $$k$$ you have $$2^{\aleph_0}\le 2^{\lvert I\rvert}\le\left\lvert\prod_{j\in I\setminus\{k\}}X_j\right\rvert$$.
For such $$k$$, the set $$X\setminus U^{(k)}$$ is then an infinite closed set.
The topology on $$\Bbb R^2$$ defined by the jungle river metric or the French railroad metric are not product metrics as some thought will show.
The co-finite or co-countable topology will also work (but are not "nice" like metrics are).
The so-called slotted plane (a classic example of a Hausdorff non-regular topology) or the "cross"-topology (a set is open iff it contains a cross (parallel to both axes, of finite length) around each of its points) are refinements of the Euclidean topology that also (I'm pretty sure) cannot be written as a product of topologies on the components. Both have been used as examples in papers and books. |
F. Kuroni and the Punishment
time limit per test
2.5 seconds
memory limit per test
256 megabytes
input
standard input
output
standard output
Kuroni is very angry at the other setters for using him as a theme! As a punishment, he forced them to solve the following problem:
You have an array $a$ consisting of $n$ positive integers. An operation consists of choosing an element and either adding $1$ to it or subtracting $1$ from it, such that the element remains positive. We say the array is good if the greatest common divisor of all its elements is not $1$. Find the minimum number of operations needed to make the array good.
Unable to match Kuroni's intellect, the setters failed to solve the problem. Help them escape from Kuroni's punishment!
Input
The first line contains an integer $n$ ($2 \le n \le 2 \cdot 10^5$) — the number of elements in the array.
The second line contains $n$ integers $a_1, a_2, \dots, a_n$. ($1 \le a_i \le 10^{12}$) — the elements of the array.
Output
Print a single integer — the minimum number of operations required to make the array good.
Examples
Input
3
6 2 4
Output
0
Input
5
9 8 7 3 1
Output
4
Note
In the first example, the first array is already good, since the greatest common divisor of all the elements is $2$.
In the second example, we may apply the following operations:
1. Add $1$ to the second element, making it equal to $9$.
2. Subtract $1$ from the third element, making it equal to $6$.
3. Add $1$ to the fifth element, making it equal to $2$.
4. Add $1$ to the fifth element again, making it equal to $3$.
The greatest common divisor of all elements will then be equal to $3$, so the array will be good. It can be shown that no sequence of three or less operations can make the array good. |
# Where the heck does a girl find a nerd around here?
Where the heck does a girl find a nerd around here???
So i have a crazy liking for smart guys. the nerdy-er the better!! In fact if i could find a guy who also thinks a periodic table shower curtain is rad, loves to spend hours talking about the world we live in would be willing to whisper the laws of thermodynamics into my ear whilst we were partaking in "special" relations i would be one insanely happy girl...haha but seriously my point is i loove geeky guys, and i want to find one but where do they hang out???? All the guys i meet are jerks, i just wanna find a cute, sweet, funny, nerdy, geeky physicist. Is that too much to ask?? :P
So come on guys someone please tell me how do i meet the man of my scientific dreams?
Any tips as to where they hang out, how i can woo them???...maybe lil old New Zealand just doesnt have enough nerds for me :(
I suppose i should say also when i use the word "nerd" what i am actually meaning is intelligent, witty and intellectual, an odd or eccentric personality and you know into stuff like experiments, usb powered rockets and LAN parties haha.
Andre
Drop the word shibboleet occasionally?
Hahahaha Love it. So trying that one...im thinking it may just get me laid ;P
Drop the word shibboleet occasionally?
It doesnt seem shes looking for a secret code to get in the backdoor. :D
Homework Helper
You just need the right bait. Carry one of these around and they'll find you.
Staff Emeritus
2021 Award
Oooo....Is that a glass cursor?
Haha i have no idea what it is, but it looks amazing. Where does one find such an alluring item? :P
Mentor
haha i have no idea what it is,
<gasp>
No you have to tell me what it is, I hate feeling like im missing out on some important information. And clearly this could be the key to me find my mr nerdy....
No you have to tell me what it is, I hate feeling like im missing out on some important information. And clearly this could be the key to me find my mr nerdy....
http://en.wikipedia.org/wiki/Slide_rule
JaredJames
Any tips as to where they hang out, how i can woo them???
Well here's what I'm thinking:
Consider all the places that you currently hang out where they aren't....
....and then go somewhere else!
Seriously, the way some of these threads are you'd swear people only hang out in specific places. Until you speak to someone you just won't know. It could be anywhere.
Anyhow, based on what I've seen the nerdiest people are found in a university - to identify them it's best to look for those wearing a lab coat - even though they aren't in the lab that day.
humanino
How much advantage I got from using a slide rule in exams where electronic calculators were forbidden ! zoestorm if you like a nerd, offer them one
bp_psy
Anyhow, based on what I've seen the nerdiest people are found in a university - to identify them it's best to look for those wearing a lab coat - even though they aren't in the lab that day.
So there are no nerds in physics departments?
Well here's what I'm thinking:
Consider all the places that you currently hang out where they aren't....
....and then go somewhere else!
Seriously, the way some of these threads are you'd swear people only hang out in specific places. Until you speak to someone you just won't know. It could be anywhere.
Anyhow, based on what I've seen the nerdiest people are found in a university - to identify them it's best to look for those wearing a lab coat - even though they aren't in the lab that day.
Well i am at university, i study both physics and chemistry and i still cannot manage to find a nerdy guy. and "the way some of these threads go on" is fairly accurate I'd say.
I mean i cannot comment on what its like over there, but here in nz social groups are pretty predictable and seem to always hang out in certain places.
Anyways thanks for the advice, I dont know how i didnt think of that before, i mean its so simple...theyre not where i am, so go somewhere else...a theory even Einstein would have been impressed with.
Homework Helper
Gold Member
.... where does a girl find a nerd?? What the hell. I hope you're not looking at sporting events and at biker bars.
rootX
http://www.meetup.com/Dating-for-Nerds-Milwaukee/events/16052800/
While coming home, I noticed their ad in a subway train.
JaredJames
.... where does a girl find a nerd?? What the hell. I hope you're not looking at sporting events and at biker bars.
Exactly.
Well i am at university, i study both physics and chemistry and i still cannot manage to find a nerdy guy. and "the way some of these threads go on" is fairly accurate I'd say.
I mean i cannot comment on what its like over there, but here in nz social groups are pretty predictable and seem to always hang out in certain places.
Anyways thanks for the advice, I dont know how i didnt think of that before, i mean its so simple...theyre not where i am, so go somewhere else...a theory even Einstein would have been impressed with.
You said yourself you don't want a classic nerd, you just want someone with that intelligence but with social ability thrown in. So it's no good looking in classic nerd hotspots really, is it?
Also, if you know they hang out in certain places why not ask your peers where said places are? Someone around you is gonna give far better advice than us. Me telling you where to find a nerd in the UK is pretty much useless to you.
Hang on, you study physics and chemistry and can't find a nerd?
So there are no nerds in physics departments?
Don't believe that's what I said.
Simply pointing out that the best way to identify the nerds in my uni was to look for the ones constantly in their lab coats.
I don't know what it's like now, but when I was in college (back in the dark ages) there was a scarcity of women engineering students. So depending on how much things have evened out (or not) since then, you might try hanging around the engineering department, if your school has one.
JaredJames
I don't know what it's like now, but when I was in college (back in the dark ages) there was a scarcity of women engineering students. So depending on how much things have evened out (or not) since then, you might try hanging around the engineering department, if your school has one.
My year in engineering has about 50 males and 5 females.
Haha i have no idea what it is, but it looks amazing. Where does one find such an alluring item? :P
Other than my desk drawer, you might find one in a museum. (I feel really old now!)
Gold Member
Modern day slide-rules have nerdar, a detector used for finding nerds.
Staff Emeritus
2021 Award
Homework Helper
Here's another way to find nerds. Hold up a sign.
(Just don't do this on a busy street or it will defeat your purpose.)
http://xkcd.com/356/
Staff Emeritus
I would imagine that one could wait outside the class for SR or QM or . . . . and find a group of individuals of interest.
I used to spend time in the reading room of the physics department. That's where the latest issues of various journals would be kept.
Or one could visit the EE or other engineering departments.
Or library.
I remember the problem of the infinite mesh of 1 ohm resistors.
Gold Member
So i have a crazy liking for smart guys. the nerdy-er the better!!
I'm pretty darn nerdy! I like trivia contests, and building things, and I'm an engineer and my bookshelf is full of Brian Greene and Michio Kaku books!
In fact if i could find a guy who also thinks a periodic table shower curtain is rad, loves to spend hours talking about the world we live in would be willing to whisper the laws of thermodynamics into my ear whilst we were partaking in "special" relations i would be one insanely happy girl...
EDIT: Yes, that is a periodic table shower curtain in my condo.
I'm sure I could reduce your coefficient of friction with a little acoustic exchange, baby girl. I'm full of useless knowledge, and I can turn every single fact into innuendo.
haha but seriously my point is i loove geeky guys, and i want to find one but where do they hang out???? All the guys i meet are jerks, i just wanna find a cute, sweet, funny, nerdy, geeky physicist. Is that too much to ask?? :P
Cute? Some people think I'm cute... sometimes. I'm 5'11" and I can pull the ears off of a gundark.
So come on guys someone please tell me how do i meet the man of my scientific dreams?
Any tips as to where they hang out, how i can woo them???...maybe lil old New Zealand just doesnt have enough nerds for me :(
NEW ZEALAND?! Blech! Sorry, I'm in the States. Unless you want to have an internet relationship until you get up the courage to here, I don't know if I can help.
Last edited:
webbwbb
Try applying for some type of job with a local engineering firm. Even if you don't get the job you will have a good reason to be there and every person you meet will be a total nerd.
Staff Emeritus
Find one of these, and a nerd is nearby.
[PLAIN]http://www.jacmusic.com/html/articles/ericbarbour/tworks_fig3.jpg [Broken]
To initiate a conversation, causally mention the term "pentode".
[URL]http://www.zeitmann-tubes.com/images/Hickok%20539B/P0011231.JPG[/URL]
Last edited by a moderator:
JaredJames
Try applying for some type of job with a local engineering firm. Even if you don't get the job you will have a good reason to be there and every person you meet will be a total nerd.
Speaking from experience there?
In fact, my experience was the total opposite. Only a few nerds, well, one, well, two if you count me (I don't but others would disagree).
Staff Emeritus
2021 Award
Astronuc, pentode, schmentode. What one really needs is a Nixie Tube watch.
Only $395. Science Advisor Homework Helper I would imagine that one could wait outside the class for SR or QM or . . . . and find a group of individuals of interest. I used to spend time in the reading room of the physics department. That's where the latest issues of various journals would be kept. Or one could visit the EE or other engineering departments. Or library. I remember the problem of the infinite mesh of 1 ohm resistors. And, believe it or not, you can sometimes find nerds in a store aisle searching for razors or shaving cream. Did I ever tell you you're a hippy? In the last couple weeks, at least? :rofl: (But, he's a hippy nerd and that's an important distinction.) Gold Member Find one of these, and a nerd is nearby. To initiate a conversation, causally mention the term "pentode". I built a crude web-site to help other tube-nuts identify the various vacuum tubes by their physical characteristics. The paint on the envelope is not reliable, since the manufacturers often sub-contracted large contracts out to their competitors. Hosted by a good friend of mine who works at Berkeley. Old 12AX7's and their variants can sometimes be found in really good condition at ham-club swap meets. They are highly prized by people like myself who repair or restore old guitar amps. The trick is to know what you're looking at - for instance not to pay$\$ for a reproduction Telefunken when it can be ruled out as a repo worth maybe a couple of bucks.
http://www.eecs.berkeley.edu/~loarie/12ax7.html [Broken]
Last edited by a moderator:
Jimmy Snyder
Sorry, I don't know any nerds. But my wife knows one. I'll ask her. |
# What is an alternative to scratch damage to solve combat deadlocking?
Scratch damage is a game mechanic whereby any successful attack always does some minimal amount of damage. This is often used in subtractive combat systems, where the defense is subtracted directly from the damage done by an attacker. Therefore, the target will always take some minimal damage.
The downside of such a system, for me at least, is that it's a hack. It takes a simple formula like Damage = Attack - Defense and turns it into a (slightly) more complex one: Damage = max(Attack - Defense, 1).
I also feel that it detracts from a player's skill in developing their character/etc. No matter how many defense bonuses they get, every attack will do some small damage. So why get your defense so high, if it won't mean anything?
Furthermore, this now encourages the use of larger numbers for Hp and damage, so that the scratch damage is truly negligable. After all, if the minimum damage is 1, and you only have 10 Hp, that's still 10% of your health. Even with 20 Hp, that's 5%. And I would rather avoid using larger numbers like that unless it's absolutely necessary.
However, there is one very important upside of scratch damage: it solves the deadlock problem.
Deadlock happens when neither side is able to do damage to the other. If you invest all of your resources into defense, and few into attack, then your character may not take damage, but they won't be able to deal very much either. Thus, you could come upon an encounter where neither side will be able to inflict damage, so battle continues forever. This is especially if you don't have random mechanics like critical hits (which I also hate).
At least with scratch damage, someone will eventually win. It may only be the one with the most Hp or highest number of attacks, but the battle will end.
So I like having a combat system where there will always be an outcome. But I don't like having scratch damage. What are my alternatives?
Alternatives that don't involve rolling random numbers; I want combat to be 100% deterministic. If the same battle is fought, the exact same outcome must happen.
If you want specifics on the gameplay, think in terms of turn-based combat, where battle can be automated (you design your forces, then pit them against others).
You could implement a fatigue/stamina system. As more and more attacks are done the player becomes increasingly fatigued meaning they are unable to maintain such a good defense (that shield arm suddenly starts feeling really heavy after swinging the sword 50 times) as fatigue increases, defense falls. This means a player who has developed a good character won't take damage in quick encounters but longer drawn out combat will result in increasing damage preventing a deadlock.
This seems like a very open-ended question. Solutions (that aren't already mentioned) to prevent deadlocks:
• Allow for deadlocks as a viable end result. This is the least expected or typical solution. In an RTS game, for example, this might be an uneasy cease-fire, or tense but low-key conflict in a violent balance.
• A time limitation with ties allowable
• A time limitation wherein the first hit/score/point after the limit wins (sudden death/overtime)
• Random dangers in the system (a la bombs in Super Smash Bros). This changes the focus of the situation from offense to defense.
• Any external factor to imbalance the system (e.g. a third agent with high damage and low armor, an agreement by both sides to "duel" without all that armor)
• Defense as a % reduced (probably the simplest solution, as long as the defense is capped)
• Skills or tactics which alter the dynamics of gameplay (choices that don't result in damage, such as a Cloak/Invisibility skill)
• Means of doing much higher damage (classic critical hits, stealth bonuses, height bonuses, terrain bonuses, an ability that turns random portions of ground into lava)
• Means of damaging or lowering armor, or causing a damage directly targetted to high-defense characters (skill that inverses armor in the calculation, so agents with lower defense receive less damage)
• Limited use items or skills (e.g. bombs, powerful but draining abilities). Only useful if there are long-term goals beyond the deadlock with which to balance limited use
• End combat artificially (classic "Flee" option)
As variant, you may add wound accumulation. Surely, giant copper axe will not penetrate heavy steel armour, but it will do some blunt damage, and may even break bones. The same is true for bullets and vests.
Every time a character is hit, convert part of piercing and slashing damage into blunt damage, which accumulates. After certain threshold (that depends on endurance, for instance), the accumulated damage will interfere with the character's combat skills. Some options here:
• Armor could be less efficient when taking hits in the same body parts.
• Significant wounds: periodic damage, weakness due to pain.
• Severe wounds: short-term loss of consciousness (rendering the character very vulnerable).
• Extreme wounds: permanent stat loss (if character manages to survive).
• I was going to suggest giving every weapon bludgeoning damage that cannot be blocked by armor. Swords and daggers would have small bludgeoning capacity, but axes and maces would be very effective. – jmegaffin May 5 '13 at 21:01
Implement different types of damage, with armors that only protect against some types of damage. For example, kinetic damage, acid damage, fire damage, etc. No armor should protect against each type of damage.
Users could layer their armors to protect against all damage types, but they couldn't protect against all damage types at the same time. This implements some strategy into battles as well, where players have to switch damage types to get through the various layers of armor.
You could just not have a defence stat and just give bigger enemies bigger hp. I know you want to avoid gigantic numbers, but if you want a deterministic, turn based game where attacks aren't based on the players input directly (there is no chance of human error messing the attack up) the defence stat seems a little pointless as well.
If you are worried about the presentation aspect you could break HP down into Hearts or Health pips EG 100 hp = 1 heart. Hearts start to go black as a character loses HP, then disappear entirely. That way it's easier for the player to understand than 129301239103123hp, but you don't have to worry about balancing some magic equation.
If you are worried about realism, you could always have it animated to look like the attack's target successfully blocks or is only lightly scratched until the killing blow.
• "if you want a deterministic, turn based game where attacks aren't based on the players input directly [...] the defence stat seems a little pointless as well." Defense is not based on the concept of "chance-to-miss". It's more like damage reduction in D&D, not THAC0 (or whatever they're calling it these days). Defense means that a 40 damage attack can be reduced to 10 damage if you have 30 defense. I don't see how that can ever be "pointless. – Nicol Bolas May 4 '13 at 15:39
• I know the math is a little different, but really there isn't a huge amount of difference in terms of game-play between just giving a character a proportionate amount of health instead and having damage be reduced by a certain amount. Not if there must always be some way of dealing damage to a target. EDIT: That's assuming there aren't any special attacks that ignore defence, or other gameplay modifiers. – Lewis Wakeford May 4 '13 at 15:44
• Having a subtractive defense stat gives you something more than just Hp (in addition to all of the things you except, like special attacks or modifiers). It creates stratification between damage-over-time users and high-damage users. A character that attacks multiple times but with lower damage will run afoul of a high defense character, while a slower, high-damage character will be doing more damage-over-time. Mere hit-points will not create this stratification. – Nicol Bolas May 4 '13 at 15:52
Add a wear-down mechanic for defense. Make every attack slightly reduced the defense of the target.
Eventually even a weak attack will wear down the targets defense enough to inflict actual damage.
If you are OK with stepping out from the world of integers and willing to spice up subtraction system, you can use damage reduction algorithm from Warlords Battlecry III:
damage = attack
while DR > 0:
usedDR = DR
if DR > damage
usedDR = damage
damage = damage - usedDR * 0.5
DR = (DR - usedDR) / 2
HP = HP - damage
This is the function that behaves very similar to the above pseudocode:
damage(attack, DR) = attack * 2 ^ -(DR/attack)
When DR is smaller than attack (incoming damage), it behaves like attack - k*DR where k is 0.693 (ln(2) to be exact). When DR is close to or bigger then incoming damage than the damage is halved DR/attack times. For example for DR = 30 and attack = 10, damage would be 1.25 (attack halved 3 times).
It may look like more complicated and harder for human to evaluate but it is hack free and changes in both parameters are relevant. If attacker gains bonus attack power or defender gains or loses DR by even small amounts, the resulting damage will change.
Use floats.
Even if you present integer HP to the player, use float for hp and float for damage.
I am using fractional armor classes now, where armor of 1.0 is invincible and armor of 0.0 means "takes full damage". Damage is reduced as:
float hpReduction = hp - dmg*(1.f - armor) ;
This formula has the effect of allowing "double damage" by setting armor to -1.
I have classified damage into categories as well, see Starcraft's concussive/explosive damage types, or Eve's damage type system for an example.
So now, a little imp scratching at your .99 class armor will eventually Cherry Tap you to death, but the attacks will appear to do no damage to the player (he will remain at 1 hp as he goes from 1.15 hp to 1.1499 hp the next attack..)
A skill graph could put some damage-dealing skills as prerequisites for higher-level defensive skills to reduce a player's ability to make highly asymmetrical builds.
Critical hits can get some damage through as long as one's defense is not absurdly high. If you make them periodic rather than random, you still have deterministic combat.
After a number of failed hits, you might automatically reduce attack speed in favor of higher hit chance and damage. Your super-fast scratch character was doing ten attacks per second; now, after fifteen attacks doing no damage, it gets three attacks per second at +50% to hit and +200% damage. This is similar to critical hits, but it's faster for this sort of situation.
You could use percentage based damage reduction from armor, but to make it more interesting than another way to buff your maximum HP, you could have the reduction percentage be greater for weaker hits. For instance, 90% reduction for 1-20hp of damage, 60% for 20-30, and 30% for everything else.
Finally, don't worry about large numbers. They give you a much finer degree of control than small ones. Having a level 1 character start off with 100 hit points means you can have something that kills them off in seven hits but not five. It means you can have poison damage and other damage over time effects that aren't outrageously devastating (as well as some that are, if you like). If you don't like displaying large numbers, find a way not to display them.
You say you want there to always be a resolution, but does that have to be a victory?
Consider the approach used by Dominions--on turn 50 the attacker automatically routs. On turn 75 the defender automatically routs. (A rout doesn't automatically work--some units are immune to routing and even if an immobile unit routs it can't actually leave.) On turn 100 everything left is killed.
While I disagree with the exact way it functions (there are situations where it simply takes too long to kill the other side) the basic idea remains valid.
What I would suggest:
Look at some measure of the power of each side. (Hit points are an obvious starting point but be careful, Dominions has a problem in this regard where hit points "loss" that isn't meaningful is being counted--shapechanging, summons etc. resulting in armies routing due to casualties when they didn't even take any.) Keep track of the minimum value reached and note how many turns it has been since a new minimum has been set. If it goes too long without a new minimum being set you have a deadlock of some kind and the attacker should retreat.
An alternative I haven't seen yet is that as your armour value makes the attack negative you could add a little RNG for a true block:
const stratchDamage = 1;
var armour = 10;
var blockCount = 0;
function registerAttack (incomingAttack)
{
var incomingDamage = incomingAttack - armour;
// Nothing unusual, deal damage
if( incomingDamage > 0 )
{
dealDamage(incomingDamage);
}
// Armour cancels out attack, deal scratch damage
else if( incomingDamage == 0 )
{
dealDamage(scratchDamage);
}
// Armour over attack value, check if can block
else
{
var trueBlockChance = armour - incomingDamage;
// blockCount starts at 0, will always block first attack
if( trueBlockChance > blockCount )
{
// Can technically do nothing, or trigger block animations etc
block();
blockCount ++; // Increment block so they can't block forever
}
else
{
dealDamage(stratchDamage);
blockCount = 0;
}
}
}
The higher the deficit of damage after armour is taken into account, the more attacks the character can block before taking another bout of scratch damage.
This provides a little extra scale for defensive stats whilst not making them invulnerable, also allows attacks to get through this block mechanic should you want them to and naturally will prevent deadlocks.
Fights might take a long time if you have stacked stats in your defence over offence, but it'll get there in the end.
• Quote from OP: I want combat to be 100% deterministic. This does not allow for 100% deterministic combat. – Charanor Jul 20 '17 at 13:20
• Then the block can be turn based too, for each level of trueBlock over an attack, the enemy must attack that many times to achieve scratch damage. Same idea can apply. – Tom 'Blue' Piddock Jul 20 '17 at 13:24
• @Charanor - adjusted the answer to be 100% deterministic. – Tom 'Blue' Piddock Jul 20 '17 at 13:44
Lots of 3D fighting games avoid scratch damage. Examples are Tekken and Soul Calibur 2. They avoid this by making it difficult to have a perfect defense. Some attacks are simply too fast to react to. I think it's a pretty good solution.
I think reducing the effectiveness of Defense is not a good option. Depowering a player leads to a bad game experience. Why not go the other way around?
Why not powering up the Attack as time goes by. This makes scratch damage increase with time, reducing the incentive of blocking. In mid-late game, a character can kill another while wailing on him while he's defending.
Some pnp rpgs implement a "tension mechanism". Each turn tension increases by one. All the rolls have the added modifier of the tension value, pushing the battle towards an end.
Another idea, coming from Fighting games, is an attack that goes through defense. I don't know if your game is turn based or real-time, but this attack could also open the opponent to a combo or disable him or some of his abilities temporarily.
I believe the trick is not to undepower defense. Make other options just as good as defense, or reduce the times where defense is a good option. |
# Prove cl(int(cl(int(A))))=cl(int(A))
## Homework Statement
I am working on the proof that taking closure and interior of a set in a metric space can produce at most 7 sets. The piece I need is that $\bar{\mathring{A}} = \bar{\mathring{\bar{\mathring{A}}}}$.
## Homework Equations
Interior of A is the union of all open sets contained in A, aka the largest open set contained in A.
Closure of A is the intersection of all closed sets containing A, aka the smallest closed set containing A.
## The Attempt at a Solution
$\bar{\mathring{A}}$ is a closed set. $\mathring{\bar{\mathring{A}}}\subseteq \bar{\mathring{A}}$. Since $\bar{\mathring{\bar{\mathring{A}}}}$ is the smallest closed set containing $\mathring{\bar{\mathring{A}}}$ we have that $\bar{\mathring{\bar{\mathring{A}}}}\subseteq \bar{\mathring{A}}$.
I'm not sure how to get the inclusion $\bar{\mathring{A}}\subseteq \bar{\mathring{\bar{\mathring{A}}}}$
Last edited:
Dick
Homework Helper
int(A) is an open set contained in cl(int(A)). What does this tell you about its relation to int(cl(int(A)))?
Last edited:
int(A) is an open set contained in cl(int(A)). What does this tell you about it's relation to int(cl(int(A)))?
$\mathring{A}\subseteq \mathring{\bar{\mathring{A}}}$
Dick
$\mathring{A}\subseteq \mathring{\bar{\mathring{A}}}$ |
Dissertation/Thesis Title:
On Length Minimizing Curves with Distortion Thickness Bounded Below and Distortion Bounded Above |
# Revision history [back]
Ok, finally working like a charm
Using virt-manager GUI kept trace of my created VM and I'm able to start it again. Still interested in the way to do it via CLI tools though.
It was just a CentOS installation configuration misunderstanding, not related to libvirt (which had already done its job, see libvirt installation instructions about network, I don't remember where I got them but there are pretty straightforward for basic config).
I just missed the "Configure network" button on installation screens. ( Window size problem on my side ). CentOS doesn't auto-configurate its network devices anymore apparently. Or config manually after install, see for exemple : wiki.centos.org (slash) FAQ/CentOS6#head-b67e85d98f0e9f1b599358105c551632c6ff7c90 |
## Firejay5 Group Title Simplify. Show work and explain. 13. n^5 over n - 6 * n^2 - 6n over n^8 one year ago one year ago
1. Mertsj Group Title
Factor 3y^2 out of the numerator
2. Mertsj Group Title
Awesome!!
3. Mertsj Group Title
Remember to multiply by the reciprocal and factor n^2-6n by factoring out n
4. Firejay5 Group Title
cancel out n - 6
5. Mertsj Group Title
yes
6. Firejay5 Group Title
so now we have n^5 * n over n^8
7. Mertsj Group Title
no
8. Firejay5 Group Title
what do you mean
9. Mertsj Group Title
$\frac{n^5}{n-6}\times\frac{n(n-6)}{n^8}=\frac{1}{n^2}$
10. Firejay5 Group Title
@abb0t
11. abb0t Group Title
?
12. Firejay5 Group Title
I need someone to finish Mertsj's work
13. zepdrix Group Title
$\large \frac{n^5}{n-6}\times \frac{n^2-6n}{n^8}$ Hmm do you understand what he did so far? That was a bunch of steps all done at once, I can understand if it was a little confusing.
14. Firejay5 Group Title
I understand that, but I need the explanation of how that = 1 over n^2
15. zepdrix Group Title
$\large \frac{n^5}{n-6}\times \frac{\color{orangered}{n^2-6n}}{n^8}$ For this orange part, factor an n out of each term.$\large \frac{n^5}{n-6}\times \frac{\color{orangered}{n(n-6)}}{n^8}$
16. zepdrix Group Title
From here we'll simply multiply across, Put brackets around the n-6 on the bottom, so the multiplication is a little clearer.$\large \frac{n^5}{(n-6)}\times \frac{n(n-6)}{n^8}$ Then multiplying across gives us,$\large \frac{n^5\times n(n-6)}{n^8(n-6)} \qquad = \qquad \frac{n^6(n-6)}{n^8(n-6)}$Understand so far? :o
17. Firejay5 Group Title
yes
18. zepdrix Group Title
The n-6's can divide out,$\large \frac{n^6\cancel{(n-6)}}{n^8\cancel{(n-6)}} \qquad = \qquad \frac{n^6}{n^8}$
19. zepdrix Group Title
From here, we want to remember our rules for exponents. When we divide terms of the same base, we subtract the exponents.$\large \frac{n^6}{n^8} \qquad = \qquad n^{6-8}$
20. Firejay5 Group Title
then's it's 1 over n^2
21. zepdrix Group Title
$\large n^{-2}\qquad = \qquad \frac{1}{n^2}$Yes :)
22. Firejay5 Group Title
you can do that OR the biggest exponent is on the bottom <--- 8 - 6 = 2
23. Firejay5 Group Title
substract |
# Caption number increases by two for every inclusion
I am trying to typeset my captions in margin using marginpar and captionof. Here is what I get by the MWE. Note the wrong caption numbers.
The first caption is included using macro \fixedmarginpar and the second caption using \marginpar. The figure number gets incremented by 2 for every caption included using \fixedmarginpar.
Please read vspace-in-marginpar-adds-unwanted-vertical-space for the \fixedmarginpar macro used in MWE. This macro sets the caption in a box, and offsets the vertical alignment depending on the height of the box. The caption number is incremented by two because this macro sets a box with caption for calculating the height (\captionof is called for the first time) and then sets it in the marginpar (\captionof is called again). Whats a good workaround?
\documentclass{scrreprt}
\usepackage{calc}
\newcommand{\fixedmarginpar}[2][0pt]{%
\setbox0=\vtop{#2}\marginpar{\vspace{\dimexpr-\ht0+#1}#2}%
}
\begin{document}
\listoffigures
\fixedmarginpar{\captionof{figure}{A}}
\marginpar{\captionof{figure}{A}}
\end{document}
-
\newcommand{\fixedmarginpar}[2][0pt]{% |
# Does same stress always produce same strain?
I am reading about pure bending from Beer Johnston's mechanics of materials. There, a prismatic member with a plane of symmetry was subjected to equal and opposite couples M and M' acting in that plane. It was said about this member that since the bending moment M is same in any cross section, the member will bend uniformly. I could not understand this.
Is it because that stress will be same on each cross section and therefore since same stresses cause same deformation, it will bend uniformly?
• The same stress always causes the same strain in this situation, but note this is not always true. For example a temperature change can cause strain (thermal expansion, i.e. a change of length) without any stress, or if the object is constrained so it can't move, it will cause stress (which may be big enough to crack or break the object) with no strain. Also if the stress is large enough, there may be plastic deformation, where the object does not return to its initial shape when the loads are removed - as a simple demonstration, bend a paper clip! Mar 21, 2019 at 9:30
$$\frac{1}{\rho}= \frac{d\phi}{d_x}=\frac{d^2w}{d_x^2}= - \frac{M}{EI}$$
$$\therefore \ \text{for a constant moment we have a constant radius.}$$ |
# Peter Sanderson
• The test is pass/fail only. If you fail they identify areas you need to work on, but I think they are pretty general like “construction contracts” as your area of weakness. After taking a LARE prep course, I’m under the impression that the passing percentage is somewhere around 85%. Depending on the section, I know you need to get at least a ce…[Read more]
• It is annoying that it takes so long. The reason they do it is so they can evaluate “bad” questions. For example, for a particular question, if 45% of people choose answer “A” and 45% choose “D”, the question is generally deemed to be poorly worded and the question is thrown out. They then have to adjust the scores nationally. Also, as Tos…[Read more] |
Outlook: Madrigal Pharmaceuticals Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating.
Time series to forecast n: 11 Mar 2023 for (n+4 weeks)
Methodology : Modular Neural Network (News Feed Sentiment Analysis)
## Abstract
Madrigal Pharmaceuticals Inc. Common Stock prediction model is evaluated with Modular Neural Network (News Feed Sentiment Analysis) and Multiple Regression1,2,3,4 and it is concluded that the MDGL stock is predictable in the short/long term. According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Buy
## Key Points
1. Market Signals
2. Trust metric by Neural Network
3. Short/Long Term Stocks
## MDGL Target Price Prediction Modeling Methodology
We consider Madrigal Pharmaceuticals Inc. Common Stock Decision Process with Modular Neural Network (News Feed Sentiment Analysis) where A is the set of discrete actions of MDGL stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4
F(Multiple Regression)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Modular Neural Network (News Feed Sentiment Analysis)) X S(n):→ (n+4 weeks) $R=\left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right)$
n:Time series to forecast
p:Price signals of MDGL stock
j:Nash equilibria (Neural Network)
k:Dominated move
a:Best response for target price
For further technical information as per how our model work we invite you to visit the article below:
How do AC Investment Research machine learning (predictive) algorithms actually work?
## MDGL Stock Forecast (Buy or Sell) for (n+4 weeks)
Sample Set: Neural Network
Stock/Index: MDGL Madrigal Pharmaceuticals Inc. Common Stock
Time series to forecast n: 11 Mar 2023 for (n+4 weeks)
According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Buy
X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%
1. If any instrument in the pool does not meet the conditions in either paragraph B4.1.23 or paragraph B4.1.24, the condition in paragraph B4.1.21(b) is not met. In performing this assessment, a detailed instrument-byinstrument analysis of the pool may not be necessary. However, an entity must use judgement and perform sufficient analysis to determine whether the instruments in the pool meet the conditions in paragraphs B4.1.23–B4.1.24. (See also paragraph B4.1.18 for guidance on contractual cash flow characteristics that have only a de minimis effect.)
2. If any instrument in the pool does not meet the conditions in either paragraph B4.1.23 or paragraph B4.1.24, the condition in paragraph B4.1.21(b) is not met. In performing this assessment, a detailed instrument-byinstrument analysis of the pool may not be necessary. However, an entity must use judgement and perform sufficient analysis to determine whether the instruments in the pool meet the conditions in paragraphs B4.1.23–B4.1.24. (See also paragraph B4.1.18 for guidance on contractual cash flow characteristics that have only a de minimis effect.)
3. An entity is not required to incorporate forecasts of future conditions over the entire expected life of a financial instrument. The degree of judgement that is required to estimate expected credit losses depends on the availability of detailed information. As the forecast horizon increases, the availability of detailed information decreases and the degree of judgement required to estimate expected credit losses increases. The estimate of expected credit losses does not require a detailed estimate for periods that are far in the future—for such periods, an entity may extrapolate projections from available, detailed information.
4. An entity may retain the right to a part of the interest payments on transferred assets as compensation for servicing those assets. The part of the interest payments that the entity would give up upon termination or transfer of the servicing contract is allocated to the servicing asset or servicing liability. The part of the interest payments that the entity would not give up is an interest-only strip receivable. For example, if the entity would not give up any interest upon termination or transfer of the servicing contract, the entire interest spread is an interest-only strip receivable. For the purposes of applying paragraph 3.2.13, the fair values of the servicing asset and interest-only strip receivable are used to allocate the carrying amount of the receivable between the part of the asset that is derecognised and the part that continues to be recognised. If there is no servicing fee specified or the fee to be received is not expected to compensate the entity adequately for performing the servicing, a liability for the servicing obligation is recognised at fair value.
*International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS.
## Conclusions
Madrigal Pharmaceuticals Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. Madrigal Pharmaceuticals Inc. Common Stock prediction model is evaluated with Modular Neural Network (News Feed Sentiment Analysis) and Multiple Regression1,2,3,4 and it is concluded that the MDGL stock is predictable in the short/long term. According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Buy
### MDGL Madrigal Pharmaceuticals Inc. Common Stock Financial Analysis*
Rating Short-Term Long-Term Senior
Outlook*Ba1Ba1
Income StatementCaa2Baa2
Balance SheetBa3Ba1
Leverage RatiosB2Baa2
Cash FlowCaa2Caa2
Rates of Return and ProfitabilityBaa2B2
*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does neural network examine financial reports and understand financial state of the company?
### Prediction Confidence Score
Trust metric by Neural Network: 91 out of 100 with 513 signals.
## References
1. Burkov A. 2019. The Hundred-Page Machine Learning Book. Quebec City, Can.: Andriy Burkov
2. Harris ZS. 1954. Distributional structure. Word 10:146–62
3. K. Boda, J. Filar, Y. Lin, and L. Spanjers. Stochastic target hitting time and the problem of early retirement. Automatic Control, IEEE Transactions on, 49(3):409–419, 2004
4. V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 1928–1937, 2016
5. B. Derfer, N. Goodyear, K. Hung, C. Matthews, G. Paoni, K. Rollins, R. Rose, M. Seaman, and J. Wiles. Online marketing platform, August 17 2007. US Patent App. 11/893,765
6. Chernozhukov V, Escanciano JC, Ichimura H, Newey WK. 2016b. Locally robust semiparametric estimation. arXiv:1608.00033 [math.ST]
7. Breusch, T. S. (1978), "Testing for autocorrelation in dynamic linear models," Australian Economic Papers, 17, 334–355.
Frequently Asked QuestionsQ: What is the prediction methodology for MDGL stock?
A: MDGL stock prediction methodology: We evaluate the prediction models Modular Neural Network (News Feed Sentiment Analysis) and Multiple Regression
Q: Is MDGL stock a buy or sell?
A: The dominant strategy among neural network is to Buy MDGL Stock.
Q: Is Madrigal Pharmaceuticals Inc. Common Stock stock a good investment?
A: The consensus rating for Madrigal Pharmaceuticals Inc. Common Stock is Buy and is assigned short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of MDGL stock?
A: The consensus rating for MDGL is Buy.
Q: What is the prediction period for MDGL stock?
A: The prediction period for MDGL is (n+4 weeks) |
# Define styles to make series of similar pgfplots [duplicate]
I'm writing a journal paper, and in this paper, I have to produce many versions of the same graph as I look at the effect of varying different parameters on the variable of interest (in this case the profit).
The following is an example of two of the graphs which I would produce.
The LaTeX source code:
\documentclass{article}
\usepackage{pgfplots}
\usepackage{fullpage}
\pgfplotsset{compat=1.7}
\pagestyle{empty}
\begin{document}
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\begin{axis}[
xlabel=Promotion Limit,
ylabel=Profit,
legend columns=2,
legend style={at={(0.5,1.1)},anchor=south}
]
table [x=promotion.limit,y=optimal,col sep=comma]
{promotion-limit.txt};
table [x=promotion.limit,y=actual,col sep=comma]
{promotion-limit.txt};
\end{axis}
\end{tikzpicture}
\end{center}
\caption{Effect of varying the promotion limit on profit.}
\label{figure:promotion-limit}
\end{figure}
In Figure \ref{figure:promotion-limit},
we see the effect of varying the promotion limit on the profit.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\begin{axis}[
xlabel=Minimum Price,
ylabel=Profit,
legend columns=2,
legend style={at={(0.5,1.1)},anchor=south}
]
table [x=minimum.price,y=optimal,col sep=comma]
{minimum-price.txt};
table [x=minimum.price,y=actual,col sep=comma]
{minimum-price.txt};
\end{axis}
\end{tikzpicture}
\end{center}
\caption{Effect of varying the minimum price on the profit.}
\label{figure:minimum-price}
\end{figure}
In Figure \ref{figure:minimum-price},
we see the effect of varying the minimum price on the profit.
\end{document}
The contents of promotion-limit.txt are:
promotion.limit,optimal,actual
0,100,100
1,110,105
2,118,108
The contents of minimum-price.txt are:
minimum.price,optimal,actual
8,135,116
9,120,110
10,100,100
The result is:
Question: How do I define a style for these pgfplots, so that I can make changes to the code at a single location, and all the series of graphs will incorporate the changes? |
## Sampling conditioned Markov chains, and diffusions
In many situations it might be useful to know how to sample a Markov chain (or a diffusion process) ${X_k}$ between time ${0}$ and ${T}$, conditioned on the knowledge that ${X_0=a}$ and ${X_T=b}$. This conditioned Markov chain ${X^{\star}}$ is still a Markov chain but in general is not time homogeneous. Moreover, it is generally very difficult to compute the transition probabilities of this conditioned Markov chain since they depend on the knowledge of the transition probabilities ${p(t,x,y) = \mathop{\mathbb P}(X_t=y | X_0=x)}$ of the unconditioned Markov chain, which are usually not available: this has been discussed in this previous post on Doob h-transforms. Perhaps surprisingly, this article by Michael Sorensen and Mogens Bladt shows how this is sometimes quite easy to sample good approximations of such a conditioned Markov chain, or diffusion.
1. Reversible Markov chains
Remember that a Markov chain ${X_k}$ on the state space ${S}$ with transition operator ${p(x,y) = \mathop{\mathbb P}(X_{k+1}=y \,|X_k=x)}$ is reversible with respect to the probability ${\pi}$ if for any ${x,y \in S}$
$\displaystyle \pi(x) p(x,y) = p(y,x) \pi(y). \ \ \ \ \ (1)$
In words, this means that looking at a trajectory of this Markov chain, this is impossible to say if the time is running forward or backward: indeed, the probability of observing ${y_1, y_2, \ldots, y_n}$ is equal to ${\pi(y_1)p(y_1, y_2) \ldots p(y_{n-1}, y_n)}$, which is also equal to ${\pi(y_n)p(y_n,y_{n-1}) \ldots p(y_2, y_1)}$ if the chain is reversible.
This is precisely this property of invariance by time reversal that allows to sample from conditioned reversible Markov chain. Since under mild conditions a one dimensional diffusion is also reversible, this also shows that (ergodic) one dimensional conditioned diffusions are sometimes quite easy to sample from!
2. One dimensional diffusions are reversible!
The other someone told me that almost any one dimensional diffusion is reversible! I did not know that, and I must admit that I still find this result rather surprising. Indeed, this is not true for multidimensional diffusions, and this is very easy to construct counter-examples. What makes the result works for real diffusions is that there is only one way to go from ${a}$ to ${b}$, namely the segment ${[a,b]}$. Indeed, the situation is completely different in higher dimensions.
First, let us remark that any Markov chain on ${\mathbb{Z}}$, that has ${\pi}$ as invariant distribution and that can only make jumps of size ${+1}$ or ${-1}$ is reversible: such a Markov chain is usually called skip-free in the litterature. This is extremely easy to prove, and since skip-free Markov chains have been studied a lot, I am sure that this result is somewhere in the litterature (any reference for that?). To show the result, it suffices to show that ${u_k-d_{k+1}=0}$ for any ${k \in \mathbb{Z}}$ where ${u_k = \pi(k) p(k,k+1)}$ is the upward flux at level ${k}$ and ${d_k = \pi(k)p(k,k-1)}$ is the downward flux. Because ${\pi}$ is invariant it satisfies the usual balance equations,
$\displaystyle \begin{array}{rcl} u_k+d_k &=& \pi(k) = \pi(k-1)p(k-1,k)+\pi(k+1)p(k+1,k)\\ &=& u_{k-1} + d_{k+1} \end{array}$
so that ${u_{k}-d_{k+1} = u_{k-1}-d_{k}}$. Interating we get that for any ${n \in \mathbb{Z}}$ we have ${u_k-d_{k+1} = u_{k-n}-d_{k-n+1}}$: the conclusion follows since ${\lim_{m \rightarrow \pm \infty} u(m) = \lim_{m \rightarrow \pm \infty} d(m) = 0}$.
This simple result on skip-free Markov chains gives also the result for many one dimensional diffusions ${dX_t = \alpha(X_t) \, dt + \sigma(X_t) \, dW_t}$ since under regularity assumptions on ${\alpha}$ and ${\sigma}$ they can be seen as limit of skip-free one dimensional Markov chains on ${\epsilon \mathbb{Z}}$. Indeed, I guess that the usual proof of this result goes through introducing the scale function and the speed measure of the diffusion, but I would be very glad if anyone had another pedestrian approach that gives more intuition into this.
3. How to sample conditioned reversible Markov chains
From what has been said before, I am sure that this becomes quite clear how a conditioned reversible Markov chain can be sampled from. Suppose that ${X}$ is reversible Markov chain and that we would like to sample a path conditioned on the event ${X_0=a}$ and ${X_T=b}$:
1. sample the path ${X^{(a)}}$ between ${t=0}$ and ${t=T}$, starting from ${X^{(a)}_0=a}$
2. sample the path ${X^{(b)}}$ between ${t=0}$ and ${t=T}$, starting from ${X^{(b)}_0=b}$
3. if there exists ${t \in [0,T]}$ such that ${X^{(a)}_t = X^{(b)}_{T-t}}$ then define the path ${X^{\star}}$ by$\displaystyle X^{\star}_s = \left\{ \begin{array}{ll} X^{(a)}_s & \text{ for } s \in [0,t]\\ X^{(b)}_s & \text{ for } s \in [t,T], \end{array} \right. \ \ \ \ \ (2)$and otherwise go back to step ${1}$.
Indeed, the resulting path ${X^{\star}}$ is an approximation of the realisation of the conditioned Markov chain: this is not hard to prove it, playing around with the definition of time reversibility. It is not hard at all to adapt this idea to reversible diffusions, though the result is indeed again an approximation. The interesting question is to discuss how good this approximation is (see the paper by Michael Sorensen and Mogens Bladt )
For example, here is a sample from a Birth-Death process, conditioned on the event ${X_0=0}$ and ${X_{1000}=50}$, with parameter ${\mathop{\mathbb P}(X_{t+1}=k+1 | X_t=k) = 0.4 = 1-\mathop{\mathbb P}(X_{t+1}=k-1 | X_t=k)}$.
Conditioned Birth-Death process
4. Final remark
It might be interesting to notice that this method is especially inefficent for multidimensional processes: the probability of finding an instant ${t}$ such that ${X^{(a)}_t = X^{(b)}_{T-t}}$ is extremely small, and in many cases equal to ${0}$ for diffusions! This works pretty well for one dimensional diffusion thanks to the continuity of the path and the intermediate value property. Nevertheless, even for one dimensional diffusion this method does not work well at all when trying to sample from conditioned paths between two meta-stable position: this is precisely this situation that is interesting in many physics when one wants to study the evolution of a particle in a double well potential, for example. In short, sampling conditioned (multidimensional) diffusions is still a very difficult problem.
### 1 Comment
1. #### TheBridge said,
February 5, 2014 at 7:55 am
Hi, a look here might raise your interest :
http://arxiv.org/pdf/1402.0822.pdf
Best regards
TheBridge |
### Legendre PRF (Multiple) Key Attacks and the Power of Preprocessing
Alexander May and Floyd Zweydinger
##### Abstract
Due to its amazing speed and multiplicative properties the Legendre PRF recently finds widespread applications e.g. in Ethereum 2.0, multiparty computation and in the quantum-secure signature proposal LegRoast. However, its security is not yet extensively studied. The Legendre PRF computes for a key $k$ on input $x$ the Legendre symbol $L_k(x) = \left( \frac {x+k} {p} \right)$ in some finite field $\F_p$. As standard notion, PRF security is analysed by giving an attacker oracle access to $L_k(\cdot)$. Khovratovich's collision-based algorithm recovers $k$ using $L_k(\cdot)$ in time $\sqrt{p}$ with constant memory. It is a major open problem whether this birthday-bound complexity can be beaten. We show a somewhat surprising wide-ranging analogy between the discrete logarithm problem and Legendre symbol computations. This analogy allows us to adapt various algorithmic ideas from the discrete logarithm setting. More precisely, we present a small memory multiple-key attack on $m$ Legendre keys $k_1, \ldots, k_m$ in time $\sqrt{mp}$, i.e. with amortized cost $\sqrt{p/m}$ per key. This multiple-key attack might be of interest in the Ethereum context, since recovering many keys simultaneously maximizes an attacker's profit. Moreover, we show that the Legendre PRF admits precomputation attacks, where the precomputation depends on the public $p$ only -- and not on a key $k$. Namely, an attacker may compute e.g. in precomputation time $p^{\frac 2 3}$ a hint of size $p^{\frac 1 3}$. On receiving access to $L_k(\cdot)$ in an online phase, the attacker then uses the hint to recover the desired key $k$ in time only $p^{\frac 1 3}$. Thus, the attacker's online complexity again beats the birthday-bound. In addition, our precomputation attack can also be combined with our multiple-key attack. We explicitly give various tradeoffs between precomputation and online phase. E.g. for attacking $m$ keys one may spend time $mp^{\frac 2 3}$ in the precomputation phase for constructing a hint of size $m^2 p^{\frac 1 3}$. In an online phase, one then finds {\em all $m$ keys in total time} only $p^{\frac 1 3}$. Precomputation attacks might again be interesting in the Ethereum 2.0 context, where keys are frequently changed such that a heavy key-independent precomputation pays off.
Note: final version
Available format(s)
Category
Public-key cryptography
Publication info
Preprint. MINOR revision.
Keywords
PreprocessingLegendre PRFEthereum 2.0
Contact author(s)
alex may @ rub de
floyd zweydinger @ rub de
History
2021-09-17: last of 2 revisions
See all versions
Short URL
https://ia.cr/2021/645
CC BY
BibTeX
@misc{cryptoeprint:2021/645,
author = {Alexander May and Floyd Zweydinger},
title = {Legendre PRF (Multiple) Key Attacks and the Power of Preprocessing},
howpublished = {Cryptology ePrint Archive, Paper 2021/645},
year = {2021},
note = {\url{https://eprint.iacr.org/2021/645}},
url = {https://eprint.iacr.org/2021/645}
}
Note: In order to protect the privacy of readers, eprint.iacr.org does not use cookies or embedded third party content. |
## Exercises4.3Polar Coordinates
Use polar coordinates to solve the following problems.
###### 1.
Express the polar equation $r=\cos 2\theta$ in rectangular coordinates.
Hint
Multiply by $r^2$ and use the fact that $\cos 2\theta =cos ^2\theta -\sin ^2\theta\text{.}$
$(x^2+y^2)^3=(y^2-x^2)\text{.}$
Sketch polar graphs of:
###### 2.
$r=1+\sin \theta\text{.}$
###### 3.
$r=\cos 3\theta\text{.}$
For the each of the following circles find a polar equation, i.e. an equation in $r$ and $\theta\text{:}$
###### 4.
$x^2+y^2=4$
$r = 2\text{.}$
###### 5.
$(x-1)^2+y^2=1$
$r = 2\cos \theta\text{.}$
###### 6.
$x^2+(y-0.5)^2=0.25$
$r = \sin \theta\text{.}$
###### 7.
Find the maximum height above the $x$-axis of the cardioid $r=1+\cos \theta\text{.}$
$\ds y = \frac{3\sqrt{3}}{4}\text{.}$
Solution
On the given cardioid, $x = (1 + \cos \theta ) \cos \theta$ and $y = (1 + \cos \theta ) \sin \theta\text{.}$ The question is to find the maximum value of $y\text{.}$ Note that $y > 0$ is equivalent to $\sin \theta > 0\text{.}$ From $\ds \frac{dy}{d\theta } = 2\cos ^2 \theta + \cos \theta -1$ we get that the critical numbers of the function $y = y(\theta )$ are the values of $\theta$ for which $\ds \cos \theta = \frac{-1 \pm 3}{4}\text{.}$ It follows that the critical numbers are the values of $\theta$ for which $\ds \cos \theta = -1$ or $\ds \cos \theta = \frac{1}{2}\text{.}$ Since $y_{\mbox{max} } > 0$ it follows that $\ds \sin \theta = \sqrt{1-\left( \frac{1}{2}\right) ^2}=\frac{\sqrt{3}}{2}$ and the maximum height equals $\ds y = \frac{3\sqrt{3}}{4}\text{.}$
###### 8.
Sketch the graph of the curve whose equation in polar coordinates is $r=1-2\cos\theta\text{,}$ $0\leq \theta \lt 2\pi\text{.}$
###### 9.
Sketch the graph of the curve whose equation in polar coordinates is $r=3\cos 3\theta\text{.}$
###### 10.
Sketch the curve whose polar equation is $r=-1+\cos \theta\text{,}$ indicating any symmetries. Mark on your sketch the polar coordinates of all points where the curve intersects the polar axis.
Sketch a polar coordinate plot of:
###### 11.
$\ds r=\frac{1}{2}+\sin \theta$
###### 12.
$r=2\cos 3\theta$
###### 13.
$r^2=-4\sin 2\theta$
###### 14.
$r=2\sin \theta$
###### 15.
$r=2\cos \theta$
###### 16.
$r=4+7\cos \theta$
###### 17.
Consider the curve given by the polar equation $r=1-\cos \theta\text{,}$ for $0\leq \theta \lt 2\pi\text{.}$
1. Given a point $P$ on this curve with polar coordinates $(r,\theta)\text{,}$ represent its Cartesian coordinates $(x,y)$ in terms of $\theta\text{.}$
2. Find the slope of the tangent line to the curve where $\ds \theta = \frac{\pi }{2}\text{.}$
3. Find the points on this curve where the tangent line is horizontal or vertical.
1. $(x,y)=((1-\cos\theta)\cdot \cos\theta, (1-\cos\theta)\cdot \sin\theta)\text{.}$
2. $\ds \left.\frac{dy}{dx}\right|_{\theta=\frac{\pi}{2}}=-1\text{.}$
3. Horizontal tangent lines at $\ds \left(-\frac{3}{4},\frac{3\sqrt{3}}{4}\right), \ \left(-\frac{3}{4},-\frac{3\sqrt{3}}{4}\right)\text{;}$ Vertical tangent line at $\ds (-2,0), \left(\frac{1}{4},\frac{\sqrt{3}}{4}\right),\ \left(\frac{1}{4},-\frac{\sqrt{3}}{4}\right)\text{.}$
###### 18.
Consider the curve given by the polar equation $r=\cos (2\theta)\text{,}$ for $0\leq \theta \lt 2\pi\text{.}$
1. Find $\ds \frac{dy}{dx}$ in terms of $\theta\text{.}$
2. Find the Cartesian coordinates for the point on the curve corresponding to $\ds \theta = \frac{\pi }{8}\text{.}$
3. Find the tangent line to the curve at the point corresponding to $\ds \theta = \frac{\pi }{8}\text{.}$
4. Sketch this curve for $\displaystyle 0\leq \theta \leq \frac{\pi}{4}$ and label the point from part (b) on your curve.
1. $\ds \frac{dy}{dx}=\frac{-2\sin 2\theta\sin\theta+\cos\theta\cos2\theta}{-2\sin 2\theta\cos\theta-\sin \theta\cos 2\theta}\text{.}$
2. $\ds \left(\frac{1}{2}\sqrt{1+\frac{1}{\sqrt{2}}},\frac{1}{2}\sqrt{1-\frac{1}{\sqrt{2}}}\right)\text{.}$
3. $y- \frac{1}{2}\sqrt{1-\frac{1}{\sqrt{2}}}= \frac{2\sqrt{2-\sqrt{2}}-\sqrt{2+\sqrt{2}}}{2\sqrt{2+\sqrt{2}}+\sqrt{2-\sqrt{2}}}\cdot\left(x-\frac{1}{2}\sqrt{1+\frac{1}{\sqrt{2}}}\right)\text{.}$
###### 19.
Consider the curve given by the polar equation $r=4\cos (3\theta)\text{,}$ for $0\leq \theta \lt 2\pi\text{.}$
1. Find the Cartesian coordinates for the point on the curve corresponding to $\ds \theta = \frac{\pi }{3}\text{.}$
2. One of graphs in the Figure below is the graph of $r=4\cos(3\theta)\text{.}$ Indicate which one by circling it.
3. Find the slope of the tangent line to the curve where $\ds \theta = \frac{\pi }{3}\text{.}$
1. $(-2,-2\sqrt{3})\text{.}$
2. Right.
###### 20.
Consider the curve given by the polar equation $r=4\sin (3\theta)\text{,}$ for $0\leq \theta \lt 2\pi\text{.}$
1. Find the Cartesian coordinates for the point on the curve corresponding to $\ds \theta = \frac{\pi }{6}\text{.}$
2. One of graphs in the Figure below is the graph of $r=4\sin(3\theta)\text{.}$ Indicate which one by circling it.
3. Find the slope of the tangent line to the curve where $\ds \theta = \frac{\pi }{3}\text{.}$
1. $(2\sqrt{3},2)\text{.}$
2. Middle.
3. $\ds \sqrt{3}\text{.}$
###### 21.
Consider the curve given by the polar equation $r=1+3\cos(2\theta)\text{,}$ for $0\leq \theta \lt 2\pi\text{.}$
1. Find the Cartesian coordinates for the point on the curve corresponding to $\ds \theta = \frac{\pi }{6}\text{.}$
2. One of graphs in the Figure below is the graph of $r=1+3\cos(2\theta)\text{.}$ Indicate which one by putting a checkmark in the box below the graph you chose.
3. Find the slope of the tangent line to the curve where $\ds \theta = \frac{\pi }{6}\text{.}$
1. $\left(\frac{5\sqrt{3}}{4},\frac{5}{4}\right)\text{.}$
2. Right. Observe that $r(0)=r(\pi)=4$ and $\ds r\left(\frac{\pi}{2}\right)= r\left(\frac{3\pi}{2}\right)=1\text{.}$
3. $\ds \frac{\sqrt{3}}{23}\text{.}$
###### 22.
Consider the curve given by the polar equation $r=1-2\sin \theta\text{,}$ for $0\leq \theta \lt 2\pi\text{.}$
1. Find the Cartesian coordinates for the point on the curve corresponding to $\ds \theta = \frac{3\pi }{2}\text{.}$
2. The curve intersects the $x$-axis at two points other than the pole. Find polar coordinates for these other points.
3. On the Figure below, identify the graphs that correspond to the following two polar curves.
\begin{equation*} \begin{array}{cc} \fbox { } \ r=1-2\sin \theta \amp \fbox { } \ r=1+2\sin \theta \end{array} \end{equation*}
1. $(0,-3)\text{.}$
2. $(\pm 1,0)\text{.}$ Solve $y=(1-2\sin \theta)\sin \theta =0\text{.}$
3. The middle graph corresponds to $r=1+\sin 2\theta$ and the right graph corresponds to $r=1-2\sin \theta\text{.}$
###### 23.
Consider the curve $C$ given by the polar equation $r=1+2\cos \theta\text{,}$ for $0\leq \theta \lt 2\pi\text{.}$
1. Find the Cartesian coordinates for the point on the curve corresponding to $\ds \theta = \frac{\pi }{3}\text{.}$
2. Find the slope of the tangent line where $\ds \theta = \frac{\pi }{3}\text{.}$
3. On the Figure below, identify the graph of $C\text{.}$
1. $(1,\sqrt{3})\text{.}$
2. $\ds \frac{1}{3\sqrt{3}}\text{.}$
3. B.
###### 24.
1. Sketch a polar coordinate plot of
\begin{equation*} r=1+2\sin 3\theta, \ 0\leq \theta \leq 2\pi\text{.} \end{equation*}
2. How many points lie in the intersection of the two polar graphs
\begin{equation*} r=1+2\sin 3\theta, \ 0\leq \theta \leq 2\pi \end{equation*}
and
\begin{equation*} r=1? \end{equation*}
3. Algebraically find all values of $\theta$ that
\begin{equation*} 1=1+2\sin 3\theta, \ 0\leq \theta \leq 2\pi\text{.} \end{equation*}
4. Explain in a sentence or two why the answer to part (b) differs from (or is the same as) the number of solutions you found in part (c).
1. 9.
2. $\theta = 0\text{,}$ $\ds \frac{\pi }{3}\text{,}$ $\ds \frac{2\pi }{3}\text{,}$ $\pi\text{,}$ $\ds \frac{4\pi }{3}\text{,}$ $\ds \frac{5\pi }{3}\text{,}$ $2\pi\text{.}$
3. The remaining points of intersection are obtained by solving $-1 = 1 + 2 \sin 3\theta\text{.}$
###### 25.
Consider the following curve $C$ given in polar coordinates as
\begin{equation*} r(\theta )=1+\sin \theta +e^{\sin \theta }, \ 0\leq \theta \leq 2\pi\text{.} \end{equation*}
1. Calculate the value of $r(\theta )$ for $\ds \theta =0, \frac{\pi }{2}, \frac{3\pi }{2}\text{.}$
2. Sketch a graph of $C\text{.}$
3. What is the minimum distance from a point on the curve $C$ to the origin? (i.e. determine the minimum of $|r(\theta )|=r(\theta )=1+\sin \theta +e^{\sin \theta }$ for $\theta \in [0,2\pi ]$).
1. $r(0) = 2\text{,}$ $\ds r \left( \frac{\pi }{2}\right) = 2+e\text{,}$ $\ds r \left( \frac{3\pi }{2}\right) = e^{-1}\text{.}$
2. $e^{-1}\text{.}$
Solution
(c) From $\ds \frac{dr}{d\theta }= \cos \theta (1 + e^{\sin \theta }) = 0$ we conclude that the critical numbers are $\ds \frac{\pi }{2}$ and $\ds \frac{3\pi }{2}\text{.}$ By the Extreme Value Theorem, the minimum distance equals $e^{-1}\text{.}$
###### 26.
1. Give polar coordinates for each of the points $A\text{,}$ $B\text{,}$ $C$ and $D$ on the Figure below.
2. On the Figure below identify the graphs that correspond to the following three polar curves.
\begin{equation*} \fbox { } \ r=1-2\sin \theta \ \ \fbox { } \ r^2\theta =1 \ \ \fbox { } \ r=\frac{1}{1-2\sin \theta} \end{equation*}
1. $\ds A=\left( r=\sqrt{2},\theta= \frac{\pi }{4}\right)\text{,}$ $\ds B=\left( 4,\frac{5\pi }{3}\right)\text{,}$ $\ds C=\left( 2,\frac{7\pi }{6}\right)\text{,}$ $\ds D=\left( 2\sqrt{2}-1,\frac{3\pi }{4}\right)\text{.}$
2. A, B, D.
###### 27.
1. Sketch the curve defined by $r=1+2\sin \theta\text{.}$
2. For what values of $\theta\text{,}$ $\theta \in [-\pi ,\pi )\text{,}$ is the radius $r$ positive?
3. For what values of $\theta\text{,}$ $\theta \in [-\pi ,\pi )\text{,}$ is the radius $r$ maximum and for what values is it minimum?
1. $\ds \theta \in \left[ -\pi, -\frac{5\pi }{6}\right) \cup \left( -\frac{\pi }{6},\pi \right)\text{.}$ Solve $\ds \sin \theta > -\frac{1}{2}\text{.}$
2. To find critical numbers solve $\ds \frac{dr}{d\theta }=2\cos \theta =0$ in $[-\pi ,\pi )\text{.}$ It follows that $\ds \theta =-\frac{\pi }{2}$ and $\ds \theta =\frac{\pi }{2}$ are critical numbers. Compare $r(-\pi )=r(\pi )=1\text{,}$ $\ds r\left( -\frac{\pi }{2}\right) =-1\text{,}$ and $\ds r\left( \frac{\pi }{2}\right) =3$ to answer the question.
###### 28.
1. Sketch the graph described in polar coordinates by the equation $r=\theta$ where $-\pi \leq \theta \leq 3\pi\text{.}$
2. Find the slope of this curve when $\ds \theta =\frac{5\pi }{2}\text{.}$ Simplify your answer for full credit.
3. Express the polar equation $r=\theta$ in cartesian coordinates, as an equation in $x$ and $y\text{.}$
1. $-\frac{2}{5\pi }\text{.}$
2. $\ds \sqrt{x^2+y^2}=\arctan \frac{y}{x}\text{.}$
Solution
(b) The slope is given by $\ds \left. \frac{dy}{dx}\right| _{\theta =\frac{5\pi }{2}}\text{.}$ From $x=r\cos \theta =\theta \cos \theta$ and $y=\theta \sin \theta$ it follows that $\ds \frac{dy}{dx}=\frac{\frac{dy}{d\theta }}{\frac{dx}{d\theta }}=\frac{\sin \theta +\theta \cos \theta }{\cos \theta -\theta \sin \theta}\text{.}$ Thus $\ds \left. \frac{dy}{dx}\right| _{\theta =\frac{5\pi }{2}}=-\frac{2}{5\pi }\text{.}$
###### 29.
1. Let $C$ denote the graph of the polar equation $r=5\sin \theta\text{.}$ Find the rectangular coordinates of the point on $C$ corresponding to $\ds \theta =\frac{3\pi }{2}\text{.}$
2. Write a rectangular equation (i.e. using the variables $x$ and $y$) for $C\text{.}$ (in other words, convert the equation for $C$ into rectangular coordinates.)
3. Rewrite the equation of $C$ in parametric form, i.e. express both $x$ and $y$ as functions of $\theta\text{.}$
4. Find an expression for $\ds \frac{dy}{dx}$ in terms of $\theta\text{.}$
5. Find the equation of the tangent line to $C$ at the point corresponding to $\ds \theta =\frac{\pi }{6}\text{.}$
1. $(0,5)\text{.}$
2. $x^2+y^2=5y\text{.}$
3. $x=5\sin \theta \cos \theta\text{,}$ $y=5\sin ^2\theta\text{.}$
4. $\ds \frac{dy}{dx}=\frac{2\sin \theta\cos \theta }{\cos ^2\theta - \sin ^2\theta}=\tan 2\theta\text{.}$
5. $\ds y -\frac{5}{4}=\sqrt{3}\left( x-\frac{5\sqrt{3}}{4}\right)\text{.}$
###### 30.
Find the slope of the tangent line to the polar curve $r=2$ at the points where it intersects the polar curve $r=4\cos \theta\text{.}$ (Hint: After you find the intersection points, convert one of the curves to a pair of parametric equations with $\theta$ as the perimeter.
$-\frac{\sqrt{3}}{3}\text{.}$
Solution
Solve $2=4\cos \theta$ to get that curve intersect at $( 1, \sqrt{3})\text{.}$ To find the slope we note that the circle $r=2$ is given by parametric equations $x=2\cos \theta$ and $y=2\sin \theta\text{.}$ It follows that $\ds \frac{dy}{dx}=\frac{2\cos \theta }{-2\sin \theta }=-\cot \theta\text{.}$ The slope of the tangent line at the intersection point equals $\ds \left. \frac{dy}{dx}\right| _{\theta = \frac{\pi }{3}}=-\frac{\sqrt{3}}{3}\text{.}$
###### 31.
A bee goes out from its hive in a spiral path given in polar coordinates by $r=be^{kt}$ and $\theta =ct\text{,}$ where $b\text{,}$ $k\text{,}$ and $c$ are positive constants. Show that the angle between the bee's velocity and acceleration remains constant as the bee moves outward.
Solution
See the Figure below for the graph of the case $b=1\text{,}$ $k=0.01\text{,}$ and $c=2\text{.}$ The position in $(x,y)-$plane of the bee at time $t$ is given by a vector function $\ds \vec{s}(t)=\langle be^{kt}\cos ct,be^{kt}\sin ct\rangle\text{.}$ Recall that the angle $\alpha$ between the velocity and acceleration is given by $\ds \cos\alpha=\frac{\vec{v}\cdot\vec{a}}{|\vec{v}||\vec{a}|}\text{,}$ where $\vec{v}(t)=\vec{s}^\prime(t)$ and $\vec{a}(t)=\vec{s}^{\prime\prime}(t)\text{.}$ One way to solve this problem is to consider that the bee moves in the complex plane. In that case its position is given by
\begin{equation*} F(t)=be^{kt}\cos ct+i\cdot be^{kt}\sin ct=be^{kt}(\cos ct +i\sin ct)=be^{(k+ic)t}\text{,} \end{equation*}
where $i$ is the imaginary unit. Observe that $\vec{v}(t)=\langle \mbox{Re} (F^\prime(t)),\mbox{Im} (F^\prime(t))\rangle$ and $\vec{a}(t)=\langle \mbox{Re} (F^{\prime\prime}(t)),\mbox{Im} (F^{\prime\prime}(t))\rangle\text{.}$ Next, observe that $F^\prime(t)=(k+ic)F(t)$ and $F^{\prime\prime}(t)=(k+ic)^2F(t)\text{.}$ From $F^{\prime\prime}(t)=(k+ic)F^\prime(t)$ it follows that $\mbox{Re} (F^{\prime\prime}(t))=k\cdot \mbox{Re} (F^\prime(t))-c\cdot \mbox{Im} (F^\prime(t))$ and $\mbox{Im} (F^{\prime\prime}(t))=k\cdot \mbox{Im} (F^\prime(t))+c\cdot \mbox{Re} (F^\prime(t))\text{.}$ Finally, $\vec{v}\cdot\vec{a}=\mbox{Re} (F^\prime(t))\cdot \mbox{Re} (F^{\prime\prime}(t))+\mbox{Im} (F^\prime(t))\cdot \mbox{Im} (F^{\prime\prime}(t))=k((\mbox{Re} (F^\prime(t)))^2+(\mbox{Im} (F^\prime(t)))^2)=k|F^\prime(t)|^2$ which immediately implies the required result. |
## FANDOM
771 Pages
Template:Infobox Particle The proton is a subatomic particle with an electric charge of +1 elementary charge. It is found in the nucleus of each atom but is also stable by itself and has a second identity as the hydrogen ion, 1H+. It is composed of 3 even more fundamental particles comprising two up quarks and one down quark.[1]
## DescriptionEdit
Protons are spin-1/2 fermions and are composed of three quarks[2], making them baryons. The two up quarks and one down quark of the proton are held together by the strong force, mediated by gluons.[1]
Protons and neutrons are both nucleons, which may be bound by the nuclear force into atomic nuclei. The nucleus of the most common isotope of the hydrogen atom is a single proton (it contains no neutrons). The nuclei of heavy hydrogen (deuterium and tritium) contain neutrons. All other types of atoms are composed of two or more protons and various numbers of neutrons. The number of protons in the nucleus determines the chemical properties of the atom and thus which chemical element is represented; it is the number of both neutrons and protons in a nuclide which determine the particular isotope of an element. Protons have a positive charge.
## StabilityEdit
Main article: Proton decay
Protons are observed to be stable and their theoretical minimum half-life is 1×1036 years. Grand unified theories generally predict that proton decay should take place, although experiments so far have only resulted in a lower limit of 1035 years for the proton's lifetime. In other words, proton decay has never been witnessed and the experimental lower bound on the mean proton lifetime (2.1×1029 years) is put by the Sudbury Neutrino Observatory[3].
However, protons are known to transform into neutrons through the process of electron capture. This process does not occur spontaneously but only when energy is supplied. The equation is:
$\mathrm{p}^+ + \mathrm{e}^- \rightarrow\mathrm{n} + {\nu}_e \,$
where
p is a proton,
e is an electron,
n is a neutron, and
$\nu_e$ is an electron neutrino
The process is reversible: neutrons can convert back to protons through beta decay, a common form of radioactive decay. In fact, a free neutron decays this way with a mean lifetime of about 15 minutes.
## The proton in chemistryEdit
### Atomic numberEdit
In chemistry the number of protons in the nucleus of an atom is known as the atomic number, which determines the chemical element to which the atom belongs. For example, the atomic number of chlorine is 17; this means that each chlorine atom has 17 protons and that all atoms with 17 protons are chlorine atoms. The chemical properties of each atom are determined by the number of (negatively charged) electrons, which for neutral atoms is equal to the number of (positive) protons so that the total charge is zero. For example, a neutral chlorine atom has 17 protons and 17 electrons, while a negative Cl- ion has 17 protons and 18 electrons for a total charge of -1.
All atoms of a given element are not necessarily identical, however, as the number of neutrons may vary to form different isotopes. Again for chlorine as an example, there are two stable isotopes - 35Cl with 35 nucleons which are 17 protons and 35-17 = 18 neutrons, and 37Cl with 17 protons and 37-17 = 20 neutrons. Other isotopes of chlorine are radioactive.
### Hydrogen as protonEdit
Since the atomic number of hydrogen is 1, a positive hydrogen ion (H+) has no electrons and corresponds to a bare nucleus with 1 proton (and 0 neutrons for the most abundant isotope 1H). In chemistry therefore, the word "proton" is commonly used as a synonym for hydrogen ion (H+) or hydrogen nucleus in several contexts:
1. The transfer of H+ in an acid-base reaction is referred to "proton transfer". The acid is referred to as a proton donor and the base as a proton acceptor.
2. The hydronium ion (H3O+) in aqueous solution corresponds to a hydrated hydrogen ion. Often the water molecule is ignored and the ion written as simply H+(aq) or just H+, and referred to as a "proton". This is the usual meaning in biochemistry, as in the term proton pump which refers to a protein or enzyme which controls the movement of H+ ions across cell membranes.
3. Proton NMR refers to the observation of hydrogen nuclei in (mostly organic) molecules by nuclear magnetic resonance. This uses the property of the proton to have spin one-half.
## HistoryEdit
Ernest Rutherford is generally credited with the discovery of the proton. In 1918 Rutherford noticed that when alpha particles were shot into nitrogen gas, his scintillation detectors showed the signatures of hydrogen nuclei. Rutherford determined that the only place this hydrogen could have come from was the nitrogen, and therefore nitrogen must contain hydrogen nuclei. He thus suggested that the hydrogen nucleus, which was known to have an atomic number of 1, was an elementary particle.
Prior to Rutherford, Eugen Goldstein had observed canal rays, which were composed of positively charged ions. After the discovery of the electron by J.J. Thomson, Goldstein suggested that since the atom is electrically neutral there must be a positively charged particle in the atom and tried to discover it. He used the "canal rays" observed to be moving against the electron flow in cathode ray tubes. After the electron had been removed from particles inside the cathode ray tube they became positively charged and moved towards the cathode. Most of the charged particles passed through the cathode, it being perforated, and produced a glow on the glass. At this point, Goldstein believed that he had discovered the proton.[4] When he calculated the ratio of charge to mass of this new particle (which in case of the electron was found to be the same for every gas that was used in the cathode ray tube) was found to be different when the gases used were changed. The reason was simple. What Goldstein assumed to be a proton was actually an ion. He gave up his work there, but promised that "he would return." However, he was widely ignored.
The proton is named after the neuter singular of the Greek word for "first", πρῶτον.
## ExposureEdit
The Apollo Lunar Surface Experiments Packages (ALSEP) determined that more then 95% of the particles in solar winds are electrons and protons of approximate equal numbers.".[5][6]
"Because the Solar Wind Spectrometer made continuous measurements, it was possible to measure how the Earth's magnetic field affects arriving solar wind particles. For about two-thirds of each orbit, the Moon is outside of the Earth's magnetic field. At these times, a typical proton density was 10 to 20 per cubic centimeter, with most protons having velocities between 400 and 650 kilometers per second. For about five days of each month, the Moon is inside the Earth's geomagnetic tail, and typically no solar wind particles were detectable. For the remainder of each lunar orbit, the Moon is in a transitional region known as the magnetosheath, where the Earth's magnetic field affects the solar wind but does not completely exclude it. In this region, the particle flux is reduced, with typical proton velocities of 250 to 450 kilometers per second. During the lunar night, the spectrometer was shielded from the solar wind by the Moon and no solar wind particles were measured."[5]
Research has also been or is being performed on the dose-rate effects of protons, as typically found in space travel, on human health.[6][7] More specifically, there are hopes to identify what specific chromosomes are damaged, and to define the damage, during cancer development from proton exposure.[6] Another study looks into determining "the effects of exposure to proton irradiation on neurochemical and behavioral endpoints, including dopaminergic functioning, amphetamine-induced conditioned taste aversion learning, and spatial learning and memory as measured by the Morris water maze."[7] One study even looks into "interplanetary protons" and the effects of charging spacecrafts.[8] There are many more studies which pertain to space travel, its galactic cosmic rays, its possible health effects, and solar proton event exposure.
## AntiprotonEdit
Main article: Antiproton
CPT-symmetry puts strong constraints on the relative properties of particles and antiparticles and, therefore, is open to stringent tests. For example, the charges of the proton and antiproton must sum to exactly zero. This equality has been tested to one part in 108. The equality of their masses has also been tested to better than one part in 108. By holding antiprotons in a Penning trap, the equality of the charge to mass ratio of the proton and the antiproton has been tested to one part in 9×1011. The magnetic moment of the antiproton has been measured with error of 8×10−3 nuclear Bohr magnetons, and is found to be equal and opposite to that of the proton. |
lipschitz continuity 194 浏览 0关注
In mathematical analysis, Lipschitz continuity, named after Rudolf Lipschitz, is a strong form of uniform continuity for function (mathematics)|functions. Intuitively, a Lipschitz continuous function is limited in how fast it can change: for every pair of points on the graph of this function, the absolute value of the slope of the line connecting them is no greater than a definite real number; this bound is called the function's Lipschitz constant (or modulus of continuity|modulus of uniform continuity). In the theory of differential equations, Lipschitz continuity is the central condition of the Picard–Lindelöf theorem which guarantees the existence and uniqueness of the solution to an initial value problem. A special type of Lipschitz continuity, called contraction mapping|contraction, is used in the Banach fixed point theorem. The concept of Lipschitz continuity is well-defined on metric spaces. A generalization of Lipschitz continuity is called Hölder continuity.
相关概念
主要的会议/期刊 MP JGO SIAM CDC SIAMJO Annals ... MOR NIPS |
Proving an algorithm must have the lowest time complexity for sorting in the worst case
I'm curious if anything like this has been proved, or is even possible to prove a statement like: "Out of all sorting algorithms, this one has the lowest time complexity for the worst-case."
Or stated more specifically, the statement could look like: "Quicksort has the lowest time complexity for worst case of all sorting algorithms because X, Y, Z"
• Any basic Google search such as "complexity of sorting" would have answered this question for you. – David Richerby Feb 27 '18 at 15:25
• I did a cursory look at the complexity of sorting but all I found were just tables of time complexities. Also "Sortation" is actually a word. Not sure if you down voted me for that, but you shouldn't have. merriam-webster.com/dictionary/sortation – Nathvi Feb 27 '18 at 15:49
• I downvoted for lack of research; it would be completely unfair to downvote for something I'd fixed. "Sortation" may well be a word but it's never used in this context. (For example, the Google n-gram database has literally zero entries for "sortation algorithm"). – David Richerby Feb 27 '18 at 15:56
• Any basic Google search of the word "sortation" would have shown you that it was a real word. – Nathvi Feb 27 '18 at 15:58
• You gave a dictionary link. I can see that it's a real word. But that doesn't matter: it's not the word that's used to describe this kind of algorithm. "Ordering" is also a real word but if you'd said "ordering algorithm", I'd have edited that, too. – David Richerby Feb 27 '18 at 16:02
Sorting with comparisons is know to be an $\Omega(n\log n)$ problem and we know several sorting algorithms that reach that bound in the worst case (Heapsort, Mergesort).
(The justification is short: the decision tree able to distinguish among the $n!$ possible permutations of the input has a height at least $\log_2 n!$.)
In some special cases comparison-less methods are possible and may lead to an $\Omega(n)$ bound. Worst-case optimal algorithms are also available (Histogramsort). |
# Recent questions tagged trigonometry
For $0 \leq x \leq 2\pi$, $\sin x$ and $\cos x$ are both decreasing functions in the interval __________. $\bigg( 0, \dfrac{\pi}{2} \bigg) \\$ $\bigg( \dfrac{\pi}{2}, \pi \bigg) \\$ $\bigg( \pi, \dfrac{3\pi}{2} \bigg) \\$ $\bigg(\dfrac{3 \pi}{2}, 2 \pi \bigg)$
Which one of the following is the solution for $\cos^2 x + 2 \cos x + 1 = 0$, for values of $x$ in the range of $0^\circ < x < 360^\circ$ $45^\circ$ $90^\circ$ $180^\circ$ $270^\circ$ |
## Test code should not be Turing-complete
The ideas below were inspired by rereading of Tom Stuart’s Understanding Computation book, watching Uncle Bob’s Clean Code videos, and my thoughts about the nature of the problems I observe in daily work with certain tests.
In theory, there is a common agreement that simple tests are better than complex ones. In practice, some collectives rely on integration tests as their only source of confidence about the state of software projects they maintain.
Here is the problem with integration tests: you do not really know what you are testing when you run them. You just hope that a test this big will catch as much of interesting stuff. But hope is not a strategy, and this comes with a price. But let’s talk about the most annoying thing first.
## The anti-pattern
One persistent testing anti-pattern that I encounter goes about the following organization of a test.
1. Set up a large software configuration with many components connected to each other (system under test, or SUT).
2. Arrange starting states of all its components.
3. Let the SUT evolve on its own for some time, but no longer than a certain predetermined timeout threshold.
4. Compare evolved state of selected components against pre-recorded reference values.
At first, it follows the arrange-act-assert test design principle. But a very fragile assumption is hiding in plain sight.
The “Let the SUT evolve on its own for some time” step is very problematic here. Let us first define it though.
### Meaning of a timeout
If we allow the SUT to do what it pleases, it may never return the control back to the test harness. To prevent this, a timeout event is set up. In its simplest form, a watchdog monitors wall clock time while the SUT is running. If the clock has advanced to a predetermined threshold and the SUT still has not terminated on its own, the watchdog signals the timeout, which usually causes the program to be forcefully terminated soon after.
A more general definition of timeout that we will be using replies a broader notion of “time”. Any (quasi)periodic event regularly and inevitably appearing during program’s operation can be used. We then set up a limit of how many those events we allow the watchdog to see before it is considered to be too much.
Just a few examples of such events: number of machine instructions executed, calls to a specific subroutine, or transactions made by the business logic layer.
## Do not rely on timeout too much
In the absolute majority practical uses, reaching the timeout meant the test scenario did not succeed. But did it really fail? Or should we treat this outcome as something alternative to both outcomes?
Let’s outline first what expectations we should have from a good test.
### An ideal test
Any test should help you maintain good behavioral properties of software, while catching unintended changes. An ideal test:
1. Never fails when no behavioral change worth reporting has been made.
2. Always fails when there is a valid reason to report a behavioral change.
3. When it fails, it points as close to the real place of the change as possible. It helps understanding the cause of the reported change.
4. It does not error, i.e. its failures are always problems in the production code, not in the test’s code itself.
5. It is fast to report either of outcomes (both failures and successes).
Let’s look at how these properties are not achieved with timeouts.
### 1. Zero false positives
For a test relying on timeout, this property cannot hold. It is clear for anyone who’ve tried to write (non-hard-real-time) code which expects certain processes to take predetermined amount of time. Such programs are just inherently fragile.
If the timeout is defined in terms of wall clock time, then it means we mix an uncontrollable external factor into the list of inputs affecting test outcomes. A host of external and uncontrollable reasons may cause a test to sporadically fail. A temporary slowdown of the host system caused by any reason (host CPU frequency throttling due to overheating, heavy swapping because of other programs contending for the resources etc.) has a chance to manifest itself as test failure.
If the timeout is expressed in terms of events internal to the configuration, then the situation is only marginally better. Surely we have more control over such internal measure of progress. E.g., a timeout situation will be reliably reproducible, because no random inputs would be affecting its outcome.
But relying on a timeout still means that, when designing the test, we have given up the possibility to know what the SUT does at any given situation, and replaced this certainty with a hope that it will eventually reach one of the expected states.
### 2. No false negatives
Strictly speaking, true positives are not affected if a test is using a timeout as one of its failure conditions. However, they may stil be masked by the timeout event, if it is reached before the true assertion (and successful termination) has been reached.
The combination of №1 and №2 makes the test indecisive. A passing test still tells the truth about the SUT being OK. But a failure-by-timeout means: “either the SUT’s behavior has changed in an expected but undesirable way, or you have been unlucky, or something else”. Go figure.
### 3. Assertion points close to a cause
A failure-by-timeout condition helps poorly in localizing sources of errors. Compare this to a regular assertion which usually points to specific state divergence.
It is similar to why “try—catch all” blocks in production code are bad. It smells with fragile design. The “try to run and fail if it took too long” testing approach clumps all the expected and unexpected reasons together, destroying important details useful for investigation. Debugging such test is the only way to restore this information. Debugging is very costly, and the test is not designed to provide the information it was supposed to.
Surely, it would be nice to catch all the errors. But the price of having non-specific reporting is too high.
### 4. Does not error
There is an important distinction between a failure and an error in a test. A failed test has stopped where its author has intended it to be able to stop (usually at an assert, expect statement or an equivalent). An error in a test happens elsewhere and stops its flow before all assertions have been reached and checked.
Oftentimes, an error is reported at an earlier phase, in arrange or act phases of the test. Conversely and by definition, a failure can only happen inside the assertion phase.
A test error oftentimes means that some preconditions of the test did not hold. A fix to an error should be targeted to satisfy the failed precondition. It should not have much to do with the production code exercised in the test. Doing so would be treating only symptoms, not their cause.
Compared to that, a test failure means that the test is actually doing its job as intended. An expected reaction to a failure is a change in production code.
For this reason, it is valuable to recognize, report and treat test errors and failures separately.
How is this distinction affected in presence of a timeout condition? It becomes much harder to tell errors and failures apart. A timeout cannot be classified as a test failure, because the production code that “executes for a while” does not contain testing assertions inside it. It is so by definition — we are running production code. Any other types of checks placed in it (e.g. runtime assertions), if they do trigger, should be classified as test errors, i.e. failing preconditions making the whole test invalid until they have been fixed.
For this reason, a test relying on a timeout to stop it from running forever automatically tells us: “I contain unknown errors, because part of the behavior is unspecified”.
### 5 Test is fast
A successful test run may be fast. But if its logic contains a timeout clause, the threshold for it is usually chosen to be conservatively high in relation to the median test run time. This is to minimize statistical probability of unrelated external inputs affecting the test outcome. E.g. for a test running for about 100 seconds, it is not uncommon to see its threshold to be set to 1000 seconds.
This makes such test unfriendly to human inspection exactly when we need it most, i.e. when it reports a failure. A failure by timeout will of course take the longest possible time to run.
This means rerunning the broken test during debugging iterations is very slow. We have to wait until the timeout clause to stop its execution every time we have made a change and wish to inspect its effects on the test.
## Can we avoid having a timeout?
It should now be clear why this testing anti-pattern is bad. How is it possible to stop relying on timeout?
The halting problem tells us that, for computers equivalent in their power to Turing machines, it is impossible to decide whether they will ever stop on all inputs. Tests are programs, so this limitation applies to them as well.
Enforcing a timeout is one way to make a test be less general in its computational power than a Turing machine. But by doing so in a general case, we are giving away control the test should have over the SUT, and knowledge about what the SUT is doing versus what it is supposed to do.
Is there any other way? If we designed our tests differently, could we avoid this weakness?
Let’s remember that there are other useful computational constructs which are strictly less powerful than Turing machines. I am talking about finite automata, context free grammars and push-down automata. For these machines you can determine whether they will stop or not. Surely they cannot represent an arbitrary computation. But that is exactly why we should use them for defining test harnesses.
In practice, this still means avoiding writing tests that:
• have loops with unbounded number of iterations, or
• stimulate the SUT in a way that causes it to perform unbounded number of iterations while calculating the reply.
If the test follows the arrange-act-assert pattern in its organization, it is most likely the arrange and act phases that should be restricted in how they operate.
## Is it possible to avoid being Turing-complete?
Note that I am stating that tests by itself should not be overly powerful. But tests exercise the SUT, and some components of SUT may happen to be Turing-complete. Addition of whatever practically useful test harness built around such SUT will not reduce its total computational power.
Let’s not forget that the host processor that executes actual machine instructions is Turing complete, as is the microcode used inside it. So, it is Turing machines all the way down.
In the face of these challenges, it is strange to tell: “do not build a Turing machine”.
## How to limit damages from Turing-completeness
You cannot avoid the potentially disastrous effects of the complexity hiding inside the nasty machine. But you can try to limit the scope of the uncertainty they threaten to create when a test fails by timeout.
Remember that tests are meant to help you pinpoint unintended changes in SUT behavior. A single timeout built around a big test leaves a huge scope for debugging in a case of timeout. To limit such damage, the initial creation of an integration test should be followed by these steps.
1. Split the scenario you test into smaller phases. Each phase should have its own assertion at the end to check that the SUT state is still within the expected.
2. Guard each phase by its own watchdog. Because the phases are shorter than the whole, their timeout thresholds will be smaller.
3. Whenever a test fails by reporting a timeout, fixing its cause in SUT is preceded by improvements in the test harness. A new assertion clause should be added to the test, that focuses on a newly discovered failure mode. Oftentimes splitting the test phase into new smaller phases is the best way to integrate the newly obtained knowledge.
In the end it comes down to having shorter, more focused, faster completing tests. Debugging them becomes easier as they tend to report more specific information closer to the point in SUT’s evolution where a divergence has appeared. Even if everything else fails and the test case is interrupted by a timeout, it happens earlier and the created uncertainty spans over a smaller scope.
Moreover, because such test phases depend on each other, a failure in an earlier phase saves time. There is no need to spend processor cycles attempting to run later phases which are already doomed to fail (or even worse, pass because they fail to sense for an earlier problem). That creates less distraction for the person analyzing new regressions in a big test suite. |
Kattis
# Box and Arrow Diagram
An example of a box and arrow diagram, taken from github.com/dicander/box_arrow_diagram
What an embarrassment! Itaf got 0/5 points in her last “Fundamental programming in Python” exam. She studies Engineering physics at KTH and is struggling with this course. She is not alone, as $60\%$ of her classmates failed the exam this year. The reason for this oddly high percentage is the so called box and arrow diagram (låd- och pildiagram).
In this part of the exam you are given a piece of Python code and you have to draw how the memory structure will look like when the program reaches a given line. Since Itaf is a high-rated competitive programmer her ego always came in the way whenever she tried to study for the test, because it felt “too easy”. But now she has become desperate and needs your help.
The box and arrow diagram is used to explain the memory structure inside Python. Simplified, the diagram can be seen as a directed graph with nodes (boxes) labeled from $1$ to $N$ and edges (arrows) labeled from $1$ to $M$. The boxes corresponds to the objects in the memory of a Python program. Box 1 is special, it represents the global object. An arrow being drawn from box $u$ to box $v$ in the diagram means that object $u$ stores a reference of object $v$. If $u$ stores multiple references of $v$, then you draw multiple arrows from $u$ to $v$. It is also possible for an object to contain references to itself.
An object $u$ is said to be alive if there exists a path from the global object to $u$ in the box and arrow diagram. Each object also has a reference counter. The reference counter of an object $u$ is defined as the number of arrows $(v,u)$ such that $v$ is alive.
Itaf now needs your help, and she will ask you $Q$ queries, each query can be one of two types.
• 1 X Output the reference counter of the object with label $X$.
• 2 Y Remove the arrow with label $Y$ from the diagram.
## Input
The first line consists of two space separated integers $N,M$ ($1 \leq N,M \leq 2 \cdot 10^5$), where $N$ is the number of boxes in the diagram and $M$ is the number of arrows in the diagram.
The next $M$ lines describe the arrows in the diagram. The $i$-th line contains $2$ space separated integers $U_ i,V_ i$ ($1 \leq U_ i,V_ i \leq N$), meaning the arrow with label $i$ goes from box $U_ i$ to box $V_ i$. Note that arrows forming loops and multi-edges are allowed.
The next line contains an integer $Q$ ($1 \leq Q \leq 2 \cdot 10^5$), the number of queries. The next $Q$ lines describe the $Q$ queries. The $j$-th query is given as a pair of space separated integers $C_ j, X_ j$ ($1 \leq C_ j \leq 2$).
• If $C_ j = 1$ then remove the arrow labeled $X_ j$ from the diagram ($1 \leq X_ j \leq M$).
• If $C_ j = 2$ then output the reference counter of object $X_ j$ ($1 \leq X_ j \leq N$).
It is guaranteed that there will not be two queries of type $1$ with same value of $X_ j$, meaning the same arrow will never be deleted twice.
## Output
For each query of type $2$, output a single line containing the reference count of object $Y_ j$.
Sample Input 1 Sample Output 1
3 4
1 2
2 3
1 2
3 3
7
2 2
2 3
1 4
2 3
1 1
1 3
2 3
2
2
1
0 |
# Cosmology & Gravitation
This series consists of talks in the areas of Cosmology, Gravitation and Particle Physics.
## Seminar Series Events/Videos
Currently there are no upcoming talks in this series.
## Maximum entropy, the universal dark matter density profile... and its destruction
Tuesday Oct 16, 2012
Speaker(s):
I review some recent developments in attempting to reconcile
the observed galaxy population with numerical models of structure formation in
the 'LCDM' concordance cosmology. Focussing on behaviour of dwarf galaxies, I
describe the infamous 'cusp-core' dichotomy -- a long-standing challenge to the
LCDM picture on small scales -- and use toy models to show how it is resolved
in recent numerical simulations (Pontzen & Governato 2012). I then discuss
the current observational status of this picture (Teyssier, Pontzen & Read
Collection/Series:
Scientific Areas:
## Voids in the SDSS: from demography to cosmology
Tuesday Oct 09, 2012
Speaker(s):
Cosmic voids are potentially a rich source of information
for both astrophysics and cosmology. To enable such science, we produce the
most comprehensive void catalog to date using the Sloan Digital Sky Survey Data
Release 7 main sample out to redshift z = 0.2 and the Luminous Red Galaxy
sample out to z = 0.44. Using a modified version of the parameter-free void
finder ZOBOV, we fully take into account the presence of survey boundary and
masks. We discuss basic catalog statistics such as number counts and redshift
Collection/Series:
Scientific Areas:
## Scalars with Higher Derivatives in Supergravity and Cosmology
Tuesday Oct 09, 2012
Speaker(s):
There are many situations in cosmology that
motivate the study of scalar fields with higher-derivative actions. The best-known
such situations are probably k-inflation (with DBI-inflation being a special
case) and models based on galileon theories, but even eternal inflation and
cyclic universes provide good reasons to study such theories. After an extended
discussion of the motivations, I will show how scalar field theories with
higher derivatives can be constructed in (minimal, 4-dimensional) supergravity,
Collection/Series:
Scientific Areas:
## Homogeneous and Isotropic Universe from Nonlinear Massive Gravity
Tuesday Oct 02, 2012
The question of finite range gravity, or equivalently,
whether graviton can have a non-zero mass, has been one of the major challenges
in classical field theory for the last 70 years.
Collection/Series:
Scientific Areas:
## Supermassive black holes in non-spherical galactic nuclei and enhanced rates of star capture events
Tuesday Sep 25, 2012
Speaker(s):
We consider the stellar-dynamical processes which lead to
the capture or tidal disruption of stars by a supermassive black hole, review
the standard theory of two-body relaxation and loss-cone repopulation in
spherical galactic nuclei, and extend it to the axisymmetric and triaxial
nuclear star clusters.
Collection/Series:
Scientific Areas:
## New developments in massive gravity
Tuesday Sep 18, 2012
The idea that the graviton may be massive has seen a
resurgence of interest due to recent progress which has overcome its
I will review this recent progress, which has
led to a consistent ghost-free effective field theory of
a massive graviton, with a stable hierarchy between the graviton mass and the
cutoff, and how this theory has the potential to resolve the naturalness
problem of the cosmological constant.
Collection/Series:
Scientific Areas:
## Spectral distortions of the CMB and what we might learn about early universe physics
Tuesday Sep 11, 2012
Speaker(s):
The spectrum of the cosmic microwave background (CMB) is known to be extremely close to a perfect blackbody. However, even within standard cosmology several processes occurring in the early Universe lead to distortions of the CMB at a level that might become observable in the future. This could open an exciting new window to early Universe physics.
Collection/Series:
Scientific Areas:
## Inflation from Magnetic Drift
Tuesday Sep 04, 2012
Speaker(s):
I will describe a new, generic mechanism for realizing a
period of slowly-rolling inflation through the use of an analog of 'magnetic
drift.' I will demonstrate how the mechanism works through two particular
worked examples: Chromo-Natural Inflation, which exists as a purely 4D
effective theory, and a version that can appear naturally in string theory.
Collection/Series:
Scientific Areas:
## Exoplanet observation by microlensing and wave optics features
Tuesday Aug 14, 2012
I will introduce the theory and experimental technique of extra-solar planet observation by the gravitational microlensing and give a report on recent results. Then I will discuss the wave optics features in the gravitational microlensing, the analogy of this effect in the astronomical scales to the Young's double slit experiment. Finally I will discuss about the possibility of observation of diffraction patterns in the microlensing experiment.
Collection/Series:
## A Flow of Dark Matter Debris: Exploring New Possibilities for Substructure
Tuesday Aug 07, 2012
Speaker(s):
Tidal stripping of dark matter from subhalos falling into the Milky Way produces narrow, cold tidal streams as well as more spatially extended debris flows'' in the form of shells, sheets, and plumes.Here we focus on the debris flow in the Via Lactea II simulation, and show that this incompletely phase-mixed material exhibits distinctive high velocity behavior. Unlike tidal streams, which may not necessarily intersect the Earth's location, debris flow is spatially uniform at 8 kpc and thus guaranteed to be present in the dark matter flux incident on direct detection experiments.
Collection/Series:
## LECTURES ON-DEMAND
### Roger Melko: Perimeter Institute and University of Waterloo
Speaker: Roger Melko |
Volume 256 - 34th annual International Symposium on Lattice Field Theory (LATTICE2016) - Vacuum Structure and Confinement
Flux Tubes in QCD with (2+1) HISQ Fermions
P. Cea, L. Cosmai,* F. Cuteri, A. Papa
*corresponding author
Full text: pdf
Pre-published on: February 16, 2017
Published on: March 24, 2017
Abstract
We investigate the transverse profile of the chromoelectric field generated by a quark-antiquark pair in the vacuum of (2+1) flavor QCD.
Monte Carlo simulations are performed adopting the HISQ/tree action discretization, as implemented in the publicly available MILC code, suitably modified to measure the chromoelectric field.
We work on the line of constant physics, with physical strange quark mass $m_s$ and light to strange mass ratio $m_l/m_s = 1/20$
DOI: https://doi.org/10.22323/1.256.0344
How to cite
Metadata are provided both in "article" format (very similar to INSPIRE) as this helps creating very compact bibliographies which can be beneficial to authors and readers, and in "proceeding" format which is more detailed and complete.
Open Access |
NEWS@SKY (Science&Space News)
Home To Survive in the Universe
.MYA{ color: yellow; text-decoration: none; :hover { color: red; text-decoration: none; } } Services
Why to Inhabit Top Contributors Astro Photo The Collection Forum Blog New! FAQ Login
# 5 Aql (5 Aquilae)
Contents
### Images
DSS Images Other Images
### Related articles
Observed Orbital EccentricitiesFor 391 spectroscopic and visual binaries with known orbital elementsand having B0-F0 IV or V primaries, we collected the derivedeccentricities. As has been found by others, those binaries with periodsof a few days have been circularized. However, those with periods up toabout 1000 or more days show reduced eccentricities that asymptoticallyapproach a mean value of 0.5 for the longest periods. For those binarieswith periods greater than 1000 days their distribution of eccentricitiesis flat from 0 to nearly 1, indicating that in the formation of binariesthere is no preferential eccentricity. The binaries with intermediateperiods (10-100 days) lack highly eccentric orbits. Tidal Effects in Binaries of Various PeriodsWe found in the published literature the rotational velocities for 162B0-B9.5, 152 A0-A5, and 86 A6-F0 stars, all of luminosity classes V orIV, that are in spectroscopic or visual binaries with known orbitalelements. The data show that stars in binaries with periods of less thanabout 4 days have synchronized rotational and orbital motions. Stars inbinaries with periods of more than about 500 days have the samerotational velocities as single stars. However, the primaries inbinaries with periods of between 4 and 500 days have substantiallysmaller rotational velocities than single stars, implying that they havelost one-third to two-thirds of their angular momentum, presumablybecause of tidal interactions. The angular momentum losses increase withdecreasing binary separations or periods and increase with increasingage or decreasing mass. Stellar Kinematic Groups. II. A Reexamination of the Membership, Activity, and Age of the Ursa Major GroupUtilizing Hipparcos parallaxes, original radial velocities and recentliterature values, new Ca II H and K emission measurements,literature-based abundance estimates, and updated photometry (includingrecent resolved measurements of close doubles), we revisit the UrsaMajor moving group membership status of some 220 stars to produce afinal clean list of nearly 60 assured members, based on kinematic andphotometric criteria. Scatter in the velocity dispersions and H-Rdiagram is correlated with trial activity-based membership assignments,indicating the usefulness of criteria based on photometric andchromospheric emission to examine membership. Closer inspection,however, shows that activity is considerably more robust at excludingmembership, failing to do so only for <=15% of objects, perhapsconsiderably less. Our UMa members demonstrate nonzero vertex deviationin the Bottlinger diagram, behavior seen in older and recent studies ofnearby young disk stars and perhaps related to Galactic spiralstructure. Comparison of isochrones and our final UMa group membersindicates an age of 500+/-100 Myr, some 200 Myr older than thecanonically quoted UMa age. Our UMa kinematic/photometric members' meanchromospheric emission levels, rotational velocities, and scattertherein are indistinguishable from values in the Hyades and smaller thanthose evinced by members of the younger Pleiades and M34 clusters,suggesting these characteristics decline rapidly with age over 200-500Myr. None of our UMa members demonstrate inordinately low absolutevalues of chromospheric emission, but several may show residual fluxes afactor of >=2 below a Hyades-defined lower envelope. If one defines aMaunder-like minimum in a relative sense, then the UMa results maysuggest that solar-type stars spend 10% of their entire main-sequencelives in periods of precipitously low activity, which is consistent withestimates from older field stars. As related asides, we note six evolvedstars (among our UMa nonmembers) with distinctive kinematics that liealong a 2 Gyr isochrone and appear to be late-type counterparts to diskF stars defining intermediate-age star streams in previous studies,identify a small number of potentially very young but isolated fieldstars, note that active stars (whether UMa members or not) in our samplelie very close to the solar composition zero-age main sequence, unlikeHipparcos-based positions in the H-R diagram of Pleiades dwarfs, andargue that some extant transformations of activity indices are notadequate for cool dwarfs, for which Ca II infrared triplet emissionseems to be a better proxy than Hα-based values for Ca II H and Kindices. Photometric Investigation of the Galaxy in the Direction of Serpens Cauda. A Catalog of Extinctions and DistancesA catalog of spectral types, color excesses, interstellar extinctionsand distances of 402 stars located in the Serpens Cauda dark cloudcomplex and the new results of photoelectric photometry in the Vilniussystem of 56 fainter stars in the same area are presented. Rotational velocities of A-type stars in the northern hemisphere. II. Measurement of v sin iThis work is the second part of the set of measurements of v sin i forA-type stars, begun by Royer et al. (\cite{Ror_02a}). Spectra of 249 B8to F2-type stars brighter than V=7 have been collected at Observatoirede Haute-Provence (OHP). Fourier transforms of several line profiles inthe range 4200-4600 Å are used to derive v sin i from thefrequency of the first zero. Statistical analysis of the sampleindicates that measurement error mainly depends on v sin i and thisrelative error of the rotational velocity is found to be about 5% onaverage. The systematic shift with respect to standard values fromSlettebak et al. (\cite{Slk_75}), previously found in the first paper,is here confirmed. Comparisons with data from the literature agree withour findings: v sin i values from Slettebak et al. are underestimatedand the relation between both scales follows a linear law ensuremath vsin inew = 1.03 v sin iold+7.7. Finally, thesedata are combined with those from the previous paper (Royer et al.\cite{Ror_02a}), together with the catalogue of Abt & Morrell(\cite{AbtMol95}). The resulting sample includes some 2150 stars withhomogenized rotational velocities. Based on observations made atObservatoire de Haute Provence (CNRS), France. Tables \ref{results} and\ref{merging} are only available in electronic form at the CDS viaanonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/393/897 ICCD Speckle Observations of Binary Stars. XXIII. Measurements during 1982-1997 from Six Telescopes, with 14 New OrbitsWe present 2017 observations of 1286 binary stars, observed by means ofspeckle interferometry using six telescopes over a 15 year period from1982 April to 1997 June. These measurements constitute the 23dinstallment in CHARA's speckle program at 2 to 4 m class telescopes andinclude the second major collection of measurements from the MountWilson 100 inch (2.5 m) Hooker Telescope. Orbital elements are alsopresented for 14 systems, seven of which have had no previouslypublished orbital analyses. On the HIPPARCOS photometry of chemically peculiar B, A, and F starsThe Hipparcos photometry of the Chemically Peculiar main sequence B, A,and F stars is examined for variability. Some non-magnetic CP stars,Mercury-Manganese and metallic-line stars, which according to canonicalwisdom should not be variable, may be variable and are identified forfurther study. Some potentially important magnetic CP stars are noted.Tables 1, 2, and 3 are available only in electronic form at the CDS viaanonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/Abstract.html MSC - a catalogue of physical multiple starsThe MSC catalogue contains data on 612 physical multiple stars ofmultiplicity 3 to 7 which are hierarchical with few exceptions. Orbitalperiods, angular separations and mass ratios are estimated for eachsub-system. Orbital elements are given when available. The catalogue canbe accessed through CDS (Strasbourg). Half of the systems are within 100pc from the Sun. The comparison of the periods of close and widesub-systems reveals that there is no preferred period ratio and allpossible combinations of periods are found. The distribution of thelogarithms of short periods is bimodal, probably due to observationalselection. In 82\% of triple stars the close sub-system is related tothe primary of a wide pair. However, the analysis of mass ratiodistribution gives some support to the idea that component masses areindependently selected from the Salpeter mass function. Orbits of wideand close sub-systems are not always coplanar, although thecorresponding orbital angular momentum vectors do show a weak tendencyof alignment. Some observational programs based on the MSC aresuggested. Tables 2 and 3 are only available in electronic form at theCDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/Abstract.html On the nature of the AM phenomenon or on a stabilization and the tidal mixing in binaries. II. Metallicity and pseudo-synchronization.We reveal sufficient evidences that for Am binaries the metallicitymight depend on their orbital periods, P_orb_, rather than on vsini. Inparticular, δm_1_ index seems to decrease with increasing orbitalperiod up to at least P_orb_=~50d, probably even up to P_orb_=~200d.This gives further support to our "tidal mixing + stabilization"hypothesis formulated in Part I. Moreover, while the most metallic Amstars seem to have rather large periods the slowest rotators are foundto exhibit substantially shorter P_orb_. A questioning eye is thus caston the generally adopted view that Am peculiarity is caused by asuppressed rotationally induced mixing in slowly rotating single'stars. The observed anticorrelation between rotation and metallicity mayhave also other than the textbook' explanation, namely being the resultof the correlation between metallicity and orbital period, as themajority of Am binaries are possibly synchronized. We further argue thatthere is a tendency in Am binaries towards pseudo-synchronization up toP_orb_=~35d. This has, however, no serious impact on our conclusionsfrom Part I; on the contrary, they still hold even if this effect istaken into account. On the nature of the AM phenomenon or on a stabilization and the tidal mixing in binaries. I. Orbital periods and rotation.The paper casts a questioning eye on the unique role of the diffusiveparticle transport mechanism in explaining the Am phenomenon and arguesthat the so-called tidal effects might be of great importance incontrolling diffusion processes. A short period cutoff at =~1.2d as wellas a 180-800d gap were found in the orbital period distribution (OPD) ofAm binaries. The existence of the former can be ascribed to the state ofthe primaries with the almost-filled Roche lobes. The latter couldresult from the combined effects of the diffusion, tidal mixing andstabilization processes. Because the tidal mixing might surpassdiffusion in the binaries with the orbital periods P_orb_ less thanseveral hundred days and might thus sustain the He convection zone,which would otherwise disappear, no Am stars should lie below thisboundary. The fact that they are nevertheless seen there implies theexistence of some stabilization mechanism (as, e.g., that recentlyproposed by Tassoul & Tassoul 1992) for the binaries with orbitalperiods less than 180d. Further evidence is given to the fact that theOPD for the Am and the normal binaries with an A4-F1 primary arecomplementary to each other, from which it stems that Am stars are closeto the main sequence. There are, however, indications that they haveslightly larger radii (2.1-3 Rsun_) than expected for theirspectral type. The generally accepted rotational velocity cutoff at=~100km/s is shown to be of little value when applied on Am binaries ashere it is not a single quantity but, in fact, a function of P_orb_whose shape is strikingly similar to that of the curves of constantmetallicity as ascertained from observations. This also leads to thewell known overlap in rotational velocities of the normal and Am starsfor 402.5d.We have exploited this empirical cutoff function to calibrate thecorresponding turbulent diffusion coefficient associated with tidalmixing, having found out that the computed form of the lines of constantturbulence fits qualitatively the empirical shape of the curves ofconstant metallicity. As for larger orbital periods(20d55km/s found by Burkhart(1979) would then be nothing but a manifestation of insufficientlypopulated corresponding area of larger P_orb_. The Relation between Rotational Velocities and Spectral Peculiarities among A-Type StarsAbstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1995ApJS...99..135A&db_key=AST Vitesses radiales. Catalogue WEB: Wilson Evans Batten. Subtittle: Radial velocities: The Wilson-Evans-Batten catalogue.We give a common version of the two catalogues of Mean Radial Velocitiesby Wilson (1963) and Evans (1978) to which we have added the catalogueof spectroscopic binary systems (Batten et al. 1989). For each star,when possible, we give: 1) an acronym to enter SIMBAD (Set ofIdentifications Measurements and Bibliography for Astronomical Data) ofthe CDS (Centre de Donnees Astronomiques de Strasbourg). 2) the numberHIC of the HIPPARCOS catalogue (Turon 1992). 3) the CCDM number(Catalogue des Composantes des etoiles Doubles et Multiples) byDommanget & Nys (1994). For the cluster stars, a precise study hasbeen done, on the identificator numbers. Numerous remarks point out theproblems we have had to deal with. All-sky Stromgren photometry of speckle binary starsAll-sky Stromgren photometric observations were obtained for 303 specklebinaries. Most stars were in the range of V = 5-8. These data, whencombined with ratios of intensities from the CHARA speckle photometryprogram, will allow the determination of photometric indices for theindividual components of binary stars with separations as small as 0.05arcsec. These photometric indices will complement the stellar massesfrom the speckle interferometry observations to provide a much improvedmass-luminosity relationship. Interferometric observations of double stars in 1986-1990A tabulation is presented for the measurements of double stars that havebeen conducted with the photometric phase-grating interferometer of theMt. Sanglok 1-m reflector. The tabulation encompasses the observationepoch, the position angle, the separation in arcsec, the magnitudedifference, formal errors in P.A. and separation, SAO number, andcoordinates for 2000.0. The relative positions and proper motions of components for 32 triple stars from HIPPARCOS input catalogue.Not Available Spectroscopic binaries - 15th complementary catalogPublished observational data on the orbital characteristics of 436spectroscopic binaries, covering the period 1982-1986, are compiled intables. The data sources and the organization of the catalog are brieflydiscussed, and notes are provided for each item. ICCD speckle observations of binary stars. II - Measurements during 1982-1985 from the Kitt Peak 4 M telescopeThis paper represents the continuation of a systematic program of binarystar speckle interferometry initiated at the 4 m telescope on Kitt Peakin late 1975. Between 1975 and 1981, the observations were obtained witha photographic speckle camera, the data from which were reduced byoptical analog methods. In mid-1982, a new speckle camera employing anintensified charge-coupled device as the detector continued the programand necessitated the development of new digital procedures for reducingand analyzing speckle data. The camera and the data-processingtechniques are described herein. This paper presents 2780 newmeasurements of 1012 binary and multiple star systems, including thefirst direct resolution of 64 systems, for the interval 1982 through1985. ICCD speckle observations of binary stars. I - A survey for duplicity among the bright starsA survey of a sample of 672 stars from the Yale Bright Star Catalog(Hoffleit, 1982) has been carried out using speckle interferometry onthe 3.6-cm Canada-France-Hawaii Telescope in order to establish thebinary star frequency within the sample. This effort was motivated bythe need for a more observationally determined basis for predicting thefrequency of failure of the Hubble Space Telescope (HST) fine-guidancesensors to achieve guide-star lock due to duplicity. This survey of 426dwarfs and 246 evolved stars yielded measurements of 52 newly discoveredbinaries and 60 previously known binary systems. It is shown that thefrequency of close visual binaries in the separation range 0.04-0.25arcsec is 11 percent, or nearly 3.5 times that previously known. E. W. Fick Observatory stellar radial velocity measurements. I - 1976-1984Stellar radial velocity observations made with the large vacuumhigh-dispersion photoelectric radial velocity spectrometer at FickObservatory are reported. This includes nearly 2000 late-type starsobserved during 585 nights. Gradual modifications to this instrumentover its first eight years of operation have reduced the observationalerror for high-quality dip observations to + or - 0.8 km/s. The Sirius superclusterPhotometric data on the chemical composition of 927 A stars in the UrsaMajor stream, called the Sirius supercluster, were used to estimate theage and place of formation of the objects. The stars studied are in thesolar neighborhood and have been observed to be co-moving in a velocityellipsoid with a (U, V) velocity of 10.3 km/sec and concentrated in aspatial volume less than 10 pc across. The Stromgren and Geneva systemphotometric data show that the supercluster is homogeneous in chemicalcontent, although the value of the forbidden Fe/H ratio could not beprecisely determined. The supercluster age is projected to be from260-620 Myr, with the origin having been in the Carina spiral arm of theGalaxy. The calibration of interferometrically determined properties of binary starsWith the advent of speckle interferometry, high angular resolution hasbegun to play a routine role in the study of binary stars. Speckle andother interferometric techniques not only bring enhanced resolution tothis classic and fundamental field but provide an equally important gainin observational accuracy. These methods also offer the potential forperforming accurate differential photometry for binary stars of verysmall angular separation. This paper reviews the achievements of moderninterferometric techniques in measuring stellar masses and luminositiesand discusses the special calibration problems encountered in binarystar interferometry. The future possibilities for very high angularresolution studies of close binaries are also described. Improved study of metallic-line binariesFor the sake of completeness, a new study has been made of the frequencyof binaries among classical metallic-line (Am) stars and of thecharacteristics of these systems. For an initial sample of 60 Am stars,about 20 coude spectra and radial velocities were obtained each. Whencombined with excellent published orbital elements for some systems, thenew material yields 16 SB2s, 20 SB1s, and 20 visual and occultationcompanions not already counted as spectroscopic companions. Extensivedetails are given about the observations, radial velocities, and binaryorbits. Evolutionary expansion during their main sequence lifetime isseen as an additional mechanism (besides tidal braking) acting in closebinaries to lower rotational velocities below 100 km/s. Rotation velocities of metallic-line starsThe rotation velocities (V sin i) of 81 Am stars were determined usingspectra of dispersion 15 A/mm. The profiles of Fe I 4045 A and Sr II4215 A lines were compared with the computed profiles. The line widthand its ratio to the central depth are found to be most sensitive to therotation velocity. The hydrogen spectral types obtained from the H-gammaequivalent width are also given. It is noted that the extremal Am starHR 4646 has a relatively high rotation velocity of at least 70 km/s. The nature of the visual companions of AP and AM starsThe stars in 43 visual multiples with Ap or Am primaries have beenclassified, and the fraction of systems that have Ap or Am secondariesis counted. The numbers of Ap secondaries are too few to be informative,but an apparent excess of Am secondaries is found. That result isunderstandable in terms of the (published) moderate correlation inrotational velocities between components in visual multiples. But invarious open clusters, the variations in frequencies of Ap and Am starscan be explained probably as statistical fluctuations in small numbersof stars, indicating no tendency for abnormal stars to group togetherfor dimensions larger than those of visual multiples. Speckle interferometric measurements of binary stars. VIIISix hundred measurements of 331 binary stars observed during 1980 bymeans of speckle interferometry with the 4 m telescope at Kitt PeakNational Observatory are presented. Thirty-two systems are directlyresolved for the first time. Newly resolved spectroscopic binariesinclude HR 2001, 53 Dam, HR 6388, HR 6469, 31 Omicron-2 Cyg, HR 7922,and alpha Equ. The Sirius group as a moving superclusterWithout use of trigonometric parallaxes, the distances of some 50 brightstars have been determined on the basis of their well-determined propermotions and membership in a supercluster that includes Sirius. Theastrometric parallaxes are in excellent agreement with those obtainedfrom photometric parameters and, for the stars within 40 pc of the sun,they are also in agreement with trigonometric determinations. Thesupercluster stars are near 2.4 x 10 to the 8th yr old with (Fe/H) near-0.1. The resulting color-luminosity array confirms the expectedmain-sequence displacement for stars with a metal abundance only abouttwo thirds that of the Hyades supercluster members. The superclustercontains the UMa cluster and M39 (NGC 7092) but the former, at least,has only attracted attention because of the concentration of a fewbright (Dipper) supercluster members in Ursa Major. Lists of photometric AM candidatesThe Geneva photometric m parameter (Nicolet and Cramer, 1982) is used inorder to select Am photometric candidates from the Rufener (1981)catalogue. Two lists are given, the first containing field stars and thesecond cluster stars. According to the photometric criteria thediffusion process probably responsible for the Am phenomenon takes placerather quickly as Am candidates are present in young clusters. It isconfirmed that the phenomenon is enhanced by low rotational velocity andhigh metallicity. The age seems to slightly affect the Am phenomenon. The absolute magnitude of the AM starsAbstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1981A&A....93..155G&db_key=AST Catalog of profiles and equivalent widths of the CA II K line in the spectra of metallic-line starsProfiles of the Ca II K line for 87 bright Am, A, and F stars weremeasured on spectrograms with a dispersion of 15 A/mm. Halfwidths of theprofiles for fixed values of line depth, central depths, and equivalentwidths are presented. In contrast to the case of peculiar stars, theobserved K-line profiles in the metallic-line stars do not show anypeculiar structure. Erratum - Discordances Between SAO and HD Numbers for Bright StarsNot Available
Submit a new article
• - No Links Found - |
# Tag Info
3
The working definition I have in my head doesn't fit the more rigorous definitions others have put in their answers. I think of exponential growth and decay as being constant percentage growth or decay from or toward an asymptote. My favorite example is temperature of an object, which is shifted with the ambient temperature being the asymptote. I use y = a*b^...
0
Now, I've visited many sites and they all seem to conclude that the following is the definition of an exponential function: $f(x)=ab^x$, $f(x)=ab^{cx+d}$ with suitable restrictions on constants $a,b,c,d$. These definitions are not good (unless the restrictions are $a=1$ in the first case and $ab^d=1$ in the second). A reasonable definition of "...
6
To start with an opinion, I think that this classification exercise is kind of silly. The student is being asked to put functions into some categories without having a clear idea about what those categories mean or are used for. We introduce definitions and categorizations in order to help us understand abstract ideas. A definition without the underlying ...
3
I say the key descriptor of a exponential function is constant multiplicative rate of change, much as the descriptor of a linear function is constant additive rate of change. The function $f(x)=a(1.5)^x$ increases by 50% when $x$ increases by 1: $$\frac{f(x+1)}{f(x)} = \frac{a(1.5)^{x+1}}{a(1.5)^x} = 1.5$$ But adding a non-zero constant changes that: \frac{...
Top 50 recent answers are included |
# A Classic Joke Proof
While this may stretch the stated point of this blog, I’m ultimately the one in control, and I’ll decide what gets posted here. And I think it’s really funny and would be fun to write up.
I am nowhere near the first person who has ever told this joke.
$\textbf{Claim:}$ For all $n > 2$, $\sqrt[n]{2}\not\in\mathbb{Q}$.
$\textbf{Proof:}$ For real numbers $a, b \in \mathbb{R}$, such that $b \neq 0$, suppose that $(\frac{a}{b})^n = 2$. We have that $a^n = 2b^n = b^n + b^n$.
By Fermat’s last theorem, we have $a, b$ are not both integers, and hence that $\frac{a}{b}$ cannot be rational (it obviously implies $a, b$ aren’t both integers, but how does it imply that they aren’t both rational? Think on this…).
Unfortunately, I have not yet read and understood Wiles’ proof of FLT, so I do not know if this is a circular argument. |
Drag the plus sign at the end of the cell towards the end of column to apply it over. We will use the MOD function in the first row of Modulusfield. 1. how do I calculate the strain for a respective stress given the data in the excel document ? It should take Integer as 6 and 3 and answer should come as 2 but it is showing 1 in excel 2010. Modulus of Resilience (ur) The amount of strain energy per unit volume required to stress a material from zero to the yield stress limit. This site uses Akismet to reduce spam. I know that strain should be positive. Once, the sight of someone's electronic bug zapper made me feel ill. Why did it cause this reaction? Modulus of elasticity: This is the stress/strain or the slope of the elastic region. The modulus of resilience is defined as the maximum energy that can be absorbed per unit volume without creating a permanent distortion. U r: The modulus of resilience (J/m 3) How do you determine the modulus of elasticity, yield stress, ultimate stress, modulus of toughness w/ excel? It will yield 31 (as 31 times 2 makes up 62). It is the area under the stress-strain curve upto fracture point. 1 on Capitol Hill, http://www.me.mtu.edu/~mavable/Book/Chap3.pdf. The emphasis of this definition should be placed on the ability to absorb energy before fracture. Printable Print your calculations in pdf. You can also copy or drag the cell containing the formula to calculate the modulus/quotient for the corresponding data set. Ductile material can take more strain upto the fracture point than the brittle material The modulus of elasticity equation is used only under conditions of elastic deformation from compression or tension. In the example to the left, the modulus of toughness is determined by summing the areas A 1 through A 4. Now I need to calculate the following. It is defined as the amount of strain energy density (strain on a unit volume of material) that a given material can absorb before it fractures. GOOGLE it if need be! Ramberg-Osgood Equation. For the best answers, search on this site https://shorturl.im/awuBB, Get a Mechanical Systems Design Book or look at a website. Since I'm basically doing this from memory, I don't have material for you to look at. I mean sorry but I'm too lazy to get those numbers from my MSD Book. Toughness. 1. These functions are dead-easy to use and take only two arguments for evaluation. Ultimate stress: We usually just call this tensile strength, this is the highest point in the stress-strain graph. Modulus of Toughness (ut) Amount of work per unit volume of a material required to carry that material to failure under static loading. You can also copy or drag the cell containing the formula to calculate the modulus/quotient for the corresponding data set. Its the first region in the graph and to calculate this pick a point in the elastic region and solve for stress/strain. The Modulus and Quotient functions require data range i.e., the cell location where values are residing or direct values to calculate modulus and the quotient, respectively. This problem has been solved! Use the stress at this point for tensile strength. What are the Sensing Transducers for Power Factor using latest method indicate ?Technique ? It’s advisable to check the the divisor values before finding the modulus and quotient in order to get correct results for your data set. Equal to the area under the entire stress-strain curve. The relationship between stress and strain is given by Hooke’s law, which states that within elastic limit, the stress inflicted on a body is directly proportional to the strain caused on it. Modulus of elasticity: This is the stress/strain or the slope of the elastic region. Modulus of toughness quantifies this toughness. The syntax of MOD function is; In function arguments, the number refers to the number that is to be divided and divisor refers to value, which will be considered as the divisor (a number which divides). It takes the initial length and the extension of that length due to the load and creates a ratio of the two. 3. Modulus of toughness is the ability of a material to absorb energy in plastic deformation. To calculate this you must take the integral. 1. A stress-strain graph gives us many mechanical properties such as strength, toughness, elasticity, yield point, strain energy, resilience, and elongation during load. the interval of [a,b] becomes, y = Z b. a. f ( x) d x ≈ ( b − a) f ( b) + f ( a) 2 (8) Error of the calculation using trapezoidal rule is, obvi-. the excel data consists of more than 3000 entries of data. Calculations follow a familiar format, input is entered in text boxes, calculation report is created by pressing the calculate button. It can also be defined as the strain energy stored per unit volume of the material upto fracture. ? I am going to assume your values are stress versus strain. Answer (a) through (c) for the following data: (you can enter the data into an excel spreadsheet if you’d prefer). You can also check out our previously reviewed Excel functions; RANK.AVG and RANK.EQ, WORKDAY.INTL, LARGE, CEILING, TIME, ADDRESS ,FACT (factorial), MAX,MIN, MAXA, MINA and EXACT, Logical Functions, INFO, SUMSQ, DOLLAR, SUMPRODUCT, SUMIF, COUNTIF, VLOOKUP, HLOOKUP ,PMT, & LEN. Strain. Draw a Graph: Plot FORCE (Y axis) against Extension (X axis). Question: How I Calculate The Modulus Of Toughness For The Given Stress-strain Diagram? In function arguments, the number refers to data value which will be divided and denominator refers to the value which divides the number. For this we will add two more fields namely Modulus and Quotient adjacent to existing fields. Yield Stress: This is the point where plastic deformation occurs, in other words anything up to this point is elastic deformation. (Also called Young's Modulus) 1. It can be calculated by integrating the stress-strain curve from zero to the elastic limit. Required fields are marked *. * The modulus of resilience is maximum energy (Ur )that can be absorbed per unit volume without creating a permanent distortion (within the elastic limit). Scroll down to find the formula and calculator. the rectangular portion) contributes to the toughness of the material. For finding out quotient we will be writing a simple QUOTIENT function. How to Enter a Code on google.com/device (Solved), DNS Server Not Responding: How to Fix DNS Not Responding on Windows 10, How to turn on Microsoft Teams guest access, How to schedule a meeting in Microsoft Teams, Word 2010 Change Font Color With Gradient Fill, Search Outlook 2010 Items And Files With Xobni. It can be determined in a test by calculating the total area under the stress-strain curve up … To get started, Launch Excel 2010 spreadsheet in which you need to find out modulus and quotient. Excel 2010 includes a build-in function to find out Modulus and Quotient instantly. 1995, Plummer 1997, Karger-Kocsis 1999a, 1999b). See the answer. The modulus of elasticity is simply stress divided by strain: E=\frac {\sigma} {\epsilon} E = ϵσ with units of pascals (Pa), newtons per square meter (N/m 2) or newtons per square millimeter (N/mm 2). Easy to use Enter data, press calculate, done! Learn how your comment data is processed. Finding out modulus and quotient manually can become a tiresome process, especially when you are dealing with big numbers. Join Yahoo Answers and get 100 points today. (b)From the given data, the elastic modulus (i.e., the slope of the elastic part) is σ/ε = 100 MPa / 0.0005 = 200 / … For instance we have included a spreadsheet containing fields; S.no, Number and Divisor. Still have questions? To determine these values it is best to look at your resulting graph. A better calculation of the modulus of toughness could be made by using the Ramberg-Osgood equation to approximate the stress-strain curve, and then integrating the area under the curve. Modulus of toughness: This is the area under the curve of the stress-strain graph up to the breaking point. The modulus of resilience ‘μ’ for a given compound represents the area under the elastic portion of the stress-strain curve for that compound, and is written as: μ = σ 1 2 ÷ 2E. 3. Observe on the chart where the graph starts to level off. Strain = Extension/Length = L/L. Looking for Young's modulus calculator? The syntax of Quotient function is. . We will write this function as; Where B2 and C2 are location of cells, where number and denominator value are residing. What Would Be The Effect Of An Impact That Is Between The Modulus Of Resilience And Modulus Of Toughness? Young's Modulus: Also known as the Modulus of Elasticity, is a measure of material resistance to axial deformation. I have all my values and have them plotted. Determine the following: Ultimate Tensile Stress (UTS), MPa; Yield Stress (YS), MPa; Modulus of Elasticity; 2. The ability of a metal to deform plastically and to absorb energy in the process before fracture is termed toughness. It also helps in fabrication. However, doing a quick search this document might provide more information (can not verify content, but contains some graphs that might be useful). Your email address will not be published. The area under the stress-strain curve is called toughness. Your email address will not be published. Can you calculate the output voltage and current by decoding the entire circuit ? It should be noted how greatly the area under the plastic region of the stress-strain curve (i.e. For example, there are pronounced differences in deformation behavior, modulus, strength, toughness, and failure mode in α- and β-spherulitic PP (Aboulfaraj et al. 2. Some of these validating conditions are being called into question for being ... Excel to process the raw load versus displacement data. Variables. E is Young's modulus. Stress/Strain = Constant = Modulus of elasticity. Graph of Stress Vs. Strain. E modulus of elasticity F(a/W) polynomial based on the crack length divided by the width ... toughness of a material including all the conditions and restrictions for validating the results. Whether you are looking to perform extrusion, rolling, bending or some other operation, the values stemming from this graph will help you to determine the forces necessary to induce plastic deformation. Each calculation can be viewed in printer-friendly format. We need to find out modulus and quotient for values in Number field and Divisor field. 4. Modulus of toughness is measured in units of PSI or Pascals. (a) From the plotted data, σY is ~500 MPa and σUTS is ~900 MPa. The stress-strain curve is approximated using the Ramberg-Osgood equation, which calculates the total strain ( elastic and plastic) as a function of stress : where σ is the value of stress, E is the elastic modulus of the material, S ty is the tensile yield strength of the material, and n is the strain hardening exponent of the material which can be calculated based on the provided inputs. If you are measuring tensile stress and strain, then the instrument may have given you "load" for the y-axis and you measured delta-L on the x-axis. Plot STRESS (Y axis) against STRAIN (X axis) Demonstrate relative measures for Resilience and Toughness using the appropriate graph. Trump is trying to get around Twitter's ban, Watch live: Tampa Bay Bucs vs Washington Football Team, Pro-Trump rocker who went to D.C. rally dropped by label, Woman dubbed 'SoHo Karen' snaps at morning TV host, Official: Trump went 'ballistic' after being tossed off Twitter, NFL owner's odd declaration alters job openings rankings, 'Punky Brewster': New cast pic, Peacock premiere date, 'Finally': Celebrities react to Trump's Twitter ban, Student loan payments pause will continue: Biden official, Unhappy soccer player's troll attempt backfires, GOP senator becomes public enemy No. These functions are dead-easy to use and take only two arguments for evaluation. To calculate the toughness of each film Excel was also used to estimate the from FOOD 4070 at University of Guelph The stress value at that point is your Yield Stress. is stress; Another definition is the ability to absorb mechanical energy up to the point of failure. Units are Pa or psi. Modulus of Toughness. The Modulus and Quotient functions require data range i.e., the cell location where values are residing or direct values to calculate modulus and the quotient, respectively. Subsequently, these calculated single crystal elastic constants were applied to predict the Youngs modulus, using Bunges method with quantitative texture data which were determined by X-ray and neutron diffraction techniques. Where, μ is the modulus of resilience, σ 1 is the yield strain and. The modulus of resilience is proportional to the area under the elastic portion of the stress-strain diagram. Normal Strain is a measure of a materials dimensions due to a load deformation. Get your answers by asking now. Units are Pa or psi. Fracture Toughness Calculator Resistive Ability of Material Calculator getcalc.com's Fracture Toughness Calculator is an online mechanical engineering tool for material analysis to calculate the resistive ability of any material against fracture, in both US customary & metric (SI) units. Excel 2010 includes a build-in function to find out Modulus and Quotient instantly. We will be writing this function as; In function arguments B2 and C2 refer to location of the cell where number and divisor value is residing. Apply the formula in the whole quotient field to find out quotient for all Number and Divisor field values. A concept of lithium equivalent has been proposed in order to calculate single crystal elastic constants C{sub 11}, C{sub 12} and C{sub 44} for commercial Al sheet alloys. Instron Bluehill Calculation Reference Reference Manual - Software Manual Number Help Version 1.1 www.instron.com Related formulas. To determine these values it is best to look at your resulting graph. It will yield 0 as remainder (as 62 is completely divisible by 2). At present, how high can we go when building skyscrapers before the laws of physics deems it unsafe ? Its value is obtained by measuring the slope of the axial stress-strain curve in the elastic region. Or Pascals per unit volume of the elastic region and current by decoding the entire stress-strain curve from to. To calculate the strain energy stored per unit volume of the cell containing the formula to the. Areas a 1 through a 4 stress value at that point is your yield stress, of. Am going to assume your values are stress versus strain calculated by integrating stress-strain! ( X axis ) Demonstrate relative measures for Resilience and toughness using the appropriate graph stress-strain graph have! Can also copy or drag the cell containing the formula to calculate the strain energy stored unit... Up 62 ) lazy to get started, Launch excel 2010 spreadsheet which. In the stress-strain curve is called toughness text boxes, calculation report is created pressing. ) from the plotted data, σY is ~500 MPa and σUTS is ~900 MPa this function as ; B2! Come as 2 but it is showing 1 in excel 2010 includes a build-in function find... W/ excel anything up to the point of failure best answers, search on this site https:,! Report is created by pressing the calculate button energy stored per unit volume the..., in other words anything up to the area under the plastic region of the two the to! Energy stored per unit volume of the stress-strain graph or the slope of the cell the. How high can we go when building skyscrapers before the laws of physics deems it unsafe it takes the length. Resilience, σ 1 is the ability of a materials dimensions due to a load.... To apply it over field values for finding out modulus and quotient all! It over write this function as ; where B2 and C2 are of. A mechanical Systems Design Book or look at a website the excel document should come 2... Demonstrate relative measures for Resilience and modulus of elasticity: this is the ability to mechanical. The toughness of the stress-strain graph up to the load and creates a ratio of the graph! Should be placed on the chart where the graph starts to level off axial deformation quotient for Number... Slope of the stress-strain curve ( i.e the data in the elastic region and solve stress/strain. Raw load versus displacement data strain energy stored per unit volume of the material a graph: Plot FORCE Y! This site https: //shorturl.im/awuBB, get a mechanical Systems Design Book look., how high can we go when building skyscrapers before the laws of physics deems unsafe. Point is elastic deformation from compression or tension your values are stress versus strain to... Being... excel to process the raw load versus displacement data to existing fields latest method indicate? Technique conditions! Units of PSI or Pascals Impact that is Between the modulus of elasticity, yield.! Emphasis of this definition should be noted how greatly the area under the stress-strain graph mechanical Systems Design or... Divisible by 2 ) this is the highest point in the graph and to absorb energy in the elastic.... Units of PSI or Pascals curve from zero to the breaking point B2 and C2 are of. Excel 2010 are being called into question for being... excel to process the raw load versus displacement data do. Toughness: this is the yield strain and can become a tiresome,... For finding out modulus and quotient the modulus of Resilience and modulus of Resilience, σ 1 is the of... Dead-Easy to use and take only two arguments for evaluation sign at the end of the cell the. 2010 includes a build-in function to find out modulus and quotient instantly tensile strength, this is the highest in. These validating conditions are being called into question for being... excel to process raw. Apply it over contributes to the area under the stress-strain graph ability to absorb energy in the quotient! Factor using latest method indicate? Technique quotient we will write this function as ; where B2 and C2 location! Highest point in the elastic portion of the elastic region writing a simple quotient.! This definition should be noted how greatly the area under the elastic and! Stress value at that point is elastic deformation which divides the Number evaluation! ( X axis ) Demonstrate relative measures for Resilience and modulus of elasticity yield. The corresponding data set definition is the highest point in the elastic region load creates! Of material resistance to axial deformation process before fracture ( a ) from the plotted data, is... Resilience and modulus of toughness w/ excel relative measures for Resilience and modulus of Resilience is proportional to the and! Observe on the ability to absorb energy in the process before fracture curve of stress-strain... N'T have material for you to look at stress, modulus of toughness is determined by summing the areas 1... Load deformation a build-in function to find out modulus and quotient for all Number and denominator refers to toughness... Dealing with big numbers, σY is ~500 MPa and σUTS is ~900 MPa get a mechanical Design... I am going to assume your values are stress versus strain volume of the axial stress-strain curve from to. Calculations follow a familiar format, input is entered in text boxes, calculation report is created by pressing calculate! As 6 and 3 and answer should come as 2 but it the. Will add two more fields namely modulus and quotient manually can become a tiresome process, especially you! Quotient we will add two more fields namely modulus and quotient ) relative! For values in Number field and Divisor field elastic deformation is used only under conditions of elastic deformation compression... A 4 what Would be the Effect of An Impact that is Between the modulus of,!, get a mechanical Systems Design Book or look at a website process, especially when you dealing... For you to look at toughness of the material upto fracture point quotient.. Manually can become a tiresome process, especially when you are dealing with big.! Ratio of the material upto fracture point question: how I calculate the strain for a respective stress the. High can we go when building skyscrapers before the laws of physics how to calculate modulus of toughness in excel it unsafe to this. You can also copy or drag the cell containing the formula to calculate the modulus of elasticity this. Look at your resulting graph conditions how to calculate modulus of toughness in excel elastic deformation from compression or tension elastic region a ) from the data... That is Between the modulus of toughness for the corresponding data set Factor using latest indicate! Get started, Launch excel 2010 includes a build-in function to find out modulus and quotient.. Formula in the elastic region stress/strain or the slope of the axial curve. Which you need to find out modulus and quotient adjacent to existing fields X ). Extension of that length due to the breaking point tensile strength, this is the stress/strain or slope... These validating conditions are being called into question for being... excel to process the raw load versus displacement.... Divisor field to calculate the output voltage and current by decoding the entire stress-strain upto. Included a spreadsheet containing fields ; S.no, Number and denominator value are residing point for strength! Equal to the breaking point which divides the Number ) from the plotted data, σY ~500! Can you calculate the modulus/quotient for the corresponding data set the plastic region of the upto... Fracture is termed toughness that point is your yield stress: we usually call. Is your yield stress, ultimate stress, ultimate stress: this is the to. Should be placed on the ability to absorb mechanical energy up to the toughness of stress-strain.
Can I Use Color Prep Twice, Pizza Vino Near Me, Residence Card Poland, Wholesale Fat Quarters Uk, Laser Cut Foam Inserts, Fix And Fogg Review, Shrimp Toast Recipe David Chang, Albemarle - Gis, Kitchen And Living Room Designs Combine, |
Frequent Visitor
# EXCEL FORMULA
One and all
I am trying to set a formula where using an IF statement if in my case cell E39 equals No then cell P13 is set at Zero and is locked for editing and if E39 equals Yes the P13 is not locked and a figure can be manually placed in the cell.
Tried doing a Macro for the above as well but no luck
Any guidance you can give will be greatly appreciated
.
0 Replies |
# Probability
Probabilities are the central subject of the discipline of Probability theory. $$\mathbb{P}(X)$$ denotes our level of belief, or someone’s level of belief, that the proposition $$X$$ is true. In the classical and canonical representation of probability, 0 expresses absolute incredulity, and 1 expresses absolute credulity.
knows-requisite(Ability to read logic): Furthermore, mutually exclusive events have additive classical probabilities: $$\mathbb{P}(X \wedge Y) = 0 \implies \mathbb{P}(X \vee Y) = \mathbb{P}(X) + \mathbb{P}(Y).$$
For the standard probability axioms, see https://en.wikipedia.org/wiki/Probability_axioms. write up page on arbital about probability axioms.
# Notation
$$\mathbb{P}(X)$$ is the probability that X is true.
$$\mathbb{P}(\neg X) = 1 - \mathbb{P}(X)$$ is the probability that X is false.
$$\mathbb{P}(X \wedge Y)$$ is the probability that both X and Y are true.
$$\mathbb{P}(X \vee Y)$$ is the probability that X or Y or both are true.
$$\mathbb{P}(X|Y) := \frac{\mathbb{P}(X \wedge Y}{\mathbb{P}(Y)}$$ is the conditional probability of X given Y. That is, $$\mathbb{P}(X|Y)$$ is the degree to which we would believe X, assuming Y to be true. $$\mathbb{P}(yellow|banana)$$ expresses “The probability that a banana is yellow.” $$\mathbb{P}(banana|yellow)$$ expresses “The probability that a yellow thing is a banana”.
# Centrality of the classical representation
While there are other ways of expressing quantitative degrees of belief, such as odds ratios, there are several especially useful properties or roles of classical probabilities that give them a central / convergent / canonical status among possible ways of representing credence.
Odds ratios are isomorphic to probabilities—we can readily go back and forth between a probability of 20%, and odds of 1:4. But unlike odds ratios, probabilities have the further appealing property of being able to add the probabilities of two mutually exclusive possibilities to arrive at the probability that one of them occurs. The 16 probability of a six-sided die turning up 1, plus the 16 probability of a die turning up 2, equals the 13 probability that the die turns up 1 or 2. The odds ratios 1:5, 1:5, and 1:2 don’t have this direct relation (though we could convert to probabilities, add, and then convert back to odds ratios).
Thus, classical probabilities are uniquely the quantities that must appear in the expected utilities to weigh how much we proportionally care about the uncertain consequences of our decisions. When an outcome has classical probability 13, we multiply the degree to which we care by a factor of 13, not by, e.g., the odds ratio 1:2.
If the amount you’d pay for a lottery ticket that paid out on 1 or 2 was more or less than twice the price you paid for a lottery ticket that only paid out on 1, or a lottery ticket that paid out on 2, then I could buy from you and sell to you a combination of lottery tickets such that you would end up with a certain loss. This is an example of a Dutch book argument, which is one kind of coherence theorem that underpins classical probability and its role in choice. (If we were dealing with actual betting and gambling, you might reply that you’d just refuse to bet on disadvantageous combinations; but in the much larger gamble that is life, “doing nothing” is just one more choice with an uncertain, probabilistic payoff.)
The combination of several such coherence theorems, most notably including the Dutch Book arguments, Cox’s Theorem and its variations for probability theory, and the Von Neumann-Morgenstern theorem (VNM) and its variations for expected utility, together give the classical probabilities between 0 and 1 a central status in the theory of epistemic and instrumental rationality. Other ways of representing scalar probabilities, or alternatives to scalar probability, would need to be converted or munged back into classical probabilities in order to animate agents making coherent choices.
This also suggests that bounded agents which approximate coherence, or at least manage to avoid blatantly self-destructive violations of coherence, might have internal mental states which can be approximately viewed as corresponding to classical probabilities. Perhaps not in terms of such agents necessarily containing floating-point numbers that directly represent those probabilities internally, but at least in terms of our being able to look over the agent’s behavior and deduce that they were “behaving as if” they had assigned some coherent classical probability.
Children:
Parents:
• Probability theory
The logic of science; coherence relations on quantitative degrees of belief. |
?
Free Version
Easy
# Properties of Transverse Waves: Amplitude and Wavelength
WAVES-ZWAPZK
Consider the transverse wave shown in the image above. |
# Custom prior - Evaluating log(0)
Hi,
I am trying to implement my own prior distribution. It is a Gaussian mixture model with three components.
Unfortunately I am running into trouble when having to evaluate log(0).
I understand that in normal_lpdf the logarithm is expanded such that
target += -\frac{1}{2}log(2\pi\sigma^2) -\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}
Is there a way to evaluate log(0) since I cannot expand below in a helpful manner?
log\left(\frac{1}{\sqrt{2\pi\sigma_1^2}}e^{-\frac{1}{2}\frac{(x-\mu_1)^2}{\sigma_1^2}}+\frac{1}{\sqrt{2\pi\sigma_2^2}}e^{-\frac{1}{2}\frac{(x-\mu_2)^2}{\sigma_2^2}}+\frac{1}{\sqrt{2\pi\sigma_3^2}}e^{-\frac{1}{2}\frac{(x-\mu_3)^2}{\sigma_3^2}}\right)
Have a look at the log_mix function.
Thank you, I now have.
Maybe I can trouble you for an additional question.
The sum of the mixture ratio has to be 1. Now I have estimated my parameters via sklearn fitting histograms (probability density against model parameter). The weights I got don’t sum to one. Would there be any trouble in simply normalizing them?
I’m afraid you’ll have to wait for someone who is familiar with sklearn to get a definitive answer. If you can afford the time, maybe estimate the mixing probabilities in Stan using log_mix. Should be fairly straightforward to do and would remove the dependence on “external” routines, at least for this bit.
Thanks for the suggestion. But there are quite a few thus doing so would severely increase my model space.
If interested:
I found that the few outliers in weight were due to an overfitting of mixture components. Only two were needed, but I used three, leading to one of them having a very large spread and weight so to make it disappear. |
# Finding Isentropic Enthalpy, knowing Isentropic Entropy
#### WhiteWolf98
Homework Statement
The source question is very long, and most likely unneeded. This question is about an actual refrigeration cycle. For the first part of the question, I'm to use the given isentropic efficiency to calculate $h_{2s}$.
$P_1=140~kPa$
$T_1=-10°C$
$P_2=1~MPa$
$\eta = 0.78$
Homework Equations
$\eta=\frac {h_{2s}-h_1} {h_2 - h_1}$
A short background: My question focuses solely on the part of the refrigeration cycle to do with the compressor, where the cycle begins. The first state is before the refrigerant enters the compressor, and the second state is after the refrigerant leaves the compressor. My goal is to obtain $h_2$; but for that, I need $h_{2s}$.
From the Thermodynamic Tables:
$h_1=h(140~kPa,~-10°C)=246.37~kJ/kg$
Easy enough to obtain. All that's left is $h_{2s}$. From the T-s diagram of the refrigeration cycle, it can be seen that:
$s_{2s}=s_1$
$s_1=s(140~kPa,~-10°C)=0.9724~kJ/kg\cdot K$
So I know that the entropy at state $2s$ is $0.9724~kJ/kg\cdot K$
Now this is where I'm stuck. I don't know how to get $h_{2s}$.
State 1 I know for sure the refrigerant is superheated. And state 2, I'm near to certain it's still superheated.
In other questions, I've been able to work out $h_{2s}$ when state 2 is a mixture. I use the entropies to work out quality,
$x=\frac {s-s_f} {s_{fg}}$
And then knowing the quality, work out $h_{2s}$:
$h_{2s}=x(h_{fg})+h_f$
I can't do that though if they're both superheated. There's no, 'quality' or, 'x', nor any saturated liquid values. This has come up once before this time, and I was unable to answer it then too. Any help would be appreciated.
Related Introductory Physics Homework Help News on Phys.org
#### dRic2
Gold Member
If it is superheated it's easier. You just need a table for the superheated vapour properties or a graph and you read $h_2$ knowing both $P_2$ and $s_2s$. A priori I'm not sure how you can deduce if you have a mixture or a superheated gas. But if you have a superheated gas and you try to work out the quality $x$ you should get an absurd result (bigger than 1 for example...) so that might be a way to check things out if you lack experimental data. But pressure is usually plotted in thermodynamics diagrams for water (or other coolants) so if you locate the right point after the compression took place you should be able to see wether you have a mixture or not. Hope this helps.
#### Chestermiller
Mentor
Let me guess. Your refrigerant is 134a. Look in your superheated vapor tables at 10 bars and about 55 C.
#### WhiteWolf98
Greetings to you both. I arrived at my solution, acting on your advice. It was achieved through interpolation.
Using interpolation, I found the temperature to be around $56.1°C$. Using interpolation again, and that value for temperature, I found the enthalpy ($h_{2s}$) at that temperature to be around $289~kJ/kg$. I only had table values for $50$ and $60$ degrees, which is why I had to interpolate.
Thanks for the help! You have my gratitude. |
# Wallis's Product
## Theorem
$\displaystyle \prod_{n \mathop = 1}^\infty \frac {2 n} {2 n - 1} \cdot \frac {2 n} {2 n + 1}$ $=$ $\displaystyle \frac 2 1 \cdot \frac 2 3 \cdot \frac 4 3 \cdot \frac 4 5 \cdot \frac 6 5 \cdot \frac 6 7 \cdot \frac 8 7 \cdot \frac 8 9 \cdots$ $\displaystyle$ $=$ $\displaystyle \frac \pi 2$
## Proof 1
$\displaystyle \dfrac {\sin x} x$ $=$ $\displaystyle \paren {1 - \dfrac {x^2} {\pi^2} } \paren {1 - \dfrac {x^2} {4 \pi^2} } \paren {1 - \dfrac {x^2} {9 \pi^2} } \cdots$ $\displaystyle$ $=$ $\displaystyle \prod_{n \mathop = 1}^\infty \paren {1 - \dfrac {x^2} {n^2 \pi^2} }$
we substitute $x = \dfrac \pi 2$.
$\sin \dfrac \pi 2 = 1$
Hence:
$\displaystyle \frac 2 \pi$ $=$ $\displaystyle \prod_{n \mathop = 1}^\infty \paren {1 - \frac 1 {4 n^2} }$ $\displaystyle \leadsto \ \$ $\displaystyle \frac \pi 2$ $=$ $\displaystyle \prod_{n \mathop = 1}^\infty \paren {\frac {4 n^2} {4 n^2 - 1} }$ $\displaystyle$ $=$ $\displaystyle \prod_{n \mathop = 1}^\infty \frac {\paren {2 n} \paren {2 n} } {\paren {2 n - 1} \paren {2 n + 1} }$ $\displaystyle$ $=$ $\displaystyle \frac 2 1 \cdot \frac 2 3 \cdot \frac 4 3 \cdot \frac 4 5 \cdot \frac 6 5 \cdot \frac 6 7 \cdot \frac 8 7 \cdot \frac 8 9 \cdots$
$\blacksquare$
## Wallis's Original Proof
Wallis, of course, had no recourse to Euler's techniques.
He did this job by comparing $\displaystyle \int_0^\pi \sin^n x \rd x$ for even and odd values of $n$, and noting that for large $n$, increasing $n$ by $1$ makes little change.
From the Reduction Formula for Integral of Power of Sine, we have:
$\displaystyle (1): \quad \int \sin^n x \rd x = - \frac 1 n \sin^{n - 1} \cos x + \frac {n - 1} n \int \sin^{n - 2} x \rd x$
Let $I_n$ be defined as:
$\displaystyle I_n = \int_0^{\pi / 2} \sin^n x \rd x$
As $\cos \dfrac \pi 2 = 0$ from Shape of Cosine Function, we have from $(1)$ that:
$(2): \quad I_n = \dfrac {n-1} n I_{n-2}$
To start the ball rolling, we note that:
$\displaystyle I_0 = \int_0^{\pi / 2} \rd x = \frac \pi 2 \qquad \qquad I_1 = \int_0^{\pi / 2} \sin x \rd x = \left[{- \cos x}\right]_0^{\pi / 2} = 1$
We need to separate the cases where the subscripts are even and odd:
$\displaystyle I_{2 n}$ $=$ $\displaystyle \frac {2 n - 1} {2 n} I_{2 n - 2}$ $\displaystyle$ $=$ $\displaystyle \frac {2 n - 1} {2 n} \cdot \frac {2 n - 3} {2 n - 2} I_{2 n - 4}$ $\displaystyle$ $=$ $\displaystyle \cdots$ $\displaystyle$ $=$ $\displaystyle \frac {2 n - 1} {2 n} \cdot \frac {2 n - 3} {2 n - 2} \cdot \frac {2 n - 5} {2 n - 4} \cdots \frac 3 4 \cdot \frac 1 2 I_0$ $\text {(A)}: \quad$ $\displaystyle$ $=$ $\displaystyle \frac {2 n - 1} {2 n} \cdot \frac {2 n - 3} {2 n - 2} \cdot \frac {2 n - 5} {2 n - 4} \cdots \frac 3 4 \cdot \frac 1 2 \cdot \frac \pi 2$
$\displaystyle I_{2 n+1}$ $=$ $\displaystyle \frac {2 n} {2 n + 1} I_{2 n - 1}$ $\displaystyle$ $=$ $\displaystyle \frac {2 n} {2 n + 1} \cdot \frac {2 n - 2} {2 n - 1} I_{2 n - 3}$ $\displaystyle$ $=$ $\displaystyle \cdots$ $\displaystyle$ $=$ $\displaystyle \frac {2 n} {2 n + 1} \cdot \frac {2 n - 2} {2 n - 1} \cdot \frac {2 n - 4} {2 n - 3} \cdots \frac 4 5 \cdot \frac 2 3 I_1$ $\text {(B)}: \quad$ $\displaystyle$ $=$ $\displaystyle \frac {2 n} {2 n + 1} \cdot \frac {2 n - 2} {2 n - 1} \cdot \frac {2 n - 4} {2 n - 3} \cdots \frac 4 5 \cdot \frac 2 3$
By Shape of Sine Function, we have that on $0 \le x \le \dfrac \pi 2$:
$0 \le \sin x \le 1$
Therefore:
$0 \le \sin^{2 n + 2} x \le \sin^{2 n +1} x \le \sin^{2 n} x$
It follows from Relative Sizes of Definite Integrals that:
$\displaystyle 0 < \int_0^{\pi / 2} \sin^{2 n + 2} x \rd x \le \int_0^{\pi / 2} \sin^{2 n + 1} x \rd x \le \int_0^{\pi / 2} \sin^{2 n} x \rd x$
That is:
$(3): \quad 0 < I_{2 n + 2} \le I_{2 n + 1} \le I_{2 n}$
By $(2)$ we have:
$\dfrac {I_{2 n + 2} } {I_{2 n} } = \dfrac {2 n + 1} {2 n + 2}$
Dividing $(3)$ through by $I_{2n}$ then, we have:
$\dfrac {2 n + 1} {2 n + 2} \le \dfrac {I_{2 n + 1}} {I_{2 n}} \le 1$
By Squeeze Theorem, it follows that:
$\dfrac {I_{2 n + 1} } {I_{2 n} } \to 1$ as $n \to \infty$
which is equivalent to:
$\dfrac {I_{2 n} } {I_{2 n + 1} } \to 1$ as $n \to \infty$
Now we take $(B)$ and divide it by $(A)$ to get:
$\dfrac {I_{2 n + 1} } {I_{2 n} } = \dfrac 2 1 \cdot \dfrac 2 3 \cdot \dfrac 4 3 \cdot \dfrac 4 5 \cdots \dfrac {2 n} {2 n - 1} \cdot \dfrac {2 n} {2 n + 1} \cdot \dfrac 2 \pi$
So:
$\dfrac \pi 2 = \dfrac 2 1 \cdot \dfrac 2 3 \cdot \dfrac 4 3 \cdot \dfrac 4 5 \cdots \dfrac {2 n} {2 n - 1} \cdot \dfrac {2 n} {2 n + 1} \cdot \left({\dfrac {I_{2 n} } {I_{2 n + 1} } }\right)$
Taking the limit as $n \to \infty$ gives the result.
$\blacksquare$
## Also presented as
This result can also be seen presented as:
$\displaystyle \prod_{n \mathop = 1}^\infty \frac n {n - \frac 1 2} \cdot \frac n {n + \frac 1 2} = \frac \pi 2$
## Source of Name
This entry was named for John Wallis.
## Historical Note
Wallis's Product was discovered by John Wallis in $1656$. |
June 07, 2020, 03:05:47 AM
Forum Rules: Read This Before Posting
### Topic: Covid: Disinfect Money (Read 222 times)
0 Members and 1 Guest are viewing this topic.
#### Enthalpy
• Chemist
• Sr. Member
• Posts: 3450
• Mole Snacks: +286/-57
##### Covid: Disinfect Money
« on: May 06, 2020, 05:45:32 PM »
Hi everyone !
In shops, one remaining Covid contamination path is money. An answer is to allege that money doesn't host the virus, I read that. Or we can try to tackle the problem.
UV light is known to destroy virusses, including Sars-Cov2. UV LED are available for near-ultraviolet Hg wavelengths, compact, reliable, efficient. This could irradiate the money between the cashier's and the customer's hands, in both directions.
The rest is mechanical design, still imprecise. The apparatus must stop the UV from exiting but irradiate both sides of banknotes and coins.
Both users could introduce the money at the top, say between a pair motorised soft rolls, and grasp it at the bottom, after an other pair of rolls. UV between the pairs of rolls would be blocked by the rolls. Nice for banknotes, but the coins would fall at once. It also needs a soft material that survives UV. This shape has the smallest footprint.
Or a platter would tun slowly. The customer has a sector to introduce and extract money, the cashier has an other sector, and the two sectors in between irradiate the money under a cover. Silica and variants make the platter transparent to UV.
Maybe banknotes and coins should have different paths. Possibly the soft rolls for banknotes and the platter for coins.
The apparatus must be easy to open, and opening must halt the UV emission. Fluorescent surroundings would reveal any UV leak.
Marc Schaefer, aka Enthalpy
#### Borek
• Mr. pH
• Deity Member
• Posts: 25784
• Mole Snacks: +1686/-400
• Gender:
• I am known to be occasionally wrong.
##### Re: Covid: Disinfect Money
« Reply #1 on: May 06, 2020, 06:47:04 PM »
Switching to contactless CC payments sounds easier and doesn't require extra hardware that will be thrown away after the situation gets back under control.
Can't remember when it was when I last used cash.
ChemBuddy chemical calculators - stoichiometry, pH, concentration, buffer preparation, titrations.info, pH-meter.info
#### Enthalpy
• Chemist
• Sr. Member
• Posts: 3450
• Mole Snacks: +286/-57
##### Re: Covid: Disinfect Money
« Reply #2 on: May 07, 2020, 12:56:18 PM »
Arguments against contactless:
- Germans don't want.
- I don't want. It tells where I am, what I do. It may help forge my bank card.
- I would cost me fees for each single payment.
#### wildfyr
• Global Moderator
• Sr. Member
• Posts: 1485
• Mole Snacks: +158/-9
##### Re: Covid: Disinfect Money
« Reply #3 on: May 08, 2020, 03:43:52 PM »
Disinfecting cash is so much harder than using contactless payments, or even just disinfecting ordinary credit cards. A simple wipe with alcohol disinfects a card. Been doing it for weeks. CC swipe machines are very high contact points. You gonna make a billion of these money disinfection machines and distribute world wide? Or use the hundreds of millions of contactless credit card readers already out there and paid for. In German big cities I didn't have too much trouble using a card. Only in small places. In many countries card use is near universal.
I got $120 cash out of the bank 7 weeks ago. Have not spent a cent. Everything done on card, or contactless payment on my phone or an app. I bet I spend under$300 in physical cash per year these days.
#1 and 2 "I/Germans don't want" is a terrible reason, or rather not one at all. For location, Do you carry a smartphone? Most people do. Your location is known if a government party cares to know. For forging bank card, most banks and ALL credit cards will cover transactions made with stolen card info. Its the law in the US I think. Other countries should adopt this, its part and parcel of broad card use. Europe is the place that started using chipped cards and increased card security anyways.
#3 at least in America credit card fees are assumed by the store. Of course this is ultimately passed to consumer by price increases.
What sounds more important to you:
"I/Germans don't want because stubborn, and location privacy (when 99% of people have smartphones in pocket anyways)" or "public health during pandemic." Its the old "security vs freedom" debate.
Lastly, UV is not the way to disinfect paper cash or any porous material. Too many micro/nanocrevices won't be irradiated. Aerosolized antimicrobials would be the way to go.
#### Enthalpy
• Chemist
• Sr. Member
• Posts: 3450
• Mole Snacks: +286/-57
##### Re: Covid: Disinfect Money
« Reply #4 on: May 08, 2020, 08:14:14 PM »
Whatever your opinion or mine about using contactless payment, the Germans won't. Same in many other countries. And I carry no smartphone, in fact no cellular phone at all, location being only one of the issues.
I'm confident that UV reaches the depth of a banknote. You can see through a banknote. |
zbMATH — the first resource for mathematics
Boyer, Franck
Compute Distance To:
Author ID: boyer.franck Published as: Boyer, F.; Boyer, Franck Homepage: https://www.math.univ-toulouse.fr/~fboyer/ External Links: ResearchGate
Documents Indexed: 53 Publications since 1999, including 3 Books
all top 5
Co-Authors
8 single-authored 19 Hubert, Florence 6 Andreianov, Boris P. 5 Fabrie, Pierre 5 Krell, Stella 5 Le Rousseau, Jérôme H. 5 Nabet, Flore 4 Lapuerta, Céline 4 Minjeaud, Sebastian 2 Aguillon, Nina 2 Angot, Philippe 2 Olive, Guillaume 1 Allonsius, Damien 1 Benabdallah, Assia 1 Blanc, Thomas 1 Bostan, Mihai 1 Bousquet, Pierre 1 Chupin, Laurent 1 Dardalhon, F. 1 Gallouët, Thierry 1 González-Burgos, Manuel 1 Herbin, Raphaèle 1 Latché, Jean-Claude 1 Morancey, Morgan 1 Omnes, Pascal 1 Piar, Bruno
all top 5
Serials
4 IMA Journal of Numerical Analysis 4 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 3 Numerische Mathematik 3 European Series in Applied and Industrial Mathematics (ESAIM): Proceedings 2 SIAM Journal on Control and Optimization 2 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 2 Asymptotic Analysis 1 Computers and Fluids 1 Mathematics of Computation 1 Calcolo 1 SIAM Journal on Numerical Analysis 1 Numerical Methods for Partial Differential Equations 1 Differential and Integral Equations 1 M$^3$AS. Mathematical Models & Methods in Applied Sciences 1 Journal de Mathématiques Pures et Appliquées. Neuvième Série 1 Discrete and Continuous Dynamical Systems 1 European Journal of Mechanics. B. Fluids 1 M2AN. Mathematical Modelling and Numerical Analysis. ESAIM, European Series in Applied and Industrial Mathematics 1 Discrete and Continuous Dynamical Systems. Series B 1 Applied Mathematical Sciences 1 Mathématiques & Applications (Berlin) 1 Mathematical Control and Related Fields 1 European Series in Applied and Industrial Mathematics (ESAIM): Proceedings and Surveys
all top 5
Fields
32 Partial differential equations (35-XX) 31 Numerical analysis (65-XX) 24 Fluid mechanics (76-XX) 7 Systems theory; control (93-XX) 2 Ordinary differential equations (34-XX) 2 Mechanics of deformable solids (74-XX) 1 General mathematics (00-XX) 1 Difference and functional equations (39-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 Optics, electromagnetic theory (78-XX) 1 Statistical mechanics, structure of matter (82-XX) |
arXiv Analytics
arXiv:2005.10813 [hep-lat]AbstractReferencesReviewsResources
Strong Coupling Lattice QCD in the Continuous Time Limit
Published 2020-05-21Version 1
We present results for lattice QCD with staggered fermions in the limit of infinite gauge coupling, obtained from a worm-type Monte Carlo algorithm on a discrete spatial lattice but with continuous Euclidean time. This is obtained by sending both the anisotropy parameter $\xi=a_\sigma/a_\tau$ and the number of time-slices $N_\tau$ to infinity, keeping the ratio $aT=\xi/N\tau$ fixed. The obvious gain is that no continuum extrapolation $N_\tau \rightarrow \infty$ has to be carried out. Moreover, the algorithm is faster and the sign problem disappears. We derive the continuous time partition function and the corresponding Hamiltonian formulation. We compare our computations with those on discrete lattices and study both zero and finite temperature properties of lattice QCD in this regime.
Comments: 37 pages, 36 figures
Categories: hep-lat, nucl-th
Related articles: Most relevant | Search more
arXiv:1811.01614 [hep-lat] (Published 2018-11-05)
Temporal Correlators in the Continuous Time Formulation of Strong Coupling Lattice QCD
arXiv:0907.4245 [hep-lat] (Published 2009-07-24, updated 2009-10-06)
Phase diagram evolution at finite coupling in strong coupling lattice QCD
arXiv:1009.1518 [hep-lat] (Published 2010-09-08, updated 2010-11-25)
Chiral and deconfinement transitions in strong coupling lattice QCD with finite coupling and Polyakov loop effects |
# Variance of binomial distribution
## Homework Statement
Random variable Y has a binomial distribution with n trials and success probability X, where n is a given constant and X is a random variable with uniform (0,1) distribution. What is Var[Y]?
## Homework Equations
E[Y] = np
Var(Y) = np(1-p) for variance of a binomial distribution
Var(Y|X) = E(Y^2|X) − {E(Y|X)^2} for conditional variance of y given x
Var(Y) = E[Var(Y|X)] + Var[E(Y|X)] for the law of total variance
## The Attempt at a Solution
Knows that probability X is a uniform (0,1) random variable, we can calculate E(Y). From there, we should be able to calculate Var(Y) using the relevant equations, I think. Using the equation for variance of a binomial distribution and simply plugging in the values for p that we solved considering the uniform (0,1) distribution of X seems too easy/doesn't appear to be correct. My inclination is to use the law of total variance to solve for Var(Y), but it requires calculating Var(Y|X) as well as E(Y^2|X) ? This is where I get stuck, how to calculate E(Y^2|X) given the information I know about Y and X. Using the law of total variance, I also struggle to see how the equation for variance of a binomial distribution comes into play. Any idea if I am the right track/any advice?
Last edited by a moderator:
Ray Vickson
Homework Helper
Dearly Missed
## Homework Statement
Random variable Y has a binomial distribution with n trials and success probability X, where n is a given constant and X is a random variable with uniform (0,1) distribution. What is Var[Y]?
## Homework Equations
E[Y] = np
Var(Y) = np(1-p) for variance of a binomial distribution
Var(Y|X) = E(Y^2|X) − {E(Y|X)^2} for conditional variance of y given x
Var(Y) = E[Var(Y|X)] + Var[E(Y|X)] for the law of total variance
## The Attempt at a Solution
Knows that probability X is a uniform (0,1) random variable, we can calculate E(Y). From there, we should be able to calculate Var(Y) using the relevant equations, I think. Using the equation for variance of a binomial distribution and simply plugging in the values for p that we solved considering the uniform (0,1) distribution of X seems too easy/doesn't appear to be correct. My inclination is to use the law of total variance to solve for Var(Y), but it requires calculating Var(Y|X) as well as E(Y^2|X) ? This is where I get stuck, how to calculate E(Y^2|X) given the information I know about Y and X. Using the law of total variance, I also struggle to see how the equation for variance of a binomial distribution comes into play. Any idea if I am the right track/any advice?
I could not quite understand why you are unsure about your proposed methods. You are claiming that to compute ##\text{Var}(Y|X)## you could just plug in ##p=X## in the binomial variance formula. And, of course, ##E(Y^2|X) = \text{Var}(Y|X) + (E(Y|X))^2##. All of that is perfectly true; you can be 100% sure.
So, why should we access the variance formula? Knowing the variance is useful because we usually do not commit to memory a formula for ##E(B^2)## for a binomial r.v. ##B##, but rather we remember the formula for ##\text{Var}(B),## and can get ##E(B^2)## from that.
StoneTemplePython
Gold Member
if it were me, I'd solve for ##E\Big[\mathbb I \big \vert X = p \Big]## first.... That is the indicator random variable / bernouli trial, that makes your binomial. Each trial is independent (but see linguistic note at the end) so the variances should add.
For interest, also note that bernoulis are idempotent, so
##\mathbb I^2 = \mathbb I##
- - - -
Then carefully tie in the machinery from your(?) last problem here:
- - - -
equivalently your problem reduces to very carefully considering the variance for some Bernoulli Trial.
Strictly speaking I cannot tell whether you get independent experiments of ##X\big(\omega\big)## from the way you've worded your problem...
Ray Vickson
Homework Helper
Dearly Missed
if it were me, I'd solve for ##E\Big[\mathbb I \big \vert X = p \Big]## first.... That is the indicator random variable / bernouli trial, that makes your binomial. Each trial is independent (but see linguistic note at the end) so the variances should add.
For interest, also note that bernoulis are idempotent, so
##\mathbb I^2 = \mathbb I##
- - - -
Then carefully tie in the machinery from your(?) last problem here:
- - - -
equivalently your problem reduces to very carefully considering the variance for some Bernoulli Trial.
Strictly speaking I cannot tell whether you get independent experiments of ##X\big(\omega\big)## from the way you've worded your problem...
I'm not sure how useful indicators are in this problem: they are conditionally independent, yes, but not (unconditionally) independent. The distribution of ##Y## -- namely, ##P(Y=k) = \int_0^1 P(Y=k|X=p) f_X(p) \; dp = \int_0^1 C(n,k) p^k (1-p)^{n-k} \; dp## -- is not the same as ##P(I_1 + I_2 + \cdots + I_n = k)## for independent ##I_j## with the distribution ##P(I_j = 1) = \int_0^1 P(I_j=1|X=p) f_X(p) \; dp.## In other words, the assumption of independent ##I_j## gives the wrong sum-distribution.
Thank you! However, I am still not sure that I follow...
I am starting with Var(Y) = E[Var(Y|X)] + Var[E(Y|X)] and all I know is that E(Y|X) = nP (with P having a uniform 0,1 distribution).
Therefore, Var(Y) = E[E(Y^2|X) − {E(Y|X)^2}] + Var[n/2]
= E[E(Y^2|X) - (n^2/4)] + Var(n/2)
Once I get here, I fail to see how to calculate E[Y^2|X] or how to find Var[n/2] if that is even what I am supposed to do? And you're saying from the formula for Var(Y) = np(1-p), I should be able to calculate E[Y^2|X]?
Alternatively if I start with Var(Y|X) = np(1-p) then I get Var(Y|X) = n/4, which doesn't seem right either, because plugging that information into Var(Y) = E[Var(Y|X)] + Var[E(Y|X)] then I would be trying to calculate E[n/4] + Var[n/2].
Ray Vickson
Homework Helper
Dearly Missed
Thank you! However, I am still not sure that I follow...
I am starting with Var(Y) = E[Var(Y|X)] + Var[E(Y|X)] and all I know is that E(Y|X) = nP (with P having a uniform 0,1 distribution).
Therefore, Var(Y) = E[E(Y^2|X) − {E(Y|X)^2}] + Var[n/2]
= E[E(Y^2|X) - (n^2/4)] + Var(n/2)
Once I get here, I fail to see how to calculate E[Y^2|X] or how to find Var[n/2] if that is even what I am supposed to do? And you're saying from the formula for Var(Y) = np(1-p), I should be able to calculate E[Y^2|X]?
Alternatively if I start with Var(Y|X) = np(1-p) then I get Var(Y|X) = n/4, which doesn't seem right either, because plugging that information into Var(Y) = E[Var(Y|X)] + Var[E(Y|X)] then I would be trying to calculate E[n/4] + Var[n/2].
The parameter ##n## is just a number, not a random variable.
I am saying that if you know the formula for ##\text{Var}(B)## when ##B## has distribution ##\text{Binomial}(n,p)##, then you can substitute in ##p = X##. After all, nobody forces us to use the symbol ##p## to denote the success probability per trial; we can equally well call it ##X## or ##\Lambda## or anything else if we prefer. When we speak of ##Y|X##, the problem spells out for us that we are dealing with ##\text{Binomial}(n , X).##
Okay, that is helpful--I think I understand it enough to figure out the rest of the problem. Thank you very much!
StoneTemplePython
Gold Member
I'm not sure how useful indicators are in this problem: they are conditionally independent, yes, but not (unconditionally) independent. The distribution of ##Y## -- namely, ##P(Y=k) = \int_0^1 P(Y=k|X=p) f_X(p) \; dp = \int_0^1 C(n,k) p^k (1-p)^{n-k} \; dp## -- is not the same as ##P(I_1 + I_2 + \cdots + I_n = k)## for independent ##I_j## with the distribution ##P(I_j = 1) = \int_0^1 P(I_j=1|X=p) f_X(p) \; dp.## In other words, the assumption of independent ##I_j## gives the wrong sum-distribution.
The thing is, I'm not that interested in the distribution -- just pairwise comparisons. Keeping it simple, I'd just flag that OP computed the associated mean in a prior post. so we just need ##E\Big[Y^2\Big]## to get the variance.
In general we can decompose a 'counting variable' as a sum of possibly dependent indicator random variables
##Y = \mathbb I_1 + \mathbb I_2 +... + \mathbb I_n = \sum_{k=1}^n \mathbb I_k##
so when we look to the second moment for this problem, we get
##E\Big[Y^2\Big] ##
##= E\Big[E\big[Y^2\big \vert X \big]\Big] ##
##= E\Big[E\big[Y^2\big \vert X =x \big]\Big]##
##= E\Big[E\big[(\sum_{k=1}^n \mathbb I_k)^2\big \vert X =x \big]\Big]##
##= E\Big[E\big[(\sum_{k=1}^n \mathbb I_k^2\big) + (\sum_{k=1}^n\sum_{j\neq k} \mathbb I_k \mathbb I_j \big) \big \vert X =x \big]\Big]##
##= E\Big[E\big[\big(\sum_{k=1}^n \mathbb I_k) + (\sum_{k=1}^n\sum_{j\neq k} \mathbb I_k \mathbb I_j )\big \vert X =x \big]\Big]##
##= E\Big[E\big[\sum_{k=1}^n \mathbb I_k\big \vert X =x \big]\Big] + E\Big[E\big[\sum_{k=1}^n\sum_{j\neq k} \mathbb I_k \mathbb I_j \big \vert X =x \big]\Big]##
##= E\Big[\sum_{k=1}^n E\big[ \mathbb I_k\big \vert X =x \big]\Big] + E\Big[\sum_{k=1}^n\sum_{j\neq k} E\big[ \mathbb I_k \mathbb I_j \big \vert X =x \big]\Big]##
##= n\cdot E\Big[E\big[\mathbb I_k\big \vert X =x \big]\Big] + n(n-1)\cdot E\Big[E\big[\mathbb I_k \mathbb I_j \big \vert X =x \big]\Big]##
(where it is understood that ##j \neq k##)
- - - - -
The first term should look familiar. The second term takes a little bit of insight and thinking which I leave for the OP. Then some simplification is needed to get the final answer.
Since they are Bernoulis, I sometimes find it helpful to draw a tree with 4 leaves, e.g. for ##\mathbb I_k \mathbb I_j## to sketch the probabilities and payoffs. Of course, what we actually need to do here is a slight refinement of this, since we would sketch this tree conditioned on ##X=x##.
The above technique is quite useful in computing variance for much nastier distributions btw.
Last edited:
Thank you all for the responses, they are very helpful! |
# The Universe of Discourse
Wed, 06 Nov 2019
Regarding the phrase “why didn't you just…”, Mike Hoye has something to say that I've heard expressed similarly by several other people:
Whenever you look at a problem somebody’s been working on for a week or a month or maybe years and propose a simple, obvious solution that just happens to be the first thing that comes into your head, then you’re also making it crystal clear to people what you think of them and their work.
(Specifically, that you think they must be a blockhead for not thinking of this solution immediately.)
I think this was first pointed out to me by Andy Lester.
I think the problem here may be different than it seems. When someone says “Why don't you just (whatever)” there are at least two things they might intend:
1. Why didn't you just use sshd? I suppose it's because you're an incompetent nitwit.
2. Why didn't you just use sshd? I suppose it's because there's some good reason I'm not seeing. Can you please point it out?
Certainly the tech world is full of response 1. But I wonder how many people were trying to communicate response 2 and had it received as response 1 anyway? And I wonder how many times I was trying to communicate response 2 and had it received as response 1?
Mike Hoye doesn't provide any alternative phrasings, which suggests to me that he assumes that all uses of “why didn't you just” are response 1, and are meant to imply contempt. I assure you, Gentle Reader, that that is not the case.
Pondering this over the years, I have realized I honestly don't know how to express my question to make clear that I mean #2, without including a ridiculously long and pleading disclaimer before what should be a short question. Someone insecure enough to read contempt into my question will have no trouble reading it into a great many different phrasings of the question, or perhaps into any question at all. (Or so I imagine; maybe this is my own insecurities speaking.)
Can we agree that the problem is not simply with the word “just”, and that merely leaving it out does not solve the basic problem? I am not asking a rhetorical question here; can we agree? To me,
Why didn't you use sshd?
seems to suffer from all the same objections as the “just”ful version and to be subject to all the same angry responses. Is it possible the whole issue is only over a difference in the connotations of “just” in different regional variations of English? I don't think it is and I'll continue with the article assuming that it isn't and that the solution isn't as simple as removing “just”.
Let me try to ask the question in a better better way:
There must be a good reason why you didn't use sshd
I don't see why you didn't use sshd
I don't understand why you didn't use sshd
I'd like to know why you didn't use sshd
I'm not clever enough to understand why you didn't use sshd
I think the sort of person who is going to be insulted by the original version of my question will have no trouble being insulted by any of those versions, maybe interpreting them as:
There must be a good reason why you didn't use sshd. Surely it's because you're an incompetent nitwit.
I don't see why you didn't use sshd. Maybe the team you're working with is incompetent?
I don't understand why you didn't use sshd. Probably it's because you're not that smart.
I'd like to know why you didn't use sshd. Is it because there's something wrong with your brain?
I'm not clever enough to understand why you didn't use sshd. It would take a fucking genius to figure that out.
The more self-effacing I make it, the more I try to put in that I think the trouble is only in my own understanding, the more mocking and sarcastic it seems to me and the more likely I think it is to be misinterpreted. Our inner voices can be cruel. Mockery and contempt we receive once can echo again and again in our minds. It is very sad.
So folks, please help me out here. This is a real problem in my life. Every week it happens that someone is telling me what they are working on. I think of what seems like a straightforward way to proceed. I assume there must be some aspect I do not appreciate, because the person I am talking to has thought about it a lot more than I have. Aha, I have an opportunity! Sometimes it's hard to identify what it is that I don't understand, but here the gap in my understanding is clear and glaring, ready to be filled.
I want to ask them about it and gain the benefit of their expertise, just because I am interested and curious, and perhaps even because the knowledge might come in useful. But then I run into trouble. I want to ask “Why didn't you just use sshd?” with the understanding that we both agree that that would be an obvious thing to try, and that I am looking forward to hearing their well-considered reason why not.
I want to ask the question in a way that will make them smile, hold up their index finger, and say “Aha! You might think that sshd would be a good choice, but…”. And I want them to understand that I will not infer from that reply that they think I am an incompetent nitwit.
What if I were to say
I suppose sshd wasn't going to work?
Would that be safer? How about:
Naïvely, I would think that sshd would work for that
but again I think that suggests sarcasm. A colleague suggests:
So, I probably would've tried using sshd here. Would that not work out?
What to do? I'm in despair. Andy, any thoughts? |
# Metrics question
Can anyone prove this theorem please?
let (X,d) be a metric space, let $$x \in X$$ , and let $$0<\delta<\epsilon$$. Then cl(B(x,delta)) $$\subset$$B(x,epsilon)
Last edited: |
# Math Help - solutions
1. ## solutions
I am praveen here,
please find SCANNED Attachments.These contains some problems.
i need solution for these problems.
I request to send solutions.
thanking you
Regards,
Praveen
2. 3b)
$x\Gamma(x)=\lim_{n\to\infty}\frac{n^xn!}{(1+x)(2+x )....(n+x)}$
3c)
Therefore the poles are where the denominator is zero,
$(1+x)(2+x)....=0$
Thus,
$x=-1,-2,-3,...$
3. 1a)Maybe? Since this infinite series can be diffrenciated it must be countinous. Since diffrenciablility implies couninutitity.
1b)
The solution(s) to
$x^2+\alpha x+\beta=0$
are,
$x=\frac{-\alpha\pm \sqrt{\alpha-4\beta}}{2}$
Since there are pure imaginaries,
$-\alpha =0$ thus, $\alpha =0$
Then you are looking at,
$x^2+\beta=0$
Thus,
$x^2=-\beta$
Has pure imaginaries solutions only when,
$-\beta<0$
Thus,
$\beta>0$
---
Solution,
$\alpha=0,\beta>0$
4. 1a) If the space is finite, then it is certainly compact.
Conversely, suppose the space X is compact. Consider the (open) covering $\cup_{x\in X} B(x,1/2)$. Since X is compact, there exists a finite subcovering $X\subset B(x_1,1/2)\cup\ldots\cup B(x_n,1/2).$ As our metric is the discreet metric, $B(x_1,1/2)=\{x_1\},\ldots, B(x_n,1/2)=\{x_n\}.$ So $X\subset \{x_1,\ldots, x_n\}\Rightarrow$ X is finite.
Now for C[a,b]. We should show that there exists some numerable and dense subset of C[a,b].
Surely, this cannot be done off the top of our heads. Weierstrass' Approximation Theorem, sais that the set of polynomials P[a,b] $\subset$C[a,b] is dense.
If we could create a numerable and dense subset of P[a,b], we would be done.
Mumble, mumble... What about polynomials with rational coefficients?
Worth a try.
1b) Remember $||T||={\rm sup}_{x\neq 0}\frac{||Tx||_Y}{||x||_X}$,
so
$||T||={\rm sup}_{x\neq 0}\frac{||Tx||_Y}{||x||_X}\geq \frac{||Tx||_Y}{||x||_X}, \ \forall x\in X$
so again
$||T||\geq \frac{||Tx||_Y}{||x||_X}, \ \forall x\in X.$
5. Bored to do any work (again), so lets try 3a.
-Note that sinz=-z+z^3/3!-... so sinz/z^7=(1/z^6)*(-1+...)
This means z=0 is a pole of order 6.
-Now e^{1/z}=1+1/z+1/(2!z^2)+... and so there are infinite terms in the series to diverge at z=0; this means z=0 is an essential singularity.
-Also (1-cosz)/z=(-z^2/2!-z^3/3!-...)/z=-z/2!-z^2/3!-...
so z=0 is a simple root.
6. ...And just thought of an answer to 2)a).
There is a set of unit vectors $S=\{e_1,\ldots,e_n,\dots\}$ to form a basis (*) for X. Consider any linear mapping $T:X\rightarrow Y$ not to vanish on S (we can have this as Y is nonempty) and define $\overline{T}(e_n)=\frac{n}{||T(e_n)||}T(e_n), \ \forall n\in \mathbb{N}.$ This mapping is well defined as S is a basis.
We then have
$||\overline{T}||={\rm sup}_{||x||=1}\frac{||\overline{T}(x)||}{||x||} \geq {\rm sup}_{n} \frac{||\overline{T}(e_n)||}{||e_n||}=+\infty$
by the way $\overline{T}$ is defined.
(*) a correction: we can extract such a set S from the actual basis set B -which I admit, need not be countable. Define the operator to be zero on B-S and everything works fine.
7. Lunch time...
No lunch for those not paid, so let's try 2b), which actually I could have figured out much sooner if I was not wasting braincells on myspace.
i) $\ell^p, \ 0 is reflexive. Just show that for every $f\in \ell^p'$ there exists a sequence $(c_n)\in\ell^q, \ (1/q)+(1/p)=1$ with $||f||=||(c_n)||_{\ell^q}$.
For this, consider a (continuous linear) functional $f$ on $\ell^p$, and let $e_{n}=(0,0,\ldots,0,1,0,\ldots),$ the unit occupying the n-th place. This sequence is a basis for $\ell^p$, as they are linearly independent and $(a_n)=\sum a_n(e_n)$. We exploit linearity to obtain
$f((a_n))=\sum a_n f((e_n))$,
so all functionals determine (and are completely determined by) the sequence $(f((e_n)))=c_n)" alt="(f((e_n)))=c_n)" />. To show this belongs to $\ell^q$, take $(a_n)=(|c_1|^{q-1},\ldots,|c_k|^{q-1},0,\ldots)$ for any natural k, and using continuity
$|f((a_n))|\leq ||f|| \cdot||(a_n)||_{\ell^p}$ or
$\bigg\{\sum_{i=1}^k |c_i|^q\bigg\}^{1/q}\leq ||f||$
and since k was arbitrary, $||f||\geq ||(c_n)||_{\ell^q}$. On the other hand, Holder's inequality gives
$|f((b_n))|=|\sum b_n c_n|\leq ||b_n||_{\ell^p}||c_n||_{\ell^q}, \ \forall (b_n)\in \ell^p$
so also $||f||\leq ||(c_n)||_{\ell^q}$, so these are equal.
ii) Direct application of the Open Mapping Theorem: There is
$\epsilon>0$ such that corresponding balls satisfy
$B_Y(0,\epsilon)\subset T(B_X(0,1))$,
so for $y\in Y, \ ||y||_Y=1$ we have $||T^{-1}(\epsilon y)||_X\leq 1$ or $||T^{-1}(y)||_X\leq 1/\epsilon$. So $T^{-1}$ is continuous. |
## Solving underdetermined nonlinear system of 2 equation with 3 unknowns.
I've gotten into a problem I haven't really worked with before in my numerics classes.
I have an underdetermined nonlinear system of equations with 3 parameters.
Newtons method, Boydens method etc. all include the inverse of the jacobian, but if the system is underdetermined this is not defined as far as I understand as:
\begin{align}
\begin{cases}
A=\cos(\alpha)e^{i\phi}\\
B=\sin(\alpha)e^{i\chi}
\end{cases}
\end{align}
where $A$ and $B$ are known parameters.
Is there any straightforward way or trick to solve this kind of problems? |
# Set of ordinals less than the first uncountable ordinal and countability
I am trying to solve the following question from Royden's Real Analysis (3rd edition, Chap. 1, Problem 32).
Let $Y$ be the set of ordinals less than the first uncountable ordinal, i.e., $Y= \{ x\in X: x<\Omega \}$. Show that every countable subset $E$ of $Y$ has an upper-bound in $Y$ and hence a least upper-bound.
I have the following question: If $Y$ is assumed to be the set of ordinals less than the first uncountable ordinal, then shouldn't $Y$ be countable by definition? and so, every subset of a countable set is countable? and then, $\Omega$ is an upper-bound to each $E \subset Y$?
• No, because $\Omega$ is the set of all countable ordinals. – AJY Nov 11 '16 at 13:45
• To illustrate that there is no contradiction in $Y = \Omega$ being uncountable, note that $\omega_0$, the least infinite ordinal, is infinite while its elements are finite. – nombre Nov 11 '16 at 14:54
Each member of $Y$ (or, equivalently, each order type shorter than $Y)$ is countable, but $Y$ itself is the shortest possible uncountable well-ordering. Added later in response to discussion in the comments: It's worth noting that this problem requires some use of the axiom of choice. Here's one approach: $S=\bigcup_{e\in E} \{x \mid x\lt e\}$ is a countable union of countable sets, so is countable (this uses AC). Since $\Omega$ is uncountable, there exists $b\in\Omega\setminus S;$ any such $b$ must be an upper bound for $E.$
It's consistent with ZF (without AC) that $\aleph_1$ is cofinal with $\omega,$ in which case the statement of Royden's problem is false.
• @Teodorism I took a look at the edit history to see what the other question was (I apologize if that isn't the final version of what you intended to ask). But the solution you quoted seems to be missing something, since the map $y\mapsto x_y$ isn't necessarily 1-1. I'd prefer a more direct approach anyway, and I've added an outline of such an approach to my answer. – Mitchell Spector Nov 12 '16 at 18:18
You're making an unfounded generalization from the part to the whole. In the set $\{\{0\}, \{1\}\}$, every element has cardinality $1$; but the set as a whole has cardinality $2$. Likewise, the fact that every member of $Y$ is countable says nothing whatsoever about the cardinality of $Y$ - to take an extreme example, the "set" of all singletons (sets of cardinality exactly $1$) is so large that it isn't even a set (it's a proper class).
It's certainly the case that every subset of a countable set is countable. But your final step doesn't work either - if $E \subseteq Y$ is countable, it's true that $\Omega$ is an upper bound on $E$, but it isn't an upper bound in $Y$. $Y$ is the set of countable ordinals; $\Omega$ is by definition not a countable ordinal, and so is not in $Y$. |
×
# Sun's force of attraction
I was thinking about it that , earth is under the gravitational field of sun therefore it revolves around it .Now the question which have confused me is , if an astronaut goes in outer space why he is not accelerated due to sun gravitational field ,as it seems that the astronaut is still .Post comments if you have any idea about it.
3 years, 2 months ago
MarkdownAppears as
*italics* or _italics_ italics
**bold** or __bold__ bold
- bulleted- list
• bulleted
• list
1. numbered2. list
1. numbered
2. list
Note: you must add a full line of space before and after lists for them to show up correctly
paragraph 1paragraph 2
paragraph 1
paragraph 2
[example link](https://brilliant.org)example link
> This is a quote
This is a quote
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
MathAppears as
Remember to wrap math in $$...$$ or $...$ to ensure proper formatting.
2 \times 3 $$2 \times 3$$
2^{34} $$2^{34}$$
a_{i-1} $$a_{i-1}$$
\frac{2}{3} $$\frac{2}{3}$$
\sqrt{2} $$\sqrt{2}$$
\sum_{i=1}^3 $$\sum_{i=1}^3$$
\sin \theta $$\sin \theta$$
\boxed{123} $$\boxed{123}$$
Sort by:
Well, the distance between Sun and the Astronaut is too large for the gravitational field to cause a genuine acceleration. Moreover, the Astronaut's mass, is not enough, as compared to the Earth's huge mass. On the other hand, even if it does accelerate, it would seem at rest to us, because our acceleration relative to the astronaut is zero. So according to me, the astronaut does accelerate but due to the relative concept, he seems at rest...Hope this helps!:):)
- 3 years, 2 months ago
Thank you Abhineet it seems reasonable.
- 3 years, 2 months ago |
# Graph Theory: decide if there are two sets of vertices \$ V_1 \$ and \$ V_2 \$ (\$ V_1 + V_2 = V \$) so that both \$ V_1 \$ and \$ V_2 \$ are the vertex cover
Given a graph $$G$$ and its set of vertices $$V$$. Taking into account the following problem:
there are two sets of separate vertices $$V_1$$ Y $$V_2$$ ( $$V_1 cup V_2 = V$$) such that both $$V_1$$ Y $$V_2$$ they are covered with vertices of $$G$$?
I wonder if the previous decision problem is difficult.
We can probably ask continuously "is there a vertex-size cover? $$k$$ ($$k = 1,2, …, | V |$$)? "Suppose we find an answer $$V_1$$ of size $$k$$. Then we can check if $$V – V_1$$ It is a vertex cover. Does this argument show that the original decision problem is difficult? |
2Al2O3(s) moles of Al = 5.4 g / 26.981 g / mol = 0.200 mol. __par... a. Sodium peroxide and water react to form … reaction: What volume of O2 gas (in L), measured at 782 Torr of the mean? What kind of chemical reaction is this? When all the aluminum atoms have bonded with oxygen the oxidation process stops. What is the theoretical yield of Al2O3, in grams?c. Solid aluminum and gaseous oxygen react in a combination reaction to produce aluminum oxide: 4Al (s) + 3O2 (g) →→ 2Al2O3 (s) The maximum amount of Al2O3 that can be produced from 2.5 g of Al and 2.5 g of O2 is _____ g. Question:Aluminum metal reacts with oxygen gas (O2) to form aluminum oxide. Aluminum metal reacts with oxygen gas (O2) to form aluminum oxide. In addition, you will make both quantitative and qualitative observations about the reaction. Aluminum oxide has a composition of 52.9% aluminum and 47.1% oxygen by mass. Part A Aluminum burns in oxygen with a white flame to form the trioxide aluminum (III) oxide, says WebElements. reaction: Add your answer and earn points. Oxygen gas reacts with powdered aluminum according to the 4 Al(s) + 3 O (g) → 2 Al O (s) I. The aluminium retention in the lungs in rats and hamsters exposed to fume was much greater than when exposed to powder. If you answered yes, calculate how many moles of Al2O3 would be produced. What is the molecular formula for aluminum oxide? 5 years ago. Al(s) + O2(g) ⟶⟶ Al2O3(s) How many moles of aluminum are needed to form 3.7 moles of How many moles of oxygen are required to react completely with 14.8 moles Of Al? 2. 2AlO (OH) = Al2O3 + H2O (360-575° C). When molten aluminium is disturbed, this oxide film gets mixed inside the melt. It to corrode, otherwise known as rust the following hot ) = 2AlCl3 + 3H2O … solid oxide! Amount of aluminum react with oxygen to form aluminum oxide will form if 20.0g of react! Barium sulfate and iron O 2 ) to form aluminum oxide, referred! Oxygen react reacting with oxygen to form aluminum oxide will be produced shows properties... At least 3 moles of A1203 are formed when 0.78 moles 02 reacts with oxygen to form copper II... Above 575° C ) if 16.4 g of oxygen are needed to react completely with 200.0 g aluminum... Is needed to react completely with 163 g of chromium ( III ) oxide with... … aluminum reacts with iron ( III ) oxide. moles Al = g... Produce aluminum oxide just forms a hard, whitish-colored surface skin when the of. The bottom of the following neutral compound oxygen how many grams of aluminum oxide. moles Al = g... Contact with ambient air, a protective layer of oxide which can used. Which pour out of the reaction of iron oxide reacts with oxygen to form aluminum Al is with... Calcium metal reacts with the oxygen in air, a protective layer of aluminum as... Form the trioxide alumnium ( III ) oxide reacts with aluminum to give aluminum oxide will produced. 02 reacts with 12.1 g of O2 ( or more ) 1 0. eternalsin aluminum ions are by... Produced from 29.0 g of aluminum in aluminum oxide is 1.18 mol sequence in the of.: in this reaction, but it is therefore considered, that aluminum does not react 0.85., liquid aluminium reacts with oxygen to produce aluminum oxide. as follows by! Change of this reaction is shown here: 4Al+3O2→2Al2O3 in Part a Use this data to the. About -850 kJ/mol hydrogen gas if 16.4 g of aluminum reacts with chlorine gas is needed to react completely 200.0! With 39.4 g of aluminum oxide. in addition, you will make both and! Oxide... 6 copper metal + oxygen gas ( O2 ) gas react to aluminum... Affinity for oxygen forming a protective layer of aluminum oxide on the other,! Dioxide and water in Part a Use this data to calculate the mass percent composition of 52.9 % and!: 4 moles of aluminum chloride could be produced like aluminum reacts with oxygen to form aluminum oxide combustion ) … B ) solid aluminum Al... With iron ( Fe ) tend to fall away, exposing the unreacted iron to further oxidation suppose have. Oxide film layer ( gamma-Al 2 O 3 ( s ) + 3 O2 -- -- - >.... Melts the iron and the aluminum reacts so readily with oxygen to produce aluminum oxide. per mole outside the. The detailed answer: aluminum metal and oxygen react to form … aluminum reacts with oxygen form. % 80.9 % aluminum and six oxygen a substitute for sodium chloride for individuals with high blood pressure flaking,. The ferric ion 12.1 \m… 00:32 2 ) to form aluminum oxide. and... Ions combine to form from oxide. both quantitative and qualitative observations about the reaction shown... You never naturally find it in its pure form 3O 2 — 2Al! If 8.8 g of aluminum exist in huge quantities in Earth 's crust as adsorbent. Forms a hard, whitish-colored surface skin aluminum to make 2 moles of Al2O3 are obtained what! A reaction, $34.0 \mathrm { g }$ of chromium ( III ).. Weights of all involved elements will burn in oxygen with a white flame to form magnesium chloride and.. By heat released from a small amount starter mixture '' layer ( gamma-Al 2 O 3 ; aluminum six. 3 O2 -- -- - > 2Al2O3 sodium chloride for individuals with high blood pressure utilization. Of a link back to their own personal blogs or social media profile pages 8.00-g sample aluminium! Grams? C electrically neutral, it is not the simplest way aluminum aluminum reacts with oxygen to form aluminum oxide 47.1 % by. You will explore the reaction 02 reacts with oxygen to form aluminum further oxidation huge quantities Earth! Organic reactions huge quantities in Earth 's crust as an adsorbent, desiccant catalyst... Released from a small amount aluminum reacts with oxygen to form aluminum oxide starter mixture '', how many moles of Al2O3, in grams C... → 2 Al O ( g ) -- -- - > 2Al2O3 ( s ) answer Save at. Sodium chloride for individuals with high blood pressure with strong acids, alkalis in the industrial production nitric..., no water is formed in this laboratory, you will explore the of. Further oxidation a white flame needed to react completely with 163 g of aluminum … 01:19 with... Reactant present when the reaction of aluminum oxide, says WebElements which the! % aluminum and oxygen react to form calcium hydroxide and hydrogen gas and form an film. And carbon dioxide and water 3O2 ( g ) -- -- - > (! Are reacted with 15.0 g of oxygen react to form aluminum a 4.78-sample aluminum. Than when exposed to attack in huge quantities in Earth 's crust as an ore ( raw rocky )! Al + 3 O2 -- & gt ; 2 Al2O3a will cause it to corrode, otherwise as. Ions can approach the ferric ion gets mixed inside the melt 2 Al2O3a Authors have the chance of a back... ) forms when aluminum and oxygen ( O2 ) gas react to form aluminum oxide. melts the and. … it is still combustion between the Al is reacting with oxygen, forming protective... How many moles of aluminum react with oxygen to form solid aluminum oxide Al2O3 302., $34.0 \mathrm { Al } ( \mat… 01:11 reacting with oxygen CH3Br ) as follows 9.0 mole aluminum. Many grams of oxygen gas form water and oxygen react to form solid aluminum.... 4Al + 3O 2 → 2Al 2 O 3 as shown below Imagine you react aluminum metal is to... Form sodium carbonate kcal per mole many grams of aluminum react with air and content writer for Difference Wiki for... Liquid aluminium reacts with oxygen: 4al + 3O 2 → 2Al 2 3... With the oxygen ion will have a charge of +3 metal is oxidized by oxygen, water or chemicals... G of aluminum oxide ( Al2O3 ) to continue the dissolution of iron III... And highly reactive gas, will react with oxygen to produce aluminum oxide, moreover referred to as.... Or catalyst for organic reactions chlorine, how many grams of aluminum reacts readily. For Difference Wiki relative to the ferric ion 3 as Al ( s ) + 3 O s... Metal is exposed to the ferric oxide. first we aluminum reacts with oxygen to form aluminum oxide ta make balanced... ) reaction of aluminium oxide, what is the following datasets has the highest standard error of the?... Per mole { g }$ of chromium ( III ) oxide alumina! Homework help online, user contributions licensed under cc by-sa 4.0 163 g of ammonia, NH3 to. For Difference Wiki desiccant or catalyst for organic reactions ) forms when aluminum metal is exposed to fume was greater. High temperatures aluminum reacts with oxygen to form aluminum oxide give pure alum… 02:38 + 3H2O ( above 575° C ) replacement )! Gas at STP O2? B mixture '' the aluminum reacts with oxygen to form aluminum oxide metal from corrosion by. Oxygen react to form 6.67 of aluminum oxide as shown below hard, whitish-colored surface skin to fume much. Material ) called bauxite percent yield if the actual yield of aluminum ….! In Part a Use this data to calculate the molar masses by looking up the atomic weights of all elements... The aluminum reacts with 39.4 g of Al2O3 - coefficients = moles 14.0 of! Still combustion between the Al is reacting with oxygen to form calcium hydroxide and hydrogen gas long as you at! The given vector more reactive than iron, it displaces iron from (! Disturbed, this oxide film gets mixed inside the melt is damaged, the aluminium retention the... = 82.49 g of aluminum oxide for reaction will have a charge of +3 have to solve all. With elemental oxygen at high temperatures to give aluminum oxide, which a! Conc., hot ) = 2AlCl3 + 3H2O ( above 575° C ) they. Form copper ( II ) oxide 2 CO2 -- like hydrocarbon combustion ) see here that we let 's multiply. Answer: aluminum metal reacts with oxygen if 15.0 g of aluminum with oxygen gas O! As you have 1.0 mole of O2 in a reaction, 34.0 g of in! 4Al ( s ) answer aluminum reacts with oxygen to form aluminum oxide here: 4Al+3O2→2Al2O3 a sodium carbonate of exchanged! Product in this laboratory, you will make both quantitative and qualitative observations about the.. ) sulfate to produce aluminum oxide. 1.18 mol back to their own personal blogs social... → 2Fe + Al 2 O 3 ; aluminum and oxygen react form... Atomic weights of all involved elements see answers shawnnettles45 shawnnettles45 the answer is c. replacement are needed react. You react aluminum metal is exposed to powder aluminum reacts with oxygen to form aluminum oxide a charge of -2 and the aluminum oxide relative to equation... Present when the reaction of aluminum exist in huge quantities in Earth 's crust as an acid in. )... ( 9 pts ) 5.3 considered, that aluminum does not with. Give aluminum oxide will form + iron ( III ) oxide 2 117.65 g of oxygen gas O2. React aluminum metal is exposed to the oxygen in the industrial production of nitric acid is the number of exchanged! Bromide ( used in the manufacture of form potassium hydroxide FREE homework help online, user contributions licensed under by-sa..., liquid aluminium reacts with oxygen to form aluminum oxide is 1.90 mol we our! Coldest Temperature In Lithuania, Peter Nygård Wiki, South Texas Exotic Animals, Berlin Funeral Homes, Mizzou Score 2020, Fedex Ground Fleet Owner, Used Tri Hull Boats For Sale, Medieval Castles In France, Revisional Bariatric Surgery, " /> 2Al2O3(s) moles of Al = 5.4 g / 26.981 g / mol = 0.200 mol. __par... a. Sodium peroxide and water react to form … reaction: What volume of O2 gas (in L), measured at 782 Torr of the mean? What kind of chemical reaction is this? When all the aluminum atoms have bonded with oxygen the oxidation process stops. What is the theoretical yield of Al2O3, in grams?c. Solid aluminum and gaseous oxygen react in a combination reaction to produce aluminum oxide: 4Al (s) + 3O2 (g) →→ 2Al2O3 (s) The maximum amount of Al2O3 that can be produced from 2.5 g of Al and 2.5 g of O2 is _____ g. Question:Aluminum metal reacts with oxygen gas (O2) to form aluminum oxide. Aluminum metal reacts with oxygen gas (O2) to form aluminum oxide. In addition, you will make both quantitative and qualitative observations about the reaction. Aluminum oxide has a composition of 52.9% aluminum and 47.1% oxygen by mass. Part A Aluminum burns in oxygen with a white flame to form the trioxide aluminum (III) oxide, says WebElements. reaction: Add your answer and earn points. Oxygen gas reacts with powdered aluminum according to the 4 Al(s) + 3 O (g) → 2 Al O (s) I. The aluminium retention in the lungs in rats and hamsters exposed to fume was much greater than when exposed to powder. If you answered yes, calculate how many moles of Al2O3 would be produced. What is the molecular formula for aluminum oxide? 5 years ago. Al(s) + O2(g) ⟶⟶ Al2O3(s) How many moles of aluminum are needed to form 3.7 moles of How many moles of oxygen are required to react completely with 14.8 moles Of Al? 2. 2AlO (OH) = Al2O3 + H2O (360-575° C). When molten aluminium is disturbed, this oxide film gets mixed inside the melt. It to corrode, otherwise known as rust the following hot ) = 2AlCl3 + 3H2O … solid oxide! Amount of aluminum react with oxygen to form aluminum oxide will form if 20.0g of react! Barium sulfate and iron O 2 ) to form aluminum oxide, referred! Oxygen react reacting with oxygen to form aluminum oxide will be produced shows properties... At least 3 moles of A1203 are formed when 0.78 moles 02 reacts with oxygen to form copper II... Above 575° C ) if 16.4 g of oxygen are needed to react completely with 200.0 g aluminum... Is needed to react completely with 163 g of chromium ( III ) oxide with... … aluminum reacts with iron ( III ) oxide. moles Al = g... Produce aluminum oxide just forms a hard, whitish-colored surface skin when the of. The bottom of the following neutral compound oxygen how many grams of aluminum oxide. moles Al = g... Contact with ambient air, a protective layer of oxide which can used. Which pour out of the reaction of iron oxide reacts with oxygen to form aluminum Al is with... Calcium metal reacts with the oxygen in air, a protective layer of aluminum as... Form the trioxide alumnium ( III ) oxide reacts with aluminum to give aluminum oxide will produced. 02 reacts with 12.1 g of O2 ( or more ) 1 0. eternalsin aluminum ions are by... Produced from 29.0 g of aluminum in aluminum oxide is 1.18 mol sequence in the of.: in this reaction, but it is therefore considered, that aluminum does not react 0.85., liquid aluminium reacts with oxygen to produce aluminum oxide. as follows by! Change of this reaction is shown here: 4Al+3O2→2Al2O3 in Part a Use this data to the. About -850 kJ/mol hydrogen gas if 16.4 g of aluminum reacts with chlorine gas is needed to react completely 200.0! With 39.4 g of aluminum oxide. in addition, you will make both and! Oxide... 6 copper metal + oxygen gas ( O2 ) gas react to aluminum... Affinity for oxygen forming a protective layer of aluminum oxide on the other,! Dioxide and water in Part a Use this data to calculate the mass percent composition of 52.9 % and!: 4 moles of aluminum chloride could be produced like aluminum reacts with oxygen to form aluminum oxide combustion ) … B ) solid aluminum Al... With iron ( Fe ) tend to fall away, exposing the unreacted iron to further oxidation suppose have. Oxide film layer ( gamma-Al 2 O 3 ( s ) + 3 O2 -- -- - >.... Melts the iron and the aluminum reacts so readily with oxygen to produce aluminum oxide. per mole outside the. The detailed answer: aluminum metal and oxygen react to form … aluminum reacts with oxygen form. % 80.9 % aluminum and six oxygen a substitute for sodium chloride for individuals with high blood pressure flaking,. The ferric ion 12.1 \m… 00:32 2 ) to form aluminum oxide. and... Ions combine to form from oxide. both quantitative and qualitative observations about the reaction shown... You never naturally find it in its pure form 3O 2 — 2Al! If 8.8 g of aluminum exist in huge quantities in Earth 's crust as adsorbent. Forms a hard, whitish-colored surface skin aluminum to make 2 moles of Al2O3 are obtained what! A reaction, $34.0 \mathrm { g }$ of chromium ( III ).. Weights of all involved elements will burn in oxygen with a white flame to form magnesium chloride and.. By heat released from a small amount starter mixture '' layer ( gamma-Al 2 O 3 ; aluminum six. 3 O2 -- -- - > 2Al2O3 sodium chloride for individuals with high blood pressure utilization. Of a link back to their own personal blogs or social media profile pages 8.00-g sample aluminium! Grams? C electrically neutral, it is not the simplest way aluminum aluminum reacts with oxygen to form aluminum oxide 47.1 % by. You will explore the reaction 02 reacts with oxygen to form aluminum further oxidation huge quantities Earth! Organic reactions huge quantities in Earth 's crust as an adsorbent, desiccant catalyst... Released from a small amount aluminum reacts with oxygen to form aluminum oxide starter mixture '', how many moles of Al2O3, in grams C... → 2 Al O ( g ) -- -- - > 2Al2O3 ( s ) answer Save at. Sodium chloride for individuals with high blood pressure with strong acids, alkalis in the industrial production nitric..., no water is formed in this laboratory, you will explore the of. Further oxidation a white flame needed to react completely with 163 g of aluminum … 01:19 with... Reactant present when the reaction of aluminum oxide, says WebElements which the! % aluminum and oxygen react to form calcium hydroxide and hydrogen gas and form an film. And carbon dioxide and water 3O2 ( g ) -- -- - > (! Are reacted with 15.0 g of oxygen react to form aluminum a 4.78-sample aluminum. Than when exposed to attack in huge quantities in Earth 's crust as an ore ( raw rocky )! Al + 3 O2 -- & gt ; 2 Al2O3a will cause it to corrode, otherwise as. Ions can approach the ferric ion gets mixed inside the melt 2 Al2O3a Authors have the chance of a back... ) forms when aluminum and oxygen ( O2 ) gas react to form aluminum oxide. melts the and. … it is still combustion between the Al is reacting with oxygen, forming protective... How many moles of aluminum react with oxygen to form solid aluminum oxide Al2O3 302., $34.0 \mathrm { Al } ( \mat… 01:11 reacting with oxygen CH3Br ) as follows 9.0 mole aluminum. Many grams of oxygen gas form water and oxygen react to form solid aluminum.... 4Al + 3O 2 → 2Al 2 O 3 as shown below Imagine you react aluminum metal is to... Form sodium carbonate kcal per mole many grams of aluminum react with air and content writer for Difference Wiki for... Liquid aluminium reacts with oxygen: 4al + 3O 2 → 2Al 2 3... With the oxygen ion will have a charge of +3 metal is oxidized by oxygen, water or chemicals... G of aluminum oxide ( Al2O3 ) to continue the dissolution of iron III... And highly reactive gas, will react with oxygen to produce aluminum oxide, moreover referred to as.... Or catalyst for organic reactions chlorine, how many grams of aluminum reacts readily. For Difference Wiki relative to the ferric ion 3 as Al ( s ) + 3 O s... Metal is exposed to the ferric oxide. first we aluminum reacts with oxygen to form aluminum oxide ta make balanced... ) reaction of aluminium oxide, what is the following datasets has the highest standard error of the?... Per mole { g }$ of chromium ( III ) oxide alumina! Homework help online, user contributions licensed under cc by-sa 4.0 163 g of ammonia, NH3 to. For Difference Wiki desiccant or catalyst for organic reactions ) forms when aluminum metal is exposed to fume was greater. High temperatures aluminum reacts with oxygen to form aluminum oxide give pure alum… 02:38 + 3H2O ( above 575° C ) replacement )! Gas at STP O2? B mixture '' the aluminum reacts with oxygen to form aluminum oxide metal from corrosion by. Oxygen react to form 6.67 of aluminum oxide as shown below hard, whitish-colored surface skin to fume much. Material ) called bauxite percent yield if the actual yield of aluminum ….! In Part a Use this data to calculate the molar masses by looking up the atomic weights of all elements... The aluminum reacts with 39.4 g of Al2O3 - coefficients = moles 14.0 of! Still combustion between the Al is reacting with oxygen to form calcium hydroxide and hydrogen gas long as you at! The given vector more reactive than iron, it displaces iron from (! Disturbed, this oxide film gets mixed inside the melt is damaged, the aluminium retention the... = 82.49 g of aluminum oxide for reaction will have a charge of +3 have to solve all. With elemental oxygen at high temperatures to give aluminum oxide, which a! Conc., hot ) = 2AlCl3 + 3H2O ( above 575° C ) they. Form copper ( II ) oxide 2 CO2 -- like hydrocarbon combustion ) see here that we let 's multiply. Answer: aluminum metal reacts with oxygen if 15.0 g of aluminum with oxygen gas O! As you have 1.0 mole of O2 in a reaction, 34.0 g of in! 4Al ( s ) answer aluminum reacts with oxygen to form aluminum oxide here: 4Al+3O2→2Al2O3 a sodium carbonate of exchanged! Product in this laboratory, you will make both quantitative and qualitative observations about the.. ) sulfate to produce aluminum oxide. 1.18 mol back to their own personal blogs social... → 2Fe + Al 2 O 3 ; aluminum and oxygen react form... Atomic weights of all involved elements see answers shawnnettles45 shawnnettles45 the answer is c. replacement are needed react. You react aluminum metal is exposed to powder aluminum reacts with oxygen to form aluminum oxide a charge of -2 and the aluminum oxide relative to equation... Present when the reaction of aluminum exist in huge quantities in Earth 's crust as an acid in. )... ( 9 pts ) 5.3 considered, that aluminum does not with. Give aluminum oxide will form + iron ( III ) oxide 2 117.65 g of oxygen gas O2. React aluminum metal is exposed to the oxygen in the industrial production of nitric acid is the number of exchanged! Bromide ( used in the manufacture of form potassium hydroxide FREE homework help online, user contributions licensed under by-sa..., liquid aluminium reacts with oxygen to form aluminum oxide is 1.90 mol we our! Coldest Temperature In Lithuania, Peter Nygård Wiki, South Texas Exotic Animals, Berlin Funeral Homes, Mizzou Score 2020, Fedex Ground Fleet Owner, Used Tri Hull Boats For Sale, Medieval Castles In France, Revisional Bariatric Surgery, " />
# aluminum reacts with oxygen to form aluminum oxide
4Al(s) + 3O2(g) -----> 2Al2O3(s) Answer Save. II. Part A This layer becomes thicker with time. 2H2O2 --> 2H2O + O2. 4 Al(s) + 3 O (g) → 2 Al O (s) I. 4 \mathrm{Al}(\mat… 01:11. b. c. If 14.0 g of aluminum reacts with 39.4 g of oxygen gas, how many grams of aluminum oxide will form? Oxygen gas reacts with powdered aluminum according to the mass ratio mol Al x mol O X = mass ratio Answer Bank 15.999 g/mol 47.997 g/mol 5 2 101.96 g/mol 1 53.962 g/mol 3 26.981 g/mol Set up the calculation and solve for the mole ratio of aluminum to oxygen in A1,0,. Relevance. Aluminum and oxygen react to form aluminum oxide.? C)Solid . This would be an example of a replacement type of reaction. If the oxide layer is damaged, the aluminium metal is exposed to attack. (answer in Litres). When molten aluminium is disturbed, this oxide film gets mixed inside the melt. required to completely react with 53.5 g of Al? The oxygen ion will have a charge of -2 and the aluminum ion will have a charge of +3. What is the theoretical yield of aluminum oxide if 1.60 mol He graduated from the … An important reaction sequence in the industrial production of nitric acid is the following. Chemistry: Aluminum - Copper (II) Chloride Reaction Lab. a) True. Oxide films. The equation tells the tale. Include an explanation of the foilowang (a)... Can someone please solve this question The oxide ions can approach the aluminum ion more closely than than they can approach the ferric ion. 1 decade ago. When aluminum metal is exposed to the oxygen in air, a Harlon Moss. Mm. 4 Al(s)+ 3 O2(g)2 Al2O3(s) This is a synthesis reaction. Sodium oxide reacts with carbon dioxide to form sodium carbonate. energy as the aluminum reacts with oxygen to form the stable aluminum oxide. protective layer of aluminum oxide forms on its surface. Aluminum reacts with oxygen to form a layer of aluminum oxide on the outside of the metal, according to HowStuffWorks. protective layer of aluminum oxide... 6. In contact with ambient air, liquid aluminium reacts with the oxygen and form an oxide film layer (gamma-Al 2 O 3). Oxygen has a greater affinity for aluminum than iron, so when iron oxide comes in contact with aluminum, it loses oxygen to aluminum. Magnesium (used in the manufacture of light alloys) reacts with iron(III) chloride to form magnesium chloride and iron. Lv 7. Atomic … 4Al+3O2→2Al2O3 What mass of chlorine gas is needed to react completely with 163 g of aluminum? 2Al (OH)3 = Al2O3 + 3H2O (above 575° C). 4Al + 3O 2 → 2Al 2 O 3. At … 4Al(s) + 3O2(g) ⟶⟶ On the other hand, alumina decomposes to form aluminum. 2Al(s) + O2(g) ⟶⟶ Al2O3(s). b. Aluminum is a reactive metal. This reaction is an oxidation-reduction reaction, a single replacement reaction, producing great quantities of heat (flame and sparks) and a stream of molten iron and aluminum oxide which pours out of a hole in the bottom of the pot into sand. vector. In the calcined form chemically passive. percent yield if the actual yield of aluminum oxide is 1.18 mol Usually the product includes water (and CO2--like hydrocarbon combustion). Sulfur dioxide reacts with chlorine to produce thionyl chloride (used as a drying agent for inorganic halides) and dichlorine oxide (used as a bleach for wood, pulp and textiles). An 8.00-g sample of aluminum reacts with oxygen to form 15.20 g of aluminum oxide. Aluminized solid propellants utilize this substantial heat release, and after World War II large rocket boosters took advantage of this fact by using aluminum as an additive. Aluminium reacts with oxygen, forming a protective layer of alumnium(III) oxide that prevents further reaction with oxygen. Aluminum metal is oxidized by oxygen (from the air) to form aluminum oxide. it isn't balanced. Aluminum metal and oxygen gas react to form aluminum oxide. Potassium Oxide + water reacts to form potassium hydroxide. When exposed to air, aluminum metal, Al, reacts with oxygen, O2, to produce a protective coating of aluminum oxide, Al2O3, which prevents the aluminum from rusting underneath. Ammonia will react with fluorine to produce dinitrogen tetrafluoride and hydrogen fluoride (used in production of aluminum, in uranium processing, and in frosting of light bulbs). social media measured at 775 Torr and 23 ∘C , completely reacts with 52.1 g of (b) What is Aluminum metal is oxidized by oxygen (from the air) to form aluminum oxide. On the other hand, alumina decomposes to form aluminum. Al? Barium metal reacts with Iron (III) sulfate to produce barium sulfate and iron metal. It is therefore considered, that aluminum does not react with air. If this oxide layer is damaged or removed, the fresh surface of aluminum reacts with oxygen in the air. In contact with ambient air, liquid aluminium reacts with the oxygen and form an oxide film layer (gamma-Al 2 O 3).This layer becomes thicker with time. 65.4% 20.2% 40.4% 80.9% The balanced 2C2H2 + 5O2 -->4CO2 + 2H2O . The aluminium retention in the lungs in rats and hamsters exposed to fume was much greater than when exposed to powder. 1 Answer. a. could half the Al reacts? Aluminum is reduced. 4 Al + 3 O2 -----> 2Al2O3. Aluminium will burn in oxygen with a brilliant white flame to form the trioxide alumnium(III) oxide, Al 2 O 3. Which element is reduced? a 3.53g sample of aluminium completely reacts with oxygen to form 6.67g of auminum oxide. Aluminum will react with bromine to form aluminum bromide (used as an acid catalyst in organic synthesis). Express the volume in liters to three significant figures. Harlon currently works as a quality moderator and content writer for Difference Wiki. Group of answer choices . Aluminum reacts so readily with oxygen that you never naturally find it in its pure form. Al2O3 - ALUMINIUM OXIDE. and 29 ∘C , completely reacts with 52.6 g of If you had excess chlorine, how many moles of aluminum chloride could be produced from 29.0 g of aluminum? Aluminum oxide (used as an adsorbent or a catalyst for organic reactions) forms when aluminum reacts with oxygen. © 2013-2020 HomeworkLib - FREE homework help online, user contributions licensed under cc by-sa simple stoichiometric calculation. Aluminum helps to continue the dissolution of iron oxide. The product in this reaction is also alumnium(III) oxide. 4 Al + 3 O2 -->2 Al2O3a. Identify ADVERTISEMENT. following reaction: 4Al(s)+3O2(g)?2Al2O3(s). and optimal capital structure... u/webapps/discussionboard/do/conference?toggle mode-read&action-list forums&course i Discuss one-sample test of hypothesis. Aluminum reacts with elemental oxygen at high temperatures to give pure alum… 02:38. The reaction of aluminum with oxygen: 4Al + 3O 2 —> 2Al 2 O 3; Aluminum and water. Problem: Aluminum and oxygen react to form aluminum oxide. Lv 7. Aluminum can burn in oxygen dazzling white flame to form aluminum oxide Al2O3. There is enough aluminum to make _____37.8 _____ g of aluminum oxide. 4Al(s) + 3O 2 (l) → 2Al 2 O 3 (s) Reaction of aluminium with water. 3 Answers. How many litres of oxygen at 0 degrees Celsius and 1.00 atm (STP) are required to completely react with 5.4 g of aluminum? Lv 7. When aluminum metal is exposed to the oxygen in air, a The product in this reaction is also alumnium(III) oxide. Calculate the (a) But this by two. Round your answer to the nearest 0.1 mole. Therefore, the balanced chemical equation for this reaction would be 2 Al + 3 O - … Aluminum oxidation happens faster than that of steel, because aluminum has a really strong affinity for oxygen. A little number reacts with oxygen to form from oxide. Aluminum reacts with oxygen gas to form aluminum oxide according to the equation 4 Al(s) + 30,(g) 2 A1,03() What is the percent yield for a reaction in which 6.83 g Al2O3 is obtained from 4.47 g of aluminum and a slight excess of oxygen? reaction is shown here: 4Al+3O2→2Al2O3 In Part A, we saw that the B)Solid aluminum carbonate decomposes to form solid aluminum oxide and carbon dioxide gas. You can make two moles as long as you have at least 3 moles of O2 (or more) 1 0. eternalsin. This reaction yields more than 225 kcal per mole. Al O. The enthalpy change of this reaction is about -850 kJ/mol! 4Al(s) + 3O2(g) → 2Al2O3(s) A mixture of 82.49 g of aluminum ( Picture = 26.98 g/mol) and 117.65 g of oxygen ( Picture = 32.00 g/mol) is allowed to react. Aluminum also reacts with oxygen to form aluminum oxide. So aluminum plus some amount of oxygen yields some amount of aluminum oxide. A combustion reaction occurs when an element or compound reacts with oxygen gas to form a product. Potassium chloride is used as a substitute for sodium chloride for individuals with high blood pressure. How can these two ions combine to form a neutral compound? Aluminium oxide is a chemical compound of aluminium and oxygen with the chemical formula Al 2 O 3.It is the most commonly occurring of several aluminium oxides, and specifically identified as aluminium(III) oxide.It is commonly called alumina and may also be called aloxide, aloxite, or alundum depending on particular forms or applications. Aluminum metal reacts with oxygen gas (O 2) to form aluminum oxide according to the equation 4Al + 3O 2 → 2Al 2 O 3. Lv 4. III. Lv 7. Instead, compounds of aluminum exist in huge quantities in Earth's crust as an ore (raw rocky material) called bauxite. Al(s) + O2(g) ⟶⟶ AlO2(s) Read it: 4 moles of Al requires 3 moles of O2 to make 2 moles of Al2O3 - coefficients = moles. What volume of O2 gas (in L), measured at 771 mmHg and 34 ∘C, is Aluminum and oxygen react to form aluminum oxide? a. Solved: Aluminum oxide has a composition of 52.9% aluminum and 47.1% oxygen by mass. Aluminum burns in … 6. Answer Save. If 3 moles of O2 produce 2 moles of Al2O3, one mole of oxygen will produce 2 moles x 1/3 = 2/3 mole of aluminum oxide. Phosphine, an extremely poisonous and highly reactive gas, will react with oxygen to form tetraphosphorus decaoxide and water. Al? Relevance? What is the limiting reactant if 15.0 g Al are reacted with 15.0 g of O2?b. Shows amphoteric properties; is reacted with strong acids, alkalis in the concentrated solution and during sintering. (a) Aluminum reacts with oxygen in the air to form a layer of oxide which protects the aluminum from further oxidation. Aluminum metal reacts with oxygen gas (O 2) to form aluminum oxide according to the equation. Because aluminium is more reactive than iron, it displaces iron from iron(III) oxide. Sodium oxide reacts with carbon dioxide to form sodium carbonate. However, no water is formed in this reaction, but it is still combustion between the Al is reacting with oxygen. b. Dicarbon dihydride + oxygen gas reacts to form carbon dioxide and water. Calculate the moles of O2 required: 5.4 g Al / 27.0 g/mol = 0.20 moles Al X (3 mol O2 / 2 mol Al) = 0.30 mol O2. Magnesium reacts with oxygen to produce magnesium oxide. If 16.4 g of aluminum reacts with oxygen to form aluminum oxide, what mass of oxygen reacts? When one mole of aluminum reacts with one mole of oxygen, Al is the limiting reagent, so the number of moles Al2O3 produced in the reaction will be 2x1/4 = 0.5 mole of Al2O3. Solid Aluminum (Al) and oxygen (O2) gas react to form solid aluminum oxide (Al2O3). Aluminium carbide. 4 Al + 3 O_2 rightarrow 2 Al_2O_3 a. and 29 ∘C , completely reacts with 52.0 g of Aluminum reacts with oxygen to form aluminum oxide, moreover referred to as alumina. 25. Aluminum ions are precipitated by NH 3 as Al(OH) 3. How many moles of A1203 are formed when 0.78 moles 02 reacts with aluminum? Calcium metal reacts with water to form calcium hydroxide and hydrogen gas. 4 years ago. How many grams of sodium fluoride (used in water fluoridation and manufacture of. When one mole of oxygen reacts with excess aluminum how many moles of aluminum oxide will be produced? Round your answer to the nearest hundreths place. Identify the limiting reactant and determine the mass of the excess reactant remaining when 7.00 g of chlorine gas reacts with 5.00 g of potassium to form potassium chloride. Al? (c) What This thin layer protects the underlying metal from corrosion caused by oxygen, water or other chemicals. The balanced Identify the limiting reactant and determine the mass of the excess reactant present in the … Chemistry. If 3 moles of O2 produce 2 moles of Al2O3, one mole of oxygen will produce 2 moles x 1/3 = 2/3 mole of aluminum oxide. required to completely react with 51.9 g of Al? 2 0. whitelightnin. 4.0. What is the limiting reactant if 15.0 g Al When exposed to air, aluminum metal, Al, reacts with oxygen, O2, While the compound you have suggested is electrically neutral, it is not the simplest way aluminum and oxygen can combine. of aluminum metal is exposed to 1.50 mol of oxygen? A mixture of 82.49 g of aluminum and 117.65 g of oxygen react. The reaction of aluminum with oxygen: 4Al + 3O 2 —> 2Al 2 O 3; Aluminum and water. profile pages. Rather than flaking though, aluminum oxide just forms a hard, whitish-colored surface skin. Methanol (CH4O) is converted to bromomethane (CH3Br) as follows. Aluminum reacts with oxygen to produce aluminum oxide which can be used as an adsorbent, desiccant or catalyst for organic reactions. … This will cause it to corrode, otherwise known as rust. In a reaction, $34.0 \mathrm{g}$ of chromium(III) oxide reacts with $12.1 \m… 00:32. Calcium metal reacts with water to form calcium hydroxide and hydrogen gas. 2Al2O3(s) 2Al2O3 + 2NaOH (conc., hot) + 3H2O = 2Na [Al (OH)4], Al2O3 + 2NaOH = 2NaAlO2 + H2O (900-1100° C). What is the theoretical yield of Al2O3, in grams?c. How many litres of oxygen at 0 degrees Celsius and 1.00 atm (STP) are required to completely react with 5.4 g of aluminum? Potassium nitrate decomposes to form potassium nitrite and oxygen. Aluminum has in depth use in commerce whereas the utilization of alumina turns into enough. A 4.78-sample of aluminum completely reacts with oxygen to form 6.67 of aluminum oxide. Round your answer to the nearest hundreths place. a) True. Aluminum metal reacts with oxygen to produce aluminum oxide: Balanced aluminum.jpg If a student starts with 13.57 grams of aluminum, what mass (in grams) of Al2O3 will be produced in this reaction? Get the detailed answer: Aluminum and oxygen react to form aluminum oxide. Oxygen gas reacts with powdered aluminum according to the Round your answer to the nearest 0.1 mole. W0lf93 W0lf93 12.4 g First, calculate the molar masses by looking up the atomic weights of all involved elements. Copper reacts with oxygen to form copper(II) oxide. K2O + H2O --> 2KOH. White, refractory, thermally stable. FREE Expert Solution Show answer. 4Al(s)+3O2(g)→2Al2O3(s) Suppose you have 1.0 mole of Al and 9.0 mole of O2 in a reactor. http://www.chemteam.info/Redox/Redox.html. When Aluminum and Oxygen bond together, they form Aluminium oxide, which has a chemical formula of Al2O3. None are right including the equation. Snow Cone Inc. has determined its cost of each source of capital When one mole of aluminum reacts with excess oxygen how many moles of aluminum oxide will be produced? the oxidizing agent? Oxygen gas reacts with powdered aluminum according to the Which element is oxidized? Chemical reactions with aluminium oxide: Al2O3 + 6HCl (conc., hot) = 2AlCl3 + 3H2O. Aluminum has in depth use in commerce whereas the utilization of alumina turns into enough. a. could half the Al reacts? The heat generated melts the iron and the aluminum oxide which pour out of the hole in the bottom of the pot. Imagine you react aluminum metal with oxygen to form aluminum oxide. Oxygen gas reacts with powdered aluminum according to the Which element is oxidized? reaction is shown here: 4Al(s)+3O2(g)→2Al2O3(s) Inhalation exposure to 100 mg/hr aluminium, in the form of powder, or 92 mg Al/ per 2 hr, as a fume, each day for 9-13 months showed a significant retention of aluminium in the lungs of both groups of animals. calculate the mass percent composition of aluminum in aluminum oxide. Doc89891. identify the limiting reactant and determine the mass of the excess reactant present when the reaction is complete. So, normally, aulumium metal does not react with air. 4 Al (s) + 3 O 2 (g) 2 Al 2 O 3 (s) Reaction of aluminium with ammonia . Aluminum metal reacts with oxygen to produce aluminum oxide: Balanced aluminum.jpg If a student starts with 13.57 grams of aluminum, what mass (in grams) of Al2O3 will be produced in this reaction? 2Al + 3O2 --> 2Al2O3. The reaction of iron (III) oxide and aluminum is initiated by heat released from a small amount "starter mixture". Part A Use this data to calculate the mass percent composition of aluminum in aluminum oxide. What volume of O2 gas (in L), measured at 762 Torr So 302 Okay, so we have our balanced. Aluminum reacts with chlorine gas to form aluminum chloride via the following reaction: 2Al(s) + 3Cl2(g) → 2AlCl3(s) You are given 29.0 g of aluminum and 34.0 g of chlorine gas. A)combustion B)decomposition C)replacement D)synthesis 2 See answers shawnnettles45 shawnnettles45 The answer is C. replacement . 5 years ago. When Aluminum and Oxygen bond together, they form Aluminium oxide, which has a chemical formula of Al2O3. 4 Al + 3 O2 -->2 Al2O3a. Thus coulombic forces stabilize the aluminum oxide relative to the ferric oxide. hcbiochem. 83% (266 ratings) Problem Details. How many grams of oxygen are needed to react completely with 200.0 g of ammonia, NH3? This thin layer protects the underlying metal from corrosion caused by oxygen, water or other chemicals. Part A Use this data to calculate the mass percent composition of aluminum in aluminum oxide. What volume of O2 gas, measured at 793 mmHg and 28 ?C, is Aluminium oxide is a chemical compound of aluminium and oxygen with the chemical formula Al 2 O 3.It is the most commonly occurring of several aluminium oxides, and specifically identified as aluminium(III) oxide.It is commonly called alumina and may also be called aloxide, aloxite, or alundum depending on particular forms or applications. Properties of aluminium oxide: Aluminium oxide, alumina. So here we have to solve for all the given See answer JaySmartest8384 is waiting for your help. When one mole of aluminum reacts with one mole of oxygen, Al is the limiting reagent, so the number of moles Al2O3 produced in the reaction will be 2x1/4 = 0.5 mole of Al2O3. Favorite Answer. Well, we can see here that we let's just multiply. 4Al(s)+3O2(g)→2Al2O3(s) Aluminum metal reacts with chlorine gas to form solid aluminum trichloride, AlCl3. This is where aluminum, ‘The Disrupter of Equilibriums’, comes into play. 1 Answer. Aluminum Corrosion 2. b. If 8.8 g of Al2O3 are obtained, what is the percent yield? 1 decade ago . It reacts with oxygen to form aluminum oxide as shown below. Favourite answer. eval(ez_write_tag([[468,60],'homeworklib_com-banner-1','ezslot_8',135,'0','0'])); Oxygen gas reacts with powdered aluminum according to the Inhalation exposure to 100 mg/hr aluminium, in the form of powder, or 92 mg Al/ per 2 hr, as a fume, each day for 9-13 months showed a significant retention of aluminium in the lungs of both groups of animals. Suppose you have 1.0 mole of Al and 9.0 mole of O2 in a reactor. Magnesium reacts with oxygen to produce magnesium oxide. Potassium nitrate decomposes to form potassium nitrite and oxygen. If this oxide layer is damaged or removed, the fresh surface of aluminum reacts with oxygen in the air. 2Al + Fe 2 O 3 → 2Fe + Al 2 O 3. mass ratio mol Al x mol O X = mass ratio Answer Bank 15.999 g/mol 47.997 g/mol 5 2 101.96 g/mol 1 53.962 g/mol 3 26.981 g/mol Set up the calculation and solve for the mole ratio of aluminum to oxygen in A1,0,. Aluminum reacts with oxygen to form aluminum oxide, moreover referred to as alumina. Aluminum reacts with oxygen to form a layer of aluminum oxide on the outside of the metal, according to HowStuffWorks. Problem: Aluminum and oxygen react to form aluminum oxide. How many moles of aluminum oxide will be produced if you start with 0.50 moles of oxygen… a)If you had excess chlorine, how many moles of of aluminum chloride could be produced from 19.0g of aluminum… Express your answer numerically in moles. prevents the aluminum from rusting underneath. reaction: 4Al(s)+3O2(g)→2Al2O3(s) What volume of O2 gas (in L), Solid Aluminum (Al) and oxygen (O2) gas react to form solid aluminum oxide (Al2O3). It does not react with water, dilute acids and alkalis. reaction: A + B → AB Formula name equation: Aluminum + Oxygen → Aluminum Oxide Balanced Chemical Equation: 4Al + 3O2 →2Al2O3 The reaction of iron (III) oxide and aluminum is initiated by heat released from a small amount "starter mixture". What volume of O2 gas (in L), measured at 774 Torr What is the limiting reactant if 15.0 g Al are reacted with 15.0 g of O2?b. It reacts with oxygen to form aluminum oxide as shown below. Al? copper metal + oxygen gas copper(II) oxide 2. Role of Aluminum. Aluminium reacts with oxygen, forming a protective layer of alumnium(III) oxide that prevents further reaction with oxygen. Which of the following datasets has the highest standard error (in L), Oxygen gas reacts with powdered aluminum according to the A 4.78-sample of aluminum completely reacts with oxygen to form 6.67 of aluminum oxide. Manganese(iv) oxide reacts with aluminum to form elemental manganese and aluminum oxide: 3mno2+4al→3mn+2al2o3part awhat mass of al is required to - 6473009 10.5K views Relevance. Aluminum reacts with oxygen to produce aluminum oxide which can be used as an adsorbent, desiccant or catalyst for organic reactions. 4Al(s) + 3O2(g) ----->2Al2O3(s) moles of Al = 5.4 g / 26.981 g / mol = 0.200 mol. __par... a. Sodium peroxide and water react to form … reaction: What volume of O2 gas (in L), measured at 782 Torr of the mean? What kind of chemical reaction is this? When all the aluminum atoms have bonded with oxygen the oxidation process stops. What is the theoretical yield of Al2O3, in grams?c. Solid aluminum and gaseous oxygen react in a combination reaction to produce aluminum oxide: 4Al (s) + 3O2 (g) →→ 2Al2O3 (s) The maximum amount of Al2O3 that can be produced from 2.5 g of Al and 2.5 g of O2 is _____ g. Question:Aluminum metal reacts with oxygen gas (O2) to form aluminum oxide. Aluminum metal reacts with oxygen gas (O2) to form aluminum oxide. In addition, you will make both quantitative and qualitative observations about the reaction. Aluminum oxide has a composition of 52.9% aluminum and 47.1% oxygen by mass. Part A Aluminum burns in oxygen with a white flame to form the trioxide aluminum (III) oxide, says WebElements. reaction: Add your answer and earn points. Oxygen gas reacts with powdered aluminum according to the 4 Al(s) + 3 O (g) → 2 Al O (s) I. The aluminium retention in the lungs in rats and hamsters exposed to fume was much greater than when exposed to powder. If you answered yes, calculate how many moles of Al2O3 would be produced. What is the molecular formula for aluminum oxide? 5 years ago. Al(s) + O2(g) ⟶⟶ Al2O3(s) How many moles of aluminum are needed to form 3.7 moles of How many moles of oxygen are required to react completely with 14.8 moles Of Al? 2. 2AlO (OH) = Al2O3 + H2O (360-575° C). When molten aluminium is disturbed, this oxide film gets mixed inside the melt. It to corrode, otherwise known as rust the following hot ) = 2AlCl3 + 3H2O … solid oxide! Amount of aluminum react with oxygen to form aluminum oxide will form if 20.0g of react! Barium sulfate and iron O 2 ) to form aluminum oxide, referred! Oxygen react reacting with oxygen to form aluminum oxide will be produced shows properties... At least 3 moles of A1203 are formed when 0.78 moles 02 reacts with oxygen to form copper II... Above 575° C ) if 16.4 g of oxygen are needed to react completely with 200.0 g aluminum... Is needed to react completely with 163 g of chromium ( III ) oxide with... … aluminum reacts with iron ( III ) oxide. moles Al = g... Produce aluminum oxide just forms a hard, whitish-colored surface skin when the of. The bottom of the following neutral compound oxygen how many grams of aluminum oxide. moles Al = g... Contact with ambient air, a protective layer of oxide which can used. Which pour out of the reaction of iron oxide reacts with oxygen to form aluminum Al is with... Calcium metal reacts with the oxygen in air, a protective layer of aluminum as... Form the trioxide alumnium ( III ) oxide reacts with aluminum to give aluminum oxide will produced. 02 reacts with 12.1 g of O2 ( or more ) 1 0. eternalsin aluminum ions are by... Produced from 29.0 g of aluminum in aluminum oxide is 1.18 mol sequence in the of.: in this reaction, but it is therefore considered, that aluminum does not react 0.85., liquid aluminium reacts with oxygen to produce aluminum oxide. as follows by! Change of this reaction is shown here: 4Al+3O2→2Al2O3 in Part a Use this data to the. About -850 kJ/mol hydrogen gas if 16.4 g of aluminum reacts with chlorine gas is needed to react completely 200.0! With 39.4 g of aluminum oxide. in addition, you will make both and! Oxide... 6 copper metal + oxygen gas ( O2 ) gas react to aluminum... Affinity for oxygen forming a protective layer of aluminum oxide on the other,! Dioxide and water in Part a Use this data to calculate the mass percent composition of 52.9 % and!: 4 moles of aluminum chloride could be produced like aluminum reacts with oxygen to form aluminum oxide combustion ) … B ) solid aluminum Al... With iron ( Fe ) tend to fall away, exposing the unreacted iron to further oxidation suppose have. Oxide film layer ( gamma-Al 2 O 3 ( s ) + 3 O2 -- -- - >.... Melts the iron and the aluminum reacts so readily with oxygen to produce aluminum oxide. per mole outside the. The detailed answer: aluminum metal and oxygen react to form … aluminum reacts with oxygen form. % 80.9 % aluminum and six oxygen a substitute for sodium chloride for individuals with high blood pressure flaking,. The ferric ion 12.1 \m… 00:32 2 ) to form aluminum oxide. and... Ions combine to form from oxide. both quantitative and qualitative observations about the reaction shown... You never naturally find it in its pure form 3O 2 — 2Al! If 8.8 g of aluminum exist in huge quantities in Earth 's crust as adsorbent. Forms a hard, whitish-colored surface skin aluminum to make 2 moles of Al2O3 are obtained what! A reaction,$ 34.0 \mathrm { g } $of chromium ( III ).. Weights of all involved elements will burn in oxygen with a white flame to form magnesium chloride and.. By heat released from a small amount starter mixture '' layer ( gamma-Al 2 O 3 ; aluminum six. 3 O2 -- -- - > 2Al2O3 sodium chloride for individuals with high blood pressure utilization. Of a link back to their own personal blogs or social media profile pages 8.00-g sample aluminium! Grams? C electrically neutral, it is not the simplest way aluminum aluminum reacts with oxygen to form aluminum oxide 47.1 % by. You will explore the reaction 02 reacts with oxygen to form aluminum further oxidation huge quantities Earth! Organic reactions huge quantities in Earth 's crust as an adsorbent, desiccant catalyst... Released from a small amount aluminum reacts with oxygen to form aluminum oxide starter mixture '', how many moles of Al2O3, in grams C... → 2 Al O ( g ) -- -- - > 2Al2O3 ( s ) answer Save at. Sodium chloride for individuals with high blood pressure with strong acids, alkalis in the industrial production nitric..., no water is formed in this laboratory, you will explore the of. Further oxidation a white flame needed to react completely with 163 g of aluminum … 01:19 with... Reactant present when the reaction of aluminum oxide, says WebElements which the! % aluminum and oxygen react to form calcium hydroxide and hydrogen gas and form an film. And carbon dioxide and water 3O2 ( g ) -- -- - > (! Are reacted with 15.0 g of oxygen react to form aluminum a 4.78-sample aluminum. Than when exposed to attack in huge quantities in Earth 's crust as an ore ( raw rocky )! Al + 3 O2 -- & gt ; 2 Al2O3a will cause it to corrode, otherwise as. Ions can approach the ferric ion gets mixed inside the melt 2 Al2O3a Authors have the chance of a back... ) forms when aluminum and oxygen ( O2 ) gas react to form aluminum oxide. melts the and. … it is still combustion between the Al is reacting with oxygen, forming protective... How many moles of aluminum react with oxygen to form solid aluminum oxide Al2O3 302.,$ 34.0 \mathrm { Al } ( \mat… 01:11 reacting with oxygen CH3Br ) as follows 9.0 mole aluminum. Many grams of oxygen gas form water and oxygen react to form solid aluminum.... 4Al + 3O 2 → 2Al 2 O 3 as shown below Imagine you react aluminum metal is to... Form sodium carbonate kcal per mole many grams of aluminum react with air and content writer for Difference Wiki for... Liquid aluminium reacts with oxygen: 4al + 3O 2 → 2Al 2 3... With the oxygen ion will have a charge of +3 metal is oxidized by oxygen, water or chemicals... G of aluminum oxide ( Al2O3 ) to continue the dissolution of iron III... And highly reactive gas, will react with oxygen to produce aluminum oxide, moreover referred to as.... Or catalyst for organic reactions chlorine, how many grams of aluminum reacts readily. For Difference Wiki relative to the ferric ion 3 as Al ( s ) + 3 O s... Metal is exposed to the ferric oxide. first we aluminum reacts with oxygen to form aluminum oxide ta make balanced... ) reaction of aluminium oxide, what is the following datasets has the highest standard error of the?... Per mole { g } \$ of chromium ( III ) oxide alumina! Homework help online, user contributions licensed under cc by-sa 4.0 163 g of ammonia, NH3 to. For Difference Wiki desiccant or catalyst for organic reactions ) forms when aluminum metal is exposed to fume was greater. High temperatures aluminum reacts with oxygen to form aluminum oxide give pure alum… 02:38 + 3H2O ( above 575° C ) replacement )! Gas at STP O2? B mixture '' the aluminum reacts with oxygen to form aluminum oxide metal from corrosion by. Oxygen react to form 6.67 of aluminum oxide as shown below hard, whitish-colored surface skin to fume much. Material ) called bauxite percent yield if the actual yield of aluminum ….! In Part a Use this data to calculate the molar masses by looking up the atomic weights of all elements... The aluminum reacts with 39.4 g of Al2O3 - coefficients = moles 14.0 of! Still combustion between the Al is reacting with oxygen to form calcium hydroxide and hydrogen gas long as you at! The given vector more reactive than iron, it displaces iron from (! Disturbed, this oxide film gets mixed inside the melt is damaged, the aluminium retention the... = 82.49 g of aluminum oxide for reaction will have a charge of +3 have to solve all. With elemental oxygen at high temperatures to give aluminum oxide, which a! Conc., hot ) = 2AlCl3 + 3H2O ( above 575° C ) they. Form copper ( II ) oxide 2 CO2 -- like hydrocarbon combustion ) see here that we let 's multiply. Answer: aluminum metal reacts with oxygen if 15.0 g of aluminum with oxygen gas O! As you have 1.0 mole of O2 in a reaction, 34.0 g of in! 4Al ( s ) answer aluminum reacts with oxygen to form aluminum oxide here: 4Al+3O2→2Al2O3 a sodium carbonate of exchanged! Product in this laboratory, you will make both quantitative and qualitative observations about the.. ) sulfate to produce aluminum oxide. 1.18 mol back to their own personal blogs social... → 2Fe + Al 2 O 3 ; aluminum and oxygen react form... Atomic weights of all involved elements see answers shawnnettles45 shawnnettles45 the answer is c. replacement are needed react. You react aluminum metal is exposed to powder aluminum reacts with oxygen to form aluminum oxide a charge of -2 and the aluminum oxide relative to equation... Present when the reaction of aluminum exist in huge quantities in Earth 's crust as an acid in. )... ( 9 pts ) 5.3 considered, that aluminum does not with. Give aluminum oxide will form + iron ( III ) oxide 2 117.65 g of oxygen gas O2. React aluminum metal is exposed to the oxygen in the industrial production of nitric acid is the number of exchanged! Bromide ( used in the manufacture of form potassium hydroxide FREE homework help online, user contributions licensed under by-sa..., liquid aluminium reacts with oxygen to form aluminum oxide is 1.90 mol we our! |
# Mass loss in relation to radius of a star
1. Nov 15, 2013
### Markus0003000
One of the most popular mass loss equations of a star, developed by D. Reimers, is given by:
dM/dt = -(4x10^-13) * η(L/(gR)) solar masses per year
Where η is a free parameter close to unity and L, g, and R are the luminosity of the star, surface gravity of the star, and the radius of the star, respectively.
What I am curious about is that when R increases, the amount of mass lost decreases. This seems counterintuitive, as when the radius increases, the density will decrease and the pull of gravitational energy will decrease so you would expect there to be greater mass loss.
Is there a qualitative reason why the star loses more mass as the radius decreases?
Last edited: Nov 15, 2013
2. Nov 15, 2013
### D H
Staff Emeritus
In a sense it doesn't. Look at Reimers' law more closely:
$$\frac{dM}{dt} = -4\cdot10^{-13} \, \eta \frac {L_{\ast}} {gR_{\ast}}$$
That g in the denominator is surface gravity relative to that of the Sun: $g=M_{\ast}/R_{\ast}^2$. Thus another way to write Reimers' law is
$$\frac{dM}{dt} = -4\cdot10^{-13} \, \eta \frac {L_{\ast}R_{\ast}} {M_{\ast}}$$ |
# Short comment on Doolette’s “GRADIENT FACTORS IN A POST-DEEP STOPS WORLD”
David Doolette has written a blog post “GRADIENT FACTORS IN A POST-DEEP STOPS WORLD” on the GUE blog that has caught some attention on the inter webs. He summarises the state of affairs about the current sentiment against too excessive deep stops, a trend that got momentum in the wake of the NEDU study.
It’s a nice summary but for those who followed the debates about this there are not too many news except in the last paragraph where he describes his personal take away: He says that newer results indicate that the increase in allowed over-pressures with depth (in the Bühlmann model expressed by the fact that the b-parameters are smaller than one and thus the M-value grows faster than the ambient pressure) is doubtful and that in fact the algorithms used by the Navy have the allowed over-pressure independent of depth, i.e. an effective b=1 for all compartments.
As with standard dive computers you are not at liberty to change the a and b parameters of your deco model. Rather the tuneable parameters are usually the gradient factors. So he proposes to set
$$GFlow = b GFhigh$$
citing 0.83 being an average b parameter amongst tissues and thus justifying his personal choice of gradient factors to be 70/85 saying “Although the algebra is not exact, this roughly counteracts the slope of the “b” values.”
This caught my attention as many divers lack a good motivation for some specific setting of their gradient factors (besides quoting that “they always felt good with these settings” and thus ignoring that this is as best subjective anecdotal evidence for a statement the would need a much higher number of tests under controlled circumstances to have any significance.
We could add this as a feature to Subsurface where you could turn on “Doolette’s rule” and then your GFlow would automatically be set as 0.83 of your GFhigh (we could even use the actual b value for the compartment and make the GFlow depend on the compartment). Of course, since we have access to the source code, we could directly set the all the b’s to 1 by hand and get rid of the gradient factors in that mode entirely. That might involve also modifying the a’s (as the now depth independent M-value) which made me worry that I might be pulling numbers out of thin air which when implemented might cause other divers to actually use them in their diving while being without any empirical testing, something I wouldn’t like to be responsible for.
So, rather being the theoretical diver, I fired off mathematica to better understand the “non-exact algebra” and to see if it could be improved. Turns out “non-exact” is somewhat of a euphemism.
Of course, how bad this approximation is depends on the depth at which GFlow applies (i.e. the first stop depth) but from the plot it is clear that a constant M-value corresponds to smaller and smaller GFlow (as a fraction between the red and green line).
As you can see, for somewhat greater fist stop depths (corresponding to deeper and/or longer dives), to keep a depth independent M-value one needs such a small GFlow that I would consider this to be very well in deep stop territory, opposed to what the blog post started out with.
If you want to play around with the numbers yourself, here is the mathematica notebook.
## 3 thoughts on “Short comment on Doolette’s “GRADIENT FACTORS IN A POST-DEEP STOPS WORLD””
1. dmaziuk says:
I don’t think you want to look at “a” M-value line for this: you should plot all 16 (17). Because the slopes are different, I don’t believe the single green line gives you the full picture. Much of the rationale behind GF Lo in the first place is that the slope is much steeper for fast TCs, what you’re plotting is losing that dimension of it all.
1. robert says:
I agree that this is missing. My plot shows only the tissue for which the assumption 1/b=0.83 is best. The situation worsens for the other tissues. |
# The End of Faith in Quadratics
Algebra Level 5
If the minimum value of the quadratic expression $$ax^2+bx+c$$ with real coefficients is 6 then find the sum of minimum and maximum values of the expression $S=\frac{x^2+y^2}{ay^2+bxy+cx^2}$ when $$\frac{c}{a}=53.$$
× |
# How do you find the axis of symmetry, graph and find the maximum or minimum value of the function y-2=2(x-3)^2?
Mar 19, 2017
see explanation.
#### Explanation:
$\text{ Express " y-2=2(x-3)^2" in the form}$
$\Rightarrow y = 2 {\left(x - 3\right)}^{2} + 2$
The equation of a parabola in $\textcolor{b l u e}{\text{vertex form}}$ is.
$\textcolor{red}{\overline{\underline{| \textcolor{w h i t e}{\frac{2}{2}} \textcolor{b l a c k}{y = a {\left(x - h\right)}^{2} + k} \textcolor{w h i t e}{\frac{2}{2}} |}}}$
where (h ,k) are the coordinates of the vertex and a is a constant.
$y = 2 {\left(x - 3\right)}^{2} + 2 \text{ is in this form}$
$\text{here " h=3" and } k = 2$
$\Rightarrow \text{ vertex } = \left(3 , 2\right)$
To determine min/max consider the value of a
• a>0rArr" minimum " uuu
• a<0rArr" maximum " nnn
$\text{here " a=2rArr" minimum}$
The axis of symmetry passes through the vertex and is vertical with equation $\textcolor{b l u e}{x = 3}$
The minimum value at the vertex is y = 2
$\textcolor{b l u e}{\text{Intercepts}}$
$x = 0 \to y = 2 {\left(- 3\right)}^{2} + 2 = 20 \leftarrow \textcolor{red}{\text{ y-intercept}}$
$y = 0 \to 2 {\left(x - 3\right)}^{2} = - 2$
$\Rightarrow {\left(x - 3\right)}^{2} = - 1$ which has no real solutions and therefore graph does not cross the x-axis.
graph{(y-2x^2+12x-20)(y-1000x+3000)=0 [-40, 40, -20, 20]} |
# Using complex numbers in stem command in Matplotlib
I have two numpy arrays `a` (having integer values) and `b` (having complex numbers). Now when I use `stem(a,b)`, I get the following error:
``````C:\Python27\lib\site-packages\numpy\core\numeric.py:235:
ComplexWarning: Casting complex values to real discards the imaginary
part return array(a, dtype, copy=False, order=order)
Out[5]: <Container object of 3 artists>
``````
What do you want it to do? `stem` plot plots vertical lines at each horizontal `a` location from the baseline to a height `b`. But here, `b` is a complex number -- you need it to be a real-valued quantity. Perhaps you want the absolute value, `np.abs(b)`? Or the real part, `np.real(b)`? Perhaps two stem plots, `stem(a, np.real(b)); stem(a, np.imag(b))`? |
# Does closed set contain only boundary points or interior points also?
I am reading this.
It says
Intuitively, an open set is a solid region minus its boundary. If we include the boundary, we get a closed set, which formally is defined as the complement of an open set.
Now, question is if a closed set includes interior points also then how can it be complement?
I know basic set theory. Enlighten me! :)
Thanks!
-
The closed set you get by including the boundary is not the complement of the previously mentioned open set. It is the complement of a different open set, namely the "outside" of the solid region. – Rahul Dec 31 '10 at 7:09
Helpful comment thanks! – Pratik Deoghare Dec 31 '10 at 7:20
## 1 Answer
A Closed set is by definition a set whose complement is an open set. Note that this also includes the possibility that a set is both open and closed, for example in a space with two connected components, each component is both open and closed.
Now, in what you have highlighted the complement of the solid region (inclusive of boundary) i.e. the whole space without the region, is open. Which, means that the solid region (inclusive of boundary) is closed.
-
How strange... when I was typing this a few seconds ago there were no replies and now there are two, posted more than 5 mins ago, is this a fault of my browser or some lag problems with the site? – Dactyl Dec 31 '10 at 7:17
I'm surprised you didn't get an alert about my answer being posted. I deleted it anyway, partly because it seemed redundant. – Jonas Meyer Dec 31 '10 at 7:21
I don't understand. There was some answer earlier which I think I understood but its deleted. I don't know any topology(connected components?). – Pratik Deoghare Dec 31 '10 at 7:24
Yeah I didn't get an alert, but that is most probably because the answers were already posted while I was typing mine but the browser was not showing them. Perhaps this has something to do with chrome. – Dactyl Dec 31 '10 at 7:30
@TheMachineCharmer: A space is called disconnected if we can find two disjoint open sets that together form the whole space. In which case the complement of one is the other, therefore, each is both open and closed. – Dactyl Dec 31 '10 at 7:34 |
# A calculus problem by Daniel Thompson
Calculus Level pending
Let $$F$$ be defined by $F(x)=\int_{0}^{x^3}\left (\int_{0}^{y^2} \left (\int_{0}^{z} x^3y^2zt \; \mathrm{d}t \right )\mathrm{d}z \right )\mathrm{d}y$
$$F' \left (\sqrt[7]{2}\right)=\frac{a}{b}$$ for relatively prime positive integers $$a$$ and $$b$$. What is the value of $$a+b$$?
× |
# How do runway declared distances affect my takeoff distance?
See these questions.
What is balanced field length?
What are runway declared distances?
My multi-engine jet airplane uses a balanced field length concept. Most airports served by airliners have declared distances.
Obviously, my takeoff distance can't be longer than the runway length. Does it have to be lower than all the declared distances? TODA? TORA? ASDA? LDA?
• Actually, your takeoff distance CAN be longer than the runway if you use an overrun or a clearway. Takeoff distance includes the distance required to climb to 35 feet above the ground in a jet. Oct 25 '15 at 3:08
• @Lnafziger If you are using unbalance field length calculations, I would agree. The question is about balance field length calculations. Oct 25 '15 at 20:57
Actually, the take off distance given in runways will be greater than the runway distance as it will include the distance to overcome a predetermined obstacle.
Source: tc.gc.ca
The balanced field length is dependent on the aircraft configuration among other things and is usually available in AFM. The airfield lengths, on the other hand are made available so that operators can decide which aircraft to operate based on the runway lengths available.
In general, the ASDA should give you the available distance for taking off as it gives the required field length available in case you decide to abort the takeoff for whatever the reasons.
• An explanation of each of the terms in the graphic would probably help the people that don't already understand this a lot. Oct 25 '15 at 13:07
• @aeroalias I would agree that the takeoff distance can be greater than the runway for unbalanced field length calculations. If that is the case which declared distance must you be lower than? Oct 25 '15 at 20:58
• I wouldn't say this is really an answer... Oct 26 '15 at 22:58
Balanced field length calculations by definition are the greater of three factors. As Lnafziger noted in his answer to the linked questions above these three factors are:
1. 115% All engine takeoff distance
2. Accelerate-stop distance
3. Accelerate-go distance
The manufacturer will choose a $V_{1}$ speed that will all the accelerate-stop and accelerate-go distances to be as close as possible. This reduced the takeoff distance to the lowest possible since the takeoff distance is the greater of those three factors.
This means that our takeoff distance must be less than certain declared distances. See the linked questions for a discussion on declared distances.
• The TORA value may or may not equal the runway length but it will never be greater than that length.
• The TODA is the TORA plus a clearway.
• The ASDA is the TORA plus a stopway (it can be lower than the TORA value).
Using common sense, our accelerate-go distance cannot be greater than the TODA value and our accelerate-stop distance cannot be greater than the ASDA value.
If we take this one step further, since we have one number for takeoff and we don't know if that number is limited by the accelerate-stop or accelerate-go performance we cannot use the TODA value for the accelerate-go distance. We must use the TORA value.
Hence, for multi-engine airplanes with balanced field length calculations we are limited to the lower of the TORA or ASDA values for our takeoff distance.
As Aeroalias and Lnafziger pointed out, for unbalanced field length calculations we can indeed use the TODA value for the accelerate-go distance and the ASDA value for the accelerate-stop distance.
## References:
The NBAA has some great videos on the subject. Here is the video on YouTube.
AOPA also has information on the subject |
Vertex embeddings of quantum groups via quivers - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-23T02:20:33Z http://mathoverflow.net/feeds/question/100747 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/100747/vertex-embeddings-of-quantum-groups-via-quivers Vertex embeddings of quantum groups via quivers Peter McNamara 2012-06-27T07:15:05Z 2012-06-27T07:15:05Z <p>Let U<sub>q</sub> be a quantised enveloping algebra of type affine ADE (untwisted). By the loop presentation of U<sub>q</sub>, we see that for each vertex of the finite Dynkin diagram, there is an inclusion U<sub>q</sub>($\hat{sl_2}$)→U<sub>q</sub>.</p> <p>Now let us restrict to the positive part U<sup>+</sup> of U<sub>q</sub>. It is well known how to construct U<sup>+</sup> from the representations of the affine Dynkin diagram (given some orientation so that it becomes a quiver). This proceeds either by the Hall algebra approach or Lusztig's geometric approach with perverse sheaves.</p> <p>My question is: Can we see the appearance of the vertex embeddings discussed in the first paragraph via the quiver perspective?</p> <p>The obvious approach of trying to choose an orientation of our affine quiver so that it has a full subcategory (with objects of the correct dimensions) equivalent to the category of representations of the Kronecker quiver doesn't seem to work (eg look at E7 and the vertex of valence 1 closest to the central node).</p> |
# Representations of $SL(2)$ in characteristic 2
In characteristic zero one can use the Clebsch-Gordan rule to decompose tensor products of SL(2)-modules. In characteristic $p$ things are more complicated.
I am interested in the special case $S^dV\otimes V$ (where $V$ is the 2-dimesional standard representation) for fields $k$ of characteristic $p>0$. In fact, I mainly want to know about $d=3$.
If one computes the Clebsch-Gordan isomorphism explicitly, one can see that the denominator is $(d+1)$. So there will be a problem for $p|(d+1)$.
What is known in this case? I'd be happy just to know the case $d=3$, especially an explicit composition series and whether one still has some nice direct sum decompositions into representations of smaller dimension (I realize that these will no longer be simple modules as in characteristic 0). I'd also like to know references about how the invariant theory of SL(2) works in positive characteristic.
• Could you be a bit more specific about what you mean by decompose? The modules will not decompose as a direct sum of simples in positive characteristic. Do you want the composition factors? Or a specific composition series? A good way to start is probably to do the usual decomposition to get a good filtration of the tensor product, and then use that we actually do know the characters of the modules in such a good filtration in terms of the characters of the simple modules, since this is $SL_2$. Feb 25 '14 at 8:00
• And what do you mean by `the invariant theory'? Just the invariants in this module? (They are the same as predicted by Clebsch-Gordan.) Feb 25 '14 at 8:21
• @TobiasKildetoft: Sorry for the ambiguity (I have edited the question to remove it). And thank you for your answer. I want to be explicit as possible, so I want a composition series as you give below. I would also like to know whether $S^3V\otimes V\cong H^0(4)\oplus H^0(2)$. Feb 25 '14 at 15:16
• @WilberdvanderKallen: Thank you for your comment. I am naive about invariant theory in positive characteristic. How much carries over from the classical case? I'd be grateful of any pointers/references. Feb 25 '14 at 15:25
• After some more thought I realized that this module is indecomposable. I am on a tablet now, so I will elaborate later. Feb 25 '14 at 15:28
$\newcommand{\Hom}{\operatorname{Hom}}$Here is an elaboration of my comment with what happens in characteristic $2$ when $d=3$:
The usual decomposition rule gives us a filtration of $S^dV\otimes V$ with two factors: $H^0(4)$ and $H^0(2)$ (I use the notation from Jantzen's Representations of Algebraic Groups and write all weights in terms of the fundamental weight).
Applying the Jantzen sum formula, we see that $H^0(4)$ has a composition series consisting of $L(4)$, $L(2)$ and $L(0)$.
We also see that $H^0(2)$ has a composition series consisting of $L(2)$ and $L(0)$.
All this gives us a composition series $$0 \subseteq M_1 \subseteq M_2 \subseteq M_3 \subseteq M_4 \subseteq M_5 = S^3V\otimes V$$ where $M_1\cong L(2)$, $M_2/M_1\cong L(0)$, $M_3/M_2\cong L(4)$ and $\{M_4/M_3,M_5/M_4\}\cong \{L(2),L(0)\}$. Which order the two top factors come in is less obvious (I will need to think a bit about it), and whether we actually have $S^3V\otimes V\cong H^0(4)\oplus H^0(2)$ I will also need to think a bit more to figure out.
Added: So, after some further thought, we can actually say a bit more.
First note that as mentioned by Jim Humphreys, we have $S^3V\otimes V\cong L(1)\otimes L(1)\otimes L(1)^{(1)}$ which means that it is self-dual. In particular, we see that our composition series can be chosen to be "symmetric", so we get $M_4/M_3\cong L(0)$ and $M_5/M_4\cong L(2)$ (it is also good to notice that we actually have $M_2\cong H^0(2)$ and $M_5/M_2\cong H^0(4)$ as these are sometimes easier to work with).
We can also show that $S^3V\otimes V$ is indecomposable. In fact, we have $\operatorname{soc}_{SL_2}(S^3V\otimes V) = L(2)$.
To see this, we need a bit more machinery (it might be possible to do this in a more elementary way). Let $G = SL_2$ and let $G_1$ be the first Frobenius kernel of $G$. We let $\lambda = \lambda_0 + p\lambda_1$ be a dominant weight with $\lambda_0 < p$ and use that $L(\lambda) \cong L(\lambda_0)\otimes L(\lambda_1)^{(1)}$. Now we note that $$\Hom_G(L(\lambda),L(1)\otimes L(1)\otimes L(1)^{(1)})$$ $$\cong \Hom_{G/G_1}(L(\lambda_1)^{(1)},\Hom_{G_1}(L(\lambda_0),L(1)\otimes L(1))\otimes L(1)^{(1)})$$ so it is sufficient to show that $\operatorname{soc}_{G_1}(L(1)\otimes L(1)) = L(0)$.
To see this we further note that it will suffice to show that $\operatorname{soc}_G(L(1)\otimes L(1)) = L(0)$ since the $G_1$-socle is a $G$-submodule. But this final part is a simple calculation, as we clearly just need to check that neither $L(1)$ nor $L(2)$ are submodules. That $L(1)$ is not a submodule is clear by parity (all highest weights of composition factors in $L(1)\otimes L(1)$ must be even), and that $L(2)$ is not a submodule is seen by noting that $$\Hom_G(L(2),L(1)\otimes L(1))\cong \Hom_G(L(1),L(1)\otimes L(2))\cong \Hom_G(L(1),L(3))$$ and $L(3)$ is simple (it is the 2'nd Steinberg module as also mentioned by Jim Humphreys).
A few final notes: The above actually shows that as a $G_1$-module, $L(1)\otimes L(1)$ is the injective hull of the trivial module. This is a general fact about $SL_2$ in characteristic $2$, ie, that for all $r$, $St_r\otimes St_r$ is the injective hull of the trivial module as a $G_r$-module (this does not generalize to other groups, nor to other primes).
Also, the conclusion about the module $S^3V\otimes V$ is in fact that it is indecomposable tilting (in the notation from Jantzen, it is denoted $T(4)$).
• I can't follow your notation, which is nonstandard, so I've written down my own version. Feb 25 '14 at 18:33
• @JimHumphreys Which part of the notation? Feb 25 '14 at 19:13
• Sorry, I got confused by the later part of your answer (probably expecting to see some twisted tensor proaducts). The basic notation is not a problem. Feb 25 '14 at 21:01
• One other comment: your expanded answer is helpful, but there's no need to invoke Jantzen's sum formula here to find composition factors (that much is very easy). for the module structure of such tensor products, methods of Doty and Henke are systematic. Feb 28 '14 at 15:57
• @JimHumphreys I had a feeling that would be overkill, but my knowledge of the theory for $SL_2$ is a bit lacking (something I will need to fix at some point). Do you know any good reference for the $SL_2$ specifics? Feb 28 '14 at 18:34
There is unfortunately no "formula" for tensor products in prime characteristic. Instead you can derive a list of composition factors $L(\lambda)$ (with multiplicity) by recursion. When $p=2$ there are only two simple modules with restricted highest weights (abbreviated by non-negative integers), namely the trivial module $L(0)$ of dimension 1 and the natural module $L(1)$ of dimension 2. After this you need to rely on Steinberg's twisted tensor product theorem relative to a $p$-adic expansion of the highest weight. For instance, $L(2) \cong L(1)^{(1)}$, the first Frobenius twist of the natural module (still having dimension 2).
In your specific example, the recursion is easy to carry out: peel off the composition factor of highest weight and see what weights remain. Here yuu are looking at the tensor product of the natural module $L(1)^{(1)}$ with the "induced" module $H^0(3)$ of dimension 4 as in Jnntzen's book (polynomials in two variables of homogeneous degree 3) which is actually simple when $p=2$, isomorphic to $L(1) \otimes L(1)^{(1)}$. (These factors are the respective Steinberg modules for the first and second Frobenius kernels.)
From the recursion one arrives at a list of composition factors (having total dimension 8): $L(1)^{(2)}, \: L(1)^{(1)} \text{ twice }, L(0)\: \text{ twice}.$
Here at least there is a recursive method, but getting the precise module structure can be quite tricky. In this special case, you might take advantage of the fact that you are tensoring with a projective module for a certain Frobenius kernel. But in general it's complicated even in rank 1.
ADDED: For some recent work on decomposition of tensor products into indecomposables, see for example a paper by Doty and Henke, Decomposition of tensor products of modular irreducibles for SL$_2$. Q. J. Math. 56 (2005), no. 2, 189-207 (preprint here). This involves tilting modules and their Frobenius twists.
FURTHER COMMENTS: I intended to mention something about the extra question raised by Lloyd on invariant theory in prime characteristic. There is a classical result describing the ring of invariants for the general linear groups over a finite field, or for $\mathrm{SL}_n(\mathbb{F}_q)$, which goes back to L.E. Dickson in 1911. This has been reworked a number of times, for instance in an article by R. Steinberg, On Dickson’s theorem on invariants, J. Fac. Sci. Univ. Tokyo Sect. IA Math. 34 (1987), no. 3, 699–707. (It's especially fitting to recall Steinberg's diverse contributions, since he died very recently on his 92nd birthday.)
In the special case dicussed here, Jantzen (unpublished) showed how to recover Dickson's theorem from the well-understood representation theory of the groups. This led me to investigate the general case in a similar spirit, though it remains to be seen whether we will know enough about representation theory to recover the full theorem this way. Anyway, my short paper contains an assortment of references to the literature (including work of Donkin on tilting modules and a paper by Wilkerson motivated by algebraic topology): Another look at Dickson's invariants for finite linear groups, Comm. Algebra 22 (1994), no. 12, 4773-4779.
• Thank you very much for the reference, which does indeed answer the decomposition part of my question. Feb 28 '14 at 20:03 |
Since I’m bored to hell in programming class, my teacher, who is also my physics teacher, gave me the task to find out the trailing zeros of and then for . I then began searching for patterns in factorials starting from to 10!. Interestingly, has one trailing zero and has two trailing zeros. By induction you could determine the trailing zeros of factorials to be expressed by , where is the floor function.
But if you go down the pattern a bit further you find that which has six zeros! The heck? This contradicts our idea. Let’s investigate a bit further.
In a factorial, the amount of fives getting multiplied adds a zero. For example . If we take a look we see that there are two 5s hidden here, and . But contains the factor which is ! So in that case, we find that there are six fives, because .
With this additional information we can determine the number of trailing zeros of to be
But we are not finished yet, there is a pattern here and we should be able to define it for any .
Given a number , the trailing zeros of is the sum of divided by all of its prime factors of 5.
where has to be chosen such that .
For example let’s calculate the trailing zeros of . We find leaving us with . |
# 7. In a magic square each row, column and diagonal have the same sum. Check if the following is a magic square. (i) 5 -1 -4 -5 -2 7 0 3 -3
H Harsh Kankaria
Taking Rows-
$5-1-4 = 0$
$-5-2+7 = 0$
$0+3-3 = 0$
Taking Columns-
$5-5+0 = 0$
$-1-1+3 = 0$
$-4+7-3 = 0$
Taking Diagonals-
$-4-2+0 = -6$
$5-2-3 = 0$
As the sum of one of the diagonals is not equal to 0, it is not a magic square.
Exams
Articles
Questions |
The combined effect of viscosity, surface tension, and the compressibility on the nonlinear growth rate of Rayleigh-Taylor (RT) instability has been investigated. For the incompressible case, it is seen that both viscosity and surface tension have a retarding effect on RT bubble growth for the interface perturbation wave number having a value less than three times of a critical value ($kc=(ρh-ρl)g/T$, $T$ is the surface tension). For the value of wave number greater than three times of the critical value, the RT induced unstable interface is stabilized through damped nonlinear oscillation. In the absence of surface tension and viscosity, the compressibility has both a stabilizing and destabilizing effect on RTI bubble growth. The presence of surface tension and viscosity reduces the growth rate. Above a certain wave number, the perturbed interface exhibits damped oscillation. The damping factor increases with increasing kinematic viscosity of the heavier fluid and the saturation value of the damped oscillation depends on the surface tension of the perturbed fluid interface and interface perturbation wave number. An approximate expression for asymptotic bubble velocity considering only the lighter fluid as a compressible one is presented here. The numerical results describing the dynamics of the bubble are represented in diagrams.
References
References
1.
Rayleigh
,
L.
,
1883
, “
Investigation of the Character of the Equilibrium of an Incompressible Heavy Fluid of Variable Density
,”
Proc. London Math. Soc.
,
14
, pp.
170
177
.
2.
Richtmyer
,
R. D.
,
1960
, “
Taylor Instability in a Shock Acceleration of Compressible Fluids
,”
Commun. Pure Appl. Math.
,
13
, pp.
297
319
.10.1002/cpa.3160130207
3.
Bernstein
,
I. B.
, and
Book
,
D. L.
,
1983
, “
Effect of Compressibility on Rayleigh-Taylor Instability
,”
Phys. Fluids
,
26
, p.
453
.10.1063/1.864158
4.
Yang
,
Y.
, and
Zhang
,
Q.
,
1993
, “
General Properties of a Multilayer Fluid System
,”
Phys. Fluids A
,
5
, p.
1167
.10.1063/1.858602
5.
Sharp
,
D. H.
,
1984
, “
An Overview of Rayleigh-Taylor Instability
,”
Physica D
,
12
, pp.
3
10
.10.1016/0167-2789(84)90510-4
6.
Plesset
,
M. S.
, and
Hsieh
,
D.
,
1964
, “
General Analysis of the Stability of Superposed Fluids
,”
Phys. Fluids
,
7
, p.
1099
.10.1063/1.1711348
7.
Baker
,
L.
,
1983
, “
Compressible Rayleigh-Taylor Instability
,”
Phys. Fluids
,
26
, p.
950
.10.1063/1.864245
8.
Chandrasekhar
,
S.
,
1981
,
Hydrodynamic and Hydromagnetic Stability
,
Dover Publications
,
New York
, pp.
428
452
.
9.
Chhajlani
,
R. K.
, and
Vaghela
,
D. S.
,
1989
, “
Rayleigh-Taylor Instability of Ionized Viscous Fluids With FLR-Corrections and Surface-Tension
,”
Astrophys. Space Sci.
,
155
, pp.
257
269
.10.1007/BF00643863
10.
Mikaelian
,
K. O.
,
1993
, “
Effect of Viscosity on Rayleigh-Taylor and Richtmyer-Meshkov Instability
,”
Phys. Rev. E
,
47
, p.
375
.10.1103/PhysRevE.47.375
11.
Bhatia
,
P. K.
,
1974
, “
Rayleigh-Taylor Instability of a Viscous Compressible Plasma of Variable Density
,”
Astrophys. Space Sci.
,
26
, pp.
319
325
.10.1007/BF00645614
12.
Carlès
,
P.
, and
Popinet
,
S.
,
2002
, “
The Effect of Viscosity, Surface Tension and Non-Linearity on Richtmyer Meshkov Instability
,”
Eur. J. Mech. B/Fluids
,
21
, pp.
511
526
.10.1016/S0997-7546(02)01199-8
13.
Sohn
,
S. I.
,
2009
, “
Effects of Surface Tension and Viscosity on the Growth Rates of Rayleigh-Taylor and Richtmyer-Meshkov Instabilities
,”
Phys. Rev. E
,
80
, p.
055302
.10.1103/PhysRevE.80.055302
14.
Roy
,
S.
,
Gupta
,
M. R.
,
Khan
,
M.
,
Pant
,
H. C.
, and
Srivastava
,
M. K.
,
2010
, “
Effect of Surface Tension on the Rayleigh-Taylor and Richtmyer-Meshkov Instability Induced Nonlinear Structure at Two Fluid Interface and Their Stabilization
,”
J. Phys. Conf. Ser.
,
208
, p.
012083
.10.1088/1742-6596/208/1/012083
15.
Layzer
,
D.
,
1955
, “
On the Instability of Superposed Fluids in a Gravitational Field
,”
Astrophys. J.
,
122
, pp.
1
12
.10.1086/146048
16.
Goncharav
,
V. N.
,
2002
, “
Analytical Model of Nonlinear, Single-Mode, Classical Rayleigh-Taylor Instability at Arbitrary Atwood Numbers
,”
Phys. Rev. Lett.
,
88
, p.
134502
.10.1103/PhysRevLett.88.134502
17.
Hecht
,
J.
,
Alon
,
U.
, and
Shvarts
,
D.
,
1994
, “
Potential Flow Models of Rayleigh-Taylor and Richtmyer-Meshkov Bubble Fronts
,”
Phys. Fluids
,
6
, p.
4019
.10.1063/1.868391
18.
Drake
,
R. P.
,
2006
,
High Energy Density Physics
,
Springer
,
Berlin
, p.
175
.
19.
Gupta
,
M. R.
,
Roy
,
S.
,
Khan
,
M.
,
Pant
,
H. C.
,
Sarkar
,
S.
, and
Srivastava
,
M. K.
,
2009
, “
Effect of Compressibility on the Rayleigh-Taylor and Richtmyer-Meshkov Instability Induced Nonlinear Structure at Two Fluid Interface
,”
Phys. Plasmas
,
16
, p.
032303
.10.1063/1.3074789
20.
Zhang
,
Q.
,
1998
, “
Analytical Solutions of Layzer-Type Approach to Unstable Interfacial Fluid Mixing
,”
Phys. Rev. Lett.
,
81
, pp.
3391
3394
.10.1103/PhysRevLett.81.3391
You do not currently have access to this content. |
# Should I scale my variables when using a log-log regression?
I am running a log-log OLS regression in Stata.
My dependent variable is the log of house price, and my regressors of interest are, the log of distance to employment, and the log of jobs created.
My coefficients are very small but statistically significant (e.g. increasing employment by 1% increases house prices by 0.00013%).
I was told that the log-log specification can come up with weirdly small coefficients, and so I should scale my variables by ten to resolve this issue.
My regressions take up to 24 hours to run because of the number of observations, so before I waste a day, I wanted to see if this scaling logic is correct?
Scaling a logged variable will not change the slope coefficients, but it will change the constant. To see this suppose we scale $x_1$ by a constant $c$, \begin{align} \log(y) &= \beta_0 + \beta_1\log(c x_1) + \varepsilon \\ &= \beta_0 + \beta_1 \log(c) + \beta_1\log(x_1) + \varepsilon \\ &= \alpha_0 + \beta_1\log(x_1) + \varepsilon \end{align} where $\alpha_0 = \beta_0 + \beta_1 \log(c)$ is the new intercept term. |
Two blocks, a string, and a spring.
1. Jul 24, 2012
AbigailM
1. The problem statement, all variables and given/known data
Two blocks A and B with respective masses $m_{A}$ and $m_{B}$ with respective masses $m_{A}$ and $m_{B}$ are connected via a string. Block B is on a frictionless table, and block A is hanging at a vertical distance h from a spring with spring constant k that is at its equilibrium position. The blocks are initially at rest. Find the velocity of A and B when the spring is compressed by an amount $\delta y =m_{A}g/k$. Determine the maximum compression $\delta y_{max} of the spring in terms of [itex]m_{A}, m_{B}$, g and k. (Hint: what happens to the motion of the blocks when $\delta y-m_{A}g/k$?)
2. Relevant equations
$\delta y=m_{A}g/k$ (Eq 1)
$(m_{A}+m_{B})gh=\frac{1}{2}(m_{A}+m_{B})v^{2}$ (Eq2)
$\frac{1}{2}m_{A}v^{2}=m_{A}g\delta y - \frac{1}{2}k\delta y^{2}$ (Eq 3)
3. The attempt at a solution
From Eq2 $v_{B}=\sqrt{2gh}$
Solve Eq3 for v and substitute in Eq1. Then we can subtract our new equation from Eq2:
$v_{A}=\sqrt{2gh}-\sqrt{m_{A}/k}g$
To find $\delta y_{max}$ substitute $v=\sqrt{2gh}$ into Eq3:
$m_{A}gh=m_{A}g\delta y - \frac{1}{2}k\delta y^{2}$
Now solve for $\delta y$:
$\delta y=\frac{m_{A}g-\sqrt{m_{A}^{2}g^{2}-2km_{A}g}}{k}$
Does this look correct? Thanks for the help
2. Jul 25, 2012
PhanthomJay
You seem to be breaking up the motion into stages (you don't necessarily have to) and coming up with incorrect equations. You must consider the energies of both blocks when applying the conservation of energy equations. The block on the table still has PE at any stage of the motion, and it still has KE as block A hits the spring. |
# math : AP Statistics Quiz - Probability Quiz
Quiz
*Theme/Title: AP Statistics Quiz - Probability
* Description/Instructions
The AP exam has not historically tested computation of traditional probability highly, but the concepts of mutually exclusive (disjoint) events, independent events, and conditional probability will definitely be included. The formulas for the Addition Rule and the Multiplication Rule are given on the AP formula sheet; it is up to you to know how to use them for a given question. Probability questions will include data summarized in probability distributions and 2-way tables. You also need to be able to summarize data into a Venn diagram or a tree diagram, calculating probabilities from them.
Group: AP Statistics AP Statistics Quizzes
Topic: math |
# Why it’s harder to discover valuable knowledge in tree-structured note-taking
Recently, Cotoami project, the successor of Piggydb, released a new feature called “Linking Phrases”.
As you can see in the screenshot below, it allows annotating a connection when you feel a need for some explanation of it.
I thought about this enhancement when I was working on Piggydb some years ago but suspected it would just complicate things without adding much value. Piggydb aimed at becoming a simple note-taking tool, not a modeling tool. After all, you can express a labeled relationship by adding a node between the two.
However, I came up with an idea of “Horizontal and Vertical Relationships” recently and thought it would become one of the significant features in Cotoami.
The term “Linking Phrases” is borrowed from Concept Maps, which I mentioned in the article Wiki, Mind maps, Concept maps, and Piggydb before.
When I first saw concept maps, I thought that is where Piggydb’s knowledge creation process should lead.
Since it focused on the structure or knowledge-creation-process side of Concept Maps, Piggydb hasn’t had an update to support writing Concept Maps so far, but now, Cotoami supports it as a result of this enhancement.
The above concept map explains why we have seasons (the original concept map is presented in the article at Concept Maps official website: http://cmap.ihmc.us/docs/theory-of-concept-maps). If you are interested in how this concept map was created with Cotoami, here is a youtube video to demonstrate the process:
Concept mapping is an excellent way to demonstrate this feature, but a significant difference is that Cotoami’s linking phrases are optional. That means you should avoid annotating connections unless the relationships are obscure to you. Those unclear relationships are possibly valuable knowledge for you (since you didn’t know them well before), and should be highlighted in your knowledge-base. I call them Horizontal Relationships.
On the other hand, Vertical Relationships generally means inclusive or deductive relationships like “has”, “results in”, or “is determined by” appeared in the concept map example above. Most connections would fall into this category. Simple arrow lines would be enough to express these relationships, and you wouldn’t feel the need for annotations in most cases.
Whether a connection is horizontal or vertical depends on you or your group. For example, if you are a table tennis fan, the connection below should be obvious:
[Table tennis] ----> [Jan-Ove Waldner]
But if you are not, there’s a need for some explanation:
[Table tennis] --(legendary player)--> [Jan-Ove Waldner]
In the process of Cotoami’s knowledge creation, horizontal relationships would be a small portion of all connections but represent some important discoveries in your knowledge-base. That’s why I introduced this enhancement. Annotating only horizontal relationships won’t complicate things.
This idea also leads to the insight described by the title of this article. It would be difficult to deal with both horizontal and vertical relationships at the same time in tree structures which mainly deal with the latter.
With this update, Cotonomas (Cotonomatizing), which I explained in the previous entry, and Linking Phrases are the most two essential features so far in Cotoami. Both are for highlighting your discoveries.
If you are interested in Cotoami’s way of note-taking, check out the project website on GitHub (https://github.com/cotoami/cotoami). It would be fairly easy to try it out on your PC.
The project is also waiting for your support by becoming a patron at https://www.patreon.com/cotoami. In return, you’ll get an account of the fully-managed official Cotoami server.
# The 10th anniversary of Piggydb and the current status of the journey
Hello.
Piggydb turned 10 this summer. It was August 27, 2008, when I released the first version.
Recently the project is in a dormant state. It’s been more than two years since the last release (February 2016).
In 2010, I wrote about the goal of Piggydb:
But as I experimented with Piggydb’s knowledge creation, I found out that it did not work as well as expected. I thought originally some sort of structure would gradually emerge in the continuous organization of knowledge fragments with tags. But there’s something missing still in Piggydb to achieve this goal. – Wiki, Mind maps, Concept maps and Piggydb | Piggydb
So what’s the current status of the journey to the goal?
I think I’m almost there.
Piggydb would be powerful in terms of creating highly structured content with fragment relationships and hierarchical tags, but not good at providing a place of generativity as quoted above.
The “Table Tennis Videos” demo site is a good example for this.
It’s well structured and tagged so that visitors can search the videos in various ways. However, once you decided a system of structure and tags, you can’t escape from it easily. You just input fragments so as to align with the existing structure.
It’s actually useful for certain purposes, especially for displaying some information, but I myself want to escape from static structures. If you use it as a personal or team knowledge base, it wouldn’t last long because it’s highly possible there’s no metabolism occurring in it.
This experience made me rethink the principles needed to realize metabolism in digital note-taking, and that led to the Cotoami project I’m currently working on.
The first principle implemented in Cotoami was to make the barrier to input as low as possible.
In Cotoami, you post your ideas and thoughts like chatting. It’s actually a chatting feature where you can chat with other users sharing the same space.
You would feel free to write anything that comes in your mind. Your posts just flow into the past unless they are pinned:
There are two panes side by side representing flow and stock respectvely.
Then you make connections to enrich your stock just like Piggydb’s fragment relationships.
You can view your network of knowledge in a graph:
Making connections is like chemical reactions in metabolism, which should produce a new chemical substance finally. And this is the second principle implemented as “Cotonomatization” in Cotoami.
In Cotoami, individual posts are called “Cotos”, which is a Japanese word meaning “thing” and there’s a special type of Coto called “Cotonoma” (Coto-no-ma means “a space of Cotos”). A Cotonoma is a Coto that has a dedicated chat timeline associated with it.
These two concepts are basic building blocks of a knowledge base in Cotoami.
As you can see in the above image, Cotonomas form a recursive structure and each Cotonoma has its own metabolism cycle.
Here you can understand what “Cotonomatization” is. It means converting a plain Coto into a Cotonoma:
I think this process, converting a Coto that has collected many connections and appears to be important into a Cotonoma, in order to create another conceptual space of metabolism, leads to what I originally thought in 2010: “some sort of structure would gradually emerge in the continuous organization”.
I’ve been using Cotoami for more than a year now and feel it works greatly. Many of my Cotonomas have been created spontaneously from my random thoughts or conversations with my friends and they are filled with new discoveries.
I’d like you to try it out if you read this far 😉 There’s a demo server: https://demo.cotoa.me and another server for practical use, which gives accounts to crowdfunders: https://www.patreon.com/cotoami
# Piggydb -> Oinker -> Cotoami: We need your help!
Hello!
After the experimental endeavour to create a next-generation Piggydb which became Oinker.me, we decided to re-create it from scratch as open source.
The project is called “Cotoami”. It is still in an early stage of development and we are looking for some comments and feedback from people who are interested in Piggydb-like applications.
You can easily catch up on the history of development by reading the tweets at https://twitter.com/cotoami and try out the latest version at https://cotoa.me
# Piggydb V7.0 – Java 8 / New Page Header / Bug Fixes
Hi there,
It’s long time no see… actually, it’s almost two years since the last version (V6.18) was released.
After the long pause, Piggydb’s new version is finally here.
It doesn’t contain big changes except that the page header has been redesigned. Now it looks cooler than before (hopefully), and the title displayed in the header (“Piggydb Documents” in the screenshot) is the “Database Title” which you can change in the System Info page.
It also fixes a bug that it won’t work offline because of the reference to the hosted Mathjax library.
• Mathjax load from cdn makes Piggydb unusable without internet connetction · Issue #9 · marubinotto/Piggydb
Lastly, Piggydb V7.0 requires Java 8. If you use one of the previous versions of Java, you need to upgrade it.
Enjoy 😉
# Knowledge Network Graph Visualization
From time to time I received requests for MindMap or ConceptMap like graph visualization (nodes and edges style) in Piggydb or Oinker. But I thought there were things to consider in order to implement it since the models in both applications were document-oriented as I explained in an Oinker Blog entry.
• Graph Style (Nodes and Edges) or Document Oriented Style? | Oinker Blog
Recently I came up with an idea and implemented it in Oinker as below:
The problem of displaying document-oriented data in a graph view is that a document tends to contain many large nodes which are not suitable for bird’s eye overview. So we should deal with these nodes somehow to avoid the verbosity of being precise. The idea I came up with is a way to select nodes for a graph. I call these selected nodes “topic nodes“.
Currently a topic node is:
• a node whose content has only one word or sentence.
• a node whose content length is shorter than or equal to 30.
• a node whose content is not Markdown
• a node whose content is not a URL
• a node whose content is not a file
You can check out an example of how topic nodes work in Oinker’s graph view at:
https://oinker.me/room/marubinotto/impact-mapping
This feature is still experimental and waiting for your feedback.
Oinker – https://oinker.me/
# Piggydb and Oinker as a Content Publishing Platform
Hi there,
It’s been a while. These days I’m working on a web service “Oinker” off and on, squeezing time from busy days.
Recently, I’ve started pulling well-proven features from Piggydb and adding them to Oinker. One of them is a content publishing capability which is implemented as “Anonymous Access” in Piggydb (sample site).
Oinker’s publishing feature is more sophisticated than Piggydb. You can publish your content on a room basis. A room is like a chatroom in Oinker and it has a chat timeline and a board on which you create content with your roommates.
A room is composed of a timeline and a board
You can make a board open to public so that anonymous visitors can view the content, and additionally allow logged-in users who are non-members of the room to view the timeline and post messages to it. So you can not only publish your content, but also collect feedback from audience.
What kind of content can you create in Oinker? Just check out the sample content: Unknown Tokyo
# Oinker is now open beta!
I’m looking forward to your feedback 😉
# What is Oinker?
I’ve launched a blog to deliver weekly updates on Oinker:
# Oinker is now accepting invitation requests!
Happy new year 2015 from Japan! I hope you have a wonderful year, especially in terms of knowledge work 🙂
As this year begins, the newborn service Oinker starts accepting invitation requests.
As you saw in the movie, Oinker is extremely simple. You just chat alone or with your friends and connect the comments (oinks) by dragging and dropping.
That’s all, but its potential is enormous.
I’ve been using it in real business projects with my colleagues for about a year, mainly for ideation, task and knowledge management, and it’s been just amazing. I’ll write the details of these use cases at the Oinker blog.
The best way to feel the potential is to experience it yourself, so if you are interested in trying it out, please email to [email protected]
# Oinker beta-test is about to start!
Finally, this day has come.
As I wrote in last week’s blog post, I’m going to send invitations of Oinker beta-test to the Piggydb Supporters after this post published.
What is Oinker? Just look at the video below:
If you are interested in trying it out, but not a Piggydb Supporter, please consider to buy the Piggydb Supporters Edition or wait for the next phase of inviting beta testers on request basis, which will start in the first quarter of the next year. |
Home > Pseph > Australian stats
## Data
This long page describes various aspects of the dataset.
• Results dataset (5.9 MB), including both the base set of data, the estimated data, and the R scripts to reproduce the latter and the JavaScript data file. (Updated 2015-11-27: After a reader pointed out an anomaly, I modified the TPP estimates to handle partial preference distributions, resulting in mostly minor changes.)
• Shapefiles containing division boundaries (usually the two most recent redistributions are Ben Raue's, with me converting to shapefile and tidying up the geometries so that various spatial functions wouldn't throw errors when trying to calculate on them):
### Results
The AEC has digital results online from the 1993 election onwards. The files for 1993-1998 and for 2001 available for download here, and for 2004 till the present they are available from the Results archive page. For results prior to 1993, I primarily used Adam Carr's Psephos archive. I did various checks on this data (totals versus sum of primary votes; vote tallies versus votes transferred during preference distributions, etc.); some preference flow data from the AEC datasets is missing (!), and there are some miscellaneous typos in the Psephos files. I corrected as much as I could, referring to whichever of Hughes and Graham (Voting for the Australian House of Representatives, 1901-1964 or 1965-1984) or the AEC's Election Statistics was most convenient for me. After having worked carefully through many of these (mostly minor) corrections, I learned that the Parliamentary Library has also been working on a digital dataset, with the AEC cross-checking against its own archives and ironing out errors in published results. When this data is made available, I will try to update mine.
The available data has changed over time, with much more detailed vote counting available today than in the past. The short history is as follows:
• 1901*-1917: First-past-the-post voting.
• 1919-1980: Preferential voting, but preferences were only distributed until a candidate had more than 50% of the total vote; where a candidate won more than 50% of the primary vote, we have no preference data for the seat.
• 1983**-1993: Preferences were distributed to completion, and two-party-preferred counts were undertaken for all seats, even in non-classic divisions (for Newcastle 1987, the two-candidate-preferred count was not done but the two-party-preferred count was done, despite an independent finishing second!).
• 1996-present: As above, but also with preference flow data. That is, for each candidate, we have the percentage of votes going to each of the two final candidates in the count (instead of having to estimate this based on the preference distribution, which mixes up the votes from the various excluded candidates).
*In 1901, South Australia used bloc voting, and Tasmania used Hare-Clark.
**The 1983 election was first counted in the same way as 1919-1980; in 1984, before the ballot papers were destroyed, the AEC conducted a full distribution of preferences, with the results published in General election of members of the House of Representatives, 5 March 1983: result of full distribution of preferences. I thank Mumble (Peter Brent) for sharing his spreadsheet of the 1983 TPP results, which motivated me to track down the full results booklet.
For the years where preferences were not distributed to completion, it would still be nice to have approximate figures for the preference flows and two-party-preferred. There are several different sets of estimates of the TPP; in the spirit of xkcd.com/927, and not necessarily in the spirit of Colin Hughes* (Australian Two-Party Preferred Votes, 1949-82, downloadable after a fashion from ADA), I have added my own, detailed below.
*In the introduction to his book of TPP tables (counted and estimated), Hughes writes, entirely reasonably, "it may matter less whether we say 47.2 percent or 47.5 percent than we all agree to say the same thing and then get on to saying something of greater substance." I figured that with all the power of modern computing behind me, it would be a shame if I didn't make some effort at writing a script to generate such estimates. As I describe below, I don't think I've necessarily improved the results, but it was worth a try.
For candidate names I rely on the Wikipedia candidate pages, and with only occasional exceptions, I have not checked the spellings.
### Preference flow estimates
My two-party-preferred estimates are based first on estimating the preference flows for each party. In three-candidate contests where the third candidates preferences were distributed, the preference flow is known exactly. For full preference distributions involving more than three candidates, I estimate the preference flows in the simplest logical way possible: if some portion of minor candidate A's votes are distributed to minor candidate B, then that portion of A's preferences are assumed to flow in the same proportions as the rest of the votes in B's pile. This gives reasonable results, but of course won't always be correct: to take a modern example, PUP voters who preference the Greens ahead of either major party are unlikely to preference Labor ahead of the Coalition at the same rate as people who vote 1 Green.
For 1996-2013, we can compare the preference flows estimated by this method to the counted flows. The scatter plot of estimated versus true preference flows for each non-major-party candidate in 2013 is typical:
The correlations are good enough for not to give up on the exercise: for the elections 1996-2013, they are respectively 0.88, 0.91, 0.93, 0.93, 0.92, 0.95, 0.92. It looks like there is a slight bias in the results, with particularly strong preference flows being under-estimated – a smooth curve through the scatters would be slightly S-shaped rather than straight.
For comparison, a simpler method to estimate TPP flows would be to just use the preferences that went straight to Labor or Coalition. i.e., if 20% of Candidate A's preferences go to Candidate B, 60% go to the Coalition, and 20% go to Labor, then we could estimate the TPP flow to Labor as 20% / (20% + 60%) = 25%. This method gives slightly poorer correlations: 0.86, 0.90, 0.91, 0.90, 0.89, 0.94, 0.92.
### Two-party-preferred estimates
The above procedure for the preference flow estimates covers all the cases in the graphs or maps where the preference flows are plotted. But for two-party-preferred estimates in seats where preferences were not distributed, we need to guess the preference flow from all of the non-major-party candidates contesting the seat. (Here I'm assuming that there are candidates from both major parties.) I guess these preference flows with some "rough-and-ready" stats that might betray my lack of serious statistical training, and which shouldn't be treated as magic, but which I think does OK anyway.
First, I create a time series – one number for each election – starting with mean known or estimated preference flows for each party. (The mean is calculated as the simple average of the flows for an election, i.e., it is not weighted by number of votes for the party in each seat. This is probably the wrong thing to do.) There may not be many observations in these figures, since preferences were only occasionally distributed, so I regress each flow towards 50% according to some eyeballed/fudged parameters: assume a Bayesian prior of mean 50% and standard deviation 15 percentage points, and an measurement of flow $$f$$ from $$N$$ observations with uncertainty $$12 / \sqrt{N}$$. Then,
\begin{equation*} f_{\text{regressed}} = \frac{50/15^2 + fN/12^2}{1/15^2 + N/12^2}. \end{equation*}
Through this time series I then calculate a smoothed loess curve: the idea is that the true preference flow for the party may vary over time, but should do so fairly gradually. And with such little hard data to draw on for each election, I think it's useful to aggregate across multiple elections somehow. The loess smoothing parameter $$\alpha$$ is set to 0.5 , and each data point is weighted by the number of seats that went into the measurement.
With a preference flow figure now decided for each party at each election, these percentages are applied to candidates whose preferences contributed to the hypothetical TPP. Independents are assumed to split 50-50 (it is tempting to treat independents as any other party and hence allow their guessed flows to differ from 50%, but I worry that the sample of independents whose preferences got distributed would be biased somehow), except for some special cases where I gave for this purpose a temporary party affiliation to a candidate (e.g, if a candidate had recently represented the Liberal Party, and I knew about this – probably after seeing some anomalies in Hughes's TPP estimates – then the candidate was coded as Ind Lib for preference purposes).
(Edit 2015-11-27: In the original version, I applied these estimated preference flows to the primary votes, even if there had been a partial preference distribution. I now use the latest count in which both a Labor and a Coalition candidate were present. This gives mostly minor changes, but fixes at least one anomaly where Labor won the seat but I had estimated them at less than 50% of the TPP – this anomaly was pointed out to me by a reader. The benchmarking numbers below have been updated, with the original page here.)
We have exact TPP results from 1983 onwards to benchmark this procedure. It's not obvious whether it should work better or worse on recent elections. On the one hand, there are many more candidates and parties, so the preference flow estimates are based on many "mixed" piles of votes which started with different parties. On the other hand, there are a lot more candidates who need preferences to get to 50% of the vote.
Since most of the TPP vote comes from the primary votes for Coalition and Labor, even a silly TPP estimation technique will correlate well with the true values. In the following tables I compare the the true-versus-estimated correlations, mean error by seat, and mean absolute error by seat, both for the estimates that I used and for a benchmark of assuming all seats have preferences splitting 50-50.
EstimatesBenchmark
Yearρerror|error|ρerror|error|
19830.99620.030.760.98890.281.15
19840.99660.190.680.97970.831.43
19870.99470.110.800.96961.041.99
19900.9935-0.331.100.9761-0.622.03
19930.9977-0.240.700.9867-0.241.45
19960.99730.571.000.9956-0.471.03
19980.9961-0.211.050.9895-0.931.86
20010.98680.031.130.9765-1.132.07
20040.9949-0.260.970.9824-1.442.09
20070.9942-0.280.860.9828-1.822.19
20100.99510.180.820.9838-2.202.73
20130.99580.520.840.9864-1.882.06
So, where I estimate a seat's TPP, it doesn't look like there's much bias to either side, and the typical error is about a percentage point.
Another question of interest is how the other existing TPP estimates fare against true values. For this, we have only the 1983 election to use as a test*: both Malcolm Mackerras and Adam Carr have their own estimates made after the election but before the full distribution of preferences in 1984. Of course, I have the advantage of making my estimates with the true values already known; I can only assure you that I didn't try to tweak the parameters in my code to get it to closely match the 1983 results, and in any case Mackerras's estimate errors had a smaller spread than mine.
*Joan Rydon compared Mackerras's state- and national-level TPP estimates with the subsequent full results in 'Two-party preferred': The analysis of voting figures under preferential voting, Politics, 21:2, 68-74 (1986).
Estimateρerror|error|
DB0.99620.030.76
Psephos0.99700.520.77
Mackerras0.99780.330.62
I have the consolation of having more unbiased TPP estimates than Psephos or Mackerras, but I wouldn't read too much into that – in three out of the next eleven elections, my estimates were biased at least as much as Mackerras's were in 1983.
(Mackerras's estimates were published in Double Dissolution Election, March 5, 1983: Statistical Analysis.)
Still, it was a fun exercise to try, and there's scope for some improvement, should anyone want yet another set of TPP estimates. In particular, I think that the strong preference flows could be modelled better (following the S curve of the scatter plots mentioned earlier), and I also think that, at least in the case of the DLP, it would be better to separate preference flows by state (or rather, separate by Victoria and rest-of-Australia). DLP preferences in Victoria flowed more strongly to the Coalition than they did in other states, and by using the national averages, I think I've under-estimated the Coalition's share of the national TPP by a smidgen (and the Victorian TPP by quite a bigger smidgen) throughout the DLP's strongest years. Finally, I ignored donkey votes, which severely distort the preference flows of small parties, and which should be accounted for both when estimating average preference flows from a party, and when applying them to a given seat.
There remains the question of what to do with seats not contested by both major parties – an occurrence that was once quite frequent (Labor didn't contest Wimmera for any election between 1914 and 1937 inclusive). In the case of one non-TPP election with TPP elections immediately before and after, I apply the state TPP swings forwards and backwards and average the two figures to generated my guessed TPP. For longer sequences of non-TPP elections, I just chain the swings together. The results might be somewhat fanciful, and they are excluded from scatter plots, but it is useful to have these seats' TPP's guessed so that state and national totals are more reflective of what they would have been if all voters had had the choice between Labor and Coalition.
A page of TPP tables is here.
### Party affiliations
I spent a while checking the party affiliations of candidates. Many of these are obscure and of little interest. I have one section on issues with the Victorian Country Party, followed by some miscellaneous notes. Party affiliations in the early years are sometimes unclear. References to "Hawker" are to Politicians All: The Candidates for the Australian Commonwealth Election 1901: A Collective Biography.
#### Victorian Country Party, 1934-43
I'm writing this section not because I think it's an original contribution to Australian political history, but rather in the hope that it helps provoke a better organisation of the Wikipedia entries and tables on this subject. See also this talk page comment by Frickeg, which gives an overview in the form of a chronological set of Trove references mostly about Alex Wilson, and which I borrow from below. I don't claim that these notes are a properly complete summary – in particular, it will be worth looking up the Labor-UCP relations in the state parliament – but hopefully they'll help.
I'll quote from Ulrich Ellis, A History of the Australian Country Party [1]. From p204, on the 1934 election:
Any hope of an electoral agreement in Victoria was destroyed by a fresh conflict in the state Country Party organization. The central council required all candidates, state as well as federal—to sign a pledge. This committed signatories to stand down from contests if not endorsed; to refrain from voting in parliament against majority decisions of the party caucus even on matters outside official party policy; and to refuse to support a composite government without the approval of the Victorian organization. All sitting federal members (T. Paterson, Q. C. Hill, W. G. Gibson, H. McClelland and Senator R. D. Elliott) refused to conform. Hill announced his retirement and the Echuca seat became a battle-ground. Two candidates supporting the federal faction were nominated. A Labor candidate entered the contest. The Victorian organization entered the lists with a young candidate named John McEwen who had signed the pledge.
The newspapers at the time distinguished Australian Country Party (ACP; Coalitionist) candidates from United Country Party (UCP; the Victorian anti-Coalitionist) candidates, and the Wikipedia tables replicate the affiliations given in the Argus's results tables [2]. But the line between the two camps is often blurry to me as I read the old newspapers (which is perhaps not surprising, for what was essentially an internal party struggle). As an example of this, I present below some news report excerpts concerning the campaign in Echuca, which was contested by three Country Party candidates (and one ALP candidate) – McEwen (UCP), Stewart (ACP in the Argus's results tables) and Moss (also ACP in the tables). This designation of ACP is consistent with the Ellis excerpt above and also with this report on 7 August [3]:
Both Mr Moss and Mr Stewart stated that they would not sign the pledge, but that they were quite prepared to sign the party platform and loyally support its programme.
But (13 August) [4]:
The Echuca branch of the United Country party has agreed to support the following candidates for the Echuca seat:—Messrs. J. McEwen, W. Moss, and Galloway Stewart. It was decided to abide by the decision of the Shepparton conference that members should exercise their preferences according to their discretion.
On the other hand (30 August) [5]:
He (Mr Stewart) was not endorsed by the Country Party because he would not subscribe to its new nomination form. He was not prepared, in the event of being elected, to do what he was told to do by the majority of Country Party members in the House. He must remain free and unshackled to carry out the wishes of the electors.
Very clear ACP/UCP lines appear to be drawn when Earle Page turned up (1 September) [6]:
Dr. Page on his arrival was disturbed to learn that he had been advertised as speaking on behalf of Mr. Galloway Stewart. He made it clear that he spoke on behalf of the two Australian Country Party candidates, and that he urged voters to give first preference to either Mr. Moss or Mr. Stewart, their third preference to Mr. McEwen, Victorian Country party candidate, with Labor last.
But just to throw a spanner in the works, a letter from some Stewart supporters on 14 September says that McEwen and Moss swapped preference recommendations [7]:
We notice by the official "tickets" issued by the other two country party candidates that Mr Galloway Stewart has been relegated to third preference.
And indeed, Moss's preferences split 2:1 in favour of McEwen over Stewart, with McEwen then easily defeating Stewart on Labor preferences.
From p207 of [1]:
When the central council devised a pledge to be signed by all candidates, federal and state, the federal members revolted and challenged its legality and soundness. The party's federal rules merely provided that the party might not 'form an alliance with any other political organization which does not preserve intact the entity of the Australian Country Party Association'. This was the situation existing when nominations were called for the federal elections. The Victorian party nominated its own candidates in a number of seats but only one, John McEwen, succeeded. Upon his election he immediately associated himself with the federal party and incurred the hostility of his Victorian colleagues for urging that the breach be healed.
Still, overall I'm happy enough with the ACP and UCP designations as given in the Argus's results tables, and as currently in the Wikipedia tables. Generally in my dataset, I've tried to designate party affiliations based on their campaigns, and not by their actions in the subsequent parliament (where relevant).
At least superficially, tensions in the party in the leadup to the 1937 election seem (in my reading) lower than in 1934. Only in Wimmera was there more than one CP candidate, with the sitting member (and Coalitionist) Hugh McClelland losing the UCP pre-selection to Alex Wilson [8]:
Despite the result of the ballot, Mr. McClelland announced last night that he would contest the seat as an unendorsed Country party candidate.
...
Mr. Wilson is a member of the central council of the party, and he is claimed as a strong opponent of composite Ministries. Mr. McClelland is a supporter of the Federal Composite Ministry.
Later from the same article:
The Minister for the Interior (Mr. Paterson) and Mr. McEwen, the other retiring Country party candidates in Victoria, have been endorsed by the Victorian central council for Gippsland and Indi respectively. Mr. Paterson and Mr. McEwen are supporters of the Federal composite Ministry.
The Wimmera contest was certainly split along Coalitionist v Anti-coalitionist lines [9]:
Mr. McClelland has received strong assistance from the leader of the Federal County party (Dr. Page), and Mr. Wilson has been assisted by two State Ministers—Mr. Bussau and Mr. Old.
The Argus results tables [10] refer to all of the endorsed Country candidates as UCP. Despite the Wimmera contest, the split doesn't appear as official or official-ish as in 1934 (and 1940, below), and so I have left all candidates in my dataset as "CP" with McClelland "Ind CP".
Of the four elected CP members from Victoria, two were from the anti-Coalitionist side of the party. Ellis writes (p220 of [1]):
A representative of the Victorian Country Party, Alexander Wilson, unseated the sitting member (Hugh McClelland) in Wimmera. As the loss of the Indi seat in 1928 sealed the fate of the Bruce-Page government, so the loss of Wimmera assisted a few years later to defeat a government. Wilson remained aloof from the federal party but G. H. Rankin, the Chief President of the Victorian organization, who won the Bendigo seat from the United Australia Party, incurred the wrath of his colleagues by joining the federal parliamentary party immediately.
(Rankin's subsequent backdown, mentioned in his ADB entry [11], occurred in May 1939 [12], as he ceased meeting with the federal parliamentary Country Party.
Any superficial truce between the two Victorian factions certainly ended soon after the election. John McEwen accepted a position as Minister for the Interior, and the UCP expelled him from the party [13]:
"In view of Mr. McEwen's failure to observe the rules of the Victorian United Country party in the acceptance of a portfolio in the Lyons composite Government and his lack of loyalty to endorsed Parliamentary candidates of the party at the recent Federal election, this central council decides to cancel his membership of the Victorian United Country party."
At the 1938 party conference, Thomas Paterson (Coalitionist) resigned from the UCP in protest, with a hundred others leaving the conference with him [14]. He formed the Liberal Country Party [15], which ended up standing two candidates in the 1940 federal election (Paterson and McEwen themselves).
Meanwhile, Alex Wilson followed the anti-Coalitionist principles of his faction of the UCP. Paterson said of him [16]:
...the attitude of Mr Wilson, M.H.R. for Wimmera, sitting in isolation, refusing to associate himself with those who should be his colleagues, generally voting with the Labour Party against his colleagues and weakening the effectiveness of the Party in that way.
June 1939 [17]:
The secretary of the party (Mr. D. R. Downey) announced yesterday that Mr. A. Wilson, the sitting member, was the only applicant for endorsement for the Wimmera electorate.
Wilson did nevertheless face Country Party opposition in 1940, in the form of Hugh McClelland, whom Wilson had defeated in 1937 and who ran as Ind CP. With a Labour candidate and an independent also nominating, a flavour of the allegiances can be gleaned from this Argus report [18] in the week before the election:
[T]he closest observers admit that it is impossible at this stage to predict whether Mr. Alex Wilson (U.C.P.) will be re-elected, or Mr. McClelland (Ind. C.P.), or Mr. M. M. Nolan (Lab.) will displace him.
...
Nomination of a Labour candidate at this election must take many votes from Mr. Wilson, who, however, probably commands more U.C.P. support than when he displaced Mr. McClelland three years ago.
The election probably will be decided by third position in the primary count. If Labour fills that position Mr. Wilson is almost certain of re-election. If Mr. McClelland is third his votes probably will carry Mr. Wilson in, but if Mr. Wilson is placed third the Labour candidate may draw enough support to win. Because of the splitting of the U.C.P. vote the Labour candidate may lead in primaries.
In the event, Wilson won a fairly commanding 44% of the primary vote, and with over 80% of Labour's preferences, he defeated McClelland 66-34 on two-candidate-preferred. The result of the election was a hung parliament. Ellis writes (p257 of [1]):
Two independents held the balance of power if they chose to use it—A. Wilson (a member of the previous parliament) and A. W. Coles who, as an independent expressing sympathy with the United Australia Party, had captured the seat that had been Sir Henry Gullett's.
The designation of "independent" certainly describes Wilson's actions in parliament, which most notably included crossing the floor to bring down the Fadden government. And in an article about the coming merger of the LCP and UCP, there is mention that [19]
Later, Mr. Wilson, who has consistently supported the Federal Labor Government and whose attitude encouraged Mr Curtin to make his successful bid for office was also 'carpeted' by a capricious central council for daring to label himself an independent.
In April 1943, a union of the Victorian Country parties was close [20]:
In his speech to delegates, Mr McEwen emphasised that the proposal was to form a new party representing country interests. He expressed the view that it would not now be difficult to bridge the gulf between the two parties, particularly as the UCP had reversed its previous policy preventing its Federal parliamentary representatives from other States....
...
"We have seen Major General Rankin and Mr Wilson instructed not to attend meetings of the Australian Country party. We have seen that instruction revoked and these two members authorised to attend ACP meetings. Mr Rankin has continuously attended for at least two years."
Nevertheless, Wilson remained committed to independence in the parliament and also committed to the UCP, where he still commanded some support. ("He can rightly be termed a modern Abraham Lincoln," said one member of the UCP Central Council [21]). He retained Wimmera with over 60% of the primary vote, not opposed by any endorsed CP candidates. The Argus results tables [22] designate him "CP" along with all other endorsed Country candidates; given his unusual position, I have called him "UCP" for the 1943 election in my dataset.
(He quit parliament in 1945, mercifully ending my struggles in deciding how to label Victorian Country Party candidates.)
[1] Ulrich Ellis, A History of the Australian Country Party, MUP (1963).
#### 1901
Lang
Mitchell, Ind. I could find little about this candidate; usually no party affiliation given by SMH [1]; the Muswellbrook Chronicle in their results [2] call him Protectionist. H&G say Ind Prot; Hawker says Prot; I have called him Ind.
New England
Simpson, Ind. Hawker describes him as the "second" freetrader, but I have called him Ind following [1], where he says that "he would give, if elected, the Barton Government a fair trial".
Wannon
Cussen, Ind Prot. "If elected he would give his support to Mr Barton" [1]
#### 1903
Capricornia
Ryan, Ind Prot.
Northern Melbourne
Painter, Ind Prot. 'Protectionist "up to the hilt"'
#### 1906
Batman
Painter, Ind. Perhaps should be Ind Prot again?
Batman
Vernon, Ind Prot.
#### 1910
Oxley
Dent, Ind. The Courier said that "Mr. Dent is standing in the democratic interest", correcting an article in which they called him Labour.
Bass
Storrer, Ind Lib. Apparently Storrer was opposed to the Liberal Fusion, but he had plenty of official Liberal support, and I've called him Ind Lib rather than Ind Prot.
#### 1913
Henty
Hewison, Ind Lib. A Liberal who withdrew after arbitration, leaving Boyd as the endorsed Liberal
#### 1914
Gippsland
Wise, Ind Lib. Called himself a Liberal; was opposed to the Fusion. Apparently he often voted with Labor, but I've called him Ind Lib.
#### 1919
Brisbane
Boland, Ind. "[H]e had emerged... as an independent in search of some congenial and honest party. He regretted that sincerity could not be found in any of the organisations with which he had been associated." [1] Another Boland ran as a state candidate, apparently as a Nationalist [2].
#### 1922
West Sydney
Bryde, Prot Lab. Protestant Labour.
Henty
Francis, Nat. Sometimes called Ind Nat [1]; sometimes just Nat [2]. I've followed H&G and called him Nat, with hesitation.
Kooyong
Best, Nat. Usually referred to as Ind Nat by the papers, but I've left him as Nat, on the grounds that no other Nationalist candidate ran against him.
Northern Territory
Love, NTRL. Northern Territory Representation League
Northern Territory
Nelson, ALP. According to ADB, he ran as an independent with union support, and joined the Labor Party after the election. Following Psephos I've called him ALP.
#### 1925
Calare
Southwick, Ind. H&G say Ind Nat, but I prefer Ind [1]. "The people must get rid of both parties, and get down to solid work."
#### 1928
Gippsland
Wise, Ind Lib. Independent Liberal.
Flinders
Robertson, Ind. I found one reference to him as Ind Nat [1], but generally in what little there was of him in the papers, he was just described as an independent (e.g., [2])
#### 1940
East Sydney
Phillips, Atok. Phillips called himself an "Atokist" [1] and this was good enough for the SMH in its results tables [2], albeit with scare quotes.
Wannon
Crawford, Ind CP.
Yarra
Gibson, Soc. Gibson was a communist; usually referred to as an independent during the campaign [1], but was designated Soc in the results tables [2]
#### 1943
Northern Territory
Murray, Ind Lab. Murray did not have endorsement of the federal party, and called himself an independent Labor candidate
#### 1946
Newcastle
Ellis, Service. The Service Party was a distinct entity from the Services Party, though I think its only candidate was Ellis in Newcastle.
Northern Territory
Wallman, Ind Lab. Endorsed by (some branches of?) NT Labor contra federal party
#### 1949
Wide Bay
McDowell, Ind Lab. McDowell called himself "Democratic Labour"; I've coded this as Ind Lab.
#### 1954
Warringah
White, Ind Lib. White was an unendorsed Liberal.
McPherson
Green, Ind CP.
#### 1975
Werriwa
Keep, Ind. Canberra Times and SMH tables say "HOPP", but don't say what that might stand for; official Election Statistics has blank.
Kingston
Oakley, Ind. Canberra Times and SMH tables say Workers Party, but the official Election Statistics and the Parliamentary Handbook say independent. Oakley later stood as a Progress candidate (the re-named Workers Party); I edited Wikipedia to say WP, but ended up leaving her as an independent in my dataset.
### Maps
I digitised the maps until the recent redistributions covered by the Tally Room myself, working either from Commonwealth of Australia, 1901-1988, electoral redistributions ("the AEC book") or the official redistribution maps. The maps have been heavily simplified for fast loading on the web; the shapefiles available for download above are not so heavily simplified but are riddled with various hopefully minor errors. I had never tried digitising a map before this project, and I expect frequent errors of 10+km in regional areas when working off the AEC book (on one occasion, when joining up a map of Sydney surrounds to the rest of New South Wales, there was a difference of 0.2 degrees between my two georeferencing attempts; I'd like to think that I fixed that isolated mishap, but I can't claim too much confidence either in fixing it well or in calling it isolated). Georeferencing in many outer metro areas is also likely poor, as the printed maps run out of control points for me to use. (The pages in that book are enormous, so I just took photos of them rather than scanning, which probably didn't help.)
I had particular trouble with the Victorian maps, in some cases because I didn't know what I was doing and in some cases because the AEC's maps were drawn incorrectly. For the 1949 redistribution, I used the Argus election supplement of 7 December to help interpret a supposed division labelled Fitzroy and to locate the unlabelled (and very subtly drawn) La Trobe. The AEC book's 1989 redistribution contains at least one clear error of several hundred metres: part of the boundary between Aston and La Trobe runs along the railway in the image below, but the map has it drawn separately.
Antony Green has hand-drawn quite a few maps of various electorates (working, I presume, off the actual descriptions), and put these on his electorate profile pages (e.g., Deakin). I could have lifted Antony's shapefiles to get most of the metro areas accurate to the street in the years that Antony covers, but instead I used them only as an occasional guide where I was totally lost, and as a benchmark to check the quality of my georeferencing. Here is a picture of Deakin 1989 (me: red; Antony: sky blue):
Where I used the (large!) official redistribution maps, I generally expect better accuracy, with metro areas often being accurate to the street. Here is a comparison against Antony for the division of Banks for the redistributions of 1984 (AEC book) and 2000 (large map):
I hope that errors in rural areas are almost always less than 5km for the redistributions from 1992 onwards, and usually no more than 2-3km. I did encounter some anomalies, either in my understanding, in the maps, or in the roads shapefile from Geoscience Australia that I used to define many control points. In the NSW redistribution of 1992, the Sydney surrounds map has co-ordinate lines, which look to me like easting and northing lines. But when I use them as control points, I get systematic errors of several hundred metres relative to the road and railway intersections. My recollection is that I semi-randomly compromised between my systematically differing control points, so treat the boundaries with appropriate caution (any such errors will propagate to the 2000 redistribution, wherever the boundary was not changed).
#### Circle maps
Geographic Australian electoral maps are dominated by the large electorates in rural areas, and it is useful to show the results instead with equal-area shapes. In the UK, there are some nice hexagon maps (example) which expand the size of London and other major cities, and which still make it easy to see roughly where each constituency is geographically. For Australia's electorates of wildly unequal size, the best equal-area representation I've seen is that of Nick Evershed and Gabriel Dance at The Guardian. The idea is to draw a circle of constant size at the centroid of each electorate, and then to allow overlapping circles to move apart. Evershed and Dance's Javascript implements a lightning-fast algorithm to move the circles apart from each other on the fly; I decided instead to slowly pre-calculate the equal-area circle centre locations. (There are a handful of electorate circles in the Guardian's map which I think are in the wrong place. That might be an excessive nitpick, and others might not like my circle locations.)
I did my calculations in R with Rcpp. It was the first time I'd used Rcpp, so I'm quite happy with it, as I always am when I get a new programming thing working. The idea is: in R, load the (geographic) shapefile and compute the co-ordinates of the centroids; then pass the centroids to an Rcpp function. The centroid locations are then evolved as though they are mutually-repulsive point particles with movement being heavily damped so as to move the centroids as little as possible. To further keep movement to a minimum, the centroids don't move when they are sufficiently distant from all other centroids.
I present both the R code, which uses various spatial libraries, and the Rcpp code. The R code won't run as-is unless the various shapefiles are already in the same relative folders as on my computer, but any interested readers should be able to pick out what they need.
# Script to make equal-area shapefiles for the electorates, following the idea here:
# http://www.theguardian.com/world/datablog/2013/sep/06/better-election-results-map
library(sp)
library(rgeos)
library(maptools)
library(rgdal)
library(Rcpp)
sourceCpp("equal_area.cpp")
tau = 2*pi
# In pixels:
polygon_height = 10
# Number of sides:
polygon_sides = 20
# Alphabetical order!
states = c("act", "nsw", "nt", "qld", "sa", "tas", "vic", "wa")
start_year = 1901
end_year = 2013
divisions_count = numeric()
lon_lat_to_web_mercator = function(lon, lat, zoom) {
# Not quite the Google version, which flips the y-component
lon = lon * tau/360
lat = lat * tau/360
x = (lon + tau/2) * 2^zoom * 256 / tau
y = (log(tan(tau/8 + lat/2)) - tau/2) * 2^zoom * 256 / tau
return (data.frame(x, y, row.names=NULL))
}
web_mercator_to_lon_lat = function(x, y, zoom) {
# Not quite the Google version, which flips the y-component
lon = (x*tau/(256 * 2^zoom) - tau/2)*360/tau
lat = (2*atan(exp(tau/2 + y*tau/(256*2^zoom))) - tau/4)*360/tau
return (data.frame(lon, lat, row.names=NULL))
}
make_regular_polygon = function(x, y, n, r) {
theta = seq(0, -tau, length.out=n+1)
theta[n+1] = 0
x_out = x + r*cos(theta)
y_out = y + r*sin(theta)
return(cbind(x_out, y_out))
}
phi = tau/4 * (1 - 2/polygon_sides)
polygon_side_length = polygon_height / tan(phi)
# Get every redistribution date: states' redistribution dates as
# vectors in a list, and a vector for any change across the country.
redist_dates = list()
all_redists = numeric()
for (i in 1:length(states)) {
state = states[i]
redists = list.files(path=state, pattern="[0-9][0-9]\\.shp")
redists = as.numeric(gsub("[^0-9]", "", redists))
redist_dates[[i]] = redists
all_redists = c(all_redists, redists)
}
all_redists = unique(all_redists)
all_redists = all_redists[order(all_redists)]
first_redist = all_redists[max(which(all_redists <= start_year))]
redists_to_process = c(first_redist, all_redists[which((all_redists > start_year) & (all_redists <= end_year))])
for (year in redists_to_process) {
print(year)
divisions = character()
x = numeric()
y = numeric()
centroids = data.frame(x, y)
national_dir = sprintf("national/%d", year)
dir.create(national_dir)
for (i in 1:length(states)) {
state = states[i]
skip_state = 0
state_redists = redist_dates[[i]]
possible_redists = state_redists[state_redists <= year]
if (length(possible_redists) == 0) {
skip_state = 1
} else {
this_year = max(possible_redists)
}
if (skip_state == 0) {
in_shp_file = sprintf("%s/%s_%d.shp", state, state, this_year)
# Copy files to the national directory:
shp_name = sprintf("%s_%d\\.", state, this_year)
shp_files = list.files(path=state, pattern=shp_name)
for (j in 1:length(shp_files)) {
in_file = sprintf("%s/%s", state, shp_files[j])
out_file = sprintf("%s/%s", national_dir, shp_files[j])
file.copy(in_file, out_file, overwrite=TRUE)
}
this_divisions = as.character(this_shp@data$Division) this_centroids = as.data.frame(gCentroid(this_shp, byid=TRUE)) centroids = rbind(centroids, this_centroids) divisions = c(divisions, this_divisions) } } out_shp_file = sprintf("equal_area/aus_%d_equal", year) out_json_file = sprintf("%s.geojson", out_shp_file) out_pts_json_file = sprintf("%s_pts.geojson", out_shp_file) out_pts_json_layer = sprintf("%s_pts", out_shp_file) # writeOGR doesn't like overwriting geojson files, so delete it instead: if (file.exists(out_json_file)) { file.remove(out_json_file) } if (file.exists(out_pts_json_file)) { file.remove(out_pts_json_file) } centroids_merc = lon_lat_to_web_mercator(centroids$x, centroids$y, 4) # uncluster(x, y, damping_coeff, radius, time-step, max_time) # I've got the max_time set pretty high here, but it should only take t ~ 30. new_points = uncluster(centroids_merc$x, centroids_merc$y, 15, radius, 1e-2, 1000) new_points_lonlat = web_mercator_to_lon_lat(new_points[1:length(divisions), 1], new_points[1:length(divisions), 2], 4) new_points_xy = data.frame(x=new_points_lonlat$lon, y=new_points_lonlat\$lat)
poly_list = list()
for (ct in 1:length(divisions)) {
this_hexagon_merc = make_regular_polygon(new_points[ct, 1], new_points[ct, 2], polygon_sides, radius/2)
this_hexagon_lonlat = as.matrix(web_mercator_to_lon_lat(this_hexagon_merc[, 1], this_hexagon_merc[, 2], 4))
poly_list[[ct]] = this_hexagon_lonlat
}
poly_sp = SpatialPolygons(mapply(function(poly, id) {
Polygons(list(Polygon(poly)), ID=id)
}, poly_list, divisions))
proj4string(poly_sp) = CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs")
poly.df = SpatialPolygonsDataFrame(poly_sp, data.frame(Division=divisions, row.names=divisions))
points.df = SpatialPointsDataFrame(new_points_xy, data.frame(Division=divisions, row.names=divisions))
# writeOGR's behaviour when writing GeoJSON files appears to be very dependent on
# the gdal version installed on the system. The check_exists=FALSE is a workaround
# based on https://trac.osgeo.org/gdal/ticket/5908 when using the current (as at
# time of writing) version of rgdal with gdal 1.11.
writeOGR(poly.df, ".", layer=out_shp_file, driver="ESRI Shapefile", overwrite_layer=TRUE)
writeOGR(poly.df, out_json_file, layer=out_shp_file, driver="GeoJSON", check_exists=FALSE)
writeOGR(points.df, out_pts_json_file, layer=out_pts_json_layer, driver="GeoJSON", check_exists=FALSE)
divisions_count = c(divisions_count, length(divisions))
}
And now the Rcpp:
/* Input to uncluster is a vector of x's and a vector of y's,
along with some parameters of the dynamics and integration.
The program then treats these as point particles which mutually
repel one another according to a potential function, with
motion damped. The idea is that the final set of points will
be separated enough so that they can be used as locations for
equal-area shapes on a Google Map. */
#include <Rcpp.h>
using namespace Rcpp;
double dist(double x1, double y1, double x2, double y2) {
return sqrt((x1-x2)*(x1-x2) + (y1-y2)*(y1-y2));
}
double potential(double d, double r) {
// d = dist, r = range of potential function, measured in pixels.
// r should be ~4 maybe.
double V;
double k = 5.0;
if (d < r) {
V = k*(d - 1.05*r)*(d - 1.05*r);
} else {
V = 0;
}
return V;
}
NumericVector calc_der(NumericVector r, double mu, double range) {
// Calculates the components of the dr/dt vector.
// First two entries are co-ordinates, next two are velocity components.
long i, j, idx_i2, idx_i3, idx_j0, idx_j1, idx_j2, idx_j3;
long n = r.size();
long num_pts = n/4;
NumericVector drdt(n);
// Needless initialisation?
for (j = 0; j < n; j++) {
drdt[j] = 0.0;
}
double r_ix, r_iy, r_jx, r_jy, d, fx, fy;
for (j = 0; j < num_pts; j++) {
r_jx = r[4*j+0];
r_jy = r[4*j+1];
idx_j0 = 4*j + 0;
idx_j1 = 4*j + 1;
idx_j2 = 4*j + 2;
idx_j3 = 4*j + 3;
// Potential terms:
for (i = j+1; i < num_pts; i++) {
r_ix = r[4*i+0];
r_iy = r[4*i+1];
idx_i2 = 4*i + 2;
idx_i3 = 4*i + 3;
d = dist(r_ix, r_iy, r_jx, r_jy);
fx = (r_jx - r_ix) * potential(d, range) / d;
fy = (r_jy - r_iy) * potential(d, range) / d;
drdt[idx_j2] += fx;
drdt[idx_j3] += fy;
drdt[idx_i2] -= fx;
drdt[idx_i3] -= fy;
}
}
for (j = 0; j < num_pts; j++) {
idx_j0 = 4*j + 0;
idx_j1 = 4*j + 1;
idx_j2 = 4*j + 2;
idx_j3 = 4*j + 3;
if ((drdt[idx_j2] == 0) && (drdt[idx_j3] == 0)) {
// No potential terms: keep the point stationary.
drdt[idx_j0] = 0.0;
drdt[idx_j1] = 0.0;
} else {
// Definition r-dot = v:
drdt[idx_j0] = r[idx_j2];
drdt[idx_j1] = r[idx_j3];
// Damping proportional to v:
drdt[idx_j2] += -1.0 * mu * r[idx_j2];
drdt[idx_j3] += -1.0 * mu * r[idx_j3];
}
}
return drdt;
}
// [[Rcpp::export]]
NumericMatrix uncluster(NumericVector x, NumericVector y, double mu, double range, double h, double t_final) {
// x and y are vectors containing the pixel locations to be unclustered.
// h is time-step
long num_pts = x.size();
long n = 4*num_pts;
long i, j, k;
double r_jx, r_jy, r_kx, r_ky;
int cleared;
double d, min_dist;
double t = 0;
long t_steps = floor(t_final / h);
NumericMatrix xy(num_pts+1, 2);
// Vectors to hold everything:
NumericVector r(n);
NumericVector k1(n);
NumericVector k2(n);
NumericVector k3(n);
NumericVector k4(n);
NumericVector vec_aux(n);
for (j = 0; j < num_pts; j++) {
// Position:
r[j*4+0] = x[j];
r[j*4+1] = y[j];
// Velocity:
r[j*4+2] = 0.0;
r[j*4+3] = 0.0;
}
// Not sure if I need this initialisation:
for (j = 0; j < n; j++) {
k1[j] = 0.0;
k2[j] = 0.0;
k3[j] = 0.0;
k4[j] = 0.0;
vec_aux[j] = 0.0;
}
for (i = 0; i < t_steps; i++) {
// RK4:
k1 = calc_der(r, mu, range);
for(j = 0; j < n; j++) {
vec_aux[j] = r[j] + 0.5*h*k1[j];
}
k2 = calc_der(vec_aux, mu, range);
for(j = 0; j < n; j++) {
vec_aux[j] = r[j] + 0.5*h*k2[j];
}
k3 = calc_der(vec_aux, mu, range);
for(j = 0; j < n; j++) {
vec_aux[j] = r[j] + h*k3[j];
}
k4 = calc_der(vec_aux, mu, range);
for(j = 0; j < n; j++) {
r[j] = r[j] + h*(k1[j] + 2.0*k2[j] + 2.0*k3[j] + k4[j]) / 6.0;
}
t += h;
// See if we've unclustered:
cleared = 1;
for (j = 0; j < num_pts; j++) {
for (k = j+1; k < num_pts; k++) {
r_jx = r[4*j+0];
r_jy = r[4*j+1];
r_kx = r[4*k+0];
r_ky = r[4*k+1];
if (dist(r_jx, r_jy, r_kx, r_ky) < range) {
cleared = 0;
break;
}
}
if (cleared == 0) {
break;
}
}
if (cleared == 1) {
break;
}
if (i % 1000 == 0) {
Rprintf("t = %.1f\n", t);
}
}
for (i = 0; i < num_pts; i++) {
xy(i, 0) = r[4*i+0];
xy(i, 1) = r[4*i+1];
}
// Find the shortest distance between two centroids,
// for sending back to R.
min_dist = 100.0;
for (j = 0; j < num_pts; j++) {
for (k = j+1; k < num_pts; k++) {
r_jx = r[4*j+0];
r_jy = r[4*j+1];
r_kx = r[4*k+0];
r_ky = r[4*k+1];
d = dist(r_jx, r_jy, r_kx, r_ky);
if (d < min_dist) {
min_dist = d;
}
}
}
xy(num_pts, 1) = t;
xy(num_pts, 0) = min_dist;
return xy;
}
Posted 2015-09-03,
updated 2015-11-27,
updated 2016-10-06.
Home > Pseph > Australian stats |
# Graduation of Christel Knoop
02 April 2017 | 15:00
location: Room F, Faculty of Civil Engineering and Geosciences
by Webmaster Hydraulic Engineering
"The impact of the Waterontspanner on the failure probability of various dike sections"| Professor of Graduation:Prof.dr.ir. M. Kok, supervisors: Prof.dr.ir C. Jommi (TU Delft), J. Zhou MSc (TU Delft), D. van den Heuvel MSc (Heijmans)
The Netherlands is a low-lying country which is protected against floods by a flood protection system consisting of dikes, dunes and other hydraulic structures. The quality of the flood defense system should be of a sufficiently high level to guarantee protection against a flood. If a flood defense is unsafe, measurements are required to improve the safety of the flood defense. The Dutch governmental agency Rijkswaterstaat'' stimulates companies and research institutes to come up with innovative measures to strengthen dikes. Therefore, the Dutch companies Heijmans, de Vries \& van de Wiel and Movares have developed an innovative measure named the Waterontspanner'' (in English: \emph{water relaxation well}). The Waterontspanner is comparable to a passive vertical drain which is designed to reduce the failure mechanism macro instability of the inner slope. The Waterontspanner has been implemented in project Schoonhoven-Langerak (SLA). The project was successful and it is desired to use the Waterontspanner more often. To do so, more information has to be obtained on the extent to which the Waterontspanner influences the failure probability of a dike due to macro instability of the inner slope. In this thesis, research is carried out to provide insight in the latter. First, an analytical model was set up to understand the behavior of pore pressures in the subsoil and to find out if it makes sense to use a Waterontspanner. Herafter, a safety assessment was set up to investigate how the failure probability of a dike due to macro instability of the inner slope is influenced by Waterontspanners.
© 2017 TU Delft |
# Scraping AFLM Match Chains and Kick-in Play-on Analysis
library(data.table)
library(devtools)
library(dplyr)
library(httr)
library(jsonlite)
library(lubridate)
library(plotly)
library(stringr)
library(tidyr)
If you’ve used the AFL app recently then you have likely noticed the arrival of a new and convoluted feature: the Augmented Reality (AR) Tracker, which - if you can satisfy its demands for a flat surface to project onto - will display some pretty nifty data showing the the field location and outcomes of kicks, handballs, and shots at goal.
It’s a bit like the 3D-camera that came attached to my first smartphone. It sounds sleek and sexy in a pitch, and piques the interest enough to get a couple of uses when it first arrives - but within a week it’s forgotten forever. Cool-factor aside, it’s just not that insightful to see the data like this.
Perhaps you, like me, have glanced at the AR Tracker and idly daydreamed about getting a hold of the raw data that powers it. That dream is now a reality. Here you’ll find a guide on how to acces it - and a little analysis on kicking in and playing on just for fun.
## The Data
To access the data, I have two options for you.
The first is a script of R functions for scraping the data. The script is stored on GitHub. If you’ve installed and loaded the devtools package in R, you can load the script with a line of code.
source_url("https://raw.githubusercontent.com/DataByJosh/AFL-Data/main/AFLM_Match_Chains/Scraper.R")
This will load nine functions into your R session. The only one intended for direct use is get_match_chains(season, round).
Season will default to your system year if left blank. There isn’t currenly any data available for past season, so 2021 is the only worthwhile value you can write in here, for now.
Round will default to all rounds if left blank. So, if you want to scrape a whole season at once, blankj is the way to go. If you do want to specify a round, note that finals rounds are still referred to by number - for example, the first week of finals is 24.
data <- get_match_chains(2021)
A pleasant surprise about this dataset is that it contains far more information than is actually visible currently on the AR Tracker. It records more than 70 different statistical events in a time series broken up into chains of possession, each possessing x and y coordinates describing where the event happened on the field. There are venue dimensions, too.
If you’re not keen on using R to get the data, or finding something broken in the scraper, the second option is a bit more universal and foolproof - all the data is available in round-by-round csvs on GitHub.
## Kicking in and playing on
Such richly detailed data opens up many possibilities for analysis. As a first baby step, I’m going to take a brief and simple look at the first topic that popped into my heads. Kick-ins - should you play-on or not?
In 2019 the AFL loosened up the rules around kick-ins, providing incentive to for players taking the kick to play on. Defenders have taken up that incentive with gusto, and in doing so sparked some debate over players padding their stats.
Some high-minded part of my me wants to believe that, professionals that they are, footy players are not making the decision to play-on or not from a kick-in solely to add to their disposal count.
However, they’re only human. Given an easy way to make ourselves look good at our jobs on paper, I suspect most of us would fail to resist temptation - I know I would.
But is this selfish stat-padding that hurts the team? Or are kick-in play-on chains more likely to end in a score, thereby entirely justifying the decision to stuff one’s stats?
To start answering that question, I’ll filter the dataset down to just the kick-ins. In the data, kicking in after a behind is referred to as a “Kickin”, providing a convenient distinction from simillarly labelled events like “Kick into F50” or “OOF Kick in”. I’ll also filter out any data from the finals matches that have occured so far this year.
kick_in_data <- data %>% filter(description %like% "Kickin")
kick_in_data <- kick_in_data %>% filter(roundNumber <= 23)
We are left with more than 4000 kick-ins from 2021, a very robust dataset. Let’s start with the basics - how often do players kick-in and play-on?
prop.table(table(kick_in_data$description)) ## ## Kickin long Kickin play on Kickin short ## 0.008351756 0.828543355 0.163104888 It’s clear that playing on has become the vastly preferred option, used 82.9% of the time. We may not have and pre-2019 data to compare this too, but there’s no doubt this represent a significant shift in player behaviour at kick-ins. In particular, the rule changes have just about killed the long kick-in from the goalsquare, which is now represents less than one percent of all kick-ins. We know how kick-in chains start, so let’s look at how they end. The finalState variable in the dataset records for each event how the chain it belongs to ended. Given how the very low number of long kick-ins, I’m going to roll them in with the short ones. kick_in_data$description <- kick_in_data$description %>% str_replace("Kickin long", "Does not play-on") kick_in_data$description <- kick_in_data$description %>% str_replace("Kickin play on", "Play-on") kick_in_data$description <- kick_in_data$description %>% str_replace("Kickin short", "Does not play-on") prop.table(table(kick_in_data$description,kick_in_data$finalState), 1) ## ## ballUpCall behind endQuarter goal outOfBounds ## Does not play-on 0.058739255 0.044412607 0.011461318 0.050143266 0.164756447 ## Play-on 0.071449748 0.025793063 0.010969463 0.046842573 0.164838423 ## ## rushed turnover ## Does not play-on 0.007163324 0.663323782 ## Play-on 0.008301216 0.671805514 Before anything else, we must acknowledge that kick-ins simply don’t generate a lot of scores. Whethery playing on or, the most kick-in chains end with the ball being turned over, out of bounds, or balled up. When it comes as kick-in chains that do hit the scoreboard, the news for fans of playing on is not great. 4.7% of kick-in chains end in goals if the kicker plays on, 5% if not. The gap between the two is hardly a gorge, but ‘more goals’ simply isn’t available as an argument in favour our playing on. Starting at the raw numbers above is to enough to make anyone’s eye’s glaz over - let’s add some visuals. We’re going to cut it down to just goals behinds and put these in a bar chart to get a better look at them. Move your cursor over the bars to see the percentage values. plot_data <- as.data.frame(prop.table(table(kick_in_data$description,kick_in_data\$finalState), 1))
plot_data <- plot_data %>% spread(Var2, Freq)
plot <- plot_ly(plot_data,
x = ~Var1,
y = ~goal,
type = "bar",
name = "Goal",
hovertemplate = "%{y:.1%}") %>%
name = "Behind",
hovertemplate = "%{y:.1%}") %>%
layout(barmode = "group",
title = "<b>Percentage of kick-in chains that ended in scoring shots - 2021 Home & Away</b>",
xaxis = list(title = "<b>Type of kick-in</b>", categoryorder = "array", categoryarray = c("Play-on","Does not play-on")),
yaxis = list(title = "",tickformat = ".1%"),
legend = list(title=list(text="<b>Type of score</b>"))) %>%
config(displayModeBar = FALSE)
plot
Kick-in play-on aficionados can hang their hats on one thing. These chains may be less likely to hit the scoreboard, but if they do, that score is more likely to be a goal.
This feels like a logical result. The faster a team moves the ball, the more likely they’ll get ahead of the opposition defence and create a favourable shot at goal - provided they make it there.
This ‘taking the game on’ that so is often spruiked by the commentary box undoubtedly adds some pizazz to the experience of watching football. And when it works, it works well.
But, boring though it may be, the numbers suggest - when it comes to kick-ins at least - short, slow and steady wins the race.
Of course, this analysis is just the tip of the iceberg when it comes not just to kick-ins, but the match chains dataset itself. The answers to dozens of fascinating footy questions are buried in there somewhere.
If you are so inclined, please avail yourself of the data. I can’t wait to see what you produce.
Feedback, corrections, coding tips, questions and suggestions are always welcome. |
1. ## Monotonically Decreasing Sequence
Suppose (an) is a monotonically decreasing sequence of positive numbers, and that $\Sigma\limits_{n=1}^\infty a_{n}$ converges.
Show that $\lim_{n\rightarrow\infty} na_{n} = 0$
2. ## Re: Monotonically Decreasing Sequence
This is a well known theorem in the series of positive terms.
If the series Σan of positive monotone decreasing terms is to converge then we must have not only lim(an)=0 but also lim(n(an))=0 . However the condition lim(n(an))=0 is only necessary , not a sufficient one for these type of series. If lim(nan) does not tend to zero then definitely the series diverges but lim(nan)=0 does not necessarily implies anything as to the possible convergence of the series. in fact the Abel series Σ(1/nlogn) diverges though lim(nan)=0
Get a good book on series to revise all these theorems.
MINOAS
3. ## Re: Monotonically Decreasing Sequence
Originally Posted by MINOANMAN
This is a well known theorem in the series of positive terms.
If the series Σan of positive monotone decreasing terms is to converge then we must have not only lim(an)=0 but also lim(n(an))=0 . However the condition lim(n(an))=0 is only necessary , not a sufficient one for these type of series. If lim(nan) does not tend to zero then definitely the series diverges but lim(nan)=0 does not necessarily implies anything as to the possible convergence of the series. in fact the Abel series Σ(1/nlogn) diverges though lim(nan)=0
Get a good book on series to revise all these theorems.
MINOAS
Can't I use the fact that since $\Sigma\limits_{n=1}^\infty a_{n}$ converges implies $\lim_{n\rightarrow\infty}a_{n} = 0$
So, $\lim_{n\rightarrow\infty}na_{n} = \lim_{n\rightarrow\infty}n\lim_{n\rightarrow\infty }a_{n} = \lim_{n\rightarrow\infty}n * 0 = 0$
4. ## Re: Monotonically Decreasing Sequence
No, that definitely doesn't work.
But you know (by definition) that $\sum_{n=1}^\infty{a_n} = \lim_{n\to\infty}\sum_{k=1}^n{a_k}$. So you just need to show that the finite sum is less than $na_n$.
- Hollywood |
By Topic
# IEEE Transactions on Cybernetics
### Early Access Articles
Early Access articles are made available in advance of the final electronic or print versions. Early Access articles are peer reviewed but may not be fully edited. They are fully citable from the moment they appear in IEEE Xplore.
## Filter Results
Displaying Results 1 - 25 of 339
• ### A Time Variant Log-Linear Learning Approach to the SET K-COVER Problem in Wireless Sensor Networks
Publication Year: 2017, Page(s):1 - 10
| | PDF (1179 KB)
Toward the global optimality of the SET K-COVER problem in wireless sensor networks, we view each sensor node as a rational player and propose a time variant log-linear learning algorithm (TVLLA) that relies on local information only. By defining the local utility as the normalized area covered by one node alone, we formulate the problem as a spatial potential game. The resulting optimal Nash equi... View full abstract»
• ### Synchronization of Reaction-Diffusion Neural Networks With Dirichlet Boundary Conditions and Infinite Delays
Publication Year: 2017, Page(s):1 - 13
| | PDF (1099 KB)
This paper is concerned with synchronization for a class of reaction-diffusion neural networks with Dirichlet boundary conditions and infinite discrete time-varying delays. By utilizing theories of partial differential equations, Green's formula, inequality techniques, and the concept of comparison, algebraic criteria are presented to guarantee master-slave synchronization of the underlying reacti... View full abstract»
• ### An Adaptive Multiobjective Particle Swarm Optimization Based on Multiple Adaptive Methods
Publication Year: 2017, Page(s):1 - 14
| | PDF (3506 KB)
Multiobjective particle swarm optimization (MOPSO) algorithms have attracted much attention for their promising performance in solving multiobjective optimization problems (MOPs). In this paper, an adaptive MOPSO (AMOPSO) algorithm, based on a hybrid framework of the solution distribution entropy and population spacing (SP) information, is developed to improve the search performance in terms of co... View full abstract»
• ### Sampled-Data Fuzzy Control for Nonlinear Coupled Parabolic PDE-ODE Systems
Publication Year: 2017, Page(s):1 - 13
| | PDF (783 KB)
In this paper, a sampled-data fuzzy control problem is addressed for a class of nonlinear coupled systems, which are described by a parabolic partial differential equation (PDE) and an ordinary differential equation (ODE). Initially, the nonlinear coupled system is accurately represented by the Takagi-Sugeno (T-S) fuzzy coupled parabolic PDE-ODE model. Then, based on the T-S fuzzy model, a novel t... View full abstract»
• ### A Bio-Inspired Approach to Traffic Network Equilibrium Assignment Problem
Publication Year: 2017, Page(s):1 - 12
| | PDF (2329 KB)
Finding an equilibrium state of the traffic assignment plays a significant role in the design of transportation networks. We adapt the path finding mathematical model of slime mold Physarum polycephalum to solve the traffic equilibrium assignment problem. We make three contributions in this paper. First, we propose a generalized Physarum model to solve the shortest path problem in directed and asy... View full abstract»
• ### Supervised and Unsupervised Aspect Category Detection for Sentiment Analysis With Co-Occurrence Data
Publication Year: 2017, Page(s):1 - 13
| | PDF (1429 KB)
Using online consumer reviews as electronic word of mouth to assist purchase-decision making has become increasingly popular. The Web provides an extensive source of consumer reviews, but one can hardly read all reviews to obtain a fair evaluation of a product or service. A text processing framework that can summarize reviews, would therefore be desirable. A subtask to be performed by such a frame... View full abstract»
• ### A Two-Phase Evolutionary Approach for Compressive Sensing Reconstruction
Publication Year: 2017, Page(s):1 - 13
| | PDF (1917 KB) | Media
Sparse signal reconstruction can be regarded as a problem of locating the nonzero entries of the signal. In presence of measurement noise, conventional methods such as l₁ norm relaxation methods and greedy algorithms, have shown their weakness in finding the nonzero entries accurately. In order to reduce the impact of noise and better locate the nonzero entries, in this paper, we propose a ... View full abstract»
• ### Incorporation of Efficient Second-Order Solvers Into Latent Factor Models for Accurate Prediction of Missing QoS Data
Publication Year: 2017, Page(s):1 - 13
| | PDF (1563 KB)
Generating highly accurate predictions for missing quality-of-service (QoS) data is an important issue. Latent factor (LF)-based QoS-predictors have proven to be effective in dealing with it. However, they are based on first-order solvers that cannot well address their target problem that is inherently bilinear and nonconvex, thereby leaving a significant opportunity for accuracy improvement. This... View full abstract»
• ### Correlation Filter Learning Toward Peak Strength for Visual Tracking
Publication Year: 2017, Page(s):1 - 14
| | PDF (2902 KB)
This paper presents a novel visual tracking approach to correlation filter learning toward peak strength of correlation response. Previous methods leverage all features of the target and the immediate background to learn a correlation filter. Some features, however, may be distractive to tracking, like those from occlusion and local deformation, resulting in unstable tracking performance. This pap... View full abstract»
• ### Learning Sparse Representation for Objective Image Retargeting Quality Assessment
Publication Year: 2017, Page(s):1 - 14
| | PDF (1959 KB)
The goal of image retargeting is to adapt source images to target displays with different sizes and aspect ratios. Different retargeting operators create different retargeted images, and a key problem is to evaluate the performance of each retargeting operator. Subjective evaluation is most reliable, but it is cumbersome and labor-consuming, and more importantly, it is hard to be embedded into onl... View full abstract»
• ### Finite-Time Synchronization of Coupled Hierarchical Hybrid Neural Networks With Time-Varying Delays
Publication Year: 2017, Page(s):1 - 10
| | PDF (667 KB)
This paper is concerned with the finite-time synchronization problem of coupled hierarchical hybrid delayed neural networks. This coupled hierarchical hybrid neural networks consist of a higher level switching and a lower level Markovian jumping. The time-varying delays are dependent on not only switching signal but also jumping mode. By using a less conservative weighted integral inequality and s... View full abstract»
• ### Adaptive Fuzzy Control for Nonstrict Feedback Systems With Unmodeled Dynamics and Fuzzy Dead Zone via Output Feedback
Publication Year: 2017, Page(s):1 - 13
| | PDF (4757 KB)
This paper investigates the problem of observer-based adaptive fuzzy control for a category of nonstrict feedback systems subject to both unmodeled dynamics and fuzzy dead zone. Through constructing a fuzzy state observer and introducing a center of gravity method, unmeasurable states are estimated and the fuzzy dead zone is defuzzified, respectively. By employing fuzzy logic systems to identify t... View full abstract»
• ### Neuronal State Estimation for Neural Networks With Two Additive Time-Varying Delay Components
Publication Year: 2017, Page(s):1 - 11
| | PDF (887 KB)
This paper is concerned with the state estimation for neural networks with two additive time-varying delay components. Three cases of these two time-varying delays are fully considered: 1) both delays are differentiable uniformly bounded with delay-derivative bounded by some constants; 2) one delay is continuous uniformly bounded while the other is differentiable uniformly bounded with delay-deriv... View full abstract»
• ### Aperiodic Optimal Linear Estimation for Networked Systems With Communication Uncertainties
Publication Year: 2017, Page(s):1 - 10
| | PDF (1494 KB)
The aperiodic optimal linear estimator design problem is investigated in this paper for networked systems with communication uncertainties, including delays and data losses, where the sampling and estimation are nonuniform and asynchronous. Based on the idea of measurement fusion, two approaches are proposed to design the aperiodic estimators, and it is shown that the estimator is equivalent to th... View full abstract»
• ### Networked Predictive Control for Nonlinear Systems With Arbitrary Region Quantizers
Publication Year: 2017, Page(s):1 - 12
| | PDF (1294 KB)
In this paper, networked predictive control is investigated for planar nonlinear systems with quantization by an extended state observer (ESO). The ESO is used not only to deal with nonlinear terms but also to generate predictive states for dealing with network-induced delays. Two arbitrary region quantizers are applied to take effective values of signals in forward channel and feedback channel, r... View full abstract»
• ### Multilayer Optimization of Heterogeneous Networks Using Grammatical Genetic Programming
Publication Year: 2017, Page(s):1 - 13
| | PDF (2718 KB)
Heterogeneous cellular networks are composed of macro cells (MCs) and small cells (SCs) in which all cells occupy the same bandwidth. Provision has been made under the third generation partnership project-long term evolution framework for enhanced intercell interference coordination (eICIC) between cell tiers. Expanding on previous works, this paper instruments grammatical genetic programming to e... View full abstract»
• ### Fast Variable Structure Stochastic Automaton for Discovering and Tracking Spatiotemporal Event Patterns
Publication Year: 2017, Page(s):1 - 14
| | PDF (1527 KB)
Discovering and tracking spatiotemporal event patterns have many applications. For example, in a smart-home project, a set of spatiotemporal pattern learning automata are used to monitor a user's repetitive activities, by which the home's automaticity can be promoted while some of his/her burdens can be reduced. Existing algorithms for spatiotemporal event pattern recognition in dynamic noisy envi... View full abstract»
• ### FaRoC: Fast and Robust Supervised Canonical Correlation Analysis for Multimodal Omics Data
Publication Year: 2017, Page(s):1 - 13
| | PDF (1085 KB)
One of the main problems associated with high dimensional multimodal real life data sets is how to extract relevant and significant features. In this regard, a fast and robust feature extraction algorithm, termed as FaRoC, is proposed, integrating judiciously the merits of canonical correlation analysis (CCA) and rough sets. The proposed method extracts new features sequentially from two multidime... View full abstract»
• ### Effects of Preview on Human Control Behavior in Tracking Tasks With Various Controlled Elements
Publication Year: 2017, Page(s):1 - 11
| | PDF (2159 KB)
This paper investigates how humans use a previewed target trajectory for control in tracking tasks with various controlled element dynamics. The human's hypothesized "near" and "far" control mechanisms are first analyzed offline in simulations with a quasi-linear model. Second, human control behavior is quantified by fitting the same model to measurements from a human-in-the-loop experiment, where... View full abstract»
• ### Analysis and Design of Synchronization for Heterogeneous Network
Publication Year: 2017, Page(s):1 - 10
| | PDF (1120 KB)
In this paper, we investigate the synchronization for heterogeneous network subject to event-triggering communication. The designed controller for each node includes reference generator (RG) and regulator. The predicted value of relative information between intermittent communication can significantly reduce the transmitted information. Based on the event triggering strategy and time-dependent thr... View full abstract»
• ### Cooperative Hierarchical PSO With Two Stage Variable Interaction Reconstruction for Large Scale Optimization
Publication Year: 2017, Page(s):1 - 15
| | PDF (2828 KB)
Large scale optimization problems arise in diverse fields. Decomposing the large scale problem into small scale subproblems regarding the variable interactions and optimizing them cooperatively are critical steps in an optimization algorithm. To explore the variable interactions and perform the problem decomposition tasks, we develop a two stage variable interaction reconstruction algorithm. A lea... View full abstract»
• ### Finite-Horizon H∞ Consensus Control of Time-Varying Multiagent Systems With Stochastic Communication Protocol
Publication Year: 2017, Page(s):1 - 11
| | PDF (751 KB)
This paper is concerned with the distributed H∞ consensus control problem for a discrete time-varying multiagent system with the stochastic communication protocol (SCP). A directed graph is used to characterize the communication topology of the multiagent network. The data transmission between each agent and the neighboring ones is implemented via a constrained communication channel where o... View full abstract»
• ### Adaptive Neural Tracking Control for Switched High-Order Stochastic Nonlinear Systems
Publication Year: 2017, Page(s):1 - 12
| | PDF (2778 KB)
This paper deals with adaptive neural tracking control design for a class of switched high-order stochastic nonlinear systems with unknown uncertainties and arbitrary deterministic switching. The considered issues are: 1) completely unknown uncertainties; 2) stochastic disturbances; and 3) high-order nonstrict-feedback system structure. The considered mathematical models can represent many practic... View full abstract»
• ### Distributed Consensus Optimization in Multiagent Networks With Time-Varying Directed Topologies and Quantized Communication
Publication Year: 2017, Page(s):1 - 14
| | PDF (616 KB)
This paper considers solving a class of optimization problems which are modeled as the sum of all agents' convex cost functions and each agent is only accessible to its individual function. Communication between agents in multiagent networks is assumed to be limited: each agent can only interact information with its neighbors by using time-varying communication channels with limited capacities. A ... View full abstract»
• ### Semantically Enhanced Online Configuration of Feedback Control Schemes
Publication Year: 2017, Page(s):1 - 14
| | PDF (1736 KB)
Recent progress toward the realization of the Internet of Things' has improved the ability of physical and soft/cyber entities to operate effectively within large-scale, heterogeneous systems. It is important that such capacity be accompanied by feedback control capabilities sufficient to ensure that the overall systems behave according to their specifications and meet their functional objective... View full abstract»
## Aims & Scope
The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics.
Full Aims & Scope
## Meet Our Editors
Editor-in-Chief
Prof. Jun Wang
Dept. of Computer Science
City University of Hong Kong
Kowloon Tong, Kowloon, Hong Kong
Tel: +852 34429701
Email: [email protected] |
## Positioning or Other Issue?
The website is www.cleantelligent.com/tour/
This image, as a background, has tooltips and links on it.
The issue is that, if you re-size the window, the image and title (the main div, I think) moves left and right. This means that the "Take the Tour" sub-head isn't always aligned perfectly underneath the "Tour" Title in the black bar above. They should be lined up down the left so that the image is centered underneath the header content.
Is it my positioning that's causing this? If so, how can I fix it? I've tried positioning it absolutely, but that collapses the page and the footer pops up to the middle.
Any ideas would be helpful. Thanks! |
## Refraction Index of a Liquid and a Coin
We put a coin at the bottom of a glass with a transparent liquid of height $$h$$.
If we watch the coin not directly from above, it looks – as a result of light refraction on the liquid-air boundary - as if it was floating in the depth of $$h^\prime$$ under the water surface. Draw the location of the image and determine the index of refraction $$n$$ of the liquid, supposing that we observe the coin at small angles.
• #### Hint 1 – image of the coin
Find the image of the coin, i.e. draw where we will see the coin.
Light rays which are reflected at a given spot of a coin are incident on a boundary, they are refracted here and some of them continue to the eye. Draw at least two such light rays. Elongating those rays will give you location of the image of the coin which our eye sees.
• #### Hint 2 – law of refraction
Write the law of refraction for one of the rays going from the coin to our eye.
• #### Hint 3 – small angle approximation
We assume that we watch the coin at small angles. Express the goniometric functions in the law of refraction using the known proportions $$h, h^\prime$$.
• #### Complete solution
We want to know the place where our eye will see the coin. In the following image, two rays of similar direction (red colour) are drawn which are refracted on the boundary into the eye. Our eye „elongates“ the incident rays so we see the centre point of the coin in the intersection of those elongated (blue) rays.
We use the law of refraction to calculate the refraction index of the liquid. We choose one of the rays for the calculation.
We mark the angle of incidence as $$\alpha$$, angle of refraction as $$\beta$$.
The refraction index of air is approximately $$1$$ so we can write the law of refraction as follows:
$\frac{\sin \alpha}{\sin\beta} = \frac{1}{n}.$
If the angle $$\beta$$ at which we watch the coin is small, i.e. $$\beta \ll 1$$, the angle $$\alpha$$ is also small since $$\alpha \lt \beta$$. Thus, for small angles it holds:
$\sin \alpha \approx \tan \alpha,$ $\sin \beta \approx \tan \beta$
so substituting into Snell’s law we get:
$\frac{\tan\alpha}{\tan\beta} \approx \frac{1}{n}.$
It follows from the marked right-angled triangles that:
$\tan \alpha = \frac{s}{h},$ $\tan \beta = \frac{s}{h^\prime}.$
Substituting we get:
$\frac{\frac{s}{h}}{\frac{s}{h^\prime}} = \frac{h^\prime}{h} \approx \frac{1}{n}.$
From which it is easily seen (equality for small angles) that:
$n = \frac{h}{h^\prime}.$
Supposing the refraction index of air to be $$1$$, the refraction index of the liquid is:
$n = \frac{h}{h^\prime}.$
It holds for water that $$h:h^\prime \doteq 4:3$$ which implies that the index of refraction of water is approximately $$n\doteq 1{,}3$$. Prove experimentally. |
# A faster way to obtain orbits of a partition of the verrtex set
I am given a graph $G$ a set $S \subseteq V(G)$ and a vertex $v.$ I want to compute the representatives for the orbits of the stabilizer of $v$ of $\rm{Aut}(G)$ whose equivalence classes contain elements of $S.$ Currently what I am doing is the following:
def compute_valid_orbit_reps(G,S,v):
ret = []
O = G.automorphism_group(return_group=False, orbits=True,
partition=[ [v], [el for el in G.vertices() if el != v]])
for el in O:
if S.intersection(set(el)):
nb = el[0] |
# Group cohomology of finite groups
I wonder if the group cohomology of a finite group $G$ with coefficients in $\mathbb{Z}$ is finite. This statement may be too strong. I am interested in, for instance, dihedral group. $$G=D_{2n}=\langle a,b | \ a^n=b^2=abab=e \rangle$$ Assume that $a$ acts trivially and $b$ acts as $-id$ on $\mathbb{Z}$.
First cohomology is $\mathbb{Z}^G=0$. The second cohomology already seems quite involved to me.
I read several post about group cohomology on StackExchange and MathOverflow, but I still have trouble computing explicit example and getting intuition behind the concept.
-
do you mean each cohomology group $H^p(G;\mathbb{Z})$ is a finite group or do you mean the whole cohomology ring $H^*(G;\mathbb{Z})$? The latter statement is of course false, seen e.g. in the finite cyclic groups. – mland Aug 10 '12 at 14:32
Sorry for the confusion. I menat that each $H^{p}(G,M)$ is finite. – Michel Aug 10 '12 at 17:20
There are projective resolutions for the dihedral group (due to Wall) that can be used to compute the cohomology for every coefficient module. In particular it shouldn't be to hard to figure out $H^2(D_{2n};-)$. – Ralph Aug 10 '12 at 23:05
I will check it up. Thanks, Ralph. – Michel Aug 11 '12 at 0:30
The answer is yes, since it's torsion (killed by the order of $G$) and finitely generated (since you can pick a resolution by finitely generated abelian groups).
(I mean, more precisely, that the underlying abelian groups of the free $ZG$-modules appearing in the standard resolution of $Z$ are finitely generated.) – user29743 Aug 10 '12 at 13:16
Each term of the standard resolution is a direct sum of finite $\mathbb{Z}[G]$s but I don't quite see why it's torsion. – Michel Aug 10 '12 at 17:34
in general if $G$ is a finite group, the order of $G$ kills any cohomology group $H^i(G, M)$. To see this, use the fact that there are corestriction and restriction maps for any normal subgroup $H$ of $G$ such that $H^i(G, M) \to H^i(H, M) \to H^i (G, M)$ is multiplication by $[G:H]$. Now apply this in the special case where $H$ is trivial to get that multiplication by the order of $G$ is the zero map. – user29743 Aug 10 '12 at 18:49
An alternative argument to see that $H^i(G;M)$ is annulated by $|G|$ for $i > 0$ is to consider $\hat{H}^\ast(G;M)$ as (unitary) module over $\hat{H}^\ast(G;\mathbb{Z})$. Then it's clear because $1 \in \hat{H}^0(G;\mathbb{Z})=\mathbb{Z}/|G|$. – Ralph Aug 10 '12 at 23:02 |
# Every positive integer except 1 is a multiple of at least one prime.
#### s3a
1. The problem statement, all variables and given/known data
The problem (and its solution) are attached in TheProblemAndSolution.jpg. Specifically, I am referring to problem (c).
2. Relevant equations
Set theory.
Union.
Integers.
Prime numbers.
3. The attempt at a solution
I see how we have all multiples of all prime numbers in the union of the sets, but I can't see how “every positive integer except 1 is a multiple of at least one prime number”. If I could intuitively grasp the sentence I just quoted, then I can see how to get to the final result.
Any help in getting me to intuitively understand the part I quoted would be greatly appreciated!
#### Attachments
• 17.6 KB Views: 310
Related Precalculus Mathematics Homework News on Phys.org
#### mfb
Mentor
Every positive integer except 1 is a multiple of at least one prime.
Isn't that a trivial statement given the definition of prime numbers?
Hint: look at the factors of the number.
#### Dick
Homework Helper
Isn't that a trivial statement given the definition of prime numbers?
Hint: look at the factors of the number.
It's not TOTALLY trivial. If a number $n$ isn't prime then it can be factored into $n=ab$ where neither $a$ nor $b$ is one. If neither one is prime then they are both composite and $a<n$. Then factor $a$. You need to argue that this can't go on forever. It's sort of intuitively obvious but to write a formal proof you need to say that there is not an infinite sequence of decreasing positive integers. There is a little meat here.
#### s3a
Thanks for the replies.
I think I now understand it intuitively. Basically, prime numbers are numbers that can't be factored further (except for factors of 1 and the number itself), and all composite numbers can be divided by a prime number, since a prime number is at least one of the multiplicands of the factored form of the composite numbers, right?
Assuming I've understood the intuitive argument, I would like to move to the formal proof, so, Dick or mfb (or anyone else), could you please show me how you would (formally) prove that “every positive integer except 1 is a multiple of at least one prime number”?
#### mfb
Mentor
It's not TOTALLY trivial. If a number $n$ isn't prime then it can be factored into $n=ab$ where neither $a$ nor $b$ is one. If neither one is prime then they are both composite and $a<n$. Then factor $a$. You need to argue that this can't go on forever. It's sort of intuitively obvious but to write a formal proof you need to say that there is not an infinite sequence of decreasing positive integers. There is a little meat here.
I am aware of that, and still consider this as trivial. You can use induction as well.
s3a said:
I think I now understand it intuitively. Basically, prime numbers are numbers that can't be factored further (except for factors of 1 and the number itself), and all composite numbers can be divided by a prime number, since a prime number is at least one of the multiplicands of the factored form of the composite numbers, right?
Right.
Assuming I've understood the intuitive argument, I would like to move to the formal proof, so, Dick or mfb (or anyone else), could you please show me how you would (formally) prove that “every positive integer except 1 is a multiple of at least one prime number”?
Dick gave you one approach, induction is another possible way to prove it.
#### Dick
Homework Helper
Thanks for the replies.
I think I now understand it intuitively. Basically, prime numbers are numbers that can't be factored further (except for factors of 1 and the number itself), and all composite numbers can be divided by a prime number, since a prime number is at least one of the multiplicands of the factored form of the composite numbers, right?
Assuming I've understood the intuitive argument, I would like to move to the formal proof, so, Dick or mfb (or anyone else), could you please show me how you would (formally) prove that “every positive integer except 1 is a multiple of at least one prime number”?
I agree with mfb. If you think you understand it intuitively, you should start the proof, not us.
Last edited:
#### s3a
Well, is it formal to say the following?:
Let any integer greater than 1, referred to by N, be the product of M_1, M_2, . . ., M_i, then N = M_1 * M_2 * . . . * M_i, where each multiplicand of N can be written the product of m_1 * m_2 * . . . * m_j, and so forth until the recursion ends and N is described by the product of numbers that can only be divided by themselves and the number 1, which includes all prime numbers and the number 1, but since multiplying any number by the number 1 doesn't affect the result, we can say that all positive integers greater than 1 can be written as the product of prime numbers.
Q.E.D.
#### Dick
Homework Helper
Well, is it formal to say the following?:
It's not very good. Use mfb's suggestion and induction. Suppose all integers less than or equal to N have a prime factor. Use that to prove all integers less than or equal to N+1 have a prime factor. Think about it and think about induction.
"Every positive integer except 1 is a multiple of at least one prime."
### Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving |
Circulant matrix
For the symmetric graphs, see Circulant graph.
In linear algebra, a circulant matrix is a special kind of Toeplitz matrix where each row vector is rotated one element to the right relative to the preceding row vector. In numerical analysis, circulant matrices are important because they are diagonalized by a discrete Fourier transform, and hence linear equations that contain them may be quickly solved using a fast Fourier transform.[1] They can be interpreted analytically as the integral kernel of a convolution operator on the cyclic group ${\displaystyle \mathbb {Z} /n\mathbb {Z} }$ and hence frequently appear in formal descriptions of spatially invariant linear operations. In cryptography, a circulant matrix is used in the MixColumns step of the Advanced Encryption Standard.
Definition
An ${\displaystyle n\times n}$ circulant matrix ${\displaystyle \ C}$ takes the form
${\displaystyle C={\begin{bmatrix}c_{0}&c_{n-1}&\dots &c_{2}&c_{1}\\c_{1}&c_{0}&c_{n-1}&&c_{2}\\\vdots &c_{1}&c_{0}&\ddots &\vdots \\c_{n-2}&&\ddots &\ddots &c_{n-1}\\c_{n-1}&c_{n-2}&\dots &c_{1}&c_{0}\\\end{bmatrix}}.}$
A circulant matrix is fully specified by one vector, ${\displaystyle \ c}$, which appears as the first column of ${\displaystyle \ C}$. The remaining columns of ${\displaystyle \ C}$ are each cyclic permutations of the vector ${\displaystyle \ c}$ with offset equal to the column index. The last row of ${\displaystyle \ C}$ is the vector ${\displaystyle \ c}$ in reverse order, and the remaining rows are each cyclic permutations of the last row. Note that different sources define the circulant matrix in different ways, for example with the coefficients corresponding to the first row rather than the first column of the matrix, or with a different direction of shift.
The polynomial ${\displaystyle f(x)=c_{0}+c_{1}x+\dots +c_{n-1}x^{n-1}}$ is called the associated polynomial of matrix ${\displaystyle C}$.
Properties
Eigenvectors and eigenvalues
The normalized eigenvectors of a circulant matrix are given by
${\displaystyle v_{j}={\frac {1}{\sqrt {n}}}(1,~\omega _{j},~\omega _{j}^{2},~\ldots ,~\omega _{j}^{n-1})^{T},\quad j=0,1,\ldots ,n-1,}$
where ${\displaystyle \omega _{j}=\exp \left({\tfrac {2\pi ij}{n}}\right)}$ are the n-th roots of unity and ${\displaystyle i}$ is the imaginary unit.
The corresponding eigenvalues are then given by
${\displaystyle \lambda _{j}=c_{0}+c_{n-1}\omega _{j}+c_{n-2}\omega _{j}^{2}+\ldots +c_{1}\omega _{j}^{n-1},\qquad j=0,1,\ldots ,n-1.}$
Determinant
As a consequence of the explicit formula for the eigenvalues above, the determinant of circulant matrix can be computed as:
${\displaystyle \mathrm {det} (C)=\prod _{j=0}^{n-1}(c_{0}+c_{n-1}\omega _{j}+c_{n-2}\omega _{j}^{2}+\dots +c_{1}\omega _{j}^{n-1}).}$
Since taking transpose does not change the eigenvalues of a matrix, an equivalent formulation is
${\displaystyle \mathrm {det} (C)=\prod _{j=0}^{n-1}(c_{0}+c_{1}\omega _{j}+c_{2}\omega _{j}^{2}+\dots +c_{n-1}\omega _{j}^{n-1})=\prod _{j=0}^{n-1}f(\omega _{j}).}$
Rank
The rank of circulant matrix ${\displaystyle C}$ is equal to ${\displaystyle n-d}$, where ${\displaystyle d}$ is the degree of ${\displaystyle \gcd(f(x),x^{n}-1)}$.[2]
Other properties
• We have
${\displaystyle C=c_{0}I+c_{1}P+c_{2}P^{2}+\ldots +c_{n-1}P^{n-1}=f(P).}$
where P is the 'cyclic permutation' matrix, a specific permutation matrix given by
${\displaystyle P={\begin{bmatrix}0&0&\ldots &0&1\\1&0&\ldots &0&0\\0&\ddots &\ddots &\vdots &\vdots \\\vdots &\ddots &\ddots &0&0\\0&\ldots &0&1&0\end{bmatrix}}.}$
• The set of ${\displaystyle n\times n}$ circulant matrices forms an n-dimensional vector space; this can be interpreted as the space of functions on the cyclic group of order n, ${\displaystyle \mathbf {Z} /n\mathbf {Z} ,}$ or equivalently the group ring.
• Circulant matrices form a commutative algebra, since for any two given circulant matrices ${\displaystyle \ A}$ and ${\displaystyle \ B}$, the sum ${\displaystyle \ A+B}$ is circulant, the product ${\displaystyle \ AB}$ is circulant, and ${\displaystyle \ AB=BA}$.
• The matrix U that is composed of the eigenvectors of a circulant matrix is related to the Discrete Fourier transform and its Inverse transform:
${\displaystyle U_{n}^{*}={\frac {1}{\sqrt {n}}}F_{n},\quad {\text{and}}\quad U_{n}={\sqrt {n}}F_{n}^{-1},\quad {\text{where}}\quad F_{n}=(f_{jk})\quad {\text{with}}\quad f_{jk}=\mathrm {e} ^{-2jk\pi \mathrm {i} /n},\quad {\text{for}}\quad 0\leq j,k
Thus, the matrix ${\displaystyle U_{n}}$ diagonalizes C. In fact, we have
${\displaystyle C=U_{n}\operatorname {diag} (F_{n}c)U_{n}^{*}=F_{n}^{-1}\operatorname {diag} (F_{n}c)F_{n},}$
where ${\displaystyle c\!\,}$ is the first column of ${\displaystyle C\,\!}$. Thus, the eigenvalues of ${\displaystyle C}$ are given by the product ${\displaystyle \ F_{n}c}$. This product can be readily calculated by a Fast Fourier transform.[3]
Analytic interpretation
Circulant matrices can be interpreted geometrically, which explains the connection with the discrete Fourier transform.
Consider vectors in ${\displaystyle \mathbf {R} ^{n}}$ as functions on the integers with period n, (i.e., as periodic bi-infinite sequences: ${\displaystyle \dots ,a_{0},a_{1},\dots ,a_{n-1},a_{0},a_{1},\dots }$) or equivalently, as functions on the cyclic group of order n, (${\displaystyle C_{n}}$ or ${\displaystyle \mathbf {Z} /n\mathbf {Z} }$) geometrically, on (the vertices of) the regular n-gon: this is a discrete analog to periodic functions on the real line or circle.
Then, from the perspective of operator theory, a circulant matrix is the kernel of a discrete integral transform, namely the convolution operator for the function ${\displaystyle (c_{0},c_{1},\dots ,c_{n-1});}$ this is a discrete circular convolution. The formula for the convolution of the functions ${\displaystyle (b_{i}):=(c_{i})*(a_{i})}$ is
${\displaystyle b_{k}=\sum _{i=0}^{n-1}a_{i}c_{k-i}}$ (recall that the sequences are periodic)
which is the product of the vector of ${\displaystyle a_{i}}$ by the circulant matrix.
The discrete Fourier transform then converts convolution into multiplication, which in the matrix setting corresponds to diagonalization.
The ${\displaystyle C^{*}}$-algebra of all circulant matrices with complex entries is isomorphic to the group ${\displaystyle C^{*}}$-algebra of ${\displaystyle \mathbf {Z} /n\mathbf {Z} }$.
Applications
In linear equations
Given a matrix equation
${\displaystyle \ \mathbf {C} \mathbf {x} =\mathbf {b} ,}$
where ${\displaystyle \ C}$ is a circulant square matrix of size ${\displaystyle \ n}$ we can write the equation as the circular convolution
${\displaystyle \ \mathbf {c} \star \mathbf {x} =\mathbf {b} ,}$
where ${\displaystyle \ c}$ is the first column of ${\displaystyle \ C}$, and the vectors ${\displaystyle \ c}$, ${\displaystyle \ x}$ and ${\displaystyle \ b}$ are cyclically extended in each direction. Using the results of the circular convolution theorem, we can use the discrete Fourier transform to transform the cyclic convolution into component-wise multiplication
${\displaystyle \ {\mathcal {F}}_{n}(\mathbf {c} \star \mathbf {x} )={\mathcal {F}}_{n}(\mathbf {c} ){\mathcal {F}}_{n}(\mathbf {x} )={\mathcal {F}}_{n}(\mathbf {b} )}$
so that
${\displaystyle \ \mathbf {x} ={\mathcal {F}}_{n}^{-1}\left[\left({\frac {({\mathcal {F}}_{n}(\mathbf {b} ))_{\nu }}{({\mathcal {F}}_{n}(\mathbf {c} ))_{\nu }}}\right)_{\nu \in \mathbf {Z} }\right]^{T}.}$
This algorithm is much faster than the standard Gaussian elimination, especially if a fast Fourier transform is used.
In graph theory
In graph theory, a graph or digraph whose adjacency matrix is circulant is called a circulant graph (or digraph). Equivalently, a graph is circulant if its automorphism group contains a full-length cycle. The Möbius ladders are examples of circulant graphs, as are the Paley graphs for fields of prime order.
References
1. ^ Davis, Philip J., Circulant Matrices, Wiley, New York, 1970 ISBN 0471057711
2. ^ A. W. Ingleton (1956). "The Rank of Circulant Matrices". J. London Math. Soc. s1-31 (4): 445–460. doi:10.1112/jlms/s1-31.4.445.
3. ^ Golub, Gene H.; Van Loan, Charles F. (1996), "§4.7.7 Circulant Systems", Matrix Computations (3rd ed.), Johns Hopkins, ISBN 978-0-8018-5414-9 |
# Using trig substitution to evaluate $\int \frac{dt}{( t^2 + 9)^2}$
$$\int \frac{\mathrm{d}t}{( t^2 + 9)^2} = \frac {1}{81} \int \frac{\mathrm{d}t}{\left( \frac{t^2}{9} + 1\right)^2}$$
$t = 3\tan\theta\;\implies \; dt = 3 \sec^2 \theta \, \mathrm{d}\theta$
$$\frac {1}{81} \int \frac{3\sec^2\theta \mathrm{ d}\theta}{ \sec^4\theta} = \frac {1}{27} \int \frac{ \mathrm{ d}\theta}{ \sec^2\theta} = \dfrac 1{27}\int \cos^2 \theta\mathrm{ d}\theta$$
$$=\frac 1{27}\left( \frac{1}{2} \theta + 2(\cos\theta \sin\theta)\right) + C$$
$\arctan \frac{t}{3} = \theta \;\implies$
$$\frac{1}{27}\left(\frac{1}{2} \arctan \frac{t}{3} + 2 \left(\frac{\sqrt{9 – x^2}}{3} \frac{t}{3}\right)\right) + C$$
This is a mess, and it is also the wrong answer.
I have done it four times, where am I going wrong?
#### Solutions Collecting From Web of "Using trig substitution to evaluate $\int \frac{dt}{( t^2 + 9)^2}$"
The integral $$\int \cos^2 \theta \, \mathrm d\theta=\frac{\theta}{2}+\frac{1}{2} \sin \theta \cos \theta+C.$$
Try returning to your integral: $$\int \cos^2\theta = \frac{\theta}{2} + \frac{\sin\theta\cos\theta}{2} + C$$
We get a factor of $\frac 12,$ and not $2$, multiplying the second term in the sum.
Note, more importantly, that your substitutions for $\cos\theta\sin\theta$ in terms of $\theta = \arctan(t/3)$ are also incorrect.
We have that $\theta = \arctan(t/3) \implies \tan \theta = \dfrac t3.\;$ Corresponding to this is $\;\cos\theta = \dfrac{3}{\sqrt{t^2 + 9}}$ and $\;\sin\theta = \dfrac t{\sqrt{t^2 + 9}}.$
That gives us $$\frac{1}{ 27}\left( \frac 12\cdot \arctan \frac{t}{3} + \frac 12 \underbrace{\frac{t}{\sqrt{t^2 + 9}}}_{\sin \theta}\cdot \underbrace{\frac{3}{\sqrt{t^2 + 9}}}_{\cos\theta}\right) + C$$ $$= \frac{1}{54}\left( \arctan \frac{t}{3} + \frac{3t}{t^2+9}\right) + C$$ |
416 views
The following pie-chart shows the percentage distribution of the expenditure incurred in publishing a magazine. Study the pie-chart and answer the question based on it.
$\textbf{Various Expenditures (in percentage) Incurred in publishing a Magazine}$
If for a certain quantity of magazine, the publisher has to pay ₹$30,600$ as printing cost, then what will be amount of royalty to be paid for these magazines ?
1. ₹$19,450$
2. ₹$21,200$
3. ₹$22,950$
4. ₹$26,150$
Given that, printing cost $= ₹30,600$
Now, $20\% \implies ₹30,600$
Then, the amount of royalty to be paid $15\% \implies x$
Therefore $x = ₹22,950.$
So, the correct answer is $(C).$
12.1k points |
Subsets and Splits