# Details about a Recurrence Relation problem.
I am trying to understand Recurrence Relations through the Towers of Hanoi example, and I am having trouble understanding the last step:
If $H_n$ is the number of moves it takes for n rings to be moved from the first peg to the second peg, then:
$H_n = 2H_{n−1} + 1$
$H_n = 2(2H_{n−2} + 1) + 1 = 2^2H_{n−2} + 2 + 1$
$H_n = 2^2(2H_{n−3} + 1) + 2 + 1 = 2^3H_{n−3} + 2^2 + 2 + 1$
$\vdots$
$H_n = 2^{n−1}H_1 + 2^{n−2} + 2^{n−3} +· · ·+2 + 1$
$H_n = 2^{n−1} + 2^{n−2} +· · ·+2 + 1$
$H_n = 2^n − 1.$
I am having trouble understanding the last two steps. How does one go from this:
$H_n = 2^{n−1} + 2^{n−2} +· · ·+2 + 1$
To this:
$H_n = 2^n − 1$
?
-
The last few lines contain errors. The line
$$H_n=2^{n-1}H_1+2^{n-2}+2^{n-3}+\ldots+2+1$$
is correct, but the next line is not: since $H_1=1$, it should be
$$H_n=2^{n-1}+2^{n-2}+2^{n-3}+\ldots+2+1\;.$$
Apparently the exponents got dropped down onto the main line of text. This is now the sum of a geometric series:
$$H_n=\sum_{k=0}^{n-1}2^k=\frac{2^n-1}{2-1}=2^n-1\;,$$
and again it appears that an exponent was accidentally dropped down to the main line.
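If you prefer not to quote the general formula, the same thing falls out of a shift-and-subtract argument: write $S=2^{n-1}+2^{n-2}+\ldots+2+1$, so that
$$2S=2^n+2^{n-1}+\ldots+2^2+2\;,$$
and subtracting the first sum from the second leaves $2S-S=2^n-1$, i.e., $S=2^n-1$.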
-
You just opened my eyes. I didn't realize that $H_n = 2^{n−1} + 2^{n−2} +· · ·+2 + 1$ was a Geometric Series. Looks like I need to go back a few chapters. Thank you! – Chase May 17 '13 at 18:03
@Chase: You’re welcome! – Brian M. Scott May 17 '13 at 18:04
|
# Step response in MATLAB not oscillating around 1.
Discussion in 'Programmer's Corner' started by Suyash Shandilya, Oct 4, 2016.
1. ### Suyash Shandilya Thread Starter New Member
Hello everyone!
I am very new here. I am very new to MATLAB. And new to Control System engineering as well.
[a,b]=ord2(10,0.1);
step(a,b);
Above is my code, and I am attaching the output for it.
In all my control systems engineering lectures the second-order step response was somehow normalised and always oscillated around one.
I tried different frequency values, but that led to the amplitude diminishing drastically.
I want to know why my output settles around 0.01 and how to normalise it, other than simply multiplying the a vector by 100.
2. ### Papabravo Expert
The 0.01 represents the steady state value as t →∞, and all the first and second order terms have died out. What do you think the ord2 function does? So what is the transfer function of a system with a natural frequency of 10 radians per second and a damping factor of 0.1? It is:
$\frac{1}{s^2+2s+100}$
Question: What is the magnitude of the transfer function as t→∞, which is the same as s→0?
To normalize the amplitude at 1 you need to have the natural frequency be 1, so:
[a,b]=ord2(1, 0.1);
Also the normalized transfer function for any natural frequency is just
$\frac{\omega_n^2}{s^2+2\zeta\omega_n s+\omega_n^2}$, where $\zeta$ is the damping ratio and $\omega_n$ is the natural frequency.
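If it helps to see the numbers, here is a quick cross-check (a sketch in Python/SciPy rather than MATLAB; the variable names are just illustrative). It builds both the plain $\frac{1}{s^2+2\zeta\omega_n s+\omega_n^2}$ system and the $\omega_n^2$-normalized one and confirms that their step responses settle at 0.01 and 1 respectively:

```python
import numpy as np
from scipy import signal

wn, zeta = 10.0, 0.1                      # natural frequency (rad/s) and damping ratio
den = [1, 2 * zeta * wn, wn**2]           # s^2 + 2*zeta*wn*s + wn^2 = s^2 + 2s + 100

plain = signal.TransferFunction([1], den)           # DC gain = 1/wn^2 = 0.01
normalized = signal.TransferFunction([wn**2], den)  # DC gain = 1

t = np.linspace(0, 10, 2000)
_, y_plain = signal.step(plain, T=t)
_, y_norm = signal.step(normalized, T=t)
print(y_plain[-1], y_norm[-1])            # ~0.01 and ~1.0 once the transient dies out
```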
Last edited: Oct 4, 2016
3. ### Suyash Shandilya Thread Starter New Member
Damn it! Yes!!!
You are absolutely right. What was I thinking...
4. ### Papabravo Expert
I think the TS just had an AHHAAAH moment!
|
# Probability Question - Nonstandard Normal Distributions
## Homework Statement
The weight of eggs produced by a certain type of hen varies according to a distribution that is approximately normal with mean 6.5 grams and standard deviation 2 grams.
What is the probability that the average of a random sample of the weights of 25 eggs will be less than 6 grams
## Homework Equations
$$P(\overline X<6)=P\left(Z<\frac{6-6.5}{\sigma_{\overline X}}\right)$$ where $$\sigma_{\overline X}=\sigma/\sqrt n$$
## The Attempt at a Solution
The part I can't figure out is how to arrive at sigma. This is a problem from a practice exam, so I already know that sigma is 0.40. If I'm understanding correctly, that would make the variance of the sample mean $$V(\overline X) = 4/25$$. I just can't figure out how to arrive at this conclusion from the data that is given. I'm pretty stumped.
Last edited:
Homework Helper
your problem asks about the probability the MEAN of a sample will be a certain size. what do you know about distributions of sample means?
your problem asks about the probability the MEAN of a sample will be a certain size. what do you know about distributions of sample means?
Not much.
I think I figured out why sigma is what it is though. Since the SD is 2 grams, it follows that V(X) = 4. Since I'm trying to find out what the average of X is, I divide 4 by 25. that is how I get the 4/25. From there sigma is easy. I think that is kind of close anyway.
Last edited:
Homework Helper
You've essentially got it. If you take a sample of size $$n$$ from a normally distributed population, then $$\overline X$$ has a normal distribution. For the
distribution of $$\overline X$$,
$$\mu = \text{ original population mean}$$
and
$$\sigma = \frac{\text{Original standard deviation}}{\sqrt n}$$
As long as the original population itself has a normal distribution, this is true
for any sample size.
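To see the numbers work out, here is a small check of the computation (a sketch in Python with SciPy, purely for illustration): the sample mean of $$n = 25$$ eggs has standard deviation $$\sigma/\sqrt n = 2/\sqrt{25} = 0.4$$, so $$P(\overline X < 6) = \Phi\left(\frac{6-6.5}{0.4}\right)$$.

```python
from math import sqrt
from scipy.stats import norm

mu, sigma, n = 6.5, 2.0, 25
sigma_xbar = sigma / sqrt(n)    # standard deviation of the sample mean: 2/5 = 0.4

z = (6 - mu) / sigma_xbar       # z = -1.25
print(norm.cdf(z))              # P(sample mean < 6) ≈ 0.1056
```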
|
LUCAS WILLEMS
A 23 year-old student passionate about maths and programming
# Calculation of some cubic roots
I was watching maths videos on YouTube when I came across this video of Preety Uzlain explaining how to find the cube root of the cube of a 2-digit number.
So, if we want to find the value of $$b = \sqrt[3]{a}$$ where $$a$$ is the cube of a 2-digit number, we have to use the following technique:
• Find the number $$u \in [[0, 9]]$$ whose cube has the same units digit as $$a$$
• Find the number $$d \in [[0, 9]]$$ whose cube is the largest one that does not exceed $$\left \lfloor \frac{a}{1000} \right \rfloor$$
When $$u$$ and $$d$$ are found, we conclude that $$b = 10d + u$$.
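As a quick sanity check of the technique, here is a small Python sketch (the function name and structure are mine, not from the video):

```python
def cube_root_of_two_digit_cube(a):
    """Recover b with b**3 == a, assuming a is the cube of a 2-digit number."""
    # Units digit u: the unique digit whose cube ends in the same digit as a.
    u = next(x for x in range(10) if x**3 % 10 == a % 10)
    # Tens digit d: the largest digit whose cube does not exceed floor(a / 1000).
    d = max(x for x in range(10) if x**3 <= a // 1000)
    return 10 * d + u

print(cube_root_of_two_digit_cube(39304))  # 34, since 34**3 = 39304
```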
Proof
As we want to find $$\sqrt[3]{a}$$ where $$a$$ is the cube of a 2-digit number, we can write:
$$(d, u) \in [[0, 9]]^2 \qquad a = (10d + u)^3$$
Proof of the 1st point of the technique
Let's expand the previous expression:
\begin{align}a &= (10d)^3 + 3(10d)^2 u + 3(10d)u^2 + u^3 = 1000d^3 + 300d^2 u + 30du^2 + u^3 \\ &= 10(100d^3 + 30d^2 u + 3du^2) + u^3\end{align}
It is now easy to see that the units digit of $$10(100d^3 + 30d^2 u + 3du^2)$$ is $$0$$, which means that $$a$$ and $$u^3$$ have the same units digit.
So, as the units digits of $$0^3$$, $$1^3$$, $$2^3$$, ..., $$9^3$$ are all different, there is exactly one $$u^3$$ with the same units digit as $$a$$. This proves the 1st point of the technique.
Proof of the 2nd point of the technique
As $$0 \leqslant u < 10$$, we get the following inequality :
$$10d \leqslant 10d + u < 10d + 10 = 10(d+1)$$
And, if we divide by 10 and then cube each side of the inequality, we get:
$$d^3 \leqslant \frac{(10d + u)^3}{1000} < (d+1)^3$$
As $$a = (10d + u)^3$$, the final inequality is the following :
$$d^3 \leqslant \frac{a}{1000} < (d+1)^3$$
It is now very easy to see that the 2nd point of the technique is right.
Therefore, the technique used by Preety Uzlain works!
|
# Collision of a pion and a proton
Reference: Griffiths, David J. (2007), Introduction to Electrodynamics, 3rd Edition; Pearson Education – Chapter 12, Problem 12.58.
As another example of using conservation of energy and momentum to work out the kinematics of particle collisions, suppose we fire a pion at a proton at rest. One possible outcome of such a collision is the conversion of the pion and proton into kappa and sigma particles, but this can only occur if the momentum of the pion is high enough, since the rest energies of the kappa plus sigma are greater than those of the pion plus proton. We can find the minimum pion momentum (called the threshold momentum), as measured in the lab, at which this reaction can occur. It’s easiest to convert to the centre of momentum frame to do the calculations and then convert back at the end.
In the centre of momentum frame, the pion and proton head towards each other with equal and opposite momenta, so using the usual relativistic notation, and expressing rest energy in MeV (so we can ignore the ${c^{2}}$ factor):
$\displaystyle \gamma_{\pi}\beta_{\pi}m_{\pi}=\gamma_{p}\beta_{p}m_{p} \ \ \ \ \ (1)$
At the threshold momentum, the pion and proton collide and produce a ${K}$ and ${\Sigma}$ at rest, so from conservation of energy
$\displaystyle \gamma_{\pi}m_{\pi}+\gamma_{p}m_{p}=m_{K}+m_{\Sigma} \ \ \ \ \ (2)$
Using Griffiths’s approximate values for the rest energies, we have (in MeV)
$\displaystyle m_{\pi}=150 \ \ \ \ \ (3)$

$\displaystyle m_{p}=900 \ \ \ \ \ (4)$

$\displaystyle m_{K}=500 \ \ \ \ \ (5)$

$\displaystyle m_{\Sigma}=1200 \ \ \ \ \ (6)$
so from 1 and 2
$\displaystyle 150\gamma_{\pi}\beta_{\pi}=900\gamma_{p}\beta_{p} \ \ \ \ \ (7)$

$\displaystyle \gamma_{\pi}\beta_{\pi}=6\gamma_{p}\beta_{p} \ \ \ \ \ (8)$

$\displaystyle 150\gamma_{\pi}+900\gamma_{p}=1700 \ \ \ \ \ (9)$

$\displaystyle \gamma_{\pi}+6\gamma_{p}=\frac{34}{3} \ \ \ \ \ (10)$
From these equations, we get
$\displaystyle \gamma_{\pi}^{2}=\left(\frac{34}{3}-6\gamma_{p}\right)^{2}=\frac{1}{1-\beta_{\pi}^{2}} \ \ \ \ \ (11)$

$\displaystyle \beta_{\pi}^{2}=1-\left(\frac{34}{3}-6\gamma_{p}\right)^{-2} \ \ \ \ \ (12)$

$\displaystyle \left(\gamma_{\pi}\beta_{\pi}\right)^{2}=\left(\frac{34}{3}-6\gamma_{p}\right)^{2}-1 \ \ \ \ \ (13)$

$\displaystyle \left(\gamma_{\pi}\beta_{\pi}\right)^{2}=36\gamma_{p}^{2}\beta_{p}^{2} \ \ \ \ \ (14)$
where we used 8 to get the last line. We can now solve the last two equations to find ${\gamma_{p}}$:
$\displaystyle 36\gamma_{p}^{2}\beta_{p}^{2}=\left(\frac{34}{3}-6\gamma_{p}\right)^{2}-1 \ \ \ \ \ (15)$

$\displaystyle 36\gamma_{p}^{2}\beta_{p}^{2}=\left(\frac{34}{3}\right)^{2}-1-136\gamma_{p}+36\gamma_{p}^{2} \ \ \ \ \ (16)$

$\displaystyle 0=\left(\frac{34}{3}\right)^{2}-1-136\gamma_{p}+36\gamma_{p}^{2}\left(1-\beta_{p}^{2}\right) \ \ \ \ \ (17)$

$\displaystyle 0=\left(\frac{34}{3}\right)^{2}-1+36-136\gamma_{p} \ \ \ \ \ (18)$

$\displaystyle \gamma_{p}=1.202 \ \ \ \ \ (19)$

$\displaystyle \beta_{p}=0.555 \ \ \ \ \ (20)$
We can now get the values for the pion in the centre of momentum frame from 11 and 12:
$\displaystyle \gamma_{\pi}=\frac{34}{3}-6\gamma_{p}=4.123 \ \ \ \ \ (21)$

$\displaystyle \beta_{\pi}=\sqrt{1-\left(\frac{34}{3}-6\gamma_{p}\right)^{-2}}=0.970 \ \ \ \ \ (22)$
The speed of the proton in the centre of momentum frame is also the speed of the centre of momentum frame relative to the lab frame, so we can use a Lorentz transformation on the pion’s four-momentum to get back to the lab frame:
$\displaystyle p_{\pi}^{1}=\gamma_{p}\left(\bar{p}_{\pi}^{1}+\beta_{p}\bar{p}_{\pi}^{0}\right) \ \ \ \ \ (23)$

$\displaystyle p_{\pi}^{1}=1.202\left(150\gamma_{\pi}\beta_{\pi}+0.555\times150\gamma_{\pi}\right) \ \ \ \ \ (24)$

$\displaystyle p_{\pi}^{1}=1133\mbox{ MeV}/c \ \ \ \ \ (25)$
Notice that we must use ${\gamma_{p}}$ and ${\beta_{p}}$ (that is, the values for the proton, not the pion) in doing the Lorentz transformation, since it’s the speed of the proton that determines the relative speed of the two frames.
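As a quick numerical cross-check (not part of Griffiths's solution): at threshold the invariant mass of the pair must equal $m_{K}+m_{\Sigma}$, that is $\left(E_{\pi}+m_{p}\right)^{2}-p_{\pi}^{2}=\left(m_{K}+m_{\Sigma}\right)^{2}$, and solving this directly gives the same lab momentum. A short Python sketch:

```python
from math import sqrt

# Rest energies in MeV (Griffiths's rounded values).
m_pi, m_p, m_K, m_Sigma = 150.0, 900.0, 500.0, 1200.0

# Threshold condition: (E_pi + m_p)^2 - p_pi^2 = (m_K + m_Sigma)^2,
# with E_pi^2 = p_pi^2 + m_pi^2 for the incident pion in the lab frame.
E_pi = ((m_K + m_Sigma)**2 - m_pi**2 - m_p**2) / (2 * m_p)  # lab-frame pion energy
p_pi = sqrt(E_pi**2 - m_pi**2)                              # lab-frame pion momentum

print(round(E_pi, 1), round(p_pi, 1))   # about 1143.1 MeV and 1133.2 MeV/c
```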
|
# How to use LaTeX on a forum
• LaTeX
Main Question or Discussion Point
I love how this forum uses LaTeX, but I don't know how to do this for other sites. How can I make it so another site I'm on can use this?
Jameson
Last edited by a moderator:
Right, but those tags are usable because the code is installed on the site's server. I'm wondering how to do this.
Oh, I thought you wanted to know how to use $$\LaTeX$$ on other sites like Wikipedia
Last edited by a moderator:
Hey thanks a bunch. I'll post back if I get it up and running!
|
# Reverse every word in a line in vim
Your task is to build a vim script or provide a sequence of keystrokes that will operate on a single line of text with up to 140 printable ASCII characters (anywhere in a file, with the cursor starting anywhere in the line) and reverse every space-separated string in the sentence while keeping the strings in the same order.
For example, the input:
roF emos nosaer m'I gnisu a retcarahc-041 timil no siht noitseuq neve hguoht ti t'nseod evlovni .rettiwT RACECAR
should return:
For some reason I'm using a 140-character limit on this question even though it doesn't involve Twitter. RACECAR
The script with the fewest characters, or the sequence of the fewest keystrokes, to achieve this result is the winner.
• "For the purposes of this question it's vim only" seems as arbitrary a language-restriction as posting a normal code golf challenge and asking only for answers in C. (And I don't seem to be alone with this opinion.) – Martin Ender Mar 28 '15 at 18:08
• Why is RACECAR not reversed? – orlp Mar 28 '15 at 19:02
• Because it's a palindrome. Try reversing it yourself. – Joe Z. Mar 28 '15 at 19:02
• Wow, I'm stupid. Derp. – orlp Mar 28 '15 at 19:03
• @orlp Lol. I thought you were joking. – mbomb007 Mar 29 '15 at 2:12
# ~~28~~ ~~25~~ 24 keystrokes
:se ri<CR>^qqct <C-r>"<Esc>f l@qq@q
Recursive macro, I assume that Ctrl-r counts as one keystroke.
The hardest part was to make sure the macro stays on the same line and does not destroy the rest of the file.
• You could use cE instead of ct , if it wasn't ending the macro. But you can use W instead of f l to save 2 strokes. – Caek Mar 29 '15 at 23:43
• @Caek Wrong x2. Guess what cE does when the cursor is at the beginning of a retcarahc-041? And guess what W does when we're at the end of the line? – orlp Mar 30 '15 at 0:39
• Note the capital E. lowercase e would go until the dash, capital E would go until the next space. I just tried it to confirm. – Caek Mar 30 '15 at 0:46
• try: :set ri<Enter>^qqct <C-r>"<Esc>W@qq@q for 23. – Caek Mar 30 '15 at 0:49
• @Caek That won't work. And regarding E, I know what it does. I was referring that cE<C-r><Esc> would turn a retcarahc-041 into 140-character a, AKA it would swap the words. – orlp Mar 30 '15 at 1:13
# 24 keystrokes
ma:s/ /\r/g
V'a:!rev
gvJ
I know this question is very old, but I love vimgolf so I couldn't not post an answer on one of the few vim-specific challenges on the site. Plus this solution is tied with Orlp's.
Just like Orlp said, the hardest part was making sure that the rest of the buffer was unmodified. If it weren't for that restriction, we could simply do:
:s/ /\r/g
!{rev
V}J
(19 keystrokes) but we need a little bit more to keep it buffer-safe. This assumes a unix environment.
|
# Fourier transform of $\int_{-\infty}^{t} f(\eta )\text{d}\eta$
Suppose $f(t)$ and $F(\omega)$ are a Fourier transform pair. I want to show that $$\mathcal{F}^{-1} \left\{\frac{F(\omega)}{i\omega}\right\} = \int_{-\infty}^t f(\eta)\ \text{d}\eta$$ I start with the Fourier transform of the RHS and use integration by parts: \begin{align*} \mathcal{F}\left\{\int_{-\infty}^tf(\eta)\ \text{d}\eta \right\}\ &= \int_{-\infty}^{\infty} \int_{-\infty}^t f(\eta)\ \text{d}\eta\ e^{-i\omega t}\ \text{d}t \\&=\underbrace{\frac{-1}{i\omega}\left[\int_{-\infty}^t f(\eta)\ \text{d}\eta\ e^{-i\omega t}\right]_{t\ =-\infty}^{t\ =\ \infty}}_{=\ 0 ?}\ +\ \frac{1}{i\omega}\int_{-\infty}^{\infty}f(t)\ e^{-i\omega t}\ \text{d}t\\ &= \frac{F(\omega)}{i\omega} \end{align*} Hence $$\int_{-\infty}^t f(\eta)\ \text{d}\eta\ = \mathcal{F}^{-1} \left\{\frac{F(\omega)}{i\omega}\right\}$$
If the result is true I can see the first term on the RHS after performing the integration by parts must vanish but I'm not sure how to justify it. Any help on justifying it (or a cleaner approach to show the result) would be appreciated, thanks!
One may recall that
$$\mathcal{F}(g')(\omega)=i\omega\mathcal{F}(g)(\omega) \tag1$$
applying it with $$g(t)=\int_{-\infty}^t f(\eta)\ \text{d}\eta, \quad g'(t)=f(t),\quad$$ and the notation $F(\omega):=\mathcal{F}(f)(\omega)$, it gives $$\frac{F(\omega)}{i\omega} =\mathcal{F}\left( \int_{-\infty}^t f(\eta)\ \text{d}\eta\right)(\omega) \tag2$$ that is
$$\mathcal{F}^{-1} \left\{\frac{F(\omega)}{i\omega}\right\} = \int_{-\infty}^t f(\eta)\ \text{d}\eta. \tag3$$
Are you Ok with a proof of $(1)$ ?
• Très bon. :D No that's fine, I've already proved (1), thank you. – AtticusFinch95 Apr 27 '16 at 23:29
• You are welcome. – Olivier Oloa Apr 27 '16 at 23:30
Let $F(\omega)$ be the Fourier transform of the square integrable, continuous function $f(t)$ as given by
$$F(\omega)=\int_{-\infty}^\infty f(t)e^{-i\omega t}\,dt$$
Let $I_L(\omega)$ be defined by the integral
$$I_L(\omega)=\int_{-L}^L \int_{-\infty}^tf(t')\,dt'\,e^{-i\omega t}\,dt$$
Integrating by parts with $u=\int_{-\infty}^tf(t')\,dt'$ and $v=\frac{e^{-i\omega t}}{-i\omega}$ yields
$$I_L(\omega)=\frac{1}{i\omega}\int_{-L}^L f(t)e^{-i\omega t}\,dt+\frac{1}{i\omega}\left(e^{i\omega L}\int_{-\infty}^{-L}f(t')\,dt'-e^{-i\omega L}\int_{-\infty}^{L}f(t')\,dt'\right) \tag 1$$
Assuming that $\int_{-\infty}^t f(t')\,dt'$ is also a square integrable function, then we must have $$\lim_{L\to \infty}\int_{-\infty}^L f(t')\,dt'=0$$
Then, the boundary term on the right-hand side of $(1)$ vanishes in the limit as $L\to \infty$ and the limit of $I_L(\omega)$ becomes
\begin{align} \bbox[5px,border:2px solid #C0A000]{\int_{-\infty}^\infty \left(\int_{-\infty}^t f(t')\,dt'\right)\,e^{-i\omega t}\,dt=\frac{1}{i\omega}F(\omega)}\end{align}
as was to be shown!
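Purely as a numerical illustration of the boxed identity (with a test function chosen for convenience, not taken from the proof above): take $f(t)=-2te^{-t^2}$, whose running integral is $e^{-t^2}$ and whose total integral vanishes, and compare $\frac{F(\omega)}{i\omega}$ with the transform of the running integral.

```python
import numpy as np

# Test function whose running integral is square integrable:
# f(t) = -2*t*exp(-t^2), so that the integral of f from -inf to t equals exp(-t^2).
t = np.linspace(-20.0, 20.0, 400001)
dt = t[1] - t[0]
f = -2.0 * t * np.exp(-t**2)
g = np.exp(-t**2)                       # running integral of f

for w in (0.5, 1.0, 2.0):
    kernel = np.exp(-1j * w * t)
    F = np.sum(f * kernel) * dt         # F(omega) via a Riemann sum
    G = np.sum(g * kernel) * dt         # Fourier transform of the running integral
    print(w, F / (1j * w), G)           # the last two values should agree closely
```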
• Hello, thanks for your answer! Just a small typo on the last line, it should be $e^{-i\omega t}$. ;) Sorry, I'm kinda new to this so bear with me here...why must $\lim_{L\to\infty} \int_{-\infty}^{L} f(t')\ \text{d}t$ vanish if $\int_{-\infty}^{t} f(t')\ \text{d}t$ is square integrable? Reading the wiki page on it isn't seeming to help... – AtticusFinch95 Apr 29 '16 at 11:53
• You're welcome. My pleasure. -Mark – Mark Viola Apr 29 '16 at 13:28
• Thank you for the catch. I've edited accordingly. I also added the previously tacitly assume assumption that $f$ is continuous. Then, $\int_{-\infty}^t f(t')\,dt'$ is continuous and differentiable (this permits the integration by parts). Moreover, if $\int_{-\infty}^t f(t')\,dt'$ is square integrable and continuous, then it must vanish at $\infty$. -Mark – Mark Viola Apr 29 '16 at 13:31
• I've got it now, thank you. Ah, I was just a lowly newbie beforehand and couldn't upvote any answers, but I can now. Done. ;) – AtticusFinch95 Apr 29 '16 at 13:51
• Thank you; much appreciative! I must ask ... is Atticus Finch your real name or are you a fan of "To Kill a Mockingbird?" – Mark Viola Apr 29 '16 at 14:18
|
# Necessities and Necessary Truths. Proof-Theoretically.
## Abstract
In his seminal “Outline of a Theory of Truth” Kripke (1975) proposed understanding modal predicates as complex expressions defined by a suitable modal operator and a truth predicate. In the case of the alethic modality of logical or metaphysical necessity, this proposal amounts to understanding the modal predicate ‘is necessary’ as the complex predicate ‘is necessarily true’. In this piece we work out the details of Kripke’s proposal, which we label the Kripke reduction, from a proof-theoretic perspective. To this end we construct a theory for the modal predicate and a theory of truth formulated in a language with a modal operator and show that the modal predicate theory is interpretable in the theory of truth where the interpretation translates the modal predicate ‘N’ by the complex predicate ‘☐T’, the truth predicate modified by the modal operator. In addition, we show that our work can be viewed as the proof-theoretic counterpart to the semantic Kripke reduction recently carried out by Halbach and Welch (2009), which is based on Kripke’s theory of truth.
## 1. Introduction
In his seminal “Outline of a Theory of Truth” Kripke (1975) proposed understanding modal predicates by appeal to a corresponding modal operator and the truth predicate. He writes:
Ironically, the application of the present approach [his theory of truth] to languages with modal operators may be of some interest to those who dislike intensional operators and possible worlds and prefer to take predicates true of sentences (or sentence tokens). ... Now, if a necessity operator and a truth predicate are allowed, we could define a necessity predicate Nec(x) applied to sentences, either by ☐T(x) or by $$T(\underset{\bullet}{\Box}x)$$ [slight change of notation] according to taste (...). (Kripke 1975: 713)
Kripke’s suggestion is thus to define the modal predicate in a language containing a truth predicate and a modal operator. The hope would be that by defining the modal predicate in such a way, one arrives at a workable predicate approach to modality that combines the advantages of the operator approach with the benefits of the predicate approach. It is often thought that the predicate approach to modality is preferable to the operator approach because of its greater expressive strength. For example, within the predicate approach it is possible to quantify over the argument position of the modality at stake where nothing of the like is possible within the operator approach without moving to some restricted form of second-order logic. Another advantage of the predicate approach is that it ties well with the so-called relational analysis of propositional attitudes, which views propositional attitudes as relations between agents and propositions.
Kripke’s proposal lends itself to two possible interpretations. On the one hand it may serve as a defense of the predicate approach against the backdrop of Montague’s theorem.[1]
Montague’s theorem shows that the standard principles of modal operator logic cannot be adopted within the predicate setting for otherwise inconsistency arises. This result has often been viewed as a knock-down argument against predicate approaches to modality because it allegedly shows that no intuitive and philosophically satisfactory account of modalities qua predicates is available. Yet with Kripke’s proposal we possess a strategy for providing such, arguably, intuitive and philosophically satisfactory predicate approaches to modality.
On the other hand, an alternative understanding of Kripke’s proposal reconstructs it as an argument in favor of operator approaches to modality. According to this interpretation, reducing the modal predicate to a modal operator and the truth predicate shows that the expressive strength of the predicate approach (to modality) can be accounted for within the operator setting, i.e., we can dispense with primitive modal predicates and stick to modal operators. Halbach and Welch (2009) view Kripke’s proposal in this way and, accordingly, take Kripke’s proposal to show that the expressive strength of the predicate approach can be recovered within the operator setting. Halbach and Welch strengthen their argument by— instead of defining the modal predicate in the modal operator language—reducing a semantic theory of the modal predicate to a semantic theory of truth formulated in a modal operator language. That is, they show that understanding the modal predicate ‘N’ as the complex predicate ‘☐T’ results in a truth preserving translation with respect to the two semantics they put forward.
Reducing a semantic theory of a modal predicate to a semantic theory of truth formulated in a modal operator language has the advantage over simply defining a modal predicate in the latter language that it may serve as a defense of operator approaches to modality against proponents of the predicate approach. By using this strategy one may show that we can recover the modal predicate within the operator language. The modal predicate therefore proves to be dispensable. In contrast, if we define a modal predicate in the operator language we only show that we may define some sentential predicate that need not be of any appeal to the proponent of the predicate approach to modality. Indeed she may not even consider this predicate to be a modal predicate. In case of the reduction strategy, if the semantic theory of the modal predicate is accepted and if, in addition, we consider the expressive strength to be the sole argument in favor of the predicate approach, then the modal predicate is dispensable and we may stick to the operator approach. Obviously, the fact that it is possible to carry out a reduction does not imply that we must, but considering the fact that modal operator logic is well entrenched in philosophical and mathematical logic while the predicate approach is a niche phenomenon from this perspective, there seems to be good reason for dismissing the predicate approach as superfluous and for sticking to the “standard” operator approach to modality. Viewed from this perspective Halbach and Welch’s reduction puts some pressure on the proponent of the predicate approach to abandon his theory of modality in favor of the operator approach.
In this paper we show that Kripke’s proposal can be carried out from a proof-theoretic perspective and thereby supplement Halbach and Welch’s reduction by its proof-theoretic counterpart. To this end we construct two axiomatic theories: One axiomatizes the modal predicate; the other is a theory of truth formulated in a language containing a modal operator. We show that the theory of the modal predicate can be interpreted within (“reduced to”) the theory of truth formulated in the modal operator language where the modal predicate is translated as the truth predicate modified by an appropriate modal operator and, additionally, that the two theories match the two semantics of Halbach and Welch used for carrying out their reduction.
The structure of the paper then is as follows. In the next section we motivate our work from a philosophical perspective. In particular, we clarify what kind of reduction we are attempting, why it may be considered as a proof-theoretic counterpart of Halbach and Welch’s work and why it is philosophically interesting. In section 3 we construct the theory of the modal predicate and the theory of truth in a modal operator language. Next we reduce the theory of the modal predicate to the theory of truth, that is we show that the theory of the modal predicate can be interpreted in the theory of truth (section 4). We then turn to semantic aspects and introduce the two semantics that Halbach and Welch use to carry out their reduction (section 5). Finally, we show that the two theories capture the relevant semantics in an interesting way (section 6). We end the paper with a summary and evaluation of our findings.
## 2. The Syntactic Reduction and Its Two Possible Interpretations
As we have mentioned, Halbach and Welch view their reduction as an argument in favor of the operator approach to modality. Their idea is that the sole argument for opting for a predicate approach to modality is the alleged expressive strength of the predicate approach. Accordingly, they take their reduction to show that the expressive strength of the predicate approach can be recovered within the operator approach, as the modal predicate can be reduced to a modal operator and the truth predicate. However, their reduction is carried out solely in semantic terms: they show that the translation is truth preserving with respect to the two semantics employed. Yet, both semantics are based on possible world semantics and therefore the reduction is convincing only if one accepts possible worlds or similar notions in the analysis of modal notions. Within the predicate approach, in contrast to the operator approach, there is no interpretation problem, i.e., we are not forced to move to possible worlds to provide an interpretation of the modal notion—standard first-order semantics will do the job. Consequently, the proponent of the predicate approach to modality who dislikes possible worlds will remain unconvinced by the semantic reduction proposed by Halbach and Welch.[2] The reduction we are going to propose will be carried out syntactically by employing proof-theoretic means only and therefore simple skepticism toward possible worlds will no longer do. Rather, such a syntactic reduction, if successful both from a philosophical and technical perspective, would, together with Halbach and Welch’s semantic reduction, put the proponent of the predicate approach under considerable pressure—at least if one were to follow Halbach and Welch’s dialectic.
Of course, this kind of argument can only be convincing if the reduction we propose is a plausible reduction in its own right and so far, without further specification, it remains unclear what it means to carry out the reduction by purely proof-theoretic means.[3] We shall now attempt to fill this gap. To this end, let us first reconsider the reduction carried out by Halbach and Welch in more detail. Halbach and Welch construct an intended model for both the language of the modal predicate and the language of the modal operator plus the truth predicate. Let us call the first model MN and the second model MT. It is assumed that the two languages only differ in the way they conceive of the modal notion, that is, whether they conceive of the modal notion as a predicate or an operator. Next Halbach and Welch provide a translation function 𝜌 from the language of the modal predicate $$\mathcal{L}_{N}$$ to the language of the modal operator plus truth predicate $$\mathcal{L}_{\Box T}$$. The translation function translates the modal predicate ‘N’ as ‘☐T’ but leaves the remaining vocabulary fixed. In particular the translation commutes with the logical connectives and quantifiers. Thus 𝜌 clearly captures the idea behind Kripke’s proposal and should therefore be considered as a philosophically interesting translation function that translates formulas of $$\mathcal{L}_{N}$$ in an intended way.[4] Halbach and Welch show that for all sentences φ of $$\mathcal{L}_{N}$$
In a way, (SR) asserts that whether we treat the modal predicate as a primitive predicate or as a complex predicate that is constructed using the modal operator and the truth predicate does not affect the truth of a modal statement. Therefore, following the dialectics of Halbach and Welch, the modal predicate is dispensable and we can stick to modal operators. With this in mind let us return to outlining the proof-theoretic counterpart of Halbach and Welch’s reduction. The idea is to replace the semantic notions used in (SR) by purely syntactic or proof-theoretic notions. Consequently, we shall replace the notion of truth in a model by the notion of derivability in a theory. This requires developing two theories, which we shall call $$\mathsf{T}_N$$ and $$\mathsf{T}_{\Box T}$$ for now. $$\mathsf{T}_N$$ is the theory of the modal predicate formulated in the language $$\mathcal{L}_{N}$$. $$\mathsf{T}_{\Box T}$$ is the theory of truth formulated in the modal operator language $$\mathcal{L}_{\Box T}$$ . We then show that for all φ$$\mathcal{L}_{N}$$
(PR) asserts that theoremhood is preserved modulo translation in moving from $$\mathsf{T}_N$$ to $$\mathsf{T}_{\Box T}$$. In more technical terms, because of the properties of the translation function, (PR) shows that $$\mathsf{T}_N$$ is a subtheory of a definitorial extension of $$\mathsf{T}_{\Box T}$$ where the arithmetical vocabulary is left unaltered. This is a fairly strong notion of reduction. In other words, 𝜌 is an unrelativized interpretation of $$\mathsf{T}_N$$ in $$\mathsf{T}_{\Box T}$$ in the sense of Tarski, Mostowski, and Robinson (1953).[5] This sets the syntactic reduction in (PR) apart from reductions where preservation of theoremhood is achieved by translating formulas of the source language in an unintended way.
Consequently, it seems fair to conclude that (PR) establishes from a proof-theoretic side what (SR) establishes from a semantic side, if one accepts $$\mathsf{T}_N$$ and $$\mathsf{T}_{\Box T}$$ as the intended theories of the languages and the notions involved.[6] If, for now, we assume that $$\mathsf{T}_N$$ and $$\mathsf{T}_{\Box T}$$ are such theories, or at least plausible candidate theories, then it seems the proponent of the predicate approach can no longer resist the dialectics of Halbach and Welch by simply dismissing possible world semantics. Rather, she would also need to dismiss at least one of the two theories as an apt characterization of the notions at stake. Such a dismissal should not be taken lightly, however, because there is an asymmetry between rejecting possible world semantics in the case of the semantic reduction and rejecting the modal theories and possible world semantics when we have both a semantic and a syntactic reduction. Whereas in the first case the modal semantics, i.e., possible world semantics, was, in a way, superfluous, as one can appeal to simple first-order semantics[7] and as a last resort adopt a purely proof-theoretic account, there is no obvious substitute available for the two theories appealed to in the syntactic reduction. As a consequence, simply rejecting the theories will not be sufficient; rather, the proponent of the predicate approach would need to provide us with alternative, well-motivated theories for which the proposed reduction does not hold. This kind of worry is even more pressing in light of Montague's theorem, which threatens the intelligibility of the predicate approach altogether. It therefore seems crucial for a proponent of the predicate approach to provide us with a positive account, that is, she has to offer a modal theory for which the "Kripke reduction" is not feasible. However, to date there has been little success in the direction of such a theory. Consequently, if one subscribes to Halbach and Welch's claim that the expressive strength is the sole argument favoring predicate approaches to modality, then a proof-theoretic counterpart of Halbach and Welch's reduction puts the proponent of the predicate approach under considerable pressure given the enormous success of the operator approach in logic and philosophy.
However, in the introduction we have already suggested that if one thinks that predicate approaches are motivated independently of their alleged expressive strength, then the success of the reduction can be viewed as an argument in favor of the modal theory we are about to construct and thus ultimately as an argument in favor of predicate approaches to modality. The guiding idea is that the predicate approach is motivated independently of its expressive strength, and the success of the "Kripke reduction" (as we shall call the reduction from now on) shows that we have constructed a workable theory of modality in which the modal notion is conceived as a predicate. This shows that contra Montague there are well-motivated, intuitive and consistent accounts of modalities conceived as predicates. Moreover, the syntactic and semantic reduction taken together show that we can have both semantic and syntactic, that is, axiomatic theories of modal predicates. Incidentally, the modal theory we propose will match the semantics Halbach and Welch employ in their semantic reduction. On the one hand this shows that our syntactic reduction really is the proof-theoretic counterpart to Halbach and Welch's reduction. On the other hand, Halbach and Welch's work together with our proof-theoretic addendum may be viewed as a full-fledged defense of the predicate approach to modality against the challenges raised by Montague's theorem and related inconsistency results. One might even hope that based on Kripke's idea a general strategy for devising modal theories in which the modal notion is conceived as a predicate can be developed.[8]
## 3. The Theories Modal PKF and Operator PKF
In this section we introduce the theory of the modal predicate and the theory of truth formulated in a language containing a modal operator. Both theories will be formulated in non-classical logic, namely FDE or BDM (as Field 2008 would call it). The reason for our departure from classical logic is twofold. First, the semantics employed by Halbach and Welch is based on Kripke’s theory of truth, which is based on a partial evaluation scheme.[9] Formulating the modal theories in partial logic thus goes nicely with these semantics.[10] Second, if we wish to capture the semantics of Halbach and Welch in classical logic it proves necessary to introduce a primitive possibility predicate. This is a side effect of the so-called diverging inner and outer logic in classical axiomatizations of Kripke’s theory of truth[11] and may be nicely illustrated by reading the modal predicate following the spirit of the Kripke reduction as a modified truth predicate. Under this reading ‘necessarily true not’ would need to be equivalent to ‘not possibly true’ for a possibility predicate to be interdefinable with a necessity predicate. But the latter is (by standard modal logic) equivalent to ‘necessarily not true’, which is not, in general, equivalent to the former statement because of the diverging inner and outer logic.[12] Halbach and Welch do not have a primitive possibility predicate in their language and by axiomatizing their semantics in FDE the introduction of a primitive possibility predicate can be avoided. A further welcome side effect of formulating the modal theory in non-classical logic is that the standard principles of modal operator logic can be transferred to the predicate setting without much trouble. If we were to work in classical logic this would not be possible because of Montague’s theorem. Nonetheless, one can still construct intuitive and “Kripke reducible” modal theories in classical logic, but a lot more care has to be taken in spelling out the modal principles. Roughly, one has to make sure that the modal principles are stated in a way such that any type of semantic ascent or descent is avoided within the modal principle. This can be achieved in stating the modal principles by appeal to a truth predicate in addition to the modal predicate.[13] In fact as a residue of this strategy the non-classical theory of the modal predicate we are about to construct will be developed in a language containing a modal predicate and a truth predicate.
Accordingly, the theory of the modal predicate called “modal theory” will be formulated in the language $$\mathcal{L}_{PATN}$$ extending the language of arithmetic $$\mathcal{L}_{PA}$$ by two unary predicates; the modal predicate ‘N’ and the truth predicate ‘T’. We add a truth predicate because this allows to establish the connection between the modal theory and its intended semantics. Moreover, adding a truth predicate seems to be desirable independently since principles linking truth and the modal notion can be stated in this way. The theory of truth formulated in the language $$\mathcal{L}_{PAT}^{\Box}$$, i.e., a language extending $$\mathcal{L}_{PA}$$ by a truth predicate ‘T’ and a unary modal operator ‘☐’, will be called the “operator theory of truth”. In formulating the modal theory and the operator theory of truth we assume some standard coding scheme and denote the code, e.g., the Gödel number, of an expression ζ by #ζ and its name, that is the numeral of #ζ, by $$\ulcorner\zeta\urcorner$$. In the remainder of the paper we then freely equate the expressions with their codes in order to keep things as simple as possible. We let the sets $$Sent_{\mathcal{L}}$$ (“$$\mathcal{L}$$-sentences”) and $$Cterm_{\mathcal{L}}$$ (“$$\mathcal{L}$$-closed terms”) represent themselves and, moreover, drop the subscript when no confusion can arise. In most cases, if $$\rhd$$ is a syntactic operation we represent it by $$\underset{\bullet}{\rhd}$$. However, there are few exceptions to this rule: we represent the ternary substitution function by x(s/t) where x(s/t) is a name of the formula that results from replacing t by s in x. Also, let Val(·) represent the function that takes codes of closed terms as arguments and provides the code of their denotation as output. Finally, we represent the so-called num function, i.e., the function that takes codes of expressions (#ζ) as input and provides the code of the name of the expression as output ($$\#\ulcorner\zeta\urcorner$$) by a superdot. We thus have $$\dot{\ulcorner\zeta\urcorner}=\ulcorner{\ulcorner\zeta\urcorner}\urcorner$$. Sometimes, in case of scope ambiguities, we represent the num function by num.
As a matter of fact both the modal theory and the operator theory of truth will be based on the same underlying theory of truth, for otherwise we would arguably fail to reduce the modal predicate to a modal operator and the truth predicate. Instead, from the perspective of the modal theory, the reduction would then show that we may reduce the modal predicate to a modal operator and some other sentential predicate—a sentential predicate that does not share the characteristics of the truth predicate employed in the modal theory. Since Halbach and Welch’s proposal is a generalization of Kripke’s theory of truth to the modal setting, the theory of truth we assume should arguably be a proof-theoretic counterpart of Kripke’s theory of truth and, as we have argued, the theory should be formulated in partial logic.
It turns out that a theory of truth which fits the bill has already been developed by Halbach and Horsten (2006).[14] Their theory of truth “Partial Kripke-Feferman (PKF)” captures Kripke’s construction and is formulated in partial logic. The theory PKF comes in two versions. In Halbach and Horsten (2006), symmetric strong Kleene logic is assumed as the underlying logic, while Halbach (2011: Ch. 16) formulates the theory using FDE, which allows for truth value gaps and gluts. Here we adopt Halbach’s version of the theory PKF. We now introduce the logic of FDE and the aforementioned theory of truth PKF.
### 3.1. FDE and Partial Kripke-Feferman
We provide a two sided sequent formulation of FDE which is the underlying logic of PKF. In what is to come we denote sequents by ∆ ⇒ Γ with ∆ and Γ being finite (possibly empty) sets of formulas. We sometimes write ∆ ⇔ Γ to convey the sequents ∆ ⇒ Γ and Γ ⇒ ∆.[15] Throughout, the only primitive logical symbols will be ¬, ∧ and ∀. We also take a primitive two place identity predicate = to be part of the language. The existential quantifier ∃, disjunction ∨ and material implication → are defined in the usual way. However, it is important to understand the definitions as mere notational abbreviations.
We now provide the laws and initial sequents of this logic which is basically a variant of the logic presented in Scott (1975).
#### 3.1.1. Structural Rules and Initial Sequents
The structural rules and initial sequents of FDE are the following
#### 3.1.2. Rules and Initial Sequents for the Logical Constants
We provide rules for negation, conjunction, the universal quantifier, and the identity symbol. Note that ¬Γ stands for the set of all negations of sentences in a set Γ.
This completes our presentation of the logic of FDE underlying PKF. Even though this is not strictly speaking part of the syntax of the logic, we indicate the underlying consequence relation in order to avoid misunderstandings and confusion. Let M $$\models_{SK}$$ φ[β] denote that φ is true in M under a variable assignment β according to the strong Kleene evaluation scheme we shall define shortly. Then Γ $$\models_{SK}$$ ∆ iff for all models M and all variable assignments β:
It is easy to check that the rules and initial sequents of FDE are sound with respect to the consequence relation so defined: if the sequent Γ ⇒ ∆ is FDE-derivable, then Γ $$\models_{SK}$$ ∆.
It is important to bear in mind the definition of the consequence relation because several alternative definitions are used within the realm of partial logic and, depending on the consequence relation employed, the rule (¬) may fail to be a sound rule.[16]
The classical rules for negation, that is
are not admissible within the logic of FDE. Yet, FDE will collapse into classical logic for arithmetic φ, i.e., for arithmetic φ the rules (¬RR) and (¬RL) are derivable. As a consequence of this and the rules and initial sequents we are about to give, the arithmetic fragment of PKF will be a supertheory of classical PA.
#### 3.1.3. Rules and Initial Sequents for Arithmetic
Besides the rule of induction we have an initial sequent for every axiom φ of PA:
Finally, we give the principles characterizing the truth predicate which together with the arithmetic sequents and rules constitute the theory Partial Kripke-Feferman PKF.
#### 3.1.4. Partial Kripke-Feferman
The system PKF consists of the rules and initial sequents of arithmetic in the language $$\mathcal{L}_{PAT}$$ together with the following truth specific initial sequents:
For a more in depth discussion of the theory PKF we refer the reader to Halbach and Horsten (2006) and Halbach (2011: Ch. 16). However, we note the following facts. First, as we have already mentioned, PKF behaves classically on the arithmetical fragment:
Fact 3.1. For arithmetic φ the rulesRR) andRL) are derivable in PKF.
Proof. Cf. Halbach (2011: Ch. 16). The proof uses the fact that in PKF the sequent ⇒ φ, ¬φ is derivable for arithmetical φ.
The second fact we mention highlights some derivability facts.
Fact 3.2. The following sequents are derivable in PKF where $$\phi\in Sent_{\mathcal{L}_{PATN}}$$ and λ is some liar sentence such that PAT (PA in the language $$\mathcal{L}_{PAT}$$) proves $$\lambda\Leftrightarrow\neg T\ulcorner\lambda\urcorner$$:
Next we construct our modal theory, i.e., the theory of the modal predicate.
### 3.2. Modal Partial Kripke-Feferman
Modal Partial Kripke-Feferman extends PKF as formulated in the language $$\mathcal{L}_{PATN}$$ by initial sequents and rules characterizing the modal predicate. We first give the system Basic Modal Partial Kripke-Feferman (BMPKF) which consists of initial sequents characterizing the interaction of the modal predicate and the connectives, quantifiers and the identity symbol. In addition we have a modal version of (PKF5) which connects the modal predicate and the truth predicate, as well as the rule (RN), which is a substitute of the rule of (R☐) known from modal operator logic and which we shall encounter when dealing with the operator truth theory.
#### 3.2.1. Basic Modal Partial Kripke-Feferman
The system BMPKF consists of all initial sequents and rules of PKF in the language $$\mathcal{L}_{PATN}$$[17] and the following initial sequents and rules.[18] In the formulation of the rule (RN), $$T\ulcorner\Gamma\urcorner$$ ($$N\ulcorner\Gamma\urcorner$$) is short for the set
{$$T\ulcorner\gamma\urcorner:\gamma\in\Gamma$$} ({$$N\ulcorner\gamma\urcorner:\gamma\in\Gamma$$}) for arbitrary sets of formulas Γ.
Our main interest, however, will not be in BMPKF but in MPKF, i.e., Modal Partial Kripke-Feferman.[19] We obtain MPKF from BMPKF by adding initial sequents expressing further modal properties to the theory. These initial sequents should appear familiar as they are reformulations of the classical modal principles (T), (4) and (E). In contrast to their usual formulation in the operator setting we formulate them for arbitrary terms of the language. For the modal principle (T) this requires the introduction of the truth predicate in the consequent, for otherwise there would not be a term position in the consequents.
#### 3.2.2. Modal Partial Kripke-Feferman
The system MPKF is BMPKF extended by the following initial sequents:
As the focus of this piece is on the Kripke reduction we refrain from investigating BMPKF and MPKF in further detail and move on to introducing the operator theory of truth.
### 3.3. Operator Partial Kripke-Feferman
We now set out the details for operator PKF, that is, PKF formulated in the language $$\mathcal{L}^{\Box}_{PAT}$$.[20] The theory will again be formulated in partial logic but as the language now contains a modal operator we also have to specify the logic of this operator. That is, we have to specify the modal logic we employ, which will be a partial modal logic constructed over FDE. However, the rules and sequents of the modal operator are just the standard ones and the partial character of the logic stems solely from its propositional fragment.
#### 3.3.1. Partial Modal Logic
The quantified modal logic QM consists of the rules and initial sequents of FDE together with
Here, (R☐) replaces the more usual rule
In classical logic the latter implies (R☐) due to the rules (¬RR) and (¬RL) but this is no longer the case in partial logic.[21]
If we add a further modal initial sequent X, we call the resulting modal logic QMX. In the remainder we consider the following additional initial sequents:
We call the modal logic that results from addition of (TS), (4S) and (ES) to QM, QM5. As in standard quantified normal modal logic, sequent formulations of what is known as Necessity of Identity (NI) and the Converse Barcan Formula (CBF) are derivable in the modal logic QM.
Fact 3.3. The following are derivable in QM
We now provide the operator theory of truth PKF☐ which is just PKF in the language $$\mathcal{L}^{\Box}_{PAT}$$ supplemented by two further initial sequents.
#### 3.3.2. Operator Partial Kripke-Feferman
Operator Partial Kripke-Feferman (PKF☐) is just PKF in the language $$\mathcal{L}^{\Box}_{PAT}$$[22] supplemented by two additional initial sequents:
Let $$\mathcal{S}$$ be some modal logic. If some sequent Γ ⇒ ∆ has been derived in PKF☐ using only modal sequents of S then we say that Γ ⇒ ∆ is derivable in PKF☐ assuming the modal logic $$\mathcal{S}$$. Importantly, already in QM, PKF☐ proves the rigidity of arithmetic, i.e., of our theory of syntax. For the proof of this fact it is crucial that, at least for arithmetic formulas, (ND) and (BF) can be derived. Indeed, this was the principal reason for adding (ND) and (BF) to QM.
Fact 3.4. For all $$\phi\in\mathcal{L}_{PA}$$ the following sequents are derivable in PKFassuming QM
We end the discussion of the modal theory and the operator theory of truth by noting that Fact 3.2 carries over to the extended theories. In particular, the sequent
will be provable for all sentences φ of the language under consideration in the corresponding theory.
Next we show that the modal theories can be interpreted in appropriate operator theories of truth, that is we shall carry out the Kripke reduction for the modal theories we have developed.
## 4. The Kripke Reduction
We show that MPKF is syntactically reducible to PKF☐ assuming QM5. More precisely, we show that MPKF is interpretable in PKF☐ assuming the underlying modal logic to be QM5 where the modal predicate 'N' is translated as the modified truth predicate '☐T'. By inspecting the proof of this result it is also easily seen that BMPKF is interpretable in PKF☐ assuming QM.
However, it is worth discussing in some more detail what we are going to show in this section. In section 2 we have argued that the Kripke reduction, if carried out from a proof-theoretic perspective, should preserve derivability within a theory. But this requirement is slightly ambiguous once one deals with theories formulated in partial logic. In the setting of partial logic, one can distinguish between preservation of theoremhood and what one might call preservation of inferences. Due to the deduction theorem these two notions collapse within classical logic. This is not the case within partial logic and we take it that partial logic is foremost a study of permissible inferences rather than the study of partial tautologies. Consequently, we shall show that Halbach and Welch’s translation does not only preserve derivability but also inferences.
Before we state the details of our proposed reduction, we restate Halbach and Welch’s translation function from $$\mathcal{L}_{PATN}$$ to $$\mathcal{L}^{\Box}_{PAT}$$—slightly modified to account for the truth predicate we have added to the language of the modal predicate.
Lemma 4.1 (Halbach and Welch). There is a translation function 𝜌 from formulas of $$\mathcal{L}_{PATN}$$ to formulas of $$\mathcal{L}^{\Box}_{PAT}$$ with the following properties
Here, the function symbol $$\rho^{\bullet}(s)$$ represents the primitive recursive function 𝜌 and thus if s is the name of a formula φ, then $$\rho^{\bullet}(s)$$ will be the name of the formula 𝜌(φ). As an example consider the formula $$N\ulcorner{T\ulcorner{0=0}\urcorner}\urcorner$$. By definition of 𝜌, $$\rho(N\ulcorner{T\ulcorner{0=0}\urcorner}\urcorner)=\Box T\rho^{\bullet}(\ulcorner{T\ulcorner{0=0}\urcorner}\urcorner)$$ and since $$\rho^{\bullet}(\ulcorner{T\ulcorner{0=0}\urcorner}\urcorner)=\ulcorner{\rho(T\ulcorner{0=0}\urcorner)}\urcorner=\ulcorner{T\rho^{\bullet}(\ulcorner{0=0}\urcorner)}\urcorner$$ and $$\rho^{\bullet}(\ulcorner{0=0}\urcorner)=\ulcorner{\rho(0=0)}\urcorner=\ulcorner{0=0}\urcorner$$ we get $$\rho(N\ulcorner{T\ulcorner{0=0}\urcorner}\urcorner)=\Box T\ulcorner{T\ulcorner{0=0}\urcorner}\urcorner$$.
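Spelled out, the properties referred to in Lemma 4.1 amount to something like the following (a reconstruction from the prose description in section 2 and the example just given, not a verbatim statement of the lemma; φ, ψ are formulas and s, t terms of $$\mathcal{L}_{PATN}$$):

$$\rho(s=t)=(s=t),\qquad\rho(Tt)=T\rho^{\bullet}(t),\qquad\rho(Nt)=\Box T\rho^{\bullet}(t),$$

$$\rho(\neg\phi)=\neg\rho(\phi),\qquad\rho(\phi\wedge\psi)=\rho(\phi)\wedge\rho(\psi),\qquad\rho(\forall x\phi)=\forall x\,\rho(\phi).$$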
The reduction or interpretation then amounts to showing that if a formula φ is derivable in MPKF, then so will be the formula 𝜌(φ) in PKF☐ assuming QM5. Before we provide the corresponding theorem we state a lemma that will do the main job in the proof of the theorem.
Lemma 4.2. Let ∆ ⇒ Γ be an initial sequent of MPKF. Then the sequent 𝜌(∆) ⇒ 𝜌(Γ) is derivable in PKFassuming QM5 where 𝜌(A) for some set A is short for {𝜌(φ) : φA}.
We only sketch a proof, as the proof basically consists in performing standard derivations within the sequent calculus we have provided. Note, however, that by Fact 3.4 arithmetic formulas are rigid in PKF☐ assuming QM (and thus QM5). In addition, by formalizing the properties of 𝜌 we may prove in PA
These two facts allow us to deal with the arithmetical conditions appearing in the initial sequents of PKF☐ such as Cterm(x) and Sent(x) in the proof.
Proof sketch. We carry out the proof for the sequents (4′S) and (E′S) in some detail but leave the remaining items to the reader.
(4′S) Using (4S) and (PKF4)(ii) we derive:
Now using $$(IA\Box)(i)$$ we conclude
By (†1), setting x ≐ 𝜌(x), we may derive 𝜌((4′S)), that is
(E′S) Using the initial sequent (ES) of QM5 we derive (*) $$Sent_{\mathcal{L}_{PAT}^\Box}(x),\neg\Box T\underset{\bullet}{T}\dot{x}\Rightarrow\Box T\underset{\bullet}{\neg}\underset{\bullet}{\Box}\underset{\bullet}{T}\dot{x}$$. We may then reason
Now using this together with (†1) and taking x = 𝜌(x) we obtain
which is the desired 𝜌((E′S)). The initial sequent of the left branch of the proof tree is an instance of (PKF4)(i). The right branch is (∗), which we shall now prove. The left branch of the proof tree for (∗) starts with an instance of (IA☐)(ii), the right branch uses an instance of (PKF5)(i):
We may now use (ES) to conclude
One last remark is important before we state the main theorem. The modal sequents (TS), (4S) and (ES) of QM5 are only used in the proof of Lemma 4.2 to establish, respectively, the cases (T′S), (4′S) and (E′S). Therefore, the above lemma carries over directly to the initial sequents of BMPKF with respect to PKF☐ assuming QM.
The following theorem establishes that the Kripke reduction can be carried out from a proof-theoretic perspective. That is, using Halbach and Welch’s translation function the theory MPKF can be interpreted in PKF☐ assuming QM5.
Theorem 4.3. Let Γ ⇒ ∆ be a sequent derivable in MPKF. Then the sequent 𝜌(Γ) ⇒ 𝜌(∆) is derivable in PKFassuming the modal logic QM5. Again 𝜌(A) for some set A is short for {𝜌(φ) : φA}.
Proof. By induction on the length of a derivation in MPKF we show that if a sequent Γ ⇒ ∆ is derivable in MPKF then so is the sequent 𝜌(Γ) ⇒ 𝜌(∆) in PKF☐ assuming QM5. The start of the induction is immediate by the Lemma 4.2. For the induction step we show that derivability in PKF☐ is preserved if the rule (RN) is applied. Since the remaining rules of BMPKF are also rules of PKF☐ this will end our proof. Now as our induction hypothesis we may assume that $$\rho(T\ulcorner\Gamma\urcorner)\Rightarrow\rho(Tt),\rho(\neg T\ulcorner\Delta\urcorner)$$ is derivable in PKF☐. By the properties of 𝜌 this is just $$T\ulcorner\rho(\Gamma)\urcorner\Rightarrow T\rho^\bullet(t),\neg T\ulcorner\rho(\Delta)\urcorner$$. But then using the rule (R☐) we may derive the sequent $$\Box T\ulcorner\rho(\Gamma)\urcorner\Rightarrow\Box T\rho^\bullet(t),\neg\Box T\ulcorner\rho(\Delta)\urcorner$$, which by the properties of 𝜌 is just $$\rho(N\ulcorner\Gamma\urcorner)\Rightarrow\rho(Nt),\rho(\neg N\ulcorner\Delta\urcorner)$$.
The theorem establishes that inferences of MPKF are preserved in PKF☐ assuming QM5 modulo translation. This immediately implies that theoremhood is also preserved modulo translation. We may state this simpler claim as corollary of Theorem 4.3 by writing $$\Sigma\vdash\phi$$ ($$\Sigma\vdash_{\mathcal{S}}\phi$$) if the sequent ⇒ φ is derivable in a theory Σ (assuming the modal logic $$\mathcal{S}$$).
Corollary 4.4. For all $$\phi\in\mathcal{L}_{PATN}$$: if MPKF $$\vdash\phi$$, then PKF☐ $$\vdash_{QM5}\rho(\phi)$$.
The corollary and the foregoing theorem establish that MPKF is “Kripke-reducible” to PKF☐ assuming QM5. In other words the modal predicate of MPKF can be understood as a modified truth predicate as suggested by Kripke. It remains to substantiate our claim that our Kripke reduction can be viewed as the proof-theoretic counterpart to the semantic reduction by Halbach and Welch.
## 5. Modal Fixed-Point Semantics
We now give the semantics for $$\mathcal{L}_{PATN}$$ and $$\mathcal{L}_{PAT}^{\Box}$$, which are slight variants of the semantics introduced by Halbach and Welch. Both semantics combine the strategy employed by Kripke (1975) in formulating his theory of truth with ideas from possible world semantics for modal operator logic. The two semantics have the same structure, are both based on the strong Kleene scheme and only diverge in their respective clause for the modal notion. The key concepts of the two semantics are the notions of a frame and of an evaluation function, a function that assigns to each possible world a subset of ω—the interpretation of the truth predicate.
Definition 5.1 (Frame, evaluation function). Let W ≠ ∅ be a set of natural number structures and R ⊆ W × W a dyadic relation on W. Then F = 〈W, R〉 is called a frame. A function f : W → P(ω) is called an evaluation function for a frame F. The set of all evaluation functions of a frame F is denoted by ValF.
Worlds have a two-fold role to play in the present framework. Firstly, they serve as parameters in the definition of the notion of “truth in a model” but secondly, they already provide the interpretation of the arithmetical vocabulary. The frame and the evaluation function are then employed to provide an interpretation of the semantic vocabulary, i.e., the truth predicate and the modal notion.
The previous remarks should become clearer by the definition of a model for $$\mathcal{L}_{PATN}$$ induced by a frame and an evaluation function at a world w. We shall not explicitly mention the antiextension of the truth predicate in our presentation of the models since it can be taken to consist of the negations of the sentences in the truth predicate’s extension. However, we need to explicitly provide the antiextension of the necessity predicate.
Definition 5.2 (Models for $$\mathcal{L}_{PATN}$$). Let F be a frame, f an evaluation function and $$\overline{f(w)}$$ the set $$\{\#\neg\phi:\#\phi\in f(w)\}$$ for arbitrary w. Furthermore, set

$$Y^{+}_{w}=\bigcap_{v\in[wR]}f(v)\qquad\text{and}\qquad Y^{-}_{w}=\bigcup_{v\in[wR]}\overline{f(v)},$$

with [wR] being an abbreviation for {v ∈ W : wRv}. Then $$M_{w}=\left\langle w, f(w),Y^{+}_{w},Y^{-}_{w}\right\rangle$$, where w ∈ W is a natural number structure, f(w) is the extension of the truth predicate and $$Y^{+}_{w}$$ ($$Y^{-}_{w}$$) the (anti-)extension of the necessity predicate, is a model of the language $$\mathcal{L}_{PATN}$$ induced by F and f at a world w.[23] If it is important to keep track of the frame and the evaluation function, we write F, $$w\models_{SK}^{f}\phi$$ instead of $$M_{w}\models_{SK}\phi$$.
The necessity predicate is thus interpreted as truth in all accessible worlds, but where the relevant notion of truth is not the metalinguistic notion but the notion of truth of $$\mathcal{L}_{PATN}$$ itself. As a consequence the antiextension of the necessity predicate consists of those sentences whose negation is true in at least one accessible world. We now specify the notion of truth in a model M at a world w according to the modal strong Kleene scheme.
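Spelled out in the notation of the following definition, this amounts to clauses of roughly the following form for the modal predicate, where $$t^{\mathbb{N}}$$ is the value of the term t in the standard model (a reconstruction in the spirit of the surrounding definitions rather than a quotation of Definition 5.3):

$$F,w\models^{f}_{SK}Nt\;\Leftrightarrow\;t^{\mathbb{N}}\in Y^{+}_{w}\qquad\text{and}\qquad F,w\models^{f}_{SK}\neg Nt\;\Leftrightarrow\;t^{\mathbb{N}}\in Y^{-}_{w}.$$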
Definition 5.3 (Modal Strong Kleene Truth in a Model). Let F be a frame and f an evaluation function. We define the notion of truth in a model induced by the frame F and the evaluation function f at a world w according to the strong Kleene scheme, $$\models_{SK}$$, by
$$t^{\mathbb{N}}$$ is the interpretation of a term t in the standard model and $$\overline{Sent_{\mathcal{L}_{PAT}}}$$ denotes the set of natural numbers that are not codes of sentences under the coding scheme assumed. We say a formula φ is true in the model induced by the frame F and the evaluation function f, $$(F,f)\models_{SK}\phi$$, iff $$\forall w(F,w\models^{f}_{SK}\phi)$$.
The semantics for the language $$\mathcal{L}_{PAT}^{\Box}$$ differs from the semantics we have just presented in the way the modal notion is interpreted. Since we are now dealing with an operator instead of a predicate, the interpretation is no longer given by an extension and antiextension but by the standard clause known from possible world semantics.
Definition 5.4 (Models for $$\mathcal{L}_{PAT}^{\Box}$$). Let F be a frame and f ∈ ValF an evaluation function on F. Then the tuple (F, f) is called a model of $$\mathcal{L}_{PAT}^{\Box}$$.
We define the notion of truth in a model according to the operator strong Kleene scheme.
Definition 5.5 (Operator Strong Kleene Truth in a Model). Let F be a frame and f an evaluation function. The truth of a formula φ of $$\mathcal{L}_{PAT}^{\Box}$$ in the model (F, f ) at a world w according to operator strong Kleene truth, F, $$w\models^f_{SK\Box}\phi$$, is defined by clauses (i)-(iv) and (vii)-(xi) of Definition 5.3 but where the clauses (v) and (vi) of Definition 5.3 are replaced by
We say that φ is true in the model (F, f), (F, f) $$\models_{SK\Box}$$ φ, iff for all w ∈ W (F, w $$\models^f_{SK\Box}$$ φ).
Following the outlines of Halbach and Welch we next define operations on evaluation functions called the modal strong Kleene jump and the operator strong Kleene jump.
Definition 5.6 (Modal and Operator Strong Kleene Jump). Let F be a frame and ValF the set of evaluation functions of F. The modal strong Kleene jump (operator strong Kleene jump) $$\Theta_{F}$$ ($$\Theta^{\Box}_{F}$$) is an operation on ValF relative to F such that for all f ∈ ValF and all w ∈ W

$$[\Theta_{F}(f)](w)=\{\#\phi : F,w\models^{f}_{SK}\phi\}\qquad\bigl([\Theta^{\Box}_{F}(f)](w)=\{\#\phi : F,w\models^{f}_{SK\Box}\phi\}\bigr).$$
Importantly, the modal and the operator strong Kleene jump will be monotone operations with respect to an ordering ≤:
Lemma 5.7 (Monotonicity). Let F be a frame. The jump $$\Theta_{F}$$ ($$\Theta^{\Box}_{F}$$) is a monotone operation on ValF, i.e. for all f, g ∈ ValF:

$$f\le g\;\Rightarrow\;\Theta_{F}(f)\le\Theta_{F}(g)\qquad\bigl(f\le g\;\Rightarrow\;\Theta^{\Box}_{F}(f)\le\Theta^{\Box}_{F}(g)\bigr),$$

where f ≤ g :⇔ ∀w ∈ W (f(w) ⊆ g(w)).
Proof. Clearly, for all evaluation functions f, g with f(w) ⊆ g(w) for all w ∈ W we have [ΘF(f)](w) ⊆ [ΘF(g)](w) and thus ΘF(f) ≤ ΘF(g). Similarly for $$\Theta^{\Box}_{F}$$.
The monotonicity of $$\Theta_{F}$$ (and of $$\Theta^{\Box}_{F}$$) implies, assuming standard set theory, the existence of fixed points, i.e., the existence of evaluation functions f, g ∈ ValF such that

$$\Theta_{F}(f)=f\qquad\text{and}\qquad\Theta^{\Box}_{F}(g)=g.$$
As we shall show in the next section, exactly these fixed points induce models for BMPKF and PKF☐ assuming the underlying modal logic to be QM.
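As a small, informal illustration of why monotonicity yields such fixed points on finite frames, the following Python sketch iterates an arbitrary monotone jump, supplied as a function on evaluation functions, starting from the empty evaluation until it stabilizes. The toy jump used in the example is invented purely for illustration; it is not the strong Kleene jump, which would require implementing the satisfaction clauses above.

```python
def least_fixed_point(jump, worlds):
    """Iterate a monotone jump on evaluation functions (world -> set of codes),
    starting from the empty evaluation.  On a finite frame with finitely many
    candidate codes the iteration stabilizes at the least fixed point."""
    f = {w: frozenset() for w in worlds}
    while True:
        g = jump(f)
        if g == f:
            return f
        f = g

# A toy monotone jump (NOT the strong Kleene jump): every world keeps what it
# already has and adds the code 0; world "a" additionally inherits the codes
# of the accessible world "b", which also learns the code 1.
def toy_jump(f):
    return {
        "a": f["a"] | {0} | f["b"],
        "b": f["b"] | {0, 1},
    }

fixed = least_fixed_point(toy_jump, ["a", "b"])
assert fixed == {"a": frozenset({0, 1}), "b": frozenset({0, 1})}
```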
## 6. The Adequacy of MPKF and PKF☐ relative to Modal Fixed-Point Semantics
In this section we show that the two theories we have constructed are intimately related to the corresponding modal fixed-point semantics. However, we should warn the reader that we do not attempt to connect the notion of truth in modal fixed-point semantics with the notion of derivability in MPKF or PKF☐, as one might expect if one thinks of adequacy as entailing the completeness of some proof system. Such a result is excluded by the fact that in modal fixed-point semantics the standard model of arithmetic is assumed to be the base model at every world. Thus every true arithmetical sentence will be true in modal fixed-point semantics but of course not all of them can be proved in the modal theories because of Gödel’s incompleteness theorem.[24]
Rather, our notion of adequacy connects the models of our theories to the models of modal fixed-point semantics. It asserts that the fixed-point models that we obtain relative to so-called equivalence frames are exactly the models of MPKF and PKF☐ assuming QM5 respectively.[25] Thus only the fixed-points of the modal and the operator Kripke jump respectively give rise to models of MPKF and PKF☐ assuming QM5 respectively. Philosophically speaking, one might take such a result to establish that fixed-point models are precisely the intended models of the theories. Moreover, this adequacy result also has the nice consequence that the notion of truth in modal fixed-point semantics corresponds to the object-linguistic notion of truth as characterized by MPKF and PKF☐ respectively—the theories we have developed force these two notions to correspond. By this we mean that if we can prove $$T\ulcorner\phi\urcorner$$ for some sentence φ in MPKF (PKF☐ assuming QM5) then φ will be true in each SK (SK☐) fixed-point model at every world.[26]
Before we provide the relevant results to this effect, we introduce some auxiliary terminology. Let Σ be some theory. We write F, $$w\models^{f}_{\gamma}\Sigma$$ for γ ∈ {SK, SK☐} iff for all Σ-derivable sequents Γ ⇒ ∆ and variable assignments β:
• if F, $$w\models^{f}_{\gamma}\phi[\beta]$$ for all φ ∈ Γ, then there exists a ψ ∈ Δ such that F, $$w\models^{f}_{\gamma}\psi[\beta]$$
• if F, $$w\models^{f}_{\gamma}\neg\psi[\beta]$$ for all ψ ∈ Δ, then there exists a φ ∈ Γ such that F, $$w\models^{f}_{\gamma}\neg\phi[\beta]$$
We assume β to be constant across worlds and assume the SK-satisfaction relation to be defined in the expected way. Finally, let $$\mathcal{S}$$ be some modal logic. We take PKF$$\Box_{\mathcal{S}}$$ to be the set of all PKF☐ derivable sequents assuming the modal logic $$\mathcal{S}$$.
We state the key result, which establishes that the fixed-point models we obtain relative to arbitrary frames are exactly the models of BMPKF and PKF☐ respectively.
Theorem 6.1. Let F be a frame and f ∈ ValF an evaluation function. Then $$\Theta_{F}(f)=f$$ iff F, $$w\models^{f}_{SK}$$ BMPKF for all w ∈ W; and $$\Theta^{\Box}_{F}(f)=f$$ iff F, $$w\models^{f}_{SK\Box}$$ PKF$$\Box_{QM}$$ for all w ∈ W.
Proof. For the left-to-right direction we need to show that every BMPKF (PKF☐ assuming QM) derivable sequent preserves truth in the respective fixed-point models. The proof is a routine induction on the length of a proof and is left to the reader. For the converse direction we assume the right-hand side. It then suffices to show for an arbitrary world w that (i) $$[\Theta_{F}(f)](w)=f(w)$$ and (ii) $$[\Theta^{\Box}_{F}(f)](w)=f(w)$$. (PKF6) tells us that there will be only sentences in the extension of the truth predicate.[27] We may thus establish our claim by an induction on the positive complexity of the members of (i) $$[\Theta_{F}(f)](w)$$ and (ii) $$[\Theta^{\Box}_{F}(f)](w)$$. The proof is routine and for both (i) and (ii) we only check one of the modal cases for the sake of illustration.
• (i) We discuss the case where φ $$\doteq$$ Nt. The reader may check the remaining cases.
• (ii) We discuss the case φ $$\doteq\Box\psi$$ and assume that the induction hypothesis holds for ψ. The remaining cases are again left to the reader.
Theorem 6.1 tells us that the standard models of BMPKF and PKF☐ (assuming QM) are generated exactly by the fixed-points of the modal and the operator strong Kleene jump relative to arbitrary frames. More importantly, we may obtain similar adequacy results for the theory MPKF and (unsurprisingly) for PKF☐ assuming alternative modal logics if we restrict our attention to classes of modal frames which meet certain properties.
Definition 6.2. Let F be a frame and Σ some theory. If for all evaluation functions f with $$\Theta_{F}(f)=f$$ ($$\Theta_{F}^{\Box}(f)=f$$) we have $$\forall w(F,w\models^f_{SK}\Sigma)$$ ($$\forall w(F,w\models^f_{SK\Box}\Sigma)$$), then we write $$F\models_{SK}\Sigma$$ ($$F\models_{SK\Box}\Sigma$$).
Theorem 6.3. Let F = 〈W,Rbe a frame. Then
Proof sketch. The left-to-right direction works as in possible world semantics for modal operator logic. For the converse direction we consider case (i) and assume ¬wRw. Moreover, let τ be the truth teller.[28] We observe that there is a fixed point, that is, an evaluation function f, such that τ ∉ f(w) but τ ∈ f(v) for all v with wRv. Therefore, we have F, $$w\models_{SK}^{f}N\ulcorner\tau\urcorner$$ but $$F,w\not\models^{f}_{SK} T\ulcorner\tau\urcorner$$ which contradicts the sequent (T′S). Similarly for the remaining items.
Corollary 6.4. Let F = 〈W,Rbe a frame. Then
Similarly, as a consequence of well-known facts about possible world semantics for modal operator logic we get the parallel results for our modalized theory of truth. We only mention the adequacy result for PKF☐ when the modal logic QM5 is assumed.
Theorem 6.5. Let F = 〈W,Rbe a frame. Then
A final trivial corollary of the adequacy results we have put forward is the consistency of all the theories we have constructed. Again we only explicitly mention two consistency results as the remaining consistency results are immediately implied by those mentioned.
Corollary 6.6. MPKF and PKF☐ assuming QM5 are consistent.
This concludes the investigation of the relation between the modal theories and modal fixed-point semantics. We have shown that the fixed-points of the two jump operations are exactly the evaluation functions that give rise to models of the respective theories, if frames with the suitable property are assumed. This shows that there is, to say the least, a close connection between the theories we introduced and modal fixed-point semantics, as laid out by Halbach and Welch. Due to this close tie between the theories and the semantics employed in Halbach and Welch’s reduction, the proof-theoretic version of the reduction we have carried out may rightly be considered as a proof-theoretic counterpart to Halbach and Welch’s work.[29]
## 7. Conclusion
Summing up, we have constructed modal theories and operator truth theories formulated in the logic of FDE. We showed that the modal theories were “Kripke-reducible” to the operator truth theories, that is, the modal theory can be interpreted in the operator truth theory if we assume an appropriate modal logic where the modal predicate is translated as the truth predicate modified by the modal operator. Finally, we showed that the theories we constructed match the semantics Halbach and Welch used to carry out their semantic Kripke reduction. We take this to establish that the semantic reduction of Halbach and Welch has a natural proof-theoretic counterpart. Moreover, the family of theories we constructed is rather flexible with respect to the modal properties assumed, which in turn suggests that a reducible modal theory will be found for almost any set of modal properties one wishes to adopt. The critical reader might draw attention to the fact that we have not allowed for contingent vocabulary in our language and have thereby failed to provide a philosophically interesting study of modality. However, in this context nothing but further complication would arise if we had allowed contingent vocabulary in our language. All the results we have provided carry over with minor modification when contingent vocabulary is introduced into the picture and the absence of contingent vocabulary should therefore not count against our proposal.[30]
Let us now turn to the philosophical evaluation of our work. To start with we consider the view that understands the Kripke reduction along the lines of Halbach and Welch, that is, as an argument in favor of the operator approach. According to this view, we have argued, the proponent of the predicate approach will be under considerable pressure to revise her treatment of modal notions in favor of an operator treatment of modality. The idea is that it no longer suffices to reject possible worlds because the syntactic reduction we have carried out shows that conceiving of the modal predicate as a truth predicate modified by a modal operator is equally warranted from a proof-theoretic perspective. The proponent of the predicate approach might reply that the modal theories we have constructed do not capture the kind of modal predicate she has in mind and that, indeed, the success of the Kripke reduction is nothing but a case in point. We would be more than happy to concede this if the proponent of the predicate approach were to provide such an alternative modal theory, one that fails to be Kripke reducible. However, so far, very little progress in the direction of developing such a theory has been made, and the proponent of the operator approach has every right to be skeptical as to whether a philosophically attractive theory of this kind is forthcoming. So while our proof-theoretic addition to Halbach and Welch's reduction might not seal the case for operator approaches, it forces the proponent of the predicate approach to put her own non-reducible theory of modality on the table.
There is a further, more subtle argument the proponent of the predicate approach may bring forward in her defense, which highlights a slight asymmetry between Halbach and Welch's semantic reduction and our proof-theoretic counterpart: while in the semantic case the converse direction of the reduction holds as well, this need not be true in the proof-theoretic case. Indeed, to date we don't know whether a formula is derivable in MPKF if its translation is derivable in PKF☐ assuming QM5. In other words we don't know whether 𝜌 faithfully interprets MPKF in PKF☐ assuming QM5. Now, assuming that there was a sentence for which this implication would fail, the proponent of the predicate approach could argue that one of the advantages of the predicate approach was precisely that it refrained from asserting this particular sentence.[31] While this is certainly a coherent position, it does not strike us as a very plausible one. First, it hinges on the assumption that the converse direction of the syntactic reduction does not hold, which is highly speculative, as we don't see a principled reason why MPKF should not be faithfully interpretable in PKF☐ assuming QM5. But let us assume for the sake of argument that MPKF cannot be faithfully interpreted in PKF☐ assuming QM5. Still, it seems to us that precisely because there is no principled reason speaking against the converse direction in the case of MPKF, an alternative theory that can be faithfully interpreted in PKF☐ assuming QM5 is unlikely to be very different in character from MPKF.[32]
The question then arises why the proponent of the predicate approach is inclined to accept MPKF as a philosophically attractive theory of modality, while the alternative theory fails to be attractive on principled grounds. We don’t see how this question can be answered in a plausible and satisfactory way. Nonetheless, a result establishing the converse direction of the proposed reduction is clearly desirable.
In sum, it seems that if we follow the dialectics of Halbach and Welch and view the expressive strength of the predicate approach as the sole argument in favor of conceiving of modalities as predicates, then the syntactic and semantic Kripke reduction taken together make a strong case in favor of the operator approach. However, already in the introduction of this paper we have pointed toward further reasons why the predicate approach may be preferable and, presumably, the proponent of the predicate approach will have further such reasons for her preferred approach, other than its expressive strength. On this reading the Kripke reduction serves as an argument in favor of predicate approaches to modality because it shows that we can develop workable and philosophically adequate approaches to modality. Viewed in this way Halbach and Welch’s work shows how to obtain an attractive semantic theory of modal predicates, whereas our work establishes this point from a proof-theoretic perspective and, in addition, shows that the axiomatic and the semantic theory fit nicely together. Indeed this interpretation of the Kripke reduction seems to be more in line with Kripke (1975) himself who viewed his proposal as a vindication of predicate approaches to modality against the backdrop of liar-like paradoxes that threaten these approaches. Kripke writes:
We can even “kick away the ladder” and take Nec(x) as primitive, treating it in a possible world scheme as if it were defined by an operator plus the truth predicate. (Kripke 1975: 713-14)
Now, this is essentially what happens in Halbach and Welch’s semantics for the language of the modal predicate and, by the same token, by introducing the theory MPKF we have shown that the ladder can be also kicked away from a proof-theoretic perspective. Moving on from there the proponent of the predicate approach could even try to provide theories of modality in which the modal predicate can no longer be viewed as a modified truth predicate.
## Acknowledgments
This paper was presented at “The Truth to Be Told Again”-conference in Amsterdam and the “Truth and Paradox”-workshop in Munich. I wish to thank the audiences of both venues for helpful comments. I would also like to thank Catrin Campbell-Moore and Martin Fischer for detailed comments on versions of this paper. Finally, I thank two anonymous referees of this journal for their very helpful comments and for catching some mistakes of mine.
My research was funded by the DFG project “Syntactical Treatments of Interacting Modalities” and the Alexander von Humboldt Foundation.
## References
• Dunn, J. Michael (1995). Positive Modal Logic. Studia Logica, 55(2), 301-317. http://dx.doi.org/10.1007/BF01061239
• Feferman, Saul (2000). Does Reductive Proof Theory Have a Viable Rationale? Erkenntnis, 53(2), 63-96. http://dx.doi.org/10.1023/A:1005622403850
• Field, Hartry (2008). Saving Truth from Paradox. Oxford University Press. http://dx.doi.org/10.1093/acprof:oso/9780199230747.001.0001
• Fischer, Martin, Volker Halbach, Jönne Kriener, and Johannes Stern (2015). Axiomatizing Semantic Theories of Truth? The Review of Symbolic Logic, 8(2), 257-278. http://dx.doi.org/10.1017/S1755020314000379
• Halbach, Volker (2011). Axiomatic Theories of Truth. Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511921049
• Halbach, Volker, and Leon Horsten (2006). Axiomatizing Kripke’s Theory of Truth. The Journal of Symbolic Logic, 71(2), 677-712. http://dx.doi.org/10.2178/jsl/1146620166
• Halbach, Volker, and Philip Welch (2009). Necessities and Necessary Truths: A Prolegomenon to the Use of Modal Logic in the Analysis of Intensional Notions. Mind, 118(469), 71-100. http://dx.doi.org/10.1093/mind/fzn030
• Jaspars, Jan, and Elias Thijsse (1996). Fundamentals of Partial Modal Logic. In Patrick Doherty (Ed.), Partiality, Modality and Nonmonotonicity (111-141). CSLI Publications.
• Koellner, Peter (2015). Gödel’s Disjunction. In Leon Horsten and Philip Welch (Eds.), The Scope and Limits of Arithmetical Knowledge. (forthcoming)
• Kremer, Michael (1988). Kripke and the Logic of Truth. Journal of Philosophical Logic, 17(3), 225-278. http://dx.doi.org/10.1007/BF00247954
• Kripke, Saul (1975). Outline of a Theory of Truth. The Journal of Philosophy, 72(19), 690-716. http://dx.doi.org/10.2307/2024634
• Montague, Richard (1963). Syntactical Treatments of Modality, with Corollaries on Reflexion Principles and Finite Axiomatizability. Acta Philosophica Fennica, 16, 153-167.
• Niebergall, Karl-Georg (2000). On the Logic of Reducibility: Axioms and Examples. Erkenntnis, 53(2), 27-61.
• Scott, Dana (1975). Combinators and Classes. In C. Böhm (Ed.), λ-Calculus and Computer Science (1-26). Springer. http://dx.doi.org/10.1007/BFb0029517
• Stern, Johannes (2014a). Modality and Axiomatic Theories of Truth I: Friedman-Sheard. The Review of Symbolic Logic, 7(2), 273-298. http://dx.doi.org/10.1017/S1755020314000057
• Stern, Johannes (2014b). Modality and Axiomatic Theories of Truth II: Kripke-Feferman. The Review of Symbolic Logic, 7(2), 299–318. http://dx.doi.org/10.1017/S1755020314000069
• Stern, Johannes (2014c). Montague’s Theorem and Modal Logic. Erkenntnis, 79(3), 551-570. http://dx.doi.org/10.1007/s10670-013-9523-7
• Tarski, Alfred, Andrzej Mostowski, and Raphael Robinson (1953). Undecidable Theories. North-Holland Publications.
## Notes
1. See Montague (1963) for a statement and proof of Montague’s theorem and Stern (2014c) for a discussion of Montague’s result.
2. One might argue that while the proponent of the predicate approach need not accept possible world semantics as her preferred semantics she should be happy to accept possible worlds for instrumental reasons. We think the proponent of the predicate approach has every reason to resist this kind of argument because at least prima facie it is unclear whether the modal theory she has in mind will be sound with respect to possible world semantics.
3. I would like to thank an anonymous referee for stressing this point.
4. In particular, we think one should not criticize the reduction by blaming the translation function to be unprincipled or counterintuitive. This also holds for our syntactic reduction since we use the same translation function as Halbach and Welch.
5. See Niebergall (2000) and Feferman (2000) for a discussion of concepts of reducibility.
6. However, there is one decisive difference between (SR) and (PR). In the case of (SR) the converse direction of the implication will hold as well. In other words the translation is faithful in this case. This might not be so in the case of (PR). Indeed to date we do not know whether this holds for the syntactic reduction we propose. We shall come back to this issue in the conclusion of the paper.
7. One might rightly point out that while standard first-order semantics is an apt semantics for the language of the modal predicate it does not tell us which subsets of the domain of the model are apt interpretations of the modal predicate.
8. Stern (2014a) develops a general strategy for developing modal theories that can be understood in this way.
9. As a matter of fact the evaluation scheme is not only partial but also paraconsistent. In this paper we use the word “partial” carelessly to stand for both.
10. See Halbach and Horsten (2006) for further arguments in favor of axiomatizing Kripke’s theory of truth in partial logic.
11. See Halbach (2011) for a discussion of classical axiomatizations of Kripke’s theory of truth.
12. See Stern (2014b) for a discussion of this problem and a classical modal theory with a primitive possibility predicate. The Kripke reduction can still be carried out for this theory despite these pathologies.
13. See Stern (2014a) for a presentation and discussion of the strategy. This strategy has been independently applied by Koellner (2015).
15. We also use the symbols ⇒ and ⇔ to convey the notions of metalinguistic or semantic implication and equivalence. But the intended use of these symbols should be clear from context.
16. It is very plausible that the reduction we are attempting could be carried out if alternative partial logic were adopted. In particular, most of what we say should also hold modulo some necessary modifications of calculus and semantics for the logics S3, LP and K3.
17. This also means that the concepts of syntax such as ‘Sent’ should now be read as concepts of the syntax of $$\mathcal{L}_{PATN}$$. That is, Sent(x) should now be read as $$Sent_{\mathcal{L}_{PATN}}(x)$$.
18. As a matter of fact most of the initial sequents can be omitted. As far as we can see, only (N1), (N3), (N7)(i) and (RN) are required. However, a proof of this fact is rather lengthy and complicated and would take us too far afield. In a nutshell one first has to prove $$\phi(x_1,\ldots,x_n)\Rightarrow N\ulcorner{\phi(\dot{x}_1,\ldots,\dot{x}_n)}\urcorner$$ for arithmetic φ. Then one can derive the redundant initial sequents using the axioms of PKF and the rule (RN).
19. BMPKF proves to be quite important for establishing the link between the modal theories and the semantics we shall discuss shortly.
20. Notice in contrast to the operator language provided by Halbach and Welch our formation rules are standard. In particular, we allow for the application of the modal operator to open formulas.
21. If we were to attempt the Kripke reduction for theories that are based on some non-symmetric logic such as K3 or LP, then we would also need to assume the rule
In FDE the rule can be derived from (R☐) using the rule (¬). See Jaspars and Thijsse (1996) and Dunn (1995) for more on partial modal logics.
22. Again the concepts of syntax appearing in the initial sequents are now thought of as concepts of the language $$\mathcal{L}^{\Box}_{PAT}$$.
23. Here ⋂ and ⋃ are taken to be operations on P(ω).
24. Even if we allowed for alternative base models, it is unlikely that we would obtain an interesting "completeness" result. See Fischer, Halbach, Kriener, and Stern (2015) for a discussion of these and related issues.
25. In the terminology of Fischer et al. (2015) our adequacy result shows that MPKF and PKF☐ are N-categorical theories relative to equivalence frames.
26. Similarly, if the theory proves $$N\ulcorner\phi\urcorner$$ then φ will be true in the model in all the worlds accessible from some world.
27. Note that, as we are working within the standard model, we only need to deal with standard sentences.
28. That is, a sentence τ for which we can prove $$T\ulcorner\tau\urcorner\Leftrightarrow\tau$$ using only the logical and arithmetical initial sequents of PKF in the language $$\mathcal{L}_{PAT}$$.
29. The skeptic may argue that we have failed to axiomatize Halbach and Welch’s semantics for their proposal is based on the least fixed point whereas we allow for arbitrary fixed points. Yet, if we allow for the least fixed point only, no axiomatization of the semantics in the sense of Theorem 6.1 is forthcoming due to complexity issues. See again Fischer et al. (2015) for a discussion of these issues.
Also, even though the reduction of Halbach and Welch is carried out assuming the least fixed point, we do not think that this is actually of crucial importance: the reduction will go through for arbitrary fixed points as long as we make some suitable specifications and adjustments to the construction.
30. See Stern (2014a) for hints on how to integrate contingent vocabulary.
31. I thank an anonymous referee for raising this worry. I also owe the formulation of the worry to this referee. It is also worth noting how a proponent of the predicate approach should not argue: she should not argue that we have failed to provide an attractive theory of modality because the modal theory is not faithfully interpretable in the operator truth theory. She should not argue in this way since by doing so she would accept the understanding of the modal predicate as the modified truth predicate of PKF☐ and thus implicitly accept the reducibility claim.
32. The set of sentences in the range of 𝜌 that are provable in PKF☐ assuming QM5 is recursively enumerable. Therefore there will be an axiomatizable modal theory that is faithfully interpretable in PKF☐ assuming QM5.
src/Doc/Sledgehammer/document/root.tex
\documentclass[a4paper,12pt]{article}
\usepackage{lmodern}
\usepackage[T1]{fontenc}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage[english]{babel}
\usepackage{color}
\usepackage{footmisc}
\usepackage{graphicx}
%\usepackage{mathpazo}
\usepackage{multicol}
\usepackage{stmaryrd}
%\usepackage[scaled=.85]{beramono}
\usepackage{isabelle,iman,pdfsetup}

\newcommand\download{\url{http://isabelle.in.tum.de/components/}}

\let\oldS=\S
\def\S{\oldS\,}

\def\qty#1{\ensuremath{\left<\mathit{#1\/}\right>}}
\def\qtybf#1{$\mathbf{\left<\textbf{\textit{#1\/}}\right>}$}

\newcommand\const[1]{\textsf{#1}}

%\oddsidemargin=4.6mm
%\evensidemargin=4.6mm
%\textwidth=150mm
%\topmargin=4.6mm
%\headheight=0mm
%\headsep=0mm
%\textheight=234mm

\def\Colon{\mathord{:\mkern-1.5mu:}}
%\def\lbrakk{\mathopen{\lbrack\mkern-3.25mu\lbrack}}
%\def\rbrakk{\mathclose{\rbrack\mkern-3.255mu\rbrack}}
\def\lparr{\mathopen{(\mkern-4mu\mid}}
\def\rparr{\mathclose{\mid\mkern-4mu)}}

\def\unk{{?}}
\def\undef{(\lambda x.\; \unk)}
%\def\unr{\textit{others}}
\def\unr{\ldots}
\def\Abs#1{\hbox{\rm{\flqq}}{\,#1\,}\hbox{\rm{\frqq}}}
\def\Q{{\smash{\lower.2ex\hbox{$\scriptstyle?$}}}}

\urlstyle{tt}

\renewcommand\_{\hbox{\textunderscore\kern-.05ex}}

\begin{document}
%%% TYPESETTING
%\renewcommand\labelitemi{$\bullet$}
\renewcommand\labelitemi{\raise.065ex\hbox{\small\textbullet}}

\title{\includegraphics[scale=0.5]{isabelle_sledgehammer} \\[4ex]
Hammering Away \\[\smallskipamount]
\Large A User's Guide to Sledgehammer for Isabelle/HOL}
\author{\hbox{} \\
Jasmin Christian Blanchette \\
{\normalsize Institut f\"ur Informatik, Technische Universit\"at M\"unchen} \\[4\smallskipamount]
{\normalsize with contributions from} \\[4\smallskipamount]
Lawrence C. Paulson \\
{\normalsize Computer Laboratory, University of Cambridge} \\
\hbox{}}

\maketitle

\tableofcontents

\setlength{\parskip}{.7em plus .2em minus .1em}
\setlength{\parindent}{0pt}
\setlength{\abovedisplayskip}{\parskip}
\setlength{\abovedisplayshortskip}{.9\parskip}
\setlength{\belowdisplayskip}{\parskip}
\setlength{\belowdisplayshortskip}{.9\parskip}

% general-purpose enum environment with correct spacing
\newenvironment{enum}%
{\begin{list}{}{%
\setlength{\topsep}{.1\parskip}%
\setlength{\partopsep}{.1\parskip}%
\setlength{\itemsep}{\parskip}%
\advance\itemsep by-\parsep}}
{\end{list}}

\def\pre{\begingroup\vskip0pt plus1ex\advance\leftskip by\leftmargin
\advance\rightskip by\leftmargin}
\def\post{\vskip0pt plus1ex\endgroup}

\def\prew{\pre\advance\rightskip by-\leftmargin}
\def\postw{\post}
\section{Introduction}
\label{introduction}

Sledgehammer is a tool that applies automatic theorem provers (ATPs)
and satisfiability-modulo-theories (SMT) solvers on the current goal.%
\footnote{The distinction between ATPs and SMT solvers is convenient but mostly
historical. The two communities are converging, with more and more ATPs
supporting typical SMT features such as arithmetic and sorts, and a few SMT
solvers parsing ATP syntaxes. There is also a strong technological connection
between instantiation-based ATPs (such as iProver and iProver-Eq) and SMT
solvers.}
%
The supported ATPs are AgsyHOL \cite{agsyHOL}, Alt-Ergo \cite{alt-ergo}, E
\cite{schulz-2002}, E-SInE \cite{sine}, E-ToFoF \cite{tofof}, iProver
\cite{korovin-2009}, iProver-Eq \cite{korovin-sticksel-2010}, LEO-II
\cite{leo2}, Leo-III \cite{leo3}, Satallax \cite{satallax}, SNARK \cite{snark}, SPASS
\cite{weidenbach-et-al-2009}, Vampire \cite{riazanov-voronkov-2002},
Waldmeister \cite{waldmeister}, and Zipperposition \cite{cruanes-2014}.
The ATPs are run either locally or remotely via the System\-On\-TPTP web service
\cite{sutcliffe-2000}. The supported SMT solvers are CVC3 \cite{cvc3}, CVC4
\cite{cvc4}, veriT \cite{bouton-et-al-2009}, and Z3 \cite{z3}. These are always
run locally.

The problem passed to the external provers (or solvers) consists of your current
goal together with a heuristic selection of hundreds of facts (theorems) from the
current theory context, filtered by relevance.

The result of a successful proof search is some source text that usually (but
not always) reconstructs the proof within Isabelle. For ATPs, the reconstructed
proof typically relies on the general-purpose \textit{metis} proof method, which
integrates the Metis ATP in Isabelle/HOL with explicit inferences going through
the kernel. Thus its results are correct by construction.

For Isabelle/jEdit users, Sledgehammer provides an automatic mode that can be
enabled via the ``Auto Sledgehammer'' option under ``Plugins > Plugin Options >
Isabelle > General.'' In this mode, a reduced version of Sledgehammer is run on
every newly entered theorem for a few seconds.

\newbox\boxA
\setbox\boxA=\hbox{\texttt{NOSPAM}}

\newcommand\authoremail{\texttt{blan{\color{white}NOSPAM}\kern-\wd\boxA{}chette@\allowbreak
in.\allowbreak tum.\allowbreak de}}

To run Sledgehammer, you must make sure that the theory \textit{Sledgehammer} is
imported---this is rarely a problem in practice since it is part of
\textit{Main}. Examples of Sledgehammer use can be found in Isabelle's
\texttt{src/HOL/Metis\_Examples} directory.
Comments and bug reports concerning Sledgehammer or this manual should be
directed to the author at \authoremail.

%\vskip2.5\smallskipamount
%
%\textbf{Acknowledgment.} The author would like to thank Mark Summerfield for
%suggesting several textual improvements.
\section{Installation}
\label{installation}

Sledgehammer is part of Isabelle, so you do not need to install it. However, it
relies on third-party automatic provers (ATPs and SMT solvers).

Among the ATPs, AgsyHOL, Alt-Ergo, E, LEO-II, Leo-III, Satallax, SPASS, Vampire, and
Zipperposition can be run locally; in addition, AgsyHOL, E, E-SInE, E-ToFoF,
iProver, iProver-Eq, LEO-II, Leo-III, Satallax, SNARK, Vampire, and Waldmeister are
available remotely via System\-On\-TPTP \cite{sutcliffe-2000}. The SMT solvers
CVC3, CVC4, veriT, and Z3 can be run locally.

There are three main ways to install automatic provers on your machine:

\begin{sloppy}
\begin{enum}
\item[\labelitemi] If you installed an official Isabelle package, it should
already include properly setup executables for CVC4, E, SPASS, veriT, and Z3, ready to use.

\item[\labelitemi] Alternatively, you can download the Isabelle-aware CVC3,
CVC4, E, SPASS, and Z3 binary packages from \download. Extract the archives,
then add a line to your \texttt{\$ISABELLE\_HOME\_USER\slash etc\slash
components}%
\footnote{The variable \texttt{\$ISABELLE\_HOME\_USER} is set by Isabelle at
startup. Its value can be retrieved by executing \texttt{isabelle}
\texttt{getenv} \texttt{ISABELLE\_HOME\_USER} on the command line.}
file with the absolute path to CVC3, CVC4, E, SPASS, or Z3. For example, if the
\texttt{components} file does not exist yet and you extracted SPASS to
\texttt{/usr/local/spass-3.8ds}, create it with the single line

\prew
\texttt{/usr/local/spass-3.8ds}
\postw

in it.

\item[\labelitemi] If you prefer to build AgsyHOL, Alt-Ergo, E, LEO-II, Leo-III, or
Satallax manually, or found a Vampire executable somewhere (e.g.,
\url{http://www.vprover.org/}), set the environment variable
\texttt{AGSYHOL\_HOME}, \texttt{E\_HOME}, \texttt{LEO2\_HOME},
\texttt{LEO3\_HOME}, \texttt{SATALLAX\_HOME}, or
\texttt{VAMPIRE\_HOME} to the directory that contains the \texttt{agsyHOL},
\texttt{eprover} (and/or \texttt{eproof} or \texttt{eproof\_ram}),
\texttt{leo}, \texttt{leo3}, \texttt{satallax}, or \texttt{vampire} executable;
for Alt-Ergo, set the
environment variable \texttt{WHY3\_HOME} to the directory that contains the
\texttt{why3} executable.
Sledgehammer has been tested with AgsyHOL 1.0, Alt-Ergo 0.95.2, E 1.6 to 2.0,
LEO-II 1.3.4, Leo-III 1.1, Satallax 2.2 to 2.7, and Vampire 0.6 to 4.0.%
\footnote{Following the rewrite of Vampire, the counter for version numbers was
reset to 0; hence the (new) Vampire versions 0.6, 1.0, 1.8, 2.6, and 3.0 are more
recent than 9.0 or 11.5.}%
Since the ATPs' output formats are neither documented nor stable, other
versions might not work well with Sledgehammer. Ideally,
you should also set \texttt{E\_VERSION}, \texttt{LEO2\_VERSION},
\texttt{LEO3\_VERSION}, \texttt{SATALLAX\_VERSION}, or
\texttt{VAMPIRE\_VERSION} to the prover's version number (e.g., ``4.0'').

Similarly, if you want to install CVC3, CVC4, veriT, or Z3, set the environment
variable \texttt{CVC3\_\allowbreak SOLVER}, \texttt{CVC4\_\allowbreak SOLVER},
\texttt{VERIT\_\allowbreak SOLVER}, or \texttt{Z3\_SOLVER} to the complete path
of the executable, \emph{including the file name}. Sledgehammer has been tested
with CVC3 2.2 and 2.4.1, CVC4 1.5-prerelease, veriT smtcomp2014, and Z3 4.3.2.
Since Z3's output format is somewhat unstable, other versions of the solver
might not work well with Sledgehammer. Ideally, also set
\texttt{CVC3\_VERSION}, \texttt{CVC4\_VERSION}, \texttt{VERIT\_VERSION}, or
\texttt{Z3\_VERSION} to the solver's version number (e.g., ``4.4.0'').
\end{enum}
\end{sloppy}

To check whether the provers are successfully installed, try out the example
in \S\ref{first-steps}. If the remote versions of any of these provers are used
(identified by the prefix ``\textit{remote\_\/}''), or if the local versions
fail to solve the easy goal presented there, something must be wrong with the
installation.

Remote prover invocation requires Perl with the World Wide Web Library
(\texttt{libwww-perl}) installed. If you must use a proxy server to access the
Internet, set the \texttt{http\_proxy} environment variable to the proxy, either
in the environment in which Isabelle is launched or in your
\ldots, k_n = v_n$]''. For Boolean options, ``= \textit{true\/}'' is optional.
For example:

\prew
\textbf{sledgehammer} [\textit{isar\_proofs}, \,\textit{timeout} = 120]
\postw

Default values can be set using \textbf{sledgehammer\_\allowbreak params}:

\prew
\textbf{sledgehammer\_params} \qty{options}
\postw

The supported options are described in \S\ref{option-reference}.

The \qty{facts\_override} argument lets you alter the set of facts that go
through the relevance filter. It may be of the form ``(\qty{facts})'', where
\qty{facts} is a space-separated list of Isabelle facts (theorems, local
assumptions, etc.), in which case the relevance filter is bypassed and the given
facts are used. It may also be of the form ``(\textit{add}:\ \qty{facts\/_{\mathrm{1}}})'',
``(\textit{del}:\ \qty{facts\/_{\mathrm{2}}})'', or ``(\textit{add}:\ \qty{facts\/_{\mathrm{1}}}\
\textit{del}:\ \qty{facts\/_{\mathrm{2}}})'', where the relevance filter is instructed to
proceed as usual except that it should consider \qty{facts\/_{\mathrm{1}}}
highly-relevant and \qty{facts\/_{\mathrm{2}}} fully irrelevant.

If you use Isabelle/jEdit, Sledgehammer also provides an automatic mode that can
be enabled via the ``Auto Sledgehammer'' option under ``Plugins > Plugin Options
> Isabelle > General.'' For automatic runs, only the first prover set using
\textit{provers} (\S\ref{mode-of-operation}) is considered (typically E),
\textit{slice} (\S\ref{mode-of-operation}) is disabled,
fewer facts are
passed to the prover, \textit{fact\_filter} (\S\ref{relevance-filter}) is set to
\textit{mepo}, \textit{strict} (\S\ref{problem-encoding}) is enabled,
\textit{verbose} (\S\ref{output-format}) and \textit{debug}
(\S\ref{output-format}) are disabled, and \textit{timeout} (\S\ref{timeouts}) is
superseded by the ``Auto Time Limit'' option in jEdit. Sledgehammer's output is
also more concise.

\subsection{Metis}
\label{metis}

The \textit{metis} proof method has the syntax

\prew
\textbf{\textit{metis}}~(\qty{options})${}^?$~\qty{facts}${}^?$
\postw

where \qty{facts} is a list of arbitrary facts and \qty{options} is a
comma-separated list consisting of at most one $\lambda$ translation scheme
specification with the same semantics as Sledgehammer's \textit{lam\_trans}
option (\S\ref{problem-encoding}) and at most one type encoding specification
with the same semantics as Sledgehammer's \textit{type\_enc} option
(\S\ref{problem-encoding}).
%
The supported $\lambda$ translation schemes are \textit{hide\_lams},
\textit{lifting}, and \textit{combs} (the default).
%
All the untyped type encodings listed in \S\ref{problem-encoding} are supported.
For convenience, the following aliases are provided:
\begin{enum}
\item[\labelitemi] \textbf{\textit{full\_types}:} Alias for \textit{poly\_guards\_query}.
\item[\labelitemi] \textbf{\textit{partial\_types}:} Alias for \textit{poly\_args}.
\item[\labelitemi] \textbf{\textit{no\_types}:} Alias for \textit{erased}.
\end{enum}

\section{Option Reference}
\label{option-reference}

\def\defl{\{}
\def\defr{\}}

\def\flushitem#1{\item[]\noindent\kern-\leftmargin \textbf{#1}}
\def\optrueonly#1{\flushitem{\textit{#1}$\bigl[$= \textit{true}$\bigr]$\enskip}\nopagebreak\\[\parskip]}
\def\optrue#1#2{\flushitem{\textit{#1}$\bigl[$= \qtybf{bool}$\bigr]$\enskip \defl\textit{true}\defr\hfill (neg.: \textit{#2})}\nopagebreak\\[\parskip]}
\def\opfalse#1#2{\flushitem{\textit{#1}$\bigl[$= \qtybf{bool}$\bigr]$\enskip \defl\textit{false}\defr\hfill (neg.: \textit{#2})}\nopagebreak\\[\parskip]}
\def\opsmart#1#2{\flushitem{\textit{#1}$\bigl[$= \qtybf{smart\_bool}$\bigr]$\enskip \defl\textit{smart}\defr\hfill (neg.: \textit{#2})}\nopagebreak\\[\parskip]}
\def\opsmartx#1#2{\flushitem{\textit{#1}$\bigl[$= \qtybf{smart\_bool}$\bigr]$\enskip \defl\textit{smart}\defr\\\hbox{}\hfill (neg.: \textit{#2})}\nopagebreak\\[\parskip]}
\def\opnodefault#1#2{\flushitem{\textit{#1} = \qtybf{#2}} \nopagebreak\\[\parskip]}
\def\opnodefaultbrk#1#2{\flushitem{$\bigl[$\textit{#1} =$\bigr]$ \qtybf{#2}} \nopagebreak\\[\parskip]}
\def\opdefault#1#2#3{\flushitem{\textit{#1} = \qtybf{#2}\enskip \defl\textit{#3}\defr} \nopagebreak\\[\parskip]}
\def\oparg#1#2#3{\flushitem{\textit{#1} \qtybf{#2} = \qtybf{#3}} \nopagebreak\\[\parskip]}
\def\opargbool#1#2#3{\flushitem{\textit{#1} \qtybf{#2}$\bigl[$= \qtybf{bool}$\bigr]$\hfill (neg.: \textit{#3})}\nopagebreak\\[\parskip]}
\def\opargboolorsmart#1#2#3{\flushitem{\textit{#1} \qtybf{#2}$\bigl[$= \qtybf{smart\_bool}$\bigr]$\hfill (neg.: \textit{#3})}\nopagebreak\\[\parskip]}

Sledgehammer's options are categorized as follows:\ mode of operation
(\S\ref{mode-of-operation}), problem encoding (\S\ref{problem-encoding}),
relevance filter (\S\ref{relevance-filter}), output format
(\S\ref{output-format}), regression testing (\S\ref{regression-testing}),
and timeouts (\S\ref{timeouts}).

The descriptions below refer to the following syntactic quantities:

\begin{enum}
\item[\labelitemi] \qtybf{string}: A string.
\item[\labelitemi] \qtybf{bool\/}: \textit{true} or \textit{false}.
\item[\labelitemi] \qtybf{smart\_bool\/}: \textit{true}, \textit{false}, or
\textit{smart}.
\item[\labelitemi] \qtybf{int\/}: An integer.
\item[\labelitemi] \qtybf{float}: A floating-point number (e.g., 2.5 or 60)
expressing a number of seconds.
\item[\labelitemi] \qtybf{float\_pair\/}: A pair of floating-point numbers
(e.g., 0.6 0.95).
\item[\labelitemi] \qtybf{smart\_int\/}: An integer or \textit{smart}.
\end{enum}

Default values are indicated in curly brackets (\textrm{\{\}}). Boolean options
have a negative counterpart (e.g., \textit{minimize} vs.\
\textit{dont\_minimize}). When setting Boolean options or their negative
counterparts, ``= \textit{true\/}'' may be omitted.

\subsection{Mode of Operation}
\label{mode-of-operation}

\begin{enum}
\opnodefaultbrk{provers}{string}
Specifies the automatic provers to use as a space-separated list (e.g.,
``\textit{e}~\textit{spass}~\textit{remote\_vampire\/}'').
Provers can be run locally or remotely; see \S\ref{installation} for
installation instructions.
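For example, to make E, SPASS, and the remote version of Vampire the default
provers for the rest of the session, one could state something along the
following lines (a sketch that simply combines the \textbf{sledgehammer\_params}
syntax and the example prover list shown above; depending on the Isabelle
version, a multi-prover list may need to be enclosed in quotation marks):

\prew
\textbf{sledgehammer\_params} [\textit{provers} = \textit{e}~\textit{spass}~\textit{remote\_vampire}]
\postw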
The following local provers are supported:

\begin{sloppy}
\begin{enum}
\item[\labelitemi] \textbf{\textit{agsyhol}:} AgsyHOL is an automatic
higher-order prover developed by Fredrik Lindblad \cite{agsyHOL},
with support for the TPTP typed higher-order syntax (THF0). To use AgsyHOL, set
the environment variable \texttt{AGSYHOL\_HOME} to the directory that contains
the \texttt{agsyHOL} executable. Sledgehammer has been tested with version 1.0.

\item[\labelitemi] \textbf{\textit{alt\_ergo}:} Alt-Ergo is a polymorphic
ATP developed by Bobot et al.\ \cite{alt-ergo}.
It supports the TPTP polymorphic typed first-order format (TFF1) via Why3
\cite{why3}. To use Alt-Ergo, set the environment variable \texttt{WHY3\_HOME}
to the directory that contains the \texttt{why3} executable. Sledgehammer
requires Alt-Ergo 0.95.2 and Why3 0.83.

\item[\labelitemi] \textbf{\textit{cvc3}:} CVC3 is an SMT solver developed by
Clark Barrett, Cesare Tinelli, and their colleagues \cite{cvc3}. To use CVC3,
set the environment variable \texttt{CVC3\_SOLVER} to the complete path of the
executable, including the file name, or install the prebuilt CVC3 package from
\download. Sledgehammer has been tested with versions 2.2 and 2.4.1.

\item[\labelitemi] \textbf{\textit{cvc4}:} CVC4 \cite{cvc4} is the successor to
CVC3. To use CVC4, set the environment variable \texttt{CVC4\_SOLVER} to the
complete path of the executable, including the file name, or install the
prebuilt CVC4 package from \download. Sledgehammer has been tested with version
1.5-prerelease.

\item[\labelitemi] \textbf{\textit{e}:} E is a first-order resolution prover
developed by Stephan Schulz \cite{schulz-2002}. To use E, set the environment
variable \texttt{E\_HOME} to the directory that contains the \texttt{eproof}
executable and \texttt{E\_VERSION} to the version number (e.g., ``1.8''), or
install the prebuilt E package from \download. Sledgehammer has been tested with
versions 1.6 to 1.8.

\item[\labelitemi] \textbf{\textit{e\_males}:} E-MaLeS is a metaprover developed
by Daniel K\"uhlwein that implements strategy scheduling on top of E. To use
E-MaLeS, set the environment variable \texttt{E\_MALES\_HOME} to the directory
that contains the \texttt{emales.py} script. Sledgehammer has been tested with
version 1.1.

\item[\labelitemi] \textbf{\textit{e\_par}:} E-Par is an experimental metaprover
developed by Josef Urban that implements strategy scheduling on top of E. To use
E-Par, set the environment variable \texttt{E\_HOME} to the directory that
contains the \texttt{runepar.pl} script and the \texttt{eprover} and
\texttt{epclextract} executables, or use the prebuilt E package from \download.
Be aware that E-Par is experimental software. It has been known to generate
zombie processes. Use at your own risk.

\item[\labelitemi] \textbf{\textit{iprover}:} iProver is a pure
instantiation-based prover developed by Konstantin Korovin \cite{korovin-2009}.
To use iProver, set the environment variable \texttt{IPROVER\_HOME} to the
directory that contains the \texttt{iprover} and \texttt{vclausify\_rel}
executables. Sledgehammer has been tested with version 0.99.
\item[\labelitemi] \textbf{\textit{iprover\_eq}:} iProver-Eq is an
instantiation-based prover with native support for equality developed by
Konstantin Korovin and Christoph Sticksel \cite{korovin-sticksel-2010}. To use
iProver-Eq, set the environment variable \texttt{IPROVER\_EQ\_HOME} to the
directory that contains the \texttt{iprover-eq} and \texttt{vclausify\_rel}
executables. Sledgehammer has been tested with version 0.8.

\item[\labelitemi] \textbf{\textit{leo2}:} LEO-II is an automatic
higher-order prover developed by Christoph Benzm\"uller et al.\ \cite{leo2},
with support for the TPTP typed higher-order syntax (THF0). To use LEO-II, set
the environment variable \texttt{LEO2\_HOME} to the directory that contains the
\texttt{leo} executable. Sledgehammer requires version 1.3.4 or above.

\item[\labelitemi] \textbf{\textit{leo3}:} Leo-III is an automatic
higher-order prover developed by Alexander Steen, Max Wisniewski, Christoph
Benzm\"uller et al.\ \cite{leo3},
with support for the TPTP typed higher-order syntax (THF0). To use Leo-III, set
the environment variable \texttt{LEO3\_HOME} to the directory that contains the
\texttt{leo3} executable. Sledgehammer requires version 1.1 or above.

\item[\labelitemi] \textbf{\textit{satallax}:} Satallax is an automatic
higher-order prover developed by Chad Brown et al.\ \cite{satallax}, with
support for the TPTP typed higher-order syntax (THF0). To use Satallax, set the
environment variable \texttt{SATALLAX\_HOME} to the directory that contains the
\texttt{satallax} executable. Sledgehammer requires version 2.2 or above.

\item[\labelitemi] \textbf{\textit{spass}:} SPASS is a first-order resolution
prover developed by Christoph Weidenbach et al.\ \cite{weidenbach-et-al-2009}.
To use SPASS, set the environment variable \texttt{SPASS\_HOME} to the directory
that contains the \texttt{SPASS} executable and \texttt{SPASS\_VERSION} to the
version number (e.g., ``3.8ds''), or install the prebuilt SPASS package from
\download. Sledgehammer requires version 3.8ds or above.

\item[\labelitemi] \textbf{\textit{vampire}:} Vampire is a first-order
resolution prover developed by Andrei Voronkov and his colleagues
\cite{riazanov-voronkov-2002}. To use Vampire, set the environment variable
\texttt{VAMPIRE\_HOME} to the directory that contains the \texttt{vampire}
executable and \texttt{VAMPIRE\_VERSION} to the version number (e.g.,
``3.0''). Sledgehammer has been tested with versions 0.6 to 3.0.
Versions strictly above 1.8 support the TPTP typed first-order format (TFF0).

\item[\labelitemi] \textbf{\textit{verit}:} veriT \cite{bouton-et-al-2009} is an
SMT solver developed by David D\'eharbe, Pascal Fontaine, and their colleagues.
It is specifically designed to produce detailed proofs for reconstruction in
proof assistants. To use veriT, set the environment variable
\texttt{VERIT\_SOLVER} to the complete path of the executable, including the
file name. Sledgehammer has been tested with version smtcomp2014.

\item[\labelitemi] \textbf{\textit{z3}:} Z3 is an SMT solver developed at
Microsoft Research \cite{z3}. To use Z3, set the environment variable
\texttt{Z3\_SOLVER} to the complete path of the executable, including the
file name. Sledgehammer has been tested with a pre-release version of 4.4.0.
\item[\labelitemi] \textbf{\textit{z3\_tptp}:} This version of Z3 pretends to be
an ATP, exploiting Z3's support for the TPTP untyped and typed first-order
formats (FOF and TFF0). It is included for experimental purposes. It requires
version 4.3.1 of Z3 or above. To use it, set the environment variable
\texttt{Z3\_TPTP\_HOME} to the directory that contains the \texttt{z3\_tptp}
executable.

\item[\labelitemi] \textbf{\textit{zipperposition}:} Zipperposition
\cite{cruanes-2014} is an experimental first-order resolution prover developed
by Simon Cruane. To use Zipperposition, set the environment variable
\texttt{ZIPPERPOSITION\_HOME} to the directory that contains the
\texttt{zipperposition} executable.
\end{enum}

\end{sloppy}

Moreover, the following remote provers are supported:

\begin{enum}
\item[\labelitemi] \textbf{\textit{remote\_agsyhol}:} The remote version of
AgsyHOL runs on Geoff Sutcliffe's Miami servers \cite{sutcliffe-2000}.

\item[\labelitemi] \textbf{\textit{remote\_e}:} The remote version of E runs
on Geoff Sutcliffe's Miami servers \cite{sutcliffe-2000}.

\item[\labelitemi] \textbf{\textit{remote\_e\_sine}:} E-SInE is a metaprover
developed by Kry\v stof Hoder \cite{sine} based on E. It runs on Geoff
Sutcliffe's Miami servers.

\item[\labelitemi] \textbf{\textit{remote\_e\_tofof}:} E-ToFoF is a metaprover
developed by Geoff Sutcliffe \cite{tofof} based on E running on his Miami
servers. This ATP supports the TPTP typed first-order format (TFF0). The
remote version of E-ToFoF runs on Geoff Sutcliffe's Miami servers.

\item[\labelitemi] \textbf{\textit{remote\_iprover}:} The
remote version of iProver runs on Geoff Sutcliffe's Miami servers
\cite{sutcliffe-2000}.

\item[\labelitemi] \textbf{\textit{remote\_iprover\_eq}:} The
remote version of iProver-Eq runs on Geoff Sutcliffe's Miami servers
\cite{sutcliffe-2000}.

\item[\labelitemi] \textbf{\textit{remote\_leo2}:} The remote version of LEO-II
runs on Geoff Sutcliffe's Miami servers \cite{sutcliffe-2000}.

\item[\labelitemi] \textbf{\textit{remote\_leo3}:} The remote version of Leo-III
runs on Geoff Sutcliffe's Miami servers \cite{sutcliffe-2000}.

\item[\labelitemi] \textbf{\textit{remote\_pirate}:} Pirate is a
highly experimental first-order resolution prover developed by Daniel Wand.
The remote version of Pirate runs on a private server he generously set up.

\item[\labelitemi] \textbf{\textit{remote\_satallax}:} The remote version of
Satallax runs on Geoff Sutcliffe's Miami servers \cite{sutcliffe-2000}.

\item[\labelitemi] \textbf{\textit{remote\_snark}:} SNARK is a first-order
resolution prover developed by Stickel et al.\ \cite{snark}. It supports the
TPTP typed first-order format (TFF0). The remote version of SNARK runs on
Geoff Sutcliffe's Miami servers.

\item[\labelitemi] \textbf{\textit{remote\_vampire}:} The remote version of
Vampire runs on Geoff Sutcliffe's Miami servers.

\item[\labelitemi] \textbf{\textit{remote\_waldmeister}:} Waldmeister is a unit
equality prover developed by Hillenbrand et al.\ \cite{waldmeister}. It can be
used to prove universally quantified equations using unconditional equations,
corresponding to the TPTP CNF UEQ division. The remote version of Waldmeister
runs on Geoff Sutcliffe's Miami servers.
898 \end{enum} 899 900 By default, Sledgehammer runs a subset of CVC4, E, E-SInE, SPASS, Vampire, 901 veriT, and Z3 in parallel, either locally or remotely---depending on the number 902 of processor cores available and on which provers are actually installed. It is 903 generally a good idea to run several provers in parallel. 904 905 \opnodefault{prover}{string} 906 Alias for \textit{provers}. 907 908 \optrue{slice}{dont\_slice} 909 Specifies whether the time allocated to a prover should be sliced into several 910 segments, each of which has its own set of possibly prover-dependent options. 911 For SPASS and Vampire, the first slice tries the fast but incomplete 912 set-of-support (SOS) strategy, whereas the second slice runs without it. For E, 913 up to three slices are tried, with different weighted search strategies and 914 number of facts. For SMT solvers, several slices are tried with the same options 915 each time but fewer and fewer facts. According to benchmarks with a timeout of 916 30 seconds, slicing is a valuable optimization, and you should probably leave it 917 enabled unless you are conducting experiments. 918 919 \nopagebreak 920 {\small See also \textit{verbose} (\S\ref{output-format}).} 921 922 \optrue{minimize}{dont\_minimize} 923 Specifies whether the minimization tool should be invoked automatically after 924 proof search. 925 926 \nopagebreak 927 {\small See also \textit{preplay\_timeout} (\S\ref{timeouts}) 928 and \textit{dont\_preplay} (\S\ref{timeouts}).} 929 930 \opfalse{spy}{dont\_spy} 931 Specifies whether Sledgehammer should record statistics in 932 \texttt{\$ISA\-BELLE\_\allowbreak HOME\_\allowbreak USER/\allowbreak spy\_\allowbreak sledgehammer}.
933 These statistics can be useful to the developers of Sledgehammer. If you are willing to have your
934 interactions recorded in the name of science, please enable this feature and send the statistics
935 file every now and then to the author of this manual (\authoremail).
936 To change the default value of this option globally, set the environment variable
937 \texttt{SLEDGEHAMMER\_SPY} to \textit{yes}.
938
939 \nopagebreak
940 {\small See also \textit{debug} (\S\ref{output-format}).}
941
942 \opfalse{overlord}{no\_overlord}
943 Specifies whether Sledgehammer should put its temporary files in
944 \texttt{\$ISA\-BELLE\_\allowbreak HOME\_\allowbreak USER}, which is useful for 945 debugging Sledgehammer but also unsafe if several instances of the tool are run 946 simultaneously. The files are identified by the prefixes \texttt{prob\_} and 947 \texttt{mash\_}; you may safely remove them after Sledgehammer has run. 948 949 \textbf{Warning:} This option is not thread-safe. Use at your own risks. 950 951 \nopagebreak 952 {\small See also \textit{debug} (\S\ref{output-format}).} 953 \end{enum} 954 955 \subsection{Relevance Filter} 956 \label{relevance-filter} 957 958 \begin{enum} 959 \opdefault{fact\_filter}{string}{smart} 960 Specifies the relevance filter to use. The following filters are available: 961 962 \begin{enum} 963 \item[\labelitemi] \textbf{\textit{mepo}:} 964 The traditional memoryless MePo relevance filter. 965 966 \item[\labelitemi] \textbf{\textit{mash}:} 967 The MaSh machine learner. Three learning algorithms are provided: 968 969 \begin{enum} 970 \item[\labelitemi] \textbf{\textit{nb}} is an implementation of naive Bayes. 971 972 \item[\labelitemi] \textbf{\textit{knn}} is an implementation of$k$-nearest 973 neighbors. 974 975 \item[\labelitemi] \textbf{\textit{nb\_knn}} (also called \textbf{\textit{yes}} 976 and \textbf{\textit{sml}}) is a combination of naive Bayes and$k$-nearest 977 neighbors. 978 \end{enum} 979 980 In addition, the special value \textit{none} is used to disable machine learning by 981 default (cf.\ \textit{smart} below). 982 983 The default algorithm is \textit{nb\_knn}. The algorithm can be selected by 984 setting the MaSh'' option under Plugins > Plugin Options > Isabelle > 985 General'' in Isabelle/jEdit. Persistent data for both algorithms is stored in 986 the directory \texttt{\$ISABELLE\_\allowbreak HOME\_\allowbreak USER/\allowbreak
987 mash}.
988
989 \item[\labelitemi] \textbf{\textit{mesh}:} The MeSh filter, which combines the
990 rankings from MePo and MaSh.
991
992 \item[\labelitemi] \textbf{\textit{smart}:} A combination of MePo, MaSh, and
993 MeSh. If the learning algorithm is set to be \textit{none}, \textit{smart}
994 behaves like MePo.
995 \end{enum}
996
997 \opdefault{max\_facts}{smart\_int}{smart}
998 Specifies the maximum number of facts that may be returned by the relevance
999 filter. If the option is set to \textit{smart} (the default), it effectively
1000 takes a value that was empirically found to be appropriate for the prover.
1001 Typical values lie between 50 and 1000.
1002
1003 \opdefault{fact\_thresholds}{float\_pair}{\upshape 0.45~0.85}
1004 Specifies the thresholds above which facts are considered relevant by the
1005 relevance filter. The first threshold is used for the first iteration of the
1006 relevance filter and the second threshold is used for the last iteration (if it
1007 is reached). The effective threshold is quadratically interpolated for the other
1008 iterations. Each threshold ranges from 0 to 1, where 0 means that all theorems
1009 are relevant and 1 only theorems that refer to previously seen constants.
1010
1011 \optrue{learn}{dont\_learn}
1012 Specifies whether MaSh should be run automatically by Sledgehammer to learn the
1013 available theories (and hence provide more accurate results). Learning takes
1014 place only if MaSh is enabled.
1015
1016 \opdefault{max\_new\_mono\_instances}{int}{smart}
1017 Specifies the maximum number of monomorphic instances to generate beyond
1018 \textit{max\_facts}. The higher this limit is, the more monomorphic instances
1019 are potentially generated. Whether monomorphization takes place depends on the
1020 type encoding used. If the option is set to \textit{smart} (the default), it
1021 takes a value that was empirically found to be appropriate for the prover. For
1022 most provers, this value is 100.
1023
1024 \nopagebreak
1025 {\small See also \textit{type\_enc} (\S\ref{problem-encoding}).}
1026
1027 \opdefault{max\_mono\_iters}{int}{smart}
1028 Specifies the maximum number of iterations for the monomorphization fixpoint
1029 construction. The higher this limit is, the more monomorphic instances are
1030 potentially generated. Whether monomorphization takes place depends on the
1031 type encoding used. If the option is set to \textit{smart} (the default), it
1032 takes a value that was empirically found to be appropriate for the prover.
1033 For most provers, this value is 3.
1034
1035 \nopagebreak
1036 {\small See also \textit{type\_enc} (\S\ref{problem-encoding}).}
1037 \end{enum}
1038
1039 \subsection{Problem Encoding}
1040 \label{problem-encoding}
1041
1042 \newcommand\comb[1]{\const{#1}}
1043
1044 \begin{enum}
1045 \opdefault{lam\_trans}{string}{smart}
1046 Specifies the $\lambda$ translation scheme to use in ATP problems. The supported
1047 translation schemes are listed below:
1048
1049 \begin{enum}
1050 \item[\labelitemi] \textbf{\textit{hide\_lams}:} Hide the $\lambda$-abstractions
1051 by replacing them by unspecified fresh constants, effectively disabling all
1052 reasoning under $\lambda$-abstractions.
1053
1054 \item[\labelitemi] \textbf{\textit{lifting}:} Introduce a new
1055 supercombinator \const{c} for each cluster of $n$~$\lambda$-abstractions,
1056 defined using an equation $\const{c}~x_1~\ldots~x_n = t$ ($\lambda$-lifting).
1057
1058 \item[\labelitemi] \textbf{\textit{combs}:} Rewrite lambdas to the Curry
1059 combinators (\comb{I}, \comb{K}, \comb{S}, \comb{B}, \comb{C}). Combinators
1060 enable the ATPs to synthesize $\lambda$-terms but tend to yield bulkier formulas
1061 than $\lambda$-lifting: The translation is quadratic in the worst case, and the
1062 equational definitions of the combinators are very prolific in the context of
1063 resolution.
1064
1065 \item[\labelitemi] \textbf{\textit{combs\_and\_lifting}:} Introduce a new
1066 supercombinator \const{c} for each cluster of $\lambda$-abstractions and characterize it both using a
1067 lifted equation $\const{c}~x_1~\ldots~x_n = t$ and via Curry combinators.
1068
1069 \item[\labelitemi] \textbf{\textit{combs\_or\_lifting}:} For each cluster of
1070 $\lambda$-abstractions, heuristically choose between $\lambda$-lifting and Curry
1071 combinators.
1072
1073 \item[\labelitemi] \textbf{\textit{keep\_lams}:}
1074 Keep the $\lambda$-abstractions in the generated problems. This is available
1075 only with provers that support the THF0 syntax.
1076
1077 \item[\labelitemi] \textbf{\textit{smart}:} The actual translation scheme used
1078 depends on the ATP and should be the most efficient scheme for that ATP.
1079 \end{enum}
1080
1081 For SMT solvers, the $\lambda$ translation scheme is always \textit{lifting},
1082 irrespective of the value of this option.
1083
1084 \opsmartx{uncurried\_aliases}{no\_uncurried\_aliases}
1085 Specifies whether fresh function symbols should be generated as aliases for
1086 applications of curried functions in ATP problems.
1087
1088 \opdefault{type\_enc}{string}{smart}
1089 Specifies the type encoding to use in ATP problems. Some of the type encodings
1090 are unsound, meaning that they can give rise to spurious proofs
1091 (unreconstructible using \textit{metis}). The type encodings are
1092 listed below, with an indication of their soundness in parentheses.
1093 An asterisk (*) indicates that the encoding is slightly incomplete for
1094 reconstruction with \textit{metis}, unless the \textit{strict} option (described
1095 below) is enabled.
1096
1097 \begin{enum}
1098 \item[\labelitemi] \textbf{\textit{erased} (unsound):} No type information is
1099 supplied to the ATP, not even to resolve overloading. Types are simply erased.
1100
1101 \item[\labelitemi] \textbf{\textit{poly\_guards} (sound):} Types are encoded using
1102 a predicate \const{g}$(\tau, t)$ that guards bound
1103 variables. Constants are annotated with their types, supplied as extra
1104 arguments, to resolve overloading.
1105
1106 \item[\labelitemi] \textbf{\textit{poly\_tags} (sound):} Each term and subterm is
1107 tagged with its type using a function $\const{t\/}(\tau, t)$.
1108
1109 \item[\labelitemi] \textbf{\textit{poly\_args} (unsound):}
1110 Like for \textit{poly\_guards} constants are annotated with their types to
1111 resolve overloading, but otherwise no type information is encoded. This
1112 is the default encoding used by the \textit{metis} proof method.
1113
1114 \item[\labelitemi]
1115 \textbf{%
1116 \textit{raw\_mono\_guards}, \textit{raw\_mono\_tags} (sound); \\
1117 \textit{raw\_mono\_args} (unsound):} \\
1118 Similar to \textit{poly\_guards}, \textit{poly\_tags}, and \textit{poly\_args},
1119 respectively, but the problem is additionally monomorphized, meaning that type
1120 variables are instantiated with heuristically chosen ground types.
1121 Monomorphization can simplify reasoning but also leads to larger fact bases,
1122 which can slow down the ATPs.
1123
1124 \item[\labelitemi]
1125 \textbf{%
1126 \textit{mono\_guards}, \textit{mono\_tags} (sound);
1127 \textit{mono\_args} (unsound):} \\
1128 Similar to
1129 \textit{raw\_mono\_guards}, \textit{raw\_mono\_tags}, and
1130 \textit{raw\_mono\_args}, respectively but types are mangled in constant names
1131 instead of being supplied as ground term arguments. The binary predicate
1132 $\const{g}(\tau, t)$ becomes a unary predicate
1133 $\const{g\_}\tau(t)$, and the binary function
1134 $\const{t}(\tau, t)$ becomes a unary function
1135 $\const{t\_}\tau(t)$.
1136
1137 \item[\labelitemi] \textbf{\textit{mono\_native} (sound):} Exploits native
1138 first-order types if the prover supports the TFF0, TFF1, or THF0 syntax;
1139 otherwise, falls back on \textit{mono\_guards}. The problem is monomorphized.
1140
1141 \item[\labelitemi] \textbf{\textit{mono\_native\_higher} (sound):} Exploits
1142 native higher-order types if the prover supports the THF0 syntax; otherwise,
1143 falls back on \textit{mono\_native} or \textit{mono\_guards}. The problem is
1144 monomorphized.
1145
1146 \item[\labelitemi] \textbf{\textit{poly\_native} (sound):} Exploits native
1147 first-order polymorphic types if the prover supports the TFF1 syntax; otherwise,
1148 falls back on \textit{mono\_native}.
1149
1150 \item[\labelitemi]
1151 \textbf{%
1152 \textit{poly\_guards}?, \textit{poly\_tags}?, \textit{raw\_mono\_guards}?, \\
1153 \textit{raw\_mono\_tags}?, \textit{mono\_guards}?, \textit{mono\_tags}?, \\
1154 \textit{mono\_native}? (sound*):} \\
1155 The type encodings \textit{poly\_guards}, \textit{poly\_tags},
1156 \textit{raw\_mono\_guards}, \textit{raw\_mono\_tags}, \textit{mono\_guards},
1157 \textit{mono\_tags}, and \textit{mono\_native} are fully typed and sound. For
1158 each of these, Sledgehammer also provides a lighter variant identified by a
1159 question mark (\hbox{?}')\ that detects and erases monotonic types, notably
1160 infinite types. (For \textit{mono\_native}, the types are not actually erased
1161 but rather replaced by a shared uniform type of individuals.) As argument to the
1162 \textit{metis} proof method, the question mark is replaced by a
1163 \hbox{\textit{\_query\/}''} suffix.
1164
1165 \item[\labelitemi]
1166 \textbf{%
1167 \textit{poly\_guards}??, \textit{poly\_tags}??, \textit{raw\_mono\_guards}??, \\
1168 \textit{raw\_mono\_tags}??, \textit{mono\_guards}??, \textit{mono\_tags}?? \\
1169 (sound*):} \\
1170 Even lighter versions of the \hbox{?}' encodings. As argument to the
1171 \textit{metis} proof method, the \hbox{??}' suffix is replaced by
1172 \hbox{\textit{\_query\_query\/}''}.
1173
1174 \item[\labelitemi]
1175 \textbf{%
1176 \textit{poly\_guards}@, \textit{poly\_tags}@, \textit{raw\_mono\_guards}@, \\
1177 \textit{raw\_mono\_tags}@ (sound*):} \\
1178 Alternative versions of the \hbox{??}' encodings. As argument to the
1179 \textit{metis} proof method, the \hbox{@}' suffix is replaced by
1180 \hbox{\textit{\_at\/}''}.
1181
1182 \item[\labelitemi] \textbf{\textit{poly\_args}?, \textit{raw\_mono\_args}? (unsound):} \\
1183 Lighter versions of \textit{poly\_args} and \textit{raw\_mono\_args}.
1184
1185 \item[\labelitemi] \textbf{\textit{smart}:} The actual encoding used depends on
1186 the ATP and should be the most efficient sound encoding for that ATP.
1187 \end{enum}
1188
1189 For SMT solvers, the type encoding is always \textit{mono\_native}, irrespective
1190 of the value of this option.
1191
1192 \nopagebreak
1193 {\small See also \textit{max\_new\_mono\_instances} (\S\ref{relevance-filter})
1194 and \textit{max\_mono\_iters} (\S\ref{relevance-filter}).}
1195
1196 \opfalse{strict}{non\_strict}
1197 Specifies whether Sledgehammer should run in its strict mode. In that mode,
1198 sound type encodings marked with an asterisk (*) above are made complete
1199 for reconstruction with \textit{metis}, at the cost of some clutter in the
1200 generated problems. This option has no effect if \textit{type\_enc} is
1201 deliberately set to an unsound encoding.
1202 \end{enum}
1203
1204 \subsection{Output Format}
1205 \label{output-format}
1206
1207 \begin{enum}
1208
1209 \opfalse{verbose}{quiet}
1210 Specifies whether the \textbf{sledgehammer} command should explain what it does.
1211
1212 \opfalse{debug}{no\_debug}
1213 Specifies whether Sledgehammer should display additional debugging information
1214 beyond what \textit{verbose} already displays. Enabling \textit{debug} also
1215 enables \textit{verbose} behind the scenes.
1216
1217 \nopagebreak
1218 {\small See also \textit{spy} (\S\ref{mode-of-operation}) and
1219 \textit{overlord} (\S\ref{mode-of-operation}).}
1220
1221 \opsmart{isar\_proofs}{no\_isar\_proofs}
1222 Specifies whether Isar proofs should be output in addition to one-line proofs.
1223 The construction of Isar proof is still experimental and may sometimes fail;
1224 however, when they succeed they are usually faster and more intelligible than
1225 one-line proofs. If the option is set to \textit{smart} (the default), Isar
1226 proofs are only generated when no working one-line proof is available.
1227
1228 \opdefault{compress}{int}{smart}
1229 Specifies the granularity of the generated Isar proofs if \textit{isar\_proofs}
1230 is explicitly enabled. A value of $n$ indicates that each Isar proof step should
1231 correspond to a group of up to $n$ consecutive proof steps in the ATP proof. If
1232 the option is set to \textit{smart} (the default), the compression factor is 10
1233 if the \textit{isar\_proofs} option is explicitly enabled; otherwise, it is
1234 $\infty$.
1235
1236 \optrueonly{dont\_compress}
1237 Alias for \textit{compress} = 1''.
1238
1239 \optrue{try0}{dont\_try0}
1240 Specifies whether standard proof methods such as \textit{auto} and
1241 \textit{blast} should be tried as alternatives to \textit{metis} in Isar proofs.
1242 The collection of methods is roughly the same as for the \textbf{try0} command.
1243
1244 \opsmart{smt\_proofs}{no\_smt\_proofs}
1245 Specifies whether the \textit{smt} proof method should be tried in addition to
1246 Isabelle's other proof methods. If the option is set to \textit{smart} (the
1247 default), the \textit{smt} method is used for one-line proofs but not in Isar
1248 proofs.
1249 \end{enum}
1250
1251 \subsection{Regression Testing}
1252 \label{regression-testing}
1253
1254 \begin{enum}
1255 \opnodefault{expect}{string}
1256 Specifies the expected outcome, which must be one of the following:
1257
1258 \begin{enum}
1259 \item[\labelitemi] \textbf{\textit{some}:} Sledgehammer found a proof.
1260 \item[\labelitemi] \textbf{\textit{none}:} Sledgehammer found no proof.
1261 \item[\labelitemi] \textbf{\textit{timeout}:} Sledgehammer timed out.
1262 \item[\labelitemi] \textbf{\textit{unknown}:} Sledgehammer encountered some
1263 problem.
1264 \end{enum}
1265
1266 Sledgehammer emits an error if the actual outcome differs from the expected outcome. This option is
1267 useful for regression testing.
1268
1269 \nopagebreak
1270 {\small See also \textit{timeout} (\S\ref{timeouts}).}
1271 \end{enum}
1272
1273 \subsection{Timeouts}
1274 \label{timeouts}
1275
1276 \begin{enum}
1277 \opdefault{timeout}{float}{\upshape 30}
1278 Specifies the maximum number of seconds that the automatic provers should spend
1279 searching for a proof. This excludes problem preparation and is a soft limit.
1280
1281 \opdefault{preplay\_timeout}{float}{\upshape 1}
1282 Specifies the maximum number of seconds that \textit{metis} or other proof
1283 methods should spend trying to preplay'' the found proof. If this option
1284 is set to 0, no preplaying takes place, and no timing information is displayed
1285 next to the suggested proof method calls.
1286
1287 \nopagebreak
1288 {\small See also \textit{minimize} (\S\ref{mode-of-operation}).}
1289
1290 \optrueonly{dont\_preplay}
1291 Alias for \textit{preplay\_timeout} = 0''.
1292
1293 \end{enum}
1294
1295 \let\em=\sl
1296 \bibliography{manual}{}
1297 \bibliographystyle{abbrv}
1298
1299 \end{document}
# Surface area of solids of revolution
The question is to find the area of the surface generated by revolving the region bounded by $y^2 = x + 3$, $y^2 = 4x$ and $y\geqslant 0$ about the $x$-axis. We use the formula and find the areas $S_1$ (from $x=-3$ to $x=1$) and $S_2$ (from $x=0$ to $x=1$) for the two curves respectively. Then we are supposed to find the total area as $S_1 - S_2$, but I don't understand why. I think we should find it as $S_1 + S_2$, because $S_1$ generates the outer surface and $S_2$ generates the inner surface of the 3D shape between $x = -3$ and $x = 1$. I could not understand the reason when I asked the teacher. I also did not understand why we choose $x = 1$ as the endpoint, since the curves extend beyond $x = 1$.
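For reference, the formula being used here is presumably the standard formula for the area of a surface of revolution about the $x$-axis, applied to each curve separately:
$$S = 2\pi\int_a^b y\,\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx \qquad (y \geqslant 0).$$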
For your question about bounds of integration, the "region bounded by $y^{2} = x + 3$ and $y^{2} = 4x$" refers to the bounded (i.e., "finite") region defined by the inequalities $$y^{2} - 3 \leq x \leq \tfrac{1}{4}y^{2},$$ which lies between the lines $x = -3$ and $x = 1$ (and whose "right-hand boundary component" lies between $x = 0$ and $x = 1$).
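As a quick check of those bounds: the two curves meet where $x + 3 = 4x$, i.e. at $x = 1$ (with $y = 2$ on the upper half), the parabola $y^{2} = x + 3$ meets $y = 0$ at $x = -3$, and $y^{2} = 4x$ meets $y = 0$ at $x = 0$. This is why the curve $y^{2} = x + 3$ is revolved from $x = -3$ to $x = 1$, while $y^{2} = 4x$ is revolved only from $x = 0$ to $x = 1$.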
## The Lean Theorem Prover
Lean is a new player in the field of proof assistants for Homotopy Type Theory. It is being developed by Leonardo de Moura at Microsoft Research, and it will remain under active development for the foreseeable future. The code is open source and available on Github.
You can install it on Windows, OS X or Linux. It will come with a useful mode for Emacs, with syntax highlighting, on-the-fly syntax checking, autocompletion and many other features. There is also an online version of Lean which you can try in your browser. The on-line version is quite a bit slower than the native version and it takes a little while to load, but it is still useful to try out small code snippets. You are invited to test the code snippets in this post in the on-line version. You can run code by pressing shift+enter.
In this post I’ll first say more about the Lean proof assistant, and then talk about the considerations for the HoTT library of Lean (Lean has two libraries, the standard library and the HoTT library). I will also cover our approach to higher inductive types. Since Lean is not mature yet, things mentioned below can change in the future.
Update January 2017: the newest version of Lean currently doesn’t support HoTT, but there is a frozen version which does support HoTT. The newest version is available here, and the frozen version is available here. To use the frozen version, you will have to compile it from the source code yourself.
## Lean
Examples
First let’s go through some examples. Don’t worry if you don’t fully understand these examples for now, I’ll go over the features in more detail below. You can use the commands check, print and eval to ask Lean for information. I will give the output as a comment, which is a double dash in Lean. check just gives the type of an expression.
check Σ(x y : ℕ), x + 5 = y
-- Σ (x y : ℕ), x + 5 = y : Type₀
This states that this sigma-type lives in the lowest universe in Lean (since ℕ and the equality type on ℕ live in the lowest universe). Unicode can be input (in both the browser version and the Emacs version) using a backslash. For example, Σ is input by \Sigma or \S. In the browser version you have to hit space to convert it to Unicode.
check is useful to see the type of a theorem (the curly braces indicate that those arguments are implicit).
check @nat.le_trans
-- nat.le_trans : Π {n m k : ℕ}, n ≤ m → m ≤ k → n ≤ k
print can show definitions.
open eq
print inverse
-- definition eq.inverse : Π {A : Type} {x y : A}, x = y → y = x :=
-- λ (A : Type) (x : A), eq.rec (refl x)
It prints which notation uses a particular symbol.
print ×
-- _ ×:35 _:34 := prod #1 #0
And it prints inductive definitions.
print bool
-- inductive bool : Type₀
-- constructors:
-- bool.ff : bool
-- bool.tt : bool
eval evaluates an expression.
eval (5 + 3 : ℕ)
-- 8
The Kernel
Lean is a proof assistant whose logic is a dependent type theory with inductive types and universes, just like Coq and Agda. It has a small kernel, which implements only the following components:
• Dependent lambda calculus
• Universe polymorphism in a hierarchy of $\omega$ many universe levels
• Inductive types and inductive families of types, generating only the recursor for an inductive type.
In particular it does not contain
• A termination checker
• Fixpoint operators
• Pattern matching
Let’s discuss the above features in more detail. The function types are exactly as in the book with judgemental beta and eta rules. There are a lot of different notations for functions and function types, for example
open nat
variables (A B : Type) (P : A → Type)
(Q : A → B → Type) (f : A → A)
check A -> B
check ℕ → ℕ
check Π (a : A), P a
check Pi a, P a
check ∀a b, Q a b
check λ(n : ℕ), 2 * n + 3
check fun a, f (f a)
Lean supports universe polymorphism. So if you define the identity function as follows
definition my_id {A : Type} (a : A) : A := a
you actually get a function my_id.{u} for every universe u. You rarely have to write universes explicitly, but you can always give them explicitly if you want. For example:
definition my_id.{u} {A : Type.{u}} (a : A) : A := a
set_option pp.universes true
check sum.{7 5}
-- sum.{7 5} : Type.{7} → Type.{5} → Type.{7}
The universes in Lean are not cumulative. However, with universe polymorphism, universe cumulativity is rarely needed. In the cases where you would use cumulativity, you can use the lift map explicitly. There are currently only a handful of definitions or theorems (fewer than 10) in the HoTT library where lifts are needed (excluding those that prove something about lifts themselves). Some examples:
• In the proof that univalence implies function extensionality;
• In the proof that Type.{u} is not a set, given that Type.{0} is not a set;
• In the Yoneda Lemma;
• In the characterization of equality in sum types (see below for the pattern matching notation):
open sum
definition sum.code.{u v} {A : Type.{u}}
{B : Type.{v}} : A + B → A + B → Type.{max u v}
| sum.code (inl a) (inl a') := lift (a = a')
| sum.code (inr b) (inr b') := lift (b = b')
| sum.code _ _ := lift empty
The first two examples only need lifts because we also decided that inductive types without parameters (empty, unit, bool, nat, etc.) should live in the lowest universe, instead of being universe polymorphic.
Lastly, the kernel contains inductive types. The syntax to declare them is very similar to Coq’s. For example, you can write
inductive my_nat :=
| zero : my_nat
| succ : my_nat → my_nat
which defines the natural numbers. This gives the type my_nat with two constructors my_nat.zero and my_nat.succ, a dependent recursor my_nat.rec and two computation rules.
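For instance, asking for the type of the generated recursor shows the usual dependent eliminator (the output below is approximate; argument names and universe annotations may differ):
check @my_nat.rec
-- my_nat.rec : Π {C : my_nat → Type}, C my_nat.zero →
--   (Π (a : my_nat), C a → C (my_nat.succ a)) → Π (n : my_nat), C n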
Lean also automatically defines a couple of other useful definitions, like the injectivity of the constructors, but this is done outside the kernel (for a full list you can execute print prefix my_nat if you’ve installed Lean). Lean supports inductive families and mutually defined inductive types, but not induction-induction or induction-recursion.
There is special support for structures – inductive types with only one constructor. For these the projections are automatically generated, and we can extend structures with additional fields. This is also implemented outside the kernel. In the following example we extend the structure of groups to the structure of abelian groups.
import algebra.group
open algebra
structure my_abelian_group (A : Type) extends group A :=
(mul_comm : ∀a b, mul a b = mul b a)
print my_abelian_group
check @my_abelian_group.to_group
-- my_abelian_group.to_group : Π {A : Type}, my_abelian_group A → group A
The result is a structure with all the fields of group, but with one additional field. Also, the coercion from my_abelian_group to group is automatically defined.
The Lean kernel can be instantiated in multiple ways. There is a standard mode where the lowest universe Prop is impredicative and has proof irrelevance. The standard mode also has built-in quotient types. Secondly, there is the HoTT mode without impredicative or proof irrelevant universes. The HoTT mode also has some support for HITs; see below.
Elaboration
On top of the kernel there is a powerful elaboration engine which
1. Infers implicit universe variables
2. Infers implicit arguments, using higher order unification
3. Supports overloaded notation or declarations
4. Inserts coercions
5. Infers implicit arguments using type classes
6. Converts readable proofs to proof terms
7. Constructs terms using tactics
It does most of these things simultaneously; for example, it can use a term constructed by type class inference to figure out the implicit arguments of functions.
Let’s run through the above list in more detail.
1. As said before, universe variables are rarely needed explicitly, usually only in cases where it would be ambiguous when you don’t give them (for example sum.code as defined above could also live in Type.{(max u v)+3} if the universe levels were not given explicitly).
2. As in Coq and Agda, we can use binders with curly braces {...} to indicate which arguments are left implicit. For example, with the identity function above, if we write my_id (3 + 4) this will be interpreted as @my_id _ (3 + 4). The placeholder _ will then be filled in by the elaborator to get @my_id ℕ (3 + 4). Lean also supports higher-order unification. This allows us, for example, to leave the type family of a transport implicit. For example (▸ denotes transport):
open eq
variables (A : Type) (R : A → A → Type)
variables (a b c : A) (f : A → A → A)
example (r : R (f a a) (f a a)) (p : a = b)
: R (f a b) (f b a) :=
p ▸ r
Or the following lemma about transport in sigma-types:
open sigma sigma.ops eq
variables {A : Type} {B : A → Type}
{C : Πa, B a → Type} {a₁ a₂ : A}
definition my_sigma_transport (p : a₁ = a₂)
(x : Σ(b : B a₁), C a₁ b)
: p ▸ x = ⟨p ▸ x.1, p ▸D x.2⟩ :=
eq.rec (sigma.rec (λb c, idp) x) p
set_option pp.notation false
check @my_sigma_transport
-- my_sigma_transport :
-- Π {A : Type} {B : A → Type} {C : Π (a : A), B a → Type} {a₁ a₂ : A} (p : eq a₁ a₂) (x : sigma (C a₁)),
-- eq (transport (λ (a : A), sigma (C a)) p x) (dpair (transport B p (pr₁ x)) (transportD C p (pr₁ x) (pr₂ x)))
Here the first transport is along the type family λa, Σ(b : B a), C a b. The second transport is along the type family B and the p ▸D is a dependent transport which is a map C a₁ b → C a₂ (p ▸ b). The check command shows these type families. Also in the proof we don’t have to give the type family of the inductions eq.rec and sigma.rec explicitly. However, for nested inductions higher-order unification becomes expensive quickly, and it is better to use pattern matching or the induction tactic, discussed below.
3. and 4. The next example illustrates overloaded notation together with an automatically inserted coercion.
open eq is_equiv equiv equiv.ops
example {A B : Type} (f : A ≃ B) {a : A} {b : B}
(p : f⁻¹ b = a) : f a = b :=
(eq_of_inv_eq p)⁻¹
In this example f is in the type of equivalences between A and B and is coerced into the function type. Also, ⁻¹ is overloaded to mean both function inverse and path inverse.
5. Lean has type classes, similar to Coq’s and Agda’s, but in Lean they are very tightly integrated into the elaboration process. We use type class inference for inverses of functions. We can write f⁻¹ for an equivalence f : A → B, and then Lean will automatically try to find an instance of type is_equiv f.
Here’s an example, where we prove that the type of natural transformations between two functors is a set.
import algebra.category.nat_trans
open category functor is_trunc
example {C D : Precategory} (F G : C ⇒ D)
: is_hset (nat_trans F G) :=
is_trunc_equiv_closed 0 !nat_trans.sigma_char
In the proof we only have to show that the type of natural transformation is equivalent to a sigma-type (nat_trans.sigma_char) and that truncatedness respects equivalences (is_trunc_equiv_closed). The fact that the sigma-type is a set is then inferred by type class resolution.
The brackets [...] specify that that argument is inferred by type class inference.
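As a toy illustration (these two definitions are made up for this example and are not part of the library), an argument declared in square brackets is never passed explicitly at call sites; the elaborator synthesizes it by type class inference:
open is_trunc
definition get_hset (A : Type) [H : is_hset A] : is_hset A := H
definition use_hset (A : Type) [H : is_hset A] : is_hset A :=
get_hset A
-- in use_hset, the bracketed argument of get_hset is not written;
-- class inference finds the local hypothesis H automatically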
6. There are multiple ways to write readable proofs in Lean, which are then converted to proof terms by the elaborator. One thing you can do is define functions using pattern matching. We already saw an example above with sum.code. In contrast to Coq and Agda, these expressions are not part of the kernel. Instead, they are ‘compiled down’ to basic recursors, keeping Lean’s kernel simple, safe, and, well, ‘lean.’ Here is a simple example. The print command shows how my_inv is defined internally.
open eq
definition my_inv {A : Type} : Π{a b : A}, a = b → b = a
| my_inv (idpath a) := idpath a
print my_inv
-- definition my_inv : Π {A : Type} {a b : A}, a = b → b = a :=
-- λ (A : Type) {a b : A} (a_1 : a = b), eq.cases_on a_1 (idpath a)
Here are some neat examples with vectors.
open nat
inductive vector (A : Type) : ℕ → Type :=
| nil {} : vector A zero
| cons : Π {n}, A → vector A n → vector A (succ n)
open vector
-- some notation. The second line allows us to use
-- [1, 3, 10] as notation for vectors
notation a :: b := cons a b
notation `[` l:(foldr `,` (λ h t, cons h t) nil `]`) := l
variables {A B : Type}
definition map (f : A → B)
: Π {n : ℕ}, vector A n → vector B n
| map [] := []
| map (a::v) := f a :: map v
definition tail
: Π {n : ℕ}, vector A (succ n) → vector A n
| n (a::v) := v
-- no need to specify "tail nil", because that case is
-- excluded because of the type of tail
definition diag
: Π{n : ℕ}, vector (vector A n) n → vector A n
| diag nil := nil
| diag ((a :: v) :: M) := a :: diag (map tail M)
-- no need to specify "diag (nil :: M)"
eval diag [[(1 : ℕ), 2, 3],
[4, 5, 6],
[7, 8, 9]]
-- we need to specify that these are natural numbers
-- [1,5,9]
You can use calculations in proofs, for example in the following construction of the composition of two natural transformations:
import algebra.category
open category functor nat_trans
variables {C D : Precategory} {F G H : C ⇒ D}
definition nt.compose (η : G ⟹ H) (θ : F ⟹ G)
: F ⟹ H :=
nat_trans.mk
(λ a, η a ∘ θ a)
(λ a b f,
calc H f ∘ (η a ∘ θ a)
= (H f ∘ η a) ∘ θ a : by rewrite assoc
... = (η b ∘ G f) ∘ θ a : by rewrite naturality
... = η b ∘ (G f ∘ θ a) : by rewrite assoc
... = η b ∘ (θ b ∘ F f) : by rewrite naturality
... = (η b ∘ θ b) ∘ F f : by rewrite assoc)
There are various notations for forward reasoning. For example, the following proof is from the standard library. In the proof below we use the notation `p > 0`, which refers to the (anonymous) inhabitant of the type p > 0 in the current context. We also use the keyword this, which refers to the term constructed by the previous unnamed have expression.
definition infinite_primes (n : nat) : {p | p ≥ n ∧ prime p} :=
let m := fact (n + 1) in
have m ≥ 1, from le_of_lt_succ (succ_lt_succ (fact_pos _)),
have m + 1 ≥ 2, from succ_le_succ this,
obtain p `prime p` `p ∣ m + 1`, from sub_prime_and_dvd this,
have p ≥ 2, from ge_two_of_prime `prime p`,
have p > 0, from lt_of_succ_lt (lt_of_succ_le `p ≥ 2`),
have p ≥ n, from by_contradiction
(suppose ¬ p ≥ n,
have p < n, from lt_of_not_ge this,
have p ≤ n + 1, from le_of_lt (lt.step this),
have p ∣ m, from dvd_fact `p > 0` this,
have p ∣ 1, from dvd_of_dvd_add_right (!add.comm ▸ `p ∣ m + 1`) this,
have p ≤ 1, from le_of_dvd zero_lt_one this,
absurd (le.trans `2 ≤ p` `p ≤ 1`) dec_trivial),
subtype.tag p (and.intro this `prime p`)
7. Lean has a tactic language, just as Coq. You can write
begin
...tactics...
end
anywhere in a term to synthesize that subterm with tactics. You can also use by *tactic* to apply a single tactic (which we used in the above calculation proof). The type of the subterm you want to synthesize is the goal and you can use tactics to solve the goal or apply backwards reasoning on the goal. For example if the goal is f x = f y you can use the tactic apply ap f to reduce the goal to x = y. Another tactic is the exact tactic, which means you give the term explicitly.
The proof language offers various mechanisms to pass back and forth between the two modes. You can begin using tactics anywhere a proof term is expected, and, conversely, you can enter structured proof terms while in tactic mode using the exact tactic.
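For instance, here is a small sketch of splicing a tactic block into a term-mode proof (just an illustration; any proof of this statement would do):
open eq
example {A : Type} {a b c : A} (p : a = b) (q : b = c) : a = c :=
p ⬝ (by exact q)
-- the by-block synthesizes the second argument of the concatenation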
A very simple tactic proof is
open eq
variables {A B C : Type} {a a' : A}
example (g : B → C) (f : A → B) (p : a = a')
: g (f a) = g (f a') :=
begin
apply ap g,
apply ap f,
exact p
end
which produces the proof term ap g (ap f p). In the Emacs mode of Lean you can see the intermediate goals by moving your cursor to the desired location and pressing Ctrl-C Ctrl-G.
Lean has too many tactics to discuss here, although not as many as Coq. Here are some neat examples of tactics in Lean.
The cases tactic can be used to destruct a term of an inductive type. In the following examples, it destructs the path p to reflexivity. In the second example it uses the fact that succ is injective, since it is a constructor of nat, so that it can still destruct p. It also doesn’t matter whether the free variable is on the left-hand side or the right-hand side of the equality. In the last example it uses the fact that different constructors of an inductive type cannot be equal.
open nat eq
example {A : Type} {x y : A} (p : x = y) : idp ⬝ p = p :=
begin cases p, reflexivity end
example (n m l : ℕ) (p : succ n = succ (m + l))
: n = m + l :=
begin cases p; reflexivity end
example (n : ℕ) (p : succ n = 0) : empty :=
by cases p
The rewrite tactic is useful for applying many rewrite rules at once. It is modeled after the rewrite tactic in SSReflect. For example, in the following proof of Eckmann-Hilton we rewrite the goal four times to solve it. The notation ▸* simplifies the goal, similarly to Coq's simpl, and -H means to rewrite using H⁻¹.
open eq
theorem eckmann_hilton {A : Type} {a : A}
(p q : idp = idp :> a = a) : p ⬝ q = q ⬝ p :=
begin
rewrite [-whisker_right_idp p, -whisker_left_idp q, ▸*,
idp_con, whisker_right_con_whisker_left p q]
end
The induction tactic performs induction. For example if the goal is P n for a natural number n (which need not be a variable), then you can use induction n to obtain the two goals P 0 and P (k + 1) assuming P k. What’s really neat is that it supports user-defined recursors. So you can define the recursor of a HIT, tag it with the [recursor] attribute, and then it can be used for the induction tactic. Even more: you can arrange it in such a way that it uses the nondependent recursion principle whenever possible, and otherwise the dependent recursion principle.
import homotopy.circle types.int
open circle equiv int eq pi
definition circle_code (x : S¹) : Type₀ :=
begin
induction x, -- uses the nondependent recursor
{ exact ℤ},
{ apply ua, exact equiv_succ}
-- equiv_succ : ℤ ≃ ℤ with underlying function succ
end
definition transport_circle_code_loop (a : ℤ)
: transport circle_code loop a = succ a :=
ap10 !elim_type_loop a
-- ! is the "dual" of @, i.e. it inserts
-- placeholders for explicit arguments
definition circle_decode {x : S¹}
: circle_code x → base = x :=
begin
induction x, -- uses the dependent recursor
{ exact power loop},
{ apply arrow_pathover_left, intro b,
apply concato_eq, apply pathover_eq_r,
rewrite [power_con,transport_circle_code_loop]}
end
In the future there will be more tactics, providing powerful automation such as a simplifier and a tableaux theorem prover.
## The HoTT Library
I’ve been extensively working this year on the HoTT library of Lean, with the help of Jakob von Raumer and Ulrik Buchholtz. The HoTT library is coming along nicely. We have most things from the first 7 chapters of the HoTT book, and some category theory. In this file there is a summary of what is in the HoTT library, sorted by where it appears in the HoTT book.
I’ve developed the library partly by porting it from the Coq-HoTT library, since Coq’s syntax is pretty close to Lean’s. Other things I’ve proven by following the proof in the book, or just by proving them myself.
In the library we are heavily using the cubical ideas Dan Licata wrote about earlier this year. For example, we have the type of pathovers, which are heterogeneous paths lying over another path. These can be written as b =[p] b' (or pathover B b p b' if you want to give the fibration explicitly). These are used for the recursion principle of HITs
definition circle.rec {P : circle → Type} (Pbase : P base)
(Ploop : Pbase =[loop] Pbase) (x : circle) : P x
and for the equality in sigma’s
variables {A : Type} {B : A → Type}
definition sigma_eq_equiv (u v : Σa, B a)
: (u = v) ≃ (Σ(p : u.1 = v.1), u.2 =[p] v.2)
There are also squares and squareovers, which are presentations of higher equalities. One neat example that shows the strength of these additional structures is the following. I had the term $p_1 \cdot p_2 \cdot p_3 \cdot p_4$ and could fill every square in the diagram below. I wanted to rewrite this term to $r \cdot q_1 \cdot q_2 \cdot q_3 \cdot q_4$.
If every square is formulated as a path between compositions of paths, then this rewriting takes a lot of work. You have to rewrite along each individual square (assuming it is formulated as "top equals composition of the other three sides"), and then you have to perform a lot of associativity steps to be able to cancel the sides you get in the middle of the diagram.
Or you can formulate and prove a dedicated lemma for this rewrite step, which also takes quite some work.
However, because I formulated these two-dimensional paths as squares, I could first horizontally compose the four squares. This gives a square with $p_1 \cdot p_2 \cdot p_3 \cdot p_4$ as its top and $q_1 \cdot q_2 \cdot q_3 \cdot q_4$ as its bottom. Then you can apply the theorem that the top of a square equals the composition of the other three sides, which gives exactly the desired rewrite rule. Similar simplifications using pathovers and squares occur all over the library.
HITs
Lastly I want to talk about our way of handling HITs in Lean. In Coq and Agda the way to define a HIT is to make a type with the point constructors and then (inconsistently) assume that this type contains the right path constructors and the correct induction principle and computation rules. Then we forget we assumed something which was inconsistent. This is often called Dan Licata’s trick. This works well, but it’s not so clean since it assumes an inconsistency.
In Lean, we add two specific HITs as a kernel extension. These HITs are the quotient and the $n$-truncation. The formation rule, the constructors and the recursion principle are added as constants, and the computation rule on the points is added as a definitional equality to the kernel (the computation rule for paths is an axiom).
The quotient (not to be confused with a set-quotient) is the following HIT: (this is not Lean syntax)
HIT quotient {A : Type} (R : A → A → Type) :=
| i : A → quotient R
| p : Π{a a' : A}, R a a' → i a = i a'
So in a quotient we specify the type of points, and we can add any type of paths to the quotient.
In Lean we have the following constants for quotients. The computation rule for points, rec_class_of, is just defined as reflexivity, to illustrate that the reduction rule is added to the Lean kernel.
open quotient
print quotient
print class_of
print eq_of_rel
print quotient.rec
print rec_class_of
print rec_eq_of_rel
Using these HITs, we can define all HITs in Chapters 1-7 of the book. For example we can define the sequential colimit of a type family $A : \mathbb{N} \to \text{Type}$ with functions $f_n : A_n \to A_{n+1}$ by taking the quotient of the type $\Sigma n, A_n$ with relation $R$ defined as an inductive family
inductive R : (Σn, A n) → (Σn, A n) → Type :=
| Rmk : Π{n : ℕ} (a : A n), R ⟨n+1, f a⟩ ⟨n, a⟩
In similar ways we can define pushouts, suspensions, spheres and so on. But you can define more HITs with quotients. In my previous blog post I wrote how to construct the propositional truncation using just quotients. In fact, Egbert Rijke and I are working on a generalization of this construction to construct all $n$-truncations, which means we can even drop $n$-truncations as a primitive HIT.
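To illustrate, here is a sketch of how a pushout could be built on top of the quotient (this only conveys the idea; the construction in the actual library may differ in naming and argument order):
open quotient sum
inductive pushout_rel (A B C : Type) (f : C → A) (g : C → B)
: A + B → A + B → Type :=
| Rmk : Π(c : C), pushout_rel A B C f g (inl (f c)) (inr (g c))
-- the pushout is the quotient of A + B by this relation: for every c,
-- the quotient's path constructor identifies inl (f c) with inr (g c)
definition my_pushout (A B C : Type) (f : C → A) (g : C → B) : Type :=
quotient (pushout_rel A B C f g)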
Quotients can even be used to construct HITs with 2-path constructors. I have formalized a way to construct quite general HITs with 2-constructors, as long as they are nonrecursive. The HIT I constructed has three constructors. The point constructor i and path constructor p are the same as for the quotient, but there is also a 2-path constructor, which can equate
• 1-path constructors
• Reflexivity
• ap i p where i is the point constructor and p is a path in A
• concatenations and/or inverses of such paths
Formally, the definition of the HIT is as follows. Suppose we are given a type $A$ and a binary (type-valued) relation $R$ on $A$. Let $T$ be the "formal equivalence closure" of $R$, i.e. the following inductive family:
inductive T : A → A → Type :=
| of_rel : Π{a a'}, R a a' → T a a'
| of_path : Π{a a'}, a = a' → T a a'
| symm : Π{a a'}, T a a' → T a' a
| trans : Π{a a' a''}, T a a' → T a' a'' → T a a''
Note that if we’re given a map $p : \Pi\{a\ a' : A\}, R\ a\ a' \to i\ a = i\ a'$ we can extend it to a map $p^* : \Pi\{a\ a' : A\}, T\ a\ a' \to i\ a = i\ a'$ by interpreting e.g. $p^*(\text{trans}\ t\ t'):\equiv p^*\ t\cdot p^*\ t'$.
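To make this concrete, here is one way the extension could be written with the equation compiler (a sketch only: T is re-declared with explicit parameters so that the snippet is self-contained, and the clause for of_path, which uses ap i, is an assumption that the text does not spell out):
open eq
inductive T' (A : Type) (R : A → A → Type) : A → A → Type :=
| of_rel : Π{a a'}, R a a' → T' A R a a'
| of_path : Π{a a'}, a = a' → T' A R a a'
| symm : Π{a a'}, T' A R a a' → T' A R a' a
| trans : Π{a a' a''}, T' A R a a' → T' A R a' a'' → T' A R a a''
definition pstar {A X : Type} {R : A → A → Type} (i : A → X)
(p : Π{a a' : A}, R a a' → i a = i a')
: Π{a a' : A}, T' A R a a' → i a = i a'
| pstar (T'.of_rel r) := p r
| pstar (T'.of_path q) := ap i q
| pstar (T'.symm t) := (pstar t)⁻¹
| pstar (T'.trans t t') := pstar t ⬝ pstar t'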
Now if you’re also given a “relation on $T$”, i.e. a family $Q : \Pi\{a\ a'\}, T\ a\ a' \to T\ a\ a' \to \text{Type}$, then we can construct the following type just using quotients
HIT two_quotient A R Q,
i : A → two_quotient A R Q
p : Π{a a' : A}, R a a' → i a = i a'
r : Π{a a'} (r s : T a a'), Q r s → p* r = p* s
This two_quotient allows us to construct at least the following HITs:
• The torus (the formulation with two loops and a 2-constructor)
• The reduced suspension
• The groupoid quotient
So in conclusion, from quotients you can define the most commonly used HITs. You probably cannot define every possible HIT this way, though, so for the other HITs you still have to use Dan Licata’s trick. I should note here that there is currently no private modifier in Lean, so we cannot do Dan Licata’s trick very well in Lean right now.
These are all the topics I want to talk about in this blog post. I tried to cover as many of Lean’s features as possible, but there are still features I haven’t talked about. For more information, you can look at the Lean tutorial. Keep in mind that the tutorial is written for the standard library, not the HoTT library, and you might want to skip the first couple of chapters explaining dependent type theory. The Quick Reference chapter at the end of the tutorial (or in pdf format) is also very useful. Also feel free to ask any questions in the comments, in the lean-user google group or in an issue on Github. Finally, if you’re willing to help with the HoTT library, that would be very much appreciated!
### 48 Responses to The Lean Theorem Prover
1. Steve Awodey says:
Thanks Floris — this is really terrific!
2. Mike Shulman says:
Thanks so much for this intro! I’m still making my way through the tutorial, but I have one brief question: does Lean have, or will it one day have, a tactic language that lets you write your own tactics, like Coq does?
• Floris van Doorn says:
You can already define your own custom tactics. However, this feature is still limited, since there is no way yet to do something like “match goal” as in Coq. This is something which is planned, though.
Here are some tactics defined by default:
https://github.com/leanprover/lean/blob/master/hott/init/tactic.hlean#L145-L151
• Mike Shulman says:
Good to hear, thanks. It looks from your link as though the “tactic language” of lean is reflected into its ordinary type theory, so that defining a tactic is done by simply defining a term? This is a nice approach, one reason being that I hope it will lead to a more strongly typed tactic language; one of the most confusing things (to me) about Coq’s Ltac is its untyped nature and how it mixes terms, variables, tactics, and so on.
• spitters37 says:
Could you explain how adding an abstract type of tactics preserves all the nice meta properties?
• Matt Oliveri says:
Have you seen Mtac?
http://plv.mpi-sws.org/mtac/
I haven’t tried it, but it sounds like something you might like.
• Floris van Doorn says:
@ Mike: Exactly, defining a tactic is just defining a term of type tactic. Then if you apply that tactic, Lean will reduce your definition to a basic tactic and apply that (or raise an error if your definition does not reduce to a basic tactic). For example
do 3 (apply and.intro)
reduces to
apply and.intro; apply and.intro; apply and.intro
and that results in some C++ code to be executed to apply apply and.intro three times.
I should note here that the semicolon has a different meaning in Lean than in Coq.
t1; t2 in Lean is just applying tactic t1 and then applying tactic t2. Since most tactics only do things with the first goal, this is not the semantics of Coq. The meaning of t1; t2 in Coq is the same as t1: t2 in Lean, so with a colon instead of a semicolon.
@ Bas: The type tactic is just an inductive type with a single inhabitant builtin:
inductive tactic : Type := builtin : tactic
Every tactic is defined to be builtin. This is where the story ends in the type theory in Lean. This is all done in the file I linked to in my previous comment. On top of that there is some code associated with all the tactics t defined in that file, which is executed if you write something like “by t”.
3. Mike Shulman says:
How does one use Licata’s trick for HITs in Lean? Is there a “private” modifier that can prevent the use of the forbidden induction principle outside the defining module?
• Floris van Doorn says:
There is currently no such “private” modifier, so you cannot use Licata’s trick very well in Lean yet. I’ve added this comment also to my blog post.
However, I hope I convinced you that for all basic HITs we don’t need Licata’s trick. But it’s too bad we cannot experiment with other HITs (unless you first give a construction using quotients).
What HITs are used in the HoTT libraries of Coq or Agda which do not fall in the list of HITs we can make in Lean already? Can you give some examples of HITs which
* are recursive, other than the n-truncations, or
* have 3-path constructors (or higher)?
• Mike Shulman says:
3-path constructors would be a nightmare, I don’t think we’ve used them for anything yet. The main example of a recursive HIT other than truncations that we’ve used so far is the localization at a family of maps. There’s also spectrification, but I hope that that can also be constructed as a sequential colimit.
• Mike Shulman says:
(Well, to be precise, the construction of localization that’s actually used in the HoTT/Coq library does use n-path constructors for all n, as described here. But that’s just to avoid funext; with funext it can be done using only (recursive) 1-path constructors.)
• Floris van Doorn says:
Good to hear that 3-path constructors are not used. I completely agree that they would be a nightmare to work with.
As you know, Egbert has a construction of (certain) localizations as a sequential colimit, and Egbert and I are trying to formalize that in Lean (this is still in progress). At first sight, spectrification looks quite similar to localizations, in the sense that you want to make certain maps an equivalence. It’s good to hear that most HITs can (probably) be implemented in Lean using quotients.
• Mike Shulman says:
Egbert’s localizations as a sequential colimit only work for localizations at omega-compact types, so I don’t really regard it as solving the problem.
• Mike Shulman says:
Of course, there are also all the recursive HITs that appear in chapters 10-11 of the book: the cumulative hierarchy, the Cauchy reals, and the surreal numbers. I didn’t mention those at first because you said “used in the Coq and Agda libraries”, and while the Coq library contains the cumulative hierarchy and the surreals, they aren’t really “used” for anything else. However, I do think they’re all important too, and I expect more examples will be found in the future.
4. Mike Shulman says:
What would you say are Lean’s “selling points”? That is, suppose I’m choosing a proof assistant to use for a new project; why would I choose Lean?
For example, one reason to choose Coq is its powerful tactic language; another is its module system. One reason to choose Agda is its support for induction-induction and induction-recursion. (Of course, there are other reasons too in both cases; these are just examples.) What does Lean have that makes it stand out?
• Floris van Doorn says:
Some selling points:
* Lean has a very conservative kernel, which makes it less susceptible to the kind of inconsistency proofs that have been discovered in Coq and Agda. So far no inconsistencies have been found in Lean (granted, this is also explained by the fact that it has been used much less).
* Lean has good support for readable proofs (at least compared to Coq), such as the have ..., from ..., and calc constructs I’ve given in my post.
* Currently Leo de Moura (the Lean developer) and Daniel Selsam (a student at Stanford) are working together on a generic tableau theorem prover for Lean, similar to the “blast” tactic in Isabelle. Once they have implemented the blast tactic, this should be a huge selling point, to take care of (at least) the huge amount of tedious but basic proofs. Much more automation is planned for Lean in the future.
• Mike Shulman says:
Thanks! I look forward to the automation. Do you know how HoTT-compatible it will be? (I’m still kind of sad that everything in Lean’s standard library has to be manually ported to its HoTT library.)
It’s worth noting that one can implement a certain amount of “readable proof” syntax in Coq and Agda in userspace just using mixfix syntax. The HoTT/Agda library, for instance, uses a syntax for chaining equalities very much like the Lean “calc”. We haven’t felt the need for anything like that in Coq (because rewrite is generally easier), but we could probably implement it about as well there.
• Floris van Doorn says:
The automation should be fully HoTT-compatible. But there is a difference between “every feature it has in the standard library works in the HoTT library” and “every feature useful for HoTT is implemented.”
But I’m not exactly sure how the automation would be used in different ways in HoTT and the standard library. Of course, using the proof-irrelevant Prop, you can sometimes do more in the standard library. But there is a feature planned to identify all elements in a mere proposition, as if they were definitionally equal (that is, Lean will automatically cast along the equalities between terms in a mere proposition). This is also useful in the standard library if we have a type which has at most one element. I do not know the exact details, though.
Does the mixfix notation in Agda work only for equality, or also for other relations? In Lean, you can do it for any relation that you have previously specified to be transitive (using the [trans] attribute). So you can also do it for equivalences, homotopies, functors (composing them), natural transformations, and so on.
Example:
open nat
constant (R : ℕ → ℕ → Type₀)
infix `~~`:50 := R
constant (Rt : Π{n m k}, n ~~ m → m ~~ k → n ~~ k)
attribute Rt [trans]
check calc
3 ~~ 5 : _
... ~~ 7 : _
... ~~ 9 : _
• Mike Shulman says:
Identifying all elements in a proposition definitionally implies UIP, so it’s not HoTT-compatible. For any $p:a=a$, we have $(a,p)$ and $(a,\mathrm{refl}_a)$ living in $\sum_{y:A} (a=y)$, which is a mere proposition; but if $(a,p)$ and $(a,\mathrm{refl}_a)$ are definitionally equal then $p=\mathrm{refl}_a$. But you said “as if” they were definitionally equal, so maybe what’s planned is a way to work around that?
The mixfix notation in the HoTT/Agda library as defined works only for equality, but one could of course define similar notations for other relations. It would be more work than simply declaring them as transitive, but it ought to work.
• Floris van Doorn says:
I didn’t mean that we would definitionally identify all elements. Maybe I shouldn’t have expanded on that point, because I don’t know exactly how it is going to work. What I imagine is this. Suppose $f : \prod_{a:A} B(a)$ and $A$ is a mere proposition. If the current goal contains $f(x)$ and some automation procedure has a rewrite rule $f(x') = y$, then this will match with the current goal, and to apply the rule it will transport along the equality $x = x'$.
To respond to your comment about porting the libraries: it would indeed be great if this could be done automatically, but this is not very realistic. First of all, originally we planned to have the HoTT library and the standard library be one single library. But even in this case, the theorems about natural numbers would be very different for both components. For example, in the standard library you have a Prop-valued theorem such as
$(n \le m) \leftrightarrow (n = m) \vee (n < m)$ while in the HoTT library this would be the type-valued variant
$(n \le m) \leftrightarrow (n = m) + (n < m)$. Moreover, in the standard library $\le$ is a map into Prop, while in the HoTT library $\le$ is a map into Type which is provably a mere proposition. Also, in the HoTT library, you might want a definition of (say) commutativity of addition which computes in a good way, while in the standard library this is not important. All these small differences make an automatic port hard.
• Mike Shulman says:
Ah, I see what you mean about automation of prop-transport. That makes sense; thanks for explaining!
I still don’t understand the difficulty of an automatic port. True, "Prop" means something different in the two libraries, but it seems that for most practical purposes, the two "Prop"s behave very similarly, to the point where the proofs of most theorems ought to be almost the same, if corresponding lemmas have the same names. The theorem $(n\le m) \leftrightarrow (n=m)\vee (n<m)$ makes sense in both cases (in HoTT, $\vee$ means the (-1)-truncation of $+$), and the different true theorem $(n\le m) \leftrightarrow (n=m) + (n<m)$ also makes sense in both cases (at least, it does in Coq; Lean's $+$ might need a universe lift or something). Typeclasses ought to carry through the prop-ness of props automatically for the most part, without the need to modify the proofs much.
• Floris van Doorn says:
I’m not saying it would be impossible to write a program porting the file automatically. But I do think that doing a manual port of *all* files five times will take less time than writing a program to do it automatically. There are just many small differences in the standard library and the HoTT library making this tricky. For example, in the HoTT library we use $+$ as the sum of two types, which we use much less often in the standard library. Hence in the HoTT library this notation is always available, but in the standard library you have to open a namespace for it. Similarly, the theorem inv_inv in the HoTT library means $(p^{-1})^{-1}=p$ for paths, which in the standard library inv_inv means $(g^{-1})^{-1}=g$ for elements in a group.
• Mike Shulman says:
Well, those are choices of conventions that could be made differently. Do you have any reason to think that there’s an upper bound (like five) on the number of times you’ll want to port files? I would think that as the standard library grows, you would want to keep incorporating its new features into the HoTT library — or at least you would if it were easy to do so.
• Floris van Doorn says:
That’s true. We could do things differently to make the port easier. But that might require compromises which would be inconvenient for other reasons. I don’t have a good reason to believe we only need 5 ports. But I do think that every individual file will slowly settle down, so that re-porting it after some point is not necessary anymore.
Even though there is no automatic script to port, we have a simple script which uses regular expressions to translate most notations and declarations from the standard library to the HoTT library. Nothing sophisticated, but using that it only takes a couple of minutes to port a file (usually less).
But I do think you’re underestimating how much time it takes to write a fully-automatic script, and whether updating the script might take more time than just porting it manually. Another example of a difference between the libraries: sometimes we just want to give a different proof in the HoTT library than in the standard library. Compare the standard library proof that 0 + n = n (which is more readable) with the HoTT library proof (which has better computational content):
https://github.com/leanprover/lean/blob/master/library/data/nat/basic.lean#L111
https://github.com/leanprover/lean/blob/master/hott/types/nat/basic.hlean#L112
That said, I don’t find this discussion particularly interesting. Personally, I’m incapable of writing a fully automatic script, and I think that the Lean developers have better ways to spend their time. However, I am willing to port files from the standard library to the HoTT library, especially if the standard library does something which is also useful for HoTT.
Note that I do not disagree that it would be great to have a fully-automatic script.
• Mike Shulman says:
(It ought also to be possible to declare a single mixfix notation that would work for any transitive relation, but it wouldn’t look as pretty, since you’d have to do something like choose a particular symbol that would stand in for the “=” regardless of what the relation was, and then pass the relation in question as a parameter somewhere else.)
• Floris van Doorn says:
By the way, you can also combine different relations in Lean. The following is a proof of 0 < 8. The notation dec_trivial runs the decision procedure for a decidable proposition (or type).
-- standard library
import data.nat
open nat algebra
check calc
0 = 0 : dec_trivial
... < 5 : dec_trivial
... ≤ 5 : dec_trivial
... ≤ 8 : dec_trivial
In addition to these, here are some of the things I like:
* Lean’s elaborator is quite powerful. In particular, the handling of type classes is well-integrated with the elaboration process. This makes it possible to work with algebraic structures and the algebraic hierarchy in a natural way.
* I find Lean’s Emacs mode very pleasant and enjoyable to use.
* When proofs and definitions get complicated enough, there are performance problems in all systems. It is hard to find head-to-head comparisons, but I am sure Lean stacks up very well in this regard. For example, parallelism is fundamentally built-in to the system.
* Lean’s syntax is very nice.
* The features are generally well designed: namespaces, sectioning notions, annotations, etc.
* The function definition package is very nice.
* The mechanisms for defining and extending structures are very nice.
* The rewrite tactic is very nice (though not as strong as SSReflect’s).
* The standard library is designed to support both constructive and classical reasoning, as well as interactions between the two, and it does this in a nice way. (We should also be able to take advantage of this in HoTT, i.e. support classical reasoning where appropriate.)
The system is quite new, and it is exciting to see it evolve. We have high hopes for powerful automation, code extraction, a fast evaluator, and a powerful library and so on. Only time will tell how well these hopes pan out, but the current system provides a solid basis for growth.
• Mike Shulman says:
Thanks for that list! I can’t help noticing a lot of entries on it of the form “X is very nice”. Can you be more specific about any of them? For instance, is there something particular that you can do with type classes in Lean but not in Coq?
Mike, I’m sorry to be slow to reply! I didn’t notice your response.
Regarding type classes, I can’t be definitive about the advantages over Coq, because I have never used type classes in Coq. I have used axiomatic type classes in Isabelle and canonical structures in SSReflect, though. Isabelle’s type classes manage a complex algebraic hierarchy very efficiently, but the problem there is simpler, since structures are not first class objects; you can use them to manage concrete groups, rings, and fields, but they do not work for abstract algebra. Also, the things you can do are very restricted; e.g. a type class cannot depend on more than one type parameter. SSReflect gives you “real” abstract algebra, but canonical structures are hard to work with, and I don’t think Coq’s mechanisms are as efficient. At least, in the SSReflect library, the algebraic hierarchy is not as elaborate as in Isabelle.
What makes type classes in Lean powerful is that class resolution is an integral part of the elaboration algorithm. (The algorithm is described here: http://arxiv.org/abs/1505.04324, but we are working on an updated and expanded version of that paper.) You don’t want class resolution to fire too early, since the constraints you are trying to solve may have metavariables (holes arising from other implicit arguments) that need to be solved first. On the other hand, you don’t want to solve them too late, because the solutions provide information that can be used to solve other constraints. I have been told that Coq’s algorithm separates the elaboration and class resolution phase, i.e. first it elaborates, then sends all the remaining class constraints to a separate class inference algorithm. I have been told that that often causes problems. Lean’s elaboration algorithm does not separate the two, and solves class resolution constraints when appropriate, as part of the same process.
There were “performance” issues that Leo had to address. There was a problem with slowdowns with long chains of coercions, and Leo managed to solve that. He also caches results, so that once a problem is solved, the same problem doesn’t have to be solved again and again.
The net effect is that it simply works. Much of our development of the algebraic hierarchy was modeled after Isabelle’s; we have everything from semigroups and additive structures and various kinds of orders all the way up to ordered rings and ordered fields. Setting it up and declaring instances was easy and natural. Everything works the way you expect it to, and it is a pleasure to use. I will have to leave it to others, however, to weigh in as to whether one can say the same about Coq.
As far as the other features I like, the best I can do is refer you to the tutorial:
The function definition package is described in Chapter 7, mechanisms for defining structures are in Chapter 10, the rewrite tactic is described in Chapter 11.6. Various aspects of sections, namespaces, etc. are described in Chapter 5 and 8, and the Emacs mode is described in Chapter 5.
Even though it is a really minor thing, I find tab completion in Emacs mode to be extremely helpful. We have adopted naming conventions that makes it easy to guess the prefix of a theorem you want to apply, and then hitting tab gives you a list of options. It is surprisingly effective; I rarely have to look around to find what I need.
I hope that helps… Let me know if I can expand on anything.
7. Mike Shulman says:
Popping out some comment nesting levels, I’m curious what is “more computational” about the hott-library proof that $0+n=n$ — and why we care, given that $\mathbb{N}$ is a set, so that this proof inhabits a proposition? Some propositions certainly have computational importance, like isequiv, but an equality in $\mathbb{N}$ doesn’t have “ingredients” that can be extracted like the inverse of an equivalence. My only guess is that maybe transporting along this proof can be reduced to simpler transports somehow and thus is easier to reason about?
8. Floris van Doorn says:
In the HoTT-library proof the proof zero_add (succ n) reduces to ap succ (zero_add n). This is not the case in the standard library proof, where transports are used. This indeed matters if you transport over this equality, or take dependent paths over this equality.
Here is an example where this matters for the proof succ_add n m : (n + 1) + m = (n + m) + 1. Suppose we have a sequence of types $A : \mathbb{N} \to \text{Type}$ and maps $f_n : A_n \to A_{n+1}$. Then we might want to define the composite $f^m : A_n \to A_{n+m}$ and then prove that $f^{m+1}(x) = f^m(f(x))$ for $x : A_n$. Except this needs to be a dependent path, since the LHS lives in $A_{n+(m+1)}$ and the right hand side in $A_{(n+1)+m}$. So we have to transport over succ_add, and the proof becomes simpler if succ_add has nice computational properties (similar to zero_add). This is what happens here in the formalization project of Egbert and me:
https://github.com/EgbertRijke/sequential_colimits/blob/master/sequence.hlean#L106-L114
Of course, we can also define rep_f without the computation content of succ_add, for example by transporting along the fact that equalities in $\mathbb{N}$ are mere propositions. However, later we prove properties about rep_f, which will become more complicated if we would do this.
• Mike Shulman says:
I see.
I bet that could be fixed, though. For one thing, I don’t find !nat.add_zero ▸ !nat.add_zero to be any more “readable” than reflexivity. Wouldn’t it be even more readable to write the base case of the standard library proof like this?
calc succ n + 0 = succ n : add_zero
... = succ (n + 0) : add_zero
In any case, though, there shouldn’t be a problem with the base case, since the concatenation of two reflexivities is again reflexivity, and transport along reflexivity is the identity, both judgmentaly.
For the induction step, I see two problems: concatenations with reflexivity (add_zero and add_succ are just names for reflexivity), and occurrence of transports rather than aps. It seems to me that the latter could be solved by defining ap in terms of transport. For the former, wouldn’t it be possible to modify calc so that if one step is reflexivity then it doesn’t insert any unnecessary concatenations?
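(For reference, one standard way to get ap from transport, in informal notation: given $f : A \to B$ and $p : a = a'$, transporting $\mathrm{refl}_{f(a)}$ along $p$ in the family $\lambda x.\, f(a) = f(x)$ yields a term of type $f(a) = f(a')$, and by path induction this agrees with $\mathrm{ap}_f(p)$.)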
• Floris van Doorn says:
Yes, I agree that all these things are possible to change. (Most likely, at least. I haven’t seen the code for calc.)
I'm not sure if your last feature is desirable, though. Suppose I write the following; then I might not want the first step to be normalized to reflexivity:
calc 2^20 + 2^20 = 2^21 : by efficient calculation
... = something : ...
(Of course, it is possible to add a check to see whether it is too expensive to normalize the term.)
• Mike Shulman says:
My thought was that the user would give some indication of which equalities should be omitted as reflexivity, like writing ≡ instead of =.
• Floris van Doorn says:
That is a good idea!
9. Floris van Doorn says:
Update. You can now run the HoTT library of Lean in your browser:
https://leanprover.github.io/tutorial/?live&hott
10. Qinxiang Cao says:
Is HoTT consistent with the following assumption?
Proof extensionality: forall P Q: Prop, (P Q) -> (P = Q).
• Mike Shulman says:
What do you mean by “(P Q)”?
• Floris van Doorn says:
If the (P Q) was supposed to be $(P \leftrightarrow Q)$, then yes, it is definitely consistent in HoTT. Moreover, it is provable in HoTT, since it’s a special case of univalence (because two propositions which imply each other are equivalent as types).
• Qinxiang Cao says:
Yes. (P Q) means (P Q). I don’t know why it does not print out properly.
• Qinxiang Cao says:
Well, I think I just have a difficulty of typing that symbol.
• Mike Shulman says:
To see how to type math symbols, go here.
11. I noticed many of the links are broken, now that you are using a fork of Lean.
12. Floris van Doorn says:
Thanks for the comment. I’ve fixed the links.
13. Josh says:
Good news for HoTT in Lean 3! There’s work[1] to port the Lean 2 library by addressing[2] the problem of singleton elimination, which means more verbose proofs but doesn’t require changes to Lean.
|
## anonymous 5 years ago How would you find the square root of this number: 3 square root of 28?
1. anonymous
Is this 3*Sqrt(28)?
2. anonymous
I think so. I'm doing a worksheet and I'm trying to work out the problem; the answer is supposed to be 6 square root of 7.
3. anonymous
3sqrt(28)=3sqrt(4*7)=3sqrt((2^2)*7)=3sqrt(2^2)sqrt(7)=3*2*sqrt7=6sqrt7
4. anonymous
How would you do the square root of this problem: -3 square root of 54. The answer is supposed to be -9 square root of 6.
5. anonymous
can you write 54 as a product of prime numbers (2, 3, 5, 7, 11, ...) for me?
6. anonymous
2,3,3,3
7. anonymous
my computer went crazy sorry for so many entries
8. anonymous
ok, there you have 54=2x3x3x3. There is a rule like this: sqrt(a^2)=a, for a>0. So you have 54=2x3x3^2, and sqrt54=sqrt(2x3x3^2)=sqrt(2x3) x sqrt(3^2). Can you do the rest for me? The thing here is you need to understand how it works; then you can do any problem.
9. anonymous
I don't understand what you want me to do?
10. anonymous
-.- $-3\sqrt{54}=-3\sqrt{2*3*3^{2}}=-3\sqrt{6}\sqrt{3^{2}}=-3*3*\sqrt{6}=-9\sqrt{6}$
11. anonymous
but you still have to understand the method
12. anonymous
I understand how you factored it down but how do you get the answer of -9 square root of 6?
13. anonymous
√(3^2)=3, you understand this right? So √(54)=√(2*3*3^2)=√(6*3^2)=(√6)*(√3^2)=(√6)*3, and then -3*√(54)=-3*(√6)*3=-3*3*(√6)=-9√6. Clear?
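A quick way to double-check simplifications like these is Python's sympy package (assuming it is installed); sympy's sqrt pulls square factors out of an integer automatically:

    from sympy import sqrt

    print(3 * sqrt(28))    # prints 6*sqrt(7)
    print(-3 * sqrt(54))   # prints -9*sqrt(6)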
|
# Forty males and fifty female students at a university were asked to try three brands of soft drinks and select favorite: Their responses are tabulated below: Is the...
## Question
Forty male and fifty female students at a university were asked to try three brands of soft drink and select a favorite. Their responses are tabulated below. Is there a relationship between the sex of the student and the brand preferred?

Sex / Brand I / Brand II / Brand III
Male: 12 19
Female: 23 13

What is the observed test statistic? Note: round your answer to two decimal places.

Question 5 (1 pt): State your conclusion to the fourth question in one or two sentences.
|
# What is the moment generating function from a density of a continuous
1. Oct 25, 2013
### karthik666
Hi everyone,
So I am taking a statistics course and finding this concept kinda challenging. wondering if someone can help me with the following problem!
Let X be a random variable with probability density function $$f(x)=\begin{cases}xe^{-x} & \text{if } x>0\\ 0 & \text{otherwise.}\end{cases}$$
we want to Determine the mgf of X whenever it exists.
I know that M(t) = E(e^(tx)) = integral of e^(tx)* f(x)
but not sure what to do from there.
Thanks for the help ^^
2. Oct 25, 2013
### MathematicalPhysicist
It's the integral of e^{tx}*f(x) over $\mathbb{R}$.
3. Oct 25, 2013
### karthik666
what is the approach for this problem?
4. Oct 25, 2013
### Office_Shredder
Staff Emeritus
You should write down explicitly the integral you said you're evaluating, and then try to evaluate it. Why don't you show us what you can do with it and then we can help you with where you get stuck? We can't help you until you do that.
Last edited: Oct 25, 2013
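For reference, a sketch of where that integral leads (the density vanishes for $x \le 0$, and the integral only converges for $t < 1$):

$$M(t) = \int_0^\infty e^{tx}\, x e^{-x}\, dx = \int_0^\infty x\, e^{-(1-t)x}\, dx = \frac{1}{(1-t)^2}, \qquad t < 1,$$

using $\int_0^\infty x e^{-ax}\, dx = 1/a^2$ for $a > 0$ (integrate by parts, or differentiate $\int_0^\infty e^{-ax}\,dx = 1/a$ with respect to $a$).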
|
# Mathematics jokes 2
• Do you know why they never have beer at a math party? Because you can't drink and derive...
• Did you hear about the teacher who was arrested trying to board an airplane with a compass, a protractor and a calculator? He was charged with carrying weapons of math instruction.
• If it's zero degrees outside today and it's supposed to be twice as cold tomorrow, how cold is it going to be?
• A teacher was trying to impress her students with the fact that terms cannot be subtracted from one another unless they are like terms. "For example," she continued, "we cannot take five apples from six bananas." "Well," countered a pupil, "can't we take five apples from three trees?"
• Question: "How many seconds are there in a year?" Answer: "Twelve. January second, February second, March second, ..."
• You Might Be a Mathematician if... you are fascinated by the equation
$e^{i\pi} +1=0.$
# math is fun
Note by Aman Gautam
2 years, 1 month ago
Actually this equation is regarded as the most beautiful eqn. Refer http://blogs.scientificamerican.com/beautiful-minds/2014/05/07/the-neuroscience-of-mathematical-beauty/ · 1 year, 5 months ago
Actually, there are 31,557,600 seconds in a year. · 2 years ago
if I am not mistaken... · 2 years ago
|
13.7 Probability (Page 4/18)
Two number cubes are rolled. Use the Complement Rule to find the probability that the sum is less than 10.
$\frac{5}{6}$
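A brute-force check of this answer in Python (counting the 36 equally likely rolls directly):

    from itertools import product

    rolls = list(product(range(1, 7), repeat=2))
    print(sum(1 for a, b in rolls if a + b < 10) / len(rolls))   # 0.8333... = 5/6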
Computing probability using counting theory
Many interesting probability problems involve counting principles, permutations, and combinations. In these problems, we will use permutations and combinations to find the number of elements in events and sample spaces. These problems can be complicated, but they can be made easier by breaking them down into smaller counting problems.
Assume, for example, that a store has 8 cellular phones and that 3 of those are defective. We might want to find the probability that a couple purchasing 2 phones receives 2 phones that are not defective. To solve this problem, we need to calculate all of the ways to select 2 phones that are not defective as well as all of the ways to select 2 phones. There are 5 phones that are not defective, so there are $C(5,2)$ ways to select 2 phones that are not defective. There are 8 phones, so there are $C(8,2)$ ways to select 2 phones. The probability of selecting 2 phones that are not defective is: $\frac{C(5,2)}{C(8,2)}=\frac{10}{28}=\frac{5}{14}.$
Computing probability using counting theory
A child randomly selects 5 toys from a bin containing 3 bunnies, 5 dogs, and 6 bears.
1. Find the probability that only bears are chosen.
2. Find the probability that 2 bears and 3 dogs are chosen.
3. Find the probability that at least 2 dogs are chosen.
1. We need to count the number of ways to choose only bears and the total number of possible ways to select 5 toys. There are 6 bears, so there are $C(6,5)$ ways to choose 5 bears. There are 14 toys, so there are $C(14,5)$ ways to choose any 5 toys.
$\frac{C(6,5)}{C(14,5)}=\frac{6}{2002}=\frac{3}{1001}$
2. We need to count the number of ways to choose 2 bears and 3 dogs and the total number of possible ways to select 5 toys. There are 6 bears, so there are $C(6,2)$ ways to choose 2 bears. There are 5 dogs, so there are $C(5,3)$ ways to choose 3 dogs. Since we are choosing both bears and dogs at the same time, we will use the Multiplication Principle. There are $C(6,2)\cdot C(5,3)$ ways to choose 2 bears and 3 dogs. We can use this result to find the probability.
$\frac{C(6,2)\,C(5,3)}{C(14,5)}=\frac{15\cdot 10}{2002}=\frac{75}{1001}$
3. It is often easiest to solve “at least” problems using the Complement Rule. We will begin by finding the probability that fewer than 2 dogs are chosen. If fewer than 2 dogs are chosen, then either no dogs could be chosen, or 1 dog could be chosen.
When no dogs are chosen, all 5 toys come from the 9 toys that are not dogs. There are $C(9,5)$ ways to choose 5 toys from the 9 toys that are not dogs. Since there are 14 toys, there are $C(14,5)$ ways to choose the 5 toys from all of the toys.
$\frac{C(9,5)}{C(14,5)}=\frac{63}{1001}$
If there is 1 dog chosen, then 4 toys must come from the 9 toys that are not dogs, and 1 must come from the 5 dogs. Since we are choosing both dogs and other toys at the same time, we will use the Multiplication Principle. There are $C(5,1)\cdot C(9,4)$ ways to choose 1 dog and 4 other toys.
$\frac{C(5,1)\,C(9,4)}{C(14,5)}=\frac{5\cdot 126}{2002}=\frac{315}{1001}$
Because these events would not occur together and are therefore mutually exclusive, we add the probabilities to find the probability that fewer than 2 dogs are chosen.
$\frac{63}{1001}+\frac{315}{1001}=\frac{378}{1001}$
We then subtract that probability from 1 to find the probability that at least 2 dogs are chosen.
$1-\frac{378}{1001}=\frac{623}{1001}$
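A quick numerical check of these three answers (a minimal sketch in Python 3.8+, using math.comb from the standard library):

    from math import comb

    total = comb(14, 5)                          # 2002 ways to pick any 5 of the 14 toys
    print(comb(6, 5) / total)                    # only bears: 3/1001 ≈ 0.0030
    print(comb(6, 2) * comb(5, 3) / total)       # 2 bears and 3 dogs: 75/1001 ≈ 0.0749
    fewer_than_2_dogs = (comb(9, 5) + comb(5, 1) * comb(9, 4)) / total
    print(1 - fewer_than_2_dogs)                 # at least 2 dogs: 623/1001 ≈ 0.6224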
|
# About the solutions of ODE
1. May 29, 2014
### Jhenrique
Given the following ODE:
$ay''(t) + by'(t) + cy(t) = 0$
The following solution:
$y(t) = c_1 \exp(x_1 t) + c_2 \exp(x_2 t)$
is more general than:
$y(t) = A \exp(\sigma t) \cos(\omega t - \varphi)$
? Why?
2. May 29, 2014
### AlephZero
The solutions are equivalent if $x_1$ and $x_2$ are complex conjugate numbers.
They are not equivalent if $x_1$ and $x_2$ are unequal real numbers, unless you want to use a crazy interpretation of $cos(\omega t - \varphi)$ where $\omega$ and $\varphi$ are complex constants.
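To spell out the equivalence in the complex-conjugate case, write $x_{1,2} = \sigma \pm i\omega$. Then
$c_1 e^{(\sigma + i\omega)t} + c_2 e^{(\sigma - i\omega)t} = e^{\sigma t}\left[(c_1+c_2)\cos(\omega t) + i(c_1-c_2)\sin(\omega t)\right] = A e^{\sigma t}\cos(\omega t - \varphi),$
with $A\cos\varphi = c_1 + c_2$ and $A\sin\varphi = i(c_1 - c_2)$, both of which are real when $c_2 = \overline{c_1}$ (as required for a real-valued solution). The two forms then carry the same two free constants; the exponential form is more general only in that it also covers the case of two distinct real roots.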
|
Now showing items 575-591 of 1677
• #### Group rings and class groups
[OWS-18] (Birkhäuser Basel, 1992)
• #### 1334 - Group Theory, Measure, and Asymptotic Invariants
[OWR-2013-42] (2013) - (18 Aug - 24 Aug 2013)
The workshop ‘Group Theory, Measure, and Asymptotic Invariants’ organized by Miklos Abert (Budapest), Damien Gaboriau (Lyon) and Andreas Thom (Leipzig) was held 18 - 24 August 2013. The event was a continuation of the ...
• #### Group-Graded Rings Satisfying the Strong Rank Condition
[OWP-2019-22] (Mathematisches Forschungsinstitut Oberwolfach, 2019-08-16)
A ring $R$ satisfies the $\textit{strong rank condition}$ (SRC) if, for every natural number $n$, the free $R$-submodules of $R^n$ all have rank $\leq n$. Let $G$ be a group and $R$ a ring strongly graded by $G$ such that ...
• #### 0510 - Groups and Geometries
[OWR-2005-12] (2005) - (06 Mar - 12 Mar 2005)
The workshop Groups and Geometries was one of a series of Oberwolfach workshops which takes place every 3 years. It focused on finite simple groups, Lie-type groups and their interactions with geometry.
• #### 0236 - Groups and Geometries
[TB-2002-42] (2002) - (01 Sep - 07 Sep 2002)
• #### 0817 - Groups and Geometries
[OWR-2008-20] (2008) - (20 Apr - 26 Apr 2008)
• #### Groups and graphs : new results and methods
[OWS-06] (Birkhäuser Basel, 1985)
• #### Groups with Spanier-Whitehead Duality
[OWP-2019-23] (Mathematisches Forschungsinstitut Oberwolfach, 2019-09-17)
We introduce the notion of Spanier-Whitehead $K$-duality for a discrete group $G$, defined as duality in the KK-category between two $C^*$-algebras which are naturally attached to the group, namely the reduced group ...
• #### 1721 - Harmonic Analysis and the Trace Formula
[OWR-2017-25] (2017) - (21 May - 27 May 2017)
The purpose of this workshop was to discuss recent results in harmonic analysis that arise in the study of the trace formula. This theme is common to different directions of research on automorphic forms such as representation ...
• #### 0028 - Harmonische Analysis und Darstellungstheorie topologischer Gruppen
[TB-2000-28] (2000) - (09 Jul - 15 Jul 2000)
• #### 0742 - Harmonische Analysis und Darstellungstheorie Topologischer Gruppen
[OWR-2007-49] (2007) - (14 Oct - 20 Oct 2007)
This was a meeting in the general area of representation theory and harmonic analysis on reductive groups.
• #### 0548 - Heat Kernels, Stochastic Processes and Functional Inequalities
[OWR-2005-54] (2005) - (27 Nov - 03 Dec 2005)
The conference brought together mathematicians belonging to several fields, essentially analysis, probability and geometry. One of the main unifying topics was certainly the study of heat kernels in various contexts: ...
• #### 1319 - Heat Kernels, Stochastic Processes and Functional Inequalities
[OWR-2013-23] (2013) - (05 May - 11 May 2013)
The general topic of the 2013 workshop Heat kernels, stochastic processes and functional inequalities was the study of linear and non-linear diffusions in geometric environments: finite and infinite-dimensional manifolds, ...
• #### 1648 - Heat Kernels, Stochastic Processes and Functional Inequalities
[OWR-2016-55] (2016) - (27 Nov - 03 Dec 2016)
The general topic of the 2016 workshop Heat kernels, stochastic processes and functional inequalities was the study of linear and non-linear diffusions in geometric environments including smooth manifolds, fractals and ...
• #### Hecke duality relations of Jacobi forms
[OWP-2008-03] (Mathematisches Forschungsinstitut Oberwolfach, 2008-03-07)
In this paper we introduce a new subspace of Jacobi forms of higher degree via certain relations among Fourier coefficients. We prove that this space can also be characterized by duality properties of certain distinguished ...
• #### Heisenberg-Weyl algebra revisited: combinatorics of words and paths
[OWP-2009-02] (Mathematisches Forschungsinstitut Oberwolfach, 2009)
The Heisenberg–Weyl algebra, which underlies virtually all physical representations of Quantum Theory, is considered from the combinatorial point of view. We provide a concrete model of the algebra in terms of paths on a ...
• #### Hemodynamical Flows : Modeling, Analysis and Simulation
[OWS-37] (Birkhäuser Basel, 2008)
This volume consists of six contributions concerning mathematical tools and analysis for the blood flow in arteries and veins. … book will be handy for those who are interested in the description of the flows of fluids ...
|
## Combine bases from subspaces
Hi:
I have a problem about combining bases from subspaces. This is part of orthogonality.
The example is as follows:
For A=##\begin{bmatrix} 1 & 2 \\ 3 & 6 \end{bmatrix}## split x= ##\begin{bmatrix} 4 \\ 3 \end{bmatrix}## into ##x_r##+##x_n##=##\begin{bmatrix} 2 \\ 4 \end{bmatrix}+\begin{bmatrix} 2 \\ -1 \end{bmatrix}##
I don't know why it can be split into ##x_r##+##x_n## like this, or how to prove it,
thanks a lot
It looks like you haven't given us everything from your notes or problem, but I'll try to infer what is meant. (2,-1) is a basis vector for the null space. Orthogonal to that space is the space with spanning set (1,2) (the slope is the negative reciprocal) Then any vector, such as (4,3), can be written as a linear combination of these. (4,3)=a(2,-1)+b(1,2). Then solving the system of two linear equations for the two unknowns a and b, we get a=1, b=2. Or was there something else you were trying to "prove"?
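For what it's worth, here is a quick numerical sketch of that decomposition (plain Python with numpy assumed; r and n are just the spanning vectors named above):

    import numpy as np

    x = np.array([4.0, 3.0])
    r = np.array([1.0, 2.0])    # spans the row space of A
    n = np.array([2.0, -1.0])   # spans the null space of A

    # r and n are orthogonal, so projecting x onto each direction
    # gives the two components, and they add back up to x.
    x_r = (x @ r) / (r @ r) * r
    x_n = (x @ n) / (n @ n) * n
    print(x_r, x_n, x_r + x_n)  # [2. 4.] [ 2. -1.] [4. 3.]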
In fact, I feel I have stuck into the situation of learning linear algebra. I read the part on orthogonality and the four subspaces, and I feel confused about some examples, such as the following: B=##\begin{bmatrix} 1 & 2&3&4&5 \\ 1 & 2&4&5&6 \\ 1 & 2&4&5&6 \end{bmatrix}## contains ##\begin{bmatrix} 1 &3 \\ 1& 4\end{bmatrix}## in the pivot rows and columns. However, I cannot work out how this comes about. I tried elimination, but it does not directly show where the submatrix comes from. Thanks
## Combine bases from subspaces
Quote by applechu In fact, I feel I have stuck into the situation of learning linear algebra. I read the part on orthogonality and the four subspaces, and I feel confused about some examples, such as the following: B=##\begin{bmatrix} 1 & 2&3&4&5 \\ 1 & 2&4&5&6 \\ 1 & 2&4&5&6 \end{bmatrix}## contains ##\begin{bmatrix} 1 &3 \\ 1& 4\end{bmatrix}## in the pivot rows and columns. However, I cannot work out how this comes about. I tried elimination, but it does not directly show where the submatrix comes from. Thanks
The form
##\begin{bmatrix} 1 & 2&3&4&5 \\ 0 & 0&1&1&1 \\ 0&0&0&0&0 \end{bmatrix}##, or ##\begin{bmatrix} 1 & 2&0&1&2 \\ 0 & 0&1&1&1 \\ 0&0&0&0&0 \end{bmatrix}##
tells you to take the first and third columns of B as a basis for the column space,
##\begin{bmatrix} 1 \\ 1\\1\end{bmatrix}## and ##\begin{bmatrix} 3 \\ 4\\4\end{bmatrix}##
These look similar to the 2-by-2 matrix you wrote, but I don't understand why you would want that matrix; can you tell us more about the problem?
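As a sanity check, sympy can compute the reduced row echelon form and the pivot columns directly (assuming the sympy package is available):

    from sympy import Matrix

    B = Matrix([[1, 2, 3, 4, 5],
                [1, 2, 4, 5, 6],
                [1, 2, 4, 5, 6]])
    R, pivots = B.rref()
    print(R)       # the second reduced form shown above
    print(pivots)  # (0, 2): the first and third columns are the pivot columns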
It is an example from the book. I am trying to learn linear algebra from some books. Thanks a lot.
Recognitions: Gold Member Science Advisor Staff Emeritus You say "I feel I have stuck into the situation of learning linear algebra." What you give, "subspaces", "basis", etc. is linear algebra. What course was this for?
For the chapter about orthogonality. Thanks.
Recognitions: Gold Member Science Advisor Staff Emeritus I asked what course, if not Linear Algebra, not what chapter.
|
# newton’s three laws of graduation
We all learnt Newton’s famous three laws of motion in high-school physics, and all of us have accepted them as universal truth ever since. (Well, at least at non-nano, non-quantum scales.) Now someone has discovered the long-lost Newton’s three laws of graduation. I think that can explain why I’m still stuck with my degree right now.
First Law: A grad student in procrastination tends to stay in procrastination unless an external force is applied to it
Second Law: The age, a, of a doctoral process is directly proportional to the flexibility, f, given by the advisor and inversely proportional to the student’s motivation, m
Third Law: For every action towards graduation there is an equal and opposite distraction
Sigh…
The full version of the laws can be found here
|
• ### Properties of the irregular satellite system around Uranus inferred from K2, Herschel and Spitzer observations(1706.06837)
Jan. 6, 2020 astro-ph.EP
In this paper we present visible range light curves of the irregular Uranian satellites Sycorax, Caliban, Prospero, Ferdinand and Setebos taken with Kepler Space Telescope in the course of the K2 mission. Thermal emission measurements obtained with the Herschel/PACS and Spitzer/MIPS instruments of Sycorax and Caliban were also analysed and used to determine size, albedo and surface characteristics of these bodies. We compare these properties with the rotational and surface characteristics of irregular satellites in other giant planet systems and also with those of main belt and Trojan asteroids and trans-Neptunian objects. Our results indicate that the Uranian irregular satellite system likely went through a more intense collisional evolution than the irregular satellites of Jupiter and Saturn. Surface characteristics of Uranian irregular satellites seems to resemble the Centaurs and trans-Neptunian objects more than irregular satellites around other giant planets, suggesting the existence of a compositional discontinuity in the young Solar system inside the orbit of Uranus.
• ### Periodicities of the RV Tau-type pulsating star DF Cygni: a combination of Kepler data with ground-based observations(1609.07944)
Sept. 26, 2016 astro-ph.SR
The RV Tauri stars constitute a small group of classical pulsating stars with some dozen known members in the Milky Way. The light variation is caused predominantly by pulsations, but these alone do not explain the full complexity of the light curves. High quality photometry of RV Tau-type stars is very rare. DF Cygni is the only member of this class of stars in the original Kepler field, hence allowing the most accurate photometric investigation of an RV Tauri star to date. The main goal is to analyse the periodicities of the RV Tauri-type star DF Cygni by combining four years of high-quality Kepler photometry with almost half a century of visual data collected by the American Association of Variable Star Observers. Kepler quarters of data have been stitched together to minimize the systematic effects of the space data. The mean levels have been matched with the AAVSO visual data. Both datasets have been submitted to Fourier and wavelet analyses, while the stability of the main pulsations has been studied with the O-C method and the analysis of the time-dependent amplitudes. DF Cygni shows a very rich behaviour on all time-scales. The slow variation has a period of 779.606 d and it has been remarkably coherent during the whole time-span of the combined data. On top of the long-term cycles the pulsations appear with a period of 24.925 d. Both types of light variation significantly fluctuate in time, with a constantly changing interplay of amplitude and phase modulations. The long-period change (i.e. the RVb signature) somewhat resembles the Long Secondary Period (LSP) phenomenon of the pulsating red giants, whereas the short-period pulsations are very similar to those of the Cepheid variables. Comparing the pulsation patterns with the latest models of Type-II Cepheids, we found evidence of strong non-linear effects directly observable in the Kepler light curve.
• ### Activity of 50 Long-Period Comets Beyond 5.2 AU(1607.05811)
July 20, 2016 astro-ph.EP
Remote investigations of the ancient solar system matter have been traditionally carried out through the observations of long-period (LP) comets that are less affected by solar irradiation than the short-period counterparts orbiting much closer to the Sun. Here we summarize the results of our decade-long survey of the distant activity of LP comets. We found that the most important separation in the dataset is based on the dynamical nature of the objects. Dynamically new comets are characterized by a higher level of activity on average: the most active new comets in our sample can be characterized by afrho values >3--4 higher than those of our most active returning comets. New comets develop more symmetric comae, suggesting a generally isotropic outflow. Contrary to this, the coma of recurrent comets can be less symmetrical, occasionally exhibiting negative slope parameters, suggesting sudden variations in matter production. The morphological appearance of the observed comets is rather diverse. A surprisingly large fraction of the comets have long, tenuous tails, but the presence of impressive tails does not show a clear correlation with the brightness of the comets.
• ### CHEOPS performance for exomoons: The detectability of exomoons by using optimal decision algorithm(1508.00321)
Aug. 3, 2015 astro-ph.EP
Many attempts have already been made for detecting exomoons around transiting exoplanets but the first confirmed discovery is still pending. The experience that has been gathered so far allows us to better optimize future space telescopes for this challenge, already during the development phase. In this paper we focus on the forthcoming CHaracterising ExOPlanet Satellite (CHEOPS), describing an optimized decision algorithm with step-by-step evaluation, and calculating the number of required transits for an exomoon detection for various planet-moon configurations that can be observable by CHEOPS. We explore the most efficient way for such an observation which minimizes the cost in observing time. Our study is based on PTV observations (photocentric transit timing variation, Szab\'o et al. 2006) in simulated CHEOPS data, but the recipe does not depend on the actual detection method, and it can be substituted with e.g. the photodynamical method for later applications. Using the current state-of-the-art level simulation of CHEOPS data we analyzed transit observation sets for different star-planet-moon configurations and performed a bootstrap analysis to determine their detection statistics. We have found that the detection limit is around an Earth-sized moon. In the case of favorable spatial configurations, systems with at least such a large moon and with at least a Neptune-sized planet, an 80\% detection chance requires at least 5-6 transit observations on average. There is also a non-zero chance in the case of smaller moons, but the detection statistics deteriorate rapidly, while the necessary transit measurements increase fast. (abridged)
• ### CHARA/MIRC observations of two M supergiants in Perseus OB1: temperature, Bayesian modeling, and compressed sensing imaging(1405.4032)
June 11, 2014 astro-ph.SR
Two red supergiants of the Per OB1 association, RS Per and T Per, have been observed in H band using the MIRC instrument at the CHARA array. The data show clear evidence of departure from circular symmetry. We present here new techniques specially developed to analyze such cases, based on state-of-the-art statistical frameworks. The stellar surfaces are first modeled as limb-darkened discs based on SATLAS models that fit both MIRC interferometric data and publicly available spectrophotometric data. Bayesian model selection is then used to determine the most probable number of spots. The effective surface temperatures are also determined and give further support to the recently derived hotter temperature scales of red supergiants. The stellar surfaces are reconstructed by our model-independent imaging code SQUEEZE, making use of its novel regularizer based on Compressed Sensing theory. We find excellent agreement between the model-selection results and the reconstructions. Our results provide evidence for the presence of near-infrared spots representing about 3-5% of the stellar flux.
• ### Affordable spectroscopy for 1m-class telescopes(1401.3532)
Jan. 15, 2014 astro-ph.SR
Doppler observations of exoplanet systems have been a very expensive technique, mainly due to the high costs of high-resolution stable spectrographs. Recent advances in instrumentation enable affordable Doppler planet detections with surprisingly small optical telescopes. We investigate the possibility of measuring Doppler reflex motion of planet hosting stars with small-aperture telescopes that have traditionally been neglected for this kind of studies. After thoroughly testing the recently developed and commercially available Shelyak eShel echelle spectrograph, we demonstrated that it is routinely possible to achieve velocity precision at the $100 \mathrm{m\,s}^{-1}$ level, reaching down to $\pm 50~\mathrm{m\,s}^{-1}$ for the best cases. We describe our off-the-shelf instrumentation, including a new 0.5m RC telescope at the Gothard Astrophysical Observatory of Lor\'and E\"otv\"os University equipped with an intermediate resolution fiber-fed echelle spectrograph. We present some follow-up radial velocity measurements of planet hosting stars and point out that updating the orbital solution of Doppler-planets is a very important task that can be fulfilled with sub-meter sized optical telescopes without requesting very expensive telescope times on 2--4~m (or larger) class telescopes.
• ### A portrait of the extreme Solar System object 2012 DR30(1304.7112)
April 26, 2013 astro-ph.EP
2012 DR30 is a recently discovered Solar System object on a unique orbit, with a high eccentricity of 0.9867, a perihelion distance of 14.54 AU and a semi-major axis of 1109 AU, in this respect outscoring the vast majority of trans-Neptunian objects. We performed Herschel/PACS and optical photometry to uncover the size and albedo of 2012 DR30, together with its thermal and surface properties. The body is 185 km in diameter and has a relatively low V-band geometric albedo of ~8%. Although the colours of the object indicate that 2012 DR30 is an RI taxonomy class TNO or Centaur, we detected an absorption feature in the Z-band that is uncommon among these bodies. A dynamical analysis of the target's orbit shows that 2012 DR30 moves on a relatively unstable orbit and was most likely only recently placed on its current orbit from the most distant and still highly unexplored regions of the Solar System. If categorised on dynamical grounds 2012 DR30 is the largest Damocloid and/or high inclination Centaur observed so far.
• ### Signals of exomoons in averaged light curves of exoplanets(1108.4557)
Oct. 4, 2011 astro-ph.SR, astro-ph.EP
The increasing number of transiting exoplanets sparked a significant interest in discovering their moons. Most of the methods in the literature utilize timing analysis of the raw light curves. Here we propose a new approach for the direct detection of a moon in the transit light curves via the so called Scatter Peak. The essence of the method is the valuation of the local scatter in the folded light curves of many transits. We test the ability of this method with different simulations: Kepler "short cadence", Kepler "long cadence", ground-based millimagnitude photometry with 3-min cadence, and the expected data quality of the planned ESA mission of PLATO. The method requires ~100 transit observations, therefore applicable for moons of 10-20 day period planets, assuming 3-4-5 year long observing campaigns with space observatories. The success rate for finding a 1 R_Earth moon around a 1 R_Jupiter exoplanet turned out to be quite promising even for the simulated ground-based observations, while the detection limit of the expected PLATO data is around 0.4 R_Earth. We give practical suggestions for observations and data reduction to improve the chance of such a detection: (i) transit observations must include out-of-transit phases before and after a transit, spanning at least the same duration as the transit itself; (ii) any trend filtering must be done in such a way that the preceding and following out-of-transit phases remain unaffected.
• ### The Kepler characterization of the variability among A- and F-type stars. I. General overview(1107.0335)
Aug. 31, 2011 astro-ph.SR
The Kepler spacecraft is providing time series of photometric data with micromagnitude precision for hundreds of A-F type stars. We present a first general characterization of the pulsational behaviour of A-F type stars as observed in the Kepler light curves of a sample of 750 candidate A-F type stars. We propose three main groups to describe the observed variety in pulsating A-F type stars: gamma Dor, delta Sct, and hybrid stars. We assign 63% of our sample to one of the three groups, and identify the remaining part as rotationally modulated/active stars, binaries, stars of different spectral type, or stars that show no clear periodic variability. 23% of the stars (171 stars) are hybrid stars, which is a much larger fraction than what has been observed before. We characterize for the first time a large number of A-F type stars (475 stars) in terms of number of detected frequencies, frequency range, and typical pulsation amplitudes. The majority of hybrid stars show frequencies with all kinds of periodicities within the gamma Dor and delta Sct range, also between 5 and 10 c/d, which is a challenge for the current models. We find indications for the existence of delta Sct and gamma Dor stars beyond the edges of the current observational instability strips. The hybrid stars occupy the entire region within the delta Sct and gamma Dor instability strips, and beyond. Non-variable stars seem to exist within the instability strips. The location of gamma Dor and delta Sct classes in the (Teff,logg)-diagram has been extended. We investigate two newly constructed variables 'efficiency' and 'energy' as a means to explore the relation between gamma Dor and delta Sct stars. Our results suggest a revision of the current observational instability strips, and imply an investigation of other pulsation mechanisms to supplement the kappa mechanism and convective blocking effect to drive hybrid pulsations.
• ### Practical suggestions on detecting exomoons in exoplanet transit light curves(1012.1560)
Dec. 7, 2010 astro-ph.SR, astro-ph.EP
The number of known transiting exoplanets is rapidly increasing, which has recently inspired significant interest as to whether they can host a detectable moon. Although there has been no such example where the presence of a satellite was proven, several methods have already been investigated for such a detection in the future. All these methods utilize post-processing of the measured light curves, and the presence of the moon is decided by the distribution of a timing parameter. Here we propose a method for the detection of the moon directly in the raw transit light curves. When the moon is in transit, it puts its own fingerprint on the intensity variation. In realistic cases, this distortion is too little to be detected in the individual light curves, and must be amplified. Averaging the folded light curve of several transits helps decrease the scatter, but it is not the best approach because it also reduces the signal. The relative position of the moon varies from transit to transit, the moon's wing will appear in different positions on different sides of the planet's transit. Here we show that a careful analysis of the scatter curve of the folded light curves enhances the chance of detecting the exomoons directly.
• ### A search for new members of the beta Pic, Tuc-Hor and epsilon Cha moving groups in the RAVE database(1009.1356)
Sept. 7, 2010 astro-ph.SR
We report on the discovery of new members of nearby young moving groups, exploiting the full power of combining the RAVE survey with several stellar age diagnostic methods and follow-up high-resolution optical spectroscopy. The results include the identification of one new and five likely members of the beta Pictoris moving group, ranging from spectral types F9 to M4 with the majority being M dwarfs, one K7 likely member of the epsilon Cha group and two stars in the Tuc-Hor association. Based on the positive identifications we foreshadow a great potential of the RAVE database in progressing toward a full census of young moving groups in the solar neighbourhood.
• ### Period-luminosity relations of pulsating M giants in the solar neighbourhood and the Magellanic Clouds(1007.2974)
July 18, 2010 astro-ph.SR
We analyse the results of a 5.5-yr photometric campaign that monitored 247 southern, semi-regular variables with relatively precise Hipparcos parallaxes to demonstrate an unambiguous detection of Red Giant Branch (RGB) pulsations in the solar neighbourhood. We show that Sequence A' contains a mixture of AGB and RGB stars, as indicated by a temperature related shift at the TRGB. Large Magellanic Cloud (LMC) and Galactic sequences are compared in several ways to show that the P-L sequence zero-points have a negligible metallicity dependence. We describe a new method to determine absolute magnitudes from pulsation periods and calibrate the LMC distance modulus using Hipparcos parallaxes to find \mu (LMC) = 18.54 +- 0.03 mag. Several sources of systematic error are discussed to explain discrepancies between the MACHO and OGLE sequences in the LMC. We derive a relative distance modulus of the Small Magellanic Cloud (SMC) relative to the LMC of \Delta \mu = 0.41 +- 0.02 mag. A comparison of other pulsation properties, including period-amplitude and luminosity-amplitude relations, confirms that RGB pulsation properties are consistent and universal, indicating that the RGB sequences are suitable as high-precision distance indicators. The M giants with the shortest periods bridge the gap between G and K giant solar-like oscillations and M-giant pulsation, revealing a smooth continuity as we ascend the giant branch.
• ### Methods for exomoon characterisation: combining transit photometry and the Rossiter-McLaughlin effect(1004.1143)
April 7, 2010 astro-ph.SR
It has been suggested that moons around transiting exoplanets may cause observable signal in transit photometry or in the Rossiter-McLaughlin (RM) effect. In this paper a detailed analysis of parameter reconstruction from the RM effect is presented for various planet-moon configurations, described with 20 parameters. We also demonstrate the benefits of combining photometry with the RM effect. We simulated 2.7x10^9 configurations of a generic transiting system to map the confidence region of the parameters of the moon, find the correlated parameters and determine the validity of reconstructions. The main conclusion is that the strictest constraints from the RM effect are expected for the radius of the moon. In some cases there is also meaningful information on its orbital period. When the transit time of the moon is exactly known, for example, from transit photometry, the angle parameters of the moon's orbit will also be constrained from the RM effect. From transit light curves the mass can be determined, and combining this result with the radius from the RM effect, the experimental determination of the density of the moon is also possible.
• ### Long-term photometry and periods for 261 nearby pulsating M giants(0908.3228)
Aug. 22, 2009 astro-ph.SR
We present the results of a 5.5-year CCD photometric campaign that monitored 261 bright, southern, semi-regular variables with relatively precise Hipparcos parallaxes. The data are supplemented with independent photoelectric observations of 34 of the brightest stars, including 11 that were not part of the CCD survey, and a previously unpublished long time-series of VZ Cam. Pulsation periods and amplitudes are established for 247 of these stars, the majority of which have not been determined before. All M giants with sufficient observations for period determination are found to be variable, with 87% of the sample (at S/N >= 7.5) exhibiting multi-periodic behaviour. The period ratios of local SRVs are in excellent agreement with those in the Large Magellanic Cloud. Apparent K-band magnitudes are extracted from multiple NIR catalogues and analysed to determine the most reliable values. We review the effects of interstellar and circumstellar extinction and calculate absolute K-band magnitudes using revised Hipparcos parallaxes.
• ### CHARA/MIRC interferometry of red supergiants: diameters, effective temperatures and surface features(0902.2602)
Feb. 17, 2009 astro-ph.SR
We have obtained H-band interferometric observations of three galactic red supergiant stars using the MIRC instrument on the CHARA array. The targets include AZ Cyg, a field RSG and two members of the Per OB1 association, RS Per and T Per. We find evidence of departures from circular symmetry in all cases, which can be modelled with the presence of hotspots. This is the first detection of these features in the $H$-band. The measured mean diameters and the spectral energy distributions were combined to determine effective temperatures. The results give further support to the recently derived hotter temperature scale of red supergiant stars by Levesque et al. (2005), which has been evoked to reconcile the empirically determined physical parameters and stellar evolutionary theories. We see a possible correlation between spottedness and mid-IR emission of the circumstellar dust, suggesting a connection between mass-loss and the mechanism that generates the spots.
• ### Binarity and multiperiodicity in high-amplitude delta Scuti stars(0812.2139)
Dec. 11, 2008 astro-ph
We have carried out a photometric and spectroscopic survey of bright high-amplitude delta Scuti (HADS) stars. The aim was to detect binarity and multiperiodicity (or both) in order to explore the possibility of combining binary star astrophysics with stellar oscillations. Here we present the first results for ten, predominantly southern, HADS variables. We detected the orbital motion of RS Gru with a semi-amplitude of ~6.5 km/s and 11.5 days period. The companion is inferred to be a low-mass dwarf star in a close orbit around RS Gru. We found multiperiodicity in RY Lep both from photometric and radial velocity data and detected orbital motion in the radial velocities with hints of a possible period of 500--700 days. The data also revealed that the amplitude of the secondary frequency is variable on the time-scale of a few years, whereas the dominant mode is stable. Radial velocities of AD CMi revealed cycle-to-cycle variations which might be due to non-radial pulsations. We confirmed the multiperiodic nature of BQ Ind, while we obtained the first radial velocity curves of ZZ Mic and BE Lyn. The radial velocity curve and the O-C diagram of CY Aqr are consistent with the long-period binary hypothesis. We took new time series photometry on XX Cyg, DY Her and DY Peg, with which we updated their O-C diagrams.
• ### Star cluster kinematics with AAOmega(0809.1269)
Sept. 8, 2008 astro-ph
The high-resolution setup of the AAOmega spectrograph on the Anglo-Australian Telescope makes it a beautiful radial velocity machine, with which one can measure velocities of up to 350-360 stars per exposure to +/-1--2 km/s in a 2-degree field of view. Here we present three case studies of star cluster kinematics, each based on data obtained on three nights in February 2008. The specific aims included: (i) cluster membership determination for NGC 2451A and B, two nearby open clusters in the same line-of-sight; (ii) a study of possible membership of the planetary nebula NGC 2438 in the open cluster M46; and (iii) the radial velocity dispersion of M4 and NGC 6144, a pair of two globular clusters near Antares. The results which came out of only three nights of AAT time illustrate very nicely the potential of the instrument and, for example, how quickly one can resolve decades of contradiction in less than two hours of net observing time.
• ### AAOmega radial velocities rule out current membership of the planetary nebula NGC 2438 in the open cluster M46(0809.0327)
Sept. 1, 2008 astro-ph
We present new radial velocity measurements of 586 stars in a one-degree field centered on the open cluster M46, and the planetary nebula NGC 2438 located within a nuclear radius of the cluster. The data are based on medium-resolution optical and near-infrared spectra taken with the AAOmega spectrograph on the Anglo-Australian Telescope. We find a velocity difference of about 30 km/s between the cluster and the nebula, thus removing all ambiguities about the cluster membership of the planetary nebula caused by contradicting results in the literature. The line-of-sight velocity dispersion of the cluster is 3.9+/-0.3 km/s, likely to be affected by a significant population of binary stars.
• ### The University of New South Wales Extrasolar Planet Search: a catalogue of variable stars from fields observed 2004--2007(0802.0096)
Feb. 1, 2008 astro-ph
We present a new catalogue of variable stars compiled from data taken for the University of New South Wales Extrasolar Planet Search. From 2004 October to 2007 May, 25 target fields were each observed for 1-4 months, resulting in ~87000 high precision light curves with 1600-4400 data points. We have extracted a total of 850 variable light curves, 659 of which do not have a counterpart in either the General Catalog of Variable Stars, the New Suspected Variables catalogue or the All Sky Automated Survey southern variable star catalogue. The catalogue is detailed here, and includes 142 Algol-type eclipsing binaries, 23 beta Lyrae-type eclipsing binaries, 218 contact eclipsing binaries, 53 RR Lyrae stars, 26 Cepheid stars, 13 rotationally variable active stars, 153 uncategorised pulsating stars with periods <10 d, including delta Scuti stars, and 222 long period variables with variability on timescales of >10 d. As a general application of variable stars discovered by extrasolar planet transit search projects, we discuss several astrophysical problems which could benefit from carefully selected samples of bright variables. These include: (i) the quest for contact binaries with the smallest mass ratio, which could be used to test theories of binary mergers; (ii) detached eclipsing binaries with pre-main-sequence components, which are important test objects for calibrating stellar evolutionary models; and (iii) RR Lyrae-type pulsating stars exhibiting the Blazhko-effect, which is one of the last great mysteries of pulsating star research.
• ### High and low states of the system AM Herculis(0802.0019)
Jan. 31, 2008 astro-ph
Context: We investigate the distribution of optically high and low states of the system AM Herculis (AM Her). Aims: We determine the state duty cycles, and their relationships with the mass transfer process and binary orbital evolution of the system. Methods: We make use of the photographic plate archive of the Harvard College Observatory between 1890 and 1953 and visual observations collected by the American Association of Variable Star Observers between 1978 and 2005. We determine the statistical probability of the two states, their distribution and recurrence behaviors. Results: We find that the fractional high state duty cycle of the system AM Her is 63%. The data show no preference of timescales on which high or low states occur. However, there appears to be a pattern of long and short duty cycle alternation, suggesting that the state transitions retain memories. We assess models for the high/low states for polars (AM Her type systems). We propose that the white-dwarf magnetic field plays a key role in regulating the mass transfer rate and hence the high/low brightness states, due to variations in the magnetic-field configuration in the system.
• ### Combining Visual and Photoelectric Observations of Semi-Regular Red Variables(0711.4873)
Nov. 30, 2007 astro-ph
Combining visual observations of SR variables with measurements of them using a photoelectric photometer is discussed then demonstrated using data obtained for the bright, southern SR variable theta Aps. Combining such observations is useful in that it can provide a more comprehensive set of data by extending the temporal coverage of the light curve. Typically there are systematic differences in the visual and photometric datasets that must be corrected for.
• ### A wide-field kinematic survey for tidal tails around five globular clusters(astro-ph/0703247)
March 11, 2007 astro-ph
Using the AAOmega instrument of the Anglo-Australian Telescope, we have obtained medium-resolution near-infrared spectra of 10,500 stars in two-degree fields centered on the galactic globular clusters 47 Tuc, NGC 288, M12, M30 and M55. Radial velocities and equivalent widths of the infrared Ca II triplet lines have been determined to constrain cluster membership, which in turn has been used to study the angular extent of the clusters. From the analysis of 140-1000 member stars in each cluster, we do not find extended structures that go beyond the tidal radii. For three clusters we estimate a 1% upper limit of extra-tidal red giant branch stars. We detect systemic rotation in 47 Tuc and M55.
• ### Eclipsing binaries in the MACHO database: New periods and classifications for 3031 systems in the Large Magellanic Cloud(astro-ph/0703137)
March 7, 2007 astro-ph
Eclipsing binaries offer a unique opportunity to determine fundamental physical parameters of stars using the constraints on the geometry of the systems. Here we present a reanalysis of publicly available two-color observations of about 6800 stars in the Large Magellanic Cloud, obtained by the MACHO project between 1992 and 2000 and classified as eclipsing variable stars. Of these, less than half are genuine eclipsing binaries. We determined new periods and classified the stars, 3031 in total, using the Fourier parameters of the phased light curves. The period distribution is clearly bimodal, reflecting the separate groups of more massive blue main sequence objects and low mass red giants. The latter resemble contact binaries and obey a period-luminosity relation. Using evolutionary models, we identified foreground stars. The presented database has been cleaned of artifacts and misclassified variables, thus allowing searches for apsidal motion, tertiary components, pulsating stars in binary systems and secular variations with time-scales of several years.
• ### The 2004--2006 outburst and environment of V1647 Ori(astro-ph/0408432)
Jan. 10, 2007 astro-ph
(Abridged) We studied the brightness and spectral evolution of the young eruptive star V1647 Ori during its recent outburst in the period 2004 February - 2006 Sep. We performed a photometric follow-up in the bands V, R_C, I_C, J, H, K_s as well as visible and near-IR spectroscopy. The main results are as follows: The brightness of V1647 Ori stayed more than 4 mag above the pre-outburst level until 2005 October when it started a rapid fading. In the high state we found a periodic component in the optical light curves with a period of 56 days. The delay between variations of the star and variations in the brightness of clump of nearby nebulosity corresponds to an angle of 61+/-14 degrees between the axis of the nebula and the line of sight. A steady decrease of HI emission line fluxes could be observed. In 2006 May, in the quiescent phase, the HeI 1.083 line was observed in emission, contrary to its deep blueshifted absorption observed during the outburst. The J-H and H-K_s color maps of the infrared nebula reveal an envelope around the star. The color distribution of the infrared nebula suggests reddening of the scattered light inside a thick circumstellar disk. We show that the observed properties of V1647 Ori could be interpreted in the framework of the thermal instability models of Bell et al. (1995). V1647 Ori might belong to a new class of young eruptive stars, defined by relatively short timescales, recurrent outbursts, modest increase in bolometric luminosity and accretion rate, and an evolutionary state earlier than that of typical EXors.
• ### Physical parameters and multiplicity of five southern close eclipsing binaries(astro-ph/0701200)
Jan. 8, 2007 astro-ph
Aims: Detect tertiary components of close binaries from spectroscopy and light curve modelling; investigate light-travel time effect and the possibility of magnetic activity cycles; measure mass-ratios for unstudied systems and derive absolute parameters. Methods: We carried out new photometric and spectroscopic observations of five bright (V<10.5 mag) close eclipsing binaries, predominantly in the southern skies. We obtained full Johnson BV light curves, which were modelled with the Wilson-Devinney code. Radial velocities were measured with the cross-correlation method using IAU radial velocity standards as spectral templates. Period changes were studied with the O-C method, utilising published epochs of minimum light (XY Leo) and ASAS photometry (VZ Lib). Results: For three objects (DX Tuc, QY Hya, V870 Ara), absolute parameters have been determined for the first time. We detect spectroscopically the tertiary components in XY Leo, VZ Lib and discover one in QY Hya. For XY Leo we update the light-time effect parameters and detect a secondary periodicity of about 5100 d in the O$-$C diagram that may hint about the existence of short-period magnetic cycles. A combination of recent photometric data shows that the orbital period of the tertiary star in VZ Lib is likely to be over 1500 d. QY Hya is a semi-detached X-ray active binary in a triple system with K and M-type components, while V870 Ara is a contact binary with the third smallest spectroscopic mass-ratio for a W UMa star to date (q=0.082+/-0.030). This small mass-ratio, being close to the theoretical minimum for contact binaries, suggests that V870 Ara has the potential of constraining evolutionary scenarios of binary mergers. The inferred distances to these systems are compatible with the Hipparcos parallaxes.
|
# (False?) Proof that Monotone Darboux Functions are Continuous
I have been attempting to show that strictly increasing Darboux Functions are continuous, where $f$ is a Darboux Function if it has the so called "Intermediate Value Theorem", i.e.
for any two values $a$ and $b$ in the domain of $f$, and any $y$ between $f(a)$ and $f(b)$, there is some $c$ between $a$ and $b$ with $f(c) = y$. (Wikipedia)
I am somewhat familiar with a number of proofs of this statement, and have been trying to find new ways to prove it. I have come up with the following proof, with which I have some serious qualms:
Assume $f: [a,b] \to \mathbb{R}$ is an increasing Darboux function. Let $\epsilon>0$ be given, and suppose that $x<y$ with $x,y\in[a,b]$. Choose any $L\in \left(f(x),f(y)\right)$, which is equivalent to $f(x)<L<f(y)$ since $f$ is increasing. If $|f(x)-L|<\epsilon$, let $y'=y$ and $x'=x$. If $|f(x)-L|\ge\epsilon$, choose $x'$ and $y'$ such that $L-\epsilon < f(x') < L < f(y') < L+\epsilon$ by the Intermediate Value Property, and note $|f(x')-L|<\epsilon$. Let $\delta = y' - x'$. Now, by the Intermediate Value Theorem there exists $c\in(x',y')$ such that $f(c)=L$. This implies that $0 < c-x' < y'-x'$, so that $|x'-c|< y'-x'=\delta$ while $|f(x')-f(c)| = |f(x')-L| < \epsilon$.
Now, I am not confident that this proves continuity of $f$. Instead of following the standard process, namely
1. Starting with an $\epsilon$
2. Choosing some $\delta$ and assuming $|x-c|<\delta$
3. Showing this implies $|f(x)-f(c)|<\epsilon$
I feel that I have done everything out of order and thus my "proof" is invalid.
Q: Assuming I am correct and that this proof is fundamentally flawed, is there any simple way to correct it without completely altering the proof?
• A monotone increasing, Darboux function on $[a, b]$ will necessarily have $\alpha = f(a) \le f(x) \le f(b) = \beta$, so it will map $[a, b] \to [\alpha, \beta]$ surjectively because of the Darboux property. Then the answers given here apply immediately. – Chris Jul 28 '17 at 6:47
• @Chris completely true, and a very valid proof strategy. Nevertheless, I am already familiar with such a proof, and my post is concerned with validating the validity (or more likely, lack of validity) in my methods – Brevan Ellefsen Jul 28 '17 at 6:59
## 2 Answers
I think that what you are doing is equivalent to: Let $y_2<x<y_1$. Given $\epsilon >0$, take $y'_1\in (x,y_1)$ such that $$f(y'_1)\in [f(x),f(y_1)]\cap [f(x),f(x)+\epsilon)$$ and take $y'_2 \in (y_2,x)$ such that $$f(y'_2)\in [f(y_2),f(x)]\cap (f(x)-\epsilon, f(x)].$$ Let $\delta=\min (y'_1-x,x-y'_2)$. Then $\delta >0$ and $$\forall x'\in (-\delta+x,\delta+x)\; (\;-\epsilon +f(x)<f(y'_2)\leq f(x')\leq f(y'_1)< \epsilon +f(x)\;).$$
Remark: It is common usage that "$f$ is increasing" means $x<y\implies f(x)\leq f(y)$, and that "$f$ is strictly increasing" means $x<y\implies f(x)<f(y)$. Note that what I have written above does not require $f$ to be strictly increasing.
• wow. This is a really fantastic way to rework my argument! I've diagrammed each step carefully to make sure this lines up with the geometric argument I had in my head. Great idea letting $\delta = \operatorname{min}(y'_1 - x, x - y'_2)$... I hadn't thought at all to try this! As near as I can tell, your argument here is exactly what I was thinking of but couldn't sort out enough to put into a logical flow. With your (serious) rewording and reorganization, is this now a valid proof of continuity? – Brevan Ellefsen Jul 31 '17 at 4:28
• The idea of letting $\delta= \min$ (et cetera) is a common technique. – DanielWainfleet Jul 31 '17 at 4:32
• Common, yes. I've used it many times before. I just meant to say that I completely neglected that I could use it here! – Brevan Ellefsen Jul 31 '17 at 4:35
• I like your idea for a proof of continuity by this method. It's elegant and brief and does not require case-by-case analysis (as compared to showing that $\lim_{y\to x^-}f(y)=f(x)=\lim_{y\to x^+}f(y)$). – DanielWainfleet Aug 1 '17 at 1:50
In the end, you still have to show that the definition of continuity applies at any fixed point $c$, finding an appropriate $\delta$ for any choice of $\epsilon > 0$. You are correct that this proof is flawed and it breaks down even in the easier case where you assume $|f(x) - L| < \epsilon$. Choosing an arbitrary $\epsilon > 0$ is a good first step. Next you choose arbitrary $x < y$ in $[a,b]$. By placing $L$ between $f(x)$ and $f(y)$, you have done nothing more than to fix a point $c$ between $x$ and $y$ where $f(c) = L$. As you say, the existence of this point is guaranteed by the IVT. However, this is also fine in that you now have a point specified where you are trying to establish continuity. For the first case you assume that $|f(x) - f(c)| = |f(x) - L| < \epsilon$. Of course this condition is only possible through the continuity you are trying to prove, but, nevertheless, you can proceed under this assumption and see where it leads. You set $\delta = y'-x' = y-x$ and assert that now we have both $|x' - c| < \delta$ and $|f(x') - f(c)| < \epsilon$ where the latter is true only by hypothesis. As this is an independent case, it must demonstrate that continuity at $c$ holds. Now that $\delta$ has been fixed, if you choose any other point $x''$ satisfying $|x'' - c| < \delta$ it must be shown that $|f(x'') - f(c)| < \epsilon$. Since $x' < c < y'$ and $\delta = y' - x'$, if you choose an $x''$ between $x'$ and $c$, then $|f(x'') - f(c)| < \epsilon$, by the intermediate value property. Unfortunately, you can choose $x''$ such that $|x'' - c| < \delta$ and $x'' > y'$, and then nothing is known about $f(x'')$ other than $f(x'') > f(y)$.
The only way to fix this is to replicate the standard argument where it is shown that a monotone function has a left and right limit at every point -- a consequence of the completeness of the real numbers.
• Thank you very much RRL. Your answer goes perfectly with Daniel's, as your answer has helped me find the flaws in my proof that I need to fix and Daniel's answer has shown me what patching those holes and rewriting the proof looks like. If I could accept both I would! :) – Brevan Ellefsen Jul 31 '17 at 4:40
• @BrevanEllefsen: You're very welcome. Glad to make this active and help you crystallize your thinking. – RRL Jul 31 '17 at 5:38
• The last paragraph is the crux of the whole argument. +1 for the same. – Paramanand Singh Jul 31 '17 at 6:39
|
# Homework Help: Conceptual Question on the Force of a relaxed spring
1. Sep 10, 2010
### Kibbel
1. The problem statement, all variables and given/known data
When running a moving cart into a relaxed spring, will the force of the spring be constant on the cart?
2. Relevant equations
$$\Delta \vec{P} = \vec{F}_{net}\, \Delta t$$
or F = 1/2kx^2?
3. The attempt at a solution
my guess is that the more you compress or stretch a spring, the stronger the opposing force
2. Sep 10, 2010
### G01
Your conceptual explanation is correct. That is essentially Hooke's Law. The force caused by an ideal spring increases linearly with its compression (or expansion).
However, F does not equal 1/2kx^2; that is the potential energy. The force is F = -kx.
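A quick numerical sketch of this point (the spring constant, cart mass and approach speed below are made-up illustration values, not from the problem):
```python
import numpy as np

# Hypothetical illustration values (not given in the problem statement)
k = 50.0   # spring constant, N/m
m = 0.5    # cart mass, kg
v0 = 2.0   # cart speed at first contact with the relaxed spring, m/s

# Energy conservation gives the maximum compression x_max = v0*sqrt(m/k).
x_max = v0 * np.sqrt(m / k)

# Hooke's law: while compressed by x, the spring pushes back with |F| = k*x,
# so the force grows from 0 at first contact to k*x_max at full compression.
for x in np.linspace(0.0, x_max, 5):
    print(f"compression = {x:.3f} m  ->  |F| = {k * x:.2f} N")
```
So the force on the cart is not constant; it ramps up linearly as the spring compresses.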
|
# [Tex/LaTex] How to decrease separation between different plots? bar graph pgfplots
bar chart, nodes-near-coords, pgfplots
\usepackage{pgfplots}
\usepackage{amsmath}
\include{macros/style}
\include{macros/use_packages}
\usepackage{indentfirst}
\usepackage{graphicx}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{tikz}
\usepackage{array,booktabs,ragged2e}
\usepackage{listings}
\usepackage{color}
\section{Results}
\pgfplotstableread[col sep=&, row sep=\\]{
interval & carT & carD & carC & carG\\
10 Hz & 62680 & 125130 & 10000 & 100000\\
OSC & 62680 & 125130 & 100000 & 100000\\
Adaptive & 62680 & 125130 & 1000000 & 1000000\\
}\mydata
\begin{tikzpicture}
\hspace{-2cm}
\begin{axis}[
ybar,
bar width=17pt,
x=7cm,
ybar=1pt,
width=1\textwidth,
height=.5\textwidth,
legend style={at={(0.5,1)},
anchor=north,legend columns=-1},
symbolic x coords={10 Hz, OSC, Adaptive},
xtick=data,
nodes near coords,
nodes near coords align={horizontal},
ymin=0,ymax=400000,
ylabel={Number of Packets Sent},
]
\legend{Six Lanes, 12 Lanes, E.C. Row, i75}
\end{axis}
\end{tikzpicture}
For me the key to a good solution is to switch from absolute width (in pt) to relative width using the axis coordinate system. Doing this has the advantage that once you have fixed the bar widths and the "distance of the bars to the axis" (using enlarge x limits), you can scale your plot using width only.
For details please have a look at the comments in the code.
% used PGFPlots v1.15
\documentclass[border=5pt]{standalone}
\usepackage{pgfplots}
\usepackage{pgfplotstable}
\pgfplotsset{
% use this compat' level or higher to use the feature of specifying
% bar width' in axis units
compat=1.7,
}
\pgfplotstableread[col sep=&, row sep=\\]{
interval & carT & carD & carC & carG \\
10 Hz & 62680 & 125130 & 10000 & 100000 \\
OSC & 62680 & 125130 & 100000 & 100000 \\
Adaptive & 62680 & 125130 & 1000000 & 1000000 \\
}\mydata
\begin{document}
\begin{tikzpicture}
\begin{axis}[
% it doesn't make sense to specify x' AND width', because only
% one of them can be used. Here x' would make the race.
% x=7cm,
width=1\textwidth,
height=.5\textwidth,
ymin=0,
ymax=400000,
ylabel={Number of Packets Sent},
xtick=data,
% ---------------------------------------------------------------------
% replace symbolic x coords' by xticklabels from table' (step 1)
% (and do step 2 which is given at the \addplot's)
% symbolic x coords={10 Hz, OSC, Adaptive},
xticklabels from table={\mydata}{interval},
ybar,
% when not using symbolic coords we have the opportunity to specify
% bar width' in axis units ...
bar width=0.2,
ybar=1pt,
% ... and then we can also give an absolute value for enlarge x limits'
enlarge x limits={abs=0.5},
% Now you can change width' as you like and don't need to touch
% here, because everything is given "relative"
% ---------------------------------------------------------------------
nodes near coords,
legend style={
at={(0.5,1)},
anchor=north,
legend columns=-1,
},
]
% replaced x=interval' with x expr=\coordindex' (step 2)
% to have the same result as using symbolic x coords'
|
# Cleveref Links missing
I have a problem with cleveref. I'm using cleveref with the nameinlink option, but there is still no link. This is my header:
\documentclass[12pt,a4paper]{scrreprt}
\usepackage[latin1]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{enumerate}
\usepackage[english]{babel}
\usepackage{fontenc}
\usepackage{graphicx,epsfig,color}
\usepackage{geometry}
\usepackage{float}
\geometry{
left=2cm,
right=2cm,
top=2cm,
bottom=3cm,
bindingoffset=5mm
}
\usepackage{dsfont}
\DeclareGraphicsRule{.pdftex}{pdf}{.pdftex}{}
\usepackage{setspace}
\usepackage{ifpdf}
\usepackage{caption}
And I'm using:
\Cref{cor:simplification}
I don't get what I'm doing wrong.
-
Welcome to TeX.SX! cleveref doesn't create clickable links, if that's what you mean. Add \usepackage{hyperref} before \usepackage{cleveref}. – Torbjørn T. Mar 10 '14 at 19:27
Thank you, that worked. – neunfuenf Mar 10 '14 at 20:10
|
Question
Tom and his friend are driving to Louisville, Kentucky, next week. They figure that the trip will be about 432 miles round trip. If his car gets 31 mpg and the current cost of gasoline is $2.79, how much will they spend on gas?
$51.61
$44.46
$38.88
$31.00
## Answers
1. Answer: The answer is $38.88.
2. ### Answer: \$38.88 (choice C)
=====================================================
Explanation:
Let x be the number of gallons used up. It’s some positive real number.
We’ll use the formula
miles = (mpg)*(number of gallons)
to help figure out how much fuel is used. The trip is 432 miles, and the mpg rating is 31
So,
miles = (mpg)*(number of gallons)
432 = 31*x
31x = 432
x = 432/31
x = 13.9354838709678 which is approximate.
Multiply that with the 2.79 to get the final answer.
2.79*x = 2.79*13.9354838709678 = 38.8800000000001
The 1 at the very end is probably due to some rounding error, but then again it may be accurate. Either way, that value rounds to 38.88
Side note: The 31 mpg rating is very likely an average value and not the same throughout the trip. This is due to the car changing speeds at different points of the journey. In reality, the true answer will vary slightly from the 38.88 figure we just calculated (but it should be fairly close).
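The same arithmetic as a short script, just mirroring the steps above:
```python
miles = 432    # round-trip distance
mpg = 31       # fuel economy
price = 2.79   # dollars per gallon of gasoline

gallons = miles / mpg    # about 13.9355 gallons
cost = gallons * price   # about 38.88 dollars
print(round(gallons, 4), round(cost, 2))
```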
|
# TARGETING ESTIMATION OF CCC-GARCH MODELS WITH INFINITE FOURTH MOMENTS
Rasmus Pedersen
Econometric Theory, 2016, vol. 32, issue 2, 498-531
Abstract: As an alternative to quasi-maximum likelihood, targeting estimation is a much applied estimation method for univariate and multivariate GARCH models. In terms of variance targeting estimation, recent research has pointed out that at least finite fourth moments of the data generating process is required, if one wants to perform inference in GARCH models by relying on asymptotic normality of the estimator. Such moment conditions may not be satisfied in practice for financial returns, highlighting a potential drawback of variance targeting estimation. In this paper, we consider the large-sample properties of the variance targeting estimator for the multivariate extended constant conditional correlation GARCH model when the distribution of the data generating process has infinite fourth moments. Using nonstandard limit theory, we derive new results for the estimator stating that, under suitable conditions, its limiting distribution is multivariate stable. The rate of consistency of the estimator is slower than $\sqrt T$ and depends on the tail shape of the data generating process. A simulation study illustrates the derived properties of the targeting estimator.
Date: 2016
Related works:
Working Paper: Targeting estimation of CCC-Garch models with infinite fourth moments (2014)
|
# Using Weekly Earnings Opportunity to Measure the Prize Money Gender Gap
Prize money has been the primary yardstick for measuring gender inequality in professional tennis. Most in-depth studies of prize money differences have focused on earnings of top players or prize money commitments by tournament. In this post, I take a different view on prize money differences by focusing on earning opportunities in each week of the tour calendar.
You can find a variety of analyses on the Web about prize money differences between the men’s and women’s tours. Many of these have looked at the career earnings of top male and female players. Others have looked at the prize money allocated on a tournament-by-tournament basis.
When it comes to the inequality felt by any individual player, both of these approaches have some drawbacks. Most players are not the top earners in the game, so the comparisons of the highest earners is of limited relevance for them. Second, tournament-by-tournament comparisons do not account for draw size differences in singles, qualifying, or doubles draws; nor do they consider where tour events overlap on the calendar, forcing a competitor to choose at most one of these to enter.
A measure that gets closer to a player’s experience of prize inequalities considers their likely earnings opportunity in any competition week.
Take this week, for example. The China Open is the only WTA event running this week. The women's event has a singles draw of 60, a qualifying draw of 32, and a doubles draw of 28 teams. That means there will be 84 players splitting the singles money and 56 splitting the doubles money. With a pot of 8.2 million US dollars, those draw sizes translate into expected earnings of 71 thousand per singles player.
On the men's side, they have two ATP 500 events to choose from this week: the China Open and the Rakuten Japan Open. Both are 32-player singles draw events with qualifying draws of 16 and doubles draws of 16. The respective prize money is $4.7 million and $1.9 million. Since players can only compete in one event per week, those numbers mean ATP earnings opportunities are an expected 46 thousand per singles player, which is lower than the women's expected earnings this week.
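The per-capita calculation itself is simple division. A small sketch of the idea (the singles-pot figures are placeholders, since the post does not state how each purse splits between singles and doubles, and the player count here simply adds the main and qualifying draw sizes):
```python
# Weekly earnings opportunity = money allocated to a draw / players contesting it.
def expected_earnings(singles_pot, main_draw, qualifying_draw):
    return singles_pot / (main_draw + qualifying_draw)

# Hypothetical singles pots, for illustration only
wta_week = expected_earnings(6.0e6, main_draw=60, qualifying_draw=32)
atp_options = [
    expected_earnings(3.5e6, main_draw=32, qualifying_draw=16),  # Beijing (placeholder pot)
    expected_earnings(1.4e6, main_draw=32, qualifying_draw=16),  # Tokyo (placeholder pot)
]
# A player can enter at most one event per week, so the weekly opportunity is the best option.
print(f"WTA: {wta_week:,.0f}  ATP (best option): {max(atp_options):,.0f}")
```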
When we look at the week-by-week picture in expected earnings, we see that weeks like this week where the WTA earnings potential exceeds the ATP are rare. In the majority of weeks, the chart below shows that the ATP (in blue) eclipses the WTA (in purple) earnings. In fact, next week is a more stereotypical situation. The men will be playing at the Shanghai masters with 65K in expected earnings, while the women will have 3 International events to choose from at only 5K in expected earnings per player.
We can see the earnings gap more clearly by taking the difference in expected earnings in each week. This is charted below and reveals that there are 27 of 37 weeks of competition where an ATP player is expected to earn more than a WTA player. The difference in those weeks is a median of 11K per player.
Earnings are lower overall in doubles, but the gender gap follows the same trends as in singles. There are 26 weeks where male doubles players are expected to earn more than WTA doubles players at tour events, with men earning a median of 3.5K more per week in doubles play.
These comparisons of per capita expected earnings echo the major conclusions of early studies on gender inequalities in tour prize money: that payouts are still far from equal for the WTA and ATP, with most differences happening at the levels below the Grand Slams. An advantage of the per capita numbers is that they can give us a better sense of how those differences are actually experienced by individual players. And, from this point-of-view, the week of the China Open must feel like a rare boon in the calendar for women’s players.
|
Runge Function
1. Sep 13, 2007
leon1127
My teacher asked a very interesting question. Given the Runge function 1/(1+x^2), I interpolate it on uniformly spaced points in the interval -1 to 1 by p_n(x).
How do the poles -i and i contribute to the oscillation of p_n(x)? I never thought poles would come into play.
2. Sep 13, 2007
HallsofIvy
Staff Emeritus
What exactly do you mean by "interpolate it ... by p_n(x)". A polynomial at n points?
Slightly different but you might think about this: The Taylor series for 1/(1+x^2), around x=0, has "radius of convergence" equal to 1 precisely because it has poles at i and -i. In the complex plane, the radius of convergence really is a "radius". It can't go beyond i or -i, both at distance 1 from 0, because they are poles.
3. Sep 13, 2007
leon1127
So I have x0 = {-1, -0.9, -0.8, ..., 0.9, 1}, i.e. equidistant points. Then by Lagrange interpolation there exists a polynomial, of degree <= (number of points in x0) - 1, that interpolates the ordered pairs (x0, f(x0)).
More specifically, my teacher showed the comparison between runge(x) and exp(-10x^2) on the same set of points. The polynomial has very large oscillation near the endpoints of the interval. I can see how the Taylor series diverges, but now we are interpolating n+1 points instead of using its derivatives... so I don't know.
Last edited: Sep 13, 2007
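A quick numerical experiment along these lines (a sketch, not from the thread): it interpolates 1/(1+x^2) at equidistant nodes on [-1,1], and also 1/(1+25x^2), Runge's classic example, whose poles at ±i/5 sit much closer to the interval than ±i do. This replaces the exp(-10x^2) comparison from the lecture. The worst error shows up near the endpoints, and moving the poles closer to the interval is what makes that endpoint error grow with n.
```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

f = lambda x: 1.0 / (1.0 + x**2)         # poles at +/- i
g = lambda x: 1.0 / (1.0 + 25.0 * x**2)  # poles at +/- i/5 (Runge's classic example)

xx = np.linspace(-1, 1, 2001)            # fine grid for measuring the error
for n in (11, 21, 41):                   # number of equidistant interpolation nodes
    nodes = np.linspace(-1, 1, n)
    err_f = np.max(np.abs(BarycentricInterpolator(nodes, f(nodes))(xx) - f(xx)))
    err_g = np.max(np.abs(BarycentricInterpolator(nodes, g(nodes))(xx) - g(xx)))
    print(f"n = {n:2d}: max error for 1/(1+x^2) = {err_f:.2e}, for 1/(1+25x^2) = {err_g:.2e}")
```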
|
# A circular park of radius 20 m is situated in a colony. Three boys Ankur, Syed and David are sitting at equal distance on its boundary each having a toy telephone in his hands to talk each other. Find the Length of the string of each phone.
Let Ankur, Syed and David be standing at the points P, Q and R.
Let PQ = QR = PR = x
$$\therefore$$ $$\triangle{PQR}$$ is an equilateral triangle.
Draw altitudes PC, QD and RN from the vertices to the opposite sides of the triangle; these altitudes intersect at the centre M of the circle.
As $$\triangle{PQR}$$ is equilateral, these altitudes bisect the sides they meet.
In $$\triangle{PQC}$$,
$${PQ}^2 = {PC}^2 + {QC}^2$$
(By Pythagoras theorem)
$$\Rightarrow$$ $${x}^2 = {PC}^2 + ( \frac{x}{2} )^2$$
$$\Rightarrow$$$${PC}^2 = {x}^2 - (\frac{x}{2} )^2$$
$$\Rightarrow$$ $${PC}^2 = {x}^2 - \frac{x^2}{4} = \frac{3x^2}{4}$$
(Since, QC = $$\frac{1}{2}$$ QR = $$\frac{x}{2}$$ )
$$\therefore$$ PC = $$\frac{\sqrt{3}x}{2}$$
Now, MC = PC - PM = $$\frac{\sqrt{3}x}{2}$$ - 20
Now, in $$\triangle{QCM}$$,
$${QM}^2 = {QC}^2 + {MC}^2$$
(By Pythagoras theorem)
$$\Rightarrow$$ $$20^2 = (\frac{x}{2} )^2 + (\frac{\sqrt{3}x}{2} - 20)^2$$
$$\Rightarrow$$ $$400 = \frac{x^2}{4} +( \frac{\sqrt{3}x}{2})^2 - 20\sqrt{3}x + 400$$
$$\Rightarrow$$ $$0 = {x}^2 - 20\sqrt{3}x$$
$$\Rightarrow$$ $${x}^2 = 20\sqrt{3}x$$
$$\therefore$$ $$x = 20\sqrt{3}$$
Hence, PQ = QR = PR = $$20\sqrt{3}$$m.
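A short numeric check of this answer (just a sketch): place two of the three points 120 degrees apart on a circle of radius 20 m and measure the chord between them.
```python
import math

R = 20.0
# Two of the boys sit 120 degrees apart on the boundary.
P = (R, 0.0)
Q = (R * math.cos(2 * math.pi / 3), R * math.sin(2 * math.pi / 3))
print(math.dist(P, Q), 20 * math.sqrt(3))   # both are about 34.641 m
```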
|
Poster
Local Linear Convergence of Forward--Backward under Partial Smoothness
Jingwei Liang · Jalal Fadili · Gabriel Peyré
Mon Dec 08 04:00 PM -- 08:59 PM (PST) @ Level 2, room 210D
In this paper, we consider the Forward--Backward proximal splitting algorithm to minimize the sum of two proper closed convex functions, one having a Lipschitz continuous gradient and the other being partly smooth relative to an active manifold $\mathcal{M}$. We propose a generic framework in which we show that the Forward--Backward (i) correctly identifies the active manifold $\mathcal{M}$ in a finite number of iterations, and then (ii) enters a local linear convergence regime that we characterize precisely. This gives a grounded and unified explanation of the typical behaviour that has been observed numerically for many problems encompassed in our framework, including the Lasso, the group Lasso, the fused Lasso and the nuclear norm regularization to name a few. These results may have numerous applications, including signal/image processing, sparse recovery and machine learning.
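As a concrete instance of the algorithm studied here, below is a minimal, generic forward-backward (ISTA) sketch for the Lasso, one of the problems the abstract lists. This is textbook illustration code, not the authors' implementation; for the Lasso the active manifold is the set of vectors with a fixed support, and finite identification corresponds to the iterates' support freezing after finitely many steps.
```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: the "backward" step for the Lasso.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward_lasso(A, b, lam, n_iter=500):
    # Minimise 0.5*||Ax - b||^2 + lam*||x||_1 by forward-backward splitting.
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, with L the gradient's Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                         # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
    return x

# Tiny synthetic example with made-up data, purely illustrative.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(forward_backward_lasso(A, b, lam=0.1), 2))
```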
|
## Elementary Technical Mathematics
The lawn consists of 3 rectangular areas. Determine the width and length of each to find the area in square feet.
The top rectangle measures 60 ft x 100 ft. $A=60\times 100=6000\ ft^2$
The middle rectangle measures 120 ft (60 ft + 60 ft) x 105 ft. $A=120\times 105=12600\ ft^2$
The lower rectangle measures 240 ft (120 ft + 60 ft + 60 ft) x 55 ft. $A=240\times 55=13200\ ft^2$
Sum to find the total area of the lawn. $6000+12600+13200=31800\ ft^2$
The problem requests the area be expressed in tsf (thousand square feet). Divide to convert. $31800\div1000=31.8\ tsf$
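The same sums as a quick check:
```python
areas = [60 * 100, 120 * 105, 240 * 55]   # the three rectangles, in square feet
total = sum(areas)
print(areas, total, total / 1000)          # [6000, 12600, 13200] 31800 31.8 tsf
```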
|
EDIT:So some motivation of what I'm doing: I have a large set of polynomials that are related to the path of a point through space over time. I want to find polynomials that intersect sometime in the "near" future, but I don't want to have to do all $\frac{n*(n-1)}{2}$ polynomial-to-polynomial evaluations. So I'm trying to build a "broad phase" that only offers up pairs of polynomials to be solved in a "narrow phase" (ie: actual root finding) if they're "pretty close" to colliding. Whatever the algorithm for the broad phase is, it can't involve iterating over all the polynomial pairs or it defeats the point.
One sort of square-peg-round-hole solution would be to use something like bounding boxes around the polynomials and use a spatial partitioning tree to find where boxes overlap, and then do the root finding on those. But it doesn't handle cases very well where the time interval of interest is quite large, or especially if one of the interval ends is infinity or negative infinity.
So I wanted to explore it from another direction and see if I can come up with something that works better.
# Bounding the roots of the sum of two polynomials
Suppose I have two polynomials with real coefficients. Suppose I can perform any sort of preprocessing on them I want. I want to be able to pre-emptively say that the sum of the polynomials doesn't have any roots inside a given interval without doing any explicit calculations on the sum itself. False positives (that is, saying there aren't any roots when there are some) would be deal-breaking, but false negatives (reporting there might be roots when there aren't) would be acceptable.
Or to put it more explicitly:
All functions $p_x(t)$ have a form like:
$p_x(t) = a_{n,x} * t^n + a_{n-1, x} * t^{n-1} + ... + a_{1,x} * t + a_{0,x}$
We can define $p_3(t) = p_1(t) + p_2(t)$
I want to determine if $p_3(t)$ might have any roots inside a given interval $[t_{min}, t_{max}]$. But I want to do it only using properties of $p_1(t)$ and $p_2(t)$, their roots, etc. and not anything that would need me to calculate anything for $p_3(t)$, its roots, etc.
Any ideas on how to approach the problem?
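One possible broad-phase test in this spirit (my own sketch, not a standard library routine): cache an interval-arithmetic range for each polynomial over [t_min, t_max]; a pair can be skipped whenever the sum of the two ranges excludes zero. Because interval evaluation always encloses the true range, the test never wrongly reports "no roots" (the deal-breaking false positive above), only the harmless "maybe roots". Infinite interval endpoints would need separate handling, e.g. via the signs of the leading coefficients.
```python
def interval_eval(coeffs, lo, hi):
    """Enclose the range of a polynomial over [lo, hi] with an interval Horner scheme.
    coeffs are ordered highest degree first: [a_n, ..., a_1, a_0]."""
    lo_v = hi_v = 0.0
    for a in coeffs:
        products = (lo_v * lo, lo_v * hi, hi_v * lo, hi_v * hi)  # interval * [lo, hi]
        lo_v, hi_v = min(products) + a, max(products) + a
    return lo_v, hi_v

def may_have_root_of_sum(c1, c2, t_min, t_max):
    """Broad phase: False means p1 + p2 provably has no root on [t_min, t_max];
    True means the pair should go to the narrow phase (actual root finding)."""
    lo1, hi1 = interval_eval(c1, t_min, t_max)  # these per-polynomial ranges can be cached
    lo2, hi2 = interval_eval(c2, t_min, t_max)
    return not (lo1 + lo2 > 0 or hi1 + hi2 < 0)

# Example: p1 = t^2 + 1 and p2 = t + 3, so p1 + p2 > 0 on [0, 2] and the pair is skipped.
print(may_have_root_of_sum([1, 0, 1], [1, 3], 0.0, 2.0))   # False
```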
|
# Characteristic polynomial and diagonalizability of a matrix.
Let $$A \in M_{n\times n}(\mathbb{C})$$ with characteristic polynomial $$p(x) = cx^a \prod_{i=1}^{k}(\lambda_i - x)$$ where $$\lambda_i \neq 0$$ for all $$i$$ and $$a \in \mathbb{Z}_{>0}$$. Show that if $$\dim(\ker(A)) + k = n$$, then A is diagonalizable.
My proof:
Each linear factor in $$p(x)$$ has algebraic multiplicity 1, and $$1 \leq \dim(E_x) \leq$$ the algebraic multiplicity of $$x$$.
So $$1 \leq \dim(E_x) \leq 1 \implies \dim(E_x) = 1$$.
Therefore the algebraic multiplicity is equal to the geometric multiplicity for each eigenvalue, and A is diagonalizable.
Is this proof valid? It seems too obvious and I didn't even use $$\dim(\ker(A)) +k=n$$.
• The statement is false, unless you assume that $\lambda_i\ne\lambda_j$, for $i\ne j$. About your question, $0$ is an eigenvalue as soon as $a>0$, and you need to take care of it, too. – egreg Apr 6 at 7:46
• I'm finding your first point hard to understand, would you mind elaborating? The question guidelines imply that $λ_i \not= 0$, so would that also solve the second problem? – Pivot Apr 6 at 7:56
The statement is false, unless you make some further assumption. For every matrix, the characteristic polynomial has the stated form, but not every matrix is diagonalizable. Simple example: the characteristic polynomial of $$\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$ is $$p(x)=x\prod_{i=1}^2(\lambda_i-x)$$, with $$\lambda_1=\lambda_2=1$$. This matrix is not diagonalizable.
The assumption under which the statement is true is that $$\lambda_i\ne\lambda_j$$, for $$i\ne j$$.
In this case your argument is good: each nonzero eigenvalue has algebraic multiplicity $$1$$. However, you also need to take care of the $$0$$ eigenvalue, when $$a>0$$.
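A quick numerical illustration of the counterexample above (a sketch; it compares dimensions via matrix ranks):
```python
import numpy as np

A = np.array([[1, 1, 0],
              [0, 1, 0],
              [0, 0, 0]], dtype=float)

# Characteristic polynomial is x*(1-x)^2, so k = 2 (with lambda_1 = lambda_2 = 1)
# and dim ker(A) = 1, hence dim ker(A) + k = 3 = n, yet A is not diagonalizable:
print(np.linalg.eigvals(A))                       # eigenvalues 1, 1, 0 (up to rounding)
print(3 - np.linalg.matrix_rank(A))               # dim ker(A) = 1
print(3 - np.linalg.matrix_rank(A - np.eye(3)))   # geometric multiplicity of 1 is only 1
```
Since the eigenvalue 1 has algebraic multiplicity 2 but geometric multiplicity 1, this shows why the statement needs the $$\lambda_i$$ to be pairwise distinct.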
|
# financial maths greeks
• May 28th 2011, 05:05 PM
cooltowns
financial maths greeks
Hi
I have attached my question due to not being very competent with latex.
I need help with part i) and part iii). Attachment 21610
I have obtained the greek for $\frac{\partial P}{\partial\sigma}=S\sqrt{T-t}\; N'(d1)$
any help is appreciated
thanks
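Not an answer to parts i) and iii) (the attachment is not reproduced here), but a small numerical check of the vega expression quoted above, $\frac{\partial P}{\partial\sigma}=S\sqrt{T-t}\,N'(d_1)$, against a central finite difference of the Black-Scholes price. The parameter values are made up for illustration; vega is the same for a call and a put, so the check uses a call.
```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def bs_call(S, K, r, sigma, tau):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

# made-up illustrative parameters; tau = T - t
S, K, r, sigma, tau = 100.0, 95.0, 0.02, 0.25, 0.5

d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
vega_formula = S * math.sqrt(tau) * norm_pdf(d1)

h = 1e-5
vega_fd = (bs_call(S, K, r, sigma + h, tau) - bs_call(S, K, r, sigma - h, tau)) / (2 * h)
print(vega_formula, vega_fd)   # the two values agree to several decimal places
```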
• May 29th 2011, 09:36 AM
Wilmer
|
dev. The two groups that are being compared must be unpaired and unrelated (i.e., independent). Stack Exchange Network. Probabilities in a binomial setting can be calculated in a straightforward way by using the formula for a binomial coefficient. (1997) Bootstrap Methods and Their Application. If $$p \leq \alpha$$ reject the null hypothesis. On January 18, 2020January 18, 2020 By admin_admin. There are a variety of exact algorithms that are more than good enough for general use, and these are what you get when you use the binomial RNGs from R, SciPy, etc. wilson: Wilson Score interval. beta: Clopper-Pearson interval based on Beta distribution. In order to use the normal approximation method, the assumption is that both n p 0 ≥ 10 and n (1 − p 0) ≥ 10. Equation to compute the Binomial CI using the Normal approximation method is given below Sum of many independent 0/1 components with probabilities equal p (with n large enough such that npq ≥ 3), then the binomial number of success in n trials can be approximated by the Normal distribution with mean µ = np and standard deviation q np(1−p). P-value for the normal approximation method Minitab uses a normal approximation to the binomial distribution to calculate the p-value for samples that are larger than 50 (n > 50). Lecture Notes 3 Approximation Methods Inthischapter,wedealwithaveryimportantproblemthatwewillencounter in a wide variety of economic problems: approximation of functions. By using regression analysis and after rounding the coefficient to one decimal place, the approximation obtained is () 1 .2 1 .3 5 1 0 .5 Φ z = − e − z. Not every binomial distribution is the same. BruceET BruceET. normal approximation: The process of using the normal curve to estimate the shape of the distribution of a data set. Check assumptions and write hypotheses. Definition and Properties. According to the Central Limit Theorem, the the sampling distribution of the sample means becomes approximately normal if the sample size is large enough. In order to use the normal approximation, we consider both np and n( 1 - p ). The z test statistic tells us how far our sample proportion is from the hypothesized population proportion in standard error units. The normal approximation of the binomial distribution works when n is large enough and p and q are not close to zero. Excepturi aliquam in iure, repellat, fugiat illum voluptate repellendus blanditiis veritatis ducimus ad ipsa quisquam, commodi vel necessitatibus, harum quos a dignissimos. If X is the number of heads, then we want to find the value: P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3) + P(X = 4) + P(X = 5). This result is known as the Delta Method. But what do we mean by n being “large enough”? To improve our estimate, it is appropriate to introduce a continuity correction factor. (10.86) is structured such that the nonlinear terms in the matrix A (c) are evaluated using the current approximation, c ^ (k), so that: Minitab Express will not check assumptions for you. $$p_{0}$$ = hypothesize population proportion This differs from the actual probability but is within 0.8%. We will utilize a normal distribution with mean of np = 20(0.5) = 10 and a standard deviation of (20(0.5)(0.5))0.5 = 2.236. This is used because a normal distribution is continuous whereas the binomial distribution is discrete. Therefore b D5 3t is the best line—it comes closest to the three points. 
[3] Of the approximations listed above, Wilson score interval methods (with or without continuity correction) have been shown to be the most accurate and the most robust, [2] [3] [7] though some prefer the Agresti–Coull approach for larger sample sizes. Second, even for high dimensional parameter spaces, it may also work well when computing the marginal distribution across one of the components of $\theta$. The normal Approximation Breaks down on small intervals. Use a normal approximation to find the probability of the indicated number of voters. If both of these numbers are greater than or equal to 10, then we are justified in using the normal approximation. First, we will formulate the solution for the scattered field using the Born approximation. Odit molestiae mollitia laudantium assumenda nam eaque, excepturi, soluta, perspiciatis cupiditate sapiente, adipisci quaerat odio voluptates consectetur nulla eveniet iure vitae quibusdam? Also, like a normal distribution, the binomial distribution is supposed to be symmetric. I know of no reason to use the normal approximation to the binomial distribution in practice. The confidence interval of the mean of a measurement variable is commonly estimated on the assumption that the statistic follows a normal distribution, and that the variance is therefore independent of the mean. To determine the probability that X is less than or equal to 5 we need to find the z-score for 5 in the normal distribution that we are using. Notation. when these approximation are good? 1 $\begingroup$ Well, if you wanted to know, for example, the mean and std. Formula. Cambridge University Press. To use the normal approximation method a minimum of 10 successes and 10 failures in each group are necessary (i.e., $$n p \geq 10$$ and $$n (1-p) \geq 10$$). Normal approximation method is easy to compute and use of normal approximation method is supported by the central limit theorem and with sufficiently large sample size ‘n’, the Normal distribution is a good estimate of the Binomial distribution. If $$p>\alpha$$ fail to reject the null hypothesis. This shows that we can use the normal approximation in this case. The normal approximation to the Poisson-binomial distribution. To check to see if the normal approximation should be used, we need to look at the value of p, which is the probability of success, and n, which is the number of observations of our binomial variable. Step 2: Figure out if you can use the normal approximation to the binomial. If n * p and n * q are greater than 5, then you can use the approximation: n * p = 310 and n * q = 190. As we have seen, we can use these Taylor series approximations to estimate the mean and variance estimators. These issues can be sidestepped by instead using a normal distribution to approximate a binomial distribution. From the central limit theorem, one would expect that it occurs in many different large sample problems. When using the normal approximation method we will be using a z test statistic. Use Minitab Express and remember to copy+paste all relevant output and to … In Minitab Express, the exact method is the default method. In this case, assume that 197 eligible voters aged 18-24 are randomly selected. binom_test: experimental, inversion of binom_test. 
The method of least squares is a standard approach in regression analysis to approximate the solution of overdetermined systems (sets of equations in which there are more equations than unknowns) by minimizing the sum of the squares of the residuals made in the results of every single equation.

In diffraction imaging, approximation methods called Born and Rytov provide the solution for the scattered field (Iwata & Nagata, 1974; Kaveh, Soumekh, & Muller, 1982); they serve as the basis for the Fourier diffraction theorem.

The normal approximation to the binomial distribution. The normal distribution is used as an approximation to the binomial distribution: if $X \sim B(n, p)$, $n$ is large, and $p$ is not too close to 0 or 1, then $X$ is approximately $N(np,\, np(1-p))$. A common rule of thumb is to require $np > 5$ and $n(1-p) > 5$ (many texts prefer 10); this rule is guided by statistical practice, and the larger the values of $np$ and $n(1-p)$, the better the approximation. The approximation can always be computed, but when these conditions are not met it may not be very good.

In theory, binomial probabilities are an easy calculation; in practice they can become tedious or even computationally impossible to evaluate directly, and the normal approximation makes the task much easier. Suppose, for example, we wanted to compute the probability of observing 69, 70, or 71 smokers in a sample of 400 when $p = 0.20$. Using the binomial formula requires three separate terms, and with such a large sample we might be tempted to apply the normal approximation to the whole range 69 to 71.

As a smaller worked comparison, consider tossing 20 coins and asking for the probability that five or fewer are heads. Adding the six binomial probabilities shows that the exact probability is 2.0695%. For the normal approximation, the mean is $np = 10$ and the standard deviation is $\sqrt{np(1-p)} = 2.236$. Without a continuity correction, $z = (5 - 10)/2.236 = -2.236$, and a table of $z$-scores gives 1.267%, which does not match the exact probability but is within about 0.8 percentage points of it. With the continuity correction, $z = (5.5 - 10)/2.236 = -2.013$, which gives a value much closer to the exact one. A worked code comparison follows below.

The same approximation underlies the usual one-proportion $z$ procedures. Recall that the $z$ distribution is a normal distribution with mean 0 and standard deviation 1, that $p_0$ denotes the hypothesized population proportion in the null hypothesis, and that $p$-values are also symbolized by $p$. If $np_0 \ge 10$ and $n(1-p_0) \ge 10$, we are justified in using the normal approximation and the test statistic
$$z = \frac{\hat{p} - p_0}{\sqrt{\dfrac{p_0(1-p_0)}{n}}},$$
which follows the basic structure of a test statistic: (sample statistic $-$ null parameter) / standard error. The $p$-value is found by mapping the test statistic onto the $z$ distribution; for a one-tailed (right- or left-tailed) test we look up the area of the sampling distribution beyond the test statistic, which can be done with software such as Minitab Express rather than by hand. Confidence intervals for a mean or a proportion can likewise be built with the normal approximation method, and software typically offers several interval methods (for example an asymptotic 'normal' interval or the Agresti-Coull interval). Exact methods are usually more reliable than the normal and chi-square approximations, which are only valid asymptotically, so it is possible to reach different conclusions between a normal-approximation interval and an exact one; in particular, when the distribution of the quantity of interest (such as a 'change' score) is skewed, a normal-approximation confidence interval should not be used.

Beyond these elementary uses there are refinements and theoretical tools. A function of the form $\Phi(z) \approx 1 - 0.5\,e^{-Az^{b}}$ can be used as an approximation to the standard normal cumulative function, and the normal power (NP) approximation gives a simple yet accurate approximation to a distribution function $F(x)$. The delta method uses Taylor series expansions (together with Slutsky's theorem) to approximate the mean and variance of transformed estimators. On the theoretical side, Stein's method gives explicit bounds, for example in Wasserstein distance, on the distance of a distribution from its normal approximation, and strong Gaussian approximation and asymptotic expansions address related questions of when the normal approximation is good and when it is bad.
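To make the coin-tossing comparison concrete, here is a small, self-contained C++ sketch (my own illustration, not taken from any of the sources above) that computes the exact binomial probability $P(X \le 5)$ for $n = 20$, $p = 0.5$ and compares it with the normal approximation, with and without the continuity correction; the standard normal CDF is evaluated through std::erfc.

#include <cmath>
#include <cstdio>

// Standard normal CDF, written in terms of the complementary error function.
double phi(double z) { return 0.5 * std::erfc(-z / std::sqrt(2.0)); }

int main() {
    const int n = 20;
    const double p = 0.5;

    // Exact binomial tail P(X <= 5): sum the six binomial probabilities.
    double exact = 0.0, coeff = 1.0;               // coeff holds C(n, k)
    for (int k = 0; k <= 5; ++k) {
        if (k > 0) coeff = coeff * (n - k + 1) / k;
        exact += coeff * std::pow(p, k) * std::pow(1 - p, n - k);
    }

    const double mu = n * p;                         // 10
    const double sigma = std::sqrt(n * p * (1 - p)); // 2.236...
    const double plain = phi((5.0 - mu) / sigma);      // no continuity correction
    const double corrected = phi((5.5 - mu) / sigma);  // with continuity correction

    std::printf("exact binomial        : %.5f\n", exact);     // ~0.02069
    std::printf("normal, uncorrected   : %.5f\n", plain);     // ~0.01267
    std::printf("normal, with 0.5 shift: %.5f\n", corrected); // ~0.02209
    return 0;
}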
|
### Gassa's blog
By Gassa, history, 5 months ago, translation
Hi all!
The main phase of VRt Contest 2019 has ended. In the preliminary standings, five contestants managed to obtain a score of more than 60,000 points. The fight continued to the last day. Who will emerge as the winner after the final testing? We will know after it ends. (Update: results in a separate comment.)
I invite the contestants to share their solution ideas in the comments below, or in separate posts if their writeup is too large for a comment.
• +75
» 5 months ago, # | ← Rev. 2 → +47 Thanks for the contest! I think a lot of people came up with my solution, because it is simple. But anyway, I'll briefly describe it. The basic idea: create groups of workers, and at each step choose the next location for the group to work at such that it requires the same number of people as there are in the group. Let's write down all locations (except the base) in one vector. In a cycle from 1 to 7 (the number of workers in the group) we create an array of strings — the sequence of actions of this group. We start from the base. While there are unfulfilled places, we run the function that searches for the next place (more about it a bit later), go there and update all values (group's coordinates, time, answer, etc.). At the same time, we calculate the profit that this group has brought. If at some point it has nowhere to go, we proceed to the creation of a new group with the same number of workers. We also look at the calculated profit: if it is negative, then we do not output the actions of the group. Selecting the next location. The most obvious solution is to act greedily: for all possible places of work, calculate the time when the group would arrive there, and from all such times choose the smallest. Result = about 55k points. It can be improved by taking into account not only the arrival time when choosing the next job. I also considered the position offset relative to the base (the distance from the new location to the base minus the distance from the previous one to the base), as well as the length of the path already covered by the group and the time by which work at this location must be completed (h in the input). I shoved it all into one formula along with the arrival time. As a result, the task was reduced to choosing the coefficients in front of these variables in the final formula. Since the coefficients were different for different input data, I had to iterate over them in a loop and look at which values give the greatest answer. My final formula: arrive_time - (delta * (len - magic2) - one[i].h * magic3 / 3) / magic delta: the position offset relative to the base; len: length of the path; one[i].h: time of completion of work at this location; magic, magic2, magic3: the coefficients ^_^ For more details: My submission. I commented everything in great detail there!
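A rough, compilable C++ sketch of this kind of greedy selection loop; the data structures and the magic constants here are illustrative placeholders, not the actual submission:

#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <vector>

// Illustrative types only -- not the data structures of the real solution.
struct Pt { int x, y; };
struct Location { Pt pos; int l, h, workersNeeded; bool done; };
struct Group { Pt pos; double time, pathLen; int size; };

int dist(Pt a, Pt b) { return std::abs(a.x - b.x) + std::abs(a.y - b.y); }

// Choose the next location for a group of a fixed size: among locations that
// need exactly group-size workers, minimize an adjusted score built from the
// arrival time, the offset relative to the base, the path length and h.
int pickNext(const Group& g, const std::vector<Location>& loc, Pt base,
             double magic, double magic2, double magic3) {
    int best = -1;
    double bestScore = 1e18;
    for (int i = 0; i < (int)loc.size(); ++i) {
        if (loc[i].done || loc[i].workersNeeded != g.size) continue;
        double arrive = std::max(g.time + dist(g.pos, loc[i].pos),
                                 (double)loc[i].l);
        double delta = dist(loc[i].pos, base) - dist(g.pos, base);
        double score = arrive
                     - (delta * (g.pathLen - magic2) - loc[i].h * magic3 / 3) / magic;
        if (score < bestScore) { bestScore = score; best = i; }
    }
    return best;  // -1 means the group has nowhere left to go
}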
» 5 months ago, # | +25 First of all, thanks for the contest, I really enjoyed it. I used a very simple greedy solution. The idea is to divide the workers into groups of size $7$. I find the path for the first worker; while choosing the next location I always take the location that starts first, taking into consideration the start time of this job and the current time + the time to go to this location. After the first worker of the group is finished there will be some gaps in the locations still not done; I try to fill these gaps with jobs that need at most $6$ workers, and so on until all these jobs are done for this group. This way I guarantee that all the jobs are worked on simultaneously. Every time after I use a new worker I calculate the profit to check if using this worker gives me extra profit or not. The idea can be improved a lot, but unfortunately I was busy all week with some other stuff. You can check the code
• » » 5 months ago, # ^ | ← Rev. 4 → +34 If somebody is not convinced enough, I have these examples coming to mind: yandex.algorithm (OK, this one has a plot twist, so it isn't so bad), marathon match, hashcode (worst possible), bubblecup (a little bit different, but still delivering packages on a plane), more hashcode (how to not hate this contest?). I'm sure that there are many more such tasks.
• » » » 5 months ago, # ^ | +64 Well, although I agree that the general idea of the problem is similar to the other problems you listed, IMHO there were details, as pointed out by gassa, that made it very interesting.In fact, I really liked this contest, found the problem statement very clear and parameters seemed wisely chosen.Thanks gassa and everyone else involved! Looking forward the next marathon!
• » » » » 5 months ago, # ^ | +34 Ok, if this guy says so, the contest was good. :D
• » » 5 months ago, # ^ | +97 Thanks for the comment.I agree the setting is of the standard ones. However, there's a twist which also made its way to the problem title: the workers have to collaborate at exactly the same time to complete the jobs. It may seem not really important, right until you try some classic approach which involves local changes. Then, each local change you try to make creates a chain reaction of changes. To avoid that, you can either restrict your solution (e.g., have $7$ groups of workers, where workers from the $k$-th group only collaborate in groups of $k$), or restrict local changes (and get stuck in a local optimum more often), or try an approach that avoids them altogether (here, I'll wait for the very top performers to share their thoughts).Other parts of the problem are simplified, like the graph being the plane with Manhattan distances, or the workers being exactly the same (fungible). In retrospect, I see how this actually makes the problem feel more standard. But it also helps to focus on the twist, and definitely helps to get the first solution running.You are of course within your right to dislike yet another problem in this setting, and not take part. I'm certain there will be more contests with optimization problems, some of which will surely interest you more.
» 5 months ago, # | +27 What does the t-shirt look like?
» 5 months ago, # | +8 How is it possible that my solution passed a hundred pretests and later gets WA on test 1? This is strange, and my score didn't change at all.
• » » 5 months ago, # ^ | +8 Yeah, I saw your solution getting 0 points! This happened to me in the last marathon round because my last submission to the problem was a solution with a much lower score, but they still considered that one. They will only consider your last submitted solution that got a positive score. (It's written in the blog.) But it shows a WA on test 1, which is definitely weird!
• » » 5 months ago, # ^ | 0 Sorry, this will be fixed! The reason is as follows. A couple days before the deadline, a user reported a bug in the checker. After a rejudge, yours was the only solution that was affected. I had an impression you resubmitted a working one after that, but apparently this is not the case. My bad, and sorry again! As per the rules, your last solution which gets a positive score will be run on the final test set.
• » » » 5 months ago, # ^ | +8 Okay. Thanks.
• » » » 5 months ago, # ^ | ← Rev. 2 → 0 Seems that my last solution without a bug got somewhere around 5000 points on the contest. I've spent so much time optimizing my solution. Sad as it is. Still thanks for a nice contest, I love participating in marathons. UPD: seems that 53000 was a pretest score, now my solution gets around 530000.
» 5 months ago, # | ← Rev. 2 → +31 The contest was amazing, but I couldn't give it as much time as I wanted due to university exams. I really enjoyed it! My approach: I used a greedy approach in which I would take the task with the **minimum possible start time (l[i] in the input)** and then greedily select the next best possible job from there (the parameter used to find the next best job was the time taken to reach the new location from the previous one, respecting its starting time), and when there were no more jobs left to take I would assign the currently selected jobs to a new worker. So I used a priority queue which decided which was the best job to start with; I would take the first one as the starting job and continue normally. This solution only takes 30 ms and I was not utilizing the time limit, and then it struck me that at any point there may be more than one best job to select. So I ran the complete process around 100 times, each time selecting a random best job, and finally took the solution with the maximum profit. Some tricks that helped me: 1) Since I was using only one type of parameter for job selection it was not giving me such good results, so I made multiple functions, and in each one of them I used a different parameter, because for different test cases different parameters can be the best one; then I called each of them an equal number of times with randomness. 2) I was initially taking the time as the seed for my random values, but then I figured it was better to take a fixed seed, so I took my all-time favorite seed 690069. 3) I found this hack in the early stages of the contest: it is always better to first finish all the jobs whose worker requirement is greater than 3 (4, 5, 6, 7). Each time we select a worker for a job the required number decreases by 1, and when it becomes < 4 I do not touch it anymore; finally I complete all the jobs whose worker requirement is still greater than 0. Although the one thing I really wanted to know is: did anyone skip jobs in their algorithm? Because I was never able to come up with a strategy that involved skipping jobs!
» 5 months ago, # | ← Rev. 4 → +44 Thanks for the contest, it's very interesting! Here is my solution: 1. a greedy constructor to give an initial solution; 2. fix the start times of the locations, optimize the worker routes directly on the objective function, and model this sub-problem as a Minimum-Cost-Flow; 3. fix the worker routes and optimize the start times of the locations, which can be done to near optimality using just shortest paths; 4. iterate 2 and 3. By start time I mean the time when workers start to work at each location. It scores 586064.895 using my own constructor. I tried C137's constructor, and it will probably give a boost of about 30000 according to my local tests. ============= I think the idea of step 2 is interesting and would like to briefly describe it: consider a digraph G with all locations as vertices, where an arc in G means a worker goes to location v after location u. Then the locations' collaboration constraints turn into constraints on the indegree of the vertices. Since the start times of the locations are fixed, we can actually construct G with all legal arcs. The problem is to use some paths to satisfy all indegree constraints, where each path costs 240, and each vertex used as the start/end of a path also costs something. This can be transformed into a minimum-cost-flow problem on a bipartite graph. Some pruning of arcs is needed, otherwise any MCF algorithm will be too slow.
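For concreteness, here is one way the fixed-start-time subproblem from step 2 above could be written down; this is a reader's formalization of the construction described in the comment, not necessarily the exact model used. With the start time $s_v$ and end time $e_v$ of work at location $v$ fixed, $p_v$ its required number of workers, and $d(\cdot,\cdot)$ the travel time:
$$\min \sum_{(u,v):\; e_u + d(u,v) \le s_v} c_{uv}\, x_{uv} \;+\; \sum_{v} \bigl(240 + c^{\mathrm{base}}_v\bigr)\, y_v$$
$$\text{s.t.}\quad \sum_{u} x_{uv} + y_v = p_v \ \ \forall v, \qquad \sum_{w} x_{vw} \le p_v \ \ \forall v, \qquad x_{uv},\, y_v \ge 0 \text{ and integer},$$
where $x_{uv}$ counts workers who go straight from $u$ to $v$, $y_v$ counts fresh workers sent from the base to $v$, and the costs $c_{uv}$ and $c^{\mathrm{base}}_v$ absorb travel and idle time (the cost of finally returning to the base can be charged, symmetrically, to the workers that are not forwarded anywhere). This is a transportation problem, i.e. a minimum-cost flow on a bipartite graph, which matches the description above.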
• » » 5 months ago, # ^ | 0 Phronesis jetblack20, when you guys say that the score will increase by 30000 and 7000, what exactly do you mean? The displayed score is divided by 1000. So does it mean it will actually increase by 30 and 7, or really by 30000 and 7000? Because if it actually increased that much, then your scores would be around 88000.000 and 66000.000, which is better than anyone's score!
• » » » 5 months ago, # ^ | +3 For my case, I expect the final score to be 598962.869 + 7000, which is equal to 605962.869. There are ~1000 datasets in the final grading round, so division by 1000 gives an average for a single dataset.
• » » » 5 months ago, # ^ | +3 I would expect ~586k+30k, as it's about the final score
» 5 months ago, # | ← Rev. 3 → +22 Thanks, it was very fun to participate in the contest! I used an approach that involves local search with simulated annealing. My approach is pretty simple: 1) Initialize each location with a set of neighbours (~40). Each neighbour should not be far away from the current location. 2) Assign a fixed initial starting time for each location. 3) Find an initial feasible solution with a simple greedy algorithm. 4) Perform one of two local modifications for a randomly chosen location: a) With probability 0.2, change the starting time of the location by an amount randomly chosen from the interval [-20, 20]. If some constraints are violated after that change, fix them by introducing new workers. b) With probability 0.8, push some workers from the current location to a randomly chosen neighbour. If constraints are violated, try to fix them by reconnecting the modified workers. If this fix fails, satisfy the constraints by introducing new workers. Repeat step 4 using simulated annealing. There was a simple improvement that could have given me an additional ~7k points on my final solution, but unfortunately I figured it out too late: simply increase the probability of picking the "bad" modification that involves the time change.
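For readers unfamiliar with the technique, a generic simulated-annealing skeleton of the shape described in the comment above could look like the following sketch; the Solution type, the mutation callback and the scoring callback stand in for the problem-specific parts and are not the contestant's actual code.

#include <cmath>
#include <random>

// Generic simulated-annealing loop: accepts improving moves always and
// worsening moves with a Boltzmann probability that decays with temperature.
template <class Solution, class Mutate, class Score>
Solution anneal(Solution cur, Mutate mutate, Score score,
                double t0, double t1, long long iters, unsigned seed = 12345) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    double curScore = score(cur);
    Solution best = cur;
    double bestScore = curScore;
    for (long long it = 0; it < iters; ++it) {
        double t = t0 * std::pow(t1 / t0, (double)it / iters);  // geometric cooling
        Solution next = mutate(cur, rng);                        // local modification
        double nextScore = score(next);
        if (nextScore >= curScore ||
            unif(rng) < std::exp((nextScore - curScore) / t)) {
            cur = next;
            curScore = nextScore;
            if (curScore > bestScore) { best = cur; bestScore = curScore; }
        }
    }
    return best;
}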
» 5 months ago, # | ← Rev. 2 → +63 First of all, thanks for the contest, the wisely chosen parameters made it very challenging for me. Initially I solved the task with the following restrictions: workers are assigned to groups of size K; every worker in a group moves simultaneously; a group of size K does jobs which require K workers. I stored only the list of jobs (the path) for each worker; the optimal start time of every job can be calculated easily under the restrictions above. So, my first solution was very classic: greedy construction + optimization with Simulated Annealing. The optimization part worked well; the results I got were almost the same when I started from a very stupid initial construction, which showed that the greedy construction part doesn't matter much. The optimization consisted of 3 kinds of moves. Let the string abcdefghijk represent a path, each letter a job. Move a job from one path to another: abcdefghijk -> abcdeBfghijk, ABCDEFGHIJK -> ACDEFGHIJK. Swap tails: abcdefghijk -> abcdFGHIJK, ABCDEFGHIJK -> ABCDEefghijk. Swap two jobs: abcdefghijk -> aBcdefghijk, ABCDEFGHIJK -> AbCDEFGHIJK. Up to this point it looks like a standard solution for a standard optimization problem, but... This solution had an upper bound of about 60xxx points. It seemed reasonably good until solutions with 61k+ and 62k+ scores started to appear. For a long time I did not find an elegant way to handle constructions with fewer restrictions. The break point: in the second week I found that if I deceive the workers and pretend that all jobs with p=6 have p=7, then surprisingly the scores began to improve (p = required number of workers for a job). At first look it's a waste of resources, because we are using 7 workers even for 6-worker jobs, but on the other hand we have more possibilities for constructing paths, and the gain compensates for the wasted resources. Besides, we obtain some free capacity (when a 7-worker group is doing a p=6 job, one of the workers can do some p=1 jobs in the meantime). In my final solution I did all p>=4 jobs with groups of size 7, then I tried to insert p<=3 jobs into the free capacity. The remaining 2<=p<=3 jobs were done similarly with groups of size 3 and p=1 jobs inserted into them. That's all. UPD. The code.
• » » 5 months ago, # ^ | 0 Hey T1024, about the three optimizations you mentioned (move a job from one path to another, swap tails, and swap two jobs): on what basis did you do any of the above moves? Thanks
• » » » 5 months ago, # ^ | 0 Not sure that I understand your question correctly. I chose the type of move randomly — the first two with probability 0.4 and the last with 0.2.
• » » » » 5 months ago, # ^ | 0 Sorry if the question was unclear, but I got the answer! So was getting to those probabilities trial and error, or was there a specific way you arrived at them?
• » » » » » 5 months ago, # ^ | +6 trial and error
» 5 months ago, # | ← Rev. 2 → +14 I finished 5th, but the top was way out of my league. The code is here (more for completeness than for reading, it's a mess). I start by generating an initial solution: each job is transformed into a JobPackage. Such a package contains a list of jobs (initially just one single job) and an earliest and latest possible starting time. Then I merge job packages: take two packages with a small distance (last job of package A to first job of package B) and the same or a similar amount of workers, and append one to the other if possible. This again results in an earliest and latest possible starting time — but not a fixed time in most cases, leaving some flexibility. I complete all jobs, not skipping any (I even tried to remove some jobs from the input by hand, and didn't get any better by skipping). In a first run I take all jobs with 4+ workers and merge those only (giving a small penalty for a different amount of workers). In a second run I also merge the jobs with less than 4 workers, with a higher penalty for different worker counts. I planned to split each task with more than 4 workers in two: one with 4 and one with the rest. That would double the amount of matching partners with 1-3 worker jobs and focus more on optimizing the jobs that require a lot of workers. Somehow it didn't work out, maybe I screwed up the implementation. Keeping track of the first and last possible starting time was a pain. This initial solution gives a score around 907 for the 9th testcase and takes about 1 second to finish. Then I keep mutating my initial solution randomly as long as I have time remaining. First of all I generate the independent workers from the job packages created above. I do several minor mutations with these workers: combining workers by appending the tasks of one worker to those of another; taking workers with very few jobs and trying to assign all their jobs to different workers (possibly changing the starting time in that process), reducing the overall worker count; taking a job which is the first or last for at least some workers and finding other workers which can also do the job for the same or less cost (also changing the starting time here); setting all starting times of jobs which aren't the last job for any worker to the latest time possible, and then doing the same in the opposite direction (setting an early start), which helps to reduce waiting times while the workers could actually start a job; passing jobs from one worker to another and doing a crossover (one worker keeps its first few jobs, but then continues with the last jobs of another worker instead, and the other worker gets the remaining jobs). Here a waiting worker is to be preferred over a travelling one (if both give the same score), as waiting periods can be filled with other jobs in following mutations. These mutations help to improve the score of test 9 to about 965 (983 offline when running it for more than 15 s — there is not much improvement after 15 s for smaller testcases such as 1). My biggest issue is that I didn't find a clever way to change the order in which jobs have to be completed; it would lead to a huge chain of changes. I liked the contest and saw it as a challenge to climb the ladder. The problem didn't seem similar to others I've tried so far — but I'm more into bot programming and fighting other bots than into optimization. The leaderboard was very stable and kept its ordering after the rerun. Thumbs up for this. Plots of a solution to test 9: graph, map.
» 5 months ago, # | +8 Gassa, hey, when will the system tests finish, so that we can submit a few solutions and know the final results, so that at least a T-shirt is confirmed! xD
• » » 5 months ago, # ^ | +6 Done.
» 5 months ago, # | +11 Gassa, will you also make the submissions from the contest public, in order to be able to read others' solutions? (especially of the top contestants who won't describe their approach in this thread or anywhere else)
• » » 5 months ago, # ^ | 0 Done.
» 5 months ago, # | +35 Final results (top 25):
Place Contestant Score Language
1 T1024 639881.469 GNU C++14
2 Rafbill 636292.029 GNU C++17
3 wleite 623724.206 Java 8
4 tamionv 615478.737 GNU C++17
5 eulerscheZahl 608706.631 Mono C#
6 AnlJ8vIySM8j8Nfq 606391.908 Java 8
7 katana_handler 604072.087 GNU C++17
8 NighTurs 603652.452 GNU C++17
9 ckr 601043.658 GNU C++17
10 Nagrarok 600503.406 GNU C++14
11 MiriTheRing 599614.750 GNU C++17
12 jetblack20 598962.869 Java 8
13 karaketir16 596502.335 GNU C++17
14 iehn 595796.071 GNU C++11
15 C137 590415.267 GNU C++14
16 seirg 587567.206 GNU C++17
17 Artmat 587504.727 GNU C++17
18 Phronesis 586064.895 GNU C++14
19 Guty 584270.633 GNU C++14
20 re_eVVorld 582740.133 GNU C++17
21 IdeaSeeker 582346.050 GNU C++17
22 Lucerne 581810.559 GNU C++11
23 Osama_Alkhodairy 579074.751 GNU C++17
24 nishantrai18 578640.706 GNU C++11
25 Degalat57 576980.039 GNU C++17
Congratulations to the winners!
• » » 5 months ago, # ^ | +19 Gassa Just asking, when will we get the T-shirts!? xD
» 5 months ago, # | +82 My solution, differently from T1024's, didn't rely on groups of workers. And it didn't use SA, just a greedy allocation, repeated several times, with different (random) starting jobs and other parameters. I had a heuristic formula to estimate the number of workers to be used (mainly based on the total number of jobs). It allocates workers to jobs, picking the earliest available jobs, but randomly skipping a few of them. For example, if jobs are sorted by the time allowed to start, it could pick 1, 5, 7, 13, 20... until the number of workers matches the number estimated before. After that initial allocation, the solution keeps track of "resources" (workers available at a given position and time, when their last task was completed). Then it finds the next job to be executed, testing all pending jobs. Here several heuristics were used, but the main idea was to pick the job that would be able to start earlier and use the workers that would "lose" less time (which includes the distance and possibly a "wait time", as workers can come from different locations and we need to have them all to start). If on the way to the chosen next job workers can complete another job, without delaying the original one, they will do that. Repeat the whole process, picking different (random) starting jobs, and changing the number of workers a little bit, until the time is almost over. On top of the best solution found, in the last 200 ms, repeat the process only for the jobs left undone, trying to add more workers to take care of these jobs, and check if that would provide a positive gain. For large cases, it runs only 10 iterations, and 50-100 in average cases. Increasing the available running time (10x) would improve my score by only 0.5%, which wouldn't be enough to reach the top two spots. As a conclusion, I would say that allowing workers to move independently is, in theory, a good option to minimize their idle / moving time. On the other hand, it creates a chain of dependency which I couldn't figure out a good way to change without rebuilding a large part of the solution, and that prevented me from using SA to improve the solution. Watching a simple visualizer that I built, it was clear that the paths taken by workers weren't very smart, as the greedy selection only sees the next move. Starting from scratch, even with "everything tuned", was good but not enough to match SA, even with a restriction such as the groups of workers used by the winner. By the way, thanks again for the nice competition, and congratulations to T1024 and Rafbill on the impressive performance!!!
• » » 5 months ago, # ^ | +45 A visualization of my solution for provided input #3 (in a video format): Video-Seed-3
» 5 months ago, # | +35 Thanks for the contest, I also found it quite interesting. And congratulations to T1024 for the win! As Gassa said, the challenging part is that it is hard to make a solution based on local changes work well. I represented my solution by a directed acyclic graph, whose vertices are jobs, and where an edge $a \rightarrow b$ represents a worker moving to job b after completing job a. If you are given such a graph, it is easy to check if it can lead to a valid solution, by computing for each job the minimum possible start time (after doing a topological sort of the vertices). It is also possible to find the best way (minimizing the total time) to schedule jobs in polynomial time, using the simplex algorithm. I used that as a postprocessing step in my code. Another subproblem that can be solved in polynomial time is that of finding an optimal graph assuming that the start time for each job is fixed. As Phronesis said, it can be encoded as a minimum-cost bipartite matching. I didn't use that in my solution, but I used a maximum bipartite matching (that minimises the number of workers) to compute a good initial solution, and to regularly reset the solution (so as to avoid local optima). My solution proceeds by repeating the following steps (~20000 times for the larger test cases, and ~150000 times for the smaller ones): Compute a random cut of the current DAG. It separates the graph into a left part L, and a right part R. Compute for each job $a$ in L the minimum possible end time $Tl(a)$, and for each job $b$ in R the maximum possible start time $Tr(b)$. Forget all edges in the graph between L and R. Compute a maximum bipartite matching between L and R. An edge $a \rightarrow b$ can be used if $Tl(a) + dist(a,b) \le Tr(b)$. Add the edges of the maximum matching to the graph. Each time a larger matching is found, the number of workers used in the solution decreases. Because a new matching is computed each time, it also changes the graph, even when the solution doesn't improve (this is important to avoid being stuck in a local optimum). In a second phase, I stop trying to reduce the number of workers and compute minimum-cost maximum matchings using the Hungarian algorithm, so as to decrease the total time used by the solution. I always complete all available jobs. My code: https://pastebin.com/0vwWyrVF Scores 001 : score = 256602, 131 workers 002 : score = 441729, 202 workers 003 : score = 863887, 347 workers 004 : score = 511975, 222 workers 005 : score = 475556, 218 workers 006 : score = 458680, 218 workers 007 : score = 437398, 199 workers 008 : score = 427500, 200 workers 009 : score = 1005733, 392 workers 010 : score = 968438, 386 workers
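The feasibility check mentioned at the start of the comment above (propagating earliest start times through the DAG in topological order) could look roughly like this sketch; the field names and the time-window convention are illustrative and not taken from the linked code.

#include <algorithm>
#include <utility>
#include <vector>

// A job has a window [l, h] (earliest start, latest finish) and duration d.
struct Job { int l, h, d; };

// adj[u] holds pairs (v, travel): some worker moves from job u to job v and
// needs 'travel' time units for the trip. 'order' must be a topological order
// of the DAG. Returns false if some job cannot finish inside its window.
bool feasible(const std::vector<Job>& jobs,
              const std::vector<std::vector<std::pair<int, int>>>& adj,
              const std::vector<int>& order) {
    std::vector<int> start(jobs.size());
    for (int i = 0; i < (int)jobs.size(); ++i) start[i] = jobs[i].l;
    for (int u : order) {
        int finish = start[u] + jobs[u].d;
        if (finish > jobs[u].h) return false;        // misses its deadline
        for (const auto& [v, travel] : adj[u])       // successors start later
            start[v] = std::max(start[v], finish + travel);
    }
    return true;
}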
|
# [texhax] chktex, are its reports any good?
David Carlisle d.p.carlisle at gmail.com
Sun Apr 16 11:16:47 CEST 2017
On 16 April 2017 at 01:50, <great123456 at mail.com> wrote:
> A few days ago I found and ran chktex over my source. Though my source
> produces correct results when latex runs it over, I did get a surprising
> number of warnings; some things are easy to fix but hard to understand why
> they are wrong, the opposite is true for others, so please lend a hand (I
> have removed duplicate warnings).
>
> Thanks,
> David
>
>
> % chktex wms.tex
> ChkTeX v1.7.1 - Copyright 1995-96 Jens T. Berger Thielemann.
> Compiled with PCRE regex support.
>
>
> Warning 18 in wms.tex line 36: Use either `` or '' as an alternative to
> "'. Also, several of the WMs advertize "session managment" which should
> remember ^
>
> # What is wrong with "?
>
" is never correct as an input character. Depending on the font encoding in
use it may, produce a double right quote
but even that is not guaranteed in general. use ' ' (without the
spaces)
> Warning 13 in wms.tex line 39: Intersentence spacing (\@') should
> perhaps be used.
> is managed when restarting the WM. So "session management" means several
> things ^
>
> # I'm lost here
>
after WM. the . is taken to be marking an abbreviation so if, as seems
likely given the capital S following,
you want an end of sentence here you need to mark that with \@
> Warning 44 in wms.tex line 52: User Regex: -2:Vertical rules in tables
> are ugly. \begin{tabular}{l|l|l|l|l|l|l|l|l|l}
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> # What is a vertical rule, and how do I avoid it?
>
See the booktabs package for a diatribe on the evils of vertical rules :-)
Unlike the others this is simply suggesting a stylistic choice, that you
avoid
vertical rules in tables (by not using |)
> Warning 1 in wms.tex line 338: Command terminated with space.
> \cleardoublepage
> ^
> # Don't all commands end with a space?
>
well yes but if it was "\LaTeX is weird" the space intended to be output
would
just be taken as a command termination, and you'd need \LaTeX\ is
Presumably the checking command can not classify commands that produce
text from those like \cleardoublepage that do not. So the warning is
spurious here.
> Warning 26 in wms.tex line 348: You ought to remove spaces in front of
> punctuation.
> Your welcome to test it yourself and submit your findings :)
> ^
> # How do you make a smiley then?
>
> Warning 9 in wms.tex line 348: }' expected, found )'.
> Your welcome to test it yourself and submit your findings :)
>
again this is implementing a style suggestion based on classical
English punctuation. If you are making smileys (or if you are French)
you can safely ignore it.
> Warning 10 in wms.tex line 350: Solo }' found.
> some WMs and not on others.}
> ^
> # If I write something like this (and that too).
> # What do I do to not upset chktex?
>
That looks like a TeX syntax error if it really is an unmatched }
David
|
# How can I visualize the geometry of the complex mapping $g(z)=\frac{1}{z}$?
I'm trying to visualize the geometry of the complex mapping $g(z)=\frac{1}{z}$ using the following code.
g[x_, y_] := {x/(x^2 + y^2), -y/(x^2 + y^2)};
Manipulate[ParametricPlot[{{Cos[t], Sin[t]},
{c, 10 t},
g[c, t]},
{t, -Pi, Pi}, PlotRange -> {{-5, 5}, {-5, 5}}],
{c, -3, 3}]
What I want to demonstrate is that the image of a straight line under the mapping $g$ is a circle. (And I use Manipulate to demonstrate the image for different vertical lines.) But I only get part of the circle since the straight line is actually only a segment in the plot.
Could anyone help to fix this?
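As a side note on the mathematics (a standard computation, not part of the original post): for $c \neq 0$ the image of the vertical line $x = c$ under $g(z) = 1/z$ is indeed a circle. Writing $g(c + i y) = \dfrac{c - i y}{c^2 + y^2} = u + i v$ gives $u = \dfrac{c}{c^2+y^2}$ and $v = \dfrac{-y}{c^2+y^2}$, hence $u^2 + v^2 = \dfrac{1}{c^2+y^2} = \dfrac{u}{c}$, i.e.
$$\left(u - \frac{1}{2c}\right)^2 + v^2 = \frac{1}{4c^2},$$
a circle of radius $1/(2|c|)$ centred at $(1/(2c), 0)$ that passes through the origin. The origin itself is only approached as $y \to \pm\infty$, which is exactly why plotting the line over a finite parameter range draws only part of the circle.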
• A somehow related topic. – corey979 Feb 12 '17 at 20:50
• I didn't mean for you to delete your question, but I couldn't spend the time to write a full answer. Hope you figured it out based on my comment. – Szabolcs Feb 13 '17 at 20:43
Not perfect but better:
g[x_, y_] := {x/(x^2 + y^2), -y/(x^2 + y^2)};
Manipulate[
Show[
ParametricPlot[{{Cos[t], Sin[t]}, {c, 10 t}}, {t, -Pi, Pi},
PlotRange -> {{-5, 5}, {-5, 5}}],
ParametricPlot[g[c, t], {t, -100, 100}]],
{c, -3, 3}]
Trick: You can improve the smoothness of the parametric circle during manipulate by replacing the second plot as follows:
Manipulate[
Show[
ParametricPlot[{{Cos[t], Sin[t]}, {c, 10 t}}, {t, -Pi, Pi},
PlotRange -> {{-5, 5}, {-5, 5}}],
ParametricPlot[g[c, t^5], {t, -2, 2}] ],
{{c, -1.5}, -3, 3}]
• Add PlotPoints -> 1000 in ParametricPlot for full of happiness. – user64494 Feb 12 '17 at 19:00
• @user64494 this may be machine-dependent; it does not help on my 2015 MacBook Pro. However, the above added trick did. – A.G. Feb 12 '17 at 19:22
• Thank you for your answer. What is the -1.5 for in {c, -1.5}, -3, 3}? – Jack Feb 12 '17 at 22:47
• Enlightened by your answer, one can replace g[c,t] with g[c,t^7] in my original code to get the desired result. – Jack Feb 12 '17 at 22:53
• @Jack -1.5 is the initial value of c. – A.G. Feb 12 '17 at 23:36
f[x_, y_] := ReIm[1/(x + I y)]
Manipulate[
ParametricPlot[{{a + b Cos[t], c + b Sin[t]},
f[a + b Cos[t], c + b Sin[t]]}, {t, 0, 2 Pi},
PlotRange -> {{-4, 4}, {-4, 4}},
PlotLegends -> {"z", "1/z"}], {{a, 0}, -2, 2}, {{b, 1}, 0.1,
2}, {{c, 0}, -2, 2}]
• I have never seen the use of ReIm before. Thanks! – Jack Feb 13 '17 at 16:00
• May I ask how you made the gif from Mathematica? – Jack Feb 13 '17 at 16:06
• @Jack I use LICECap – ubpdqn Feb 15 '17 at 7:13
|
Ecological economics
Ecological economics, also known as bioeconomics of Georgescu-Roegen, ecolonomy, or eco-economics, is both a transdisciplinary and an interdisciplinary field of academic research addressing the interdependence and coevolution of human economies and natural ecosystems, both intertemporally and spatially. [1] By treating the economy as a subsystem of Earth's larger ecosystem, and by emphasizing the preservation of natural capital, the field of ecological economics is differentiated from environmental economics, which is the mainstream economic analysis of the environment. [2] One survey of German economists found that ecological and environmental economics are different schools of economic thought, with ecological economists emphasizing strong sustainability and rejecting the proposition that physical (human-made) capital can substitute for natural capital (see the section on weak versus strong sustainability below). [3]
Ecological economics was founded in the 1980s as a modern discipline on the works of and interactions between various European and American academics (see the section on History and development below). The related field of green economics is in general a more politically applied form of the subject. [4] [5]
According to ecological economist Malte Faber [ de ], ecological economics is defined by its focus on nature, justice, and time. Issues of intergenerational equity, irreversibility of environmental change, uncertainty of long-term outcomes, and sustainable development guide ecological economic analysis and valuation. [6] Ecological economists have questioned fundamental mainstream economic approaches such as cost-benefit analysis, and the separability of economic values from scientific research, contending that economics is unavoidably normative, i.e. prescriptive, rather than positive or descriptive. [7] Positional analysis, which attempts to incorporate time and justice issues, is proposed as an alternative. [8] [9] Ecological economics shares several of its perspectives with feminist economics, including the focus on sustainability, nature, justice and care values. [10]
History and development
The antecedents of ecological economics can be traced back to the Romantics of the 19th century as well as some Enlightenment political economists of that era. Concerns over population were expressed by Thomas Malthus, while John Stuart Mill predicted the desirability of the stationary state of an economy. Mill thereby anticipated later insights of modern ecological economists, but without having had their experience of the social and ecological costs of the Post–World War II economic expansion. In 1880, Marxian economist Sergei Podolinsky attempted to theorize a labor theory of value based on embodied energy; his work was read and critiqued by Marx and Engels. [11] Otto Neurath developed an ecological approach based on a natural economy whilst employed by the Bavarian Soviet Republic in 1919. He argued that a market system failed to take into account the needs of future generations, and that a socialist economy required calculation in kind, the tracking of all the different materials, rather than synthesising them into money as a general equivalent. In this he was criticised by neo-liberal economists such as Ludwig von Mises and Friedrich Hayek in what became known as the socialist calculation debate. [12]
The debate on energy in economic systems can also be traced back to Nobel prize-winning radiochemist Frederick Soddy (1877–1956). In his book Wealth, Virtual Wealth and Debt (1926), Soddy criticized the prevailing belief of the economy as a perpetual motion machine, capable of generating infinite wealth—a criticism expanded upon by later ecological economists such as Nicholas Georgescu-Roegen and Herman Daly. [13]
European predecessors of ecological economics include K. William Kapp (1950), [14] Karl Polanyi (1944), [15] and Romanian economist Nicholas Georgescu-Roegen (1971). Georgescu-Roegen, who would later mentor Herman Daly at Vanderbilt University, provided ecological economics with a modern conceptual framework based on the material and energy flows of economic production and consumption. His magnum opus, The Entropy Law and the Economic Process (1971), is credited by Daly as a fundamental text of the field, alongside Soddy's Wealth, Virtual Wealth and Debt. [16] Some key concepts of what is now ecological economics are evident in the writings of Kenneth Boulding and E.F. Schumacher, whose book Small Is Beautiful: A Study of Economics as if People Mattered (1973) was published just a few years before the first edition of Herman Daly's comprehensive and persuasive Steady-State Economics (1977). [17] [18]
The first organized meetings of ecological economists occurred in the 1980s. These began in 1982, at the instigation of Lois Banner, [19] with a meeting held in Sweden (including Robert Costanza, Herman Daly, Charles Hall, Bruce Hannon, H.T. Odum, and David Pimentel). [20] Most were ecosystem ecologists or mainstream environmental economists, with the exception of Daly. In 1987, Daly and Costanza edited an issue of Ecological Modeling to test the waters. A book entitled Ecological Economics, by Juan Martinez-Alier, was published later that year. [20] He renewed interest in the approach developed by Otto Neurath during the interwar period. [21] 1989 saw the foundation of the International Society for Ecological Economics and publication of its journal, Ecological Economics , by Elsevier. Robert Costanza was the first president of the society and first editor of the journal, which is currently edited by Richard Howarth. Other figures include ecologists C.S. Holling and H.T. Odum, biologist Gretchen Daily, and physicist Robert Ayres. In the Marxian tradition, sociologist John Bellamy Foster and CUNY geography professor David Harvey explicitly center ecological concerns in political economy.
Articles by Inge Ropke (2004, 2005) [22] and Clive Spash (1999) [23] cover the development and modern history of ecological economics and explain its differentiation from resource and environmental economics, as well as some of the controversy between American and European schools of thought. An article by Robert Costanza, David Stern, Lining He, and Chunbo Ma [24] responded to a call by Mick Common to determine the foundational literature of ecological economics by using citation analysis to examine which books and articles have had the most influence on the development of the field. However, citations analysis has itself proven controversial and similar work has been criticized by Clive Spash for attempting to pre-determine what is regarded as influential in ecological economics through study design and data manipulation. [25] In addition, the journal Ecological Economics has itself been criticized for swamping the field with mainstream economics. [26] [27]
Schools of thought
Various competing schools of thought exist in the field. Some are close to resource and environmental economics while others are far more heterodox in outlook. An example of the latter is the European Society for Ecological Economics. An example of the former is the Swedish Beijer International Institute of Ecological Economics. Clive Spash has argued for the classification of the ecological economics movement, and more generally work by different economic schools on the environment, into three main categories. These are the mainstream new resource economists, the new environmental pragmatists, [28] and the more radical social ecological economists. [29] International survey work comparing the relevance of the categories for mainstream and heterodox economists shows some clear divisions between environmental and ecological economists. [30]
Differences from mainstream economics
Some ecological economists prioritise adding natural capital to the typical capital asset analysis of land, labor, and financial capital. These ecological economists then use tools from mathematical economics as in mainstream economics, but may apply them more closely to the natural world. Whereas mainstream economists tend to be technological optimists, ecological economists are inclined to be technological sceptics. They reason that the natural world has a limited carrying capacity and that its resources may run out. Since destruction of important environmental resources could be practically irreversible and catastrophic, ecological economists are inclined to justify cautionary measures based on the precautionary principle. [31]
The most cogent example of how the different theories treat similar assets is tropical rainforest ecosystems, most obviously the Yasuni region of Ecuador. While this area has substantial deposits of bitumen it is also one of the most diverse ecosystems on Earth and some estimates establish it has over 200 undiscovered medical substances in its genomes - most of which would be destroyed by logging the forest or mining the bitumen. Effectively, the instructional capital of the genomes is undervalued by analyses which view the rainforest primarily as a source of wood, oil/tar and perhaps food. Increasingly the carbon credit for leaving the extremely carbon-intensive ("dirty") bitumen in the ground is also valued - the government of Ecuador set a price of US$350M for an oil lease with the intent of selling it to someone committed to never exercising it at all and instead preserving the rainforest. While this natural capital and ecosystem services approach has proven popular amongst many it has also been contested as failing to address the underlying problems with mainstream economics, growth, market capitalism and monetary valuation of the environment. [32] [33] [34] Critiques concern the need to create a more meaningful relationship with Nature and the non-human world than evident in the instrumentalism of shallow ecology and the environmental economists' commodification of everything external to the market system. [35] [36] [37]

Nature and ecology

A simple circular flow of income diagram is replaced in ecological economics by a more complex flow diagram reflecting the input of solar energy, which sustains natural inputs and environmental services which are then used as units of production. Once consumed, natural inputs pass out of the economy as pollution and waste. The potential of an environment to provide services and materials is referred to as an "environment's source function", and this function is depleted as resources are consumed or pollution contaminates the resources. The "sink function" describes an environment's ability to absorb and render harmless waste and pollution: when waste output exceeds the limit of the sink function, long-term damage occurs. [38] :8 Some persistent pollutants, such as some organic pollutants and nuclear waste, are absorbed very slowly or not at all; ecological economists emphasize minimizing "cumulative pollutants". [38] :28 Pollutants affect human health and the health of the ecosystem.

The economic value of natural capital and ecosystem services is accepted by mainstream environmental economics, but is emphasized as especially important in ecological economics. Ecological economists may begin by estimating how to maintain a stable environment before assessing the cost in dollar terms. [38] :9 Ecological economist Robert Costanza led an attempted valuation of the global ecosystem in 1997. Initially published in Nature, the article concluded on a value of $33 trillion with a range from $16 trillion to $54 trillion (in 1997, total global GDP was $27 trillion). [39] Half of the value went to nutrient cycling. The open oceans, continental shelves, and estuaries had the highest total value, and the highest per-hectare values went to estuaries, swamps/floodplains, and seagrass/algae beds. The work was criticized by articles in Ecological Economics Volume 25, Issue 1, but the critics acknowledged the positive potential for economic valuation of the global ecosystem. [38] :129 The Earth's carrying capacity is a central issue in ecological economics.
Early economists such as Thomas Malthus pointed out the finite carrying capacity of the earth, which was also central to the MIT study Limits to Growth. Diminishing returns suggest that productivity increases will slow if major technological progress is not made. Food production may become a problem, as erosion, an impending water crisis, and soil salinity (from irrigation) reduce the productivity of agriculture. Ecological economists argue that industrial agriculture, which exacerbates these problems, is not sustainable agriculture, and are generally inclined favorably to organic farming, which also reduces the output of carbon. [38] :26 Global wild fisheries are believed to have peaked and begun a decline, with valuable habitat such as estuaries in critical condition. [38] :28 The aquaculture or farming of piscivorous fish, like salmon, does not help solve the problem because they need to be fed products from other fish. Studies have shown that salmon farming has major negative impacts on wild salmon, as well as the forage fish that need to be caught to feed them. [40] [41] Since animals are higher on the trophic level, they are less efficient sources of food energy. Reduced consumption of meat would reduce the demand for food, but as nations develop, they tend to adopt high-meat diets similar to that of the United States. Genetically modified food (GMF), a conventional solution to the problem, presents numerous problems: Bt corn produces its own Bacillus thuringiensis toxin/protein, but pest resistance is believed to be only a matter of time. [38] :31 Global warming is now widely acknowledged as a major issue, with all national scientific academies expressing agreement on the importance of the issue. As population growth intensifies and energy demand increases, the world faces an energy crisis. Some economists and scientists forecast a global ecological crisis if energy use is not contained; the Stern report is an example. The disagreement has sparked a vigorous debate on the issue of discounting and intergenerational equity.

Ethics

Mainstream economics has attempted to become a value-free 'hard science', but ecological economists argue that value-free economics is generally not realistic. Ecological economics is more willing to entertain alternative conceptions of utility, efficiency, and cost-benefits such as positional analysis or multi-criteria analysis. Ecological economics is typically viewed as economics for sustainable development, [42] and may have goals similar to green politics.

Green economics

In international, regional, and national policy circles, the concept of the green economy grew in popularity, at first as a response to the financial predicament and then as a vehicle for growth and development. [43] The United Nations Environment Program (UNEP) defines a 'green economy' as one that focuses on the human aspects and natural influences and an economic order that can generate high-salary jobs. In 2011, its definition was further developed, with the word 'green' made to refer to an economy that is not only resourceful and well-organized but also impartial, guaranteeing an objective shift to an economy that is low-carbon, resource-efficient, and socially inclusive.
The ideas and studies regarding the green economy denote a fundamental shift towards more effective, resourceful, environment-friendly and resource-saving technologies that could lessen emissions and alleviate the adverse consequences of climate change, while at the same time confronting issues of resource exhaustion and grave environmental degradation. [44]

As an indispensable requirement and vital precondition to realizing sustainable development, green economy adherents robustly promote good governance. To boost local investments and foreign ventures, it is crucial to have a stable and predictable macroeconomic environment. Likewise, such an environment will also need to be transparent and accountable. In the absence of a substantial and solid governance structure, the prospect of shifting towards a sustainable development path would be slim. In achieving a green economy, competent institutions and governance systems are vital in guaranteeing the efficient execution of strategies, guidelines, campaigns, and programmes.

Shifting to a green economy demands a fresh mindset and an innovative outlook on doing business. It likewise necessitates new capacities and skill sets from workers and professionals who can function competently across sectors and work effectively within multi-disciplinary teams. To achieve this goal, vocational training packages must be developed with a focus on greening the sectors. Simultaneously, the educational system needs to be reassessed so that the environmental and social considerations of the various disciplines are taken into account. [45]

Topics

Among the topics addressed by ecological economics are methodology, allocation of resources, weak versus strong sustainability, energy economics, energy accounting and balance, environmental services, cost shifting, modeling, and monetary policy.

Methodology

A primary objective of ecological economics (EE) is to ground economic thinking and practice in physical reality, especially in the laws of physics (particularly the laws of thermodynamics) and in knowledge of biological systems. It accepts as a goal the improvement of human well-being through development, and seeks to ensure achievement of this through planning for the sustainable development of ecosystems and societies. The terms development and sustainable development are, of course, far from uncontroversial. Richard B. Norgaard argues that traditional economics has hijacked the development terminology in his book Development Betrayed. [46]

Well-being in ecological economics is also differentiated from welfare as found in mainstream economics and the 'new welfare economics' from the 1930s which informs resource and environmental economics. This entails a limited, preference-utilitarian conception of value, i.e., Nature is valuable to our economies because people will pay for its services such as clean air, clean water, encounters with wilderness, etc.

Ecological economics is distinguishable from neoclassical economics primarily by its assertion that the economy is embedded within an environmental system. Ecology deals with the energy and matter transactions of life and the Earth, and the human economy is by definition contained within this system. Ecological economists argue that neoclassical economics has ignored the environment, at best considering it to be a subset of the human economy.
The neoclassical view ignores much of what the natural sciences have taught us about the contributions of nature to the creation of wealth, e.g., the planetary endowment of scarce matter and energy, along with the complex and biologically diverse ecosystems that provide goods and ecosystem services directly to human communities: micro- and macro-climate regulation, water recycling, water purification, storm water regulation, waste absorption, food and medicine production, pollination, protection from solar and cosmic radiation, the view of a starry night sky, etc.

There has then been a move to regard such things as natural capital and ecosystem functions as goods and services. [47] [48] However, this is far from uncontroversial within ecology or ecological economics due to the potential for narrowing down values to those found in mainstream economics and the danger of merely regarding Nature as a commodity. This has been referred to as ecologists 'selling out on Nature'. [49] There is then a concern that ecological economics has failed to learn from the extensive literature in environmental ethics about how to structure a plural value system.

Allocation of resources

Resource and neoclassical economics focus primarily on the efficient allocation of resources and less on the two other problems of importance to ecological economics: distribution (equity), and the scale of the economy relative to the ecosystems upon which it relies. [50] Ecological economics makes a clear distinction between growth (quantitative increase in economic output) and development (qualitative improvement of the quality of life), while arguing that neoclassical economics confuses the two. Ecological economists point out that beyond modest levels, increased per-capita consumption (the typical economic measure of "standard of living") may not always lead to improvement in human well-being, but may have harmful effects on the environment and broader societal well-being. This situation is sometimes referred to as uneconomic growth (see diagram above).

Weak versus strong sustainability

The three nested systems of sustainability - the economy wholly contained by society, wholly contained by the biophysical environment.

Ecological economics challenges the conventional approach towards natural resources, claiming that it undervalues natural capital by considering it as interchangeable with human-made capital - labor and technology. The impending depletion of natural resources and increase of climate-changing greenhouse gasses should motivate us to examine how political, economic and social policies can benefit from alternative energy. Shifting dependence on fossil fuels with specific interest in just one of the above-mentioned factors easily benefits at least one other. For instance, photovoltaic (solar) panels have a 15% efficiency when absorbing the sun's energy, but demand for their construction has increased 120% within both commercial and residential properties. Additionally, this construction has led to a roughly 30% increase in labor demand (Chen).

The potential for the substitution of man-made capital for natural capital is an important debate in ecological economics and the economics of sustainability. There is a continuum of views among economists between the strongly neoclassical positions of Robert Solow and Martin Weitzman at one extreme, and the 'entropy pessimists', notably Nicholas Georgescu-Roegen and Herman Daly, at the other. [51]
Neoclassical economists tend to maintain that man-made capital can, in principle, replace all types of natural capital. This is known as the weak sustainability view, essentially that every technology can be improved upon or replaced by innovation, and that there is a substitute for any and all scarce materials. At the other extreme, the strong sustainability view argues that the stock of natural resources and ecological functions are irreplaceable. From the premises of strong sustainability, it follows that economic policy has a fiduciary responsibility to the greater ecological world, and that sustainable development must therefore take a different approach to valuing natural resources and ecological functions. Recently, Stanislav Shmelev developed a new methodology for the assessment of progress at the macro scale based on multi-criteria methods, which allows consideration of different perspectives, including strong and weak sustainability or conservationists vs. industrialists, and aims to find a 'middle way' by providing a strong neo-Keynesian economic push without putting excessive pressure on natural resources, including water, or producing emissions, whether directly or indirectly. [52]

Energy economics

A key concept of energy economics is net energy gain, which recognizes that all energy sources require an initial energy investment in order to produce energy. To be useful, the energy return on energy invested (EROEI) has to be greater than one. The net energy gain from the production of coal, oil and gas has declined over time as the easiest-to-produce sources have been most heavily depleted. [54]

Ecological economics generally rejects the view of energy economics that growth in the energy supply is related directly to well-being, focusing instead on biodiversity and creativity - or natural capital and individual capital, in the terminology sometimes adopted to describe these economically. In practice, ecological economics focuses primarily on the key issues of uneconomic growth and quality of life. Ecological economists are inclined to acknowledge that much of what is important in human well-being is not analyzable from a strictly economic standpoint, and they suggest an interdisciplinary approach combining social and natural sciences as a means to address this.

Thermoeconomics is based on the proposition that the role of energy in biological evolution should be defined and understood through the second law of thermodynamics, but also in terms of such economic criteria as productivity, efficiency, and especially the costs and benefits (or profitability) of the various mechanisms for capturing and utilizing available energy to build biomass and do work. [55] [56] As a result, thermoeconomics is often discussed in the field of ecological economics, which itself is related to the fields of sustainability and sustainable development.

Exergy analysis is performed in the field of industrial ecology to use energy more efficiently. [57] The term exergy was coined by Zoran Rant in 1956, but the concept was developed by J. Willard Gibbs. In recent decades, utilization of exergy has spread outside of physics and engineering to the fields of industrial ecology, ecological economics, systems ecology, and energetics.
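The EROEI criterion above reduces to a simple ratio check: usable energy delivered divided by energy invested, with break-even at one. The following is a minimal, illustrative sketch only; the function name and the figures are hypothetical and are not drawn from any cited source.

```python
def eroei(energy_delivered: float, energy_invested: float) -> float:
    """Energy return on energy invested: usable energy out per unit of energy put in."""
    return energy_delivered / energy_invested

# Hypothetical example: a source delivering 50 units of usable energy
# for every 10 units spent on extraction, refining, and transport.
ratio = eroei(energy_delivered=50.0, energy_invested=10.0)
print(f"EROEI = {ratio:.1f}")  # 5.0
print("net energy gain" if ratio > 1 else "net energy loss")
```

Declining net energy gain, as described above, shows up in such a calculation as the same delivered energy requiring an ever larger energy investment, pushing the ratio toward one.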
Energy accounting and balance

An energy balance can be used to track energy through a system, and is a very useful tool for determining resource use and environmental impacts. It uses the First and Second Laws of Thermodynamics to determine how much energy is needed at each point in a system, in what form that energy is supplied, and what it costs in terms of various environmental issues.[citation needed] The energy accounting system keeps track of energy in, energy out, and non-useful energy versus work done, and transformations within the system. [58] Scientists have written and speculated on different aspects of energy accounting. [59]

Ecosystem services and their valuation

Ecological economists agree that ecosystems produce enormous flows of goods and services to human beings, playing a key role in producing well-being. At the same time, there is intense debate about how and when to place values on these benefits. [60] [61]

A study was carried out by Costanza and colleagues [62] to determine the 'value' of the services provided by the environment. This was determined by averaging values obtained from a range of studies conducted in very specific contexts and then transferring these without regard to that context. Dollar figures were averaged to a per-hectare number for different types of ecosystem, e.g. wetlands, oceans. A total was then produced which came out at 33 trillion US dollars (1997 values), more than twice the total GDP of the world at the time of the study. This study was criticized by pre-ecological and even some environmental economists - for being inconsistent with assumptions of financial capital valuation - and by ecological economists - for being inconsistent with an ecological economics focus on biological and physical indicators. [63]

The whole idea of treating ecosystems as goods and services to be valued in monetary terms remains controversial. A common objection [64] [65] [66] is that life is precious or priceless, but this demonstrably degrades to it being worthless within cost-benefit analysis and other standard economic methods. Reducing human bodies to financial values is a necessary part of mainstream economics, and not always in the direct terms of insurance or wages. Economics, in principle, assumes that conflict is reduced by agreeing on voluntary contractual relations and prices instead of simply fighting or coercing or tricking others into providing goods or services. In doing so, a provider agrees to surrender time and take bodily risks and other (reputation, financial) risks. Ecosystems are no different from other bodies economically except insofar as they are far less replaceable than typical labour or commodities.

Despite these issues, many ecologists and conservation biologists are pursuing ecosystem valuation. Biodiversity measures in particular appear to be the most promising way to reconcile financial and ecological values, and there are many active efforts in this regard. The growing field of biodiversity finance [67] began to emerge in 2008 in response to many specific proposals such as the Ecuadoran Yasuni proposal [68] [69] or similar ones in the Congo. US news outlets treated the stories as a "threat" [70] to "drill a park", [71] reflecting a previously dominant view that NGOs and governments had the primary responsibility to protect ecosystems. However, Peter Barnes and other commentators have recently argued that a guardianship/trustee/commons model is far more effective and takes the decisions out of the political realm.
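The benefit-transfer arithmetic behind the global estimate described above - average a per-hectare dollar value for each ecosystem type, multiply by that type's global area, and sum - can be sketched as follows. This is only an illustration of the method; the figures below are placeholders, not the values used by Costanza and colleagues.

```python
# Illustrative placeholders only: per-hectare values (USD per hectare per year)
# and global areas (hectares) by ecosystem type.
per_hectare_value = {"estuaries": 22_000, "wetlands": 15_000, "open_ocean": 250}
global_area_ha = {"estuaries": 180e6, "wetlands": 330e6, "open_ocean": 33_200e6}

total = sum(per_hectare_value[eco] * global_area_ha[eco] for eco in per_hectare_value)
print(f"Estimated annual ecosystem-service value: ${total / 1e12:.1f} trillion")
```

The criticisms summarized above target exactly this step: transferring context-specific per-hectare values to the entire planet assumes the values scale linearly and ignores the contexts in which they were measured.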
Commodification of other ecological relations, as in carbon credits and direct payments to farmers to preserve ecosystem services, likewise enables private parties to play more direct roles in protecting biodiversity, but is also controversial in ecological economics. [72] The United Nations Food and Agriculture Organization achieved near-universal agreement in 2008 [73] that such payments directly valuing ecosystem preservation and encouraging permaculture were the only practical way out of a food crisis. The holdouts were all English-speaking countries that export GMOs and promote "free trade" agreements that facilitate their own control of the world transport network: the US, UK, Canada and Australia. [74]

Not 'externalities', but cost shifting

Ecological economics is founded upon the view that the neoclassical economics (NCE) assumption that environmental and community costs and benefits are mutually canceling "externalities" is not warranted. Joan Martinez Alier, [75] for instance, shows that the bulk of consumers are automatically excluded from having an impact upon the prices of commodities, as these consumers are future generations who have not been born yet. The assumptions behind future discounting, which assume that future goods will be cheaper than present goods, have been criticized by David Pearce [76] and by the recent Stern Report (although the Stern Report itself does employ discounting and has been criticized for this and other reasons by ecological economists such as Clive Spash). [77]

Concerning these externalities, some, like the eco-businessman Paul Hawken, argue an orthodox economic line that the only reason why goods produced unsustainably are usually cheaper than goods produced sustainably is a hidden subsidy, paid by the non-monetized human environment, community or future generations. [78] These arguments are developed further by Hawken, Amory Lovins and Hunter Lovins to promote their vision of an environmental capitalist utopia in Natural Capitalism: Creating the Next Industrial Revolution. [79]

In contrast, ecological economists, like Joan Martinez-Alier, appeal to a different line of reasoning. [80] Rather than assuming some (new) form of capitalism is the best way forward, an older ecological economic critique questions the very idea of internalizing externalities as providing some corrective to the current system. The work by Karl William Kapp explains why the concept of "externality" is a misnomer. [81] In fact, the modern business enterprise operates on the basis of shifting costs onto others as normal practice to make profits. [82] Charles Eisenstein has argued that this method of privatising profits while socialising the costs through externalities, passing the costs to the community, to the natural environment or to future generations, is inherently destructive. [83] As social ecological economist Clive Spash has noted, externality theory fallaciously assumes environmental and social problems are minor aberrations in an otherwise perfectly functioning efficient economic system. [84] Internalizing the odd externality does nothing to address the structural systemic problem and fails to recognize the all-pervasive nature of these supposed 'externalities'.

Ecological-economic modeling

Mathematical modeling is a powerful tool that is used in ecological economic analysis.
Various approaches and techniques include: [85] [86] evolutionary, input-output, and neo-Austrian modeling; entropy and thermodynamic models; [87] multi-criteria and agent-based modeling; the environmental Kuznets curve; and stock-flow consistent model frameworks. System dynamics and GIS are techniques applied, among others, to spatial dynamic landscape simulation modeling. [88] [89] The matrix accounting methods of Christian Felber provide a more sophisticated method for identifying "the common good". [90]

Monetary theory and policy

Ecological economics draws upon its work on resource allocation and strong sustainability to address monetary policy. Drawing upon a transdisciplinary literature, ecological economics roots its policy work in monetary theory and its goals of sustainable scale, just distribution, and efficient allocation. [91] Ecological economics' work on monetary theory and policy can be traced to Frederick Soddy's work on money. The field considers questions such as the growth imperative of interest-bearing debt, the nature of money, and alternative policy proposals such as alternative currencies and public banking.

Criticism

Assigning monetary value to natural resources such as biodiversity, and the emergent ecosystem services, is often viewed as a key process in influencing economic practices, policy, and decision-making. [92] [93] While this idea is becoming more and more accepted among ecologists and conservationists, some argue that it is inherently false. McCauley argues that ecological economics and the resulting ecosystem-service-based conservation can be harmful. [94] He describes four main problems with this approach:

Firstly, it seems to be assumed that all ecosystem services are financially beneficial. This is undermined by a basic characteristic of ecosystems: they do not act specifically in favour of any single species. While certain services might be very useful to us, such as coastal protection from hurricanes by mangroves, others might cause financial or personal harm, such as wolves hunting cattle. [95] The complexity of ecosystems makes it challenging to weigh up the value of a given species. Wolves play a critical role in regulating prey populations; the absence of such an apex predator in the Scottish Highlands has caused the overpopulation of deer, preventing afforestation, which increases the risk of flooding and damage to property.

Secondly, allocating monetary value to nature would make its conservation reliant on markets that fluctuate. This can lead to devaluation of services that were previously considered financially beneficial. Such is the case of the bees in a forest near former coffee plantations in Finca Santa Fe, Costa Rica. The pollination services were valued at over US$60,000 a year, but soon after the study, coffee prices dropped and the fields were replanted with pineapple. [96] Pineapple does not require bees for pollination, so the value of their service dropped to zero.
Thirdly, conservation programmes pursued for the sake of financial benefit underestimate human ingenuity to invent and replace ecosystem services by artificial means. McCauley argues that such proposals are likely to have a short lifespan, as the history of technology is a history of humanity developing artificial alternatives to nature's services, and with the passage of time the cost of such alternatives tends to decrease. This would also lead to the devaluation of ecosystem services.
Lastly, it should not be assumed that conserving ecosystems is always financially beneficial as opposed to alteration. In the case of the introduction of the Nile perch to Lake Victoria, the ecological consequence was decimation of native fauna. However, this same event is praised by the local communities as they gain significant financial benefits from trading the fish.
McCauley argues that, for these reasons, trying to convince decision-makers to conserve nature for monetary reasons is not the path to be followed, and instead appealing to morality is the ultimate way to campaign for the protection of nature.
Related Research Articles
Sustainable development is the organizing principle for meeting human development goals while simultaneously sustaining the ability of natural systems to provide the natural resources and ecosystem services on which the economy and society depends. The desired result is a state of society where living conditions and resources are used to continue to meet human needs without undermining the integrity and stability of the natural system. Sustainable development can be defined as development that meets the needs of the present without compromising the ability of future generations to meet their own needs.
Natural capital is the world's stock of natural resources, which includes geology, soils, air, water and all living organisms. Some natural capital assets provide people with free goods and services, often called ecosystem services. Two of these underpin our economy and society, and thus make human life possible.
Environmental economics is a sub-field of economics concerned with environmental issues. It has become a widely studied subject due to growing environmental concerns in the twenty-first century. Environmental Economics "...undertakes theoretical or empirical studies of the economic effects of national or local environmental policies around the world .... Particular issues include the costs and benefits of alternative environmental policies to deal with air pollution, water quality, toxic substances, solid waste, and global warming."
Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The philosophy and study of human ecology has a diffuse history with advancements in ecology, geography, sociology, psychology, anthropology, zoology, epidemiology, public health, and home economics, among others.
Genuine progress indicator (GPI) is a metric that has been suggested to replace, or supplement, gross domestic product (GDP). The GPI is designed to take fuller account of the well-being of a nation, only a part of which pertains to the size of the nation's economy, by incorporating environmental and social factors which are not measured by GDP. For instance, some models of GPI decrease in value when the poverty rate increases. The GPI separates the concept of societal progress from economic growth.
The green economy is defined as an economy that aims at reducing environmental risks and ecological scarcities, and that aims for sustainable development without degrading the environment. It is closely related to ecological economics, but has a more politically applied focus. The 2011 UNEP Green Economy Report argues "that to be green, an economy must not only be efficient, but also fair. Fairness implies recognizing global and country level equity dimensions, particularly in assuring a Just Transition to an economy that is low-carbon, resource efficient, and socially inclusive."
Ecosystem valuation is an economic process which assigns a value to an ecosystem and/or its ecosystem services. By quantifying, for example, the human welfare benefits of a forest to reduce flooding and erosion while sequestering carbon, providing habitat for endangered species, and absorbing harmful chemicals, such monetization ideally provides a tool for policy-makers and conservationists to evaluate management impacts and compare a cost-benefit analysis of potential policies. However, such valuations are estimates, and involve the inherent quantitative uncertainty and philosophical debate of evaluating a range non-market costs and benefits.
Herman Edward Daly is an American ecological and Georgist economist and emeritus professor at the School of Public Policy of University of Maryland, College Park in the United States. In 1996, he was awarded the Right Livelihood Award for "defining a path of ecological economics that integrates the key elements of ethics, quality of life, environment and community."
Environmental resource management is the management of the interaction and impact of human societies on the environment. It is not, as the phrase might suggest, the management of the environment itself. Environmental resources management aims to ensure that ecosystem services are protected and maintained for future human generations, and also maintain ecosystem integrity through considering ethical, economic, and scientific (ecological) variables. Environmental resource management tries to identify factors affected by conflicts that rise between meeting needs and protecting resources. It is thus linked to environmental protection, sustainability and integrated landscape management.
A steady-state economy is an economy made up of a constant stock of physical wealth (capital) and a constant population size. In effect, such an economy does not grow in the course of time. The term usually refers to the national economy of a particular country, but it is also applicable to the economic system of a city, a region, or the entire world. Early in the history of economic thought, classical economist Adam Smith of the 18th century developed the concept of a stationary state of an economy: Smith believed that any national economy in the world would sooner or later settle in a final state of stationarity.
Robert Costanza is an American/Australian ecological economist and Professor of Public Policy at the Crawford School of Public Policy at The Australian National University. He is a Fellow of the Academy of the Social Sciences in Australia and a Full Member of the Club of Rome.
Payments for ecosystem services (PES), also known as payments for environmental services, are incentives offered to farmers or landowners in exchange for managing their land to provide some sort of ecological service. They have been defined as "a transparent system for the additional provision of environmental services through conditional payments to voluntary providers". These programmes promote the conservation of natural resources in the marketplace.
Degrowth is a term used for both a political, economic, and social movement as well as a set of theories that critiques the paradigm of economic growth. It is based on ideas from a diverse range of lines of thought such as political ecology, ecological economics, feminist political ecology, and environmental justice. Degrowth emphasizes the need to reduce global consumption and production and advocates a socially just and ecologically sustainable society with well-being the indicator of prosperity in place of GDP. Degrowth highlights the importance of autonomy, care work, self-organization, commons, community, localism, work sharing, happiness and conviviality.
Sustainability is the ability to exist constantly. In the 21st century, it refers generally to the capacity for the biosphere and human civilization to coexist. It is also defined as the process of people maintaining change in a homeostasis balanced environment, in which the exploitation of resources, the direction of investments, the orientation of technological development and institutional change are all in harmony and enhance both current and future potential to meet human needs and aspirations. For many in the field, sustainability is defined through the following interconnected domains or pillars: environment, economic and social, which according to Fritjof Capra is based on the principles of Systems Thinking. Sub-domains of sustainable development have been considered also: cultural, technological and political. According to Our Common Future, Sustainable development is defined as development that "meets the needs of the present without compromising the ability of future generations to meet their own needs." Sustainable development may be the organizing principle of sustainability, yet others may view the two terms as paradoxical.
The history of sustainability traces human-dominated ecological systems from the earliest civilizations to the present. This history is characterized by the increased regional success of a particular society, followed by crises that were either resolved, producing sustainability, or not, leading to decline.
The Economics of Ecosystems and Biodiversity (TEEB) was a study led by Pavan Sukhdev from 2007 to 2011. It is an international initiative to draw attention to the global economic benefits of biodiversity. Its objective is to highlight the growing cost of biodiversity loss and ecosystem degradation and to draw together expertise from the fields of science, economics and policy to enable practical actions. TEEB aims to assess, communicate and mainstream the urgency of actions through its five deliverables—D0: science and economic foundations, policy costs and costs of inaction, D1: policy opportunities for national and international policy-makers, D2: decision support for local administrators, D3: business risks, opportunities and metrics and D4: citizen and consumer ownership.
Dr Stanislav Edward Shmelev is an ecological economist affiliated with the International Society for Ecological Economics (ISEE), currently holding a position of a Director of an NGO Environment Europe, Oxford, UK, Executive Director of Environment Europe Foundation, Amsterdam, The Netherlands and lecturing at the University of Edinburgh. He is the author of several books on sustainability and ecological economics.
Joan Martinez Alier is a Spanish economist, Emeritus Professor of Economics and Economic History and researcher at ICTA at the Autonomous University of Barcelona.
Although related subjects, sustainable development and sustainability are different concepts. Weak sustainability is an idea within environmental economics which states that 'human capital' can substitute 'natural capital'. It is based upon the work of Nobel Laureate Robert Solow, and John Hartwick. Contrary to weak sustainability, strong sustainability assumes that "human capital" and "natural capital" are complementary, but not interchangeable.
Giorgos Kallis is an ecological economist from Greece. He is an ICREA Research Professor at ICTA - Universitat Autònoma de Barcelona, where he teaches political ecology. He is one of the principal advocates of the theory of degrowth.
References
1. Anastasios Xepapadeas (2008). "Ecological economics". The New Palgrave Dictionary of Economics 2nd Edition. Palgrave MacMillan.
2. Jeroen C.J.M. van den Bergh (2001). "Ecological Economics: Themes, Approaches, and Differences with Environmental Economics," Regional Environmental Change, 2(1), pp. 13-23 Archived 2008-10-31 at the Wayback Machine (press +).
3. Illge L, Schwarze R. (2006). A Matter of Opinion: How Ecological and Neoclassical Environmental Economists Think about Sustainability and Economics Archived 2006-11-30 at the Wayback Machine . German Institute for Economic Research.
4. Paehlke R. (1995). Conservation and Environmentalism: An Encyclopedia, p. 315. Taylor & Francis.
5. Scott Cato, M. (2009). Green Economics. Earthscan, London. ISBN 978-1-84407-571-3.
6. Malte Faber. (2008). How to be an ecological economist. Ecological Economics66(1):1-7. Preprint.
7. Victor, Peter (2008). "Book Review: Frontiers in Ecological Economic Theory and Application". Ecological Economics. 66 (2–3): 2–3. doi:10.1016/j.ecolecon.2007.12.032.
8. Mattson L. (1975). Book Review: Positional Analysis for Decision-Making and Planning by Peter Soderbaum. The Swedish Journal of Economics.
9. Soderbaum, P. 2008. Understanding Sustainability Economics. Earthscan, London. ISBN 978-1-84407-627-7. pp.109-110, 113-117.
10. Aslaksen, Iulie; Bragstad, Torunn; Ås, Berit (2014). "Feminist Economics as Vision for a Sustainable Future". In Bjørnholt, Margunn; McKay, Ailsa (eds.). Counting on Marilyn Waring: New Advances in Feminist Economics. Demeter Press/Brunswick Books. pp. 21–36. ISBN 9781927335277.
11. Bellamy Foster, John; Burkett, Paul (March 2004). "Ecological Economics and Classical Marxism: The "Podolinsky Business" Reconsidered" (PDF). Organization & Environment. 17 (1): 32–60. doi:10.1177/1086026603262091. Archived from the original (PDF) on 31 August 2018. Retrieved 31 August 2018.
12. Cartwright Nancy, J. Cat, L. Fleck, and T. Uebel, 1996. Otto Neurath: philosophy between science and politics. Cambridge University Press
13. Zencey, Eric. (2009, April 12). Op-ed. New York Times, p. WK9. Accessed: December 23, 2012.
14. Kapp, K. W. (1950) The Social Costs of Private Enterprise. New York: Shocken.
15. Polanyi, K. (1944) The Great Transformation. New York/Toronto: Rinehart & Company Inc.
16. Georgescu-Roegen, Nicholas (1971). The Entropy Law and the Economic Process (Full book accessible at Scribd). Cambridge, Massachusetts: Harvard University Press. ISBN 978-0674257801.
17. Schumacher, E.F. 1973. Small Is Beautiful: A Study of Economics as if People Mattered. London: Blond and Briggs.
18. Daly, H. 1991. Steady-State Economics (2nd ed.). Washington, D.C.: Island Press.
19. Røpke, I (2004). "The early history of modern ecological economics". Ecological Economics. 50 (3–4): 293–314. doi:10.1016/j.ecolecon.2004.02.012.
20. Costanza R. (2003). Early History of Ecological Economics and ISEE. Internet Encyclopaedia of Ecological Economics.
21. Martinez-Alier J. (1987)Ecological economics: energy, environment and society
22. Røpke, I. (2004). "The early history of modern ecological economics". Ecological Economics. 50 (3–4): 293–314; Røpke, I. (2005). "Trends in the development of ecological economics from the late 1980s to the early 2000s". Ecological Economics. 55 (2): 262–290. doi:10.1016/j.ecolecon.2004.10.010.
23. Spash, C. L. (1999). "The development of environmental thinking in economics" (PDF). Environmental Values. 8 (4): 413–435. doi:10.3197/096327199129341897. Archived from the original (PDF) on 2014-02-21. Retrieved 2012-12-23.
24. Costanza, R.; Stern, D. I.; He, L.; Ma, C. (2004). "Influential publications in ecological economics: a citation analysis". Ecological Economics. 50 (3–4): 261–292. doi:10.1016/j.ecolecon.2004.06.001.
25. Spash, C. L. (2013). "Influencing the perception of what and who is important in ecological economics". Ecological Economics. 89: 204–209. doi:10.1016/j.ecolecon.2013.01.028.
26. Spash, C. L. (2013). "The Shallow or the Deep Ecological Economics Movement?" (PDF). Ecological Economics. 93: 351–362. doi:10.1016/j.ecolecon.2013.05.016.
27. Anderson, B.; M'Gonigle, M. (2012). "Does ecological economics have a future?: contradiction and reinvention in the age of climate change". Ecological Economics. 84: 37–48. doi:10.1016/j.ecolecon.2012.06.009.
28. "Spash, C.L. (2011) Social ecological economics: Understanding the past to see the future. American Journal of Economics and Sociology 70, 340-375" (PDF). Archived from the original (PDF) on 2014-01-07. Retrieved 2014-01-07.
29. Jacqui Lagrue (2012-07-30). "Spash, C.L., Ryan, A. (2012) Economic schools of thought on the environment: Investigating unity and division". Cambridge Journal of Economics. 36 (5): 1091–1121. doi:10.1093/cje/bes023 . Retrieved 2014-01-07.
30. Costanza, R (1989). "What is ecological economics?" (PDF). Ecological Economics. 1: 1–7. doi:10.1016/0921-8009(89)90020-7.
31. Martinez-Alier, J., 1994. Ecological economics and ecosocialism, in: O'Connor, M. (Ed.), Is Capitalism Sustainable? Guilford Press, New York, pp. 23-36
32. Spash, C.L., Clayton, A.M.H., 1997. The maintenance of natural capital: Motivations and methods, in: Light, A., Smith, J.M. (Eds.), Space, Place and Environmental Ethics. Rowman & Littlefield Publishers, Inc., Lanham, pp. 143-173
33. Toman, M (1998). "Why not to calculate the value of the world's ecosystem services and natural capital". Ecological Economics. 25: 57–60. doi:10.1016/s0921-8009(98)00017-2.
34. O'Neill, John (1993). Ecology, policy, and politics : human well-being and the natural world. London New York: Routledge. ISBN 978-0-415-07300-4. OCLC 52479981.CS1 maint: ref=harv (link)
35. O'Neill, J.F. (1997). "Managing without prices: On the monetary valuation of biodiversity". Ambio. 26: 546–550.
36. Vatn, A (2000). "The environment as commodity". Environmental Values. 9 (4): 493–509. doi:10.3197/096327100129342173.
37. Harris J. (2006). Environmental and Natural Resource Economics: A Contemporary Approach. Houghton Mifflin Company.
38. Costanza R; et al. (1998). "The value of the world's ecosystem services and natural capital1". Ecological Economics. 25 (1): 3–15. doi:10.1016/S0921-8009(98)00020-2.
39. Knapp G, Roheim CA and Anderson JL (2007) The Great Salmon Run: Competition Between Wild And Farmed Salmon World Wildlife Fund. ISBN 0-89164-175-0
40. Washington Post. Salmon Farming May Doom Wild Populations, Study Says.
41. Soderbaum P. (2004). Politics and Ideology in Ecological Economics. Internet Encyclopaedia of Ecological Economics.
42. Bina, O. (2011). "Promise and shortcomings of a green turn in recent policy responses to the 'double crisis'" (PDF). Ecological Economics. 70: 2308–2316. doi:10.1002/geo2.36.
43. Janicke, M. (2012). "'Green growth': from a growing eco‐industry to economic sustainability". Energy Policy. 28: 13–21. doi:10.1016/j.enpol.2012.04.045.
44. UNEP, 2012. GREEN ECONOMY IN ACTION: Articles and Excerpts that Illustrate Green Economy and Sustainable Development Efforts, p. 6. Retrieved 8 June 2018 from http://www.un.org/waterforlifedecade/pdf/green_economy_in_action_eng.pdf
45. Norgaard, R. B. (1994) Development Betrayed: The End of Progress and a Coevolutionary Revisioning of the Future. London: Routledge
46. Daily, G.C. 1997. Nature's Services: Societal Dependence on Natural Ecosystems. Washington, D.C.: Island Press.
47. Millennium Ecosystem Assessment. 2005. Ecosystems and Human Well-Being: Biodiversity Synthesis. Washington, D.C.: World Resources Institute.
48. McCauley, D. J. (2006). "Selling out on nature". Nature. 443 (7): 27–28. Bibcode:2006Natur.443...27M. doi:10.1038/443027a. PMID 16957711.
49. Daly, H. and Farley, J. 2004. Ecological Economics: Principles and Applications. Washington: Island Press.
50. Ayres, R.U. (2007). "On the practical limits of substitution". Ecological Economics. 61: 115–128. doi:10.1016/j.ecolecon.2006.02.011.
51. Shmelev, S.E. 2012. Ecological Economics. Sustainability in Practice, Springer
52. Müller, A.; Kranzl, L.; Tuominen, P.; Boelman, E.; Molinari, M.; Entrop, A.G. (2011). "Estimating exergy prices for energy carriers in heating systems: Country analyses of exergy substitution with capital expenditures". Energy and Buildings. 43 (12): 3609–3617. doi:10.1016/j.enbuild.2011.09.034.
53. Hall, Charles A.S.; Cleveland, Cutler J.; Kaufmann, Robert (1992). Energy and Resource Quality: The Ecology of the Economic Process. Niwot, Colorado: University Press of Colorado.
54. Corning, Peter A.; Kline, Stephen J. (2000). Thermodynamics, information and life revisited, Part II: Thermoeconomics and control information. Systems Research and Behavioral Science, Volume 15, Issue 6, Pages 453–482.
55. Corning, P. (2002). “Thermoeconomics – Beyond the Second Law Archived 2008-09-22 at the Wayback Machine ” – source: www.complexsystems.org
56. Wall, Göran. "Exergy - a useful concept". Exergy.se. Retrieved 2012-12-23.
57. "Environmental Decision making, Science and Technology". Telstar.ote.cmu.edu. Archived from the original on 2010-01-05. Retrieved 2012-12-23.
58. Stabile, Donald R. "Veblen and the Political Economy of the Engineer: the radical thinker and engineering leaders came to technocratic ideas at the same time," American Journal of Economics and Sociology (45:1) 1986, 43-44.
59. Farley, Joshua. "Ecosystem services: The economics debate." Ecosystem services 1.1 (2012): 40-49. https://doi.org/10.1016/j.ecoser.2012.07.002
60. Kallis, Giorgos; Gómez-Baggethun, Erik; Zografos, Christos (2013). "To value or not to value? That is not the question". Ecological Economics. 94: 97–105. doi:10.1016/j.ecolecon.2013.07.002.
61. Costanza, R.; d'Arge, R.; de Groot, R.; Farber, S.; Grasso, M.; Hannon, B.; Naeem, S.; Limburg, K.; Paruelo, J.; O'Neill, R.V.; Raskin, R.; Sutton, P.; and van den Belt, M. (1997). "The value of the world's ecosystem services and natural capital" (PDF). Nature. 387 (6630): 253–260. Bibcode:1997Natur.387..253C. doi:10.1038/387253a0. Archived from the original (PDF) on 2012-07-30.
62. Norgaard, R.B.; Bode, C. (1998). "Next, the value of God, and other reactions". Ecological Economics. 25: 37–39. doi:10.1016/s0921-8009(98)00012-3.
63. Brouwer, Roy (January 2000). "Environmental value transfer: state of the art and future prospects". Ecological Economics. 32 (1): 137–152. doi:10.1016/S0921-8009(99)00070-1.
64. Gómez-Baggethun, Erik; de Groot, Rudolf; Lomas, Pedro; Montes, Carlos (1 April 2010). "The history of ecosystem services in economic theory and practice: From early notions to markets and payment schemes". Ecological Economics. 69 (6): 1209–1218. doi:10.1016/j.ecolecon.2009.11.007.
65. Farber, Stephen; Constanza, Robert; Wilson, Matthew (June 2002). "Economic and ecological concepts for valuing ecosystem services". Ecological Economics. 41 (3): 375–392. doi:10.1016/S0921-8009(02)00088-5.
66. SocialEdge.org. Archived 2009-02-15 at the Wayback Machine Accessed: December 23, 2012.
67. Archived June 21, 2008, at the Wayback Machine
68. Multinational Monitor, 9/2007. Accessed: December 23, 2012.
69. "Ecuador threat to drill jungle oil". Archived from the original on December 18, 2008.
70. "International News | World News - ABC News". Abcnews.go.com. 4 June 2012. Retrieved 2012-12-23.
71. Spash, Clive L (2010). "The brave new world of carbon trading". New Political Economy. 15 (2): 169–195. doi:10.1080/13563460903556049.
72. "Pesticide Action Network | Reclaiming the future of food and farming". Archived from the original on 2008-06-21. Retrieved 2008-06-21.
73. Emmott, Bill (April 17, 2008). "GM crops can save us from food shortages". The Daily Telegraph. London.
74. Costanza, Robert; Segura, Olman; Olsen, Juan Martinez-Alier (1996). . Washington, D.C.: Island Press. ISBN 978-1559635035.
75. Pearce, David "Blueprint for a Green Economy"
76. "Spash, C. L. (2007) The economics of climate change impacts à la Stern: Novel and nuanced or rhetorically restricted? Ecological Economics 63(4): 706-713" (PDF). Archived from the original (PDF) on 2014-02-02. Retrieved 2012-12-23.
77. Hawken, Paul (1994) "The Ecology of Commerce" (Collins)
78. Hawken, Paul; Amory and Hunter Lovins (2000) "Natural Capitalism: Creating the Next Industrial Revolution" (Back Bay Books)
79. Martinez-Alier, Joan (2002) The Environmentalism of the Poor: A Study of Ecological Conflicts and Valuation. Cheltenham, Edward Elgar
80. Kapp, Karl William (1963) The Social Costs of Business Enterprise. Bombay/London, Asia Publishing House.
81. Kapp, Karl William (1971) Social costs, neo-classical economics and environmental planning. The Social Costs of Business Enterprise, 3rd edition. K. W. Kapp. Nottingham, Spokesman: 305-318
82. Eisenstein , Charles (2011), "Sacred Economics: Money, Gift and Society in an Age in Transition" (Evolver Editions)
83. Spash, Clive L. (16 July 2010). "The brave new world of carbon trading" (PDF). New Political Economy. 15 (2): 169–195. doi:10.1080/13563460903556049. Copy also available at "Archived copy" (PDF). Archived from the original (PDF) on 2013-05-10. Retrieved 2013-09-13.CS1 maint: archived copy as title (link)
84. Proops, J., and Safonov, P. (eds.) (2004), Modelling in Ecological Economics Archived 2014-12-27 at the Wayback Machine , Edward Elgar
85. Faucheux, S., Pearce, D., and Proops, J. (eds.) (1995), Models of Sustainable Development, Edward Elgar
86. Chen, Jing (2015). The Unity of Science and Economics: A New Foundation of Economic Theory. https://www.springer.com/us/book/9781493934645: Springer.CS1 maint: location (link)
87. Costanza, R., and Voinov, A. (eds.) (2004), Landscape Simulation Modeling. A Spatially Explicit, Dynamic Approach, Springer-Verlag New-York, Inc.
88. Voinov, Alexey (2008). Systems science and modeling for ecological economics (1st ed.). Amsterdam: Elsevier Academic Press. ISBN 978-0080886176.
89. Felber, Christian (2012), "La economia del bien commun" (Duestro)
90. Ament, Joe (February 12, 2019). "Toward an Ecological Monetary Theory". Sustainability. 11 (3): 923.
91. Mace GM. Whose conservation? Science (80- ). 2014 Sep 25;345(6204):1558–60.
92. Dasgupta P. Nature’s role in sustaining economic development. Philos Trans R Soc Lond B Biol Sci. 2010 Jan 12;365(1537):5–11.
93. McCauley, Douglas J. (2006). "Selling out on nature" (PDF). Nature. Springer Nature. 443 (7107): 27–28. Bibcode:2006Natur.443...27M. doi:10.1038/443027a. ISSN 0028-0836. PMID 16957711.CS1 maint: ref=harv (link)
94. Mech LD. The Challenge and Opportunity of Recovering Wolf Populations. Conserv Biol. 1995 Apr;9(2):270–8
95. Ricketts TH, Daily GC, Ehrlich PR, Michener CD. Economic value of tropical forest to coffee production. Proc Natl Acad Sci U S A. 2004 Aug 24;101(34):12579–82
Schools and institutes:
Environmental data:
Miscellaneous:
|
# How to plausibly build a worldwide oligopoly regarding pretty much all areas?
I'm designing a story where I'm taking current situations and trends such as market and cultural globalization, massive state debt and constant corporate mergers to a semi-dystopian extreme in order to build a fictional world where:
• Newly created private companies almost automatically become part of one of four conglomerates. These four complement one another in different ways, and there is a fragile equilibrium where this status quo benefits all of them financially. Each of these multinational corporations have various enterprises related to basically every target audience (young, old, rebellious, conservative, etc.) in basically all areas (consumer goods, industry, media, etc.). If the newly created company has been created with the intention of being independent and tries to avoid being merged into one of the bigger fishes, for instance, all competition can band together (or maybe just one of them will need to act) to lower prices even into a loss, which they can allow temporarily as they're part of something bigger, so that the new actor's market share and its profits are negligible or at a loss, and the debt will require them to accept being bought off (this also will bring all of the infrastructure into the hands of the buying conglomerate).
• Government is libertarian, so there are not anti-trust regulations to avoid these dirty tricks. In fact, many of the government members have work experience and close contacts in the higher ups in one or various conglomerates (and they will presumably go back to one of them after their "public service" office terms are over), so the "state" has become a market tool progressively, without need of anything as overt, suspicious or obvious as a coup. Political options such as the traditional left or the far-right have dissipated, given that there's not enough media traction for them and, when they do appear on the news and talk shows, it is to be demonized and ridiculed.
• Society elects these representatives because rampant consumerism has ended up replacing ideology and nationalism as a point of pride; as in, people don't feel pride in a country, they feel closer to the cultural products and notable people of the four corporations. The lynchpin of conservative traditional family values has, say, become a certain TV show focused on Aesop-style fables and pat morality, while a center of post-modern cynicism is a satirical TV show... maybe aired on two different channels but part of the same conglomerate. Whatever citizen outrage exists is fed back into the capitalist machine. Of course, there exist some small grassroots movements, but they don't get much traction outside of performances, and even some of their members (the ones whose antics get the most attention) end up being bought off by the media in order to participate in it (see the ending of Black Mirror's 15 Million Merits for reference).
So, apart from asking you to find holes in my worldbuilding (please do), here is my main question:
Is there any relatively plausible way to "end up" like this? I want to explore the progression from the present to that situation somehow in the story... What are some possible, crucial events that could have shaped the society I described?
• Hi and welcome to world building. I can tell you've put a lot of thought into this question. In order to ensure answers don't go on too long we like posts to stick to one question per "question". Perhaps consider what level of detail you want an answer to go into. It isn't unusual for people to create a series of questions relating to one topic. If you want a lot of detail you could have two separate "question"s one which asks about the events that shape your society and another that asks about the relationship with banks. – Lio Elbammalf Feb 26 '17 at 14:47
• I took a more liberal approach than @LioElbammalf did -- I made the edit myself because I really liked your question and didn't want to see it closed out. I'll post your second question as a comment... feel free to create a new question, link to this one, where you asked the second main question. – SRM Feb 26 '17 at 14:51
• The original "second question": In my mind, banks are the centerpiece of each of these corporate conglomerates, but as far as I know, in current capitalism they aren't intertwined, at least in a direct way. What could make banking groups interesed in directly fusing with other business and industries areas instead of maintaining its current relationship? – SRM Feb 26 '17 at 14:52
• Newly created private companies almost automatically become part of one of four conglomerates. Why go to the trouble of making a new company then? If you're thinking the entrepreneurs can simply sell out, why would the oligarchy buy them out when they can simply squeeze the new enterprise out of the market and take over the business – nzaman Feb 26 '17 at 16:13
• @nzaman The media and education system actually would promote entrepreneurship, so there'd be a higher inclination to do that than in our present world. Of course, people know that they will end up being bought out, but even for great minds the system can make it attractive: "If you give us the product and pretty much all its profits and intellectual property you get a nice flat fee and maybe even your name listed somewhere as creator/designer". Anyway, the oligopoly doesn't want to be seen as the bad guys: nice climate for "innovation" -> they reap pretty much all other people sow while minimizing expense. – DrEddieFleming Feb 27 '17 at 21:45
How does this differ from the current world? The critical differences seem to be
1. No antitrust enforcement.
2. Weak geographical loyalties like nationalism and belief-based loyalties like religion.
The latter is already becoming true. People are moving between countries at higher and higher rates. Religious affiliations of none or unaffiliated are increasing. So you can just extend current trends.
The former is different per country. Some of the larger economies, like China and Japan have weak antitrust already. The US varies between periods of strong and weak enforcement. Also note that companies can evade restrictions through organic growth. Google, Amazon, and Facebook are quite dominant in their areas. Network effects are already reinforcing the advantages of large companies.
All you really need is to come up with a reason why China wins. For example, China could become the new US, using a mix of corporate, development, and military aid to bind other countries closer to it. Chinese corporations expand from China to Africa, South America, and Southern Asia. I don't know that that is what will happen, but it is at least a plausible story line. It's what some people in China are trying to make happen. Maybe they're right that they can.
• I disagree with point 1. Without anti-trust, I don't see how the same pressures that would allow an oligopoly wouldn't go the further step into monopoly. While it is possible, I think that the oligopoly is an unstable equilibrium that would break down to a monopoly unless something propped it up. – ShadoCat Feb 28 '17 at 21:05
• Remember that it's multiple monopolies. And the smaller three would have reason to squash the biggest one if it tried to outgrow them. – Brythan Mar 1 '17 at 2:15
It would take a long string of lobbying to find/build loopholes into anti-trust laws. The only reason for an oligopoly to develop is if they are prevented from forming a monopoly. So, I envision a long string of small events where they use the oligopoly to "prove" that they aren't a monopoly. Once the system is set up and the 4 power bases are stabilized, they can begin the process of getting the public to forget about monopolies (or keep it, dealer's choice).
They would always need to have something to point to that is worse than the current system so they can harp about "how good we are compared to...".
Note I would study Japan for the way conglomerates can work. Honda and Toyota make a whole lot more than cars. We just don't see it in the US. They also have non-confrontational unions.
So, a system that you are describing can work. It will just take a long road to get it to work in the US. I would say 2-3 generations of concerted effort.
Here are the ways that a small business would be "absorbed" by one of the four:
• Small business cannot get its product manufactured or marketed for a decent price, so they license it to one of the four for an upfront payment and a percentage. That is done today.
• Small business needs a loan from one of the four, and gets a deal if it uses that conglomerate's parts to build its widget, a deal to use that conglomerate's marketing firm to advertise, and that conglomerate's retail outlets to sell it. Now, how independent is that small business? If the deals are good, the conglomerate gets a bunch of products that they do not have to innovate and do not need to manage, but leaves only enough profit for the entrepreneur to make it worth the effort. If the deal isn't as good, then it just pushes the licensing. That happens a lot today, just not through a single conglomerate.
• The public is sold on the idea that they can only trust the safety or quality of a product if it is made by one of the four.
• The four make sure the business fails and then just make the product themselves. This is the least efficient, I think.
I really don't see the four buying small businesses much unless there is good management in the business.
|
# Homework Help: chemistry
Posted by Jennifer on Sunday, September 4, 2011 at 11:53pm.
Suppose you have two 100-mL graduated cylinders. In each cylinder there is 58.5 mL of water. You also have two cubes: one is lead, and the other is aluminum. Each cube measures 2.0 cm on each side.
After you carefully lower each cube into the water of its own cylinder, what will the new water level be in each of the cylinders? Assume that the cubes are totally submerged in the water.
• chemistry - Jennifer, Monday, September 5, 2011 at 12:02am
nevermind, I just figured it out
• chemistry - Nicole, Monday, September 5, 2011 at 12:22pm
How did you figure it out?
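A worked calculation consistent with the numbers in the question (this is a reconstruction, not Jennifer's own working, which was never posted): each cube has volume (2.0 cm)^3 = 8.0 cm^3 = 8.0 mL. A fully submerged object displaces its own volume of water regardless of its density, so both cylinders rise by the same amount:

new level = 58.5 mL + 8.0 mL = 66.5 mL

The lead and aluminum cubes give the same reading; their different densities affect the mass added, not the volume displaced.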
|
Publications
2020
Crowdsourced Detection of Emotionally Manipulative Language
CHI, 2020
Overview of the seventh Dialog System Technology Challenge: DSTC7
CSL, 2020
2019
No-Press Diplomacy: Modeling Multi-Agent Gameplay
NeurIPS, 2019
An Evaluation for Intent Classification and Out-of-Scope Prediction
EMNLP (short), 2019
A Large-Scale Corpus for Conversation Disentanglement
ACL, 2019
SLATE: A Super-Lightweight Annotation Tool for Experts
ACL (demo), 2019
Outlier Detection for Improved Data Quality and Diversity in Dialog Systems
NAACL, 2019
Look Who's Talking: Inferring Speaker Attributes from Personal Longitudinal Dialog
Best Student Paper - CICLing, 2019
Learning from Personal Longitudinal Dialog Data
IEEE Intelligent Systems, 2019
2018
Improving Text-to-SQL Evaluation Methodology
ACL, 2018
Data Collection for a Production Dialogue System: A Startup Perspective
NAACL (industry), 2018
Effective Crowdsourcing for a New Type of Summarization Task
NAACL (short), 2018
Factors Influencing the Surprising Instability of Word Embeddings
NAACL, 2018
World Knowledge for Abstract Meaning Representation Parsing
LREC, 2018
2017
Identifying Products in Online Cybercrime Marketplaces: A Dataset for Fine-grained Domain Adaptation
EMNLP, 2017
ACL (short), 2017
Tools for Automated Analysis of Cybercriminal Markets
WWW, 2017
Parsing with Traces: An O($n^4$) Algorithm and a Structural Representation
TACL, 2017
2016
Algorithms for Identifying Syntactic Errors and Parsing with Graph Structured Output
EECS Department, University of California, Berkeley, 2016
2015
An Empirical Analysis of Optimization for Max-Margin NLP
EMNLP (short), 2015
2013
Error-Driven Analysis of Challenges in Coreference Resolution
EMNLP, 2013
An Empirical Examination of Challenges in Chinese Parsing
ACL (short), 2013
High-velocity Clouds in the Galactic All Sky Survey. I. Catalog
The Astrophysical Journal Supplement Series, 2013
2012
Parser Showdown at the Wall Street Corral: An Empirical Investigation of Error Types in Parser Output
EMNLP, 2012
Robust Conversion of CCG Derivations to Phrase Structure Trees
ACL (short), 2012
2011
Mention Detection: Heuristics for the OntoNotes annotations
2010
Spatiotemporal Hierarchy of Relaxation Events, Dynamical Heterogeneities, and Structural Reorganization in a Supercooled Liquid
Physical Review Letters, 2010
Morphological Analysis Can Improve a CCG Parser for English
CoLing, 2010
ACL, 2010
2009
Faster parsing and supertagging model estimation
ALTA, 2009
The University of Sydney, 2009
2008
Classification of Verb Particle Constructions with the Google Web1T Corpus
ALTA, 2008
The densest packing of AB binary hard-sphere homogeneous compounds across all size ratios
The Journal of Physical Chemistry B, 2008
Non-Archival
2020
NOESIS II: Predicting Responses, Identifying Success, and Managing Complexity in Task-Oriented Dialogue
AAAI Workshop: Dialogue System Technology Challenges, 2020
2019
The Eighth Dialog System Technology Challenge
NeurIPS Workshop: Conversational AI: Today’s Practice and Tomorrow’s Potential, 2019
Training Data Voids: Novel Attacks Against NLP Content Moderation
CSCW Workshop: Volunteer Work: Mapping the Future of Moderation Research, 2019
DSTC7 Task 1: Noetic End-to-End Response Selection
ACL Workshop: NLP for Conversational AI, 2019
DSTC7 Task 1: Noetic End-to-End Response Selection
AAAI Workshop: Dialogue System Technology Challenges, 2019
2018
Dialog System Technology Challenge 7
NeurIPS Workshop: Conversational AI: Today’s Practice and Tomorrow’s Potential, 2018
2009
Large-Scale Syntactic Processing: Parsing the Web
Johns Hopkins University, 2009
Software
Colaboratoy Notebook for Coreference Resolution with SpanBERT
A notebook that (1) sets up the SpanBERT code and model, and (2) runs inference on text you provide.
SLATE: A Super-Lightweight Annotation Tool for Experts
A terminal-based text annotation tool in Python.
Neural POS tagging
Implementations of a POS tagger in DyNet, PyTorch, and Tensorflow, visualised to show the overall picture and make comparisons easy.
Text to SQL Baseline
A simple LSTM-based model that uses templates and slot-filing to map questions to SQL queries.
One-Endpoint Crossing Graph Parser
A range of tools related to one-endpoint crossing graphs - parsing, format conversion, and evaluation.
Coreference Error Analysis
A tool for classifying errors in coreference resolution.
CCG to PST
A tool for converting CCG derivations into PTB-style phrase structure trees.
Parse Error Analysis
A tool for classifying mistakes in the output of parsers.
Data
IRC Disentanglement
Annotation of IRC messages with reply-to structure, which disentangles simultaneous conversations. The largest such annotated resource.
Text to SQL datasets
A collection of datasets containing questions in English paired with SQL queries for a provided database. Our version homogenises the style of the SQL and corrects errors in previous versions of the data.
IE/NER from Cybercriminal Forums
Forum posts with annotations of products.
Crowdsourced Paraphrases
Paraphrases collected while conducting experiments on factors influencing crowd performance.
Spine and Arc version of the Penn Treebank
Code to convert the standard Penn Treebank into a version where each word is assigned a spine of non-terminals, and arcs to indicate attachments from one spine to another.
A model for the C&C supertagger that gives the same results with smaller beam sizes, enabling faster parsing.
Recent Posts
Keeping up with research is hard. I’ve previously made lists of papers I wanted to read, and then only gotten to a small fraction of them. Simply resolving to read more papers hasn’t worked for me. I’m trying out a new approach. The goals are (1) read less of more papers, and (2) read more papers that are critical to my work. Sometimes just the introduction or abstract is enough for me to get the ideas I need from the paper.
Approaching Conferences
Am I getting the most out of my time at conferences? This post was a way for me to think through that question and come up with strategies.
No-Press Diplomacy: Modeling Multi-Agent Gameplay (Paquette et al., 2019)
Games have been a focus of AI research for decades, from Samuel’s checkers program in the 1950s, to Deep Blue playing Chess in the 1990s, and AlphaGo playing Go in the 2010s. All of those are two-player…
A Large-Scale Corpus for Conversation Disentanglement (Kummerfeld et al., 2019)
This post is about my own paper to appear at ACL later this month. What is interesting about this paper will depend on your research interests, so that’s how I’ve broken down this blog post. A few key points first: Data and code are available on Github. The paper is also available. The general-purpose span labeling and linking annotation tool we used is also appearing at ACL. Check out DSTC 8 Track 2, which is based on this work.
Crowdsourcing Services
A range of services exist for collecting annotations from paid workers. This post gives an overview of a bunch of them.
Contact
• [email protected]
• 2260 Hayward Street, Ann Arbor, MI 48109, USA
|
Division of Complex Numbers
# Division of Complex Numbers
Recall from the Addition and Multiplication of Complex Numbers page that if $z = a + bi, w = c + di \in \mathbb{C}$ then the sum $z + w$ by addition is defined as:
(1)
\begin{align} \quad z + w = (a + c) + (b + d)i \end{align}
Furthermore, the product $z \cdot w$ (or simply $zw$) by multiplication is defined as:
(2)
\begin{align} \quad zw = (ac - bd) + (ad + bc)i \end{align}
We will now look at the operation of division of complex numbers. The process of dividing two complex numbers is a tad more technical. For the set of real numbers, if $y \in \mathbb{R}$ and $y \neq 0$ then $\displaystyle{y^{-1} = \frac{1}{y}}$ is a well-defined and unique real number satisfying $\displaystyle{yy^{-1} = 1}$ and is called the multiplicative inverse of $y$ (or reciprocal of $y$). Then if $x \in \mathbb{R}$, the quotient $\displaystyle{\frac{x}{y} = xy^{-1}}$ is well defined and the operation is called division.
We wish to establish an analogous operation for dividing complex numbers.
Let $w = c + di \in \mathbb{C}$ and assume that $w \neq 0$. Then this means that $c$ and $d$ are not both zero. Using a trick from elementary algebra and we see that:
(3)
\begin{align} \quad \frac{1}{w} = \frac{1}{c + di} = \frac{1}{c + di} \cdot \frac{c - di}{c - di} = \frac{c - di}{c^2 - cdi + cdi - d^2i^2} = \frac{c - di}{c^2 - d^2i^2} = \frac{c - di}{c^2 + d^2} = \frac{c}{c^2 + d^2} - \frac{d}{c^2 + d^2} i \end{align}
This is a well defined complex number since the denominator $c^2 + d^2 \neq 0$. Similarly, if $z = a + bi \in \mathbb{C}$ then:
(4)
\begin{align} \quad \frac{a + bi}{c + di} = \frac{a + bi}{c + di} \frac{c - di}{c - di} = \frac{(a + bi)(c - di)}{c^2 + d^2} = \frac{ac - adi + bci - bdi^2}{c^2 + d^2} = \frac{ac + bd}{c^2 + d^2} - \frac{ad - bc}{c^2 + d^2} i \end{align}
Definition: If $z = a + bi, w = c + di \in \mathbb{C}$ with $w \neq 0$, then the operation of Division denoted $\div$ between $z$ and $w$ yields the Quotient defined to be $\displaystyle{z \div w = \frac{z}{w} = \frac{ac + bd}{c^2 + d^2} - \frac{ad - bc}{c^2 + d^2} i}$.
For example:
(5)
\begin{align} \quad \frac{2 + 3i}{3 + 2i} = \frac{2 + 3i}{3 + 2i} \frac{3 - 2i}{3 - 2i} = \frac{(2 + 3i)(3 - 2i)}{9 + 4} = \frac{6 - 4i + 9i - 6i^2}{13} = \frac{12 + 5i}{13} = \frac{12}{13} + \frac{5}{13} i \end{align}
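As a quick cross-check (my addition, not part of the original page), the division formula can be implemented in a few lines of Python and compared against the language's built-in complex arithmetic:

```python
# Divide z = a + bi by w = c + di using the formula derived above.
def divide(a, b, c, d):
    denom = c**2 + d**2                      # c^2 + d^2, nonzero because w != 0
    return ((a*c + b*d) / denom, (b*c - a*d) / denom)

print(divide(2, 3, 3, 2))    # (0.9230..., 0.3846...) = (12/13, 5/13)
print((2 + 3j) / (3 + 2j))   # (0.9230...+0.3846...j), Python's built-in division for comparison
```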
|
# I need help with the following integral and reaching a conclusion to solve another one with it.
Consider the function $$f(x,y)=\frac{1}{(1+y)(1+x^2 y)}$$ for $$x,y>0$$. Prove that $$f$$ is integrable and deduce the value of $$\int_{0}^{+\infty}\frac{\log(x)}{x^2 - 1}\,dx$$
I don't completely understand how to do this, because my teacher has only taught us to calculate integrals in finite rectangles R=$$[a_1,b_1] \times [a_2,b_2]$$. $$\int_0^\infty \int_0^\infty f \ \ dx dy$$ is the only way I've thought about it.
## 1 Answer
Hints:
You're on the right track. Integrate w.r.t. $$y$$ first: as a function of $$y$$, decompose $$f(x,y)$$ into partial fractions: $$\frac{1}{(1+y)(1+x^2 y)}=\frac{A}{(1+y)}+\frac{B}{(1+x^2y)}.$$ You should obtain $$\;A=\dfrac 1{1-x^2}, \enspace B=\dfrac{x^2}{x^2-1}$$, so that $$\int_0^{+\infty}f(x,y)\,\mathrm dy=\frac1{1-x^2}\log\frac{1+y}{1+x^2y} \Biggl|_0^{+\infty}=\frac1{1-x^2}\log\frac1{x^2}=\frac{2\log x}{x^2-1}.$$ Thus, the sought integral is half the double integral $$\iint_{\mathbf R_+\times\mathbf R_+}f(x,y)\,\mathrm d x\,\mathrm d y.$$
Now , applying Fubini's theorem, we also have $$\iint_{\mathbf R_+\times\mathbf R_+}\mkern-27muf(x,y)\,\mathrm d x\,\mathrm dy= \int_0^{+\infty}\!\biggl(\int_0^{+\infty}f(x,y)\,\mathrm dx\biggr)\mathrm dy= \int_0^{+\infty}\!\frac1{1+y}\biggl(\int_0^{+\infty}\mkern-12mu\frac1{1+x^2y}\,\mathrm dx\biggr)\mathrm dy.$$
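Carrying the hint one step further (my addition, not in the original answer): the inner $x$-integral equals $\frac{\pi}{2\sqrt{y}}$, the remaining $y$-integral then gives $\frac{\pi^2}{2}$, so the sought integral is $\frac{\pi^2}{4}$. A short numerical check, assuming SciPy is available:

```python
# Numerical check that the integral of log(x)/(x^2 - 1) over (0, inf) equals pi^2/4.
import numpy as np
from scipy.integrate import quad

def f(x):
    # The singularity at x = 1 is removable; the limit of the integrand there is 1/2.
    return 0.5 if x == 1.0 else np.log(x) / (x**2 - 1)

left, _ = quad(f, 0, 1)
right, _ = quad(f, 1, np.inf)
print(left + right, np.pi**2 / 4)   # both ≈ 2.4674
```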
|
# Relaxed parameter conditions for chemotactic collapse in logistic-type parabolic-elliptic Keller-Segel systems
25 May 2020 · Tobias Black, Mario Fuest, Johannes Lankeit ·
We study the finite-time blow-up in two variants of the parabolic-elliptic Keller-Segel system with nonlinear diffusion and logistic source. In $n$-dimensional balls, we consider \begin{align*} \begin{cases} u_t = \nabla \cdot ((u+1)^{m-1}\nabla u - u\nabla v) + \lambda u - \mu u^{1+\kappa}, \\ 0 = \Delta v - \frac1{|\Omega|} \int_\Omega u + u \end{cases} \tag{JL} \end{align*} and \begin{align*} \begin{cases} u_t = \nabla \cdot ((u+1)^{m-1}\nabla u - u\nabla v) + \lambda u - \mu u^{1+\kappa}, \\ 0 = \Delta v - v + u, \end{cases}\tag{PE} \end{align*} where $\lambda$ and $\mu$ are given spatially radial nonnegative functions and $m, \kappa > 0$ are given parameters subject to further conditions. In a unified treatment, we establish a bridge between previously employed methods on blow-up detection and relatively new results on pointwise upper estimates of solutions in both of the systems above and then, making use of this newly found connection, provide extended parameter ranges for $m,\kappa$ leading to the existence of finite-time blow-up solutions in space dimensions three and above. In particular, for constant $\lambda, \mu > 0$, we find that there are initial data which lead to blow-up in (JL) if \begin{alignat*}{2} 0 \leq \kappa &< \min\left\{\frac{1}{2}, \frac{n - 2}{n} - (m-1)_+ \right\}&&\qquad\text{if } m\in\left[\frac{2}{n},\frac{2n-2}{n}\right)\\ \text{ or }\quad 0 \leq \kappa&<\min\left\{\frac{1}{2},\frac{n-1}n-\frac{m}2\right\} &&\qquad \text{if } m\in\left(0,\frac{2}{n}\right), \end{alignat*} and in (PE) if $m \in [1, \frac{2n-2}{n})$ and \begin{align*} 0 \leq \kappa < \min\left\{\frac{(m-1) n + 1}{2(n-1)}, \frac{n - 2 - (m-1) n}{n(n-1)} \right\}. \end{align*}
## Categories
Analysis of PDEs 35B44 (primary), 35K55, 92C17 (secondary)
|
## Found 22,959 Documents (Results 1–100)
### An extended GCRD algorithm for parametric univariate polynomial matrices and application to parametric Smith form. (English)Zbl 07589748
MSC: 13Pxx 68Wxx 15Axx
Full Text:
### Two new algorithms for solving multi-linear systems with $$M$$-tensor. (English)Zbl 07594721
MSC: 65F10 65H10
Full Text:
### Differentiability and compactness of semigroup generated by $$k/G:N$$ redundant system with finite repair time. (English)Zbl 07594612
MSC: 47D60 35F46
Full Text:
Full Text:
### On families of constrictions in model of overdamped Josephson junction and Painlevé 3 equation. (English)Zbl 07594451
MSC: 34M03 34A26 34E15
Full Text:
Full Text:
### Active fault tolerant control using zonotopic techniques for linear parameter varying systems: application to wind turbine system. (English)Zbl 07593024
MSC: 93B35 93C05 93B53
Full Text:
### On the preconditioned MINRES method for solving singular linear systems. (English)Zbl 07592315
MSC: 15A06 65F10
Full Text:
### On the (in)security of ROS. (English)Zbl 07591716
MSC: 94A60 94A62
Full Text:
Full Text:
### Generalized eigenvectors of linear operators and biorthogonal systems. (English)Zbl 07590440
MSC: 33C10 34B30 34L10
Full Text:
### Chaos and its control in a fractional order glucose-insulin regulatory system. (English)Zbl 07586725
MSC: 92-XX 93-XX
Full Text:
### On weak boundary representations and quasi hyperrigidity for operator systems. (English)Zbl 07586613
MSC: 46L07 47A20
Full Text:
### Spectral stability, spectral flow and circular relative equilibria for the Newtonian $$n$$-body problem. (English)Zbl 07585657
MSC: 70Fxx 37Jxx 53Dxx
Full Text:
### Affine transformations of hyperbolic number plane. (English)Zbl 07585482
MSC: 15A04 28A80 30D05
Full Text:
MSC: 62-08
Full Text:
Full Text:
### Method of moments in the problem of inversion of the Laplace transform and its regularization. (English. Russian original)Zbl 07584671
Vestn. St. Petersbg. Univ., Math. 55, No. 1, 34-38 (2022); translation from Vestn. St-Peterbg. Univ., Ser. I, Mat. Mekh. Astron. 9(67), No. 1, 46-52 (2022).
MSC: 65Rxx 65-XX 65Jxx
Full Text:
### Multiple solutions for a class of quasilinear Schrödinger-Poisson system in $$\mathbb{R}^3$$ with critical nonlinearity and zero mass. (English)Zbl 07584635
MSC: 35J10 35J60 35J65
Full Text:
Full Text:
Full Text:
### Explicit solutions of infinite linear systems associated with group inverse endomorphisms. (English)Zbl 07584100
MSC: 15A06 15A09 15A04
Full Text:
Full Text:
Full Text:
Full Text:
MSC: 65F10
Full Text:
Full Text:
### Telegraph systems on networks and port-Hamiltonians. I. Boundary conditions and well-posedness. (English)Zbl 07581150
MSC: 35R02 35L20 47D03
Full Text:
Full Text:
Full Text:
Full Text:
### Asymptotic theory of $$\ell_1$$-regularized PDE identification from a single noisy trajectory. (English)Zbl 07579695
MSC: 62J07 93B30 35G35
Full Text:
MSC: 49N10
Full Text:
### Scalable parallel linear solver for compact banded systems on heterogeneous architectures. (English)Zbl 07578885
MSC: 65Nxx 65Fxx 76Mxx
Full Text:
### Dual estimates in the model of expansion of electric power systems. (English. Russian original)Zbl 07578843
Autom. Remote Control 83, No. 5, 706-720 (2022); translation from Avtom. Telemekh. 2022, No. 5, 43-60 (2022).
MSC: 93A15 93B70 93C95
Full Text:
### Stabilization and tracking of the trajectory of a linear system with jump drift. (English. Russian original)Zbl 07578831
Autom. Remote Control 83, No. 4, 520-535 (2022); translation from Avtom. Telemekh. 2022, No. 4, 27-46 (2022).
MSC: 93E15 93C15 93C05
Full Text:
### M-regularity of $$\mathbb{Q}$$-twisted sheaves and its application to linear systems on abelian varieties. (English)Zbl 07578606
MSC: 14C20 14K99
Full Text:
Full Text:
### A discrete analogue of the Lyapunov function for hyperbolic systems. (English. Russian original)Zbl 07576919
J. Math. Sci., New York 264, No. 6, 661-671 (2022); translation from Sovrem. Mat., Fundam. Napravl. 64, No. 4, 591-602 (2018).
Full Text:
### The diamond of mingle logics: a four-fold infinite way to be safe from paradox. (English)Zbl 07576858
Bimbó, Katalin (ed.), Relevance logics and other tools for reasoning. Essays in honor of J. Michael Dunn. London: College Publications. Tributes 46, 365-393 (2022).
MSC: 03B47 03B50
Full Text:
### Quantitative analysis of incipient fault detectability for time-varying stochastic systems based on weighted moving average approach. (English)Zbl 07576362
MSC: 93Bxx 93Exx 93Cxx
Full Text:
### Adaptive sliding mode consensus control based on neural network for singular fractional order multi-agent systems. (English)Zbl 07576349
MSC: 93Cxx 93Bxx 93Dxx
Full Text:
### Dynamic analysis of geared transmission system for wind turbines with mixed aleatory and epistemic uncertainties. (English)Zbl 07575911
MSC: 70J35 70L05 65G30
Full Text:
Full Text:
Full Text:
### A broken reproducing kernel method for the multiple interface problems. (English)Zbl 07575618
MSC: 65M12 65J10 65C20
Full Text:
### A general linear quadratic stochastic control and information value. (English)Zbl 07574885
MSC: 93Exx 49Nxx 60Hxx
Full Text:
Full Text:
### $$L^p$$ regularity theory for even order elliptic systems with antisymmetric first order potentials. (English)Zbl 07574519
MSC: 35J48 35G35 35B65
Full Text:
MSC: 93-XX
Full Text:
### A mean-field stochastic linear-quadratic optimal control problem with jumps under partial information. (English)Zbl 07574249
MSC: 93E20 60H10
Full Text:
### $$C^m$$ semialgebraic sections over the plane. (English)Zbl 07574099
MSC: 14P10 35G05
Full Text:
### LS-based parameter estimation of DARMA systems with uniformly quantized observations. (English)Zbl 07573561
MSC: 93B30 93C55 93C05
Full Text:
### Attitude and orbit optimal control of combined spacecraft via a fully-actuated system approach. (English)Zbl 07573553
MSC: 93C95 49N10
Full Text:
### Fully actuated system approach for 6DOF spacecraft control based on extended state observer. (English)Zbl 07573552
MSC: 93C95 93B53
Full Text:
Full Text:
### Do some nontrivial closed $$z$$-invariant subspaces have the division property? (English)Zbl 07572381
St. Petersbg. Math. J. 33, No. 4, 711-738 (2022) and Algebra Anal. 33, No. 4, 173-209 (2021).
Full Text:
Full Text:
Full Text:
Full Text:
### A variant of PMHSS iteration method for a class of complex symmetric indefinite linear systems. (English)Zbl 07570468
MSC: 65F10 65F08
Full Text:
Full Text:
Full Text:
Full Text:
MSC: 65Fxx
Full Text:
Full Text:
### Parameter identifiability for nonlinear LPV models. (English)Zbl 07568656
MSC: 93B30 93C10
Full Text:
### A Kalman filter with intermittent observations and reconstruction of data losses. (English)Zbl 07568655
MSC: 93E11 93C55 93C05
Full Text:
### Model-free finite-horizon optimal tracking control of discrete-time linear systems. (English)Zbl 07568433
MSC: 93Cxx 93Bxx 90Cxx
Full Text:
### BDDC preconditioners for divergence free virtual element discretizations of the Stokes equations. (English)Zbl 07568383
MSC: 65Nxx 65Fxx 35Jxx
Full Text:
### Pseudo-Bautin bifurcation in 3D Filippov systems. (English)Zbl 07568058
MSC: 34A36 34C23 34C05
Full Text:
### A class of residual-based extended Kaczmarz methods for solving inconsistent linear systems. (English)Zbl 07567558
MSC: 65F22 15A29 65F10
Full Text:
### Simultaneous fault detection and control of linear time-invariant system via disturbance observer-based control approach. (English)Zbl 07567426
MSC: 65H05 65F10
Full Text:
### Estimates of solutions to infinite systems of linear equations and the problem of interpolation by cubic splines on the real line. (English. Russian original)Zbl 07566678
Sib. Math. J. 63, No. 4, 677-690 (2022); translation from Sib. Mat. Zh. 63, No. 4, 814-830 (2022).
MSC: 41Axx 15Axx 65Dxx
Full Text:
### Eigenvalues of the solution of the Lyapunov tensor equation. (English)Zbl 07565744
MSC: 15A18 15A69 15A24
Full Text:
### Identification and classical control of linear multivariable systems (to appear). (English)Zbl 07565710
Cambridge: Cambridge University Press (ISBN 978-1-316-51721-5/hbk). (2022).
### Representation of solutions of a system of five-order nonlinear difference equations. (English)Zbl 07564624
MSC: 65H05 65F10
Full Text:
Full Text:
Full Text:
### Efficiency of two-stage systems in stochastic DEA. (English)Zbl 07563609
MSC: 90B50 90C08
Full Text:
### Stability and performance analysis of hybrid integrator-gain systems: a linear matrix inequality approach. (English)Zbl 07563275
MSC: 93D30 93C30 90C25
Full Text:
MSC: 74B05
Full Text:
### Some equalities on predictors under seemingly unrelated regression models. (English)Zbl 1491.62058
MSC: 62J05 62H12
Full Text:
Full Text:
MSC: 82-XX
Full Text:
Full Text:
Full Text:
MSC: 65F10
Full Text:
Full Text:
Full Text:
Full Text:
Full Text:
Full Text:
### Dynamic analysis of a deteriorating system with single vacation of a repairman. (English)Zbl 07557675
MSC: 47D06 90B25
Full Text:
### Implicit iterative schemes based on augmented linear systems. (English. Russian original)Zbl 07557357
Dokl. Math. 105, No. 2, 131-134 (2022); translation from Dokl. Ross. Akad. Nauk, Mat. Inform. Protsessy Upr. 503, 91-94 (2022).
MSC: 65Fxx 65Jxx 47Axx
Full Text:
### Markov approximations of the evolution of quantum systems. (English. Russian original)Zbl 07557348
Dokl. Math. 105, No. 2, 92-96 (2022); translation from Dokl. Ross. Akad. Nauk, Mat. Inform. Protsessy Upr. 503, 48-53 (2022).
MSC: 47Dxx 82Bxx 47Bxx
Full Text:
Full Text:
### Accurate stabilization for linear stochastic systems based on region pole assignment and its applications. (English)Zbl 07556805
MSC: 93E15 93B55 93C05
Full Text:
MSC: 62Pxx
Full Text:
|
Session 56 - Frontiers of Ultraviolet Astrophysics.
Display session, Wednesday, June 10
Atlas Ballroom,
## [56.02] Simulations of FUSE spectra of A-type and cooler stars: D/H and warm plasma emission
D. O'Neal, J. L. Linsky (JILA/U. Colorado)
The Far Ultraviolet Spectroscopic Explorer (FUSE) will observe many stars in the 905--1187 Å spectral region with a resolving power of 24,000--30,000. FUSE will explore this critically important spectral region, which is mostly inaccessible to HST (and previously to IUE), observing sources much fainter than Copernicus and with much better spectral resolution than HUT or ORFEUS.
We present simulations of stellar spectra that will be observed with FUSE. The FUSE bandpass includes the Ly β (1025 Å) and Ly γ (972 Å) lines. Extrapolating from GHRS Ly α spectra, we simulate FUSE spectra of the Ly β and Ly γ lines of Capella and some other late-type stars. These simulations will address the accuracy with which the interstellar D/H ratio can be measured using these lines.
In addition, one of our FUSE observing programs will measure for the first time the amount of warm (60,000--300,000 K) plasma present in the outer atmospheres of A and early-F stars, using the C III 977 Å and O VI 1032 Å lines. This program will explore whether the outer atmospheric layers of these stars are heated to these temperatures and whether the heating process is magnetic or acoustic. Our simulations indicate that FUSE will be sensitive enough to measure very low luminosities in these lines.
This work is supported by NASA grants to the University of Colorado.
|
Deep Learning Dictionary - Lightweight Crash Course
Deep Learning Course 1 of 6 - Level: Beginner
Zero Padding for Neural Networks - Deep Learning Dictionary
Zero Padding - Deep Learning Dictionary
Convolutional layers in a CNN have a defined number of filters that convolve the image input, and as a result, the output from the convolutional layers have reduced dimensions from the original image input.
We have a general formula for determining the dimensions of the convolved output. Assuming a stride of $$1$$, if our image is of size $$n \times n$$, and we convolve it with an $$f \times f$$ filter, then the dimensions of the resulting output are $$(n - f + 1) \times (n - f + 1)$$.
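A minimal sketch (my addition, assuming PyTorch is installed; the filter count of 8 is arbitrary) comparing the two cases: with no padding the $$28 \times 28$$ input shrinks under a $$3 \times 3$$ filter, while a one-pixel border of zeros preserves the spatial size.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 28, 28)                          # one 28x28 single-channel image

no_pad   = nn.Conv2d(1, 8, kernel_size=3, padding=0)   # "valid" convolution, no zero padding
zero_pad = nn.Conv2d(1, 8, kernel_size=3, padding=1)   # one ring of zeros on each side

print(no_pad(x).shape)     # torch.Size([1, 8, 26, 26])  -> (n - f + 1) = 26
print(zero_pad(x).shape)   # torch.Size([1, 8, 28, 28])  -> spatial size preserved
```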
|
# American Institute of Mathematical Sciences
July 2007, 8(1): 207-228. doi: 10.3934/dcdsb.2007.8.207
## Modeling the rock - scissors - paper game between bacteriocin producing bacteria by Lotka-Volterra equations
1 Department of Bioinformatics, Friedrich-Schiller-University, Ernst - Abbé - Platz 2, D-07743 Jena, Germany, Germany
Received October 2005 Revised July 2006 Published April 2007
In this paper we analyze the population dynamics of bacteria competing by anti-bacterial toxins (bacteriocins). Three types of bacteria involved in these dynamics can be distinguished: toxin producers, resistant bacteria and sensitive bacteria. Their interplay can be regarded as a Rock-Scissors-Paper game (RSP). Here, this is modeled by a reasonable three-dimensional Lotka-Volterra (LV) type differential equation system. In contrast to earlier approaches to modeling the RSP game such as replicator equations, all interaction terms have negative signs because the interaction between the three different types of bacteria is purely competitive, either by toxification or by competition for nutrients. The model allows one to choose asymmetric parameter values. Depending on parameter values, our model gives rise to a stable steady state, a stable limit cycle or a heteroclinic orbit with three fixed points, each fixed point corresponding to the existence of only one bacteria type. An alternative model, the May-Leonard model, leads to coexistence only under very restricted conditions. We carry out a comprehensive analysis of the generic stability conditions of our model, using, among other tools, the Volterra-Lyapunov method. We also give biological interpretations of our theoretical results, in particular, of the predicted dynamics and of the ranges for parameter values where different dynamic behavior occurs. For example, one result is that the intrinsic growth rate of the producer is lower than that of the resistant while its growth yield is higher. This is in agreement with experimental results for the bacterium Listeria monocytogenes.
Citation: Gunter Neumann, Stefan Schuster. Modeling the rock - scissors - paper game between bacteriocin producing bacteria by Lotka-Volterra equations. Discrete & Continuous Dynamical Systems - B, 2007, 8 (1) : 207-228. doi: 10.3934/dcdsb.2007.8.207
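To make the type of model concrete, here is a minimal simulation sketch (my own illustrative parameters, not the values analyzed in the paper) of a purely competitive three-species Lotka-Volterra system with the cyclic rock-scissors-paper structure, integrated with SciPy:

```python
# du_i/dt = u_i * (r_i - sum_j a_ij * u_j), with all a_ij > 0 (purely competitive).
import numpy as np
from scipy.integrate import solve_ivp

r = np.array([1.0, 0.9, 1.1])        # intrinsic growth rates (hypothetical)
A = np.array([[1.0, 0.5, 1.5],       # asymmetric competition matrix with a
              [1.5, 1.0, 0.5],       # cyclic "each one suppresses the next" pattern
              [0.5, 1.5, 1.0]])

def rhs(t, u):
    return u * (r - A @ u)

sol = solve_ivp(rhs, (0.0, 200.0), [0.3, 0.3, 0.3], max_step=0.1)
print(sol.y[:, -1])   # depending on r and A: a coexistence point, a limit cycle, or near-extinction
```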
|
Homotopy type of a locally contractible compact
Does a locally contractible compact space have the homotopy type of a finite CW complex? (I think it probably does, but I need a reference anyway.)
EDIT: My intuition was wrong [to see why, read C.T.C.Wall, Finiteness conditions for CW complexes, Ann. of Math. 81 (1965) 56-69.]. Apparently, the right question is whether a locally contractible compact is dominated by a finite CW complex. This is true if it can be embedded into a manifold, but I don't know what to do with the general case.
• Which definition of locally contractible do you intend ? There are various options. The "standard" one : each nbhd U of $p$ has a sub-nbhd V, contractible inside U, the variant requiring fixing $p$, or having contractible nbhd basis, etc... – BS. Aug 28 '14 at 13:44
• @BS: I had in mind "standard" one, but I can afford stronger versions too, if it makes a difference. – Alex Gavrilov Aug 29 '14 at 4:08
• Well, the discussions at mathoverflow.net/a/167957/6451 and mathoverflow.net/q/102700/6451 show that the answer is no for the standard notion of local contractibility, due to a counterexample of Borsuk, but the situation is apparently unresolved for the stronger notions (and compact spaces). Maybe if you assume finite covering dimension, the answer is yes. – BS. Aug 29 '14 at 11:13
• Thank you. I did not expect it to be that messy! – Alex Gavrilov Aug 29 '14 at 13:22
|
# Sequence of Measurable Sets Converge Pointwise to a Measurable Set
I was working on the folllowing exercise from Tao's Measure Theory book:
We say that a sequence $E_n$ of sets in $\mathbb{R}^d$ converges pointwise to another set $E$ if the indicator functions $1_{E_n}$ converge pointwise to $1_E$. Show that if $E_n$ are all Lebesgue measurable and converge pointwise to $E$ then $E$ is measurable also. He gives the hint to use the identity $1_E(x) = \liminf_{n \to \infty} 1_{E_n}(x)$ to write $E$ in terms of countable unions and intersections of $E_n$.
If instead we changed the assumptions such that the indicator functions converged uniformly to $1_E$ on the domain $\bigl(\bigcup_n E_n\bigr) \cup E$, then there exists an $n$ such that $E = \bigcup_{m > n} E_m$. However I am not quite sure how to work with the weaker assumption of pointwise convergence. Any help would be appreciated!
This is basically a baby version of the following result (you can skip to bottom).
If $(f_n)$ are measurable and $f_n \to f$ pointwise, then $f$ is measurable.
Why ? Because of pointwise convergence $$\{x:f(x)> \alpha\}=\cup_{n=1}^{\infty} \{x: f_m(x) > \alpha \mbox{ for all } m \ge n\}.$$
Read the RHS as all $x$ such that $f_m(x)>\alpha$ for all $m$ (possibly depending on $x$) large enough. (Strictly speaking, the right-hand side may also contain points where $f(x) = \alpha$; for the $0/1$-valued indicator functions below, taken with $\alpha = 0$, the two sides coincide exactly, since pointwise convergence forces each $1_{E_m}(x)$ to be eventually constant.)
We can rewrite the RHS as
$$\cup_{n=1}^\infty \cap_{m\ge n} \{x:f_m(x)>\alpha\}.$$
The sets on the RHS are all measurable. The result follows. Note that it is enough to consider $\alpha=0$, in which case the set $\{x:f_m(x)>\alpha\}$ is $E_m$. In other words,
$$E = \cup_{n=1}^\infty \cap_{m\ge n} E_m,$$
and this identity is all you need; you can derive it directly with the same argument -- it just follows from the definition of pointwise convergence: a point is in $E$ if and only if it belongs to $E_m$ for all $m$ large enough.
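As a purely illustrative sanity check (my addition, with a toy example over a finite universe), the identity $E = \bigcup_{n}\bigcap_{m\ge n} E_m$ can be verified in a few lines of Python; each point's membership stabilises, which is exactly what pointwise convergence of $0/1$-valued indicators means:

```python
universe = set(range(10))

def E_m(m):
    # Membership of the point 7 changes only finitely often, then stabilises.
    return {x for x in universe if x < 5} | ({7} if m < 3 else set())

E = {x for x in universe if x < 5}       # the pointwise limit set
M = 20                                   # all indicators are constant from some index below M on

liminf = set()
for n in range(M):
    inter = set(universe)
    for m in range(n, M):
        inter &= E_m(m)
    liminf |= inter

print(liminf == E)   # True
```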
• Maybe you could explain why an intersection pops up starting from the 2nd equation. – zebullon Mar 17 '16 at 3:14
• "For all $m$" means intersection. – Fnacool Mar 17 '16 at 3:15
|
ToughSTEM
A 0.12-μF capacitor, initially uncharged, is connected in series with a 10-kΩ resistor and a 12-V battery of negligible internal resistance. Approximately how long does it take the capacitor to reach 90% of its final charge?
Solutions
The formula for a charging capacitor in an RC circuit is
$Q=Q_{0}(1-e^{-\frac{t}{RC}})$
So, to find the time to charge to 90% of $Q_{0}$:
$0.9Q_{0}=Q_{0}(1-e^{-\frac{t}{RC}})\Rightarrow 0.9=1-e^{-\frac{t}{RC}}\Rightarrow -0.1=-e^{-\frac{t}{RC}}$
$0.1=e^{-\frac{t}{RC}}\Rightarrow \ln 0.1=-\frac{t}{RC}\Rightarrow t=-RC\cdot\ln 0.1=-(10\,\text{k}\Omega)(0.12\,\mu\text{F})\ln 0.1={\color{Red} 0.00276\ \text{seconds}}$
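A quick numerical restatement (my addition): the working above is just $t = RC\ln 10$, which a couple of lines of Python reproduce:

```python
import math

R = 10e3                    # ohms
C = 0.12e-6                 # farads
t = R * C * math.log(10)    # same as -RC * ln(0.1)
print(t)                    # ≈ 0.00276 s
```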
|
# SnowPatrol's question at Yahoo! Answers regarding the Fundamental Theorem of Calculus
#### MarkFL
Staff member
Here is the question:
Derivative of integral?
F(x) = integral of e^(t^2)dt (upper limit = cosx, lower limit = sinx)
Now find F'(x) at x=0
HOW DO I SOLVE THIS :O
Okay so this is what I did,
solve integration, answer is [e^(t^2)]/2t
[2t is the derivative of power(t^2), you're suppose to DIVIDE the integration by the derivative, riiight?]
I put t=cosx, t=sinx
So, it becomes
e^({cosx}^2)]/2cosx - e^({sinx}^2)]/2cosx
Now I take its derivative...
which turns out to be very complicated so I think I'm doing it wrong, cuz it is supposed to be not-so-long.
THIS IS THE ANSWER GIVEN AT THE BACK:
F′(x) = exp(cos²(x)) · (−sin(x)) − exp(sin²(x)) · cos(x) by FTOC
F′(0) = exp(1) · 0 − exp(0) · 1 = −1
NOW HOLD ON A SECOND.
ISNT DERIVATIVE OF AN INTEGRAL, THE FUNCTION ITSELF? YESSSS.
OKAY, BUT THE DERIVATIVE AND INTEGRAL DONT UMMM CANCEL OUT TILL THE dx/dy/dt IS SAME WITH DERIVATIVE AND INTEGRATION!
Okay, I somehow solved the question B) *pat pat*
Can someone tell me how do i write this down on my paper??
I have posted a link there to this thread so the OP can view my work.
#### MarkFL
Staff member
Hello SnowPatrol,
We are given:
$$\displaystyle F(x)=\int_{\sin(x)}^{\cos(x)} e^{t^2}\,dt$$
And we are asked to find $F'(0)$.
By the FTOC and the Chain Rule, we have:
$$\displaystyle \frac{d}{dx}\int_{g(x)}^{h(x)} f(t)\,dt=f(h(x))\frac{dh}{dx}-f(g(x))\frac{dg}{dx}$$
Applying this formula, we find
$$\displaystyle F'(x)=-\sin(x)e^{\cos^2(x)}-\cos(x)e^{\sin^2(x)}$$
Hence:
$$\displaystyle F'(0)=-\sin(0)e^{\cos^2(0)}-\cos(0)e^{\sin^2(0)}=0-1=-1$$
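As an optional numerical check (my addition, assuming SciPy is available), a central finite difference of $F$ at $0$ reproduces the value $-1$:

```python
import numpy as np
from scipy.integrate import quad

def F(x):
    # F(x) = integral of exp(t^2) dt from sin(x) to cos(x)
    val, _ = quad(lambda t: np.exp(t**2), np.sin(x), np.cos(x))
    return val

h = 1e-5
print((F(h) - F(-h)) / (2 * h))   # ≈ -1.0
```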
|
Author: Subject: Which extraction method to use for plants
Quince
International Hazard
Posts: 773
Registered: 31-1-2005
Location: Vancouver, BC
Member Is Offline
Mood: No Mood
Which extraction method to use for plants
There seem to be many extraction methods for essential oils, and I don't know which one to choose. Is there any advantage to doing steam distillation over solvent extraction? This far I've used methylene chloride in a first extraction, then alcohol to separate the oils from the waxy extract. However, I find that there is a characteristic smell left regardless of what types of plants I extract, which is the same with some commercial extracts I've tried, so I can't help but wonder if the methylene chloride reacts with something, or the 80*C reached when evaporating the alcohol is too much (I guess I could evaporate it under vacuum, if the landlady is away or I'd get killed for the water usage). I can't do supercritical CO2 since I don't have the equipment. Does anyone here have experience with different methods and can make a recommendation? I've been thinking of trying the steam distillation, but I want to ask first if there's any advantage. I also read about "controlled instantaneous decompression" as an extraction method, but didn't find much detailed information with Google, other than what's in this abstract.
BTW when extracting with alcohol from the waxy first extract by DCM, is it OK to use methanol or propanol instead of the commonly specified ethanol? Ethanol is harder for me to obtain in high concentration.
[Edited on 7-12-2006 by Quince]
\"One of the surest signs of Conrad\'s genius is that women dislike his books.\" --George Orwell
unionised
International Hazard
Posts: 4919
Registered: 1-11-2003
Location: UK
Member Is Offline
Mood: No Mood
It rather depends what you are extracting and why.
Extraction with DCM is great for things that are not thermally stable but, unlike steam distillation, it takes all the waxes and fats too.
Ethanol has one huge advantage if you are making extracts to use as food additives or pharmaceuticals- it's essentially non toxic. *
Methanol and IPA have fairly similar solvent properties; if you are trying to get rid of waxes I'd go for methanol unless your product is unusually non-polar. Methanol is easier to evaporate off afterwards but IPA is usually thought of as less toxic.
*Yes, I know, ethanol is responsible for more cases of poisoning than just about any other chemical
Quince
International Hazard
Posts: 773
Registered: 31-1-2005
Location: Vancouver, BC
Member Is Offline
Mood: No Mood
What I'm trying to extract is essential oils. Some for the smell (vanilla and a ton of others), some for other stuff (for example, nepatalactone is a good non-toxic mosquito repellent, and I can get plenty of free catnip).
Do you think the DCM or alcohol could be reacting with something in the plants? I always find that the results of such solvent extraction have an additional smell to them I don't like, and like I said in the first post, I've found this in some commercial extracts as well. Or perhaps it's extracting some component that other methods do not...
Now would steam distillation in this case necessarily need the steam to be at 100*C? I don't know much about the process so I'm not sure about this. I know that water vapor doesn't condense right away when it's cooler than that.
I take it you haven't heard about that "controlled instantaneous decompression" extraction? They steam the material for a few minutes, then instantly drop the pressure near zero, and claim it gets 90-97% of the oils. Since I could only access the abstract, I still didn't understand how they physically separate the oils from the plant material afterwards. Just squeezing it out perhaps?
\"One of the surest signs of Conrad\'s genius is that women dislike his books.\" --George Orwell
XxDaTxX
Hazard to Self
Posts: 66
Registered: 12-3-2006
Member Is Offline
Mood: No Mood
Quote: Originally posted by Quince What I'm trying to extract is essential oils. Some for the smell (vanilla and a ton of others), some for other stuff (for example, nepatalactone is a good non-toxic mosquito repellent, and I can get plenty of free catnip). Do you think the DCM or alcohol could be reacting with something in the plants? I always find that the results of such solvent extraction have an additional smell to them I don't like, and like I said in the first post, I've found this in some commercial extracts as well. Or perhaps it's extracting some component that other methods do not... Now would steam distillation in this case necessarily need the steam to be at 100*C? I don't know much about the process so I'm not sure about this. I know that water vapor doesn't condense right away when it's cooler than that. I take it you haven't heard about that "controlled instantaneous decompression" extraction? They steam the material for a few minutes, then instantly drop the pressure near zero, and claim it gets 90-97% of the oils. Since I could only access the abstract, I still didn't understand how they physically separate the oils from the plant material afterwards. Just squeezing it out perhaps?
Its use is limited to specific industrial processes. It is not feasible for either of the following adaptations:
1. When done in a batch process, ie all at once, you would probably need something around the order of 20,000L of volume to hold your 800mL of DCM (assuming a final temp of ~-70C) w/o pulling DCM into your vacuum. You'd need a pump that could pull 10mbar, and a vessel that can handle the extreme pressure and temperature drop to get good efficiency.
2. A modified version exists where this can be adapted to a semi-continuous process, however you would need a cryogen to condense the vapor @10bar. I'm sure you have a LN2 tap somewhere ..... oh yes here it is! =P
On a side note, do your essential oil extractions like everyone else. There is a reason why only industrial extractions vary from this procedure: steam distill, partition with DCM, filter organic phase, dry, heat/distill off DCM and you can get back to whatever it was that you were going to do with the essential oil...
Quince
International Hazard
Posts: 773
Registered: 31-1-2005
Location: Vancouver, BC
Member Is Offline
Mood: No Mood
The reason I skip steam distilling is because by extracting directly with DCM I don't expose the material to high temperature. I also need a minimum amount of DCM, whatever's needed to reach high enough level in the Gregar, which is a continuous extractor. I'm not sure what the point of the steam distillation is, and that's in fact part of my initial question. Why is steam distillation used, instead of using an organic solvent from the beginning?
Quote: heat/distill off DCM and you can get back to whatever it was that you were going to do with the essential oil.
As unionised mentioned, that also extracts waxes, so alcohol extraction is still needed.
\"One of the surest signs of Conrad\'s genius is that women dislike his books.\" --George Orwell
pantone159
International Hazard
Posts: 569
Registered: 27-6-2006
Location: Austin, TX, USA
Member Is Offline
Quote: Originally posted by Quince The reason I skip steam distilling is because by extracting directly with DCM I don't expose the material to high temperature. Why is steam distillation used, instead of using an organic solvent from the beginning?
With steam distillation, you don't expose the material to temps over 100 C, so that also counts as low temperature a lot of the time.
With a straight extraction, without the steam distillation, you may also pick up non-volatile compounds, which you usually don't want.
One example from my experience: I have tried extracting eugenol from cloves, both with solvent extraction (I used alcohol and then let it evap before continuing), and with steam distillation. In both cases, the crude extracts were separated via an acid/base extraction using DCM/NaOH/HCl. (This is a common lab experiment.)
With steam distillation, I got fairly pure eugenol, only lightly yellow-brown colored. With solvent extraction, I got a much darker colored product, which was obviously not pure. Apparently, cloves contain some non-volatile colored component which follows phenols around in the extractions. (High MW polymerized phenols???). Steam distillation left the brown junk behind, solvent extraction picked it up.
It does depend on your plant. Some things work well with solvent extraction, some don't.
Quince
International Hazard
Posts: 773
Registered: 31-1-2005
Location: Vancouver, BC
Member Is Offline
Mood: No Mood
I guess then the question is when does it not work.
\"One of the surest signs of Conrad\'s genius is that women dislike his books.\" --George Orwell
XxDaTxX
Hazard to Self
Posts: 66
Registered: 12-3-2006
Member Is Offline
Mood: No Mood
Quote:
Originally posted by Quince
The reason I skip steam distilling is because by extracting directly with DCM I don't expose the material to high temperature. I also need a minimum amount of DCM, whatever's needed to reach high enough level in the Gregar, which is a continuous extractor. I'm not sure what the point of the steam distillation is, and that's in fact part of my initial question. Why is steam distillation used, instead of using an organic solvent from the beginning?
Quote: heat/distill off DCM and you can get back to whatever it was that you were going to do with the essential oil.
As unionised mentioned, that also extracts waxes, so alcohol extraction is still needed.
If you are extracting essential oils you use water to steam it out. Honestly, I don't quite know what will persuade you to do it right. If you want the nitty gritty here you go:
1. DCM dissolves wax as a function of its temperature, at higher temperatures it dissolves more and more wax, and is thus carried over any distillation head.
2. The amount of wax able to dissolve in DCM per °C is not 1:1. It's a higher-order function.
3. Water counters the effect that DCM has on the solubility of the wax. By adding water, it interacts with what used to be just DCM:wax so now you have DCM:wax:water. Changing the composition in the still changes the composition of the vapor thus produced in favor of DCM (due to high relative volatility) and against wax (due to insolubility in water, polarity).
4. Just do it and stop arguing. DCM is a prime defatting/dewaxing solvent at higher temperatures. Particularly because a quick temperature swing and solubility goes down, where after it is recovered by centrifuging, and STEAM DISTILLATION, or hot air volatilizing and scrubbing. We use it in industrial applications ALL THE TIME.
5. If you want essential oils, use water. For tincture/organic solvent extraction use alcohol. And for WAX, USE DCM.
[Edited on 9-12-2006 by XxDaTxX]
unionised
International Hazard
Posts: 4919
Registered: 1-11-2003
Location: UK
Member Is Offline
Mood: No Mood
"DCM dissolves wax as a function of its temperature, at higher temperatures it dissolves more and more wax, and is thus carried over any distillation head."
What, exactly, does solubility have to do with distillation?
Water dissolves salt as a function of temperature; at higher temperatures it dissolves more and more salt.
Nevertheless, salt gets left behind when seawater is distilled to produce fresh water.
I still want to know what this weird wax is- it is very odd that it distills along with DCM.
XxDaTxX
Hazard to Self
Posts: 66
Registered: 12-3-2006
Member Is Offline
Mood: No Mood
Quote: Originally posted by unionised "DCM dissolves wax as a function of its temperature, at higher temperatures it dissolves more and more wax, and is thus carried over any distillation head." What, exactly, does solubillity have to do with distilation? Water disolves salt as a function of temperature; at higher temperatures it disolves more and more salt. Nevertheless, salt gets left behind when seawater is distilled to produce fresh water. I still want to know what this weird wax is- it is very odd that it distills along with DCM.
It is not odd. It is predictable. The high solubility at high temperatures has to do with their molecular interactions. Those same interactions govern, somewhat, the vapor pressures of the constituents in your still.
DCM is used to remove waxes from lubricant oils ALL THE TIME, so again NO IT IS NOT ODD that it comes over with DCM. Send it to an analytical lab and they will tell you what it is. Is it worth your $ to know? Probably not. And similarly, it is no longer worth my time trying to persuade people in this thread to do things right.
[Edited on 10-12-2006 by XxDaTxX]
unionised
International Hazard
Posts: 4919
Registered: 1-11-2003
Location: UK
Member Is Offline
Mood: No Mood
It wouldn't be very $ for me to get a lab to analyse it; I work in one. Of course, since I don't have a sample of it, I can't analyse it. I have analysed plenty of things in my time, some of them waxes.
For the wax to distill at 40C it needs to have a reasonable vapour pressure at 40C. Very few solids do.
As I said, salt dissolves perfectly well in water, but it doesn't distill.
It's news to me that DCM is used to dewax oils (and Google doesn't give any hits about it for "dewaxing oil" dichloromethane).
Last time I checked lube oil mixed freely with DCM. If you chilled it well enough you might get the wax out, but any solvent could be used for that. It still has nothing to do with the idea that a wax would distill at 40C
[Edited on 10-12-2006 by unionised]
XxDaTxX
Hazard to Self
Posts: 66
Registered: 12-3-2006
Member Is Offline
Mood: No Mood
Quote: Originally posted by unionised It wouldn't be very \$ for me to get a lab to analyse it; I work in one. Of course, since I don't have a sample of it, I can't analyse it. I have analyse plenty of things in my time, some of them waxes. For the wax to distill at 40C it needs to have a reasonable vapour pressure at 40C. Very few solids do. As I said, salt dissolves perfectly well in water, but it doesn't distill. It's news to me that DCM is used to dewax oils (and Google doesn't give any hits about it for "dewaxing oil" dichloromethane). Last time I checked lube oil mixed freely with DCM. If you chilled it well enough you might get the wax out, but any solvent could be used for that. It still has nothing to do with the idea that a wax would distill at 40C [Edited on 10-12-2006 by unionised] [Edited on 10-12-2006 by unionised]
Read some books on Raoult's Law as it governs (somewhat) the composition of your distillate.
Here is an intro.
Now if you have understood that then model a vapor pressure diagram for DCM, your wax, and water. (It should have three axes)
Due to specific interactions between DCM and the wax (as can be seen from the temperature dependant solubility) you have a non-ideal mixture.
Adjusting for positive deviations for the wax's vapor pressure, and negative deviations for DCM's vapor pressure, redraw your vapor pressure diagram.
Remember that the mole fraction of the wax in the lower phase is severely limited by its solubility, but it does affect it significantly.
I will not tell you how to compute the actual deviations because one can devote a complete post-doc career in it, however I will tell you that it gets more and more complex.
What you have from there is a rough estimate of how water changes vapor pressures. Deviations can be calculated within the framework of a particular model, each having specific applications of non-ideal mixtures.
Congratulations, you have now seen what happens ...... IN THE LOWER PHASE ONLY!
..... Read a book if you want to see what goes on in biphasic equilibrium. Google is not the place to look. Try industrial chemistry literature.
Here's a reference on using DCM to dewax oils.
unionised
International Hazard
Posts: 4919
Registered: 1-11-2003
Location: UK
Member Is Offline
Mood: No Mood
My teachers may well be retired if not dead by now, it's been a couple of decades or so since I left University. I actually work in a real analytical lab.
I'm well enough acquainted with Raoult's law to know that it refers to the vapour pressures of the components of the mixture - I trust that you are too.
The point that you have been steadfastly ignoring is that, unless the vapour pressure of a component is reasonably high, it never gets into the vapour phase (to any significant extent) so it doesn't distill over.
What I am asking is "what is this wax with a high vapour pressure at 40C? because most solids don't have high vapour pressures."
Do you understand that the rarity of volatile (at 40C) waxy solids makes this stuff a real oddity?
Most solids, like the salt in seawater, don't distill with steam and they are even less likely to do so with DCM vapour at 40C.
It's interesting to note that the reference you give for dewaxing with DCM points out that a whole slew of other solvents can be used (as I suggested) and also, that by steam stripping or distillation, the DCM can be recovered from the wax. They point out explicitly that distillation separates wax from DCM. I'm saying it's odd that in the case we have been discussing the wax distills over with the DCM and that I would have expected the distillation to separate them because the difference in boiling points between very volatile DCM and (usually very involatile) wax is huge.
You seem to think it's perfectly normal for these two components to co-distill. I don't and nor does the reference you cite.
[Edited on 11-12-2006 by unionised]
Quince
International Hazard
Posts: 773
Registered: 31-1-2005
Location: Vancouver, BC
Member Is Offline
Mood: No Mood
Well, next time I get a can I'll check the company and post the name here, so anyone interested can contact them for an MSDS, which should list the mystery ingredient.
My own feeling is that, though the head temperature did not exceed 40*C, the flask most certainly did, and not all the wax vapors condensed in the Vigreux, but some got through (after the steam distillation mentioned in the other thread, about 5%).
[Edited on 11-12-2006 by Quince]
\"One of the surest signs of Conrad\'s genius is that women dislike his books.\" --George Orwell
XxDaTxX
Hazard to Self
Posts: 66
Registered: 12-3-2006
Member Is Offline
Mood: No Mood
Quote: Originally posted by unionised (full post quoted above)
I already explained that it was non-ideal and the wax has a positive deviation from raoult's predicted vapor pressure.
If you want the theory behind it, thats more simple:
Let's assume that the intermolecular forces between wax molecules in a pure liquid are some value "x" ... then assume that the intermolecular forces for DCM in a pure liquid are some value "y".
Then when you mix them, BECAUSE THEY ARE SUBSTANTIALLY DIFFERENT like I said before, they become non-ideal.
Intermolecular forces between DCM and the wax are less than DCM : DCM or wax : wax interactions.
In this way you can see that because the intermolecular attraction between the two components decreases upon mixing, their respective vapor pressure also changes because they are less "bound" to the liquid phase due to lowering of intermolecular attractive forces.
The reason for the water is twofold: first it should produce a compensating negative deviation for the wax, while having little to no effect on the polar DCM.
Your salt and water example is one of negative deviations. The intermolecular forces set up in the mixture, ionic, increase the amount of energy needed to vaporize its components. In this way it is non-ideal as a negative deviant of Raoult's law.
Interactions between components create non ideal mixtures, and corresponding non ideal vapor pressures.
BTW, I certainly hope that someone in your "real analytical lab" knows more than the introductory material on Raoult's law. Positive and negative deviations are the first thing explained past the undergraduate courses in chemistry ... and I would hate to know that there is an analytical lab run by undergrads .... oh wait .... thats like what I have to deal with .... back to scolding these undergrads.
[Edited on 11-12-2006 by XxDaTxX]
solo
International Hazard
Posts: 3868
Registered: 9-12-2002
Location: Estados Unidos de La Republica Mexicana
Member Is Offline
Mood: ....getting old and drowning in a sea of knowledge
Reference Information
An original solvent free microwave extraction of essential oils from spices
Marie Elisabeth Lucchesi, Farid Chemat,* and Jacqueline Smadja
FLAVOUR AND FRAGRANCE JOURNAL Flavour Fragr. J. 2004; 19: 134–138
ABSTRACT:
Attention is drawn to the development of a new and green alternative technique for the extraction of essential oils from spices. Solvent-free microwave extraction (SFME) is a combination of dry distillation and microwave heating without adding any solvent or water. SFME and hydrodistillation (HD) were compared for the extraction of essential oil from three spices: ajowan (Carum ajowan, Apiaceae), cumin (Cuminum cyminum, Umbelliferae), star anise (Illicium anisatum, Illiciaceae). Better results have been obtained with the proposed method in terms of rapidity (1 h vs. 8 h), efficiency, and no solvent used. Furthermore, the SFME procedure yielded essential oils that could be analysed directly without any preliminary clean-up or solvent exchange steps.
KEY WORDS: solvent-free; microwave; dry distillation; essential oil; ajowan (Carum ajowan); cumin (Cuminum cyminum); star anise (Illicium anisatum)
[Edited on 11-12-2006 by solo]
Attachment: An original solvent free microwave extraction of essential oils from spices .pdf (103kB)
It's better to die on your feet, than live on your knees....Emiliano Zapata.
solo
International Hazard
Posts: 3868
Registered: 9-12-2002
Location: Estados Unidos de La Republica Mexicana
Member Is Offline
Mood: ....getting old and drowning in a sea of knowledge
Reference Information
Kinetics of Isothermal and Microwave Extraction of Essential Oil Constituents of Peppermint Leaves into Several Solvent Systems
Michael Spiro and Sau Soon Chen
FLAVOUR AND FRAGRANCE JOURNAL, VOL. 10, 259-272 (1995)
Abstract
The rates and extents of extraction have been measured for three major constituents of peppermint oil, namely 1,8-cineole, menthone and menthol, using the leaves of the black mint (Mentha x piperita L.). The solvents used were hexane, ethanol and mixtures of composition 90 mol% ethanol + 10 mol% hexane and 90 mol% hexane + 10 mol% ethanol. The extractions were carried out isothermally at 25, 35 and 45°C as well as in an electrically and mechanically modified domestic microwave oven where the temperature increase varied from c. 10 to 30°C.
The rates of both isothermal and microwave extractions were sensitive to the solvent employed and decreased in the order 90 mol% hexane > 90 mol% ethanol > hexane > ethanol. The rates of microwave extraction were also affected by the microwave power output and the size of the sample load. The activation energies for the extractions were in the range 30–90 kJ/mol, again dependent on the solvent used. Scanning electron microscopy on the spent leaves provided evidence of a link between the kinetics of extraction and structural changes on the glands.
KEY WORDS Extraction kinetics; microwave extraction; solvent extraction; rate constants; essential oils; peltate glands; peppermint (Mentha x piperita L.); 1,8-cineole; menthone; menthol; dielectric properties
Attachment: Kinetics of Isothermal and Microwave Extraction of Essential Oil Constituents of Peppermint Leaves into Several Solvent (1.1MB)
It's better to die on your feet, than live on your knees....Emiliano Zapata.
unionised
International Hazard
Posts: 4919
Registered: 1-11-2003
Location: UK
Member Is Offline
Mood: No Mood
XxDaTxX
I guess that you are used to dealing with undergrads- you have my sympathy on that matter. Perhaps that's why you aren't used to dealing with someone who is sufficiently familiar with Raoult's law and deviations from it that he doesn't need to quote chapter and verse on it.
DCM and water form an azeotrope. Raoult's law predicts that no azeotropes exist.
Clearly, there are deviations from it, I already knew that. You can rest assured that someone in my lab knows about it; I do. (Incidentally, I also know that distillation isn't used much in analytical labs these days so it wouldn't be the end of the earth if nobody knew about Raoult's law. In addition many, if not most, mixtures don't know about it anyway; they certainly don't follow it)
Simply stating that "I already explained that it was non-ideal and the wax has a positive deviation from raoult's predicted vapor pressure." doesn't really help unless you can tell me what sort of thing has this great a deviation.
The most volatile solid I could find the data for in a quick search was camphor; I estimate that it has a vapour pressure of a couple of mmHg at 40C. At that temperature DCM has a vapour pressure of roughly 760 mmHg. If they started off equimolar then the mixture coming over, if it followed Raoult's law, would be something like 0.3% camphor (the usual crass approximations have been made here).
What you are talking about is a deviation from Raoult's law of an order of magnitude or 2. As far as I'm aware, that sort of thing doesn't happen unless something really weird is going on.
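For reference, the Raoult's-law arithmetic behind that 0.3% figure, using the vapour pressures quoted just above and assuming an equimolar liquid (my own back-of-envelope numbers), is
$$y_\textrm{camphor} = \frac{x\,P^{*}_\textrm{camphor}}{x\,P^{*}_\textrm{camphor} + x\,P^{*}_\textrm{DCM}} \approx \frac{0.5 \times 2}{0.5 \times 2 + 0.5 \times 760} \approx 0.3\%.$$
An activity coefficient of order 100 for the dissolved solid would be needed to push that into the tens of percent, which is the size of deviation being questioned here.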
XxDaTxX
Hazard to Self
Posts: 66
Registered: 12-3-2006
Member Is Offline
Mood: No Mood
Quote: Originally posted by unionised (full post quoted above)
What I already explained is the theory behind what he is seeing. I don't know how else to explain it, but that is how it works. If you look at phase diagrams for non-ideal mixtures you will see severe changes when intermolecular forces change sharply between pure liquids and their respective multicomponent mixtures..... after all, what else do you think is holding them in the liquid phase, but those intermolecular forces and the pressure above them?
There are few examples better at explaining this phenomenon than dipole-dipole DCM and London-dispersion wax .... being combined to lose those interactions, thus making it easier to leave the liquid phase. The fact that it affects the wax's vapor pressure more than the DCM's has to do with the fact that LD is a weak force to begin with; destroy that and what else is a wax to do but vaporize.
[Edited on 11-12-2006 by XxDaTxX]
Quince
International Hazard
Posts: 773
Registered: 31-1-2005
Location: Vancouver, BC
Member Is Offline
Mood: No Mood
Thinking back to the extraction of HNO3 with DCM, the patent claims that the two form a bond of sorts. Is something analogous possible with the wax? That is, not just a simple solution of the wax into the DCM?
Also, I want to stress again that, although the head remained at 40*C, the flask was much hotter. That not all the wax vapors condensed in the Vigreux could have been simply because I was distilling too fast.
A final question: if I get another batch of paint stripper, should I just do the steam distillation, or do I need regular distillation first to remove other things such as the methanol?
[Edited on 11-12-2006 by Quince]
\"One of the surest signs of Conrad\'s genius is that women dislike his books.\" --George Orwell
unionised
International Hazard
Posts: 4919
Registered: 1-11-2003
Location: UK
Member Is Offline
Mood: No Mood
Those weak forces are normally enough to keep it together at 40C unless it's a wax that boils below 40C in which case I really want to know what it is.
There are 3 sets of forces to consider; you have missed out the dipole–induced dipole force, i.e. between DCM and wax. This is generally stronger than the London force. If the wax can be melted without boiling then the relatively weak London forces are sufficient to keep the stuff in the liquid phase rather than the vapour. The forces holding it back in solution rather than vapourising, i.e. the induced dipole interactions, are stronger, so the deviation ought to be negative. Similarly, the relatively strong dipole-dipole interactions in pure DCM are destroyed because the wax molecules get in the way. If anything this means that the DCM should be easier to distill out than Raoult's law would suggest. (Of course, this is offset- the DCM is less strongly held to the wax, but the wax is heavier, so there is some chance of the DCM escaping as a dimer but little chance of a wax+DCM molecule doing so.)
Anyway, if your point holds for wax and DCM it will also hold for "oily fatty crap" + DCM, so you cannot evaporate DCM from oily stuff to recover it. More interestingly, it would also hold for "the stuff I just synthesised in any of a whole bunch of practical experiments and then extracted into ether". That means I cannot extract my product into ether and then evaporate off the ether to collect my product.
This is news to me, and, I suspect, to many others on this site.
(BTW, if you want to refer to this post please don't quote the whole lot- doing so just takes up more storage space and bandwidth)
[Edited on 11-12-2006 by unionised]
Maya
National Hazard
Posts: 263
Registered: 3-10-2006
Location: Mercury
Member Is Offline
Mood: molten
Why don't we put down the nice theoretical discussions and simply find out what the heck you got in the wax?
You know ? some simple organic micro-qualitative testing for functionality?
my god that would be so easy and tell you exactly how to purify....
I learned that in first-year organic chem, it's easy....
Quince
International Hazard
Posts: 773
Registered: 31-1-2005
Location: Vancouver, BC
Member Is Offline
Mood: No Mood
Well look who's here... the Hindu creator of the universe!
Not possible for me, for the simple reason that I never took chemistry in university. Indeed, my specialty couldn't be further from it.
[Edited on 11-12-2006 by Quince]
\"One of the surest signs of Conrad\'s genius is that women dislike his books.\" --George Orwell
Maya
National Hazard
Posts: 263
Registered: 3-10-2006
Location: Mercury
Member Is Offline
Mood: molten
yes but you ARE dealing with chemistry, you have to try and understand some of the principles involved otherwise you'll be beating your head on the wall, and ours on the internet, every time you don't understand something.
really, it is very easy. DUDE, this is the first page I got when I googled "organic qualitative analysis" and it looks like a good one out of many...........
Quince
International Hazard
Posts: 773
Registered: 31-1-2005
Location: Vancouver, BC
Member Is Offline
Mood: No Mood
Quote: Originally posted by Maya beating your head on the wall, and ours on the internet
If so, I'm amazed neither head nor wall has broken this far...
\"One of the surest signs of Conrad\'s genius is that women dislike his books.\" --George Orwell
Sciencemadness Discussion Board » Fundamentals » Organic Chemistry » Which extraction method to use for plants
|
# Why Open Interval In Formal Definition Of Limit At Infinity
The formal definition of limit at infinity usually starts with a statement requiring an open interval. An example from OSU is as follows:
Limit At [Negative] Infinity: Let $f$ be a function defined on some open interval $(a, \infty)$ [$(-\infty, a)$]. Then we say the limit of $f(x)$ as $x$ approaches [negative] infinity is $L$, and we write $\lim_{x\to[-]\infty} f(x) = L$ if for every number $\epsilon > 0$, there is a corresponding number $N$ such that $\left|f(x) - L\right| < \epsilon$ whenever $x > N\;[x < N]$.
But I think it is also valid when the interval is half-open, as in the following: Let $f$ be a function defined on some right-open interval $[a, \infty)$ [left-open interval $(-\infty, a]$]. Then we say the limit of $f(x)$ as $x$ approaches [negative] infinity is $L$, and we write $\lim_{x\to[-]\infty} f(x) = L$ if for every number $\epsilon > 0$, there is a corresponding number $N$ such that $\left|f(x) - L\right| < \epsilon$ whenever $x \geq N\;[x \leq N]$.
So why does the formal definition of limit at infinity not start with a statement requiring a half-open interval, which is more general?
Is it because people want to match it with the formal definition of limit? I understand that the formal definition of one-sided limit requires an open interval because that is necessary to define a limit at a point $a$. But such a requirement does not exist for a limit at infinity, so why is the more general half-open-interval version not used in the formal definition of limit at infinity?
There is not really a difference between both approaches: If $f$ is defined on the open interval $(-\infty,a)$, then we may as well consider the restriction to the closed interval $(-\infty,a-1]$ and similarly vice versa. The reason that the open interval may be preferred is that the limit requires $f$ to be defined on a topological neighbourhood of $\infty$. A neighbourhood of $\infty$ is a set that contains an open set containing $\infty$, and the basic open sets are open intervals. So the following definition might be considered "best", but I'm afraid it is way less intuitive for the learner:
Limit At Infinity: Let $f\colon A\to\mathbb R$ be a function where $A$ is a punctured neighbourhood of $\infty$ in the two-point compactification of $\mathbb R$. Then we say ...
It's great to ask what hypotheses in a theorem or definition are really germane. What is important is that $f$ is defined on some interval $(a,\infty)$ for sufficiently large $a$, so that when we write down the $\epsilon$–$N$ definition we don't have to worry about pathologies.
It's not more general. If your function is defined on a closed interval $[a,\infty)$ then it's also defined on an open interval $(b,\infty)$ where $b > a$. And the values of the function on the interval $[a,b]$ don't matter at all when it comes to the limit of the function at $+\infty$.
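To make the equivalence explicit (a one-line check rather than part of the quoted definitions): if $\left|f(x) - L\right| < \epsilon$ whenever $x \geq N$, then in particular $\left|f(x) - L\right| < \epsilon$ whenever $x > N$; conversely, if it holds whenever $x > N$, then it holds whenever $x \geq N + 1$. So any $N$ witnessing one version of the definition immediately yields an $N$ witnessing the other, and both versions single out the same limits $L$.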
The question of whether the limit of $f(x)$ as $x\to +\infty$ exists is basically the same as asking whether $f$ can be extended from $(a,\infty)$ to $(a,+\infty]$ (see Wikipedia on the extended real line) such that it's continuous at $+\infty$.
|
# Blackbody math help
1. Jan 7, 2004
### Hydr0matic
Have a look at this site, specifically the calculation of Planck's formula. I'm sure you've seen it many times before.
What I need help with is calculating a similar formula based on my own idea instead of the oven model. I'll try to explain what I'm thinking by refering to parts of the above site.
Making it very short... in the oven model there are a bunch of oscillators in the walls of the oven, all oscillating at different frequencies, ie those that "fit" ($\lambda = 2a, a, 2a / 3 ..$). Planck's "fix" was to quantize the energy levels and assume higher energies were less likely to be emitted.
First, I just want to say that I think the oven model is very lacking when it comes to reflecting reality. Though perfect blackbodies don't exist, the "almosts" - ie stars - are best represented by a simple chaotic (huge) quantity of oscillators, all oscillating in random directions. There are no walls and no "fits". Unlike the oven model, a perfect blackbody would produce a perfectly continuous spectrum.
The model we're gonna focus on is the most realistic one, ie the gas analogy.
According to the theorem of equipartition of energy the oscillators should have an average energy
$$K_\textrm{av} = \frac{mv^2}{2} = \frac{3k_{b}T}{2}$$
and thereby average velocity
$$v_\textrm{av} = \sqrt{\frac{3k_{b}T}{m}}$$
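As a quick sense of scale (my own illustrative numbers, not from the linked page): for hydrogen atoms at a star-like temperature $T \approx 5800$ K this gives
$$v_\textrm{av} = \sqrt{\frac{3k_{b}T}{m_H}} \approx \sqrt{\frac{3\,(1.38\times10^{-23})(5800)}{1.67\times10^{-27}}}\ \mathrm{m/s} \approx 1.2\times10^{4}\ \mathrm{m/s},$$
so $v_\textrm{av}/c \approx 4\times10^{-5}$, i.e. the Doppler shifts introduced below are small fractional shifts around $f_0$.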
Similar to the oven model approach we must first figure out which wavelengths the oscillators are emitting. The main frequency $$f_0$$ is emitted perpendicular to the direction of oscillation and is thereby not doppler affected. $f_\textrm{max}$ and $f_\textrm{min}$ are emitted along the axis of oscillation since this is where most dopplereffect occurs.
ie
$$f_\textrm{max} = \left(f_0 \frac{c+v_\textrm{av}}{c-v_\textrm{av}}\right)$$
$$f_\textrm{min} = \left(f_0 \frac{c-v_\textrm{av}}{c+v_\textrm{av}}\right)$$
What I'm saying is that every single oscillator in the blackbody emits all frequencies that are part of the blackbody's spectrum. And unlike the oven model, the mechanism controlling the energy- and frequency-distribution is the oscillation itself. Let me explain..
In the case with the oven the only radiation that matters is the one radiated perpendicular to the oscillators' direction of oscillation (due to the "fit"). So for an oscillator in the oven wall
$$f_0 = f_\textrm{max} = f_\textrm{min}$$
So at any given time, a single oscillator only radiates a single frequency.
However, this is not the case in our gas analogy. Every single oscillator radiates in all directions at all time, and so all angles must be accounted for. But why should this mean oscillators radiate more than one frequency ? This is where dopplereffect comes in to play. Let me make an analogy with atoms and photons:
http://hydr0matic.insector.se/fysik/dopplershift.jpg [Broken]
The analogy is very simple. While the atoms in a gas bump around and vibrate, they radiate photons. Depending on where the observer is, the atom radiating a specific photon might be moving towards or away from him. This causes the photon to shift its frequency accordingly. If the atom is moving towards the observer while emitting the photon, the photon will be blueshifted. Now, if we had a gas with an infinite number of atoms, there would be an infinite number of relative angles to the observer, which of course would produce a continuous spectrum. This spectrum would stretch from its most redshifted frequency, $f_\textrm{min}$, to its most blueshifted one, $f_\textrm{max}$.
To this point the analogy with our blackbody gas model is excellent, but continuing will reveal some crucial differences.
The first difference is quantum related. A quantum atom will emit photons sporadically, but a classical oscillating charge will radiate continuously in every direction. This means that it would take an infinite amount of time for a quantum atom to emit the whole continuous spectrum, while it would only take a brief moment for the classical charge to do the same.
The most important difference from a blackbody perspective is the fact that photon intensity doesn't change with the relative angle. The observer will see as many unshifted photons as blueshifted and redshifted ones. If this were the case in our blackbody, the intensity wouldn't vary as a function of wavelength (as in the famous Planck curve). But on the other hand, if the atoms were oscillating charges and the photons were electromagnetic waves, the intensity would vary very much indeed. In fact, Maxwell's equations tell us that the amplitude of an electromagnetic wave is a function of the angle between the oscillation and the radiated wave. This means that the intensity is highest perpendicular to the oscillation, and lowest (zero) in the direction of oscillation. This is why the Planck curve slopes down again at higher frequencies ! There are two relationships controlling the intensity curve - dopplershift and Maxwell's equations. These two are related as follows:
More dopplershift = less intensity // No dopplershift = highest intensity
$$\theta = \angle$$ radiation / oscillation
$$D_\textrm{amount}$$ = The amount of dopplereffect
$$D_\textrm{amount} \propto \cos{\theta}$$
Amplitude of EM field $$A \propto \sin{\theta}$$ (no radiation in direction of oscillation)
Anybody feel like helping me formulate this properly ? (mathematically)
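Here is a minimal numerical sketch of the model described above, under my own simplifying assumptions (not a derivation): a single average speed $v$ instead of the full Maxwell–Boltzmann spread, a Doppler factor that varies with $\cos\theta$ between the $f_\textrm{min}$ and $f_\textrm{max}$ given earlier, and a radiated amplitude following the $\sin\theta$ dipole pattern, so intensity goes as $\sin^2\theta$:
```python
import numpy as np

# Illustrative constants -- none of these are fitted to a real blackbody.
c = 3.0e8       # speed of light, m/s
f0 = 5.0e14     # rest-frame frequency of the oscillators, Hz
v = 1.0e6       # average oscillator speed (from equipartition), m/s

# Isotropic orientations: cos(theta) uniform on [-1, 1], where theta is the
# angle between the oscillation axis and the line of sight to the observer.
n = 1_000_000
cos_t = np.random.uniform(-1.0, 1.0, n)
sin_t = np.sqrt(1.0 - cos_t**2)

# Doppler-shifted frequency, interpolating the f_max / f_min formulas above
# with cos(theta): full shift along the axis, no shift perpendicular to it.
f_obs = f0 * (c + v * cos_t) / (c - v * cos_t)

# Radiated intensity of an oscillating charge ~ sin^2(theta): zero along the
# oscillation axis, maximal perpendicular to it (dipole pattern).
weight = sin_t ** 2

# Intensity as a function of observed frequency.
hist, edges = np.histogram(f_obs, bins=200, weights=weight)
centers = 0.5 * (edges[:-1] + edges[1:])

print("intensity peaks near", centers[np.argmax(hist)], "Hz (f0 =", f0, "Hz)")
```
The resulting histogram peaks at $f_0$ and falls to zero at the most Doppler-shifted frequencies, which is the qualitative shape argued for above; a fuller treatment would also average over the speed distribution and over the spread of $f_0$ values among the oscillators.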
|
# [SOLVED]concept question frequency
#### dwsmith
##### Well-known member
Given $\sin\left[\frac{\pi ct}{L}\left(n + \frac{1}{2}\right)\right]$.
The period is $\tau = \frac{2L}{c\left(n + \frac{1}{2}\right)}$ so the frequency is $\frac{1}{\tau}$, correct?
Last edited:
#### MarkFL
Staff member
Yes:
$\displaystyle f=\frac{1}{\tau}$
#### dwsmith
##### Well-known member
Yes:
$\displaystyle f=\frac{1}{\tau}$
Is the period I obtained correct? That is, does the "yes" mean the period is right, or only that the frequency is the reciprocal of the period?
#### Ackbach
##### Indicium Physicus
Staff member
If you have a function $\sin(kt)$, find the period $\tau$ by setting $k\tau=2\pi$.
#### MarkFL
Staff member
Sorry for being vague.
I agree with the period you found and with the relationship between frequency and period that you stated.
$\displaystyle \tau=\frac{2\pi}{\omega}$
In your case $\displaystyle \omega=\frac{\pi c}{L}\left(n+\frac{1}{2} \right)$
And so:
$\displaystyle \tau=\frac{2\pi}{ \frac{\pi c}{L} \left(n+ \frac{1}{2} \right)}= \frac{2L}{c\left(n+\frac{1}{2} \right)}$
#### dwsmith
##### Well-known member
If you have a function $\sin(kt)$, find the period $\tau$ by setting $k\tau=2\pi$.
Is there a difference between frequency and natural frequency (eigenfrequencies)?
#### Ackbach
##### Indicium Physicus
Staff member
Is there a difference between frequency and natural frequency (eigenfrequencies)?
The term "natural frequency" refers to resonance. So if you have a forced mass-spring system, e.g., and you tune the forcing function to the same frequency as a term in the homogeneous solution, you end up with unstable behavior.
The term "frequency" just refers to what's going on in this thread.
There's also the term "angular frequency", which is represented by $\omega=2\pi f$.
Hope that's as clear as mud.
#### dwsmith
##### Well-known member
The term "natural frequency" refers to resonance. So if you have a forced mass-spring system, e.g., and you tune the forcing function to the same frequency as a term in the homogeneous solution, you end up with unstable behavior.
The term "frequency" just refers to what's going on in this thread.
There's also the term "angular frequency", which is represented by $\omega=2\pi f$.
Hope that's as clear as mud.
I am trying to find the natural frequency (eigenfrequency) of $u$.
How would I do that then?
$$u(x,t) = \sum_{n = 1}^{\infty}\sin\left[\frac{\pi x}{L}\left(n + \frac{1}{2}\right)\right]\left\{A_n\cos\left[\frac{\pi ct}{L}\left(n + \frac{1}{2}\right)\right] + B_n\sin\left[\frac{\pi ct}{L}\left(n + \frac{1}{2}\right)\right]\right\}$$
#### Ackbach
##### Indicium Physicus
Staff member
I am trying to find the natural frequency (eigenfrequency) of $u$.
How would I do that then?
$$u(x,t) = \sum_{n = 1}^{\infty}\sin\left[\frac{\pi x}{L}\left(n + \frac{1}{2}\right)\right]\left\{A_n\cos\left[\frac{\pi ct}{L}\left(n + \frac{1}{2}\right)\right] + B_n\sin\left[\frac{\pi ct}{L}\left(n + \frac{1}{2}\right)\right]\right\}$$
Well, I could be wrong, but I would say that all of your
$$f_{n}=\frac{c\left(n + \frac{1}{2}\right)}{2L}$$
are the eigenfrequencies. I don't think there's one single eigenfrequency.
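For example, with the sum starting at $n = 1$ as written above, the lowest few of these are
$$f_1 = \frac{3c}{4L}, \qquad f_2 = \frac{5c}{4L}, \qquad f_3 = \frac{7c}{4L},$$
i.e. odd multiples of $c/(4L)$ rather than an integer harmonic series.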
|
# Composition and Chemistry of the Neutral Atmosphere of Venus
• Ann Carine Vandaele, Belgian Institute for Space Aeronomy
### Summary
The atmosphere of Venus is quite different from that of Earth: it is much hotter and denser. The temperature and pressure at the surface are 740 K and 92 atmospheres respectively. Its atmosphere is primarily composed of carbon dioxide (96.5%) and nitrogen (3.5%), the rest being trace gases such as carbon monoxide (CO), water vapor (H2O), halides (HF, HCl), sulfur-bearing species (SO2, SO, OCS, H2S), and noble gases. Sulfur compounds are extremely important in understanding the formation of the Venusian clouds which are believed to be composed of sulfuric acid (H2SO4) droplets. These clouds completely enshroud the planet in a series of layers, extending from 50 to 70 km altitude, and are composed of particles of different sizes and different H2SO4/H2O compositions. These act as a very effective separator between the atmospheres below and above the clouds, which show very distinctive characteristics.
### Subjects
• Planetary Atmospheres and Oceans
Comparative planetology helps investigate the global processes occurring within different atmospheres and provides a wider perspective for the understanding of life and habitability. By comparing a wide range of compositions, chemistries, and climates, driving mechanisms can be better understood. Planetary atmospheric dynamics and chemistry are governed by the same physical laws in all atmospheres, only the values and ranges of the parameters—composition, thermal and radiation balance, internal structure, existence of a magnetic field, and so on—vary from one planet to another.
Venus is a wonderful natural laboratory to investigate the way in which quite different outcomes can be reached, although starting from similar building blocks. Understanding how the atmosphere of Venus evolved will help us to understand and better place in perspective the atmospheric evolution and climate of our own planet.
The atmosphere of Venus, which is mostly composed of $\mathrm{CO_2}$ (96.5%) and $\mathrm{N_2}$ (3.5%), with other chemical species present in trace amounts, such as $\mathrm{CO}$, $\mathrm{H_2O}$, $\mathrm{NO}$, $\mathrm{OCS}$, $\mathrm{SO_2}$, $\mathrm{HCl}$, and $\mathrm{HF}$ (see Table 1 for a detailed inventory of the atmosphere), is much hotter and denser than that of Earth, with temperature and pressure reaching 740 K and 92 atmospheres respectively at its surface. Cloud structures (Titov, Ignatiev, McGouldrick, Wilquet, & Wilson, 2018), made of sulfuric acid droplets ($\mathrm{H_2O\cdot H_2SO_4}$), completely enshroud the planet, physically separating the lower part of the atmosphere from the layers above.
#### Table 1: Chemical Composition of the Atmosphere of Venus
Venus and Earth formed from the same interstellar gas and dust, and therefore their initial composition should be very similar (Grinspoon, 2013). However liquid water has always been present at the surface of Earth, while Venus evolved towards a dry and hot planet. Some have proposed that Venus could have been formed in a drier region of the Solar nebulae, which would explain the current depletion in water (Lewis, 1974). But observations clearly indicate that, once in its history, Venus had more water than in the present epoch. Indeed, Venus might have harbored enough water to form an ocean. However, all this water was lost over time (Bullock & Grinspoon, 2013; Kasting, 1988). Water is photodissociated by ultraviolet (UV) radiation emitted by the Sun and reaching the upper layers of the atmosphere, generating hydrogen $(H)$ and oxygen $(O)$ atoms. The hydrogen atoms can escape the planet’s gravitational field under favorable conditions, i.e. when their thermal energy is high enough. This escape is confirmed by measurements of present-day D/H ratios, which demonstrate an overabundance of deuterium (heavy hydrogen, D) relative to hydrogen. The free oxygen produced by photo-dissociation reacted to create both carbon dioxide and sulfur dioxide.
Thermal escape of hydrogen is a slow process; at least hundreds of millions of years were required for Venus to lose its ocean (Kasting, 1988). During that time water vapor evaporating from the ocean and trapped in the atmosphere contributed to the increase of the atmosphere temperature, through the greenhouse effect. Water vapor is an even more potent greenhouse gas than carbon dioxide. Increasing the temperature also accelerated the evaporation of the ocean, leading to a situation where all the water entered the atmosphere, leading to eventual loss by escape of H following photodissociation. At some point, the surface of Venus became so hot that the carbon trapped in rocks sublimated into the atmosphere and combined with oxygen to form even more carbon dioxide, creating the dense atmosphere observed. The accumulation of $CO2$ was boosted by the lack of any efficient $CO2$ cycle, transferring $CO2$ back to the crust, as is happening on Earth: the $CO2$ emitted into the atmosphere, remained in the atmosphere.
### Chemistry of the Venusian Atmosphere
The chemistry in Venus’s atmosphere is controlled by ion–neutral and ion–ion reactions in the ionosphere, by photochemistry within and above the cloud global layer, and by thermal equilibrium chemistry, which prevails near the surface (Mills, Esposito, & Yung, 2007).
Above ~100 km altitude, the atmosphere of Venus is less dense and overlaps with the ionosphere: ion–neutral and ion–ion reactions along with photodissociation processes control the chemistry. In the middle atmosphere (~60–100 km), above the cloud layer, UV radiation from the Sun drives the processes through photochemistry. Little UV radiation reaches the lower atmosphere below 60 km, while the temperature gradually increases towards the surface. There the chemistry is controlled by thermal processes, called “thermodynamic equilibrium” chemistry, or thermochemistry. Venus’s clouds clearly define the transition between the lower and middle atmospheres, i.e. they separate the photochemistry-dominant and the thermochemistry-dominant regions (Esposito, Bertaux, Krasnopolsky, Moroz, & Zasova, 1997). The cloud and haze region extends from 30 to 90 km in altitude, with the main cloud deck located at 45–70 km. Within this region, heterogeneous chemistry on aerosols and cloud particles may also play an important role (Mills, Sundaram, Slanger, Allen, & Yung, 2006).
The first studies of the chemistry of the lower atmosphere were based on the assumption that thermochemical equilibrium is reached at each altitude (Esposito et al., 1997). This assumption is valid if the time to reach equilibrium is less than the mixing time, which is of the order of a few years in that region. Krasnopolsky and Parshev (1983) and Krasnopolsky and Pollack (1994) suggested that thermochemical equilibrium could serve as an approximation near the surface, but is generally invalid in the lower atmosphere. Gas-phase thermal equilibrium can only be reached below 1 km above the surface; above that level the atmosphere is more oxidizing than predicted by thermodynamics only. Fegley, Zolotov, Lodders, and Widemann, (1997) combined thermodynamics and chemical kinetics to study the lowest 10 km layer of the atmosphere. Finally, a self-consistent chemical kinetic scheme was developed for the lower atmosphere that does not apply the approximation of thermochemical equilibrium (Krasnopolsky, 2007, 2013b). The near-surface layer is dominated by surface–atmosphere exchange and interaction.
Very few observations of the atmosphere below 7 km above the surface exist; no precise measurements of N2 abundance have been reported (Von Zahn, Kumar, Niemann, & Prinn, 1983). Only the VEGA-2 probe has delivered a temperature profile (VEGA Balloon Science Team, & Seiff, 1987; Zasova, Moroz, Linkin, Khatountsev, & Maiorov, 2006), which indicated a very unstable vertical gradient near the surface. Lebonnois and Schubert (2017) proposed that this could be explained by a change in the atmospheric composition, with the presence of a non-homogeneous layer close to the surface, in which N2 abundance gradually decreases to near-zero at the surface. They further suggested that the atmosphere would not be in a gaseous phase anymore, but rather a super-critical $CO2/N2$ fluid mixture, i.e. the atmosphere is neither a gas, nor a liquid, but, as a mixture of gas and supercritical fluid, has some properties of both.
Two dominant chemical cycles have been identified in the Venus atmosphere (Krasnopolsky & Lefèvre, 2013; Mills & Allen, 2007; Mills et al., 2007)—the carbon dioxide and sulfur cycles.
The $\mathrm{CO_2}$ cycle includes the photodissociation by UV radiation of $\mathrm{CO_2}$ on the dayside
$$\mathrm{CO_2 + h\nu \rightarrow CO + O},$$
followed by the transport of most of the $\mathrm{CO}$ and $\mathrm{O}$ produced to the nightside, where recombination of $\mathrm{O}$ atoms produces excited oxygen $\mathrm{O_2(a^1\Delta_g)}$ (Gérard et al., 2017).
$$\mathrm{O + O + M \rightarrow O_2(a^1\Delta_g) + M}.$$
The de-excitation of oxygen to its ground state can occur through quenching
$$\mathrm{O_2(a^1\Delta_g) + M \rightarrow O_2 + M},$$
or through
$$\mathrm{O_2(a^1\Delta_g) \rightarrow O_2 + h\nu},$$
during which the $\mathrm{O_2}$ airglow emission is radiated at its characteristic wavelength of 1.27 μm.
The oxygen nightglow, which is illustrated in Figure 1, has been reported by a series of ground-based observations and by VIRTIS on board Venus Express. The investigation of the VIRTIS-M spectra (Gérard, Soret, Piccioni, & Drossart, 2014; Soret, Gérard, Montmessin, Piccioni, Drossart, & Bertaux, 2012) indicated that the emission peaks at an altitude around 97–99 km (Piccioni et al., 2009; Soret, Gérard, Montmessin, et al., 2012). The $\mathrm{O_2}$ airglow is characterized by a bright area of maximum emission centered near the antisolar point. These observations proved not only that the proposed chemistry was correct but also that the circulation models at those altitudes were correct in assuming a solar–antisolar circulation.
The direct recombination
$$\mathrm{CO + O + M \rightarrow CO_2 + M}$$
is spin-forbidden and is much slower than the $\mathrm{O}$ recombination to form $\mathrm{O_2}$. Photolysis would completely remove $\mathrm{CO_2}$ above the clouds in ~14,000 years and all $\mathrm{CO_2}$ from the whole atmosphere in ~5 million years. Moreover, this scheme should produce observable amounts of $\mathrm{O_2}$, which is not the case. Only upper limits of $\mathrm{O_2}$ have been detected, which would correspond to a uniform mixing ratio of 0.3–2 ppm above 60 km (Krasnopolsky, 2006b; Mills, 1999; Trauger & Lunine, 1983). This points to the probable production of $\mathrm{CO_2}$ through catalytic reactions. Several processes have been proposed, one of the most promising being a cycle of reactions involving the chlorine family (Krasnopolsky & Lefèvre, 2013; Krasnopolsky & Parshev, 1983; Mills & Allen, 2007):
$$\mathrm{Cl + CO + M \rightarrow ClCO + M}$$
$$\mathrm{ClCO + O_2 + M \rightarrow ClC(O)OO + M}$$
$$\mathrm{ClC(O)OO + X \rightarrow Cl + CO_2 + XO}$$
$$\mathrm{Net:\; CO + O_2 + X \rightarrow CO_2 + XO}$$
This mechanism has been confirmed in the laboratory (Pernice et al., 2004), by showing that the peroxychloroformyl radical $\mathrm{ClC(O)OO}$ is thermally and photolytically stable under the conditions prevailing in Venus’s mesosphere. None of the intermediate products have been observed in the atmosphere. However, Sandor and Clancy (2018) detected $\mathrm{ClO}$ (the final product of the cycle when $X=\mathrm{Cl}$) in the nightside atmosphere. This observation would corroborate the possibility of the chlorine catalytic cycle. $\mathrm{ClO}$ would then combine with atomic oxygen to produce $\mathrm{O_2}$ and $\mathrm{Cl}$,
$$\mathrm{ClO + O \rightarrow Cl + O_2}.$$
This reaction combined with the above cycle would lead to the net reaction:
$$\mathrm{CO + O \rightarrow CO_2}.$$
However, even considering these reactions, models still predict too high $\mathrm{O_2}$ abundances above the clouds (by a factor of 10). One of the most critical parameters of the models, to which the abundance of $\mathrm{O_2}$ is very sensitive, is the thermal stability of $\mathrm{ClCO}$. Recent assessments of this parameter would indicate that the proposed catalytic cycle might not be as efficient as first thought (Marcq, Mills, Parkinson, & Vandaele, 2017; Mills et al., 2007).
Another mechanism has been proposed (Mills et al., 2006): the oxidation of $\mathrm{CO}$ through reactions on or within aerosols or cloud particles. This would reduce the modeled value of the $\mathrm{O_2}$ abundance while, however, also reducing the abundance of $\mathrm{CO}$ in the lower mesosphere to levels significantly lower than those observed (Mills & Allen, 2007).
The sulfur oxidation cycle starts with upward transport of $\mathrm{SO_2}$ and its oxidation to form $\mathrm{SO_3}$, subsequently forming $\mathrm{H_2SO_4}$. Models have shown that $\mathrm{H_2SO_4}$ is formed in a thin layer of the atmosphere, centered at 66 km (Krasnopolsky & Lefèvre, 2013). Condensation of $\mathrm{H_2SO_4}$ and $\mathrm{H_2O}$ forms the clouds and haze.
$$\mathrm{CO_2 + h\nu \rightarrow CO + O}$$
$$\mathrm{SO_2 + O + M \rightarrow SO_3 + M}$$
$$\mathrm{SO_3 + H_2O + H_2O \rightarrow H_2SO_4 + H_2O}$$
$$\mathrm{H_2SO_4 + H_2O \rightarrow H_2SO_4 \cdot H_2O\ (cloud\ droplets)}$$
The sulfuric acid in the form of cloud droplets is transported by the meridional circulation to the poles, where it is transported downward (Imamura & Hashimoto, 1998). Eventually the droplets evaporate and $\mathrm{H_2SO_4}$ is decomposed to regenerate $\mathrm{SO_2}$.
$$\mathrm{H_2SO_4 \rightarrow SO_3 + H_2O}$$
$$\mathrm{SO_3 + CO \rightarrow SO_2 + CO_2}$$
This cycle of oxidation of $\mathrm{SO_2}$ to $\mathrm{H_2SO_4}$, followed by condensation, transport, evaporation, and decomposition, has been called the “fast atmospheric sulfur cycle” (Von Zahn et al., 1983) (see Figure 2).
Another cycle, the “slow sulfur cycle,” involving photochemical and thermochemical processes with reduced sulfur-bearing species ($\mathrm{OCS}$, $\mathrm{H_2S}$), has been proposed by Prinn (1978), based on the assumption that these species, and in particular $\mathrm{OCS}$, were the dominant sulfur-bearing species in the atmosphere of Venus. According to this cycle, the following photochemical reactions would occur in the middle atmosphere:
$Display mathematics$
$Display mathematics$
$Display mathematics$
while in the lower atmosphere, below the clouds, $\mathrm{H_2SO_4}$ thermally decomposes:
$Display mathematics$
$Display mathematics$
$Display mathematics$
Observations have demonstrated that $\mathrm{OCS}$ and $\mathrm{H_2S}$ are not the major sulfur species, but rather $\mathrm{SO_2}$. Revised schemes have been proposed (Krasnopolsky, 2007, 2013b, 2016; Mills & Allen, 2007; Yung et al., 2009).
A variation of the slow cycle has been proposed which involves the production of polysulfur $\mathrm{S_x}$. Upward transport of $\mathrm{SO_2}$ and $\mathrm{OCS}$ is followed by photodissociation, generating $\mathrm{S}$ atoms
$Display mathematics$
$Display mathematics$
which can react via
$Display mathematics$
to produce $\mathrm{S_2}$. Production of higher polysulfurs is possible through successive addition reactions such as
$Display mathematics$
Usually the term polysulfur refers to $\mathrm{S_x}$ with $x>3$. Polysulfurs absorb strongly in the UV, and have been proposed as the still-unidentified UV absorber in the upper atmosphere of Venus (Mills et al., 2007; Titov et al., 2018). Krasnopolsky (2017) showed, however, that the production of polysulfurs is negligible above the clouds, and therefore they can contribute only partially to the near-UV absorption; they cannot explain it by themselves.
The polysulfur branch of the slow sulfur cycle is completed in the lower atmosphere by
$Display mathematics$
The fast and slow cycles both imply that there must be a significant downward flux of $\mathrm{CO}$ from the middle to the lower atmosphere, balanced by an upward flux of $\mathrm{CO_2}$. Observations, however, favor conversion of $\mathrm{CO}$ to $\mathrm{OCS}$ at 30–45 km altitude. The model of Krasnopolsky (2013b) is a slightly modified version of the reaction set involved in the conversion of $\mathrm{CO}$ to $\mathrm{OCS}$ and requires a smaller flux of $\mathrm{CO}$ from the middle atmosphere. Nevertheless, the exchange of constituents through the cloud layer requires more study, in both modeling and observations.
There is a third sulfur cycle, involving surface–atmosphere interaction—the geological cycle. This cycle starts with the outgassing of reduced gases, $\mathrm{OCS}$ and $\mathrm{H_2S}$, from the crust. These weathering reactions are slow compared to the other cycles, and there are still no accurate measurements of the mineralogy of the surface of the planet. Different surface compositions have been proposed, including carbonate ($\mathrm{CaCO_3}$), wollastonite ($\mathrm{CaSiO_3}$), anhydrite ($\mathrm{CaSO_4}$), magnetite ($\mathrm{Fe_3O_4}$) or pyrite ($\mathrm{FeS_2}$). The following reactions have been proposed (Fegley & Treiman, 1992; Von Zahn et al., 1983):
$Display mathematics$
$Display mathematics$
In the case of these reactions, $\mathrm{OCS}$ and $\mathrm{H_2S}$ are formed, which will evolve into $\mathrm{SO_2}$ through photochemical reactions. When $\mathrm{SO_2}$ abundances exceed the thermochemical equilibrium relative to the surface minerals, $\mathrm{SO_2}$ will react, through thermochemical reactions, with carbonate rocks to form anhydrite, which will be transformed back to pyrite, closing the cycle.
$Display mathematics$
$Display mathematics$
### Observations of the Venusian Atmosphere
Lomonosov was the first to suspect the existence of an atmosphere around Venus, thanks to his observations of the 1761 Venus transit (Lomonosov, 1955): several observers reported the appearance of a halo when Venus entered and exited the Sun’s disk, which was attributed to the presence of an atmosphere on Venus. The systematic observation of Venus started in the early 20th century: the clouds’ structure and UV markings on the clouds were discovered (Dolfus, 1953); $CO2$ was detected (Adams & Dunham, 1932) and was shown to be the main constituent of the atmosphere (Belton, Hunten, & Goody, 1968; Connes & Connes, 1966); the greenhouse effect from $CO2$ combined with the role of the aerosols (clouds and hazes) was proposed to explain the high surface temperature, an idea which was further developed by Sagan (1960) in the early 1960s and confirmed by microwave observations from Earth (Drake, 1962; Mayer, McCullough, & Sloanaker, 1957) and a fly-by of Venus by Mariner 2 (Barath, Barrett, Copeland, Jones, & Lilley, 1964). This was the beginning of a long series of observations of the planet by different methods and instruments.
What we know of the atmosphere of Venus has been inferred from observations performed from Earth-based telescopes, including the Hubble Space Telescope, from space missions—the Venera and Mariner missions, Pioneer Venus, Magellan (see Fegley, 2004, for a review of these missions), and more recently Venus Express, and Akatsuki, and from entry probes and balloons—Venera missions, Pioneer Venus, and the VEGA probes.
Our knowledge of the composition of Venus’s atmosphere below the clouds comes essentially from observations of the thermal radiation emitted by the surface and different layers of the atmosphere, escaping through several near-infrared (IR) spectral atmospheric windows. These observations are only possible during nighttime, when there is no contribution from solar radiation. Thermal radiation emitted by the surface and lower atmosphere escapes towards space in a series of spectral atmospheric windows located between strong absorption features from $\mathrm{CO_2}$ and $\mathrm{H_2O}$. Below 2.4 μm, $\mathrm{H_2SO_4}$ cloud droplets scatter light conservatively, allowing light to escape, while above 2.5 μm the refractive index of $\mathrm{H_2SO_4}$ changes and cloud droplets start to absorb strongly, preventing any longer-wavelength radiation from escaping through the clouds. Following their discovery by Allen and Crawford (1984), several transparency windows have been identified between 1.0 and 2.4 μm. These emissions can only be observed during the night, since their magnitude is about four times weaker than the reflected solar contribution. Radiation in each of the windows is emitted at a particular altitude range: radiation around 1.01 μm is emitted from the surface; 1.10 and 1.18 μm from the atmosphere between the surface and an altitude of 15 km; 1.27 μm from 15–30 km altitude; 1.31 μm from 30–50 km altitude; 1.74 μm from 15–30 km altitude; and 2.3 μm from 26–45 km. By combining abundances retrieved from different windows, vertical profiles can be reconstructed.
To sound the upper layers of the atmosphere (above the clouds) different techniques are used. Ground-based observation in the UV, IR, sub-millimeter, and millimeter ranges, delivers global views and variations of abundances with latitude and longitude, seasons, and local times. These also include observations from the Hubble Space Telescope. Ground-based observatories provide monitoring of the atmosphere composition over long periods of time, in particular in between space missions. Spaceborne observations sound the atmosphere both in nadir geometry and solar/stellar occultation or limb viewing. They provide respectively integrated abundances and high vertically resolved profiles of abundances. Recently, two missions were dedicated to atmospheric observations, Venus Express (VEX) from ESA, the European Space Agency (Svedhem et al., 2007, Svedhem, Titov, Taylor, & Witasse, 2009; Taylor, Svedhem, & Head, 2018), and Akatsuki from JAXA, the Japanese Space Agency (Nakamura et al., 2007, 2016).
On VEX, different instruments worked together to sound the atmospheric composition from the surface up to the upper layers: VIRTIS (Piccioni et al., 2007) performed both nadir and limb observations in the infrared; SPICAV/SOIR (Bertaux, Nevejans, et al., 2007; Nevejans et al., 2006) probed the atmosphere using nadir and solar/ stellar observations in the infrared and the UV; The VeRA (Pätzold et al., 2007) experiment used radio occultation to sound the atmosphere.
On board Akatsuki, several cameras were sensitive to different spectral ranges, which allowed a detailed investigation of the dynamics of the atmosphere (Nakamura et al., 2017). The RS (Radio Science) experiment was sensitive to temperature and $\mathrm{H_2SO_4}$ vapor between 35 and 100 km (Imamura et al., 2017). The images provided by the UVI camera showed the spatial distribution of $\mathrm{SO_2}$ and the unknown absorber at the cloud tops, and characterized the cloud-top morphologies and haze properties.
Observations in the UV provide information on $\mathrm{SO}$ and $\mathrm{SO_2}$ above the clouds, since several absorption bands of both molecules are located in this spectral range: electronic transition bands $B^3\Sigma^- - X^3\Sigma^-$ (190–240 nm) and $A^3\Pi - X^3\Sigma^-$ (240–260 nm) of the monoxide, and analogue bands $\tilde{C}^1B_1 - \tilde{X}^1A_1$ (190–235 nm) and $\tilde{B}^1B_1 - \tilde{X}^1A_1$ (250–340 nm) of the dioxide. The strongest absorption bands are located in the 190–235 nm interval for both gases.
Ground-based observations at radio wavelengths in the millimeter and sub-millimeter ranges provide information on abundances of trace gases (Clancy, Sandor, & Moriarty-Schieven, 2012; Encrenaz et al., 2016; Encrenaz, Moreno, Moullet, Lellouch, & Fouchet, 2015; Sandor & Clancy, 2005, 2017; Sandor et al., 2010). Sub-millimeter spectra correspond to spectrally resolved absorption in the Venus mesosphere superposed on the spectrally featureless, continuum emission originating from the deeper and warmer atmosphere. Such spectra are pressure-broadened and are therefore fundamentally sensitive to mixing ratio and not to number density. Moreover the altitude distribution can be retrieved based on the shape of the observed absorption lines.
### Composition of the Venusian Atmosphere
Venus’s atmosphere is dominantly $CO2$ and $N2$ with smaller amounts of other chemical species. In the following, trace gas abundances are expressed in number density or volume mixing ratio (VMR). VMR is a dimensionless quantity defined by the ratio of the partial density (pressure) of the species and the total density (pressure) of the atmosphere. VMR are given in percentages for the main constituents, and parts per million (ppm, 10−6), parts per billion (ppb, 10−9), or even parts per trillion (ppt, 10−12) for trace gases.
Several attempts to tabulate the composition of Venus’s atmosphere have been published in the past, each trying to summarize and reconcile observational data of their epochs.
The Venus International Reference Atmosphere (VIRA) was compiled by Kliore Moroz and Keating (1985) presenting a synthesis of the best data on the neutral atmosphere and the ionosphere of the planet available at that time. This was the first attempt to summarize all the knowledge about Venus, providing a common standard reference through tables and averages. The original VIRA chapter dealing with the structure of the atmosphere from the surface to 100 km of altitude (Seiff et al., 1985) included tables of the vertical temperature and density considering no day–night, latitudinal, or synoptic variations in the lower atmosphere, but allowing for latitudinal variations in the middle layers (33–100 km). One chapter (Keating et al., 1985) was devoted to the structure and composition of the upper atmosphere. It gathered tables with temperature, total density, and densities of $CO2$, $O$, $CO$, $He$, $N$, and $N2$ for altitudes from 100 km to 250 km, considering no latitudinal variations. Data from 100 to 150 km were given only for noon and midnight conditions, while those for 150 to 250 km were provided for different local times. Von Zahn and Moroz (1985) summarized the composition of Venus’s atmosphere below 100 km altitude as known at that time.
Since then, many missions have yielded new and valuable information. A first attempt to update the VIRA model (VIRA 2) was put forward by Moroz and Zasova (1997). They considered new data provided by the VEGA 1 and 2 UV spectrometers (Bertaux, Widemann, Hauchecorne, Moroz, & Ekonomov, 1996; Linkin et al., 1986), Venera 15 and 16 infrared spectrometers and radio occultation experiments (Moroz et al., 1986; Oertel et al., 1985; Zasova, 1995; Zasova et al., 1996; Yakovlev, Matyugov, & Gubenko, 1991), as well as Near-Infrared Mapping Spectrometer (NIMS) observations obtained during the Galileo fly-by of Venus in 1990 (Carlson et al., 1993; Collard et al., 1993; Grinspoon, 1993; Roos et al., 1993; Taylor, 1995). They also included re-analysis of previous data (from previous Venera missions and Pioneer Venus) and some ground-based observations for abundances of trace gases (Bézard, de Bergh, Crisp, & Maillard, 1990; de Bergh et al., 1995; Pollack et al., 1993). The structure of the Venusian atmosphere was updated by Zasova, Moroz, and Linkin (2006) taking into account measurements of the vertical temperature profile by the VEGA spacecraft and balloons, the radio occultation measurements of Magellan, Venera 15, and Venera 16, as well as the temperature profiles derived from the Venera 15 IR spectrometer. They proposed a model of the atmosphere in the altitude range from 55 to 100 km consisting of tabulated temperatures dependent on local time for five latitudinal regions (<35°, 35°–55°, 50°–70°, 70°–80°, 85°). Nothing similar had ever been done for the composition of the Venusian atmosphere except for an updated version of the initial Table 5-1 of von Zahn and Moroz (1985) given in Moroz and Zasova (1997). However, the data corresponding to the higher altitudes (above 100 km) that were investigated in Keating et al. (1985) have never been updated.
Since then, several inventories have been compiled incorporating new observations (Bézard & de Bergh, 2007; de Bergh et al., 2006; Taylor, Crisp, & Bézard, 1997). The Venus Express mission and recent ground-based observations, however, yielded a wealth of new information on Venus’s atmosphere, from the surface up to the highest layers of the atmosphere (Gérard et al., 2017; Marcq et al., 2017).
Table 1 summarizes our current knowledge of Venus’s atmosphere. It provides the VIRA2 accepted abundances (Moroz & Zasova, 1997; von Zahn & Moroz, 1985), and the recommended values proposed by Taylor et al. (1997) and then lists more recent observations. These will be described in the following sections, each of them focused on one species or one species family (“Water Vapor,” “CO and OCS,” “Sulfur-Bearing Species,” “Halides (HBr, HF, HCl),” “Other Species (O2, OH, O3, ClO)”).
Most of these constituents have spatially and temporally variable abundances. The high variability of the atmosphere of Venus, and in particular that of the trace gases, is not yet fully understood. Models of the atmosphere are still unable to reproduce any such variability.
#### Water Vapor
The primary reservoir for $H$ on Venus lies in $H2O$ and in the $H2O·H2SO4$ cloud layers. Abundances of water vapor have been measured by numerous instruments, and it is now accepted that its abundance in the lower atmosphere is about 30 ppm (Bullock & Grinspoon, 2013; Marcq et al., 2017; Pollack et al., 1993; Taylor et al., 1997). This has been confirmed by more recent observations, both from Earth and from space. Ground-based observations of the lower atmosphere started after the discovery of the transparency windows by Allen and Crawford (1984). Chamberlain, Bailey, Crisp, and Meadows (2013) retrieved water abundance from spectra recorded by the Anglo-Australian Telescope between 0 and 15 km (31 ± 9 ppm). More recently, Arney et al. (2014) performed measurements using the Apache Point Observatory probing the atmosphere at different altitudes. These values were also confirmed by the observations by VIRTIS (Marcq et al., 2008; Tsang et al., 2010) and SPICAV-IR (Fedorova, Bézard, Bertaux, Korablev, & Wilson, 2015) both on board Venus Express.
The abundance of water above the clouds has, up to now, been very poorly constrained; indeed, only a few observations have been performed. The first reliable measurements were performed by Fink, Larson, Kuiper, and Poppen (1972). VIRTIS-H/VEX carried out dayside observations which provided water abundance above the clouds (Cottini et al., 2012). They found that the abundance at cloud top was 3 ± 1 ppm between −40° and +40° latitude, increasing to 5 ppm at high latitudes. Observations of both $H2O$ and its heavy isotopologue HDO by SPICAV-SOIR were performed routinely during the whole Venus Express mission. First results of SOIR (Bertaux, Vandaele, et al., 2007; Fedorova et al., 2008; Vandaele, Chamberlain, et al., 2016) showed a depletion around 85 km in both $H2O$ and HDO, an increase of HDO/$H2O$ above the clouds, and no noticeable temporal variability. Measurements using SPICAV-IR (Fedorova et al., 2016) indicated abundances of $H2O$ of 2 to 11 ppm within the clouds and near the cloud top altitude (59–66 km).
Ground-based observations in the millimeter and sub-millimeter range showed that the VMR of water in the mesosphere (70–100 km) is constant with altitude, ranging between 0 and 3.5 ppm (Sandor & Clancy, 2005). These measurements revealed strong temporal and spatial variations. Observations from the SWAS satellite performed between 2012 and 2014 (Gurwell, Melnick, Tolls, Bergin, & Patten, 2007) also confirmed that temporal variability of water in the mesosphere is important (variations of a factor of 50 over two days), and is dominant over the diurnal span. They suggested that these fluctuations might be due to moderate variations of the mesospheric temperature (by 10–15 K) inducing water condensation. This is not confirmed by sub-millimeter observations using the ALMA telescope (Encrenaz et al., 2015), which reported a uniform vertical mixing of 2.5 ppm, with a potential maximum in the late afternoon (by a factor of 2 to 3), nor by IR observations by TEXES (Encrenaz et al., 2013, 2016, 2012) or by CSHELL (Krasnopolsky, Belyaev, Gordon, Li, & Rothman, 2013). Note that some of these observations rely on the measurement of HDO and a conversion to $H2O$ abundances using a fixed D/H value. However, not all studies used the same value of D/H.
#### The D/H Ratio
The D/H isotopic ratio is highly variable among the different objects of the Solar System (Clayton, 2003) and is considered to be key in determining if Venus once had abundant water. Most of the processes that enable hydrogen isotopes to escape from the atmosphere strongly discriminate against the loss of deuterium, leading to an enrichment in the heavier isotope. The D/H ratio is usually derived from simultaneous measurements of $H2O$ and its heavy isotopologue, HDO, although it has also been derived from measurements of DCl/HCl and DF/HF. The bulk of Venus’s atmosphere exhibits a very high D/H ratio, more than 100 times the VSMOW value (Vienna Standard Mean Ocean Water) representative of Earth’s isotopic ratio. The first measurement of the D/H ratio was done by the neutral mass spectrometer on the Large Pioneer Venus Probe during its descent through the atmosphere, providing the value of 157 ± 30 (Donahue, Hoffman, Hodges, & Watson, 1982). De Bergh et al. (1991) reported a value of 120 ± 40 on the night side and lower atmosphere, which is compatible with the value of 135 ± 20 found by Marcq, Encrenaz, Bézard, and Birlan (2006) in the 30–40 km range. Reanalysis of Pioneer Venus data (Donahue, Grinspoon, Hartle, & Hodges, 1997) and ground-based dayside observations (Bjoraker, Larson, Mumma, Timmermann, & Montani, 1992) obtained similar results. Krasnopolsky et al. (2013) derived D/H values from observations of $HDO/H2O$, DCl/HCl and DF/HF reporting 95 ± 15, 190 ± 50, and 420 ± 200 respectively. The higher value found in the case of DCl/HCl was explained by the photochemistry of HCl which tends to enrich D in HCl in the mesosphere. The still higher D/H value derived from DF/HF may be explained by the fact that only DF was measured in Krasnopolsky et al. (2013) who used the averaged abundance of HF reported elsewhere (Krasnopolsky, 2010c; Vandaele et al., 2008). Using SOIR data, Fedorova et al. (2008) reported a value of 240 ± 25 from $H2O/HDO$ observations, while Vandaele et al. (2014) reported a value of 208 ± 155 from DF/HF measurements. The large error on this number is due to the difficulty in measuring DF abundance.
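As a point of reference for the numbers quoted above, the conversion from a measured $HDO/H2O$ ratio to a D/H ratio, and then to an enrichment factor relative to Earth, is a simple sketch; the factor of 1/2 is the standard relation between the molecular and atomic ratios, and the terrestrial value $(D/H)_{VSMOW} \approx 1.56 \times 10^{-4}$ used here is the commonly adopted standard rather than a number taken from this text:

$$\left(\frac{D}{H}\right)_{Venus} = \frac{1}{2}\,\frac{[HDO]}{[H_2O]}, \qquad f = \frac{(D/H)_{Venus}}{(D/H)_{VSMOW}},$$

so that, for example, a reported enrichment of $f \approx 157$ corresponds to $(D/H)_{Venus} \approx 2.4 \times 10^{-2}$.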
#### CO and OCS
Carbon monoxide $(CO)$ is the second most abundant carbon-bearing gas in Venus’s atmosphere. Its abundance (VMR) is altitude-dependent, decreasing towards the surface in the lower atmosphere, increasing with altitude above the clouds.
Carbonyl sulphide $(OCS)$ is the most abundant reduced sulfur gas in the sub-cloud atmosphere of Venus. Its abundance increases with decreasing altitude in the 25–50 km range by almost two orders of magnitude. Extrapolation of this gradient to the surface indicates that OCS abundance would reach tens of ppm at the surface. OCS above the clouds has been observed using the IRTF/CSHELL instrument (Krasnopolsky, 2010d), indicating a highly variable abundance (1–8 ppb) at 65 km altitude, with a scale height of 2.5 km.
$CO$ and $OCS$ have been observed below the clouds by ground-based instruments, see for example Bézard et al. (1990) for the first detection of $OCS$ and Pollack et al. (1993) for the first detailed interpretation of the nightside observations of $CO$ and $OCS$. Using the high-resolution spectra of VIRTIS-H, Marcq et al. (2008) showed that, at 36 km, $CO$ increased with latitude (24 ± 3 ppm to 31 ± 2 ppm from equator to 60°), while $OCS$ showed the reverse behavior. On the nightside, Tsang et al. (2008) observed tropospheric $CO$ and $OCS$ (2.5–4 ppm) at an altitude of 35 km. They reported higher $CO$ values at dusk compared to dawn. They also saw an increase of the $CO$ abundance from the equator to the poles, with a maximum at latitudes around 60° (23 ± 2 ppm to 32 ± 2 ppm from equator to 60°). In fact, the first hint of a possible variation of $CO$ with latitude was reported by Collard et al. (1993) based on measurements recorded by the Near-Infrared Mapping Spectrometer (NIMS) on board Galileo during its 1990 fly-by of Venus. Ground-based observations by Arney et al. (2014) provided measurements of $CO$ and $OCS$ below the clouds that confirmed the previous results. A recent re-analysis of all VIRTIS-H data (Haus, Kappel, & Arnold, 2015) definitively confirmed the current picture: $CO$ abundances near 35 km increase from 23 ± 1 ppm near 10° S up to a maximum value (31 ± 2 ppm) in the 65–70° S latitude band, then slightly decrease poleward to 29.5 ± 2.5 ppm at 80° S.
Observations at millimeter and sub-millimeter wavelengths have shown that $CO$ abundance is very variable at high altitudes (around 100 km) (Clancy & Muhleman, 1985a, 1991; Clancy, Sandor, & Moriarty-Schieven, 2008, 2012; Gurwell et al., 1995). These measurements also identified diurnal variations in $CO$ VMR above 90 km altitude. The amplitude of these variations differed from year to year. They also revealed long-term variations in the $CO$ profiles between 75 and 105 km. It was suggested that the observed interannual variations of $CO$ are related to interannual variations in the circulation of Venus’s mesosphere. Gurwell et al. (1995) reported the existence of a CO bulge at 90–100 km on the nightside (with a maximum at 100 km and at 2 a.m. local time). This was explained (Clancy & Muhleman, 1991) by considering the general subsolar-to-antisolar circulation in the mesosphere and a zonal retrograde flow below, in addition to inefficient destruction of $CO$ on the nightside. This can lead to a convergence of the flow in the vicinity of the antisolar point, where the bulge is created by subsidence. The return flow occurs lower in the atmosphere.
Irwin et al. (2008), using VIRTIS-M/VEX observations, reported latitudinal distribution of $CO$ above the cloud level. They found an average value of 40 ± 10 ppm (at 65–70 km) with little variation in the middle latitudes. $CO$ densities at higher altitudes (100–150 km) were obtained by Gilli (2012); Gilli et al. (2015), using the 4.7 μm non-LTE emission band of $CO$ measured by VIRTIS-H. Iwagami Yamaji, Ohtsuki, and Hashimoto (2010), observed a nearly uniform distribution of $CO$ above the clouds on the dayside, consistent with the findings of Krasnopolsky (2008). Using the $CO$ dayglow at 4.7 μm, Krasnopolsky (2014) retrieved abundances of 560 ± 100 ppm near 104 km altitude and 40 ± 5 ppm near 74 km at low and middle latitude (within ± 50°). Marcq et al. (2015) used high-resolution observations acquired by CSHELL/IRTF at 4.5 μm (corresponding to soundings at 70–76 km altitude) to search for latitudinal variations of CO on both day- and nightsides of Venus: $CO$ abundance was seen to increase from 35 ± 10 ppm at low latitude to 45 ppm past 30° N. Using VIRTIS-M data, Grassi et al. (2014) found that $CO$ exhibited a similar increase in the southern hemisphere between 40° S and 60° S (peaking at 70 ± 10 ppm), followed by a decrease poleward (60 ± 5 ppm at 70° S). SOIR on board Venus Express measured the CO profiles in the 65–150 km range of altitudes (Vandaele, Mahieux, et al., 2016; Vandaele et al., 2015). Evening abundances were found to be systematically higher than morning values at altitudes above 105 km, but the reverse was observed at lower altitudes. Higher abundances were observed at the equator than at the poles for altitude higher than 105 km (by a factor of 5 to 7), but again the reverse was seen at altitudes lower than 90 km (Figure 3.A). This illustrates the complexity of the 90–100 km region of the Venus’s atmosphere, where different wind regimes are at play.
Clancy et al. (2012) reported high variability of temperature and CO mixing ratio, both spatially and temporally. They showed periods where up to 24 K and 133% spatial variations of the temperature and the $CO$ VMR respectively were observed between 95 and 100 km. They also mentioned several cases of high temporal variability, with variations up to 100–200%. They definitively saw a correlation between a temperature increase above 95 km and an increase of the $CO$ abundance. They associated this correlation to the strong downward vertical advection, drawing the larger $CO$ VMR present at higher altitudes downward, leading to a strong compressional adiabatic heating. In Tsang et al. (2009), later confirmed by Tsang and McGouldrick (2017), $CO$ abundances obtained from VIRTIS observations are reported at an altitude of ~35 km and show unexpectedly high variability in space and time. At that time, the accepted scenario was that a Hadley-type cell would bring enriched $CO$ air from higher altitudes towards the lower layers of the atmosphere. However, the variability seen at 35 km led the authors to speculate that such a simple view was not consistent with the observations. They concluded that the meridional circulation could be more asymmetric than one single Hadley cell. This might imply that the strength of the downwelling might be variable. Such a complex circulation would also explain the high spatial and temporal variability seen in the SOIR data (Vandaele, Mahieux, et al., 2016; Vandaele et al., 2015).
Observations of tropospheric $CO$ (Tsang et al., 2008; Tsang & McGouldrick, 2017; Tsang et al., 2009), which indicate an increase of $CO$ abundance from equator to poles, with a maximum around 60° latitude, were interpreted as a proof of the existence of a Hadley cell circulation type. The uplifting of air occurs at the equator, and the descending branches of the Hadley cell have been shown to be located at latitudes close to 60° by General Circulation Modeling (see, e.g., Lee, Lewis, & Read, 2007), supporting the interpretation of the measurements of CO below the clouds. The following mechanism was then suggested (Marcq et al., 2015; Taylor, 1995; Taylor & Grinspoon, 2009; Tsang et al., 2008): $CO$ is produced through the photolysis of $CO2$ occurring at high altitudes above the clouds, the CO-rich air above the cloud deck is entrained below the clouds by the descending branch of the cell, leading to the observed bulge at 60° latitude in both hemispheres (Figure 3.B). Cotton, Bailey, Crisp, and Meadows (2012) performed ground-based observation of CO below the clouds, which confirmed the proposed scheme. However, contrary to previous studies, they considered that the observed variability did not originate at the level of sensitivity (around 36 km), but from above. SOIR observations led to the conclusion (Vandaele, Mahieux, et al., 2016) that the maximum altitude at which, or at least below which the meridional transport occurs, i.e. the location of the top zonal branch of the Hadley cell, would correspond to an altitude of 100 km. This supports the mechanism (Taylor, 1995; Taylor & Grinspoon, 2009) in which the deep Hadley circulation on Venus extends from well above the clouds to the surface and from the equator to the edge of the polar vortex.
$CO$ is produced above the clouds by the photolysis of $CO2$ by UV radiation (see for example Figure 2 in Krasnopolsky (2012), showing that $CO2$ photolysis is the main source of $CO$ above 70 km altitude). Since this process is driven by the Sun illumination, $CO$ production would then be the highest at the subsolar point. Gilli et al. (2015) observed maximum $CO$ abundances around noon, decreasing towards the morning and evening, with no significant difference between morning and afternoon values. SOIR observations (Vandaele, Mahieux, et al., 2016), on the contrary, indicated that above 100 km altitude, $CO$ is slightly more abundant at the evening terminator than in the morning, while the reverse is true below 100 km.
$CO$ is produced at high altitudes where the subsolar–antisolar circulation prevails (Bougher, Alexander, & Mayr, 1997). $CO$ is then transported from the subsolar point (or near the subsolar point) in the equatorial region to the higher latitudes, undergoing chemical interaction during the transport (Clancy & Muhleman, 1985b; Krasnopolsky, 2012), leading to a decrease of its abundance. This was observed by Gilli et al. (2015) who saw a clear decrease of $CO$ density from equator to pole at high altitudes (above 130 km). Below that altitude, the gradient was still present in the available data but lying within the noise. SOIR observations confirm this latitudinal trend (Vandaele, Mahieux, et al., 2016): $CO$ abundances at the equator are higher than high-latitude (60–80°) and polar (80–90°) abundances by a factor of 2 and 4 respectively at an altitude of 130 km. Below 100 km, the trend is reversed. In fact SOIR observes a convergence of all profiles around an altitude of 100 km, at which the inversions in the trends relative to morning/ evening and equator/ high-latitude/ pole values occur (Figure 3A). This is compatible with several previous observations which reported weak (Irwin et al., 2008; Marcq et al., 2015) or non-significant (Iwagami et al., 2010; Krasnopolsky, 2008, 2010a) latitudinal variations in different altitude ranges spanning the 65–76 km region. When a weak trend was mentioned, it corresponded to a slight increase from equator to poles.
#### Sulfur-Bearing Species
Several sulfur-bearing species have been unambiguously identified in Venus’s atmosphere: $SO2$, $SO$, $OCS$, $S3$, and $H2SO4$. $H2S$ was reported below 20 km by Pioneer Venus, but this has never been confirmed. Moreover, the reported abundance was at least one order of magnitude larger than expected by thermodynamic equilibrium chemistry (Fegley, Zolotov, et al., 1997).
During the Venera 11 probe descent, spectra of the scattered sunlight were recorded from which abundances of $S3$ and $S4$ were derived. A first investigation (Maiorov et al., 2005) yielded abundances of $S3$ ranging from 0.03 ppb at 3 km to 0.1 ppb at 19 km. A more recent re-analysis of these same spectra (Krasnopolsky, 2013b), provided revised numbers (11 ± 3 ppt at 3–10 km; 18 ± 3 ppt at 10–19 km) but also abundances for $S4$ (4 ± 4 ppt at 3–10 km; 6 ± 2 ppt at 10–19 km).
Abundances of $H2SO4$ below the clouds were provided by ground-based observations (Jenkins, Kolodner, Butler, Suleiman, & Steffes, 2002; Sandor, Clancy, & Moriarty-Schieven, 2012), and by radio occultations from the Pioneer Venus Orbiter (Jenkins & Steffes, 1991), from the Magellan spacecraft (Jenkins, Steffes, Hinson, Twicken, & Tyler, 1994; Kolodner & Steffes, 1998), and more recently by the VeRA experiment on board Venus Express (Oschlisniok et al., 2012). These revealed complicated meridional distribution and local-time dependence of the mixing ratio, which are not reproduced in numerical models. Indeed, abundances of about 1–2 ppm were found between 0° S and 70° S latitude in the altitude range from 50 to about 52 km, sometimes increasing to values of about 3 ppm on the dayside and 5 ppm on the night side near 50 km. The abundance at polar latitudes (>70° S) did not exceed 1 ppm within the considered altitude range.
The $H2SO4$ saturation vapor pressure depends strongly on temperature and concentration, leading to relatively abundant vapor (a few ppm) at the lower cloud boundary (48 km) and to concentrations below the detection limit of about 1 ppm higher in the atmosphere. This was confirmed by recent observations carried out by the radio occultation experiment on board Akatsuki (Imamura et al., 2017), which provided $H2SO4$ vapor profiles within and above the clouds. They found maximum values of 10 ppm at 48 km, decreasing at higher altitudes. They found that the $H2SO4$ vapor VMR roughly follows the saturation curve at cloud heights, suggesting equilibrium with the cloud particles. Sandor et al. (2012) performed sub-millimeter observations of $SO$, $SO2$, and $H2SO4$ with the JCMT, in the 70–100 km altitude range, with a maximum sensitivity at 80–85 km. They could only derive upper limits (1 ± 2 ppb) considering all observations together, while upper limits from single spectra ranged from 3 to 44 ppb. The observed $H2SO4$ vapor distribution suggests that its abundance is controlled by condensation and evaporation in the clouds, thermal decomposition in the lower atmosphere, and global-scale circulation (Imamura & Hashimoto, 1998, 2001; Krasnopolsky, 2015).
Sulfur dioxide $(SO2)$ is the most abundant sulfur-bearing species. Its abundance varies with altitude and is highly spatially and temporally variable.
The abundance of $SO2$ below the clouds was measured by the Venera 12 (Gel’man et al., 1979) and Pioneer Venus (Oyama et al., 1980) gas chromatographs. It can also be inferred from nightside measurements in the IR (2.4 μm). Space observations with VIRTIS-H (Marcq et al., 2008) and ground-based instruments (Arney et al., 2014; Bézard et al., 1993) confirmed the value recommended by Taylor et al. (1997) (130 ± 40 ppm). Microwave observations are also sensitive to the $SO2$ absorption, although their results strongly depend on the assumption made on the temperature profile. Such observations suggested low $SO2$ mixing ratios below the clouds, with values lower than 100 ppm at low latitude and lower than 50 ppm in polar regions (Butler, Steffes, Suleiman, Kolodner, & Jenkins, 2001; Jenkins et al., 2002). So far, the number and accuracy of the available measurements have not allowed the detection of any variations in $SO2$ below the clouds. Arney et al. (2014), using ground-based spectra in the 2.3 µm window, reported a hemispheric dichotomy with slightly more $SO2$ in the northern hemisphere, although the difference lies within the retrieval errors.
VEGA 1 and 2 probes provided $SO2$ profiles from the cloud top down to the surface (Bertaux et al., 1996), indicating the presence of peak structures at altitudes between 42 and 52 km with abundances reaching 210 ppm. Below 40 km both profiles exhibited the same decrease of $SO2$ with decreasing altitude. It has been proposed that VEGA $SO2$ profiles are decreasing further towards the surface, finally reaching thermochemical equilibrium. Extrapolating the value at the surface (25 ppm), the thermochemical lifetime of $SO2$ would be 320,000 years (Fegley, Klingelhöfer, Lodders, & Widemann, 1997). Clouds would eventually disappear over geological timescales, due to lack of $SO2$ to replenish the sulfuric acid. Detailed thermochemical modeling indicates that volcanic outgassing is the most probable primary source of sulfur dioxide in Venus’s lower atmosphere (Bullock & Grinspoon, 2001), an idea which has been supported by VEX observations of the surface thermal emission which strongly suggest recent or current volcanic activity (Shalygin et al., 2015; Smrekar et al., 2010).
The exchange of $SO2$ from the lower to the upper atmosphere is not fully understood, but most probably involves convective transport, presumably in conjunction with Hadley cell circulation. Secular variation of this circulation may alter the $SO2$ supply from the lower atmosphere, which could be the origin of the high variability seen above the clouds. Photochemical oxidation efficiently removes $SO2$ from Venus’s upper atmosphere, leading to the formation of the sulfuric acid droplets making up the clouds and haze enshrouding the planet. Therefore, the $SO2$ abundance above the clouds drops to ppb levels. Krasnopolsky (2012) suggested that the high variability of $SO2$ observed above the clouds could be explained by the photolysis of $SO2$ followed by the formation of $H2SO4$ near the cloud top, a process that is very sensitive to eddy diffusion.
Vandaele et al. (2017a, 2017b) investigated VEX-era observations of $SO2$ above the clouds. This obviously involved measurements from instruments on board VEX, but also ground-based observations in the sub-millimeter, using JCMT (Sandor et al., 2012; Sandor et al., 2010) and ALMA (Encrenaz et al., 2015), and in the IR with the TEXES/IRTF instrument (Encrenaz et al., 2013, 2016, 2012), as well as from the Hubble Space Telescope with the UV STIS (Space Telescope Imaging Spectrograph) instrument (Jessup et al., 2015). On VEX, both SPICAV-UV and SOIR (Belyaev et al., 2008; Mahieux, Vandaele, Robert, et al., 2015) observed $SO2$ above the clouds. SPICAV-UV performed measurements during solar occultation (Belyaev et al., 2012), stellar occultations (Belyaev et al., 2017) and in nadir mode (Marcq et al., 2011; Marcq, Bertaux, Montmessin, & Belyaev, 2013).
$SO2$ was first detected in the Venusian upper atmosphere from Earth-based ultraviolet observations with a mixing ratio of 0.02–0.5 ppm at the UV cloud top (Barker, 1979). Space-based identifications of $SO2$ UV absorption followed soon after, with the International Ultraviolet Explorer (IUE) (Conway, McCoy, Barth, & Lane, 1979) and the Pioneer Venus Orbiter (PVO) Ultraviolet Spectrometer (UVS) (Stewart, Anderson, Esposito, & Barth, 1979). The PVO/UVS observations showed a steady decline of the UV cloud top $SO2$ content from 100 ppb down to 10 ppb over a period of 10 years (Esposito et al., 1988). This decline was rather fast over the first year, and much slower later on. Esposito et al. (1988) interpreted this behavior as the result of a massive “injection of $SO2$ into the Venus middle atmosphere by a volcanic explosion.” IUE observations also confirmed this steep decline of $SO2$ abundance, reporting a decrease from 380 ± 70 ppb in 1979 down to 50 ± 20 ppb in 1988 (Na, Esposito, & Skinner, 1990). In 1995 a first measurement from the Hubble Space Telescope was performed, yielding a value of 20 ± 10 ppb (Na & Esposito, 1995), thus confirming the decrease in $SO2$ abundance with time. The VEX observations extended the long-term investigation of the evolution of $SO2$ abundance above the clouds (see right-hand panel of Figure 4): a clear increase in $SO2$ abundance by a factor of 2 was observed at the beginning of the mission from 2006 to 2007, followed by a decrease until 2012 by a factor of 5 to 10 (Marcq et al., 2013). These trends were also observed by SOIR (Mahieux, Vandaele, Robert, et al., 2015) and by SPICAV-UV (Belyaev et al., 2012) solar occultations, at the lowest altitudes sounded by the instrument (69–73 km): from 30 ppb (2007) to a maximum of 100 ppb (2009), decreasing again to 40 ppb (2013). However, the amplitude of the long-term trend was smaller than the short-term variations. Long-term variations were also reported by Encrenaz et al. (2016), showing that the disk-integrated $SO2$ VMR varied by a factor of 10 between 2012 and 2016, with a minimum in 2014 (30 ppb) and a maximum in 2016 (300 ppb).
Short-term spatial and temporal variability is best observed from Earth-based instruments, which can provide global snapshots of the visible part of the planet over relatively short time periods. The three left-hand panels of Figure 4 illustrate the short-term variations observed in $SO2$ abundance above the clouds. These maps were obtained with the TEXES spectrometer during three consecutive nights (Encrenaz et al., 2012). The spatial distribution of $SO2$ and the localization of the hot spots were different each night. This was confirmed by several observations (Encrenaz et al., 2013, 2016, 2012, 2015; Jessup et al., 2015; Sandor et al., 2012; Sandor, Clancy, Moriarty-Schieven, & Mills, 2010). Significant variability occurs within a few hours, consistent with a very fast photochemical or microphysical loss mechanism. Supported by ALMA observations, Encrenaz et al. (2015) showed that the short-scale variations were not correlated to day/night or latitude variations.
#### Halides (HBr, HF, HCl)
Hydrogen halides are active species, involved in all the main chemical cycles governing Venus’s atmosphere, i.e. the $CO2$, sulfur oxidation and the polysulfur cycles (Krasnopolsky & Lefèvre, 2013; Mills et al., 2007).
Krasnopolsky and Belyaev (2017) carried out a search for HBr using the CSHELL instrument at the NASA Infrared Telescope Facility in 2015. They derived an upper limit of 1 ppb at cloud top (78 km) and of 20–70 ppb below 60 km. Krasnopolsky and Belyaev (2017) further investigated the impact of such upper limits on the bromine chemistry. They concluded that the bromine chemistry might be effective on Venus, but, if the Cl/Br ratio in the atmosphere were similar to that in the Solar System, HBr would only reach 1 ppb in the lower atmosphere, rendering the bromine chemistry completely insignificant.
No new measurements of HF below the clouds have been reported since Taylor et al. (1997), and the recommended value of 5 ± 2 ppb is still valid. Krasnopolsky (2010c) showed that, above the clouds, HF is constant at 3.5 ± 0.2 ppb (at 68 km) in both morning and afternoon observations and in the latitude range ± 60°. The SOIR/VEX measurements showed that the HF VMR varies from 2.5–10 ppb at 80 km, increasing to 30–70 ppb at 103 km (Mahieux, Wilquet, et al., 2015). This would require a source at 103 km and a sink near 80 km.
A recent survey using IR ground-based spectrometry sounded the 15–25 km altitude range (Arney et al., 2014), providing $HCl$ abundances in agreement with previous observations (0.5 ppm). Moreover, these showed no evidence of any horizontal variability. Iwagami et al. (2008) confirmed this value below the clouds and also provided measurements of HCl above the clouds at 60–66 km altitude (0.76 ± 0.1 ppm). They suggested the existence of a production mechanism of $HCl$ within the clouds. However, both Krasnopolsky (2010c) and Sandor and Clancy (2012) reported lower abundances at cloud top (0.40 ppm). The SOIR instrument on board VEX provided $HCl$ vertical profiles (Mahieux, Wilquet, et al., 2015). These values at 70 km disagree by an order of magnitude with previous data obtained at cloud top or below and require an unidentified sink near 70 km and a source near 105 km.
Both the $HCl$ and $HF$ profiles measured by SOIR (Mahieux, Wilquet, et al., 2015) showed high short-term variability. $HCl$ values found at the equator and mid-latitudes are higher than at the poles. Sandor and Clancy (2012) did not see clear evidence of any local-time variations.
#### Other Species (O2, OH, O3, ClO)
Only upper limits of the $O2$ abundance have been obtained (Mills, 1999; Trauger & Lunine, 1983). Krasnopolsky (2006a) proposed an even more restrictive interpretation of the measurements of Trauger and Lunine (1983). However, the existence of $O2$ has been acknowledged by several studies of its airglow emissions, which have intensively investigated the vertical, temporal, and geographic distribution emitted by excited $O2$ decaying radiatively to the ground-state (Gérard et al., 2017).
Hydroxyl radical was detected in Venus’s upper atmosphere through its nightside airglow emission (Piccioni et al., 2008) using VIRTIS instrument on board Venus Express. The $OH$ emission bands were unambiguously identified in the range 1.40–1.49 μm (Δν = 2 sequence of the $OH$ Meinel band) and 2.6–3.14 μm (Δν = 1 sequence). $OH$ was observed between 85 and 110 km, peaking at an altitude of 96 ± 2 km (Gérard, Soret, Saglam, Piccioni, & Drossart, 2010; Migliorini, Piccioni, Moinelo, Cardesi, & Drossart, 2011; Soret, Gérard, Piccioni, & Drossart, 2012). Krasnopolsky (2010d) detected airglow $OH$ lines using ground-based observations. Gérard, Soret, Piccioni, and Drossart (2012) showed that the $OH$ and $O2$ emissions are highly spatially correlated, indicating the role of $O$ atoms as a precursor of both emissions.
SPICAV-UV detected ozone $(O3)$ in the atmosphere of Venus (Montmessin et al., 2011), showing that it is vertically confined in layers in the thermosphere (between 90 and 120 km, with a mean altitude of 99 km), with densities of the order of $10^7$–$10^8$ molecules cm$^{-3}$.
Sandor and Clancy (2018) performed the first ever measurement of chlorine monoxide $(ClO)$ in the mesosphere. Their observations are compatible with a constant ClO VMR of 2.6 ± 0.5 ppb within a layer located above 85 ± 2 km and extending up to 90–100 km. Temporal variation of a factor of 2 was clearly observed. This altitude distribution of $ClO$ supports the observations of $O3$ and the interpretation proposed by Montmessin et al. (2011), i.e. the observed $O3$ and $ClO$ profiles are consistent with a chlorine-catalyzed ozone-destruction scheme above 90 km. The model of Krasnopolsky (2010d) predicted an ozone layer at 94 km but with 200 times more O3 than observed. This led to a revision of the nighttime photochemical model considering fluxes of $O,N,H$, and $Cl$ from the dayside (Krasnopolsky, 2013a). This model better simulates the observed ozone layer, but predicts 48 ppb of $ClO$ at 88 km, which is in contradiction with the observations.
High-resolution spectra of Venus covering the nitric oxide $(NO)$ fundamental band at 5.3 μm were acquired using the TEXES/IRTF instrument (Krasnopolsky, 2006b). A simple photochemical model for $NO$ and $N$ in the 50–112 km range was coupled to a radiative transfer code to simulate the observed absorption features of the $NO$ and some $CO2$ lines, providing an $NO$ VMR of 5.5 ± 1.5 ppb below 60 km. Krasnopolsky (2006b) assumed that lightning is the only known source of $NO$ in the lower atmosphere of Venus. NO nightside airglow was observed by the Pioneer Venus OUVS spectrometer (Stewart, Gérard, Rusch, & Bougher, 1980). SPICAV/VEX performed observations in the limb and nadir directions (Stiepen, Gérard, Dumont, Cox, & Bertaux, 2013; Stiepen, Soret, Gérard, Cox, & Bertaux, 2012), and the 1.224 μm $NO$ transition was unambiguously detected by VIRTIS on two occasions (Garcia-Munoz, Mills, Piccioni, & Drossart, 2009).
### Conclusions
The exploration of the atmosphere of Venus started in the early 20th century through ground-based observations, and benefited from the early Venera and Pioneer Venus missions. Results of the Venus Express and Akatsuki missions, combined with recent ground-based observations have dramatically improved our knowledge of the atmosphere of Venus. Abundances of most of the trace gases have been investigated from the surface up to the upper layers of the atmosphere. The atmospheres below and above the clouds differ in many respects, having quite different circulation patterns, different chemical composition, as well as different (photo)chemical processes linking the atmospheric constituents. As an example, $SO2,OCS$, and $H2O$ show a drastic change in their mixing ratio of at least one order of magnitude (even more for $SO2$) between the regions below and above the clouds.
Another aspect which is still to be understood is the high variability of the atmosphere of Venus, seen both in terms of spatial and temporal variability. This is observed in the abundances of the neutral atmosphere, as has been shown in this article, but also in the airglow emissions occurring in the upper layers (Gérard et al., 2017), in the clouds’ structures (Titov et al., 2018) and in the thermal structure of the atmosphere (Limaye et al., 2018). Observations indicate that the variation of the incident solar flux plays only a minor role in the rapid changes, which suggests the existence of a non-steady transport. Gravity waves have been proposed to be a potential source of variability, but up to now, there is no evidence that they would provide the required amplitude. Nevertheless, the different waves—gravity, Kelvin, Rossby, and tidal waves—exist on Venus and are thought to play an important role in the general circulation of the atmosphere via their contribution to momentum transport, which varies with altitude and latitude (Sánchez-Lavega, Lebonnois, Imamura, Read, & Luz, 2017).
Although the main cycles driving the composition of the atmosphere have been identified, much work remains to confirm them with observations and to improve models. New chemical schemes are required to fully understand the existing observations, and models should treat photochemistry, dynamics, and cloud microphysics in a coherent approach that encompasses the whole atmosphere, from the ground to the upper layers.
As for observations of the composition of Venus’s atmosphere, new missions should focus on observations that would enable us to better understand the different chemical cycles, in particular that of sulfur, and the exchanges from the surface to the lower atmosphere and then to the middle and upper atmosphere. This encompasses searches for potential sources at the surface, like volcanism and hot spots (Smrekar et al., 2010), through measurements of surface emissivity, and better characterization of the compositional gradients and variability below and above the clouds, especially at cloud top. This would enable us to disentangle the different processes involved in the sulfur cycle—photochemistry, dynamics, and microphysics—and provide insights on the possible mechanisms that drive the observed variability (climate variability, circulation variations, active volcanism, etc.). Sensing the surface and sub-surface at the same time, together with their changes in composition and structure, would provide insights into surface–atmosphere interactions. This is in fact the objective of the EnVision mission, an M5 ESA mission scheduled for launch in 2032 (Ghail et al., 2017). This mission will combine observations by radars and spectroscopic instruments (operating in the UV to the IR) to provide a global view of the planetary surface and interior and their relationship with the atmosphere.
### Glossary
Nadir observations—The instrument on board a spacecraft orbiting the planet is looking down towards the surface of the planet. Measurements are sensitive to the thermal radiation emitted by the surface (the only contribution during night observations) and the solar radiation reflected by the surface and/or scattered by the atmosphere.
Limb observations—Atmospheric remote-sounding technique involving observing radiation emitted or scattered from the limb, which is the portion of a planetary (or stellar) atmosphere at the outer boundary of the disk, viewed “edge on.” Limb viewing provides a much longer path through the atmosphere, and looking through a larger mass of air improves the chances of observing sparsely distributed substances (Livesey, 2014).
Stellar/solar occultation—An atmospheric remote sounding technique involving observing radiation emitted (or reflected) by a distant body (solar, stellar, lunar, or an orbiting satellite), transmitted along a limb path through an absorbing and/or scattering planetary atmosphere, and detected by a remote observer (Livesey, 2014). The stellar/ solar occultation technique is a powerful method of gaining information on the vertical structure of atmospheres. At sunset, the recording of spectra starts well before the occultation occurs (the solar spectrum outside the atmosphere is used for referencing), and continues until the line of sight crosses the planet. At sunrise, the recording of spectra continues well above the atmosphere to provide the corresponding reference. Transmittances are obtained by dividing the spectra measured through the atmosphere by the reference spectrum recorded outside the atmosphere. In this way, transmittances become independent of instrumental characteristics, such as the absolute response or the ageing of the instrument and in particular of the detector. Such observations provide high vertical resolution.
Venus Express—The first European mission to Venus (Svedhem et al., 2007). Its overarching science objectives covered the atmosphere, plasma environment, and surface properties. The payload consisted essentially of instruments inherited from the Mars Express and Rosetta missions and comprised a combination of spectrometers, spectro-imagers and imagers covering a wavelength range from ultraviolet to thermal infrared, a plasma analyzer and a magnetometer. Launched in November 2005, it arrived at Venus in April 2006 and directly started sending back science data until 16 December 2014, when ESA announced that the Venus Express mission had ended. Venus Express dived into the planet’s upper atmosphere to altitudes only 165 km above the surface during a series of low passes in the period 2008–2013, in order to measure the density of the upper polar atmosphere. The campaign showed that the upper layers of the atmosphere were a surprising 60% thinner than predicted and showed high density variability.
Akatsuki—A Japanese mission to Venus. The Planet-C mission was approved in 2001 and was launched in May 2010 (Nakamura et al., 2007). At that time, as usual for Japanese missions, Planet-C was renamed Akatsuki, “Morning Star.” After a first failed orbital insertion attempt, Akatsuki was placed in a solar orbit with a period slightly shorter than the orbital period of Venus. Later, Akatsuki successfully entered Venus orbit on its second attempt in December 2015 (Nakamura et al., 2016). Its current orbit is highly elliptical, near-equatorial.
Akatsuki was designed to observe the atmospheric dynamics of Venus and to better understand the atmospheric super-rotation. To address its scientific objectives, five cameras cover different spectral ranges: IR1 (InfraRed 1 μm camera) (Iwagami et al., 2018), IR2 (InfraRed 2 μm camera) (Satoh et al., 2017), UVI (UltraViolet Imager) (Yamazaki et al., 2018), LIR (Longwave InfraRed camera), and LAC (Lightning and Airglow Camera). Combining these cameras’ observations allows detection of atmospheric motions at different altitudes. Radio occultation (RS, Radio Science) is sensitive to temperature and H2SO4 vapor between 35 and 100 km (Imamura et al., 2017). Akatsuki has already provided many significant scientific results, one of the most compelling being the discovery of large-scale stationary gravity waves in the atmosphere (Fukuhara et al., 2017).
• Marov, M. Ya., & Grinspoon, D. H. (1998). The planet Venus. Yale University Press.
This comprehensive book provides a detailed synopsis of Venus, with particular attention to the history of its observation and exploration, the planet’s formation, and the development of the runaway greenhouse effect.
• Hunten, D. M., Colin, L., Donahue, T. M., & Moroz, V. I. (Eds.). The series Venus. University of Arizona Press, 1983; Bougher, S. W., Hunten, D. M., & Phillips, R. J. (Eds.). (1997). Venus II: Geology, geophysics, atmosphere, and solar wind environment. University of Arizona Press; and Bézard, B., Russell, C. T., Satoh, T., Smrekar, S. E., & Wilson, C. F. (Eds.). (2018). Venus III, Space Science Reviews. Springer.
These publications are comprehensive sources of information on different aspects of the planet. Each of them summarizes the knowledge of their time. The third volume reports on the achievements gained in the era of the Venus Express mission.
• Mackwell, S., Simon-Miller, A., Harder, J., & Bullock, M. (2013). Comparative climatology of terrestrial planets. University of Arizona Press.
This compendium gives a wide overview of the current understanding of atmospheric formation and climate evolution. Particular emphasis is given to surface–atmosphere interactions, mantle processes, photochemistry, and interactions with the interplanetary environment, all of which influence the climatology of terrestrial planets.
|
## Triple Roman domination subdivision number in graphs
Keywords: Triple Roman domination number, Triple Roman domination subdivision number.
### Abstract
For a graph $G=(V, E)$, a triple Roman domination function is a function $f: V(G)\longrightarrow\{0, 1, 2, 3, 4\}$ having the property that for any vertex $v\in V(G)$, if $f(v)<3$, then $f(\mbox{AN}[v])\geq|\mbox{AN}(v)|+3$, where $\mbox{AN}(v)=\{w\in N(v)\mid f(w)\geq1\}$ and $\mbox{AN}[v]=\mbox{AN}(v)\cup\{v\}$. The weight of a triple Roman dominating function $f$ is the value $\omega(f)=\sum_{v\in V(G)}f(v)$. The triple Roman domination number of $G$, denoted by $\gamma_{[3R]}(G)$, equals the minimum weight of a triple Roman dominating function on $G$. The *triple Roman domination subdivision number* $\mbox{sd}_{\gamma_{[3R]}}(G)$ of a graph $G$ is the minimum number of edges that must be subdivided (each edge in $G$ can be subdivided at most once) in order to increase the triple Roman domination number. In this paper, we first show that the decision problem associated with $\mbox{sd}_{\gamma_{[3R]}}(G)$ is NP-hard and then establish upper bounds on the triple Roman domination subdivision number for arbitrary graphs.
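As a quick illustration of the definition (not taken from the paper), the sketch below checks the defining condition of a triple Roman dominating function on a small graph given as an adjacency list; the graph, the function values, and the helper name are all hypothetical choices for this example.

```python
def is_triple_roman_dominating(adj, f):
    """adj: dict vertex -> iterable of neighbours; f: dict vertex -> value in {0, ..., 4}."""
    for v, neighbours in adj.items():
        if f[v] >= 3:
            continue
        an = [w for w in neighbours if f[w] >= 1]        # AN(v): active neighbours of v
        weight_closed = f[v] + sum(f[w] for w in an)     # f(AN[v])
        if weight_closed < len(an) + 3:                  # defining condition violated at v
            return False
    return True

# Example: the path P3 on vertices 0-1-2.
p3 = {0: [1], 1: [0, 2], 2: [1]}
print(is_triple_roman_dominating(p3, {0: 1, 1: 3, 2: 1}))  # True:  f(AN[v]) >= |AN(v)| + 3 at every vertex
print(is_triple_roman_dominating(p3, {0: 2, 1: 1, 2: 2}))  # False: at v = 0, f(AN[0]) = 3 < |AN(0)| + 3 = 4
```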
|
By definition, an indefinite integral is determined only up to an integration constant. When the integral depends on parameters, the constant of integration also depends on those parameters. Some choices of integration constant may lead to undesirable results. For example:
fricas
integrate(x^n, x)
(1)  x^(n+1)/(n+1)
Type: Union(Expression(Integer),...)
fricas
integrate(sinh(a*x), x)
(2)  cosh(a*x)/a
Type: Union(Expression(Integer),...)
In both cases the default choice of integration constant leads to a singularity in the parameter: the first result is singular at n = -1, the second at a = 0, while there is a choice of integration constant which avoids such a singularity. Namely, in the first case (x^(n+1) - 1)/(n+1) has a removable singularity at n = -1, and in the second case (cosh(a*x) - 1)/a has a removable singularity at a = 0. However, even with such a choice of integration constant, simply plugging the parameter value into the indefinite integral does not work; we get division by zero during evaluation (see Division by zero during evaluation).
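One way to see the removable-singularity claim concretely is the following sketch, which uses SymPy in Python rather than FriCAS, purely as an independent check of the limits:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
n, a = sp.symbols('n a')

# Antiderivative of x^n with the integration constant chosen as -1/(n + 1)
F = (x**(n + 1) - 1) / (n + 1)
print(sp.limit(F, n, -1))   # log(x): the singularity at n = -1 is removable

# Antiderivative of sinh(a*x) with the integration constant chosen as -1/a
G = (sp.cosh(a * x) - 1) / a
print(sp.limit(G, a, 0))    # 0: matches integrate(sinh(0*x), x) = 0
```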
|
## ABR Jaco repo public release!
https://github.com/abr/abr_jaco2
We’ve been working with Kinova’s Jaco$^2$ arm with joint torque sensors for the last year or so as part of our research at Applied Brain Research, and we put together a fun adaptive control demo and got to show it to Justin Trudeau. As you might have guessed based on previous posts, the robotic control used force control. Force control is available on the Jaco$^2$, but the API that comes with the arm has much too slow an update time for practical use for our application (around 100Hz, if I recall correctly).
So part of the work I did with Pawel Jaworski over the last year was to write an interface for force control to the Jaco$^2$ arm that had a faster control loop. Using Kinova’s low level API, we managed to get things going at about 250Hz, which was sufficient for our purposes. In the hopes of saving other people the trouble of having to redo all this work to begin to be able to play around with force control on the Kinova, we’ve made the repo public and free for non-commercial use. It’s by no means fully optimized, but it is well tested and hopefully will be found useful!
The interface was designed to plug into our ABR Control repo, so you’ll also need that installed to get things working. Once both repos are installed, you can either use the controllers in the ABR Control repo or your own. The interface has a few options, which are shown in the following demo script:
import numpy as np

import abr_jaco2
from abr_control.controllers import OSC

robot_config = abr_jaco2.Config()
interface = abr_jaco2.Interface(robot_config)
ctrlr = OSC(robot_config)

# instantiate things to avoid creating 200ms delay in main loop
zeros = np.zeros(6)  # one entry per joint (assuming the 6-DOF Jaco2)
ctrlr.generate(q=zeros, dq=zeros, target=np.zeros(3))
# run once outside main loop as well, returns the cartesian
# coordinates of the end effector
robot_config.Tx('EE', q=zeros)

interface.connect()
interface.init_position_mode()
interface.send_target_angles(robot_config.INIT_TORQUE_POSITION)

target_xyz = [.57, .03, .87]  # (x, y, z) target (metres)
interface.init_force_mode()

while 1:
    # returns a dictionary with q, dq
    feedback = interface.get_feedback()
    # ee position
    xyz = robot_config.Tx('EE', q=feedback['q'], target_pos=target_xyz)
    u = ctrlr.generate(feedback['q'], feedback['dq'], target_xyz)
    interface.send_forces(u, dtype='float32')

    error = np.sqrt(np.sum((xyz - np.array(target_xyz))**2))
    if error < 0.02:
        break

# switch back to position mode to move home and disconnect
interface.init_position_mode()
interface.send_target_angles(robot_config.INIT_TORQUE_POSITION)
interface.disconnect()
You can see you have the option for position control, but you can also initiate torque control mode and then start sending forces to the arm motors. To get a full feeling of what is available, we’ve got a bunch of example scripts that show off more of the functionality.
Here are some gifs featuring Pawel showing the arm operating under force control. The first just shows compliance of normal operational space control (on the left) and an adaptation example (on the right). In both cases here the arm is moving to and trying to maintain a target location, and Pawel is pushing it away.
You can see that in the adaptive example the arm starts to compensate for the push, and then when Pawel lets go of the arm it overshoots the target because it’s compensating for a force that no longer exists.
So it’s our hope that this will be a useful tool for those with a Kinova Jaco$^2$ arm with torque sensors exploring force control. If you end up using the library and come across places for improvement (there are many), contributions are very appreciated!
Also a big shout out to the Kinova support team that provided many hours of support during development! It’s an unusual use of the arm, and their engineers and support staff were great in getting back to us quickly and with useful advice and insights.
|
# Product eigenstate eigenvalues
1. Jan 21, 2009
### quasar_4
1. The problem statement, all variables and given/known data
What are the eigenvalues of the set of operators (L1^2, L1z, L2^2, L2z) corresponding to the product eigenstate $$\left| m_1 l_1 \right\rangle \left| m_2 l_2 \right\rangle$$?
PS: If you have Liboff's quantum book, this is problem #9.30.
2. Relevant equations
We've also been learning about the Clebsch–Gordan coefficients, so they might play a role. I can't figure out enough of the tex on this website to type them (it doesn't seem to like my LaTeX commands, even though I'm usually pretty well-versed in it!).
We also have that | l1l2m1m2> = |l1m1> |l2m2>.
3. The attempt at a solution
I'm very, very confused about these eigenvalues. I am not even sure what the eigenvalue equation looks like. I thought that since | l1l2m1m2> = |l1m1> |l2m2>, maybe the eigenvalues are just the same as | l1l2m1m2> eigenvalues, so that's my current best guess, but I'm not sure. I am confused as to how or if this is related to somehow finding Clebsch–Gordan coefficients. Basically, anything you can tell me will help. In fact, I'd love help just interpreting the problem (I'm not really sure what it is I'm looking for to find these eigenvalues). Any hints? Anyone?
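For reference, a minimal sketch of the standard angular-momentum eigenvalue relations (the conventions with $$\hbar$$ are assumed here, not quoted from the problem statement); each operator acts only on its own factor of the product state:

$$L_1^2\,|l_1 m_1\rangle|l_2 m_2\rangle = \hbar^2 l_1(l_1+1)\,|l_1 m_1\rangle|l_2 m_2\rangle, \qquad L_{1z}\,|l_1 m_1\rangle|l_2 m_2\rangle = \hbar m_1\,|l_1 m_1\rangle|l_2 m_2\rangle,$$

and analogously $$\hbar^2 l_2(l_2+1)$$ and $$\hbar m_2$$ for $$L_2^2$$ and $$L_{2z}$$.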
|
Recall that bonds are formed from the overlap of electron cloud density from two atomic orbitals. In molecular orbital (MO) theory, developed by Friedrich Hund and Robert S. Mulliken, electrons are portrayed as waves and all of a molecule's electrons are assigned to a set of molecular orbitals rather than to individual atoms; the electrons are thus present in orbitals that can be associated with several nuclei. Wave functions are the basic set of functions that describe a given atom's electrons, and they are modified in chemical reactions: the shape of the electron cloud changes according to the type of atoms participating in the bond. Molecular orbitals are commonly constructed as linear combinations of atomic orbitals (the LCAO method), a quantum superposition of atomic orbitals; when n atomic orbitals combine, they form n molecular orbitals. The phase of an orbital is a direct consequence of the wave-like properties of electrons; the sign of the phase has no physical meaning except when orbitals are mixed to form molecular orbitals.

Two atomic orbitals with the same phase overlap constructively, forming a bonding molecular orbital with the bulk of the electron density located between the two nuclei. The reduction in the energy of these electrons is the driving force for chemical bond formation. Atomic orbitals can also interact out of phase, leading to destructive cancellation and no electron density between the nuclei. The resulting antibonding orbital, whose energy is higher than that of the original atomic orbitals, has one or more nodes in the bonding region, and any electrons it holds occupy lobes pointing away from the internuclear axis. A bond whose molecular orbital is symmetric with respect to rotation around the bond axis is a sigma bond (σ-bond); if the phase changes around the axis, the bond is a pi bond (π-bond).

For a diatomic molecule, an MO diagram shows the energetics of the bond between the two atoms: the energies of the unbonded atomic orbitals are drawn on the sides, the molecular orbitals in between, and the levels are filled with electrons symbolized by small vertical arrows whose directions indicate the electron spins. The unbonded energy levels are higher than those of the bound molecule, which is the energetically favored configuration. Filling follows the Aufbau principle (orbitals are filled starting with the lowest energy) and the Pauli exclusion principle (a molecular orbital holds a maximum of two electrons, with opposite spins to minimize their repulsion). The filled MO that is highest in energy is the highest occupied molecular orbital (HOMO), and the empty MO just above it is the lowest unoccupied molecular orbital (LUMO).

Bond order is defined as half the difference between the number of bonding and antibonding electrons: bond order = (number of bonding electrons - number of antibonding electrons) / 2. A bond order of one, as in H2, indicates a stable bond; promoting an electron of H2 into the antibonding orbital gives a bond order of zero, effectively a broken bond. In He2 the bonding and antibonding contributions cancel to a bond order of zero, which is why dihelium does not exist, whereas removing an electron from the antibonding level produces He2+, which is stable in the gas phase with a bond order of (2 - 1)/2 = 0.5. Li2 is a stable molecule in the gas phase with a bond order of one. Hydrogen, nitrogen, and oxygen are stable homonuclear diatomic molecules (molecules consisting of two identical atoms), and the MO energy diagrams of N2 and of O2/F2 differ in the relative energies of the σ2p and π2p orbitals. MO theory also explains the paramagnetism of the oxygen molecule and the bonding in molecules that violate the octet rule, which Lewis structures cannot describe.

MOs for homonuclear diatomic molecules contain equal contributions from each interacting atomic orbital, whereas MOs for heteronuclear diatomics contain different atomic orbital contributions. Atomic orbitals only mix appreciably when their energies are similar, and atomic orbital energy correlates with electronegativity: electronegative atoms hold their electrons more tightly, lowering their energies. In carbon monoxide (CO), the oxygen 2s orbital is much lower in energy than the carbon 2s orbital, so the degree of mixing is low. In HCl, the chlorine atom is much more electronegative than the hydrogen atom; consequently, the molecule has a large dipole moment, with a negative partial charge δ- at the chlorine atom and a positive partial charge δ+ at the hydrogen atom, which is why HCl is very soluble in water and other polar solvents. A hybridization picture complements this: in BeH2, the Lewis structure shows that beryllium makes two bonds and has no lone pairs, and the two sp hybrid orbitals give the molecule its linear geometry.

A molecule's chemical formula and structure are the two important factors that determine its properties, particularly its reactivity. The molecular formula reflects the exact number of atoms of each element, while the empirical formula is the simplest integer ratio (water, for instance, always consists of a 2:1 ratio of hydrogen to oxygen atoms); isomers share the same atomic composition while being different molecules, and dimethyl ether, for example, has the same elemental ratios as ethanol. Molecules also have fixed equilibrium geometries (bond lengths and angles) about which they continuously oscillate through vibrational and rotational motions; for these purposes a heteronuclear diatomic molecule can be modeled as two point masses (the two atoms) connected by a massless spring.
To move in waves alle Elektronen des Moleküls einem Satz Molekülorbitalen zu the outside sandwiching the MO case gives bond. Ehefrau Meta Sophie, geborene Wertheimer these must make 4 sigma symmetry molecular orbitals: Online version Liberles! Review Sections 1.5 and 14.1 before you begin to study this section function as below... The antibonding orbital, MOs for heteronuclear diatomics contain different atomic orbital, is. Levels for rotation and vibration the following problem ( s ) monofluoride: the interhalogen molecule, chlorine monofluoride of... Of molecules with the bulk of the atomic orbitals which are similar in.. Are one-electron functions centered on the “ mixing ” or combining of orbitals can overlap in two ways, on... S2P and p2p orbitals MO theory is it not used or by subtraction of wave function as shown.... Liberles, Arno: I want to learn very fast through my computer this mixing of atomic and orbitals. Overlapped ( mixed ) centered on the outside sandwiching the MO starting with the lowest.. Makes 2 bonds and has no lone pairs and that the bond axis called! These electrons have opposite spin in order to minimize the repulsion between them semiempirical method. Hydrogen, the covalent bond between the number of compositional atoms is filling the formed. The homonuclear diatomic molecules are composed of only two atoms ) connected by a massless spring accounts the... Diagrams differ only in the center is lower than that of the homonuclear diatomic molecule its molecular orbital is. °C, clf condenses as a pale yellow liquid using the popup menu at the upper molecular orbital MO! Predicting the strength and is stable even at high temperatures for Lists Search for a bond given! Maximum number of bonding is a powerful and extensive approach which describes electrons as either particle wave... The other electrons remain in three lone pairs and that the maximum number bonding! At least three covalently-bonded atoms, atomic orbitals have a constructive overlap, forming a molecular orbital theory experts a... A total of 6 electrons to add to the electronegativity difference of zero as a backup! Electron configurations of atoms are described as wave functions are modified in chemical structures hydrogen molecular ion ( H +... Calculate molecular orbitals Darstellung ihrer Isoflächen veranschaulicht werden orbital that can hold a maximum of electrons! The b 2 molecule is a single entity composed of only two atoms ) connected by a spring. You can define, and any electrons in chemical reactions—the electron cloud changes—according. Often dashed diagonal lines, often dashed diagonal lines, often dashed diagonal lines, often dashed diagonal lines connect. Orbitals with electrons symbolized by small vertical arrows, whose directions indicate the electron the. Are typically shown in the chemical bond a compound ‘ s empirical formula is,. 12, 2009 at 5:14 p.m.: I want to learn very fast through my computer the unbonded energy of... Orbital diagrams are diagrams of MO energy levels for rotation and vibration axis called... Molecules the pi symmetry molecular orbitals that are symmetric with respect to rotation around bond! They have identical symmetry Basis functions are modified—the electron cloud shape changes—according to the electronegativity difference of molecular orbital theory by komali mam total... Advanced context, the bonding MOs are called molecular chemistry or molecular physics, depending on their phase relationship polyatomic.
|
# Derandomize MAX-CUT problem using $\log n$ bits
Consider the MAX-CUT problem. We can flip $$n$$ coins to generate a random cut, and by linearity of expectation the cut has expected size equal to half the number of edges, so with "good probability" the cut will be at least that large.
Using pseudorandom generators (XORs of the seed bits, for example) we can generate $$n$$ pairwise independent bits from $$\log n$$ truly random bits. Using that approach, we can de-randomize the MAX-CUT problem in polynomial time.
With that algorithm we only check about $$n$$ possible cuts, out of a total of $$2^n$$. Is it guaranteed that a "good" cut is among these cuts? Why?
Consider a graph with $$n$$ vertices and $$m$$ edges.
Let $$\mathcal{D}$$ be a pairwise independent distribution over $$\{0,1\}^n$$, and suppose that $$x = (x_1,\ldots,x_n) \sim \mathcal{D}$$. For every edge $$(i,j)$$, the probability that it is cut in the cut corresponding to $$x$$ is $$\Pr[x_i \neq x_j] = \frac{1}{2},$$ due to pairwise independence. Therefore the expected number of edges cut is exactly $$\frac{m}{2}$$, by linearity of expectation. In particular, there is at least one realization of $$x$$ (that is, one point in the support of $$\mathcal{D}$$) which cuts at least $$m/2$$ edges.
• Thanks. $\mathcal{D}$ is over $\{0,1\}^n$, which corresponds to the algorithm that generates $n$ bits (and the de-randomized version loops over $2^n$ options). My question is about the pseudo-random algorithm, which uses $\log n$ random bits instead, and so loops over $n$ options. Why does its support contain a "good" cut as well? – galah92 Jun 18 '19 at 11:53
• My answer is also about the pseudorandom distribution. The vector $x$ isn’t sampled uniformly. Rather, it is sampled according to a pairwise independent distribution $\mathcal D$. – Yuval Filmus Jun 18 '19 at 11:58
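To make the de-randomization concrete, here is a minimal Python sketch (my own addition, not from the original post; the function name and the 4-cycle test graph are illustrative). Each vertex is indexed by a non-empty subset of roughly $\log n$ seed bits, its side of the cut is the parity of the seed restricted to that subset (the XOR construction mentioned in the question), and we enumerate all seeds and keep the best cut, which by the argument above must cut at least half the edges.

```python
import math

def derandomized_max_cut(n, edges):
    """Enumerate the support of a pairwise-independent distribution
    and return the best cut found (its size is at least m/2)."""
    k = math.ceil(math.log2(n + 1))        # ~log n truly random seed bits
    masks = [i + 1 for i in range(n)]      # vertex i <-> a non-empty subset of the seed bits
    best_size, best_cut = -1, None
    for seed in range(2 ** k):             # only 2^k = O(n) candidate cuts
        # x_i = parity of the seed bits indexed by vertex i's subset (XOR construction)
        x = [bin(seed & m).count("1") & 1 for m in masks]
        size = sum(1 for u, v in edges if x[u] != x[v])
        if size > best_size:
            best_size, best_cut = size, x
    return best_size, best_cut

# tiny check on a 4-cycle (m = 4 edges): the best enumerated cut has size >= 2
print(derandomized_max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```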
|
Posted on:
#### 09 December 2019
50 percent tuition discount for international students with financial need
In general, international students accepted at St. Scholastica are responsible for securing the money necessary to attend.
However, the College awards 50 percent tuition scholarships to well-prepared international students who are accepted to the College and have provided financial information demonstrating special need. Students are expected to have a strong academic background with at least a 3.0 (out of 4.0) average. The scholarship is renewable for up to three additional years or until receiving a bachelor's degree as long as you maintain full-time enrollment status and Satisfactory Academic Progress (SAP). SAP is defined as a 67% completion rate and a 2.0 GPA, or better. Transfer students are welcome to request scholarship assistance.
2017-18 annual expenses
Tuition: $35,634; Housing: $9,522 (including meal plan)
|
Let x_1, x_2, ..., x_n be n observations and x̄ the arithmetic mean; the standard deviation is given by ...
Updated On: 27-06-2022
Transcript
Hello. The question is: let x_1, x_2, ..., x_n be n observations and let x̄ be the arithmetic mean; the standard deviation is given by which expression? In order to calculate the standard deviation we first need the variance. The variance is the average of the squared deviations from the mean, so the formula for the variance is $\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2$.
Moreover, the standard deviation in terms of the variance is $\sigma = \sqrt{\text{variance}}$. Expanding the square in the variance formula and using $\frac{1}{n}\sum_{i=1}^{n}x_i = \bar{x}$ gives an equivalent form, so the final formula for the standard deviation is $\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2 - \bar{x}^2}$.
This is the formula for the standard deviation, and the correct option in the question is option number 3. So this is the solution.
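As a quick numerical check of the two equivalent formulas in the transcript, here is a short Python snippet (my own illustration, not part of the original page); the data set is arbitrary.

```python
import math

x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # arbitrary sample data
n = len(x)
mean = sum(x) / n

var_deviation_form = sum((xi - mean) ** 2 for xi in x) / n     # (1/n) * sum (x_i - xbar)^2
var_moment_form = sum(xi ** 2 for xi in x) / n - mean ** 2     # (1/n) * sum x_i^2 - xbar^2

# both square roots print 2.0 for this data set
print(math.sqrt(var_deviation_form), math.sqrt(var_moment_form))
```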
|
# How do you solve x^3-x^2<9x-9?
Jul 29, 2016
$x < - 3$ or $1 < x < 3$
#### Explanation:
This inequality factors as:
${x}^{2} \left(x - 1\right) < 9 \left(x - 1\right)$
If $x = 1$ then both sides are $0$ and the inequality is false.
Case $\boldsymbol{x > 1}$
$\left(x - 1\right) > 0$ so we can divide both sides by $\left(x - 1\right)$ to get:
${x}^{2} < 9$
Hence $- 3 < x < 3$
So this case gives solutions $1 < x < 3$
Case $\boldsymbol{x < 1}$
$\left(x - 1\right) < 0$ so we can divide both sides by $\left(x - 1\right)$ and reverse the inequality to get:
${x}^{2} > 9$
Hence $x < - 3$ or $x > 3$
So this case gives solutions $x < - 3$
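For readers who want to double-check the result numerically, here is a small Python spot check (my own addition): the inequality holds exactly where $f \left(x\right) = {x}^{3} - {x}^{2} - 9 x + 9$ is negative, and the sampled points agree with the claimed solution set $x < - 3$ or $1 < x < 3$.

```python
def f(x):
    # x^3 - x^2 < 9x - 9  is equivalent to  f(x) = x^3 - x^2 - 9x + 9 < 0
    return x**3 - x**2 - 9*x + 9

for x in [-4, -3.5, -2, 0, 1.5, 2.5, 3.5]:
    in_claimed_set = (x < -3) or (1 < x < 3)
    print(x, f(x) < 0, in_claimed_set)   # the last two columns always agree
```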
|
What is the net force acting on a yo-yo when it doesn't move?
We get the formula Force = mass x acceleration $\left(F = m a\right)$ from Newton's second law of motion. Since the yo-yo is at rest, its acceleration is 0, and the net force acting on it is 0. $F = m \times 0 = 0 \text{N}$
|
# Browser not resolving names
Discussion in 'Networking' started by matulike, Aug 8, 2006.
Not open for further replies.
Joined:
Aug 7, 2006
Messages:
33
Hi, I put this in the OS section before, but I think it's maybe more appropriate here? -
Hi,
I've got a weird problem! My (WinXPProSP2) PC will not resolve any DNS names in a browser - I've tried IE and FireFox. BUT it will resolve from a command prompt!
Also no other applications are able to resolve: eg cannot update anti virus or any other software update, cant connect to MSN messenger, etc.
The even weird-er thing is that I CAN resolve fine within a virtual machine! That's what I'm using now, a SuSe virtual machine on an XP Pro SP2 host.
Well, just wondering if anyone had come across a similar problem?!
I think this only happened after I removed Panda anti virus and tried to install Kaspersky - but the installer I had didn't work so I put McAfee 8 (Corp from work) on. I can't see how that would have this effect though.
I've had a look at my hosts file, it only has one entry: 127.0.0.1 localhost.
I also have a 'hosts.msn' file too - but that has the same single entry.
I've also recently removed a VPN client. NetScreen Secure remote. The only reason I mention that is that I had to add a DNS suffix for the domain I was connecting to, but I have set everything back to how it was/should be!
Any help'll be hugely appreciated!
2. ### datamonger
Joined:
Jul 25, 2006
Messages:
244
matulike,
Go to a command prompt. Type: ipconfig /all>c:\ipconfig.txt
Then open the ipconfig.txt file, copy all and paste it into a reply.
Joined:
Aug 7, 2006
Messages:
33
OK, here it is - note the 2 Virtual machine adapters...they've been there for a while and shouldn't cause any problems...besides, I'm actually typing this over one of those connections as it's the only way I can resolve DNS in a browser (in a VM!!)
Windows IP Configuration
Host Name . . . . . . . . . . . . : dada
Primary Dns Suffix . . . . . . . :
Node Type . . . . . . . . . . . . : Unknown
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : VMware Virtual Ethernet Adapter for VMnet8
Physical Address. . . . . . . . . : 00-50-56-C0-00-08
Dhcp Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 192.168.232.1
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : VMware Virtual Ethernet Adapter for VMnet1
Physical Address. . . . . . . . . : 00-50-56-C0-00-01
Dhcp Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 192.168.31.1
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Intel(R) PRO/1000 MT Desktop Adapter #4
Physical Address. . . . . . . . . : 00-07-E9-39-54-B8
Dhcp Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 192.168.254.30
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 192.168.254.254
DNS Servers . . . . . . . . . . . : 87.194.0.66
192.168.254.254
80.253.114.39
87.194.0.67
82.197.65.252
Joined:
Aug 7, 2006
Messages:
33
In case it helps, here's a hiJackthis log too:
Logfile of HijackThis v1.99.1
Scan saved at 22:42:02, on 08/08/2006
Platform: Windows XP SP2 (WinNT 5.01.2600)
MSIE: Internet Explorer v7.00 (7.00.5450.0004)
Running processes:
C:\WINDOWS\System32\smss.exe
C:\WINDOWS\SYSTEM32\winlogon.exe
C:\WINDOWS\system32\services.exe
C:\WINDOWS\system32\lsass.exe
C:\WINDOWS\system32\Ati2evxx.exe
C:\WINDOWS\system32\svchost.exe
C:\Program Files\Windows Defender\MsMpEng.exe
C:\WINDOWS\System32\svchost.exe
C:\WINDOWS\system32\spoolsv.exe
C:\WINDOWS\system32\netdde.exe
c:\reskit\srvany.exe
C:\Program Files\Network Associates\Common Framework\FrameworkService.exe
C:\Program Files\Network Associates\VirusScan\Mcshield.exe
C:\Program Files\Network Associates\VirusScan\VsTskMgr.exe
C:\WINDOWS\system32\r_server.exe
C:\WINDOWS\System32\tcpsvcs.exe
C:\Program Files\VMware\VMware Workstation\vmware-authd.exe
C:\Program Files\Common Files\VMware\VMware Virtual Image Editing\vmount2.exe
C:\WINDOWS\system32\vmnat.exe
C:\WINDOWS\system32\vmnetdhcp.exe
C:\WINDOWS\SYSTEM32\Ati2evxx.exe
C:\Program Files\Java\jre1.5.0_06\bin\jusched.exe
C:\Program Files\Elaborate Bytes\VirtualCloneDrive\VCDDaemon.exe
C:\Program Files\Windows Defender\MSASCui.exe
C:\Program Files\ATI Technologies\ATI.ACE\cli.exe
C:\Program Files\SlySoft\AnyDVD\AnyDVD.exe
C:\Program Files\iTunes\iTunesHelper.exe
C:\Program Files\Network Associates\VirusScan\SHSTAT.EXE
C:\Program Files\Common Files\Network Associates\TalkBack\TBMon.exe
C:\Program Files\iPod\bin\iPodService.exe
C:\Program Files\Common Files\Symantec Shared\ccApp.exe
C:\WINDOWS\system32\devldr32.exe
C:\WINDOWS\system32\ctfmon.exe
C:\Program Files\ATI Technologies\ATI.ACE\CLI.exe
C:\WINDOWS\System32\svchost.exe
C:\Program Files\ATI Technologies\ATI.ACE\cli.exe
C:\Program Files\FTPRush\FtpRush.exe
C:\mirc\mirc.exe
C:\Program Files\Common Files\Symantec Shared\ccSetMgr.exe
C:\Program Files\Common Files\Symantec Shared\ccEvtMgr.exe
C:\Program Files\Common Files\Symantec Shared\CCPD-LC\symlcsvc.exe
C:\Program Files\Norton Ghost\Agent\VProSvc.exe
C:\Program Files\VMware\VMware Workstation\vmware.exe
C:\Program Files\VMware\VMware Workstation\bin\vmware-vmx.exe
C:\WINDOWS\explorer.exe
C:\Program Files\iTunes\iTunes.exe
C:\Program Files\Hijackthis\HijackThis.exe
R1 - HKCU\Software\Microsoft\Internet Connection Wizard,ShellNext = http://windowsupdate.microsoft.com/
O2 - BHO: (no name) - {53707962-6F74-2D53-2644-206D7942484F} - C:\PROGRA~1\SPYBOT~1\SDHelper.dll
O2 - BHO: SSVHelper Class - {761497BB-D6F0-462C-B6EB-D4DAF1D92D43} - C:\Program Files\Java\jre1.5.0_06\bin\ssv.dll
O2 - BHO: Windows Live Sign-in Helper - {9030D464-4C02-4ABF-8ECC-5164760863C6} - C:\Program Files\Common Files\Microsoft Shared\Windows Live\WindowsLiveLogin.dll
O4 - HKLM\..\Run: [SunJavaUpdateSched] C:\Program Files\Java\jre1.5.0_06\bin\jusched.exe
O4 - HKLM\..\Run: [VirtualCloneDrive] "C:\Program Files\Elaborate Bytes\VirtualCloneDrive\VCDDaemon.exe" /s
O4 - HKLM\..\Run: [RemoteControl] "C:\Program Files\CyberLink\PowerDVD\PDVDServ.exe"
O4 - HKLM\..\Run: [PCMService] "C:\Program Files\CyberLink\PowerCinema\PCMService.exe"
O4 - HKLM\..\Run: [NeroFilterCheck] C:\WINDOWS\system32\NeroCheck.exe
O4 - HKLM\..\Run: [Windows Defender] "C:\Program Files\Windows Defender\MSASCui.exe" -hide
O4 - HKLM\..\Run: [ATIPTA] "C:\Program Files\ATI Technologies\ATI Control Panel\atiptaxx.exe"
O4 - HKLM\..\Run: [ATICCC] "C:\Program Files\ATI Technologies\ATI.ACE\cli.exe" runtime
O4 - HKLM\..\Run: [AnyDVD] C:\Program Files\SlySoft\AnyDVD\AnyDVD.exe
O4 - HKLM\..\Run: [iTunesHelper] "C:\Program Files\iTunes\iTunesHelper.exe"
O4 - HKLM\..\Run: [ShStatEXE] "C:\Program Files\Network Associates\VirusScan\SHSTAT.EXE" /STANDALONE
O4 - HKLM\..\Run: [McAfeeUpdaterUI] "C:\Program Files\Network Associates\Common Framework\UpdaterUI.exe" /StartedFromRunKey
O4 - HKLM\..\Run: [Network Associates Error Reporting Service] "C:\Program Files\Common Files\Network Associates\TalkBack\TBMon.exe"
O4 - HKLM\..\Run: [ccApp] "C:\Program Files\Common Files\Symantec Shared\ccApp.exe"
O4 - HKLM\..\Run: [Norton Ghost 10.0] "C:\Program Files\Norton Ghost\Agent\GhostTray.exe"
O4 - HKLM\..\Run: [Atomic.exe] C:\Program Files\Atomic Clock Sync\Atomic.exe
O4 - HKCU\..\Run: [MsnMsgr] "C:\Program Files\MSN Messenger\MsnMsgr.Exe" /background
O4 - HKCU\..\Run: [BgMonitor_{79662E04-7C6C-4d9f-84C7-88D8A56B10AA}] "C:\Program Files\Common Files\Ahead\lib\NMBgMonitor.exe"
O4 - HKCU\..\Run: [kdx] C:\WINDOWS\kdx\KHost.exe -all
O4 - HKCU\..\Run: [ctfmon.exe] C:\WINDOWS\system32\ctfmon.exe
O4 - Global Startup: ATI CATALYST System Tray.lnk = C:\Program Files\ATI Technologies\ATI.ACE\CLI.exe
O6 - HKCU\Software\Policies\Microsoft\Internet Explorer\Restrictions present
O8 - Extra context menu item: E&xport to Microsoft Excel - res://C:\PROGRA~1\MICROS~2\OFFICE11\EXCEL.EXE/3000
O9 - Extra button: (no name) - {08B0E5C0-4FCB-11CF-AAA5-00401C608501} - C:\Program Files\Java\jre1.5.0_06\bin\ssv.dll
O9 - Extra 'Tools' menuitem: Sun Java Console - {08B0E5C0-4FCB-11CF-AAA5-00401C608501} - C:\Program Files\Java\jre1.5.0_06\bin\ssv.dll
O9 - Extra button: Research - {92780B25-18CC-41C8-B9BE-3C9C571A8263} - C:\PROGRA~1\MICROS~2\OFFICE11\REFIEBAR.DLL
O9 - Extra button: Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\Program Files\Messenger\msmsgs.exe
O9 - Extra 'Tools' menuitem: Windows Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\Program Files\Messenger\msmsgs.exe
O11 - Options group: [INTERNATIONAL] International*
O16 - DPF: {6414512B-B978-451D-A0D8-FCFDF33E833C} (WUWebControl Class) - http://update.microsoft.com/windowsupdate/v6/V5Controls/en/x86/client/wuweb_site.cab?1125784848535
O16 - DPF: {6E32070A-766D-4EE6-879C-DC1FA91D2FC3} (MUWebControl Class) - http://update.microsoft.com/microsoftupdate/v6/V5Controls/en/x86/client/muweb_site.cab?1127733560625
O16 - DPF: {82774781-8F4E-11D1-AB1C-0000F8773BF0} (DLC Class) - https://transfers.ds.microsoft.com/FTM/TransferSource/grTransferCtrl.cab
O17 - HKLM\System\CCS\Services\Tcpip\..\{290F5AF4-DE46-4F13-8110-B23DB2B39331}: NameServer = 87.194.0.66,192.168.254.254,80.253.114.39,87.194.0.67,82.197.65.252
O17 - HKLM\System\CCS\Services\Tcpip\..\{D601BDF1-6317-44F8-B808-0B9255E54EC7}: NameServer = 82.197.65.252,80.253.114.39,192.168.254.254
O17 - HKLM\System\CCS\Services\Tcpip\..\{EE1652CC-236E-49F6-8EB5-0B3A3C9A1018}: NameServer = 192.168.254.254,82.197.65.252,80.253.114.39
O17 - HKLM\System\CS1\Services\Tcpip\..\{290F5AF4-DE46-4F13-8110-B23DB2B39331}: NameServer = 87.194.0.66,192.168.254.254,80.253.114.39,87.194.0.67,82.197.65.252
O18 - Protocol: livecall - {828030A1-22C1-4009-854F-8E305202313F} - C:\PROGRA~1\MSNMES~1\MSGRAP~1.DLL
O18 - Protocol: msnim - {828030A1-22C1-4009-854F-8E305202313F} - C:\PROGRA~1\MSNMES~1\MSGRAP~1.DLL
O20 - Winlogon Notify: WgaLogon - C:\WINDOWS\SYSTEM32\WgaLogon.dll
O23 - Service: Ati HotKey Poller - ATI Technologies Inc. - C:\WINDOWS\system32\Ati2evxx.exe
O23 - Service: ATI Smart - Unknown owner - C:\WINDOWS\system32\ati2sgag.exe
O23 - Service: Symantec Event Manager (ccEvtMgr) - Symantec Corporation - C:\Program Files\Common Files\Symantec Shared\ccEvtMgr.exe
O23 - Service: Symantec Password Validation (ccPwdSvc) - Symantec Corporation - C:\Program Files\Common Files\Symantec Shared\ccPwdSvc.exe
O23 - Service: Symantec Settings Manager (ccSetMgr) - Symantec Corporation - C:\Program Files\Common Files\Symantec Shared\ccSetMgr.exe
O23 - Service: CyberLink Background Capture Service (CBCS) (CLCapSvc) - Unknown owner - C:\Program Files\CyberLink\PowerCinema\Kernel\TV\CLCapSvc.exe
O23 - Service: Diskeeper - Executive Software International, Inc. - C:\Program Files\Executive Software\Diskeeper\DkService.exe
O23 - Service: GEARSecurity - GEAR Software - C:\WINDOWS\System32\GEARSec.exe
O23 - Service: InstallDriver Table Manager (IDriverT) - Macrovision Corporation - C:\Program Files\Common Files\InstallShield\Driver\11\Intel 32\IDriverT.exe
O23 - Service: ioFTPD - Unknown owner - c:\reskit\srvany.exe
O23 - Service: iPodService - Apple Computer, Inc. - C:\Program Files\iPod\bin\iPodService.exe
O23 - Service: KService - Kontiki Inc. - C:\Program Files\KService\KService.exe
O23 - Service: McAfee Framework Service (McAfeeFramework) - Network Associates, Inc. - C:\Program Files\Network Associates\Common Framework\FrameworkService.exe
O23 - Service: Network Associates McShield (McShield) - Network Associates, Inc. - C:\Program Files\Network Associates\VirusScan\Mcshield.exe
O23 - Service: Network Associates Task Manager (McTaskManager) - Network Associates, Inc. - C:\Program Files\Network Associates\VirusScan\VsTskMgr.exe
O23 - Service: Norton Ghost - Symantec Corporation - C:\Program Files\Norton Ghost\Agent\VProSvc.exe
O23 - Service: Remote Administrator Service (r_server) - Unknown owner - C:\WINDOWS\system32\r_server.exe" /service (file missing)
O23 - Service: Symantec Core LC - Symantec Corporation - C:\Program Files\Common Files\Symantec Shared\CCPD-LC\symlcsvc.exe
O23 - Service: twdns - Unknown owner - C:\WINDOWS\system32\dns\bin\named.exe
O23 - Service: VMware Authorization Service (VMAuthdService) - VMware, Inc. - C:\Program Files\VMware\VMware Workstation\vmware-authd.exe
O23 - Service: VMware DHCP Service (VMnetDHCP) - VMware, Inc. - C:\WINDOWS\system32\vmnetdhcp.exe
O23 - Service: VMware Virtual Mount Manager Extended (vmount2) - VMware, Inc. - C:\Program Files\Common Files\VMware\VMware Virtual Image Editing\vmount2.exe
O23 - Service: VMware NAT Service - VMware, Inc. - C:\WINDOWS\system32\vmnat.exe
O23 - Service: VNC Server Version 4 (WinVNC4) - Unknown owner - C:\Program Files\RealVNC\VNC4\winvnc4.exe" -service (file missing)
-------------------------
That last entry is a strange one - the VNC service threw up a few errors a few days ago when I started the PC up. But seems to have been alright since....anyway...there it is, laid bare ;o) Enjoy!
5. ### datamonger
Joined:
Jul 25, 2006
Messages:
244
The problem could lie in the fact that you have 5 DNS servers. It appears that you are behind a router so I would suggest that you remove all of the DNS servers except for 192.168.254.254.
You may want to write the others down. You could also see if you can even ping them to see if they are still online. What could be happening is that the applications are going to the first DNS server listed, 87.194.0.66, but time out before moving on to the second DNS entry.
Joined:
Aug 7, 2006
Messages:
33
Tried that, still no luck. One other thing I have noticed is that in my network connections properties the 'Internet Gateway Device' that had been picked up (by UPnP I guess) says disabled - and when I try to enable it fails.
All this, but I am connected and downloading from an FTP site and on several IRC channels/servers - ONLY BY IP THO¬!!
7. ### datamonger
Joined:
Jul 25, 2006
Messages:
244
Could you do an nslookup for a few sites and post the results here? You may have to try each one more than once.
I just tried nslookups for each of your DNS servers (except the 192.168.254.254 one). Only the third one, 87.194.0.67, returned with something. It claims to be ns2.betherenow.co.uk. That seems to be a legit DNS server. You may want to make this your primary DNS server. One additional thing to look at is to make sure that your router is getting the proper DNS servers from your ISP. If you are sending DNS requests to your router and it does not have the proper entries then there is a dead end there as well.
8. ### TerryNetModerator
Joined:
Mar 23, 2005
Messages:
77,498
First Name:
Terry
You can resolve web site names from a command prompt but browsers don't work. Often means a bad winsock. If you have Windows XP SP2 ...
Winsock fix
(From JohnWill)
Try this Automated WINSOCK Fix for XP: http://www.spychecker.com/program/winsockxpfix.html
Reboot and see if the situation changes. If that doesn't do it, try these two commands.
TCP/IP stack repair options for use with Windows XP with SP2.
For these commands, Start, Run, CMD to open a command prompt.
Reset WINSOCK entries to installation defaults: netsh winsock reset catalog
Reset TCP/IP stack to installation defaults. netsh int ip reset reset.log
9. ### blaqDeaph
Joined:
Nov 22, 2005
Messages:
869
Ok, first up you need to test the connection itself by pinging a site by the IP. google is a good site.
Then, you need to change one of the DNS servers to something that your ISP uses (don't rely on the router to act as a DNS server). If you have another computer then google up "<isp name> dns server"
Or else, give them a call and ask them the address for their recommended DNS server.
Joined:
Aug 7, 2006
Messages:
33
The DNS servers are right as I'm able to do a successful nslookup against them.
I'm trying the winsock fix - but need to reboot....doing video conv so have to wait for that to finish...
Joined:
Aug 7, 2006
Messages:
33
Thanks for that TerryNet - this < http://www.spychecker.com/program/winsockxpfix.html> worked.
I owe you! I NEVER would've thought of that! I guess it's a result of removing the VPN client software as that would've had most effect on the TCP/IP stack.
Well, anyway, thanks!
Joined:
Aug 7, 2006
Messages:
33
HELP!!
since doing this fix, I've gone to work this morning and can't connect remotely to my PC at all! Also the FTP server running on it is not connecting either.
I've asked my gf to have a quick look cos she's at home, but it looks like everything's OK. All the services are started, nothing's changed (ie ports or IP's) but I cant connect via RDP, RAdmin, VNC... It seems that no inbound sessions are being permitted.
I've asked her to have a look at the firewall settings (on the LAN connection - Windows firewall) and it's off - as it should be. My router has a firewall configured, with the right ports forwarded etc. Yesterday I could connect to my PC with no problem from work, today - since doing the Winsock fix - I can't. Even to the FTP server....
Any ideas? What would resetting the winsock catalog do to inbound connections?!
Any help hugely appreciated!!
13. ### JohnWillRetired Moderator
Joined:
Oct 19, 2002
Messages:
106,418
It shouldn't have any effect on the inbound connections. However, you may have to reinstall the applications in question, since the WINSOCK repair has probably yanked their "shims" out.
I'd re-install and see if that helps...
Joined:
Aug 7, 2006
Messages:
33
Oh you're joking....so I'll have to reinstall my FTP server, my Radmin, VNC, what about remote desktop (mstsc) ? I can't reinstall that as it's part of the XP OS...
15. ### JohnWillRetired Moderator
Joined:
Oct 19, 2002
Messages:
106,418
I'm saying some of the problems "might" be the WINSOCK. However, if you had to repair it, perhaps there's a reason it was broken.
One side effect of this may have been the XP firewall got turned on, did you check that?
|
# How do you evaluate log_(1/3) (1/27)?
Oct 1, 2016
$3$
#### Explanation:
${\log}_{\frac{1}{3}} \left(\frac{1}{27}\right)$
$= {\log}_{\frac{1}{3}} {\left(\frac{1}{3}\right)}^{3}$
$= 3 {\log}_{\frac{1}{3}} \left(\frac{1}{3}\right)$
$= 3$
Alternatively, you can do it this way :
Recall that ${\log}_{a} b = c , \implies {a}^{c} = b$
let ${\log}_{\frac{1}{3}} \left(\frac{1}{27}\right) = x$
$\implies {\left(\frac{1}{3}\right)}^{x} = \frac{1}{27} = {\left(\frac{1}{3}\right)}^{3}$
Hence, $x = 3$
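A one-line numerical check (my own addition, using Python's built-in change-of-base logarithm) confirms the answer up to floating-point rounding:

```python
import math
print(math.log(1/27, 1/3))  # 3.0, possibly with a tiny floating-point error
```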
|
# zbMATH — the first resource for mathematics
Linear partial differential equations. (English) Zbl 0209.12001
Notes on Mathematics and its applications. New York-London-Paris: Gordon and Breach Science Publishers, X, 120 p. (1970).
##### MSC:
35-02 Research exposition (monographs, survey articles) pertaining to partial differential equations 35A27 Microlocal methods and methods of sheaf theory and homological algebra applied to PDEs 58J99 Partial differential equations on manifolds; differential operators 32C38 Sheaves of differential operators and their modules, $$D$$-modules
|
Verify my expected product of two acid/base reactions and does chosen solvent change reaction?
My education in chemistry goes as far as what I research on the internet. I have two different acid/base reactions for which I've tried to predict the resulting product. Can anyone here verify if I did this right or have an alternative prediction?
Reaction 1: p-toluenesulfonic acid (anhydrous) $\ce{C7H8O3S}$ reacts with potassium hydroxide $\ce{KOH}$. From this I calculated I should expect toluene $\ce{C7H8}$, potassium sulfate $\ce{K2SO4}$ and water $\ce{H2O}$:
$$\ce{C7H8O3S + 2KOH -> C7H8 + K2SO4 + H2O}\tag{1}$$
Reaction 2: p-toluenesulfonic acid (anhydrous) $\ce{C7H8O3S}$ reacts with sodium bicarbonate $\ce{NaHCO3}$:
$$\ce{C7H8O3S + 2NaHCO3 -> C7H8 + Na2SO4 + 2CO2 + H2O}\tag{2}$$
If correct, then both of these reactions result in toluene. Does anyone think I got it right (or wrong)?
If the p-toluenesulfonic acid is dissolved in hexane, and I wish to neutralize it, neither base is soluble in hexane. If I first dissolved the base in ethanol (anhydrous) which is miscible with hexane (if I'm using the term correctly), will it neutralize the acid in the hexane, and does the fact that hexane and ethanol are involved change the reactions above? Any prediction if the ethanol was 95% instead of anhydrous?
So separating ethanol from hexane is easy if you want to keep the hexane -- just add water and separate. But if I actually have hexane/ethanol/toluene and add water and dispose of the ethanol/water solution, I'm assuming I'm left with a hexane/toluene mix. The toluene has a much higher polarity than hexane, but much lower one than ethanol. Toluene has a boiling point higher than water, and approx. 40 degrees higher than hexane.
Under these circumstances it appears that removing toluene from the hexane and solute is not realistic, and would leave me distilling the toluene from my solute at too high a temperature for the solute ($\pu{111^\circ C}$), whereas the boiling point of hexane is more like $\pu{68^\circ C}$.
If I'm correct, are there any thoughts on how to neutralize the acid in hexane without creating toluene, which is not easy to separate from the hexane, or am I stuck with toluene as long as I use this particular acid?
• You're trying to neutralize tosic (toluenesulfonic) acid in hexane as a post-reaction workup? Dec 20 '16 at 4:04
• Welcome to Chemistry.SE! You seem to assume that the neutralization of p-toluene sulfonic acid with potassium hydroxide (or sodium hydrogencarbonate generates) toluene. This is wrong! The neutralizations only lead to the potassium (or sodium) salts of the sulfonic acid. With other words, the (most) acidic proton is replaced with a potassium (or sodium) cation. There's no further degradation that would lead to toluene. Dec 20 '16 at 5:49
• Thanks for your responses! -- based upon everyone's comments thus far I had another go at it (see comments in response to AK's answer). Dec 20 '16 at 6:24
You will not get toluene in either reaction, you will get the potassium and sodium salts of p-toluene sulfonic acid for the respective reactions. This is because 1.) the toluene sulfonic acid is not a salt of toluene and sulfuric acid and 2.) aromatic rings (unless special conditions are applied) undergo electrophilic aromatic substitution. i.e. positive groups cause substitution on the ring, which sodium and potassium are not electrophilic ("electron loving") enough to effect such reaction. If you wanted toluene from these reactions you would have to use a dilute mineral acid in water and the $\ce{H+}$ ions from the acid would cause a substitution, removing the sulfonyl group and yielding toluene and sulfuric acid.
If you really need to neutralize this material, just add the mentioned bases as solids and allow them time to react. Bases don't need to be in solution to react with acids.
• Thank you AK and Electronpusher, so much for reminding me how little I know ;) I really appreciate you taking the time! – I took advice from the two prior answers and rethought a bit, and here is what I got. First, instead of looking at the p-toluenesulfonic acid as C7H8O3S, I look at it as CH3-C6H4-SO3H. So CH3C6H4SO3H + 2KOH gives me: CH3C6H4OH (p-cresol), K2SO3 (potassium sulfite) and H2O (water). Do I have it right yet? Dec 20 '16 at 6:18
• No, this would require a nucleophilic substitution which will not happen. You should watch some youtube videos on electrophilic aromatic substitution so that you may understand the underlying principles of substitution on aromatic rings. It is not like most other reactions.
– A.K.
Dec 20 '16 at 19:57
• Ok, so, I checked out some videos on KahnAcademy. Thank you for that. So basically all that is happening is the neutralization of the acid like initially shown in ElectronPusher's structure. Basically the H of the OH of the acid gets swapped with the K from the KOH, the H from the acid goes to the OH of the KOH making H2O. This leaves me with H2O and Potassium p-toluenesulfonate, the latter of which I assume will be a solid precipitate, insoluble in either hexane or ethanol? Dec 22 '16 at 6:04
• Now you have the right idea for the acid-base neutralization. However, the potassium p-toluenesulfonate salt (AKA potassium tosylate), should be soluble in ethanol and maybe somewhat suble in hexane too. See all those C's and H's in the ring? That nonpolar structure is soluble in nonpolar solvents, like hexane. The ionic charges on the sulfonate side would be soluble in polar solvents. This could cause some precipitation in hexane, but ethanol can accommodate both domains and I would expect no precipitate in the alcohol solvent. Dec 23 '16 at 4:56
• How can I predict if, given a solution of hexane and ethanol which contains the dissolved potassium tosylate, if I were to add water to separate, if the potassium tosylate will go with the etoh/h2o or stay in the hexane? I would guess the polar sulfonate side would create a stronger bond with water than the bond between the non-polar side and the hexane, and perhaps the non-polar side would be satisfied by the alcohol at the same time the polar side is teamed up with the water, i.e. gone from the hexane? Dec 28 '16 at 22:44
A sulfonic acid substituent on a benzene ring could, under some conditions, undergo nucleophilic aromatic substitution by hydroxide to give a phenol (in this case, p-cresol). This type of reaction is more favorable with additional electron withdrawing groups on the benzene ring.
See Aromatic Nucleophilic Substitution Reactions. Joseph F. Bunnett and Roland E. Zahler, Chem. Rev., 1951, 49 (2), pp 273–412.
This is more than a simple acid-base neutralization; you are hydrolyzing the $\ce{C-S}$ bond. Is this your intention, to create toluene, or is that a byproduct? Your balanced reactions look correct, but I would recommend you use formulas that show more detail to see what's going on (better yet, structures! see photo). I see no risk in aqueous conditions, since you are hydrolyzing tosic acid anyway. Is there a reason you can't use an organic-soluble base?
Edit: I erroneously went with the OP's assumption that the hydrolysis of the ring would take place. The other comments have reminded me that these conditions are not sufficient for substitution of the aromatic ring.
|
# Can characters occur in automorphic representation
Let $\pi$ be an irreducible cuspidal automorphic representation of $GL(2)$ over a global field with factorisation $\pi = \otimes \pi_v$.
Then at most finitely many $\pi_v$ are not spherical.
Questions:
• Can it happen that $\pi_v(g) = \chi_v(\det g)$ for a unitary character of $F_v^\times$? Can it happen infinitely often?
• Is the Steinberg spherical? Can it occur as $\pi_v$? Can it appear infinitely often?
-
Anything factoring through $\det$ does not have a Whittaker model, so cannot appear.
|
# User-defined Hessians
In this tutorial, we explain how to write a user-defined function (see User-defined Functions) with a Hessian matrix explicitly provided by the user.
This tutorial uses the following packages:
using JuMP
import Ipopt
## Rosenbrock example
As a simple example, we first consider the Rosenbrock function:
rosenbrock(x...) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
rosenbrock (generic function with 1 method)
function ∇rosenbrock(g::AbstractVector, x...)
g[1] = 400 * x[1]^3 - 400 * x[1] * x[2] + 2 * x[1] - 2
g[2] = 200 * (x[2] - x[1]^2)
return
end
∇rosenbrock (generic function with 1 method)
and the Hessian matrix:
function ∇²rosenbrock(H::AbstractMatrix, x...)
H[1, 1] = 1200 * x[1]^2 - 400 * x[2] + 2
# H[1, 2] = -400 * x[1] <-- not needed because Hessian is symmetric
H[2, 1] = -400 * x[1]
H[2, 2] = 200.0
return
end
∇²rosenbrock (generic function with 1 method)
You may assume the Hessian matrix H is initialized with zeros, and because it is symmetric you need to fill in only the non-zero lower-triangular terms.
The matrix type passed in as H depends on the automatic differentiation system, so make sure the first argument to the Hessian function supports an AbstractMatrix (it may be something other than Matrix{Float64}). However, you may assume only that H supports size(H) and setindex!.
Now that we have the function, its gradient, and its Hessian, we can construct a JuMP model, register the function, and use it in a @NL macro:
model = Model(Ipopt.Optimizer)
@variable(model, x[1:2])
register(model, :rosenbrock, 2, rosenbrock, ∇rosenbrock, ∇²rosenbrock)
@NLobjective(model, Min, rosenbrock(x[1], x[2]))
optimize!(model)
solution_summary(model; verbose = true)
* Solver : Ipopt
* Status
Termination status : LOCALLY_SOLVED
Primal status : FEASIBLE_POINT
Dual status : FEASIBLE_POINT
Result count : 1
Has duals : true
Message from the solver:
"Solve_Succeeded"
* Candidate solution
Objective value : 1.20425e-27
Dual objective value : 0.00000e+00
Primal solution :
x[1] : 1.00000e+00
x[2] : 1.00000e+00
Dual solution :
* Work counters
Solve time (sec) : 5.19359e-02
## Bilevel optimization
User-defined Hessian functions can be useful when solving more complicated problems. In the rest of this tutorial, our goal is to solve the bilevel optimization problem:
$$$\begin{array}{r l} \min\limits_{x,z} & x_1^2 + x_2^2 + z \\ s.t. & \begin{array}{r l} z \ge \max\limits_{y} & x_1^2 y_1 + x_2^2 y_2 - x_1 y_1^4 - 2 x_2 y_2^4 \\ s.t. & (y_1 - 10)^2 + (y_2 - 10)^2 \le 25 \end{array} \\ & x \ge 0. \end{array}$$$
This bilevel optimization problem is composed of two nested optimization problems. An upper level, involving variables $x$, and a lower level, involving variables $y$. From the perspective of the lower-level problem, the values of $x$ are fixed parameters, and so the model optimizes $y$ given those fixed parameters. Simultaneously, the upper-level problem optimizes $x$ and $z$ given the response of $y$.
## Decomposition
There are a few ways to solve this problem, but we are going to use a nonlinear decomposition method. The first step is to write a function to compute the lower-level problem:
$$$\begin{array}{r l} V(x_1, x_2) = \max\limits_{y} & x_1^2 y_1 + x_2^2 y_2 - x_1 y_1^4 - 2 x_2 y_2^4 \\ s.t. & (y_1 - 10)^2 + (y_2 - 10)^2 \le 25 \end{array}$$$
function solve_lower_level(x...)
model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, y[1:2])
@NLobjective(
model,
Max,
x[1]^2 * y[1] + x[2]^2 * y[2] - x[1] * y[1]^4 - 2 * x[2] * y[2]^4,
)
@constraint(model, (y[1] - 10)^2 + (y[2] - 10)^2 <= 25)
optimize!(model)
@assert termination_status(model) == LOCALLY_SOLVED
return objective_value(model), value.(y)
end
solve_lower_level (generic function with 1 method)
The solve_lower_level function takes a value of $x$ and returns the optimal lower-level objective value and the optimal response $y$. The reason why we need both the objective and the optimal $y$ will be made clear shortly, but for now let us define:
function V(x...)
f, _ = solve_lower_level(x...)
return f
end
V (generic function with 1 method)
Then, we can substitute $V$ into our full problem to create:
$$$\begin{array}{r l} \min\limits_{x} & x_1^2 + x_2^2 + V(x_1, x_2) \\ s.t. & x \ge 0. \end{array}$$$
This looks like a nonlinear optimization problem with a user-defined function $V$! However, because $V$ solves an optimization problem internally, we can't use automatic differentiation to compute the first and second derivatives. Instead, we can use JuMP's ability to pass callback functions for the gradient and Hessian instead.
First up, we need to define the gradient of $V$ with respect to $x$. In general, this may be difficult to compute, but because $x$ appears only in the objective, we can just differentiate the objective function with respect to $x$, giving:
function ∇V(g::AbstractVector, x...)
_, y = solve_lower_level(x...)
g[1] = 2 * x[1] * y[1] - y[1]^4
g[2] = 2 * x[2] * y[2] - 2 * y[2]^4
return
end
∇V (generic function with 1 method)
Second, we need to define the Hessian of $V$ with respect to $x$. This is a symmetric matrix, but in our example only the diagonal elements are non-zero:
function ∇²V(H::AbstractMatrix, x...)
_, y = solve_lower_level(x...)
H[1, 1] = 2 * y[1]
H[2, 2] = 2 * y[2]
return
end
∇²V (generic function with 1 method)
We now have enough to define our bilevel optimization problem:
model = Model(Ipopt.Optimizer)
@variable(model, x[1:2] >= 0)
register(model, :V, 2, V, ∇V, ∇²V)
@NLobjective(model, Min, x[1]^2 + x[2]^2 + V(x[1], x[2]))
optimize!(model)
solution_summary(model)
* Solver : Ipopt
* Status
Termination status : LOCALLY_SOLVED
Primal status : FEASIBLE_POINT
Dual status : FEASIBLE_POINT
Message from the solver:
"Solve_Succeeded"
* Candidate solution
Objective value : -4.18983e+05
Dual objective value : 0.00000e+00
* Work counters
Solve time (sec) : 1.03544e+00
The optimal objective value is:
objective_value(model)
-418983.4868064086
and the optimal upper-level decision variables $x$ are:
value.(x)
2-element Vector{Float64}:
154.97862349841566
180.00961418355053
To compute the optimal lower-level decision variables, we need to call solve_lower_level with the optimal upper-level decision variables:
_, y = solve_lower_level(value.(x)...)
y
2-element Vector{Float64}:
7.072593960195712
5.946569893523139
## Improving performance
Our solution approach works, but it has a performance problem: every time we need to compute the value, gradient, or Hessian of $V$, we have to re-solve the lower-level optimization problem! This is wasteful, because we will often call the gradient and Hessian at the same point, and so solving the problem twice with the same input repeats work unnecessarily.
We can work around this by using a cache:
mutable struct Cache
x::Any
f::Float64
y::Vector{Float64}
end
with a function to update the cache if needed:
function _update_if_needed(cache::Cache, x...)
if cache.x !== x
cache.f, cache.y = solve_lower_level(x...)
cache.x = x
end
return
end
_update_if_needed (generic function with 1 method)
Then, we define cached versions of our three functions, which call _update_if_needed and return values from the cache.
function cached_f(cache::Cache, x...)
_update_if_needed(cache, x...)
return cache.f
end
function cached_∇f(cache::Cache, g::AbstractVector, x...)
_update_if_needed(cache, x...)
g[1] = 2 * x[1] * cache.y[1] - cache.y[1]^4
g[2] = 2 * x[2] * cache.y[2] - 2 * cache.y[2]^4
return
end
function cached_∇²f(cache::Cache, H::AbstractMatrix, x...)
_update_if_needed(cache, x...)
H[1, 1] = 2 * cache.y[1]
H[2, 2] = 2 * cache.y[2]
return
end
cached_∇²f (generic function with 1 method)
Now we're ready to set up and solve the upper-level optimization problem:
model = Model(Ipopt.Optimizer)
@variable(model, x[1:2] >= 0)
cache = Cache(Float64[], NaN, Float64[])
register(
model,
:V,
2,
(x...) -> cached_f(cache, x...),
(g, x...) -> cached_∇f(cache, g, x...),
(H, x...) -> cached_∇²f(cache, H, x...),
)
@NLobjective(model, Min, x[1]^2 + x[2]^2 + V(x[1], x[2]))
optimize!(model)
solution_summary(model)
* Solver : Ipopt
* Status
Termination status : LOCALLY_SOLVED
Primal status : FEASIBLE_POINT
Dual status : FEASIBLE_POINT
Message from the solver:
"Solve_Succeeded"
* Candidate solution
Objective value : -4.18983e+05
Dual objective value : 0.00000e+00
* Work counters
Solve time (sec) : 6.14560e-01
and we can check that we get the same objective value:
objective_value(model)
-418983.4868064086
and upper-level decision variable $x$:
value.(x)
2-element Vector{Float64}:
154.97862349841566
180.00961418355053
|
# Is an old-fashioned renormalization prescription unconvincing?
+ 2 like - 0 dislike
1112 views
As you know, before Wilsonian POV on QFT, there was an "old-fashioned" renormalization prescription, which was justified/substantiated with different truthful reasonings.
When I was young, the following reasoning looked quite convincing, if not perfect, to me. Let us consider, for example, scattering a low-frequency EM wave from a free electron. QED must give the usual non-relativistic Thomson cross section for this scattering process, and indeed, QED gives it in the first approximation. However, in higher orders the charge acquires perturbative corrections, so that it is the initial charge plus all perturbative corrections that now serves as the physical charge in QED. (Let's not discuss their cut-off dependence.) Therefore, there is no problem at all, since we just have to define the physical charge as this sum. S. Weinberg, arguing with Dirac, writes in his "Dreams" (page 116) that it is just a matter of definition of physical constants.
We can still find this way of presenting things in many textbooks today.
Isn't it sufficiently convincing for QED? Do you see any loophole in this "old-fashion" reasoning?
edited Aug 17, 2014
The old point of view is not wrong, it is comfortably embeded in and generalized by Wilson's modern EFT picture. Depending on the applications, it is still useful to apply it. This is probably why old renormalization comes first in many QFT textbooks and EFTs are treated as an advanced topic which comes later. Your example is probably such a case.
Thanks, Dilaton. So you do not see any loophole in this reasoning. Does anybody else?
+ 2 like - 0 dislike
The old-fashioned reasoning is partially defective as it defines the renormalized quantities by an ill-defined expression. It works with the usual hand-waving, but I don't like the latter.
I recommend the causal approach, which gives the same results as other approaches but works without any UV cutoff $\Lambda$ and always works with physical constants only. This is much cleaner conceptually. It still has an IR cutoff, but this causes far less problems in calculations. Also, it has a good axiomatic basis that is justified from ordinary QM.
answered Aug 18, 2014 by (15,747 points)
Thank you, Arnold. To me, the ill-defined expressions are not a problem; that's why I proposed not to discuss the cutoff dependence. So let's concentrate on the expressions like $\mathcal{M}$ and on my question.
@VladimirKalitvianski I don't understand why you want to avoid talking about the cut-off (in)dependence: your question is if the classical interpretation of renormalisation is not convincing - and it would probably so if it were cut-off dependent (which it is not, obviously).
@dimension10: It is not an issue since after defining the physical constant, the cutoff disappears.
I thought I had answered your questions at the end: To me something that is ill-defined is not convincing at all and has glaring holes. For a mathematician, it may convey intuition and suggest a conjecture, but not more. Causal perturbation theory makes it fully precise, obtaining the same renormalized results without compromise.
Thanks, Arnold, for clarifying your position. But if we define a QFT on a lattice, then nothing is ill-defined, isn't it?
OK, thanks, Arnold. So it is a correct way of proceeding.
@VladimirKalitvianski That's exactly what I said, and when it is independent of the cut-off, it means it's also independent of the way you formulate it philosophically, so basically - convincing enough.
+ 1 like - 0 dislike
(I'm not sure if I understand your question correctly - how could we find renormalisation, something that works mathematically unconvincing?)
It's usually shown in most introductory textbooks on Quantum Field Theory that the scattering amplitude, let's call it $\mathcal{M}$, can be written without the cut-off $\Lambda$. Cf. Zee pp. 148-150. On the last page of that selection, the matrix element is written without any reference to $\Lambda$ and other unmeasurable quantities:
$\mathcal{M}=-i\lambda_P+iC\lambda_P^2\left(\ln\left(\frac{s_0}{s}\right)+\ln\left(\frac{t_0}{t}\right)+\ln\left(\frac{u_0}{u}\right)\right)+O\left(\lambda_P^3\right)$
I don't see why one should find something unconvincing when the physics is in terms of measurable quantities, and is furthermore experimentally observed. The modern EFT picture is more comfortable, and as has been said in the comments, some sort of a "generalisation" of the old picture.
answered Aug 18, 2014 by (1,985 points)
Thank you, dimension10. I know the Zee's argument, but it concerns a specific model, maybe a bit far from a realistic case, so I did not type it in.
@VladimirKalitvianski Yes, it concerns a specific example, but it is always true that the scattering amplitudes are independent of the cut-off.
|
# Optimization Problem: Recommending Service Providers to Clients
I am new to optimization, not sure if the problem described below is trivial. Any guidance on solution or nudge in the right direction would be very helpful.
Problem:
There are two groups – clients and service providers. Let’s represent clients as $$u=1,2,\cdots,n$$ and service providers as $$v=1,2,\cdots,m$$ where $$n > m$$. The task is to recommend each client the top 3 service-providers based on the propensity score $$P$$ (which is an $$n\times m$$ matrix).
The propensity score $$p_{ij}$$ denotes the propensity of the client $$u_i$$ accepting a service provider $$v_j$$. Ideally, we want to recommend to a client the top 3 service providers which have the highest propensity of acceptance. Each service provider has a capacity constraint such that the number of clients recommended for one service provider should stay within a certain fixed range: $$|\mu^{-1}(v)|\le q_v\,\forall v$$ where $$\mu$$ is the assignment and $$q_v$$ is the limit on the number of assignments for the $$v$$th service provider.
The solution needs to be practically feasible. We are dealing with a few million clients and a few thousand service providers.
I also want to make sure the recommendations don’t change drastically. If propensity scores don't change over time, then we should be recommending more or less the same service providers to the same client. Some variation is acceptable, though – say, changing one of the top 3 service providers every now and then would not be a terrible idea.
What kind of techniques are usually used to solve such problems? After doing a bit of research, I found there are techniques such as Linear Programming and Bipartite Graphs that handle similar problems. But I can’t wrap my head around the best course of solution that should be employed in this case.
• Related question or.stackexchange.com/q/6640/4551 At first glance, your problem looks like a special case of the Generalized Assignment Problem with weights equal to 1 Aug 31 '21 at 20:19
First, let's split this into two separate problems: making the initial assignments; and updating assignments over time. (If you already have assignments in place, you may only need the second problem.)
The initial problem can be modeled as a generalized assignment problem (GAP). Technically this is an integer linear program, with a zero-one variable for each combination of client and service provider, which is unworkable at your dimensions. So we're going to need to do some approximating and settle for a "good" but not provably optimal solution.
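For concreteness, a sketch of that 0-1 model in the question's notation (my own rendering, not part of the original formulation): with $$x_{uv}=1$$ meaning provider $$v$$ is among the three recommended to client $$u$$ and $$p_{uv}$$ the corresponding propensity score, $$\max_x \sum_{u=1}^{n}\sum_{v=1}^{m} p_{uv}\,x_{uv} \quad\text{s.t.}\quad \sum_{v=1}^{m} x_{uv}=3\;\;\forall u,\qquad \sum_{u=1}^{n} x_{uv}\le q_v\;\;\forall v,\qquad x_{uv}\in\{0,1\}.$$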
I would first look at whether it is possible to assign users to clusters such that the propensity scores for all users in the cluster and any provider are close enough to each other that you would be willing to assign an "average" propensity to all users in the cluster. That would bring the number of variables down from $$n\cdot m$$ to $$n_c \cdot m$$, where $$n_c$$ is the (hopefully small) number of clusters. Now solve a linear programming relaxation of the GAP to get the approximate number of clients in each cluster assigned to each provider, then make those assignments (say, by solving an actual GAP for each cluster, involving only the clients in that cluster and the providers who had a nonzero volume of assignments from that cluster in the LP), with some rounding and finessing to deal with any residual fractions.
If clustering based on propensities is not possible, I would look at whether the full problem can be adequately approximated by a bunch of smaller problems, where each client and each provider exists in exactly one of the smaller problems. For instance, it might make sense to partition based on geography.
Assuming you find a workable idea among those, to deal with updates over time I would use the same approach, adding or removing clients and providers if either group changes over time, and fixing most of the assignments of clients carried over to providers carried over. This leaves a smaller problem in which new clients are assigned, clients whose providers disappeared are reassigned, and possibly some other clients are reassigned to make the overall result better. You can either leave the objective as-is and assume the solver will not reassign an existing client unless there is overall benefit, or you can add a penalty for reassigning clients to discourage it being done without significant overall benefit. (How to decide which assignments are locked and which are allowed to change is left to the reader as an exercise.) The locked assignments turn into adjustments in cluster sizes or numbers of clients and capacities of providers, and don't directly contribute to the size of the problem being solved.
• Thanks @prubin. This is a great suggestion. I will try clustering approach to reduce the scale. Do typical LP solvers fail to operate on millions of data points? Sep 7 '21 at 21:52
• The relevant measurements for LP solvers are number of variables (constraint matrix columns), number of constraints (rows) and matrix density (fraction of the constraint matrix with nonzero entries). You'll have to translate "data points" into those terms. Commercial solvers, on adequately strong computing platforms, can frequently handle millions of columns if the number of rows and matrix density are low enough. For you to model assignments at the individual level would mean billions of variables (columns). I don't know any solver that will handle that. Sep 8 '21 at 20:49
• Thank you for your detailed answer once again. Will try out the approach you mentioned. Sep 9 '21 at 20:22
|
# zbMATH — the first resource for mathematics
Probabilistic anonymity. (English) Zbl 1134.68426
Abadi, Martín (ed.) et al., CONCUR 2005 – concurrency theory. 16th international conference, CONCUR 2005, San Francisco, CA, USA, August 23–26, 2005. Proceedings. Berlin: Springer (ISBN 3-540-28309-9/pbk). Lecture Notes in Computer Science 3653, 171-185 (2005).
Summary: The concept of anonymity comes into play in a wide range of situations, varying from voting and anonymous donations to postings on bulletin boards and sending mails. The systems for ensuring anonymity often use random mechanisms which can be described probabilistically, while the agents’ interest in performing the anonymous action may be totally unpredictable, irregular, and hence expressible only nondeterministically.
Formal definitions of the concept of anonymity have been investigated in the past either in a totally nondeterministic framework, or in a purely probabilistic one. In this paper, we investigate a notion of anonymity which combines both probability and nondeterminism, and which is suitable for describing the most general situation in which both the systems and the user can have both probabilistic and nondeterministic behavior. We also investigate the properties of the definition for the particular cases of purely nondeterministic users and purely probabilistic users.
We formulate our notions of anonymity in terms of observables for processes in the probabilistic $$\pi$$-calculus, whose semantics is based on Probabilistic Automata.
We illustrate our ideas by using the example of the dining cryptographers.
For the entire collection see [Zbl 1084.68002].
##### MSC:
68Q85 Models and methods for concurrent and distributed computing (process algebras, bisimulation, transition nets, etc.)
68Q45 Formal languages and automata
68Q60 Specification and verification (program logics, model checking, etc.)
94A60 Cryptography
Full Text:
|
## OpenGL programming in Haskell – a tutorial (part 1)
• September 14th, 2006
• 4:17 pm
After having failed following the googled tutorial in HOpenGL programming, I thought I’d write down the steps I actually can get to work in a tutorial-like fashion. It may be a good idea to read this in parallel to the tutorial linked, since Panitz actually brings a lot of good explanations, even though his syntax isn’t up to speed with the latest HOpenGL at all points.
## Hello World
First of all, we’ll want to load the OpenGL libraries, throw up a window, and generally get to grips with what needs to be done to get a program running at all.
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
main = do
  (progname, _) <- getArgsAndInitialize
  createWindow "Hello World"
  mainLoop
This code throws up a window, with a given title. Nothing more happens, though. This is the skeleton that we’ll be building on to. Save it to HelloWorld.hs and compile it by running
ghc -package GLUT HelloWorld.hs -o HelloWorld
However, as a skeleton, it is profoundly worthless. It doesn’t even redraw the window, so we should definitely make sure to have a function that takes care of that in there somewhere. Telling the OpenGL system what to do is done by using state variables, and these, in turn, are handled by the IORef datatype from Data.IORef. So we modify our code to the following
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
main = do
  (progname, _) <- getArgsAndInitialize
  createWindow "Hello World"
  displayCallback $= clear [ ColorBuffer ]
  mainLoop

This sets the global state variable carrying the callback function responsible for drawing the window to be the function that clears the color buffer. Save this to HelloWorld.hs, recompile, and rerun. This program no longer carries the original pixels along, but rather clears everything out. The displayCallback is a globally defined IORef, which can be accessed through a host of functions defined in Data.IORef. However, deep within the OpenGL code, there are a couple of definitions providing the interface functions $= and get to facilitate interactions with these. Thus we can do things like
height <- newIORef 1.0
currentheight <- get height
height $= 1.5

## Using the drawing canvas

So, we have a window, we have a display callback that clears the canvas. Don’t we want more out of it? Sure we do. So let’s draw some things.

import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT

myPoints :: [(GLfloat,GLfloat,GLfloat)]
myPoints = map (\k -> (sin(2*pi*k/12),cos(2*pi*k/12),0.0)) [1..12]

main = do
  (progname, _) <- getArgsAndInitialize
  createWindow "Hello World"
  displayCallback $= display
  mainLoop

display = do
  clear [ColorBuffer]
  renderPrimitive Points $ mapM_ (\(x, y, z) -> vertex $ Vertex3 x y z) myPoints
  flush
Now, the important thing to notice in this code extract is that last line. It starts a rendering definition, gives the type to be rendered, and then a sequence of function calls, each of which adds a vertex to the rendering canvas. The statement is basically equivalent to something along the lines of

renderPrimitive Points $ do
  vertex $ Vertex3 …
  vertex $ Vertex3 …
for appropriate triples of coordinate values at the appropriate places. This results in the rendition here:
We can replace Points with other primitives, leading to the rendering of:
### Triangles
Each three coordinates following each other define a triangle. The last n mod 3 coordinates are ignored.
Keyword Triangles
### Triangle strips
Triangles are drawn according to a “moving window” of size three, so the two last coordinates in the previous triangle become the two first in the next triangle.
Keyword TriangleStrip
### Triangle fans
Triangle fans have the first given coordinate as a basepoint, and takes each pair of subsequent coordinates to define a triangle together with the first coordinate.
Keyword TriangleFan
### Lines
Each pair of coordinates define a line.
Keyword Lines
### Line loops
With line loops, each further coordinate defines a line together with the last coordinate used. Once all coordinates are used up, an additional line is drawn back to the beginning.
Keyword LineLoop
### Line strips
Line strips are like line loops, only without the last link added.
Keyword LineStrip
### Quad strips

And a QuadStrip works as the triangle strip, only the window is 4 coordinates wide and steps 2 steps each time, so each new pair of coordinates attaches a new quadrangle to the last edge of the last quadrangle.

Keyword QuadStrip
### Polygon
A Polygon is a filled line loop. Simple as that!
Keyword Polygon
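For instance, a minimal sketch: the display function from before with only the keyword swapped draws the same twelve coordinates joined into a closed ring, and any of the other keywords above can be dropped in the same way.

display = do
  clear [ColorBuffer]
  -- Same coordinates as before, but rendered as a closed ring of line segments.
  renderPrimitive LineLoop $ mapM_ (\(x, y, z) -> vertex $ Vertex3 x y z) myPoints
  flush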
There are more things we can do on our canvas than just spreading out coordinates. Within the command list constructed after a renderPrimitive, we can give several different commands that control what things are supposed to look like, so for instance we could use the following
display = do
  clear [ColorBuffer]
  renderPrimitive Quads $ do
    color $ (Color3 (1.0::GLfloat) 0 0)
    vertex $ (Vertex3 (0::GLfloat) 0 0)
    vertex $ (Vertex3 (0::GLfloat) 0.2 0)
    vertex $ (Vertex3 (0.2::GLfloat) 0.2 0)
    vertex $ (Vertex3 (0.2::GLfloat) 0 0)
    color $ (Color3 (0::GLfloat) 1 0)
    vertex $ (Vertex3 (0::GLfloat) 0 0)
    vertex $ (Vertex3 (0::GLfloat) (-0.2) 0)
    vertex $ (Vertex3 (0.2::GLfloat) (-0.2) 0)
    vertex $ (Vertex3 (0.2::GLfloat) 0 0)
    color $ (Color3 (0::GLfloat) 0 1)
    vertex $ (Vertex3 (0::GLfloat) 0 0)
    vertex $ (Vertex3 (0::GLfloat) (-0.2) 0)
    vertex $ (Vertex3 ((-0.2)::GLfloat) (-0.2) 0)
    vertex $ (Vertex3 ((-0.2)::GLfloat) 0 0)
    color $ (Color3 (1::GLfloat) 0 1)
    vertex $ (Vertex3 (0::GLfloat) 0 0)
    vertex $ (Vertex3 (0::GLfloat) 0.2 0)
    vertex $ (Vertex3 ((-0.2::GLfloat)) 0.2 0)
    vertex $ (Vertex3 ((-0.2::GLfloat)) 0 0)
  flush

in order to produce these four coloured squares where each color command sets the color for the next item drawn, and the vertex commands give vertices for the four squares.

## Callbacks – how we react to changes

We have already seen one callback in action: displayCallback. The callbacks are state variables of the HOpenGL system, and are called in order to handle various things that may happen to the place the drawing canvas lives. For a first exercise, go resize the latest window you’ve used. Go on, do it now. I bet it looked ugly, didn’t it?

This is because we have no code handling what to do if the window should suddenly change. Handling this is done in a callback, residing in the IORef reshapeCallback. Similarly, repainting is done in displayCallback, keyboard and mouse input is in keyboardMouseCallback, and so on. We can refer to the HOpenGL documentation for window callbacks and for global callbacks. Window callbacks are things like display, keyboard and mouse, and reshape. Global callbacks deal with timing issues (for those snazzy animations) and the menu interface systems. In order for a callback to possibly not be defined, most are typed within the Maybe monad, so by setting the state variable to Nothing, a callback can be disabled. Thus, setting callbacks is done using the keyword Just.

We’ll add a callback for reshaping the window to our neat code, changing the main function to

main = do
  (progname, _) <- getArgsAndInitialize
  createWindow "Hello World"
  displayCallback $= display
  reshapeCallback $= Just reshape
  mainLoop

reshape s@(Size w h) = do
  viewport $= (Position 0 0, s)
  postRedisplay Nothing
Here, the code for the reshape function resizes the viewport so that our drawing area contains the entire new window. After setting the new viewport, it also tells the windowing system that something has happened to the window, and that therefore, the display function should be called.
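The same Just/Nothing pattern registers any of the other callbacks. As a small taster of the next part, here is a minimal sketch of a keyboard handler that quits on ‘q’ (the name keyboardMouse is just a placeholder of mine; it assumes an extra import of System.Exit and would be wired up in main with keyboardMouseCallback $= Just keyboardMouse):

import System.Exit (exitWith, ExitCode(ExitSuccess))

-- Quit on 'q'; ignore every other key and mouse event.
keyboardMouse :: Key -> KeyState -> Modifiers -> Position -> IO ()
keyboardMouse (Char 'q') Down _ _ = exitWith ExitSuccess
keyboardMouse _ _ _ _ = return ()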
## Summary
So, in conclusion, so far we can display a window, post basic callbacks to get the windowhandling to run smoothly, and draw in our window. Next installment of the tutorial will bring you 3d drawing, keyboard and mouse interactions, the incredible power of matrices and the ability to rotate 3d objects for your leisure. Possibly, we’ll even look into animations.
### 15 People had this to say...
• Leif D
• September 14th, 2006
• 17:48
Thank you!
Keep more comming
• Kyle
• September 14th, 2006
• 20:23
Awesome. Like Leif said, keep it going.
I’ve been waiting for someone to fill the OpenGL tutorial void in haskell.
Also, I happen to think it’s important work.
OpenGL is everywhere and is an excellent example of a low level popular C library for haskell to interface with. The more use it, the better the OpenGL bindings get, the better haskell’s interaction with normal libraries get.
• Michi
• September 15th, 2006
• 12:19
One point worth noting is that the added call to postRedisplay I used in the reshape callback is completely redundant. This wasn’t clear to me when writing, however, since I would do most of my testing over an ssh-tunneled X session. Thus updating was slow in general, and in particular resulted in redisplays not occurring as fast as they should.
A locally run test showed me that the behaviour asked actually does occur with reshape defined as
• El Seed
• September 25th, 2006
• 23:29
(x, y, z)->vertex$Vertex3 x y z does not compile for me. Maybe the intention was \(x, y, z)->vertex$Vertex3 x y z
• El Seed
• September 25th, 2006
• 23:50
er, in general your backslashes are disappearing, as in
map (k -> (sin(2*pi*k/12),cos(2*pi*k/12),0.0)) [1..12]
• Michi
• September 26th, 2006
• 7:45
Thanks for the corrections. I think the case with the disappearing backslashes is to a certain extent the blogpost being “hung over” from problems I had when editing – where all of a sudden ALL “special” characters ended up escaped with \ all over the place; and at some point I just removed all backslashes (without thinking about the consequences).
I’ll incorporate your points in an edit now. Thanks.
[...] As we left off the last installment, we were just about capable to open up a window, and draw some basic things in it by giving coordinate lists to the command renderPrimitive. The programs we built suffered under a couple of very infringing and ugly restraints when we wrote them – for one, they weren’t really very modularized. The code would have been much clearer had we farmed out important subtasks on other modules. For another, we never even considered the fact that some manipulations would not necessarily be good to do on the entire picture. [...]
• alex
• November 16th, 2006
• 21:38
With all due respect, the code above is pretty ugly. Is there any real advantage to using Haskell for OpenGL programming?
• Michi
• November 17th, 2006
• 8:47
First off, there are definitely ways to write the code prettier than I did – this tutorial is written almost as much for my own learning as it is for teaching purposes.
Secondly, I wouldn’t be able to pinpoint a good knock-out argument why OpenGL in Haskell would be a better idea than, say, OpenGL in C or OpenGL with Python bindings etc. other than at most being able to manhandle lists and what-not with the Haskell built-in tools. You also will not get the most funky and modern stuff at the current state of HOpenGL, so I’d explicitly recommend against it for, say, games programming in general.
I remember OpenGL in C as being unwieldy and annoying when I coded it, and the way it’s encapsulated in Haskell to be neat in comparison, but this may be taken care of in other language bindings just as well.
• Rick
• November 19th, 2006
• 21:32
Thanks for the tutorial
• Rasmus Ulvund
• October 23rd, 2007
• 0:45
Very cute!
I should take some time to play with this.
[...] best OpenGL tutorial for Haskell that I’ve found is this one from Michi’s blog, using GLUT to interface with X. For this tutorial we are going to use the Gtk GLDrawingArea [...]
• March 9th, 2009
• 18:05
Thanks a lot for this tutorial. This is exactly how a tutorial should be — begin by showing how to draw some frigging points rather than with 15 pages of boilerplate, side remarks and whatnot. I can usually figure out much of a library from the docs once I got the ball rolling like this, but getting there is a pain from a reference manual.
Minor point: As of now (GLUT-2.1.1.2) the resizing works just fine without the reshape callback.
[...] an nice introduction to OpenGL under haskell, see here, and also the [...]
|
Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp015q47rn73z
Title: CO2 trapping in sloping aquifers: High resolution numerical simulations
Contributors: Elenius, Maria; Tchelepi, Hamdi; Johannsen, Klaus
Keywords: capillary forces; dissolution
Issue Date: 13-May-2010
Series/Report no.: XVIII International Conference on Water Resources, CMWR 2010, J. Carrera (Ed), Barcelona, 2010
Abstract: We performed numerical simulations of the migration of a supercritical CO2 current in a sloping aquifer in the presence of residual and solubility trapping. Compared to simulations with residual trapping only, when dissolution is accounted for the trapping efficiency is nearly doubled and the speed and maximum up-dip extent of the plume are affected. The saturations in the plume correspond well to transition zones consistent with capillary equilibrium. The pressure gradients slightly ahead of the leading tip of the current remain at the initial values, and that opens up the possibility to use a simple moving boundary to model extremely long aquifers.
URI: http://arks.princeton.edu/ark:/88435/dsp015q47rn73z
Appears in Collections: Princeton-Bergen Series on Carbon Storage
Files in This Item:
EleniusEtAl CMWR 2010.pdf (205.84 kB, Adobe PDF)
This item is licensed under a Creative Commons License
|
# FPWeeklyMeeting
Want to give a Functional Programming related talk? Talk to Anton Ekblad (antonek at chalmers dot se) to book a slot. We usually have a talk every Friday at 10:00.
## Upcoming Meetings

None
## Past Meetings
### 2019-11-29, 10:00, EDIT 8103
Speaker
Koen Claessen, Chalmers University of Technology
Title
Building and Testing a Synthesizer from LTL properties
Abstract
I am in the process of building a "synthesizer", which is a tool that, given a specification of a system as Linear Temporal Logic (LTL) properties, can automatically compute a state machine that satisfies all those LTL properties. In the spirit of "more talks about ongoing work" I will present how far I've come and the problems I am facing right now.
In particular, I need to have an effective way of testing some of the components of the tool I am building. One of these components translates from LTL to an automaton, is very hard to get efficient and right, obviously needs to be correct, and is very hard to test! LTL semantics deals with infinite traces which are hard to deal with in the context of executable properties.
The talk assumes no working knowledge of LTL in members of the audience. Everything will be explained :)
### 2019-09-13, 10:00, EDIT Analysen
Speaker
Abraham Wolk, University of Copenhagen
Title
Towards a physically realistic universal model of parallel computation
Abstract
Physical laws and the geometry of 3-dimensional Euclidean space put limitations on (1) the speed of information transmission and (2) the scalability of network topologies. Prevailing models of parallel computation either do not take these limitations into account, or they relegate them to free parameters of the model that may be instantiated according to the application at hand. While the latter option may yield useful insights when analyzing an algorithm intended to be run on a /particular/ machine for which the parameters may be approximated, it does not yield insight into the scalability of an algorithm beyond the particular machine in a physically realistic way.
In this talk I will present ideas towards the development of a model of parallel computation in which computation is conceived of as a program executing on top of an underlying parallel universal machine that can plausibly be realized physically. The hope is that by measuring the resource consumption (both time and hardware resources) of the underlying universal machine, one obtains a realistic prediction of the actual resources required to execute an algorithm, as well as an understanding of how the resource consumption scales in a physically realistic manner.
### 2019-09-6, 10:00, EDIT Analysen
Speaker
Moa Johansson, Chalmers
Title
Learning and automated reasoning – informal thoughts and new ideas
Abstract
This talk will be very informal, and my aim is to see if I can find someone else who is interested in discussing and working on an interesting problem – how to combine learning and reasoning in theorem proving.
No one has missed the rapid development and popularity of machine learning (deep learning in particular) in many areas, including Natural Language Processing. What successful machine learning systems manage to do well is to in some sense capture some aspect of human intuition – even though not always perfect. E.g. a Google translate translation is often understandable, but perhaps not perfect.
Naturally, there is also growing interest in machine learning in the automated reasoning community, in particular in finding out if machine learning can contribute towards tasks needing human intuition and interaction (such as choosing the next tactic to advance a proof in an interactive theorem prover) or tasks which are now guided by heuristics (such as selecting clauses, strategies etc. in an automated prover).
One specific question which I’ve been thinking about is how to best represent theorem proving concepts (e.g. functions, constants, datatypes, conjectures, subgoals etc) in order to make them amenable to machine learning models – which want vectors or matrices of numerical values as inputs. In the literature, the techniques used are either hand-crafted features (based on heuristics) or techniques from NLP, treating the symbols as more or less words in natural language. However, there are fundamental differences between “words” in theorem proving languages (functions, constants, types etc) and NL words. The former does have a formal semantics – which should be exploited! Furthermore, in theorem provers, users add new “words” routinely, e.g. whenever they define a new function. This happens less often in natural languages. This has largely been ignored by the learning theorem proving community but is a problem. So, how do we create a representation which is cheap to add new “words” to?
Popular methods from NLP such as word2vec, which are trained to learn a vector representation can be expensive to train and need lots of data, so there is a question about how suitable they are if we often need to update the learned vector representation. Are there existing methods?
### 2019-08-30, 10:00, EDIT Analysen
Speaker
Hendrik van Antwerpen, Delft University
Title
Spoofax Language Workbench
Abstract
Meta-languages exist to improve the development of (domain specific) programming languages. Language engineers define aspects of the language by writing a high-level, declarative specification. A language implementation is derived from the specification. In this talk I will talk about (a) Statix, a meta-language for name binding and typing rules, and (b) ongoing work on generating random program terms from Statix specifications.
Immediately after the talk is a small workshop to play with Spoofax, the language workbench that contains, among other things, an implementation of Statix. We will do some interactive language development and play with some of the meta-languages of Spoofax by extending an existing language definition for STLC-with-Records with new syntactic constructs and typing rules. It is best to install Spoofax and get the workspace I prepared for this workshop beforehand, so we can focus on some hacking. See for instructions: https://hendrik.van-antwerpen.net/talks/20190830-chalmers/
### 2019-08-16, 10:00, EDIT Analysen
We are going to fire up our Functional Programming talks again and this Friday we start with a bang! We are going to have no less than four FP talks, which are dry run ICFP presentations. We have come up with the following program:
• 10:00 - 10:30 Simple Noninterference from Parametricity, Maximilian Algehed
• 10:30 - 11:00 Hailstorm : A static typed functional languages for systems, Abhiroop Sarkar
• 11:00 - 11:30 Coffee break
• 12:00 - 12:30 TBA, Markus Aronsson
The presentations are going to take place in EDIT Analysen. You can find some abstracts below. All welcome!
--
Simple Noninterference from Parametricity, Maximilian Algehed
In this paper we revisit the connection between parametricity and noninterference. Our primary contribution is a proof of noninterference for a polyvariant variation of the Dependency Core Calculus of Abadi et al. in the Calculus of Constructions. The proof is modular: it leverages parametricity for the Calculus of Constructions and the encoding of data abstraction using existential types. This perspective gives rise to simple and understandable proofs of noninterference from parametricity. All our contributions have been mechanised in the Agda proof assistant.
Hailstorm : A static typed functional languages for systems, Abhiroop Sarkar
This talk provides an introduction to a work in progress functional language implementation which is designed primarily for programming embedded systems.
We present a novel method for ensuring that relational database queries in monadic embedded languages are well-scoped, even in the presence of arbitrarily nested joins and aggregates. Demonstrating our method, we present a simplified version of Selda, a monadic relational database query language embedded in Haskell, with full support for nested inner queries. To our knowledge, Selda is the first relational database query language to support fully general inner queries using a monadic interface. In the Haskell community, monads are the de facto standard interface to a wide range of libraries and EDSLs. They are well understood by researchers and practitioners alike, and they enjoy first class support by the standard libraries. Due to the difficulty of ensuring that inner queries are well-scoped, database interfaces in Haskell have previously either been forced to forego the benefits of monadic interfaces, or have had to do without the generality afforded by inner queries.
### 2019-05-03, 10:00, EDIT Analysen
Speaker
Troels Henriksen, DIKU
Title
Compiler transformations for a GPU-targeting purely functional array language
Abstract
Futhark is a small programming language designed to be compiled to efficient parallel code. It is a statically typed, data-parallel, and purely functional array language, and comes with an optimising ahead-of-time compiler that generates GPU code via OpenCL and CUDA. Futhark is not designed for graphics programming, but instead uses the compute power of the GPU to accelerate data-parallel array computations (“GPGPU”).
This talk presents the design of the Futhark language, as well as outlines several of the key compiler optimisations that have enabled performance comparable to hand-written GPU code. Through the use of a functional source language, we obtain strong invariants that simplify and empower the application of traditional compiler optimisation techniques. In particular, I will discuss (i) handling of high-level features like modules, polymorphism, and higher-order functions, (ii) loop interchange and distribution to extract flat parallel kernels from nested parallel source programs, (iii) multi-versioned code generation that exploits as much parallelism as necessary and efficiently sequentialises the rest, and (iv) data layout transformations to ensure coalesced memory access on GPUs.
### 2019-04-12, 10:00, EDIT 8103
Speaker
Herbert Lange, Chalmers
Title
Restricting grammars to reduce ambiguity
Abstract
Formal grammars are useful tools to describe languages both in computer science and computational linguistics. To describe natural languages, large-scale grammars are necessary, e.g. the Grammatical Framework Resource Grammar Library. But with increasing size and complexity of a grammar, more and more ambiguity in the analyses arises, which is undesirable for specific applications.
In this talk I will look at restricting a formal grammar in a way that it describes exactly a language-specific fragment of the original language and this fragment can be given in the form of a set of examples. The resulting grammars will be specifically optimised to avoid ambiguous analyses. The focus of the talk will be on natural language applications but the methods I will present are generic and can be of general interest.
### 2019-01-18, 10:00, EDIT Analysen
Speaker
Joel Svensson, RISE
Title
A Lisp-like interpreter on ZYNQ ARM+FPGA
Abstract
This talk, more of a demonstration perhaps, is about a just-for-fun hobby project that I work on as an educational exercise for myself. I am working on an interpreter for a lisp-like language that I intend to run on a ZYNQ ARM+FPGA (on the ARM core). Apart from learning about implementation of lisp-like languages I hope to gain the benefit of having a bare-metal REPL to assist me when I play with my development board. This is work-in-progress so tips and ideas are greatly appreciated.
### 2018-10-26, 10:00, EDIT EA
Speaker
Josef Svenningsson, Ericsson
Title
A Gradual Type System for Erlang
Abstract
This talk introduces a new type system for Erlang based on Gradual Typing. The principles of Gradual Typing have emerged in the type system research community over the last decade and have resulted in several new type systems for existing languages. Gradual Typing is tailored to mix static and dynamic code. The type system provides pay-as-you-go static checking: the more type annotations in the program, the more static checking will be performed.
The tool we've developed uses Erlang's current syntax for specs, and it works on existing code bases without change. I will go through the design principles used when developing our new gradual type system for Erlang. No prior knowledge of gradual typing will be assumed.
Dialyzer is currently the most popular tool for the static checking of Erlang. Our gradual type system turns out to be somewhat complementary to Dialyzer. While Dialyzer aims to give no false positives, our type system always reports an error whenever a type spec doesn't match the code. We'll provide an in-depth comparison of the two tools.
This talk is an extended version of a presentation given at Code Beam this year.
### 2018-10-05, 10:00, EDIT Analysen
Speaker
John Hughes, Chalmers
Title
Testing Statistical Properties
Abstract
Randomised algorithms (such as QuickCheck itself!) require choices to be made with a given probability. Such algorithms are harder to test, because testing requires estimating probabilities in a statistically sound way. In this talk I’ll explore ways to include statistical properties in property-based tests, based on work that Nick Smallbone and I have done recently, with inspiration from a Quviq customer.
Slides
hughes_statistical_properties.pdf
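As a rough illustration of the kind of property the abstract is about, here is a minimal sketch using the cover and checkCoverage combinators of a recent QuickCheck (an assumption on my part; the talk's actual examples may differ):

import Test.QuickCheck

-- Claim that the label "heads" should occur in at least 45% of test cases,
-- and let QuickCheck decide in a statistically sound way whether that claim
-- holds, instead of failing on a single unlucky run.
prop_fairCoin :: Bool -> Property
prop_fairCoin b = checkCoverage $ cover 45 b "heads" True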
### 2018-09-28, 10:00, EDIT 8103
Speaker
Maximilian Algehed, Chalmers
Title
A Perspective on the Dependency Core Calculus
Abstract
I will present a simple but equally expressive variant on the terminating fragment of the Dependency Core Calculus (DCC) of Abadi et al. [1]. DCC is a concise and elegant calculus for tracking dependency. The calculus has applications in, among other areas, information flow control, slicing, and binding time analysis. However, in this paper we show that it is possible to replace a core technical device in DCC with an alternative, simpler, formulation. The calculus has a denotational semantics in the same domain as DCC, using which we prove that the two calculi are equivalent. As a proof of concept to show that our calculus provides a simple analysis of dependency we implement it in Haskell, obtaining a simpler implementation compared to previous work [2].
This will be a dry-run of a talk I will be giving at the PLAS workshop co-located with CCS.
[1] A Core Calculus of Dependency - Abadi et al. [2] Embedding DCC in Haskell - Algehed and Russo
### 2018-09-21, 10:00, EDIT Analysen
Speaker
Nachiappan Valliappan, Chalmers
Title
Typing the Wild in Erlang, Hindley-Milner style
Abstract
Developing a static type system suitable for Erlang has been of ongoing interest for almost two decades now. The challenge with retrofitting a static type system onto a dynamically typed language, such as Erlang, is the loss of flexibility in programming offered by the language. In light of this, many attempts to type Erlang trade sound type checking for the ability to retain flexibility. In this talk, I'll present an implementation of a Hindley-Milner type system for Erlang which strives to remain sound without being too restrictive. Our type checker—unlike contemporary implementations of Hindley-Milner—is flexible enough to allow overloading of data constructors, branches of different types etc. Further, to allow Erlang’s dynamic type behaviour, we employ a program specialization technique called partial evaluation. Partial evaluation simplifies programs prior to type checking, and hence enables the type system to type such behaviour under certain restricted circumstances.
Note: This talk is a dry-run of my presentation at the upcoming Erlang Workshop, based on my master thesis work at Chalmers supervised by John Hughes.
### 2018-09-14, 11:00, EDIT Analysen
Speaker
Agustín Mista, Chalmers
Title
Branching Processes for QuickCheck Generators
Abstract
In QuickCheck (or, more generally, random testing), it is challenging to control random data generators' distributions---especially when it comes to user-defined algebraic data types (ADTs). In this paper, we adapt results from an area of mathematics known as branching processes, and show how they help to analytically predict (at compile-time) the expected number of generated constructors, even in the presence of mutually recursive or composite ADTs. Using our probabilistic formulas, we design heuristics capable of automatically adjusting probabilities in order to synthesize generators whose distributions are aligned with users' demands. We provide a Haskell implementation of our mechanism in a tool called DRaGen and perform case studies with real-world applications. When generating random values, our synthesized QuickCheck generators show improvements in code coverage when compared with those automatically derived by state-of-the-art tools.
### 2018-09-12, 13:15, EDIT 8103
Speaker
Andrey Mokhov, Newcastle University
Title
Algebraic Graphs
Abstract
Are you tired of fiddling with sets of vertices and edges when working with graphs? Would you like to have a simple algebraic data type for representing graphs and manipulating them using familiar functional programming abstractions? In this talk, we will learn a new way of thinking about graphs and a new approach to working with graphs in a functional programming language like Haskell. The ideas presented in the talk are implemented in the Alga library: https://github.com/snowleopard/alga. I hope that after the talk you will be able to implement a new algebraic graph library in your favourite programming language in an hour.
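A rough sketch of the core idea (based only on the abstract above; the actual Alga API is richer and built around type classes):

-- Every graph is built from four primitives: overlay unions two graphs,
-- while connect additionally adds an edge from every vertex on the left
-- to every vertex on the right.
data Graph a = Empty
             | Vertex a
             | Overlay (Graph a) (Graph a)
             | Connect (Graph a) (Graph a)

-- A single edge, expressed with the primitives above.
edge :: a -> a -> Graph a
edge x y = Connect (Vertex x) (Vertex y)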
### 2018-09-07, 10:00, EDIT Analysen
Speaker
Sólrún Halla Einarsdóttir, Chalmers
Title
Into the Infinite - Theory Exploration for Coinduction
Abstract
Theory exploration is a technique for automating the discovery of lemmas in formalizations of mathematical theories, using testing and automated proof techniques. Automated theory exploration has previously been successfully applied to discover lemmas for inductive theories, about recursive datatypes and functions. We present an extension of theory exploration to coinductive theories, allowing us to explore the dual notions of corecursive datatypes and functions. This required development of new methods for testing infinite values, and for proof automation. Our work has been implemented in the Hipster system, a theory exploration tool for the proof assistant Isabelle/HOL.
This talk is a dry-run for my upcoming talk at AISC 2018, based on joint work with Moa Johansson and Johannes Åman Pohjola.
### 2018-06-08, 10:00, EDIT EA
Speaker
Maximilian Algehed, Chalmers
Title
Parametricity and Noninterference revisited
Abstract
I will talk about a few new proofs of noninterference for variants of System F_\omega which make use of parametricity for dependent types. Unlike previous results, I obtain parametricity-based proofs of noninterference for both static and dynamic enforcement of noninterference. The proofs make use of a parametricity formulation for PTSs by Bernardy et al. [1]. In their theory, a term in a PTS O gets translated into a term in another PTS M, O and M for Object and Meta, where the M terms represent theorems and proofs derived from the type and implementation of the terms in O. Our proofs make essential use of this separation of two systems to hide parts of secure data-types from the programmer. In our formulation, the M language constitutes both meta-language and trusted computing base.
[1] Proofs for Free, J.P. Bernardy, P. Jansson, R. Patterson, JFP 2012
### 2018-06-01, 10:00, EDIT 3364
Speaker
Edvard Hübinette and Fredrik Thune, Chalmers
Title
Better Implicit Parallelism (MSc Thesis presentation)
Abstract
ApplicativeDo tries to remove the Monad binds where possible when desugaring Haskell's do-notation. It does this by looking at inter-statement dependencies and using the Applicative operator when it can; this introduces implicit parallelism in the code. Deciding which structure to generate is tricky, since many different structures are usually possible. Currently, this is solved with a simple minimum cost analysis with a unit cost model, assuming each statement takes the same amount of time to evaluate.
At Facebook, Marlow et. al developed [Haxl](https://github.com/facebook/haxl), a data-fetching library that can use applicative expressions for free parallelism. To our knowledge, it is the only library that does automatic parallelism on Applicative expressions.
By extending the cost model inside the ApplicativeDo algorithm for variable evaluation costs, and implementing weight annotations through optional programmer pragmas in GHC, we do smarter generation of code by prioritising parallelisation of heavy statements. This leads to quite nice speedups when using Haxl, with very little extra work for the programmer.
ApplicativeDo was introduced with Desugaring Haskell's do-notation Into Applicative Operations (Marlow, Peyton-Jones, Kmett, & Mokhov, Haskell Symposium 2016). Haxl was introduced with There Is No Fork (Marlow, Brandy, Coens, & Purdy, ICFP 2014).
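A minimal sketch of the transformation the thesis builds on (the fetchA and fetchB names are made up for illustration): a do-block whose statements do not depend on each other only needs Applicative, so a library like Haxl is free to run the two fetches in parallel.

{-# LANGUAGE ApplicativeDo #-}

-- With ApplicativeDo, this desugars to (+) <$> fetchA <*> fetchB rather than
-- a chain of monadic binds, because neither statement uses the other's result.
combine :: Applicative f => f Int -> f Int -> f Int
combine fetchA fetchB = do
  x <- fetchA
  y <- fetchB
  pure (x + y)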
### 2018-05-25, 10:00, EDIT EA
Speaker
Matthías Páll Gissurarson, Chalmers
Title
Suggesting Valid Hole Fits for Typed-Holes (Master Thesis defence)
Abstract
Most programs are developed from some sort of specification, and type systems allow programmers to communicate part of this specification to the compiler via the types. The types can then be used to verify that the implementation matches this partial specification. But can the types be used to aid programmers during development, beyond verification? In this talk I will defend my master's thesis, titled "Suggesting Valid Hole Fits for Typed-Holes" and present a lightweight, practical extension to the typed-holes of GHC that improves user experience and facilitates a style of programming called "Type-Driven Development".
### 2018-05-18, 10:00, EDIT 3364
Speaker
Title
Designing Domain-Specific Heterogeneous Architectures from Dataflow Programs
Abstract
The last ten years have seen performance and power requirements pushing computer architectures using only a single core towards so-called manycore systems with hundreds of cores on a single chip. To further increase performance and energy efficiency, we are now seeing the development of heterogeneous architectures with specialized and accelerated cores. However, designing these heterogeneous systems is a challenging task due to their inherent complexity. We propose an approach for designing domain-specific heterogeneous architectures based on instruction augmentation through the integration of hardware accelerators into simple cores. These hardware accelerators are determined based on their common use among applications within a certain domain. The objective is to generate heterogeneous architectures by integrating many of these accelerated cores and connecting them with a network-on-chip. The proposed approach aims to ease the design of heterogeneous manycore architectures—and, consequently, exploration of the design space—by automating the design steps. To evaluate our approach, we enhanced our software tool chain with a tool that can generate accelerated cores from dataflow programs. This new tool chain was evaluated with the aid of two use cases: radar signal processing and mobile baseband processing. We could achieve an approximately 4x improvement in performance, while executing complete applications on the augmented cores with a small impact (2.5–13%) on area usage. The generated accelerators are competitive, achieving more than 90% of the performance of hand-written implementations.
In this talk I will give an overview of our design method for developing heterogeneous architectures with custom extensions generated automatically by our tool chain. The tool chain takes dataflow applications as input and generates custom hardware in a functional programming language which then can be translated to verilog.
Slides
savas.pdf
### 2018-04-20, 10:00, EDIT 3364
Speaker
Troels Henriksen, DIKU, Copenhagen University
Title
Design and implementation of the Futhark programming language
Abstract
Futhark is a data-parallel programming language under development at DIKU. The language supports (regular) nested data-parallelism and features an optimising compiler under rapid development. This presentation gives an introduction to Futhark and its compiler, with a particular focus on generation of efficient OpenCL GPU kernels and the steps automatically taken by the Futhark compiler to ensure fusion and efficient memory accesses. In the presentation, I will discuss various tradeoffs of the code generation, and how a new versioning technique can defer to runtime the decision between different parallelisation strategies.
### 2018-03-09, 10:00, EDIT 3364
Speaker
Solène Mirliaz and Elisabet Lobo, Chalmers
Title
Interpreting Lambda Calculus via Category Theory (A pragmatic programming guide for the non-expert)
Abstract
Many domain-specific languages are often implemented in a deep embedded fashion in order to enable optimizations. Many of the techniques for EDSLs sooner or later end up repeating the work of the host compiler---for this pearl, that would be GHC. Recently, a new approach called Compiling to Categories has emerged and promises to avoid such replication of work. It relies on understanding the categorical model of Cartesian Closed Categories (CCC). That means that, to use this new technique, it becomes necessary to understand basic category theory and CCC. Unfortunately, when learning about such topics and their relation to functional programs, one faces the risk of diving into mathematical books with difficult-to-penetrate notation, getting lost in abstract notions, and eventually giving up. Instead, this pearl aims to guide Haskell developers to the understanding of all of such abstract concepts via Haskell code. We present two EDSLs in Haskell: one for simply-typed lambda terms and another for CCC and show how to translate programs from one into the other---a well-known result by Lambek in 1985. To achieve the translation, our implementation relies on GHC closed type families. By going into CCC, it becomes possible to give many interpretations to lambda terms, but in this work we only focus on a traditional call-by-value semantics. Hence, we also show how to execute CCC programs via the categorical abstract machine (CAM). Moreover, we extend our implementation of simply-typed lambda calculus with primitive operators, branching, and fix points via appropriate enhancements to CCC and CAM based on category theory concepts. All this journey is going to be grounded in Haskell code, so that developers can experiment and stop fearing such abstract concepts as we did.
This talk is based on a joint-work with Alejandro Russo.
### 2018-03-02, 10:00, EDIT 8103
Speaker
Dominic Orchard, University of Kent
Title
Programs that explain their effects (in Haskell)
Abstract
Slides
https://github.com/dorchard/effectful-explanations-talk
### 2017-12-15, 10:00, EDIT 3364
Speaker
Matthías Páll Gissurarson
Title
Suggesting Valid Substitutions For Typed Holes
Abstract
A common problem when working with large software libraries is how to discover functions and constants within those libraries. Typed holes are a feature in Haskell, where programmers can specify “holes” in expressions and have the compiler relate information about those holes inferred from the context. Our contribution is an extension to the functionality of these holes where type directed search is used to suggest valid substitutions for those holes to the programmer. This extension has been accepted into GHC.
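For readers who have not used the feature, a hole is an underscore left where an expression is still missing, as in the minimal sketch below (my example, not one from the talk). GHC reports the hole's type, here Int -> Int -> Int, and with the extension described above also suggests in-scope identifiers that fit it, such as (+) or (*).

-- This is meant to be fed to the compiler rather than run: the underscore is
-- a typed hole, and GHC's report describes what could replace it.
summed :: [Int] -> Int
summed xs = foldr _ 0 xs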
### 2017-12-08, 10:00, EDIT 3364
Speaker
Maximilian Algehed
Title
SessionCheck, Bidirectional Testing of Communicating Systems
Abstract
I will present a new tool called SessionCheck. This tool helps programmers write distributed applications that work correctly. SessionCheck is designed to help rid programmers of the tedium of maintaining more than one specification and test suite for multiple application components. SessionCheck does this by borrowing ideas from session types and domain specific languages in order to provide a simple yet expressive and compositional specification language for communication protocols. Specifications written in the SessionCheck specification language can be used to test both client and server implementations of the same protocol in a completely language-agnostic manner.
### 2017-12-01, 10:00, EDIT 3364
Speaker
Nicola Botta
Title
Responsibility, influence and sequential decision problems
Abstract
Attributing responsibility -- e.g. for units of greenhouse gas emissions -- in a consistent way is crucial for fair burden sharing and even compensation schemes but also for effective decision making: one would like to invest computational or human resources in decisions that truly matter. But how can we compute which decisions really do matter? We argue that a dependently typed formalization of optimal decision making under uncertainty is a good starting point for deriving measures of influence/responsibility.
### 2017-11-03, 10:00, EDIT 3364
Speaker
Aarne Ranta
Title
Explainable Machine Translation
Abstract
Explainable Artificial Intelligence (XAI) is an attempt to mitigate the black-box character of machine learning based AI techniques. A prime example of such techniques is Neural Machine Translation (NMT), which works via an end-to-end string transformation where the intermediate stages are machine-created vectors of floating point numbers. While NMT has reached average scores higher than older methods, its problem is that the resulting translations are sometimes completely wrong, and it is hard to tell good translations from bad ones without knowing both the source and the target language.
Explainable Machine Translation (XMT) applies the XAI idea to the field of translation. It is inspired by Kurt Mehlhorn’s notion of certifying algorithms: algorithms that don’t just produce a result but a certificate - a piece of evidence that can be inspected by the user. The certificate that we propose is a formula in constructive type theory, which encodes the semantic and abstract syntactic structure of the translatable content. The formula is obtained by a parser, which itself may be a black box, but which has a formally defined relation to the source and target text. In our implementation of the idea, the parser is a neural dependency parser returning Universal Dependency (UD) trees. These trees are interpreted as type theoretical formulas, which form a translation interlingua. As the final step, the interlingua is linearized into the target language by using the Grammatical Framework (GF). The last step is defined by formal rules and therefore easy to verify, whereas the parsing step - which involves interpretation of unrestricted human language - is a natural playground for machine learning approaches. By producing a formula as the outcome of this step, we can increase the reliability of the entire translation process.
### 2017-10-27, 10:00, EDIT 3364
Speaker
Emil Axelsson
Title
Rhino -- a DSL for data-centric applications
Abstract
In this talk I will present our development of Rhino at Mpowered Business Solutions. Rhino is a functional DSL/framework targeting data-centric applications such as compliance calculations, which involve managing potentially large amounts of data and presenting reports from calculations on this data. The goal is a fully declarative framework where the programmer can focus on the structure and logic of the application rather than details of how it is going to run. We envision several advantages of a declarative system: (1) rapid development; (2) easy to use for domain experts; (3) tractability to include third-party plug-ins (due to lack of side effects); etc.
Rhino is very much work in progress. The talk will focus on the pure functional calculation and query languages used in Rhino. I will also share some lessons learned from an early attempt to implement Rhino as an embedded DSL in Haskell.
Mpowered's core product is a toolkit for measuring compliance with South African legislation for diversity in companies. Rhino aims to improve the existing implementation by factoring out the business logic in order to get a more maintainable system and to allow calculations to be written by domain experts. With some luck, Rhino may even be useful to other businesses in the same domain.
### 2017-10-06, 10:00, EDIT 3364
Speaker
Jean-Philippe Bernardy
Title
TypedFlow: the HOT parts
Abstract
TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. TensorFlow graphs can be efficiently evaluated on GPUs and is a popular choice to implement deep learning applications.
TypedFlow is a higher-order and typed (HOT) frontend to tensorflow, written in (Glasgow) Haskell.
In this talk I will:
• briefly explain what TensorFlow is and how it applies to deep learning
• recall the advantages of a HOT approach
• expose some example programs written using TypedFlow
• demonstrate how tensorflow functions can be given precise types, using GHC extensions
• discuss some the difficulties of doing so
### 2017-09-15, 14:15, EDIT EB
Speaker
Erik Hertz
Title
Methodologies for Approximation of Unary Functions and Their Implementation in Hardware
Abstract
Applications in computer graphics, digital signal processing, communication systems, robotics, astrophysics, fluid physics and many other areas have evolved to become very computation intensive. Algorithms are becoming increasingly complex and require higher accuracy in the computations. In addition, software solutions for these applications are in many cases not sufficient in terms of performance. A hardware implementation is therefore needed. A recurring bottleneck in the algorithms is the performance of the approximations of unary functions, such as trigonometric functions, logarithms and the square root, as well as binary functions such as division. The challenge is therefore to develop a methodology for the implementation of approximations of unary functions in hardware that can cope with the growing requirements. The methodology is required to result in fast execution time, low complexity basic operations that are simple to implement in hardware, and – since many applications are battery powered – low power consumption. To ensure appropriate performance of the entire computation in which the approximation is a part, the characteristics and distribution of the approximation error are also things that must be possible to manage.
The new approximation methodologies presented are of the type that aims to reduce the sizes of the look-up tables by the use of auxiliary functions. They are founded on a synthesis of parabolic functions by multiplication – instead of addition, which is the most common.
For some functions, such as roots, inverse and inverse roots, a straightforward solution with an approximation is not manageable. Since these functions are frequent in many computation intensive algorithms, it is necessary to find very efficient implementations of these functions. New methods for this are also presented. They are all founded on working in a floating-point format, and, for the roots functions, a change of number base is also used. The transformations not only enable simpler solutions but also increased accuracy, since the approximation algorithm is performed on a mantissa of limited range.
### 2017-09-01, 10:00, EDIT 8103
Speaker
Title
A Meta-EDSL for Distributed Web Applications
Abstract
We present a domain-specific language for constructing and configuring web applications distributed across any number of networked, heterogeneous systems. Our language is embedded in Haskell, provides a common framework for integrating components written in third-party EDSLs, and enables type-safe, access-controlled communication between nodes, as well as effortless sharing and movement of functionality between application components. We give an implementation of our language and demonstrate its applicability by using it to implement several important components of distributed web applications, including RDBMS integration, load balancing, and fine-grained sandboxing of untrusted third party code.
This is a dry run for my talk at Haskell Symposium '17, so please bring an extra helping of comments and opinions.
### 2017-08-25, 10:00, EDIT EF
Speaker
Markus Aronsson
Title
Abstract
I'll present a library in Haskell for programming Field Programmable Gate Arrays (FPGAs), including hardware/software co-design. Code for software (C) and hardware (VHDL) is generated from a single program. The library is, however, open in that our type-based techniques for embedding allow for the implementation of more than two embedded domain-specific languages (EDSLs). Code generation is implemented as a series of translations between progressively smaller, typed EDSLs, safeguarding against errors that arise in untyped translations.
If you're familiar with Feldspar, consider this a redesign that supports compilation to both VHDL and C.
### 2017-06-16, 10:00, EDIT 8103
Speaker
Title
Abstract
In this talk I present Selda, a monadic DSL for relational database queries embedded in Haskell, using the semantics of the list monad as a base. Selda improves upon the current state of the art in database DSLs by including joins and aggregation over fully general inner queries. To this end, I present a simple yet novel method to guarantee that inner queries are well-scoped.
In the Haskell community, monads are the de facto standard interface to a wide range of libraries and EDSLs. They are well understood by researchers and practitioners alike, and they enjoy first-class support from the standard libraries. Due to the difficulty of ensuring that inner queries are well-scoped, database interfaces in Haskell have either been based on other, less familiar abstractions, or have had to do without the generality afforded by inner queries.
This talk is a dry run for TFP '17. Feedback on everything from slides to talk structure to the content itself is even more welcome than usual!
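To illustrate the list-monad reading of queries that the talk builds on, here is a toy sketch in plain Haskell; it is not Selda's actual API, and the "table" and its fields are made up.

```haskell
import Control.Monad (guard)

-- A made-up "table" of people as plain Haskell values.
people :: [(String, Int)]   -- (name, age)
people = [("Ada", 36), ("Alan", 41), ("Grace", 45)]

-- The list-monad analogue of
--   SELECT name FROM people WHERE age > 40
olderThan40 :: [String]
olderThan40 = do
  (name, age) <- people
  guard (age > 40)
  pure name
```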
### 2017-06-09, 10:00, EDIT 8103
Speaker
Title
Evaluation of a new recursion induction principle
Abstract
Structural induction is a powerful tool for proving properties about functions with recursive structure, but it is useless when the functions do not preserve the structure of the input. Many of today's cutting-edge theorem provers use structural induction, which makes it impossible for them to prove properties about non-structurally recursive functions. We introduce a new induction principle, where the induction is done over a function application. This principle makes it possible to prove properties about non-structurally recursive functions.
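As a concrete (and hypothetical, not taken from the talk) example of the kind of function the new principle targets, merge sort recurses on halves of its input rather than on immediate sub-structures, so plain structural induction on lists does not apply directly:

```haskell
-- Non-structural recursion: the recursive calls are on computed halves of
-- the input list, not on its tail.
msort :: Ord a => [a] -> [a]
msort []  = []
msort [x] = [x]
msort xs  = merge (msort front) (msort back)
  where
    (front, back) = splitAt (length xs `div` 2) xs
    merge as [] = as
    merge [] bs = bs
    merge (a:as) (b:bs)
      | a <= b    = a : merge as (b:bs)
      | otherwise = b : merge (a:as) bs
```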
### 2017-06-02, 10:00, EDIT 8103
Speaker
Regina Hebig
Title
Building DSLs with Modelling Techniques – What we teach and what our students make out of it
Abstract
We teach DSL development in the software engineering program as well as in the WASP program, to master's students and practitioners. In doing so, we focus on modelling techniques such as metamodeling, Xtext, OCL, code generation, and model transformation. In the talk we will provide an overview of these techniques and discuss how far students are able to adopt and use them. For that, we will show examples of DSLs that were created during the courses.
### 2017-05-19, 10:00, EDIT EA
Speaker
Alejandro Russo
Title
Securing Lazy Programs
Abstract
Information-Flow Control libraries exist in the functional programming language Haskell as embedded domain-specific languages. One distinctive feature of Haskell is its lazy evaluation strategy. In 2013, Buiras and Russo showed an attack on the LIO library where lazy evaluation is exploited to leak information via the internal timing covert channel. This covert channel arises from the mere presence of concurrency and shared resources. LIO removes leaks via this covert channel only for public shared resources which can be invoked by the library, e.g., references. With lazy evaluation in place, variables introduced by let-bindings and function application---which cannot be controlled by LIO---become shared resources, and forcing their evaluation affects threads’ timing behavior. A naïve way to avoid such leaks is to simply force the evaluation of variables which might be shared among threads. Unfortunately, this solution goes against one of the core purposes of lazy evaluation: avoiding the evaluation of unneeded expressions! More importantly, it is not always possible to evaluate expressions to their denoted value---some of them could denote infinite structures. Instead, this work presents a novel approach to explicitly control, at the programming-language level, the sharing introduced by lazy evaluation. We design a new primitive called lazyDup which lazily duplicates unevaluated expressions as well as any sub-expressions which they might depend on. We provide a formalization of the MAC security library and lazyDup in the form of a concurrent call-by-need calculus with sharing and mutable references. We support our formal results with mechanized proofs in Agda. Although we focus on MAC, our results are also applicable to other IFC libraries like LIO and HLIO.
This talk is based on a joint work with Marco Vassena and Joachim Breitner (it would be presented in the IEEE Computer Security Symposium in August).
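The toy program below (an illustration written for this summary, not code from the paper; lazyDup itself is not shown) demonstrates the sharing the work is about: the forked thread and the main thread force the same let-bound thunk, so whichever forces it second finishes noticeably faster, which is exactly the kind of timing difference a concurrent attacker can observe.

```haskell
import Control.Concurrent (forkIO, threadDelay)

expensive :: Integer
expensive = sum [1 .. 10000000]   -- one shared, unevaluated thunk

main :: IO ()
main = do
  let x = expensive
  _ <- forkIO (print x)  -- may force the shared thunk ...
  print x                -- ... changing how long this line takes
  threadDelay 100000     -- give the forked thread time to finish printing
```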
### 2017-05-12, 10:00, EDIT 8103
Speaker
Peter Thanisch and Jyrki Nummenmaa
Title
How can we enhance type safety for data expressions?
Abstract
Information systems, such as database systems, statistics packages, and spreadsheet programs, allow the user to extract aggregated data, specified by expressions containing statistical functions, typically in a declarative language and typically with little more type information than can be gleaned from primitive type declarations. Often, subject-matter experts are aware of additional relevant type information, but this information is ignored by the information system. Consequently, “type-safe” expressions can produce nonsensical answers, which could have been avoided, had type-checking utilized that additional type information. We have identified a small set of metadata items, along with type inference rules, that can be used to provide an enhanced notion of type safety. This allows us to perform static checking on data expressions in order to identify potential type-safety issues in advance of the expression being evaluated against the data. To illustrate our approach, we have implemented rule-based software that performs this enhanced type-checking on SQL queries that contain complex arithmetic expressions, including statistical aggregation functions.
What we have NOT done is to develop a more general-purpose solution where this kind of type inference is incorporated into a suitable functional programming language.
### 2017-05-05, 10:00, EDIT 8103
Speaker
Johan Jeuring
Title
An extensible strategy language for describing cognitive skills
Abstract
Students learn cognitive skills for solving problems in many domains. During the last decade, we have developed a strategy language: a domain-specific language for expressing cognitive skills and problem solving procedures. A strategy for a cognitive skill can be used to give feedback, hints, and worked-out solutions to a student interacting in a learning environment, an intelligent tutoring system, or a serious game. We frequently extend the strategy language to capture new patterns that occur when describing strategies. Examples of such extensions include a preference combinator, to express that one strategy is preferred above another strategy, and a combinator for describing that any initial part of a strategy is also accepted.
In this talk I will discuss the fundamental requirements for our strategy language, and show how we can extend the strategy language while preserving these requirements. We present a precise, trace-based semantics for a minimal strategy language. With this semantics, we study the algebraic properties of the strategy language. We describe several language extensions, specify new combinators and their algebraic properties, and discuss the implication of their introduction for the strategy language and its requirements. The language extensions are illustrated by giving actual examples from learning environments, intelligent tutoring systems, and serious games.
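To make the idea of a strategy language concrete, here is a minimal hypothetical sketch (not the authors' actual language): strategies are built from rules with sequencing, choice and preference combinators, and denote sets of traces of rule applications.

```haskell
data Strategy r
  = Rule   r                          -- apply a single rule
  | Seq    (Strategy r) (Strategy r)  -- first one strategy, then the other
  | Choice (Strategy r) (Strategy r)  -- either alternative
  | Prefer (Strategy r) (Strategy r)  -- like Choice, but the left is preferred

-- All complete traces of a strategy; preference only affects the order
-- in which alternatives are listed.
traces :: Strategy r -> [[r]]
traces (Rule r)     = [[r]]
traces (Seq s t)    = [ xs ++ ys | xs <- traces s, ys <- traces t ]
traces (Choice s t) = traces s ++ traces t
traces (Prefer s t) = traces s ++ traces t
```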
Slides
ExtensibleStrategy.pdf
### 2017-04-28, 10:00, EDIT 8103
Speaker
Sven-Bodo Scholz, Heriot-Watt University
Title
A Lambda-Calculus for Transfinite Arrays - Towards Unifying Streams and Arrays
Abstract
Arrays and streams seem to be fundamentally at odds: arrays require their size to be known in advance, streams are dynamically growing; arrays offer direct access to all of their elements, streams provide direct access to the front elements only; the list goes on. Despite these differences, many computational problems at some point require shifting from one paradigm to the other. The driving forces for such a shift can be manifold. Typically, it is a shift in the task requirements at hand or it is motivated by efficiency considerations. In this talk, we present a basis for a unified framework for dealing with arrays and streams alike. We introduce an applied lambda calculus whose fundamental data structure is a new form of array, named "Transfinite Array". Transfinite arrays maintain properties from finite array calculi yet incorporate support for infinite arrays. As it turns out, it is even possible to maintain recursive specifications as they are typically found in a traditional stream / list-based setting.
### 2017-04-21, 10:00, EDIT EF
Speaker
Maximilian Algehed
Title
A fresh look at Dependency Core Calculus
Abstract
Dependency Core Calculus (DCC) is a calculus for statically tracking dependency in functional programs that has been around for many years. DCC is useful for studying multiple program analyses such as call tracking, binding time analysis, slicing, and information flow control.
In this talk, I will present a new implementation of DCC in Haskell with a focus on applying information-flow control. The talk will present three novel contributions that result from this implementation: (i) the implementation itself, (ii) how to safely add monad transformers to DCC to model effects, and (iii) a new, alternative formulation of DCC that generalises some of the ideas from the original paper.
This talk is based on a joint work-in-progress with Alejandro Russo.
### 2017-04-07, 10:00, EDIT 8103
Speaker
Jurriaan Hage
Title
Customized Type Error Message in GHC
Abstract
The focus of this final presentation during my sabbatical at Chalmers will be showing what we have accomplished in the context of the Glasgow Haskell Compiler for the customization of type errors. In this work we employ type-level programming to implement our customizations, taking advantage of a number of recent additions to the compiler. A distinct advantage of this approach is that code and diagnosis all live inside Haskell, and we can employ all methods of abstraction provided to us by the language. This is very recent work; in fact, the paper was written while I was visiting Chalmers.
### 2017-03-31, 10:00, EDIT 3364
Speaker
Alejandro Russo
Title
Building Secure Embedded Systems Using Language-based Techniques
Abstract
Low-level languages, such as C/C++, are widely used in industry to develop embedded systems. Programming these systems involves internal details of the computer, such as the use of memory pointers and manual handling of dynamic memory. The history of software development shows us that among the most common and difficult-to-correct errors is the presence of dangling pointers, that is, pointers that refer to invalid memory locations or to memory that is no longer in use. When embedded systems are also concurrent, there are race-condition problems as well. Wouldn't it be beneficial for programmers if the compiler could help them avoid such problems (and thus build more secure systems)?
Rust is a language recently developed by Mozilla for building systems that require high performance and a small execution environment (runtime). Part of the Firefox browser is already implemented in this language. Rust applies principles from programming-language research and implements a type system capable of detecting the possibility of introducing invalid pointers, and it handles memory allocation automatically without a garbage collector. In this talk, I will discuss the principles and philosophy behind Rust and how these could be used to build secure systems.
### 2017-03-17, 10:00, EDIT 3364
Speaker
Alejandro Russo
Title
Choose your own adventure: Multi-Faceted Value Semantics, Secure Multi-Execution and many other possible endings
Abstract
There exist many programming-language techniques to certify that sensitive information is not leaked when manipulated by untrusted code. They range from static enforcements (like type systems) to dynamic ones (like execution monitors). Even when possible leaks are detected, such techniques raise false alarms---an aspect that is endemic to almost any enforcement. Secure Multi-Execution (SME) is a technique which allows the number of false alarms to be reduced practically to zero! The price to pay is executing programs many times, one for each security level (e.g., public, secret, top secret, etc.). In the presence of many security levels, SME might be nonviable. As an alternative, researchers propose the technique of Multi-Faceted Value semantics (MF), where each piece of data has as many "views" (values) as security levels. MF runs instructions once as much as possible, while simultaneously computing on the different views. The exception to this rule are branches: MF repeats the execution of a branch if the branch to be taken (i.e., the then- or else-branch) depends on the view being considered. Although MF has been referred to as "an optimization of SME", researchers have recently shown that the two provide different security guarantees. There is still no final judgment about which method is better (in terms of reducing false alarms) or better performing (fewer multi-executions) if both methods are to offer the same security guarantees.
Inspired by the comparison of SME and MF, we will present a calculus which, depending on how it is interpreted, runs programs under SME, MF, or a combination of both! Importantly, the type signature of all the interpreters is the same---something that gives us hope that these two approaches could be unified. We have implemented our ideas as an EDSL in Haskell, where the code (and types) help us to discover a common root between both approaches. This talk will also show some adventurous ideas where we got stuck---we hope that the FP crowd can provide us with some enlightenment on how to proceed.
This talk is based on a work-in-progress with Thomas Schmitz, Cormac Flanagan, Nataliia Beilova, and Tamara Rezk.
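A minimal two-level sketch of the faceted-value idea follows (illustrative only; real MF semantics handles arbitrary security lattices and re-executes branches per view, none of which is modelled here).

```haskell
-- One view per security level: what a public observer sees, and the real,
-- secret value.
data Faceted a = Faceted { publicView :: a, secretView :: a }

instance Functor Faceted where
  fmap f (Faceted p s) = Faceted (f p) (f s)

instance Applicative Faceted where
  pure x = Faceted x x
  Faceted fp fs <*> Faceted p s = Faceted (fp p) (fs s)

-- A secret boolean whose public facet is a harmless default.
secretBool :: Bool -> Faceted Bool
secretBool b = Faceted False b
```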
### 2017-03-10, 10:00, EDIT 8103
Speaker
Jurriaan Hage
Title
Domain-Specific Type Error Diagnosis in Haskell
Abstract
A challenge in type error diagnosis for general-purpose languages suitable for embedding domain-specific languages is to have errors explained in terms of the domain, not the underlying encoding into the host language. In this talk I discuss the work we did on the Helium compiler back in the previous decade, and recent results for Haskell beyond Haskell 98.
### 2017-03-03, 10:00, EDIT 8103
Speaker
João Pizani
Title
Hardware circuit description and verification in Agda
Abstract
There is a tradition of DSLs for hardware modelling inspired by functional programming (with a considerable effort originating in Chalmers). In our work, we build upon this tradition, using dependent types to ensure that circuits have some well-formedness conditions satisfied by construction. The dependently-typed host of our DSLs also allows proofs of circuit behaviour to be written in the same language (and checked by the same compiler) as the circuit model itself.
In this talk we present two Agda EDSLs for circuit description, one (Pi-Ware) with a low-level, architectural view of circuits, and another (Lambda1) with syntax and types more closely resembling the familiar λ-calculus. We comment on what restrictions are imposed upon these languages to ensure that they only contain terms with a "reasonably direct" mapping to hardware, and give some examples of how to model and verify circuits in each.
### 2017-02-24, 10:00, EDIT 8103
Speaker
Ralf Lämmel
Title
Abstract
In this talk, I would like to sketch my upcoming textbook on software languages http://www.softlang.org/book while putting on the hat of a Haskell programmer. Overall, the book addresses many issues of software language engineering and metaprogramming: internal and external DSLs, object-program representation, parsing, template processing, pretty printing, interpretation, compilation, type checking, software analysis, software transformation, rewriting, attribute grammars, partial evaluation, program generation, abstract interpretation, concrete object syntax, and a few more. Haskell plays a major role in the book in that it is used for the implementation of all kinds of language processors, even though some other programming languages (Python, Java, and Prolog) and domain-specific languages (e.g., for syntax definition) are leveraged as well. I hope the talk will be interactive and will help me to finish the material, and possibly give the audience some ideas about FP- and software-language-related education and knowledge management.
Slides
http://softlang.uni-koblenz.de/book/slides/HsSleMp.pdf
### 2017-02-17, 10:00, EDIT 8103
Speaker
Carl Seger
Title
From Abstract Specification to Detailed Implementation: How to Get Accelerator Based Code Right
Abstract
In the last decade, the rapid growth in single-threaded performance of computers has drastically slowed down. One approach to speeding up a single-threaded application is to use custom hardware for part of it. Unfortunately, fully custom hardware is extremely expensive to develop, becomes obsolete quickly, and is very inflexible. Field Programmable Gate Arrays (FPGAs) have emerged as a good compromise between the generality of CPUs and the performance of custom hardware. As FPGAs have started to be more tightly integrated with CPUs, it has become feasible to achieve several orders of magnitude speedup on specific single-threaded applications, and such accelerators are starting to be widely deployed. A very real challenge for these mixed software/hardware systems is how to ensure the correctness of the solution. Compounding this is the difficulty of debugging custom hardware (be it FPGA or ASIC based). In this talk I will discuss an approach, inspired by earlier work on effective hardware design, that I am pursuing during my visit here. More specifically, I am advocating an integrated design and verification approach so that when a final implementation is reached, so is a formal proof of its correctness. The talk very much describes work in progress and I hope for many questions!
Slides
CarlSeger20170217.pdf
### 2017-02-10, 10:00, EDIT 8103
Speaker
Jurriaan Hage
Title
Type Error Diagnosis in Helium (part I: constraints and heuristics)
Abstract
This is an introductory talk on how we do type error diagnosis in the Helium Haskell compiler. We consider a constraint-based formulation of the type system, and how that enables us to build type graphs, together with heuristics defined on those type graphs that look for the causes of mistakes.
Slides
hagetedtalk.pdf
### 2017-02-03, 10:00, EDIT hörsal EE
Speaker
Carl Seger
Title
Formal Hardware Verification: Theory meets Practice
Abstract
Modern microprocessors contain several billion transistors and implement very complex algorithms. Ensuring that such designs are correct is a challenging and expensive task. Traditional simulation is rapidly becoming inadequate for this task, and even multi-million dollar emulation hardware is losing the race against complexity. Formal hardware verification marries discrete mathematics and theoretical computer science concepts with effective heuristics to prove that some part of a chip implements exactly the function provided as a specification. In this talk I will briefly discuss the major building blocks used in formal hardware verification, discuss current industrial use of the technology and focus on the nuts-and-bolts involved in applying formal verification in industry.
Slides
FVinPractice.pdf
### 2017-01-27, 10:00, EDIT 8103
Speaker
Thorsten Berger
Title
Virtual Platform - Flexible and Truly Incremental Engineering of Highly Configurable Systems
Abstract
Customisation is a general trend. An ever-increasing diversity of hardware and many new application scenarios for software, such as cyber-physical systems, demand software systems that support variable stakeholder requirements. Two opposing approaches are commonly used to create variants: software clone&own and software configuration. Organizations often start with the former, which is cheap, flexible, and supports quick innovation, but leads to redundancies and does not scale. The latter scales by establishing an integrated platform that shares software assets between variants, but requires high up-front investments or risky migration processes. So, could we have a robust approach that allows an easy transition or even combines the benefits of both approaches?
In the talk, I will discuss the virtual platform, a VR-funded project that aims at building a method and tool suite supporting truly incremental development of variant-rich systems, exploiting a spectrum between the two opposing approaches. I will present the main underlying idea, important concepts of variant-rich systems, and the project plan. I will also present some of my previous research in the area of variability modeling, model synthesis, and static program analysis that is relevant for the project.
### 2017-01-20, 11:00, EDIT 3364
Speaker
Maximilian Algehed
Title
SpecCheck: Monadic specification and property based testing of network protocol
Abstract
We present SpecCheck, a simple monadic specification language for property-based testing of concurrent systems. We take an approach to specification that makes use of elementary results from session types, which allows one specification to be used as a test oracle and as a mockup for both client and server code. We also show how we can discover inconsistent SpecCheck specifications.
Slides
MaxSlides.pdf
### 2016-12-09, 13:00, EDIT 8103
Speaker
Patrik Jansson
Title
Sequential decision problems, dependent types and generic solutions,
Abstract
We present a computer-checked generic implementation for solving finite-horizon sequential decision problems. This is a wide class of problems, including inter-temporal optimizations, knapsack, optimal bracketing, scheduling, etc. The implementation can handle time-step dependent control and state spaces, and monadic representations of uncertainty (such as stochastic, non-deterministic, fuzzy, or combinations thereof). This level of genericity is achievable in a programming language with dependent types (we have used both Idris and Agda). Dependent types are also the means that allow us to obtain a formalization and computer-checked proof of the central component of our implementation: Bellman's principle of optimality and the associated backwards induction algorithm. The formalization clarifies certain aspects of backwards induction and, by making explicit notions such as viability and reachability, can serve as a starting point for a theory of controllability of monadic dynamical systems, commonly encountered in, e.g., climate impact research.
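A drastically simplified, non-dependently-typed sketch of backwards induction follows (fixed state and control types, deterministic transition, made-up reward), just to fix ideas; the generic, monadic, computer-checked version is what the talk is about.

```haskell
type State   = Int
type Control = Int

controls :: [Control]
controls = [0, 1]

reward :: State -> Control -> Double
reward x y = fromIntegral (x + y)      -- made-up reward function

step :: State -> Control -> State
step x y = x + y                       -- made-up deterministic transition

-- Optimal value over a horizon of n remaining steps, by backwards induction.
value :: Int -> State -> Double
value 0 _ = 0
value n x = maximum [ reward x y + value (n - 1) (step x y) | y <- controls ]
```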
Slides
20161209_140028.jpg
Paper
### 2016-12-02, 11:00, EDIT 3364
Speaker
Shayan Najd
Title
Trees That Grow
Abstract
I'll talk about my recent work with Simon Peyton Jones on extensible data types in Haskell. We study the notion of extensibility in functional data types, as a new approach to the problem of decorating abstract syntax trees with additional information. We observed the need for such extensibility while redesigning the data types representing Haskell abstract syntax inside Glasgow Haskell Compiler (GHC). Specifically, we describe a programming idiom that exploits type-level functions to allow a particular form of extensibility. The approach scales to support existentials and generalised algebraic data types, and we can use pattern synonyms to make it quite convenient in practice.
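The core of the idiom, roughly as in the paper (the phases and the extension choices below are made up for illustration): every constructor gets an extension field whose type is selected per compiler phase by a type family.

```haskell
{-# LANGUAGE TypeFamilies, EmptyDataDecls #-}

data Parsed        -- compiler phases used as type-level indices
data Typechecked

data Exp x
  = Var (XVar x) String
  | App (XApp x) (Exp x) (Exp x)

type family XVar x
type family XApp x

-- After parsing there is nothing extra to store; after type checking,
-- an application node records (a stand-in for) the type of its result.
type instance XVar Parsed      = ()
type instance XApp Parsed      = ()
type instance XVar Typechecked = ()
type instance XApp Typechecked = String
```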
Slides
FPDec2016.pdf
Paper
### 2010-06-18, 15h15 - 16h15, EDIT room
Speaker
Karin Kejizer
Title
Raising Feldspar to a Higher Level
Abstract
Feldspar [1] is a domain-specific language (DSL) for digital signal processing (DSP), initially designed with baseband telecommunication systems in mind. Feldspar is a strongly typed, high-level functional language which during compilation is translated to hardware-specific C code via an intermediate core language. Because it is a high-level functional language, it raises the level of abstraction for programmers compared to C.
During my thesis, I focused on developing high-level combinators for DSP functions (like the finite impulse response filter) and designing and implementing a matrix module for writing algebraic descriptions of transforms (like the discrete Fourier transform) in Feldspar. During this presentation I will show my results.
[1] http://feldspar.inf.elte.hu/feldspar/
### 2010-06-14, 14h15 - 15h15, room ED
Speaker
Peter A. Jonsson, Luleå University of Technology
Title
Supercompilation and Call-By-Value
Abstract
Clear and concise programs written in functional programming languages often suffer from poor performance compared to their counterparts written in imperative languages such as Fortran or C. Supercompilation is a program transformation that can mitigate many of these problems by removing intermediate structures and performing program specialization.
Previous deforestation and supercompilation algorithms may introduce accidental termination when applied to call-by-value programs. This hides looping bugs from the programmer, and changes the behavior of a program depending on whether it is optimized or not. We present a supercompilation algorithm for a higher-order call-by-value language and we prove that the algorithm both terminates and preserves termination properties. We also present a stronger variant of the same supercompilation algorithm that removes more intermediate structures as well as give an overview of our current work.
### 2010-04-16, 15h15 - 16h15, EDIT room
Speaker
Krasimir Angelov
Title
GF as functional-logic language
Abstract
Grammatical Framework (GF) is a domain-specific language for describing natural language grammars. Although GF is a language with a very specific application domain, it is also Turing complete and can be used to solve nontrivial tasks in surprising ways. The design of the language has a lot in common with other functional and logic programming languages, most notably Agda, Haskell, Curry and Lambda Prolog. Curry is an interesting language which combines the functional and logical paradigms in a neat way, but from a type-theoretic point of view it is still not satisfactory because it doesn't exploit the Curry-Howard isomorphism. Thanks to the dependent types of GF, which relate it to Agda, and thanks to some other specifics of the type system, GF can also be classified as a functional-logic language, but the two paradigms in GF are combined in the expected way. The execution model of GF also has a lot in common with Lambda Prolog, but to cover the full semantics of GF the virtual machine has to work in two modes: narrowing and residuation. The residuation mode is something that can be seen in the virtual machine of Curry but not in Lambda Prolog.
In this seminar I will talk about my work in progress on a virtual machine for running GF applications. Not everything is in place yet, but the pieces are starting to stick together. As a demo I will show how to solve the classical n-queens problem in GF. From the logic programming paradigm I use backtracking, and from the functional paradigm I use efficient deterministic computations. From a type-theoretical point of view, I define the type of all n-queens puzzles.
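For reference, here is the classical n-queens problem in plain Haskell; this is just the familiar list-based solution, not the GF version shown in the demo.

```haskell
-- All placements of n queens, one per row, given as column numbers.
queens :: Int -> [[Int]]
queens n = place n
  where
    place 0 = [[]]
    place k = [ q : qs | qs <- place (k - 1), q <- [1 .. n], safe q qs ]
    -- q is safe if it shares no column or diagonal with the earlier queens.
    safe q qs = and [ q /= c && abs (q - c) /= d | (d, c) <- zip [1 ..] qs ]
```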
### 2010-03-26, 15h15 - 16h00, EDIT room
Speaker
Anders Mörtberg
Title
Implementing a functional language with type inference
Abstract
I have implemented a small functional programming language with type inference as a project in the Advanced Functional Programming course. The type system has unit, product and sum types. The language also supports general recursion and uses call-by-value as evaluation strategy.
### 2010-03-17, 15h15 - 16h15, Room EC
Speaker
Don Syme, Microsoft Research Cambridge
Title
Parallel and Asynchronous Programming with F#
Abstract
F# is a succinct and expressive typed functional programming language in the context of a modern, applied software development environment (.NET), and Microsoft will be supporting F# as a first class language in Visual Studio 2010. F# makes three primary contributions to parallel, asynchronous and reactive programming in the context of a VM-based platform such as .NET:
(a) functional programming greatly reduces the amount of explicit mutation used by the programmer for many programming tasks
(b) F# includes a powerful “async” construct for compositional reactive and parallel computations, including both parallel I/O and CPU computations, and
(c) “async” enables the definition and execution of lightweight agents without an adjusted threading model on the virtual machine.
In this talk, we’ll look at F# in general, including some general coding, and take a deeper look at each of these contributions and why they matter.
Don Syme is a Principal Researcher at Microsoft Research, Cambridge, and is responsible for the design of the F# programming language. He has contributed extensively to the design of the .NET platform through .NET and C# generics and has a background in verification and machine-checked proof. His research focuses on the technical aspects of programming language design and implementation needed to make functional languages that are simpler to use, interoperate well with other languages and incorporate aspects of object-oriented, asynchronous and parallel programming. His PhD is from the University of Cambridge, 1998.
### 2010-03-05, 15h15 - 16h15, EDIT room
Speaker
Jean-Philippe Bernardy
Title
Testing Polymorphic Properties
Abstract
(Joint work with Koen Claessen and Patrik Jansson.)
How can one test a polymorphic property? The issue is that testing can only be performed on specific monomorphic instances, whereas parametrically polymorphic functions are expected to work for any type.
We present a schema for constructing a monomorphic instance for a polymorphic property, such that correctness of that single instance implies correctness for all other instances. We also give a formal definition of the class of polymorphic properties the schema can be used for. Compared with the standard method of testing such properties, our schema leads to a significant reduction of necessary test cases.
The talk is a preview of Jean-Philippe's talk at ESOP
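By way of background (a standard QuickCheck usage example, not the paper's schema itself): in practice one tests a polymorphic property at some fixed monomorphic type, and the question the work answers is which single instance suffices. Below, the instance is simply the common ad-hoc choice of Int lists.

```haskell
import Test.QuickCheck

-- A polymorphic law, tested here at the arbitrary choice of Int lists.
prop_revApp :: [Int] -> [Int] -> Bool
prop_revApp xs ys = reverse (xs ++ ys) == reverse ys ++ reverse xs

main :: IO ()
main = quickCheck prop_revApp
```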
### 2010-02-05, 15h15 - 16h15, room EC
Speaker
Satnam Singh, Microsoft Research
Title
An Embedded DSL for the Data Parallel Programming of Heterogeneous Systems
Abstract
This presentation introduces an embedded domain specific language (DSL) for data-parallel programming which can target GPUs, SIMD instructions on x64 multicore processors and FPGA circuits. This system is implemented as a library of data-parallel arrays and data-parallel operations with implementations in C++ and for .NET languages like C#, VB.NET and F#. We show how a carefully selected set of constraints allows us to generate efficient code or circuits for very different kinds of targets. Finally, we compare our JIT-based approach with other techniques, e.g. CUDA, which is an off-line approach, as well as with stencil computations. The ability to compile the same data-parallel description, at an appropriate level of abstraction, to different computational elements brings us one step closer to finding models of computation for heterogeneous multicore systems.
Satnam Singh is a senior researcher at Microsoft Research. His work includes research on synthesis of FPGA circuits, hardware design using synchronous languages, heterogenous manycore architectures and concurrent and parallel programming in Haskell.
### 2010-01-29, 15h15 - 16h15, room EC
Speaker
Lennart Augustsson, Standard Chartered Bank
Title
O, Partial Evaluator, Where Art Thou?
Abstract
Partial evaluation is now a quite old idea, and it has been implemented many times. Partial evaluation is also very widely applicable; almost every problem in computing could use it. But widely used partial evaluators are nowhere to be seen. Why is that? In this talk I will give some examples of where I have used partial evaluation during 15 years of using Haskell commercially. I will give my wish list for a partial evaluator I could actually use (instead of rewriting it over and over), and also contrast this with what is done in the research community.
Lennart Augustsson is a person who turns research in programming languages into practice. He currently works in the Modelling and Analytics Group at Standard Chartered Bank, designing domain-specific programming languages used for quantitative modelling. He is known as the author of the Cayenne programming language and of one of the first Haskell compilers, HBC, and as a coauthor of Bluespec, a hardware description language. He is also a three-time winner of the Obfuscated C Code Contest.
### 2010-01-22, 15h15 - 16h15, EDIT room
Speaker
Alejandro Russo
Title
Towards a taint mode in Python via a library
Abstract
Vulnerabilities in web applications present threats to online systems. SQL injection and cross-site scripting attacks (XSS) are among the most common vulnerabilities found nowadays. Popular scripting languages used by the web community (e.g. Ruby, PHP and Python) provide a taint mode approach to discover such vulnerabilities. Taint modes are usually implemented using static analysis or execution monitors. In the latter case, Ruby, PHP, and Python interpreters are modified to include a taint mode.
In this talk, we present a library in Python that provides a taint mode. In this manner, it is possible to avoid any modification of the Python interpreter. The concepts of decorators and overloading, as well as the presence of references, make our solution lightweight and particularly neat.
This talk is based on a work-in-progress with Juan Jose Conti.
### 2009-12-11, 15h15 - 16h15, EDIT room
Speaker
Emil Axelsson
Title
Feldspar: Functional Embedded Language for DSP and Parallelism
Abstract
I will present Feldspar, which is a domain-specific language mainly targeting digital signal processing applications. Feldspar is embedded in Haskell. The deeply embedded core language is low-level enough to enable efficient compilation, yet flexible enough to support more high-level interfaces implemented through shallow combinators. One such interface is a vector library that provides a data type quite similar to Haskell's list type. I will also present our current effort to implement more domain-specific combinators based on the vector library.
### 2009-12-04, 15h00 - 16h00, EDIT room
Speaker
Jean-Philippe Bernardy
Title
Parametricity and Dependent Types
Abstract
(Joint work with Patrik Jansson and Ross Paterson)
In the paper "Theorems for Free", Phil Wadler has famously shown how to translate the types of System F to logical relations. This translation effectively makes the types available as theorems, that you can use to reason in a higher logic framework.
I will show how the idea generalizes to dependent types. In that case, the source language is powerful enough to represent logical relations, so it serves as target as well. One striking property is that we not only get theorems "for free" but their proofs as well.
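A familiar System F example of the translation (standard textbook material, included only for orientation): the free theorem for any f :: [a] -> [a] says that f commutes with map g; below it is spelled out for the particular choices f = reverse and g = show.

```haskell
-- Free theorem for f :: [a] -> [a], instantiated with f = reverse, g = show:
--   map g . f  ==  f . map g
prop_freeTheorem :: [Int] -> Bool
prop_freeTheorem xs = map show (reverse xs) == reverse (map show xs)
```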
### 2009-11-27, 15h00 - 16h00, EDIT room
Speaker
Koen Claessen
Title
FUI - A Functional GUI Library
Abstract
In this talk, I present ongoing work (well, results after a few afternoons of hacking) on FUI, a new GUI library for Haskell. There have been many designs and implementations of GUI libraries for Haskell over the years. Some are functional/declarative in spirit (such as Fudgets, Gadgets, and solutions based on Functional Reactive Programming (FRP)), others give an imperative (IO) interface (such as TkGofer/Yahu, Haggis, WxHaskell, and Gtk2Hs). The aim of FUI is to combine the good ideas from Fudgets, Gadgets, and FRP (the declarative feel), and to avoid the more cumbersome features (loop combinators, mixing layout with functionality). The main motivation for FUI is that it is so much fun hacking on these things, but a secondary motivation is that I tried to create a GUI library that would be possible to use in a first-year course on Haskell. I will also discuss the requirements on the library if it is going to be used for teaching, and whether I feel they have been met or not.
### 2009-11-20, 15h00 - 16h00, EDIT room
Speaker
Jonas Duregård
Title
AGATA – Automatic generators for QuickCheck (Master project presentation)
Abstract
In this talk I present Agata, the subject of my master's thesis. Agata automates the process of writing test data generators in Haskell. Agata contributes a novel definition of size. This solves the scalability issues of some QuickCheck generators, where the absolute size of test cases grows exponentially with the size bound provided by QuickCheck. Agata also introduces distribution strategies. By applying a distribution strategy to a generator, the tester can imbue it with specific properties without changing the code of the generator. Agata is also implemented as an extension to the BNF Converter tool, enabling automatic generation of syntactically correct test data for any context-free language.
### 2009-11-11, 13h15 - 14h30, EDIT room (joint FP + ProgLog talk)
Speaker
Ross Paterson
Title
Applicative functors from left Kan extensions
Abstract
Applicative functors define an interface that is more general, and correspondingly weaker, than that of monads. First used in parser libraries, they are now seeing a wide range of applications. Left Kan extensions, familiar to functional programmers as a kind of existential type, are a source of a range of applicative functors that are not also monads. There is also a generalized form involving arrows, which appears in the permutation parsers of Baars, Loh and Swierstra.
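The Haskell encoding alluded to is the usual existential one; this is only a sketch, and the applicative structure built on top of it is what the talk develops.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- The left Kan extension of h along g, as an existential type.
data Lan g h a = forall b. Lan (g b -> a) (h b)

instance Functor (Lan g h) where
  fmap k (Lan f x) = Lan (k . f) x
```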
### 2009-11-06, 15h00 - 16h00, EDIT room
Speaker
Joel Svensson
Title
Abstract
Obsidian is an embedded domain specific language for GPGPU (General Purpose computations on Graphics Processing Units) programming. The goal of Obsidian is to simplify the implementation of efficient GPU kernels (the building blocks of larger algorithms on the GPU) by raising the level of abstraction. Still, the programmer should be in control of some of the low-level concepts that are important for performance. This presentation will outline the current version of Obsidian, showing some of the strengths and weaknesses of our approach.
### 2009-10-30, 13h00 - 14h00, room 5453
Speaker
Title
Case Studies in Cryptol (Master project presentation)
Abstract
Digital signal processing has become part of many electronic applications these days; in parallel with this, domain-specific languages are prevailing among practitioners of many engineering domains. Cryptol is a domain-specific language for cryptography; it was designed by Galois and has been used by the NSA since the time of its invention. Cryptol is crafted for cryptographic specifications; however, this project aims at evaluating Cryptol as a domain-specific language for digital signal processing algorithms. It also includes a proposal for extensions to Cryptol to make it applicable to DSP algorithms. This thesis derives its motivation from the DSL-for-DSP research that the Functional Programming group has started with Ericsson. The project involved implementing a set of DSP algorithms in Cryptol and analyzing the applicability of Cryptol based on the experiences in this new domain. This study shows that Cryptol is too specific to cryptographic algorithms, and that it is not possible to specify, run or verify many DSP algorithms in Cryptol. However, there is a special class of DSP algorithms that are far easier to code in Cryptol than in C or Java. The thin overlap with DSP algorithms is natural, since DSLs are customized for a certain problem area. The evaluation of Cryptol in the DSP domain revealed that some enhancements are necessary to make it applicable to DSP algorithms.
### 2009-10-23, 15h00 - 16h00, EDIT room
Plan: Discover FP activity at Chalmers
This workshop aims at uncovering FP-related projects at our department. We would also like to come up with a HCAR entry. If you are doing something related to FP, using FP in your project or in teaching, please speak up and add yourself to the HCAR sketch area.
### 2009-10-16, 15h00 - 16h00, room 5453
Speaker
Alejandro Russo
Title
Integrity policies via a library in Haskell
Abstract
Protecting critical data has become increasingly important for computing systems. Language-based technologies have been developed over the years to achieve that purpose, leading to special-purpose languages that, in most cases, guarantee confidentiality of data, i.e. that secrets are not leaked. The impact of these languages in practice has been rather limited. Nevertheless, rather than producing a new language from scratch, it has been shown that confidentiality can also be guaranteed via a library, which makes this technology more likely to be adopted. Following this line, this work extends a library that provides confidentiality in order to consider another important aspect of security: integrity of data. Integrity of data seeks to prevent programs from destroying information, accidentally or maliciously. We describe how to enforce two different kinds of integrity policies: access control (e.g. certain files should not be written) and data-invariant-like predicates (e.g. sanitization of data), as well as some ideas about how to enforce information-flow integrity policies (e.g. user data does not corrupt or affect critical data pieces). A library that combines confidentiality and integrity policies has not been previously considered. Augmenting the set of enforceable policies further improves the chances of the library being attractive to programmers. To evaluate our ideas, we propose the implementation of a case study.
This talk is based on a joint work-in-progress with Albert Diserholt.
### 2009-10-09, 15h00 - 16h00, EDIT room
Speaker
Wouter Swierstra
Title
Chalk: a language for architecture design
Abstract
Before I leave Sweden, I'd like to give one last talk about the work I've done here over the last months. Koen, Mary, and I have been working on designing a language to give high-level descriptions of computer architectures. One of the things that makes architecture design so hard is the many different design constraints: how much will an extra cache speed up this design? How much power will the cache use? Architects lack the language to explore and evaluate such design decisions early in the design process – this is precisely the problem we address. In this talk, I'll give a quick outline of our ideas, discuss some of the non-standard interpretations and analyses we have implemented, and talk you through some of our motivating examples.
### 2009-10-02, 15h00 - 16h00, EDIT room
Plan: Discussion about FP in education at the department
We will have a discussion about the role of functional programming in the education of students at the department. Students are very welcome to attend the discussion as we would like to hear opinions from their side.
### 2009-09-25, 15h00 - 16h00, room 6128 (later changed to EDIT room)
Plan: Discussion about some of the ICFP talks
Each person that attended the ICFP should select 2-3 talks and be able to quickly summarise them and tell his/her opinion about the talk/paper.
### 2009-06-12, 15h00 - 16h00, EDIT room
Plan: Talk of 40 minutes
Speaker
Wouter Swierstra
Title
Data types à la carte
Abstract
In this talk, I will describe a technique in Haskell for assembling both data types and functions from isolated individual components. Time permitting, I'd like to show how the same technology can be used to combine free monads and, as a result, structure Haskell's monolithic IO monad.
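For readers who have not seen the paper, here is the technique in miniature, following the paper's presentation (names abbreviated): signatures are separate functors, combined with a coproduct and tied together by a fixpoint, and functions are assembled as algebras for a generic fold.

```haskell
{-# LANGUAGE TypeOperators #-}

-- A fixpoint of a functor, and the coproduct of two signature functors.
newtype Fix f = In (f (Fix f))

data (f :+: g) e = Inl (f e) | Inr (g e)

-- Two independent signature pieces ...
data Val e = Val Int
data Add e = Add e e

instance Functor Val where fmap _ (Val n) = Val n
instance Functor Add where fmap h (Add x y) = Add (h x) (h y)

instance (Functor f, Functor g) => Functor (f :+: g) where
  fmap h (Inl x) = Inl (fmap h x)
  fmap h (Inr y) = Inr (fmap h y)

-- ... assembled into one expression type, evaluated by a generic fold.
type Expr = Fix (Val :+: Add)

foldFix :: Functor f => (f a -> a) -> Fix f -> a
foldFix alg (In t) = alg (fmap (foldFix alg) t)

eval :: Expr -> Int
eval = foldFix alg
  where
    alg (Inl (Val n))   = n
    alg (Inr (Add x y)) = x + y
```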
### 2009-06-04, 10h00 - 11h00, room 5453
Plan: Talk of 40 minutes
Speaker
Jean-Philippe Bernardy
Title
Testing Polymorphic Functions: The Initial View
Abstract
Many useful functions are parametrically polymorphic. Testing can only be applied to monomorphic instances of polymorphic functions. If we verify correctness of one instance, what do we know of all the other instances? We present a schema for constructing a monomorphic instance such that correctness of that single instance implies correctness for all other instances.
This is joint work with Koen Claessen and Patrik Jansson.
### 2009-05-29, 15h00 - 16h00, room 5453
Plan: Talk of 40 minutes
Speaker
Jan Rochel
Title
Very Lazy Evaluation
Abstract
In the talk I will present my diploma thesis, which introduces a new execution model for non-strict, functional programming languages. It relies on a new concept in which function arguments are handled more lazily than in existing graph-reduction based models. This leads to a new type of abstract machine that promises very efficient program execution. A proof-of-concept implementation demonstrates the applicability of the approach.
### 2009-05-15, 15h00 - 16h00, EDIT room
Plan: Talk of 40 minutes
Speaker
Andres Löh
Title
Generic programming with fixed points for mutually recursive datatypes
Abstract
Many datatype-generic functions need access to the recursive positions in the structure of the datatype, and therefore adopt a fixed point view on datatypes. Examples include variants of fold that traverse the data following the recursive structure, or the Zipper data structure that enables navigation along the recursive positions. However, Hindley-Milner-inspired type systems with algebraic datatypes make it difficult to express fixed points for anything but regular datatypes. Many real-life examples such as abstract syntax trees are in fact systems of mutually recursive datatypes and therefore excluded. In the talk, I will describe a technique that allows a fixed-point view for systems of mutually recursive datatypes. I will also present examples, most prominently the Zipper, that demonstrate that the approach is widely applicable.
### 2009-05-08, 15h00 - 16h00, EDIT room
Plan: Talk of 40 minutes
Speaker
Neil Jones
Title
Termination Analysis of the Untyped lambda-Calculus
Abstract
An algorithm is developed that, given an untyped lambda-expression, can certify that its call-by-value evaluation will terminate. It works by an extension of the "size-change principle" earlier applied to first-order programs. The algorithm is sound (and proven so) but not complete: some lambda-expressions may in fact terminate under call-by-value evaluation, but not be recognised as terminating.
The intensional power of size-change termination is reasonably high: It certifies as terminating all primitive recursive programs, and many interesting and useful general recursive algorithms, including programs with mutual recursion and parameter exchanges, and Colson's "minimum" algorithm. Further, the approach allows free use of the Y combinator, and so can identify as terminating a substantial subset of PCF.
Based on a paper "Termination Analysis of the Untyped lambda-Calculus" by Neil D. Jones and Nina Bohr, DIKU, University of Copenhagen and IT University of Copenhagen
### 2009-04-24, 15h00 - 16h00, room 5453
Plan: A talk by Nick, 40 minutes including questions
Speaker
Nick Smallbone
Title
Abstract
I will talk about QuickSpec, a tool developed by Koen, John and me that generates algebraic specifications from functional programs. QuickSpec uses no theorem proving, only simple techniques based on testing.
### 2009-04-03, 15h00 - 16h00, room 5453
Plan: A talk by Ulf, 40 minutes including questions
Speaker
Ulf Norell
Title
A Dependently Typed Database Binding
Abstract
Interfacing to a database is a common task in systems development and any self respecting programming language comes with libraries for doing this. However, these libraries are in most cases untyped--queries sent to the database are either strings or some simple structured representation that has no relation to the actual layout of the database being queried. An exception is the HaskellDB library which supports well-typed interaction with the database, but requires a tool to be run at compile time to generate Haskell types corresponding to the database tables.
I will show a simple dependently typed database library for Agda, based on ideas by Nicolas Oury and Wouter Swierstra (2008)[1], where queries are guaranteed to be well-typed with respect to the database being queried and which doesn't require the database to be queried at compile time. It also supports changing the database layout during the execution of a program.
[1] Oury and Swierstra, The Power of Pi. ICFP 2008.
### 2009-03-27, 15h00 - 16h00, room 5453
Plan: A talk by John, 40 minutes including questions; Discussion about the website
Speaker
John Hughes
Title
eqc_fsm: State machine specifications in QuickCheck
Abstract
I'll talk about eqc_fsm, a new module developed for Quviq's QuickCheck as part of the ProTest project. It is designed for expressing state machine diagrams, along the lines of UML state charts, as QuickCheck specifications. Compared to QuickCheck's previous state machine support, eqc_fsm permits more concise and less error-prone specifications, and can offer help in analysing the distribution of the generated random tests, and indeed, automating the assignment of suitable weights to achieve a "best" distribution of test cases--whatever that means, which is an interesting question in its own right.
### 2009-03-20, 15h00 - 16h00, room 5453
Speaker
Michał Pałka
Title
Finding Race Conditions in Erlang with QuickCheck and PULSE
Abstract
I will present a user-level scheduler which, in combination with QuickCheck and a trace visualizer, can be used to debug concurrent Erlang programs, giving a high chance of finding concurrency bugs in unit testing.
### 2009-03-13, 15h00 - 16h00, EDIT room
Plan: Two talks of 20 minutes
#### Talk 1
Speaker
Jean-Philippe Bernardy
Title
Purely Functional Incremental Parsing
Abstract
I'll give a summary of the techniques used in the parsing library developed for the Yi editor. Then I propose to detail how both incremental and lazy parsing can be combined in a combinator library.
#### Talk 2
Speaker
Patrik Jansson
Title
Generic Libraries in C++ with Concepts from High-Level Domain Descriptions in Haskell - A Domain-Specific Library for Computational Vulnerability Assessment
Abstract
See ComputationalVulnerabilityAssessment
### 2009-03-06, 15h00 - 16h00, EDIT room
Plan: One talk of 60 minutes
Speaker
Wouter Swierstra
Title
|
## Geometry: Common Core (15th Edition)
Find the first few sums and look for a pattern: 1 term: 2 = 1 + 1; 2 terms: 6 = 4 + 2; 3 terms: 12 = 9 + 3; 4 terms: 20 = 16 + 4. Each sum is the product of the number of terms and the number of terms plus 1, so the $n^{th}$ sum is $n(n+1)$ and the sum of the first 100 terms is $100\times101=10100$.
|
## Abstract
Measures of provider success are the centerpiece of quality improvement and pay-for-performance programs around the globe. In most nations, these measures are derived from administrative records, paper charts and consumer surveys; increasingly, electronic patient record systems are also being used. We use the term ‘e-QMs’ to describe quality measures that are based on data found within electronic health records and other related health information technology (HIT). We offer a framework or typology for e-QMs and describe opportunities and impediments associated with the transition from old to new data sources. If public and private systems of care are to effectively use HIT to support and evaluate health-care system quality and safety, the quality measurement field must embrace new paradigms and strategically address a series of technical, conceptual and practical challenges.
## Measuring quality in the era of electronic health records
Few issues will be more important to the future of health care worldwide than the adoption and implementation of electronic health records (EHRs). Breakthrough technologies require shifts in paradigms. This shift has not yet occurred on a wide scale with regard to the use of EHRs for measuring and monitoring clinician and system performance.
EHRs and related health information technology (HIT), such as computerized provider order entry (CPOE), clinical decision support systems (CDS) and web-based personal health records (PHRs), have been cited as the foundation of the bridge across the health-care system's ‘quality chasm’ [1, 2]. In the USA, the Obama Administration's so-called HITECH program represents one of the largest HIT infrastructure investments a nation has ever made. It also represents the electronic framework that will underpin US health reform which will unfold over the next few years [3]. Moreover, similar scenarios are playing out in most other nations, where HIT systems are being built, expanded or inter-connected across isolated providers [4, 5].
The goal of all this is to improve workflow and documentation to help make care safer and more efficient. As this is accomplished, though, we also need to be on the lookout for potential inadvertent harm that can be caused by suboptimal use of HIT systems [6]. Of considerable relevance to the quality improvement (QI) community is the benefit of EHRs as a tool for evaluating and improving clinician and system performance. Central to such QI applications are the measures that serve as the markers of success or failure. Moreover, these same measures are now frequently linked to provider financial incentives as part of ‘pay-for-performance’ (P4P) programs [7].
We use the term electronic quality measures, or e-QMs, to describe EHR-based performance measures. The implications of the shift toward electronic measures of quality are manifold. New ways of thinking are needed at this time, so that we can integrate QI and pay-for-performance incentive programs into rapidly evolving EHR systems as they are being developed rather than after the fact, which is a much more difficult task.
The objectives of this article are: (i) to offer a typology of electronic measures of quality and safety; and (ii) to identify key challenges, opportunities and future priorities related to the development and application of these measures. Our work emerged from a project involving the development and application of e-QMs in the ambulatory care environment [8, 9]. While our discussion draws heavily on experiences in the USA, we also obtained related information from other nations with well-developed primary and secondary care EHRs. Therefore, our insights should be relevant to high-to-middle income nations around the globe where similar issues are being confronted.
Though e-QMs are expected to become the norm in the USA in the not-too-distant future, that is not now the case. While over half of American doctor offices have some component of an EHR system, it is estimated that <25% of US ambulatory care is substantially documented by EHRs and fewer than 10% of these systems are both comprehensive and interoperable across providers [10]. Today, for most US providers, performance can be documented only with computerized insurance claims, abstractions of a limited sample of paper charts or surveys of a small subset of consumers. In other nations where EHRs are more widely established within primary care, EHR-based quality reporting is not uncommon. But given the limited cross-provider interoperability and lack of full integration of advanced HIT components, the e-QM capabilities in most other nations is still in early stages of development as well [11].
## A framework for electronic quality measures
We developed a typology to help understand and shape the evolving field of electronic quality measurement. Our premise is that new ways of thinking about quality measurement are needed at this time, given the many unique functions and capabilities of EHRs and other HIT. Our e-QM framework (which is an extension of one we presented previously in a web-based report [8]) is meant to complement rather than displace other accepted quality constructs. We hope this classification exercise will support discourse between informaticians and EHR developers, QI specialists, clinicians, managers and policymakers.
The first three of our proposed five electronic quality measurement categories represent measures of care achieved by the provider. They reflect increasing degrees of reliance on unique EHR/HIT attributes. For that reason, we label them ‘Level 1 to Level 3’ e-QMs. The fourth and fifth measures focus on success of the EHR/HIT system implementation and unintended consequences, respectively. Our typology is outlined in Table 1 and discussed below.
Table 1
HIT/EHR-based e-quality measures (e-QMs) of provider performance
1. Translated (Level-1 e-QM): Measures based on traditional data collection approaches such as administrative (e.g. insurance claims) records which have been translated for use with EHR-based platforms (example: process of care measure such as percentage of in-scope patients receiving a lab test or mammography screening)
2. HIT assisted (Level-2 e-QM): Measures that, while possible with non-EHR data sources, would not be operationally feasible without the assistance of HIT (examples: blood pressure or BMI information on 100% of target population)
3. HIT enabled (Level-3 e-QM): Innovative measures that are not possible without comprehensive HIT (examples: percentage of abnormal test results read and acted upon by a clinician within 24 h, percentage of in-scope patient encounters where decision support modules were appropriately applied)
4. HIT system management: Measures used primarily to manage and evaluate HIT systems (examples: percentage of all prescriptions ordered via e-prescribing, percentage of EHR ‘front sections’ that are updated periodically). These can be considered a measure of care structure
5. e-iatrogenesis: Measures of patient harm caused at least in part by the HIT system (example: the percentage of patients for whom the wrong drug was ordered due to an error intrinsic to an e-prescribing system; percentage of critical lab findings that did not result in patient notification)
HIT, health information technology; EHR, electronic health record. This table is based in part on a previous publication [8].
### Translated e-QMs
These are measures adapted from existing ‘traditional’ (i.e. not EHR supported) measurement sets. In the USA, ‘traditional’ refers to widely adopted measures such as those of the National Committee for Quality Assurance (NCQA) [12], which were originally designed for non-EHR data sources, such as paper medical charts and insurance claims. Outside the USA, claims data are less common, but many settings do rely on data derived from non-EHR electronic data systems such as primary care clinic or hospital outpatient or inpatient administrative reporting systems.
Examples of translated e-QMs would include:
• The number of patients with diabetes seeing an ophthalmologist for an eye-care exam during a year.
• The number of children receiving appropriate immunizations.
• The number of women who have received a mammography within a given time frame.
From an informatics perspective, we consider these e-QMs to be the most basic ‘Level 1’. That is, given their origin, such translated measures do not take advantage of any unique capabilities of EHRs. Level-1 e-QMs are ubiquitous in American integrated delivery systems because such measures are generally required by external agencies (e.g. federal, state and private payers) for all providers whether or not they have EHRs. Issues that surround the comparability of EHR-based measures with those derived traditionally (e.g. from claims or paper chart abstracts) have been the subject of a number of studies [13, 14].
### HIT-assisted e-QMs
These are measures that, while not conceptually limited to EHR systems, would not be operationally feasible in settings without advanced HIT platforms.
Examples of HIT-assisted measures include:
• clinical outcomes of 100% of a patient panel based on physiologic measures such as body mass index or blood pressure;
• results of history and physical, or laboratory tests;
• percentage of prescriptions for a specific drug category (e.g. a β-blocker) written by the provider within a target time frame.
These measures are ‘Level 2’ because, while they do not take full advantage of the advanced properties of HIT systems, they do make use of the EHR's ability to capture and integrate medical information not generally found in administrative sources. While most organizations could theoretically replicate these measures using manual abstraction of paper-based charts, it would be impractical to do so.
A recent article involving a consensus panel of US experts suggested a number of HIT-assisted e-QMs [15]. Level-2 HIT-assisted e-QMs are now common in wired US integrated delivery systems and in many international settings. In these environments, automated chart abstraction has supplanted manual chart reviews for QI measure reporting. For example, most measures used by the UK's high profile ‘Quality Outcome Framework’ fall into this Level-2 e-QM category [16]. That is, they are based largely on information found within a GP practice site's electronic patient record, though they could be derived from paper charts if needed.
### HIT-enabled e-QMs
These are innovative e-measures that would not be possible outside of the HIT context. These are ‘Level-3’ measures because they embrace one or more unique HIT capabilities not readily found in paper charts. One such dimension is the ‘time-stamp’ capability of EHRs and CPOE systems where it is possible to know when the clinician received, viewed or acted on a specific item of information. Another unique capability of HIT is full information integration (sometimes termed interoperability) across all providers in a region. Other advanced HIT functions that are ripe for Level-3 e-QM development are those that go beyond charting. These would include: order entry systems (e.g. e-prescribing) and decision support systems (that recommend a course of action to the clinician); networked biometric devices (that capture the patient's real-time physiologic function) and interactive web-based ‘personal health record’ systems (that capture the consumers preferences, functional status or satisfaction with care).
Some examples of HIT-enabled Level-3 e-QMs that could only be implemented in digitally supported settings include:
• Percent of clinicians reviewing new items in a chart (e.g. an out-of-range lab result) within x hours, and acting on critical information (e.g. contacting the patient or ordering a follow-up test) within y hours of the instant they became aware of the event;
• Percentage of consultants who electronically shared specific pieces of information with the patient's primary care doctor within x days of seeing the patient;
• Percentage of in-scope patients for whom real-time clinical decision support modules have been appropriately applied (and ‘alerts’ were not ignored);
• Percentage of in-scope consumers who used their home monitoring device to digitally notify the provider of a reportable event (e.g. high blood sugar or high blood pressure) and the percentage of instances where HIT-mediated follow-up occurred.
These types of Level-3 e-QMs represent the frontier of quality measurement development, but they are not yet common; in large part, because few providers' EHRs are fully interoperable with all others in a region and few have fully integrated decision support, order entry and linked consumer ‘web portals.’ Moreover, even when these digital attributes are in place, there have been limited attempts by government and other regulators to move mandated performance measures beyond Level 1 or 2 (i.e. those feasible with administrative records, paper charts or stand-alone basic electronic charts). The reason for this is the least common denominator concern; until most providers have advanced HIT capabilities, the less advanced sites would be left out of the reporting systems.
Among advanced integrated delivery systems in the USA, such indicators are being piloted, but they are not yet used on a wide scale. Outside of the USA, even in nations with advanced quality performance indicator frameworks and high EHR penetration (e.g. the UK and Sweden, Denmark, New Zealand), Level-3 measures that embrace the unique capabilities of HIT systems have not been widely adopted [11, 16, 17].
### HIT-system-management e-QMs
These measures are needed to support the deployment, management, evaluation and improvement of HIT systems. They can be used by organizations implementing the EHR or an external body wishing to evaluate a provider's HIT system.
Given that many believe that an HIT system is an essential requirement for a health delivery system's infrastructure in the twenty-first century, these can be considered a type of ‘structure’ measure, following the Donabedian structure/process/outcome framework.
Examples of HIT-system-management measures include:
• EHR item-completion rates;
• attainment of community interoperability targets;
• presence of various computerized decision support algorithms;
• percentage of real-time CDS alerts bypassed by clinicians;
• percentage of patient-allergy lists reviewed by patients (e.g. via a web portal system where patients can view their own record) annually;
• proportion of key variables lost to measurement due to use of non-standard free-text EHR notation which cannot be accessed.
As part of the US Federal Department of Health and Human Services' EHR expansion initiative, the Center for Medicare/Medicaid Services (CMS) and Office of the National Coordinator for HIT (ONC) are offering incentive payments linked to what they term the ‘meaningful use’ of EHRs [3, 18, 19]. The sums are large: up to (US)$44 000 per doctor and more than (US)$2 million per hospital over 3 years (2011–13). During the first stage of this program (started in 2011), providers must achieve success on measures that largely fall into this systems management category; for example, maintaining certain key sections in their EHR and being able to exchange information with other providers [19].
### ‘e-iatrogenesis’ e-QMs
These measures document patient harm caused at least in part by the application of HIT [6]. Specifically, such measures assess the degree to which unanticipated quality and safety problems arise through the use of HIT, whether the error is of human (provider or patient), technological or organizational/system origin. These problems may involve errors of commission or omission [20–22].
Examples of e-iatrogenesis targeted e-QMs include:
• Percentage of patients receiving incorrect medications or procedures because of HIT-related errors in the CPOE/e-prescribing process;
• Number of sentinel event computerized decision support errors (either of ‘omission’ or ‘commission’) impacting patients;
• Number of critical provider or patient e-notifications not received resulting in patient harm.
e-iatrogenesis and other types of unanticipated harms and errors linked to HIT are expected to increase significantly in coming years. Both informatics and quality experts agree that this domain should be a high priority of safety and QI programs. Identifying measures to support these monitoring programs will be essential [23, 24].
## Digital opportunities: digital challenges
As the availability of electronic data expands exponentially, the technical, conceptual and practical challenges that must be confronted by developers of health-care performance measures will increase as well.
Clinician usability is the key to successful EHR adoption. Clinicians are not willing to click through endless menus or ‘pop up alerts’, preferring free-text notation as is the norm for paper charts. While this situation is well known to EHR designers [24, 25], the implications for those developing e-QMs are both profound and not well understood.
The current key source of performance measures in the USA is insurance claims data, which are a byproduct of the financial transaction process. Accordingly, measures based on claims items with high relevance for the payment process are more likely to be reliable and valid [26–28]. In the HIT age, this dynamic will change; rather than payment, the accuracy axiom will revolve around the structured care workflow. Quality measures based on the unstructured fields within the EHR, e.g. free-text charting, will be far less accurate [29, 30].
As practices implement EHRs, they must also be careful not to take steps backwards as e-QM-based metrics replace traditional quality reporting. For example, unless structured electronic templates are fully (and accurately) accepted by clinicians using the EHRs, it may be necessary to manually abstract the EHR's free-text sections to assess whether process standards of care were met or certain patient outcomes were achieved. This problem will continue until required quality indicators are entered (either manually or via electronic streaming) in their appropriate structured location within the record or until vastly improved natural language processing systems—capable of ‘mining’ the clinician's free-text chart entries—are implemented.
There appears to be unrealistic expectations among clinical, management payer and government stakeholders regarding the ease with which EHRs can be used to derive quality measures. A widely held, but erroneous, belief is that once an EHR is implemented by a health-care delivery organization, just hitting the ‘F9 key’ will lead to an immediate and effortless flow of provider performance reports. Moving forward, these expectations must be more effectively managed.
More and better data should be the hallmark of well designed and implemented EHR systems. But the ‘better’ part of this ‘more + better’ equation cannot be taken for granted: automated data are not necessarily improved data. Serious unanticipated data quality problems have been reported in HIT systems that were poorly conceived or executed [8]. For example, keeping the EHR's ‘front page’ section (e.g. problems, allergies and medication lists), current is a major challenge. Without strict protocols for verifying, organizing, updating and purging information found in EHRs, these repositories can become unstructured ‘electronic attics’ of uncertain accuracy that will be of limited value to QI analysts.
The HIT-supported care process may also introduce measurement challenges that are not well understood. Clinicians increasingly will follow real-time ‘care pathways’, where the EHR mediates much of the doctor's diagnostic and therapeutic actions as well as their interaction with the patient. Furthermore, the care process will be self-documenting based on this EHR-directed workflow. From a measurement perspective, there will be a high degree of circularity between these ‘hard-wired’ digitally mediated actions and the e-QM criteria that are used to measure ideal performance. For example, if the doctor clicks on ‘yes, I discussed all relevant issues with the patient’ or ‘yes, the patient should receive the recommended lab or drug order set’, then by default, the patient will meet the ideal standard of care as defined by an e-QM that is derived from the EHR data warehouse. This sequence of events suggests that first, we need to be cautious about the accuracy of these ‘clicks’ (e.g. did the doctor really discuss smoking with the patient or just click through the required screen) and second, in cases where the click translates to a specific diagnostic or therapeutic action (e.g. the ordering of a standard battery of tests), it will be essential that the embedded algorithms be fully evidence-based. Otherwise, we risk promulgating ineffective care. The HIT, QI and comparative effectiveness research communities will need to partner closely in addressing this important potential problem.
HIT will change the time dimension of the QI process. Rather than annual or quarterly retrospective reports, new models of quality reporting will likely involve ‘dashboard’ indicator panels that can be monitored by both individual clinicians and managers in real time. We will need to learn more about the implications of this shift in temporal perspective on all key parties as well as the HIT and QI systems that are the source of these indicators.
The EHR and HIT systems that are rapidly diffusing around the globe hold great promise for improving health-care effectiveness. But today, even in advanced settings that are well supported by EHRs, quality performance measures and mindsets remain heavily anchored to constructs and contexts previously applied using paper charts and administrative records. As health-care delivery systems become digital, so too must the infrastructure we use to measure the performance of these systems. The electronic quality measures we develop for this purpose will need to be as innovative as the HIT systems themselves.
## Funding
This work was funded with grants from the Commonwealth Fund (grant number 20060073) and the Robert Wood Johnson Foundation (grant number 053729). Seed grant funding was also received from the Agency for Healthcare Quality and Research (grant number 275-JHU-01).
## Acknowledgements
We are most grateful to the many organizations which provided us input regarding the application of EHRs to quality measurement within their organizations. We are especially grateful to members of a consortium of US integrated delivery systems that work with us on this project. These included: Billings Clinic, Billings, Montana; Geisinger Health System, Danville, Pennsylvania; HealthPartners, Minneapolis, Minnesota; Kaiser Permanente of the Northwest, Portland, Oregon; and Park Nicollet Health Services, Minneapolis, Minnesota. The assistance of Elizabeth Kind, Toni Kfuri, MD, and Jessica Holzer is gratefully acknowledged. The key informants at the integrated delivery systems who provided us valuable input included: Patricia Coon, MD, Mark Selna, MD, Leif Solberg, MD, Andrew Nelson, Lynne Dancha, Dean Sittig, PhD, Nancy Jarvis, MD and Shannon Neale, MD.
## References
1. Corrigan J, McNeill D. Building organization capacity: a cornerstone of health system reform. Health Affairs 2009;28:w205–15 (27 January 2009, date last accessed).
2. Walker JM, Carayon P. From task to process: the case for changing health information technology to improve health care. Health Affairs 2009;28:467–77.
3. Blumenthal D. Stimulating the adoption of health information technology. N Engl J Med 2009;360:1477–9.
4. Ovretveit J, Scott T, Rundall T, et al. Improving quality through effective implementation of information technology in healthcare. Int J Qual Health Care 2007;19:259–66.
5. Robertson A, Cresswell K, Takian A, et al. Implementation and adoption of nationwide electronic health records in secondary care in England; qualitative analysis of interim results from a prospective national evaluation. BMJ 2010;341:c4564 (online first).
6. Weiner JP, Kfuri T, Chan K, et al. ‘e-Iatrogenesis’: the most critical unintended consequence of CPOE and other HIT. J Am Med Inform Assoc 2007;14:387–8.
7. Institute of Medicine. Rewarding Provider Performance: Aligning Incentives in Medicare. Washington, DC, 2007.
8. Fowles JB, Weiner JP, Chan K, et al. Performance Measures Using Electronic Health Records: Five Case Studies. New York, NY: The Commonwealth Fund, 2008.
9. Chan K, Weiner J. EHR based quality indicators for ambulatory care: findings from a review of the literature. Narrative Report to AHRQ, 2006.
10. Hsiao C, Hing E, Socey TC, et al. Electronic medical record / electronic health record systems of office-based physicians: United States, 2009 and preliminary 2010 state estimates. National Center for Health Statistics, 2010. http://www.cdc.gov/nchs/data/hestat/emr_ehr_09/emr_ehr_09.pdf (25 November 2011, date last accessed).
11. Gray B, Bowden T, Johansen I, et al. Electronic Health Records: An International Perspective on ‘Meaningful Use’. New York: The Commonwealth Fund, Issues Brief, November 2011 (analysis of Sweden, Denmark, New Zealand). http://www.commonwealthfund.org/Publications/Issue-Briefs/2011/Nov/Electronic-Health-Records-International-Use.aspx (25 November 2011, date last accessed).
12. National Committee for Quality Assurance. The Healthcare Effectiveness Data and Information Set (HEDIS) program. 2012. http://www.ncqa.org/tabid/59/Default.aspx (25 November 2011, date last accessed).
13. Tang PC, Ralston M, Arrigotti MF, et al. Comparison of methodologies for calculating quality measures based on administrative data versus clinical data from an electronic health record system: implications for performance measures. J Am Med Inform Assoc 2007;14:10–5.
14. Linder JA, Kaleba EO, Kmetick KS. Using electronic health records to measure physician performance for acute conditions in primary care: empirical evaluation of the community-acquired pneumonia clinical quality measure set. Med Care 2009;47:208–16.
15. Kern LM, Dhopeshwarkar R, Barron Y, et al. Measuring the effects of health information technology on quality of care: a novel set of proposed metrics for electronic quality reporting. Jt Comm J Qual Patient Saf 2009;35:359–69.
16. Indicators for Quality Improvement. United Kingdom National Health Service Information Centre for Health and Social Care, 2010. https://mqi.ic.nhs.uk/ (25 November 2011, date last accessed).
17. Socialstyrelsen and Swedish Association of Local Authorities and Regions. Quality and Efficiency in Swedish Health Care: Regional Comparisons. 2009.
18. Hogan S, Kissam S. Measuring meaningful use. Health Affairs 2010;29:601–6.
19. Blumenthal D, Tavenner M. The ‘meaningful use’ regulation for Electronic Health Records. N Engl J Med 2010;363:501–4.
20. Ash J, Sittig DF, Poon EG, et al. The extent and importance of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc 2007;14:415–23.
21. Metzger J, Welebob E, Bates DW, et al. Mixed results in the safety performance of computerized physician order entry. Health Affairs 2010;29:655–63.
22. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293:1197–203.
23. Bloomrosen M, Starren J, Lorenzi NM, et al. Anticipating and addressing the unintended consequences of health IT and policy: a report from the AMIA 2009 Health Policy Meeting. J Am Med Inform Assoc 2011;18:82–90.
24. Institute of Medicine. Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC, 2011.
25. Berner ES, Moss J. Informatics challenges for the impending patient information explosion. J Am Med Inform Assoc 2005;12:614–7.
26. Weiner JP, Powe N, Steinwachs D, et al. Applying insurance claims data to assess quality of care: a compilation of potential indicators. Qual Rev Bull 1990;16:424–38.
27. Fowles J, Lawthers A, Weiner J, et al. Agreement between physicians' office records and medicare claims data. Health Care Financ Rev 1995;16:189–99.
28. Iezzoni LI. Ann Intern Med 1997;127:666–74.
29. Bridges to Excellence Consortia. Measuring What Matters: Electronically, Automatically, (Somewhat) Painlessly. 2009. http://www.rwjf.org/files/research/measuringwhatmatters2009.pdf (23 March 2011, date last accessed).
30. Chan K, Fowles J, Weiner J. Electronic health records and the reliability and validity of quality measures: a review of the literature. Med Care Res Rev 2010;67:503–27.
|
My Math Forum Solving using radians
Algebra Pre-Algebra and Basic Algebra Math Forum
December 29th, 2012, 02:50 PM #1 Newbie Joined: Dec 2012 Posts: 1 Thanks: 0 Solving using radians My question is: cot(theta) + sqrt(3) = 0. I am required to find all solutions and answer in radians in term of pi. I have no idea where to begin. Any help would be appreciated!
December 29th, 2012, 04:26 PM #2
Math Team
Joined: Dec 2006
From: Lexington, MA
Posts: 3,267
Thanks: 407
Hello, bentroy743!
Quote:
$\text{Find all solutions (in radians): }\:\cot\theta\,+\,\sqrt{3}\:=\:0$
$\text{W\!e have: }\:\cot\theta \:=\:-\frac{\sqrt{3}}{1} \:=\:\frac{adj}{opp}$
$\text{Cotangent is negative in Quadrants 2 and 4.}$
$\begin{Bmatrix}\text{In Quadrant 2: } & \theta &=& \frac{5\pi}{6}\,+\,2\pi n \\ \\ \\
\text{In Quadrant 4: } & \theta &=& -\frac{\pi}{6}\,+\,2\pi n \end{Bmatrix}\;\text{ for any integer }n.$
|
# Differential geometry
The subject of Differential geometry[1][2] is the application of calculus to the problem of describing curves, surfaces, and volumes in two and three dimensions, as well as analogous structures in higher dimensions. The most immediate application of differential geometry in geophysics is the representation of curves and surfaces in geologic models. Seismic ray tracing is an application as well, as are the other geometric aspects of solutions to partial differential equations, such as field lines and flow lines in problems from potential theory and from fluid dynamics.
"A curve ${\displaystyle {\boldsymbol {x}}(\sigma )}$, where ${\displaystyle \sigma }$ is a running parameter along the curve and ${\displaystyle ({\hat {{\boldsymbol {e}}_{1}}},{\hat {{\boldsymbol {e}}_{2}}},{\hat {{\boldsymbol {e}}_{3}}})}$ are the unit basis vectors in the respective ${\displaystyle (x_{1},x_{2},x_{3})}$ directions."
# The Theory of Curves
This exposition begins in 3 dimensions, but all of the results presented here generalize immediately to arbitrary dimensions.
## Definition of a Curve
In ${\displaystyle \mathbb {R} ^{3}}$ (the three dimensional Euclidean space) we define a curve as a vector valued function ${\displaystyle {\boldsymbol {x}}}$, where
${\displaystyle {\boldsymbol {x}}(\sigma )\equiv \left(x_{1}(\sigma ),x_{2}(\sigma ),x_{3}(\sigma )\right)}$.
Here ${\displaystyle \sigma }$ is a running variable along the curve. You can think of this as being like the marks on a tape measure. We assume for now that ${\displaystyle {\boldsymbol {x}}\in C^{3}}$, meaning that the components of ${\displaystyle {\boldsymbol {x}}}$ are continuous and at least 3-times differentiable with respect to ${\displaystyle \sigma }$.
"In the limit as ${\displaystyle h\rightarrow 0}$, the vector line element defined by the difference ${\displaystyle {\boldsymbol {x}}(\sigma +h)-{\boldsymbol {x}}}$ tends to the tangent vector at the point ${\displaystyle {\boldsymbol {x}}(\sigma )}$. "
## The tangent vector to a curve
We can define the tangent to each point ${\displaystyle {\boldsymbol {x}}(\sigma )}$ as the first derivative of ${\displaystyle {\boldsymbol {x}}(\sigma )}$ with respect to ${\displaystyle \sigma }$
${\displaystyle {\boldsymbol {t}}\equiv {\boldsymbol {x}}^{\prime }(\sigma )\equiv \lim _{h\rightarrow 0}{\frac {{\boldsymbol {x}}(\sigma +h)-{\boldsymbol {x}}(\sigma )}{h}}=\left({\frac {dx_{1}}{d\sigma }},{\frac {dx_{2}}{d\sigma }},{\frac {dx_{3}}{d\sigma }}\right)\equiv {\frac {d{\boldsymbol {x}}}{d\sigma }}\equiv {\frac {dx_{i}}{d\sigma }}}$ where ${\displaystyle i=1,2,3}$ in ${\displaystyle \mathbb {R} ^{3}}$.
The last form with subscript ${\displaystyle i}$ is index notation.
The subtraction of the vector positions ${\displaystyle {\boldsymbol {x}}(\sigma +h)-{\boldsymbol {x}}(\sigma )}$ defines a "directed line segment" pointing from the point with the lesser value of ${\displaystyle \sigma }$ toward the point with the greater value of ${\displaystyle \sigma }$. As ${\displaystyle h\rightarrow 0}$, the vector
${\displaystyle {\frac {{\boldsymbol {x}}(\sigma +h)-{\boldsymbol {x}}(\sigma )}{h}}}$
tends to point in the direction tangent to the curve at the point ${\displaystyle {\boldsymbol {x}}(\sigma )}$.
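As a concrete (and entirely optional) illustration, the sketch below approximates the tangent vector of a parametric curve by a central difference. The example curve, the step size h, and the helper names are arbitrary choices for this demonstration, not anything prescribed by the text.

```python
# Minimal sketch: central-difference approximation of the tangent vector
# t(sigma) = dx/dsigma for a parametric curve x(sigma) in R^3.
# The curve chosen here is a hypothetical example, not one from the text.
import numpy as np

def curve(sigma):
    return np.array([np.cos(sigma), np.sin(sigma), 0.1 * sigma])

def tangent(sigma, h=1e-6):
    # (x(sigma + h) - x(sigma - h)) / (2h) tends to dx/dsigma as h -> 0
    return (curve(sigma + h) - curve(sigma - h)) / (2.0 * h)

print(tangent(0.0))  # approximately (0, 1, 0.1), the analytic derivative at sigma = 0
```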
## Arclength, the natural parameter of Curves
Suppose there is a coordinate ${\displaystyle s}$ such that the tangent vector is a unit vector
${\displaystyle {\dot {\boldsymbol {x}}}\equiv {\frac {d{\boldsymbol {x}}}{ds}}={\hat {t}}.}$ Here we use the "." to indicate differentiation with respect to this special coordinate ${\displaystyle s}$.
Alternatively, we may consider the running parameter ${\displaystyle \sigma }$ to be a function of this new parameter ${\displaystyle s}$ such that ${\displaystyle \sigma \equiv \sigma (s)}$. This means that we can write the unit tangent vector as
${\displaystyle {\hat {\boldsymbol {t}}}={\frac {d{\boldsymbol {x}}(\sigma (s))}{ds}}={\frac {d{\boldsymbol {x}}}{d\sigma }}{\frac {d\sigma }{ds}}\equiv {\boldsymbol {x}}^{\prime }{\dot {\sigma }}}$.
From our knowledge of vectors, we may also represent the unit tangent vector as the ratio of the vector with its magnitude
${\displaystyle {\hat {t}}\equiv {\frac {\boldsymbol {t}}{|{\boldsymbol {t}}|}}}$, implying that ${\displaystyle {\dot {\sigma }}\equiv {\frac {1}{|{\boldsymbol {x}}^{\prime }|}}\equiv {\frac {d\sigma }{ds}}}$ which further implies that ${\displaystyle \sigma (s)=\int _{s_{0}}^{s}{\frac {dl}{|{\boldsymbol {x}}^{\prime }(\sigma (l))|}}}$, where ${\displaystyle l}$ is a dummy variable of integration.
### But what is ${\displaystyle s}$?
We note that
${\displaystyle {\frac {d{\boldsymbol {x}}}{ds}}={\hat {\boldsymbol {t}}}}$ which implies that ${\displaystyle d{\boldsymbol {x}}={\hat {\boldsymbol {t}}}ds}$.
We can write formally
${\displaystyle d{\boldsymbol {x}}\cdot d{\boldsymbol {x}}={\hat {\boldsymbol {t}}}ds\cdot {\hat {\boldsymbol {t}}}ds={\hat {\boldsymbol {t}}}\cdot {\hat {\boldsymbol {t}}}(ds)^{2}=(ds)^{2}}$
Formally, we may also write
${\displaystyle d{\boldsymbol {x}}\cdot d{\boldsymbol {x}}=(dx_{1})^{2}+(dx_{2})^{2}+(dx_{3})^{2}=(ds)^{2}}$
meaning that ${\displaystyle ds\equiv {\sqrt {(dx_{1})^{2}+(dx_{2})^{2}+(dx_{3})^{2}}}}$ is differential arc length. Thus ${\displaystyle s}$ is arc length.
The arclength ${\displaystyle s}$ is called the natural parameter of differential geometry by some authors.
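To make the change of parameter from σ to arclength s concrete, here is a small numerical sketch, again using an arbitrary example curve, that accumulates arclength by integrating the speed |x′(σ)| with the trapezoidal rule.

```python
# Sketch: arclength s(sigma) as the integral of |x'(l)| dl from 0 to sigma,
# computed with the trapezoidal rule for a hypothetical example curve.
import numpy as np

def curve(sigma):
    return np.array([np.cos(sigma), np.sin(sigma), 0.1 * sigma])

def speed(sigma, h=1e-6):
    # |dx/dsigma| via a central difference
    return np.linalg.norm((curve(sigma + h) - curve(sigma - h)) / (2.0 * h))

sig = np.linspace(0.0, 2.0 * np.pi, 2001)
s = np.trapz([speed(v) for v in sig], sig)

# For this curve |x'| = sqrt(1 + 0.1**2) is constant, so s = 2*pi*sqrt(1.01).
print(s, 2.0 * np.pi * np.sqrt(1.01))
```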
"The unit tangent vector ${\displaystyle {\boldsymbol {\hat {t}}}}$ and the unit principal normal vector ${\displaystyle {\boldsymbol {\hat {p}}}}$ at the point ${\displaystyle {\boldsymbol {x}}(\sigma )}$. "
### The Principal Normal vector to a curve
We can define a second vector of interest in our understanding of the applications of calculus to curves. This is the principal normal vector.
We define the principal normal vector ${\displaystyle {\boldsymbol {p}}}$ as the first derivative with respect to ${\displaystyle s}$ of the unit tangent vector
${\displaystyle {\boldsymbol {p}}\equiv {\dot {\hat {\boldsymbol {t}}}}={\ddot {\boldsymbol {x}}}}$
### But where does ${\displaystyle {\boldsymbol {p}}}$ point?
We consider the dot product of the unit tangent vector with itself
${\displaystyle {\hat {\boldsymbol {t}}}\cdot {\hat {\boldsymbol {t}}}=1}$
If we differentiate both sides of the dot product with respect to ${\displaystyle s}$
${\displaystyle {\frac {d}{ds}}({\hat {\boldsymbol {t}}}\cdot {\hat {\boldsymbol {t}}})=2{\hat {\boldsymbol {t}}}\cdot {\dot {\hat {\boldsymbol {t}}}}=2{\hat {\boldsymbol {t}}}\cdot {\boldsymbol {p}}=0}$.
Thus, the principal normal vector ${\displaystyle {\boldsymbol {p}}}$ is orthogonal to the tangent vector ${\displaystyle {\hat {\boldsymbol {t}}}}$.
We can define a unit principal normal vector ${\displaystyle {\hat {\boldsymbol {p}}}}$ by dividing by its magnitude
${\displaystyle {\boldsymbol {\hat {p}}}\equiv {\frac {\boldsymbol {p}}{|{\boldsymbol {p}}|}}={\frac {\ddot {\boldsymbol {x}}}{|{\ddot {\boldsymbol {x}}}|}}}$.
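A quick numerical check (added here as an illustration) that the principal normal is orthogonal to the unit tangent; the circle used below anticipates the worked example in the next subsection, and the finite-difference step sizes are arbitrary.

```python
# Sketch: for a circle of radius rho parameterized by arclength s, check
# numerically that p = d(t_hat)/ds is orthogonal to the unit tangent t_hat.
import numpy as np

rho = 2.0

def x(s):
    return np.array([rho * np.cos(s / rho), rho * np.sin(s / rho), 0.0])

def t_hat(s, h=1e-5):
    return (x(s + h) - x(s - h)) / (2.0 * h)          # unit tangent dx/ds

def p(s, h=1e-5):
    return (t_hat(s + h) - t_hat(s - h)) / (2.0 * h)  # principal normal d(t_hat)/ds

s0 = 1.3
print(np.dot(t_hat(s0), p(s0)))   # ~ 0 up to finite-difference error
print(np.linalg.norm(p(s0)))      # ~ 1/rho = 0.5, anticipating the curvature result below
```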
"Here ${\displaystyle {\boldsymbol {x}}(s)}$ is a circle of radius ${\displaystyle \rho }$. It's unit tangent vector is ${\displaystyle {\boldsymbol {\dot {x}}}(s)}$ and its unit principal normal vector is ${\displaystyle \rho {\boldsymbol {\ddot {x}}}}$. Here, the angle ${\displaystyle \sigma =s/\rho }$ shows the relationship between angle ${\displaystyle \sigma }$ in radians and arclength ${\displaystyle s}$.
## Simple examples on a circle
A simple circle of radius ${\displaystyle \rho }$ in the ${\displaystyle (x_{1},x_{2})}$-plane provides a simple demonstration of the tangent and principal normal vectors.
A circle may be represented as
${\displaystyle {\boldsymbol {x}}(\sigma )=\left(\rho \cos(\sigma ),\rho \sin(\sigma ),0\right)}$
where ${\displaystyle \rho }$ is the (constant) radius of the circle and ${\displaystyle \sigma }$ is the angular coordinate. This expression shows position on the circle of radius ${\displaystyle \rho }$ as a vector which points from the center of the circle to the point at coordinates ${\displaystyle (\rho ,\sigma )}$.
### The tangent to a circle
We may differentiate the components of ${\displaystyle {\boldsymbol {x}}(\sigma )}$ to obtain the tangent vector
${\displaystyle {\boldsymbol {t}}(\sigma )={\boldsymbol {x}}^{\prime }(\sigma )=\left(-\rho \sin(\sigma ),\rho \cos(\sigma ),0\right)}$.
This expression describes a tangent vector pointing in the counter-clockwise direction on the circle, where ${\displaystyle \sigma }$ increases in the counter-clockwise direction.
The magnitude of the tangent vector ${\displaystyle |{\boldsymbol {t}}|}$ is the square root of the sum of the squares of the components of the tangent vector

${\displaystyle |{\boldsymbol {t}}|={\sqrt {\rho ^{2}\sin ^{2}(\sigma )+\rho ^{2}\cos ^{2}(\sigma )}}=\rho }$.
Thus, we may write the unit tangent vector as ${\displaystyle {\boldsymbol {\hat {t}}}={\frac {\boldsymbol {t}}{|{\boldsymbol {t}}|}}={\frac {\boldsymbol {t}}{\rho }}\equiv {\frac {d{\boldsymbol {x}}(\sigma (s))}{ds}}={\frac {d{\boldsymbol {x}}}{d\sigma }}{\frac {d\sigma }{ds}}}$.
Thus, for a circle
${\displaystyle {\frac {d\sigma }{ds}}={\frac {1}{\rho }}}$ which implies that ${\displaystyle d\sigma ={\frac {ds}{\rho }}}$
meaning that ${\displaystyle \sigma =\int _{s_{0}}^{s}{\frac {dl}{\rho }}}$. Here ${\displaystyle l}$ is a dummy variable of integration.
If we take ${\displaystyle \sigma \equiv 0}$ when ${\displaystyle s=s_{0}=0}$, then because ${\displaystyle \rho ={\mbox{const.}}}$ for a circle, we can relate ${\displaystyle \sigma }$ to arclength ${\displaystyle s}$ via ${\displaystyle \sigma =s/\rho .}$
### Arclength coordinates
We can rewrite the formulas for the circle and its tangent vector in terms of arclength ${\displaystyle s}$ using ${\displaystyle \sigma =s/\rho }$.
The formula for the circle becomes
${\displaystyle {\boldsymbol {x}}(s)=(\rho \cos(s/\rho ),\rho \sin(s/\rho ),0)}$
and the function describing its unit tangent vectors becomes
${\displaystyle {\boldsymbol {\hat {t}}}(s)={\boldsymbol {\dot {x}}}(s)=(-\sin(s/\rho ),\cos(s/\rho ),0)}$.
We may then take an additional derivative to obtain the principal normal vector (which is not a unit vector)
${\displaystyle {\boldsymbol {p}}={\boldsymbol {\ddot {x}}}(s)=(-(1/\rho )\cos(s/\rho ),-(1/\rho )\sin(s/\rho ),0)}$.
The magnitude of the principal normal vector is
${\displaystyle |{\boldsymbol {p}}|=|{\boldsymbol {\ddot {x}}}(s)|={\sqrt {(1/\rho ^{2})\cos ^{2}(s/\rho )+(1/\rho ^{2})\sin ^{2}(s/\rho )}}=1/\rho }$. We call ${\displaystyle 1/\rho }$ the curvature of the circle and ${\displaystyle \rho }$ the radius of curvature of the circle.
Thus the unit principal normal vector is
${\displaystyle {\boldsymbol {\hat {p}}}={\frac {{\boldsymbol {\ddot {x}}}(s)}{|{\boldsymbol {\ddot {x}}}(s)|}}=\rho {\boldsymbol {\ddot {x}}}=(-\cos(s/\rho ),-\sin(s/\rho ),0).}$
As we can see ${\displaystyle {\boldsymbol {x}}}$ points in a radial direction away from the center of the circle, ${\displaystyle {\boldsymbol {\dot {x}}}}$ points in the counter-clockwise tangent direction to the circle, and the principal normal vector ${\displaystyle \rho {\boldsymbol {\ddot {x}}}}$ points in the radial direction toward the center of the circle.
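The closed-form circle results above can be checked directly; the following sketch (an added illustration, with an arbitrary radius and sample point) evaluates them and confirms that the tangent is a unit vector, that |ẍ| = 1/ρ, and that the unit principal normal points back toward the center.

```python
# Sketch: evaluate the closed-form circle expressions above at a sample point
# and confirm the stated relationships: |x''(s)| = 1/rho, and the unit principal
# normal rho*x''(s) points back toward the center (opposite to x(s)).
import numpy as np

rho, s = 3.0, 1.7

x     = np.array([ rho * np.cos(s / rho),      rho * np.sin(s / rho),      0.0])
xdot  = np.array([-np.sin(s / rho),            np.cos(s / rho),            0.0])  # unit tangent
xddot = np.array([-(1/rho) * np.cos(s / rho), -(1/rho) * np.sin(s / rho),  0.0])  # principal normal

print(np.linalg.norm(xdot))              # 1.0: the tangent is a unit vector
print(np.linalg.norm(xddot), 1.0 / rho)  # curvature kappa = 1/rho
print(rho * xddot + x / rho)             # ~ (0, 0, 0): p_hat = -x/rho, i.e. toward the center
```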
## The Curvature vector ${\displaystyle \kappa }$
The notions of the tangent, principal normal, and curvature generalize beyond circles to more general curves. We need only imagine that at each point of a more general curve there is a circle tangent to that curve whose radius of curvature is defined by the second derivative of the function describing the curve.
We may define a curvature vector
${\displaystyle {\boldsymbol {\kappa }}(s)\equiv \kappa (s){\boldsymbol {\hat {p}}}(s)}$
where we define ${\displaystyle \kappa (s)\equiv |{\boldsymbol {\ddot {x}}}(s)|=1/\rho (s)}$. The curvature is ${\displaystyle \kappa (s)}$ and the radius of curvature is ${\displaystyle \rho (s)}$. We note for future reference that
${\displaystyle {\dot {\boldsymbol {\hat {t}}}}=\kappa (s){\boldsymbol {\hat {p}}}}$.
## The binormal vector
You may have seen this coming. Given two unit vectors, we can define a third vector via the cross product. In this case, we define the unit binormal vector as the cross product of the unit tangent and the unit principal normal vectors
${\displaystyle {\boldsymbol {\hat {b}}}\equiv {\boldsymbol {\hat {t}}}\times {\boldsymbol {\hat {p}}}}$.
For curves confined to a plane, this vector always points in the constant direction normal to that plane. For more general curves, the orientation of this vector changes as we move along the curve.
As before, differentiating the dot product ${\displaystyle {\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\hat {b}}}=1}$ with respect to ${\displaystyle s}$
${\displaystyle {\frac {d}{ds}}({\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\hat {b}}})=2{\boldsymbol {\hat {b}}}\cdot {\dot {\boldsymbol {\hat {b}}}}=0}$

which indicates that the derivative of the binormal vector must be orthogonal to ${\displaystyle {\boldsymbol {\hat {b}}}}$, and therefore lies in the ${\displaystyle ({\boldsymbol {\hat {t}}},{\boldsymbol {\hat {p}}})}$-plane.
We know that the binormal is perpendicular to both the tangent ${\displaystyle {\boldsymbol {\hat {t}}}}$ and the principal normal ${\displaystyle {\boldsymbol {\hat {p}}}}$ vectors. Hence ${\displaystyle {\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\hat {t}}}=0}$ and ${\displaystyle {\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\hat {p}}}=0}$.
Differentiating with respect to arclength ${\displaystyle s}$
${\displaystyle {\frac {d}{ds}}({\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\hat {t}}})={\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\dot {\hat {t}}}}+{\boldsymbol {\dot {\hat {b}}}}\cdot {\boldsymbol {\hat {t}}}=0}$.
Thus,
${\displaystyle {\boldsymbol {\dot {\hat {b}}}}\cdot {\boldsymbol {\hat {t}}}=-{\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\dot {\hat {t}}}}=-{\boldsymbol {\hat {b}}}\cdot {\boldsymbol {p}}=-\kappa (s)({\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\hat {p}}})=0}$ .

Hence ${\displaystyle {\boldsymbol {\dot {\hat {b}}}}}$ points in either the ${\displaystyle {\boldsymbol {\hat {p}}}}$ or the ${\displaystyle -{\boldsymbol {\hat {p}}}}$ direction.
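As a small illustration of the last point, the sketch below evaluates b̂ = t̂ × p̂ for the plane circle used earlier: the binormal is the constant vector (0, 0, 1), so its derivative, and hence the torsion introduced in the next section, vanishes for a plane curve. The radius and sample points are arbitrary choices.

```python
# Sketch: for the plane circle above, the unit binormal b_hat = t_hat x p_hat
# is the constant vector (0, 0, 1), so its derivative along the curve vanishes.
import numpy as np

rho = 3.0

def t_hat(s):
    return np.array([-np.sin(s / rho), np.cos(s / rho), 0.0])

def p_hat(s):
    return np.array([-np.cos(s / rho), -np.sin(s / rho), 0.0])

def b_hat(s):
    return np.cross(t_hat(s), p_hat(s))

print(b_hat(0.4), b_hat(2.9))   # both (0, 0, 1): constant along a plane curve

# Central-difference derivative of b_hat: ~ (0, 0, 0), so the torsion of a
# plane curve is zero.
h = 1e-5
print((b_hat(0.4 + h) - b_hat(0.4 - h)) / (2.0 * h))
```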
## The Torsion
For curves not confined to a plane, it is possible for there not only to be a curvature, but also a "twist" of the curve. This twist will be related to the derivative of the binormal vector, as this ceases to be a constant vector for curves that are not confined to a single plane. To quantify this twist, we define the torsion ${\displaystyle \tau (s)}$ as being, from the Frenet equations, proportional to the derivative of the binormal vector
${\displaystyle {\boldsymbol {\dot {\hat {b}}}}\equiv -\tau (s){\boldsymbol {\hat {p}}}\qquad }$ which implies that ${\displaystyle \qquad {\boldsymbol {\hat {p}}}\cdot {\boldsymbol {\dot {\hat {b}}}}\equiv -\tau (s)({\boldsymbol {\hat {p}}}\cdot {\boldsymbol {\hat {p}}})=-\tau (s)}$.
Here the choice of the minus sign is a matter of convention.
To find out the value of torsion, we need a few more results. We begin with the cross product representation of the binormal vector ${\displaystyle {\boldsymbol {\hat {b}}}={\boldsymbol {\hat {t}}}\times {\boldsymbol {\hat {p}}}}$ and differentiate this with respect to arclength ${\displaystyle s}$
${\displaystyle {\boldsymbol {\dot {\hat {b}}}}={\boldsymbol {\dot {\hat {t}}}}\times {\boldsymbol {\hat {p}}}+{\boldsymbol {\hat {t}}}\times {\boldsymbol {\dot {\hat {p}}}}}$.
We can take the dot product of both sides of this expression with ${\displaystyle -{\boldsymbol {\hat {p}}}}$
${\displaystyle \tau (s)=-{\boldsymbol {\hat {p}}}\cdot {\boldsymbol {\dot {\hat {b}}}}=-{\boldsymbol {\hat {p}}}\cdot \left[{\boldsymbol {\dot {\hat {t}}}}\times {\boldsymbol {\hat {p}}}+{\boldsymbol {\hat {t}}}\times {\boldsymbol {\dot {\hat {p}}}}\right]}$.
We note that ${\displaystyle {\boldsymbol {\dot {\hat {t}}}}={\boldsymbol {\ddot {x}}}=\kappa (s){\boldsymbol {\hat {p}}}\qquad }$ and that ${\displaystyle \qquad {\boldsymbol {\hat {p}}}\times {\boldsymbol {\hat {p}}}=0}$
${\displaystyle \tau (s)=-{\boldsymbol {\hat {p}}}\cdot \left[\kappa (s)({\boldsymbol {\hat {p}}}\times {\boldsymbol {\hat {p}}})+{\boldsymbol {\hat {t}}}\times {\boldsymbol {\dot {\hat {p}}}}\right]}$
${\displaystyle \tau (s)=-{\boldsymbol {\hat {p}}}\cdot {\boldsymbol {\hat {t}}}\times {\boldsymbol {\dot {\hat {p}}}}}$.
We can make the following replacement ${\displaystyle {\boldsymbol {\hat {p}}}=\rho (s){\boldsymbol {\ddot {x}}}}$ to obtain
${\displaystyle \tau (s)=-\rho (s){\boldsymbol {\ddot {x}}}\cdot \left[{\boldsymbol {\hat {t}}}\times {\frac {d}{ds}}(\rho (s){\boldsymbol {\ddot {x}}})\right]}$

${\displaystyle \qquad =-\rho (s){\boldsymbol {\ddot {x}}}\cdot \left[{\boldsymbol {\hat {t}}}\times ({\dot {\rho }}(s){\boldsymbol {\ddot {x}}}+\rho (s){\boldsymbol {\bar {x}}})\right]}$.
Here we note that ${\displaystyle {\boldsymbol {\ddot {x}}}\cdot ({\boldsymbol {\hat {t}}}\times {\boldsymbol {\ddot {x}}})=0}$, also ${\displaystyle {\boldsymbol {\bar {x}}}\equiv {\frac {d^{3}{\boldsymbol {x}}}{ds^{3}}}}$ (because MediaWiki's math mode does not recognize the LaTeX triple dot derivative operator) .
${\displaystyle \tau (s)=-\rho ^{2}(s)({\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\hat {t}}}\times {\boldsymbol {\bar {x}}})=\rho ^{2}(s)({\boldsymbol {\dot {x}}}\cdot {\boldsymbol {\ddot {x}}}\times {\boldsymbol {\bar {x}}})}$.
Finally, recalling that ${\displaystyle \rho ^{2}={\frac {1}{|{\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\ddot {x}}}|}}}$
allowing the torsion to be written in the compact form
${\displaystyle \tau (s)={\frac {{\boldsymbol {\dot {x}}}(s)\cdot {\boldsymbol {\ddot {x}}}(s)\times {\boldsymbol {\bar {x}}}(s)}{|{\boldsymbol {\ddot {x}}}(s)\cdot {\boldsymbol {\ddot {x}}}(s)|}}}$.
"Here ${\displaystyle {\boldsymbol {x}}(s)}$ is a simple circular helix of radius 1. In this case, the angular coordinate is arclength ${\displaystyle s}$.
## Example: a simple circular helix with a linear third coordinate
We can write the equation of a simple helix, its tangent, principal normal, and binormal vectors as
${\displaystyle {\boldsymbol {x}}(s)=(\cos(s),\sin(s),cs)}$ the helix
${\displaystyle {\boldsymbol {\dot {x}}}(s)=(-\sin(s),\cos(s),c)}$ its tangent
${\displaystyle {\boldsymbol {\ddot {x}}}(s)=(-\cos(s),-\sin(s),0)}$ its principal normal
${\displaystyle {\boldsymbol {\bar {x}}}(s)=(-\sin(s),-\cos(s),0)}$ its third derivative
### Torsion for a simple unit circular helix in arclength ${\displaystyle s}$ with a linear third coordinate
The torsion is given by the triple scalar product
${\displaystyle \tau (s)={\frac {{\boldsymbol {\dot {x}}}\cdot {\boldsymbol {\ddot {x}}}\times {\boldsymbol {\bar {x}}}}{|{\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\ddot {x}}}|}}={\frac {1}{|{\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\ddot {x}}}|}}\det {\begin{bmatrix}-\sin(s)&\cos(s)&c\\-\cos(s)&-\sin(s)&0\\-\sin(s)&-\cos(s)&0\end{bmatrix}}=c(\cos ^{2}(s)+\sin ^{2}(s))=c.}$
Here, we note that ${\displaystyle |{\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\ddot {x}}}|=\sin ^{2}(s)+\cos ^{2}(s)=1.}$
Hence, for the simple unit circular helix in arclength coordinates, the torsion (the "twist") is the constant factor ${\displaystyle c}$ that lifts the helix off of the plane.
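A quick numerical sanity check of the determinant above (an added illustration; the value of c and the sample points are arbitrary): the triple scalar product evaluates to the constant c at every s, while the arclength-normalized torsion of the same helix is c/(1 + c²).

```python
# Sketch: evaluate the triple scalar product in the torsion formula for the unit
# helix, using the closed-form derivatives written above, and confirm that the
# determinant equals c at several sample points.
import numpy as np

c = 0.5
for s in np.linspace(0.0, 2.0 * np.pi, 5):
    xdot  = np.array([-np.sin(s),  np.cos(s), c])
    xddot = np.array([-np.cos(s), -np.sin(s), 0.0])
    xbar  = np.array([ np.sin(s), -np.cos(s), 0.0])
    tau_unnormalized = np.dot(xdot, np.cross(xddot, xbar)) / np.dot(xddot, xddot)
    print(s, tau_unnormalized)          # prints c = 0.5 at every s
# The arclength-normalized torsion of this helix is c / (1 + c**2) = 0.4.
```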
The concept of torsion extends to more general curves than simple circular helixes. Just as we can imagine a circle of radius ${\displaystyle \rho }$ tangent to a more general plane curve, we may also consider a helix tangent to a curve that is not confined to a plane.
### Helix of radius ${\displaystyle \rho }$
We can generalize the example of the helix to a circular helix of radius ${\displaystyle \rho }$ and with angular coordinate ${\displaystyle \sigma }$
${\displaystyle {\boldsymbol {x}}(\sigma )=(\rho \cos(\sigma ),\rho \sin(\sigma ),c\sigma )}$ the helix
${\displaystyle {\boldsymbol {x^{\prime }}}(\sigma )=(-\rho \sin(\sigma ),\rho \cos(\sigma ),c)}$ the tangent vectors to the helix
To calculate the unit tangent, we need the magnitude of the tangent vector ${\displaystyle {\boldsymbol {x^{\prime }}}(\sigma )}$
${\displaystyle |{\boldsymbol {x^{\prime }}}(\sigma )|={\sqrt {\rho ^{2}\sin ^{2}(\sigma )+\rho ^{2}\cos ^{2}\sigma +c^{2}}}={\sqrt {\rho ^{2}+c^{2}}}\equiv w}$ .
As before
${\displaystyle {\boldsymbol {\hat {t}}}={\frac {\boldsymbol {x^{\prime }}}{|{\boldsymbol {x^{\prime }}}(\sigma )|}}={\boldsymbol {\dot {x}}}(\sigma (s))={\frac {d{\boldsymbol {x}}}{d\sigma }}{\frac {d\sigma }{ds}}}$
implying that ${\displaystyle {\frac {d\sigma }{ds}}={\frac {1}{w}}\rightarrow \sigma =s/w}$ for ${\displaystyle \sigma =0}$ when ${\displaystyle s=0}$.
#### Helix of radius ${\displaystyle \rho }$ in arclength ${\displaystyle s}$ coordinates
We can write the equation of a simple circular helix of arbitrary radius, and the associated derivative related quantities as
${\displaystyle {\boldsymbol {x}}(s)=\left(\rho \cos(s/w),\rho \sin(s/w),cs/w\right)}$ the helix in arclength coordinates
${\displaystyle {\boldsymbol {\dot {x}}}(s)=\left(-(\rho /w)\sin(s/w),(\rho /w)\cos(s/w),c/w\right)}$ the unit tangent vector
${\displaystyle {\boldsymbol {\ddot {x}}}(s)=\left(-(\rho /w^{2})\cos(s/w),-(\rho /w^{2})\sin(s/w),0\right)}$ the principal normal vector
${\displaystyle {\boldsymbol {\bar {x}}}(s)=\left((\rho /w^{3})\sin(s/w),-(\rho /w^{3})\cos(s/w),0\right)}$ the third derivative with respect to arclength
We note ${\displaystyle |{\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\ddot {x}}}|=(\rho ^{2}/w^{4})(\cos ^{2}(s/w)+\sin ^{2}(s/w))=(\rho ^{2}/w^{4})}$
And we write the torsion as
${\displaystyle \tau (s)={\frac {{\boldsymbol {\dot {x}}}\cdot {\boldsymbol {\ddot {x}}}\times {\boldsymbol {\bar {x}}}}{|{\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\ddot {x}}}|}}={\frac {1}{|{\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\ddot {x}}}|}}\det {\begin{bmatrix}-(\rho /w)\sin(s/w)&(\rho /w)\cos(s/w)&c/w\\-(\rho /w^{2})\cos(s/w)&-(\rho /w^{2})\sin(s/w)&0\\(\rho /w^{3})\sin(s/w)&-(\rho /w^{3})\cos(s/w)&0\end{bmatrix}}=(w^{4}/\rho ^{2})(c/w)(\rho ^{2}/w^{5})(\cos ^{2}(s/w)+\sin ^{2}(s/w))={\frac {c}{w^{2}}}={\frac {c}{\rho ^{2}+c^{2}}}.}$
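The closed-form result τ = c/(ρ² + c²) can be verified without any hand differentiation; the sketch below applies the compact triple-product formula to the arclength-parameterized helix, with the derivatives with respect to s approximated by nested central differences. The constants ρ, c and the step size are arbitrary choices.

```python
# Sketch: numerical check of tau = c / (rho**2 + c**2) for the helix of radius rho,
# using the compact triple-product formula with arclength derivatives approximated
# by nested central differences of x(s) = (rho*cos(s/w), rho*sin(s/w), c*s/w).
import numpy as np

rho, c = 2.0, 0.7
w = np.sqrt(rho**2 + c**2)

def x(s):
    return np.array([rho * np.cos(s / w), rho * np.sin(s / w), c * s / w])

def d(f, s, h=1e-3):
    # central-difference derivative of a vector-valued function f at s
    return (f(s + h) - f(s - h)) / (2.0 * h)

s0 = 1.0
xdot  = d(x, s0)                                   # first derivative with respect to s
xddot = d(lambda s: d(x, s), s0)                   # second derivative
xbar  = d(lambda s: d(lambda u: d(x, u), s), s0)   # third derivative

tau = np.dot(xdot, np.cross(xddot, xbar)) / np.dot(xddot, xddot)
print(tau, c / (rho**2 + c**2))   # the two values agree to several decimal places
```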
"The unit tangent vector ${\displaystyle {\boldsymbol {\hat {t}}}}$, the unit principal normal vector ${\displaystyle {\boldsymbol {\hat {p}}}}$, and the unit binormal vector ${\displaystyle {\boldsymbol {\hat {p}}}}$ constitute the basis of the Frenet coordinate frame. "
## The Frenet coordinate frame
The unit tangent, unit principal normal, and unit binormal vectors form a coordinate frame at each point along a curve, called the Frenet frame after the French astronomer and mathematician Jean Frédéric Frenet (1816-1900).
We recognize the following equivalent expressions which follow from even and odd permutations of the cross product
${\displaystyle \qquad {\boldsymbol {\hat {t}}}\times {\boldsymbol {\hat {p}}}={\boldsymbol {\hat {b}}}\qquad {\boldsymbol {\hat {p}}}\times {\boldsymbol {\hat {b}}}={\boldsymbol {\hat {t}}}\qquad {\boldsymbol {\hat {b}}}\times {\boldsymbol {\hat {t}}}={\boldsymbol {\hat {p}}}\qquad }$ and ${\displaystyle \qquad {\boldsymbol {\hat {p}}}\times {\boldsymbol {\hat {t}}}=-{\boldsymbol {\hat {b}}}\qquad {\boldsymbol {\hat {b}}}\times {\boldsymbol {\hat {p}}}=-{\boldsymbol {\hat {t}}}\qquad {\boldsymbol {\hat {t}}}\times {\boldsymbol {\hat {b}}}=-{\boldsymbol {\hat {p}}}}$ .
From previous results, we recall that
${\displaystyle {\boldsymbol {\ddot {x}}}\equiv {\boldsymbol {\dot {\hat {t}}}}=\kappa (s){\boldsymbol {\hat {p}}}}$
We note also that we can differentiate the cross product representation of ${\displaystyle {\boldsymbol {\hat {p}}}}$
${\displaystyle {\boldsymbol {\dot {\hat {p}}}}={\boldsymbol {\dot {\hat {b}}}}\times {\boldsymbol {\hat {t}}}+{\boldsymbol {\hat {b}}}\times {\boldsymbol {\dot {\hat {t}}}}}$.
From the expression ${\displaystyle {\boldsymbol {\dot {\hat {t}}}}=\kappa (s){\boldsymbol {\hat {p}}}}$ and from the definition of torsion ${\displaystyle {\boldsymbol {\dot {\hat {b}}}}=-\tau (s){\boldsymbol {\hat {p}}}}$
${\displaystyle {\boldsymbol {\dot {\hat {p}}}}=-\tau (s){\boldsymbol {\hat {p}}}\times {\boldsymbol {\hat {t}}}+\kappa (s){\boldsymbol {\hat {b}}}\times {\boldsymbol {\hat {p}}}}$.
Because ${\displaystyle {\boldsymbol {\hat {p}}}\times {\boldsymbol {\hat {t}}}=-{\boldsymbol {\hat {b}}}}$ and ${\displaystyle {\boldsymbol {\hat {b}}}\times {\boldsymbol {\hat {p}}}=-{\boldsymbol {\hat {t}}}}$
${\displaystyle {\boldsymbol {\dot {\hat {p}}}}=\tau (s){\boldsymbol {\hat {b}}}-\kappa (s){\boldsymbol {\hat {t}}}}$.
### The Frenet equations
From the previous results we have the following autonomous system of ordinary differential equations in the arclength variable ${\displaystyle s}$, known as the Frenet equations; writing out the dot derivatives as ${\displaystyle d/ds}$:
${\displaystyle {\frac {d}{ds}}{\boldsymbol {\hat {t}}}=\kappa (s){\boldsymbol {\hat {p}}}}$
${\displaystyle {\frac {d}{ds}}{\boldsymbol {\hat {p}}}=-\kappa (s){\boldsymbol {\hat {t}}}+\tau (s){\boldsymbol {\hat {b}}}}$
${\displaystyle {\frac {d}{ds}}{\boldsymbol {\hat {b}}}=-\tau (s){\boldsymbol {\hat {p}}}}$,
where the last equation follows from the definition of torsion.
We may write this system in matrix and vector notation as
${\displaystyle {\frac {d}{ds}}{\begin{bmatrix}{\boldsymbol {\hat {t}}}\\{\boldsymbol {\hat {p}}}\\{\boldsymbol {\hat {b}}}\end{bmatrix}}={\begin{bmatrix}0&\kappa (s)&0\\-\kappa (s)&0&\tau (s)\\0&-\tau (s)&0\end{bmatrix}}{\begin{bmatrix}{\boldsymbol {\hat {t}}}\\{\boldsymbol {\hat {p}}}\\{\boldsymbol {\hat {b}}}\end{bmatrix}}}$
The Frenet result is beautiful, but suffers from the flaw that while the Frenet frame is, pointwise, an orthogonal frame, it does not constitute an orthogonal coordinate system centered on the curve. This follows because ${\displaystyle {\boldsymbol {\dot {\hat {p}}}}}$ and ${\displaystyle {\boldsymbol {\dot {\hat {b}}}}}$ do not point parallel to ${\displaystyle {\boldsymbol {\hat {t}}}}$. In short, the principal normal and binormal vectors are not transported in a parallel way along the curve.
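The Frenet equations can also be read as a recipe for reconstructing a curve from its curvature and torsion. The sketch below (an added illustration, not part of the original text) integrates the system for constant κ and τ with a fixed-step Runge-Kutta scheme, together with dx/ds = t̂; constant curvature and torsion should reproduce a circular helix of radius κ/(κ² + τ²) and lift parameter τ/(κ² + τ²), consistent with the helix results above.

```python
# Sketch: integrate the Frenet system for constant curvature kappa and torsion tau
# with a fixed-step fourth-order Runge-Kutta scheme, and reconstruct the curve from
# dx/ds = t_hat. Constant kappa and tau should trace out a circular helix.
import numpy as np

kappa, tau = 0.5, 0.2

def rhs(state):
    # state rows: x, t_hat, p_hat, b_hat (a (4, 3) array)
    x, t, p, b = state
    return np.array([t, kappa * p, -kappa * t + tau * b, -tau * p])

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * h * k1)
    k3 = rhs(state + 0.5 * h * k2)
    k4 = rhs(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Initial conditions: start at the origin with an orthonormal frame.
state = np.array([[0.0, 0.0, 0.0],   # x
                  [1.0, 0.0, 0.0],   # t_hat
                  [0.0, 1.0, 0.0],   # p_hat
                  [0.0, 0.0, 1.0]])  # b_hat

h, n = 0.01, 2000
for _ in range(n):
    state = rk4_step(state, h)

x, t, p, b = state
print(np.dot(t, t), np.dot(t, p), np.dot(p, b))  # frame stays (nearly) orthonormal
# The integrated trajectory is a helix with radius kappa/(kappa**2 + tau**2)
# and lift parameter tau/(kappa**2 + tau**2):
print(kappa / (kappa**2 + tau**2), tau / (kappa**2 + tau**2))
```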
## The First Fundamental Form of Differential geometry
The notion of a curve can be generalized beyond orthogonal cartesian coordinates to more general coordinates. This is done by considering differential arclength in a more general coordinate system.
We recall that the unit tangent vector is defined as the first derivative with respect to arclength of position on a curve ${\displaystyle {\boldsymbol {x}}(s)}$
${\displaystyle {\frac {d{\boldsymbol {x}}}{ds}}={\boldsymbol {\hat {t}}}}$ implying that differential position is ${\displaystyle d{\boldsymbol {x}}={\boldsymbol {\hat {t}}}ds}$ and that differential arclength squared is given by the dot product of ${\displaystyle d{\boldsymbol {x}}}$ with itself
${\displaystyle d{\boldsymbol {x}}\cdot d{\boldsymbol {x}}=({\boldsymbol {\hat {t}}}\cdot {\boldsymbol {\hat {t}}})(ds)^{2}}$.
### More general parameterization
Suppose that ${\displaystyle {\boldsymbol {x}}=(x_{1},x_{2},x_{3})}$ is parameterized in terms of a new coordinate ${\displaystyle {\boldsymbol {\sigma }}=(\sigma ^{1},\sigma ^{2})}$ in 2-dimensions, or ${\displaystyle {\boldsymbol {\sigma }}=(\sigma ^{1},\sigma ^{2},\sigma ^{3})}$ in 3-dimensions, such that ${\displaystyle {\boldsymbol {x}}={\boldsymbol {x}}({\boldsymbol {\sigma }})}$. The superscripted indices indicate that ${\displaystyle {\boldsymbol {\sigma }}}$ need not be an orthogonal cartesian coordinate system.
In 3-dimensions we write
${\displaystyle {\boldsymbol {x}}({\boldsymbol {\sigma }})\equiv \left(x_{1}(\sigma ^{1},\sigma ^{2},\sigma ^{3}),x_{2}(\sigma ^{1},\sigma ^{2},\sigma ^{3}),x_{3}(\sigma ^{1},\sigma ^{2},\sigma ^{3})\right)}$ and the differential position is given by
${\displaystyle d{\boldsymbol {x}}\equiv {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{1}}}d\sigma ^{1}+{\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{2}}}d\sigma ^{2}+{\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{3}}}d\sigma ^{3}}$ .
Formally, if we want the differential arclength for this new representation, it will be
${\displaystyle d{\boldsymbol {x}}\cdot d{\boldsymbol {x}}=\left({\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{1}}}d\sigma ^{1}+{\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{2}}}d\sigma ^{2}+{\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{3}}}d\sigma ^{3}\right)\cdot \left({\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{1}}}d\sigma ^{1}+{\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{2}}}d\sigma ^{2}+{\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{3}}}d\sigma ^{3}\right)}$.
Because the dot product is on the components of ${\displaystyle {\boldsymbol {x}}}$ the expression will be quite complicated if written out in complete detail. Fortunately we may make use of index notation to compactify this. We may write ${\displaystyle d{\boldsymbol {x}}}$ as
${\displaystyle d{\boldsymbol {x}}={\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\alpha }}}d\sigma ^{\alpha }\qquad }$ for ${\displaystyle \qquad \alpha =1,2,3.}$
Because the dot product is on the components of ${\displaystyle {\boldsymbol {x}}}$ all possible combinations of the partial derivatives and differentials multiply each other. In index notation this is compactly represented as
${\displaystyle d{\boldsymbol {x}}\cdot d{\boldsymbol {x}}=\left({\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\alpha }}}\cdot {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\beta }}}\right)d\sigma ^{\alpha }d\sigma ^{\beta }}$ where ${\displaystyle \beta =1,2,3}$, as well.
To further compactify the notation we define the tangents in the ${\displaystyle \sigma ^{\alpha }}$ and ${\displaystyle \sigma ^{\beta }}$ directions via
${\displaystyle {\boldsymbol {x}}_{\alpha }\equiv {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\alpha }}}\qquad }$ and ${\displaystyle \qquad {\boldsymbol {x}}_{\beta }\equiv {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\beta }}}}$
Thus, we can write the differential arclength squared as
${\displaystyle (ds)^{2}\equiv d{\boldsymbol {x}}\cdot d{\boldsymbol {x}}=({\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {x}}_{\beta })d\sigma ^{\alpha }d\sigma ^{\beta }}$ .
We can think of the quantity ${\displaystyle ({\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {x}}_{\beta })}$ as a 2x2 matrix in 2-dimensions and a 3x3 matrix in 3-dimensions, where the elements of the matrix are dot products of tangent vectors. We define this quantity as the metric tensor ${\displaystyle g_{\alpha \beta }\equiv ({\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {x}}_{\beta })}$. The metric tensor reduces to the identity matrix in orthogonal cartesian coordinates, and to a diagonal matrix in other orthogonal coordinate systems, with values possibly other than 1 on the diagonal.
We call the expression, which is a quadratic form,
${\displaystyle (ds)^{2}=g_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}$
the First Fundamental Form of Differential Geometry.
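To make the metric tensor concrete, the following sketch computes g_{αβ} = x_α · x_β numerically for a cylindrical-type parameterization (a hypothetical example chosen only for illustration); the result is the expected diagonal matrix with a non-unit entry on the diagonal.

```python
# Sketch: the metric tensor g_{alpha beta} = x_alpha . x_beta computed numerically
# for the cylindrical-type coordinates x(sigma) = (sigma1*cos(sigma2), sigma1*sin(sigma2), sigma3).
# The expected result is diag(1, sigma1**2, 1).
import numpy as np

def x(sigma):
    r, theta, z = sigma
    return np.array([r * np.cos(theta), r * np.sin(theta), z])

def metric(sigma, h=1e-6):
    sigma = np.asarray(sigma, dtype=float)
    tangents = []
    for a in range(3):
        e = np.zeros(3)
        e[a] = h
        tangents.append((x(sigma + e) - x(sigma - e)) / (2.0 * h))   # x_alpha
    T = np.array(tangents)
    return T @ T.T                                                   # g_ab = x_a . x_b

sigma = np.array([2.0, 0.8, -1.0])
print(np.round(metric(sigma), 6))   # ~ diag(1, 4, 1) since sigma1 = 2
```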
## Covariant and Contravariant transformations of tensors
The "covariant" or "covariance" finds several meaning in physics. The first meaning refers to a particular transformation law for changing coordinate systems. Here, this will refer to two different possible tensor transformation laws and the reason why we have superscripted and subscripted indexes to denote components to vectors and higher order tensors.
An easy way to see why we naturally have two different types of components of a vector can be seen in the simple example of the full differential of a scalar function
${\displaystyle df={\frac {\partial f}{\partial \sigma ^{1}}}d\sigma ^{1}+{\frac {\partial f}{\partial \sigma ^{2}}}d\sigma ^{2}+{\frac {\partial f}{\partial \sigma ^{3}}}d\sigma ^{3}}$
The quantity ${\displaystyle df}$ is a scalar; the values of ${\displaystyle f}$ may represent a family of level curves or surfaces. This differential is the result of the inner or dot product of two vector quantities--the vector of partial derivatives of ${\displaystyle f}$ (the gradient, which is normal to the level surfaces of ${\displaystyle f}$) and the vector of differential displacements along the coordinate directions. Employing index notation, we may write
${\displaystyle df={\frac {\partial f}{\partial \sigma ^{\alpha }}}d\sigma ^{\alpha }=\left({\frac {\partial f}{\partial \sigma ^{1}}},{\frac {\partial f}{\partial \sigma ^{2}}},{\frac {\partial f}{\partial \sigma ^{3}}}\right)\cdot \left(d\sigma ^{1},d\sigma ^{2},d\sigma ^{3}\right)}$
which has the form of an inner or dot product of two vectors.
### Changing coordinate systems
Suppose we want to change from the ${\displaystyle {\boldsymbol {\sigma }}=(\sigma ^{1},\sigma ^{2},\sigma ^{3})}$ to a new coordinate system ${\displaystyle {\boldsymbol {y}}=(y^{1},y^{2},y^{3})}$ coordinates.
We can transform the tangent vector
${\displaystyle {\frac {\partial f}{\partial \sigma ^{\alpha }}}{\frac {\partial \sigma ^{\alpha }}{dy^{\gamma }}}={\frac {\partial f}{\partial y^{\gamma }}}}$. Here ${\displaystyle \gamma =1,2,3}$.
We can transform the differentials (which form the co-vector) as
${\displaystyle {\frac {\partial y^{\gamma }}{\partial \sigma ^{\alpha }}}d\sigma ^{\alpha }}$.
If we put these together, we have
${\displaystyle df=\left({\frac {\partial f}{\partial \sigma ^{\alpha }}}{\frac {\partial \sigma ^{\alpha }}{\partial y^{\gamma }}}\right)\left({\frac {\partial y^{\gamma }}{\partial \sigma ^{\beta }}}d\sigma ^{\beta }\right)={\frac {\partial f}{\partial y^{\gamma }}}dy^{\gamma }}$. Note that the scalar function ${\displaystyle df}$ is "invariant" under the transformation from the ${\displaystyle {\boldsymbol {\sigma }}}$ to the ${\displaystyle {\boldsymbol {y}}}$ coordinates.
The tangent vector follows a covariant transformation law
${\displaystyle {\frac {\partial \sigma ^{\alpha }}{\partial y^{\gamma }}}{\frac {\partial f}{\partial \sigma ^{\alpha }}}={\frac {\partial f}{\partial y^{\gamma }}}}$
Whereas the co-vector obeys a "contravariant transformation law"
${\displaystyle {\frac {\partial y^{\gamma }}{\partial \sigma ^{\alpha }}}d\sigma ^{\alpha }=dy^{\gamma }}$.
### Generalizing the co-variant and contravariant transformation laws
In general, we note that the coordinate transformation ${\displaystyle {\boldsymbol {\sigma }}\rightarrow {\boldsymbol {y}}}$ transforms the covariant vector ${\displaystyle a_{\alpha }}$
${\displaystyle a_{\alpha }{\frac {\partial \sigma ^{\alpha }}{\partial y^{\gamma }}}={\bar {a}}_{\gamma }}$.
Here the bar reminds us that the components of ${\displaystyle {\bar {a}}_{\gamma }}$ are in the new ${\displaystyle {\boldsymbol {y}}}$ coordinates.
The corresponding transformation for a contravariant vector ${\displaystyle b^{\beta }}$ may be written as
${\displaystyle b^{\beta }{\frac {\partial y^{\gamma }}{\partial \sigma ^{\beta }}}={\bar {b}}^{\gamma }}$.
Again the bar is a reminder that the components of ${\displaystyle {\bar {a}}_{\gamma }}$ and ${\displaystyle {\bar {b}}^{\gamma }}$ are in the new ${\displaystyle {\boldsymbol {y}}}$ coordinates.
### Definition: a contraction
A contraction is the result of summing over repeated indexes where one index is covariant and the other is contravariant. For example
${\displaystyle a_{\gamma }b^{\gamma }=c}$
The contraction of a contravariant vector with a covariant vector is an invariant (a scalar).
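The invariance of such a contraction can be checked numerically. The sketch below assumes a linear change of coordinates y = Aσ with a constant, invertible matrix A (so ∂y/∂σ = A and ∂σ/∂y = A⁻¹); the matrix and the component values are hypothetical.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])     # dy/dsigma (assumed constant Jacobian)
A_inv = np.linalg.inv(A)            # dsigma/dy

a = np.array([1.0, -2.0, 0.5])      # covariant components in sigma coordinates
b = np.array([3.0, 4.0, -1.0])      # contravariant components in sigma coordinates

a_bar = A_inv.T @ a                 # a_bar_g = (dsigma^a / dy^g) a_a
b_bar = A @ b                       # b_bar^g = (dy^g / dsigma^a) b^a

print(a @ b, a_bar @ b_bar)         # both give the same scalar (up to round-off)
```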
"The surface ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2})\equiv {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2},\sigma ^{3})}$, with the ${\displaystyle \sigma ^{3}={\mbox{const.}}}$ for points on the surface. The coordinate system ${\displaystyle (\sigma ^{1},\sigma ^{2})}$ is not, in general an orthogonal coordinate system. "
# The Theory of Surfaces
A surface is a geometrical object parameterized in two dimensions via ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2})}$. A family of level surfaces may be defined with an additional coordinate ${\displaystyle \sigma ^{3}}$ which is equal to a constant on a given surface ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2},\sigma ^{3}={\mbox{const.}})}$.
The parameters ${\displaystyle (\sigma ^{1},\sigma ^{2})}$ describe coordinate curves on the surface allowing us to write expressions for tangent vectors to each coordinate curve
${\displaystyle {\boldsymbol {t}}_{1}\equiv {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{1}}}\qquad }$ and ${\displaystyle \qquad {\boldsymbol {t}}_{2}\equiv {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{2}}}}$.
The tangent vectors ${\displaystyle ({\boldsymbol {t}}_{1},{\boldsymbol {t}}_{2})}$ span the tangent plane at any given point ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2})}$. Because the coordinates ${\displaystyle (\sigma ^{1},\sigma ^{2})}$ are not, in general, orthogonal coordinates, the tangent vectors are not, in general, orthogonal.
We recall that the components of the metric tensor ${\displaystyle g_{\alpha \beta }}$ are given by dot products of tangent vectors. In this case we note for future reference that ${\displaystyle g_{11}={\boldsymbol {x}}_{1}\cdot {\boldsymbol {x}}_{1}}$, meaning that ${\displaystyle |{\boldsymbol {x}}_{1}|={\sqrt {g_{11}}}}$.
"A plane spanned by (not unit) vectors ${\displaystyle ({\boldsymbol {t}}_{1},{\boldsymbol {t}}_{2})}$ in a non-orthogonal coordinate system. There are two ways of representing a vector ${\displaystyle {\boldsymbol {V}}}$ in terms of components. a) One way is by finding vectors that are parallel to the coordinate axes that sum to the vector by the parallelogram law. b) The other way is the traditional method of projecting orthogonally via the dot product."
## The Covariant and Contravariant components of a vector
Because we are no longer confined to considering only orthogonal coordinate frames, we have the possibility of representing the components of a given vector in either covariant or contravariant form. The mental picture that the reader may already have of representing a vector by its components is that of taking the dot product of the vector with the unit basis vectors along the given coordinate axes. Thus we use the scalar product to project the vector perpendicularly onto each coordinate direction. This is the "normal" way to find the components of a vector in a coordinate system.
There is another way that we can form the components of a vector, and that is to find the two vectors along the coordinate axis directions that sum to our vector via the parallelogram law. For orthogonal cartesian coordinates, these are the same components as those from the perpendicular projection. In the case of a non-orthogonal coordinate frame, the projection is parallel to the other coordinate axis.
Suppose the vector ${\displaystyle {\boldsymbol {V}}}$ is in a plane spanned by the two vectors ${\displaystyle ({\boldsymbol {t}}_{1},{\boldsymbol {t}}_{2})}$ that are the tangent vectors to a point on a surface ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2})}$. Because ${\displaystyle (\sigma ^{1},\sigma ^{2})}$ are not, in general, arclength coordinates on the surface, the tangent vectors ${\displaystyle ({\boldsymbol {t}}_{1},{\boldsymbol {t}}_{2})}$ are not, in general unit tangent vectors.
### The Covariant components of a vector
We can easily form the covariant components of a vector ${\displaystyle {\boldsymbol {V}}}$ by noting formally that
${\displaystyle {\boldsymbol {V}}=a_{1}{\boldsymbol {\hat {t}}}_{1}+a_{2}{\boldsymbol {\hat {t}}}_{2}}$,
but we are also aware that the tangents are not unit tangents, so a factor of ${\displaystyle |{\boldsymbol {x}}_{\alpha }|={\sqrt {g_{\alpha \alpha }}}}$ may be moved from the unit tangents to the components
${\displaystyle {\boldsymbol {V}}={\frac {a_{1}}{\sqrt {g_{11}}}}{\boldsymbol {t}}_{1}+{\frac {a_{2}}{\sqrt {g_{22}}}}{\boldsymbol {t}}_{2}}$.
We can work this from the other direction by writing the dot product of ${\displaystyle {\boldsymbol {V}}}$ with the unit tangent vectors, respectively
${\displaystyle {\boldsymbol {V}}\cdot {\boldsymbol {\hat {t}}}_{\alpha }={\boldsymbol {V}}\cdot {\frac {{\boldsymbol {x}}_{\alpha }}{|{\boldsymbol {x}}_{\alpha }|}}\equiv {\frac {|V||{\boldsymbol {x}}_{\alpha }|\cos(\theta _{\alpha })}{|{\boldsymbol {x}}_{\alpha }|}}={\frac {|V||{\boldsymbol {x}}_{\alpha }|\cos(\theta _{\alpha })}{\sqrt {g_{\alpha \alpha }}}}}$. Here, the ${\displaystyle \theta _{\alpha }}$ is the angle between ${\displaystyle {\boldsymbol {V}}}$ and the respective ${\displaystyle {\boldsymbol {t}}_{\alpha }}$ direction.
When we compare this with the previous representation, we see that the covariant components ${\displaystyle a_{\alpha }}$ are
${\displaystyle a_{1}=({\boldsymbol {V}}\cdot {\boldsymbol {x}}_{1})\qquad }$ and ${\displaystyle \qquad a_{2}=({\boldsymbol {V}}\cdot {\boldsymbol {x}}_{2})}$.
The presence of the factor of ${\displaystyle {\sqrt {g_{\alpha \alpha }}}}$ reminds us that the coordinates are not arclength coordinates, otherwise, the tangent vectors would be unit tangent vectors.
"Details of the contravariant components of a vector ${\displaystyle {\boldsymbol {V}}}$. The contravariant components are the parallel projections of the vectors."
### The Contravariant Components of a vector
The contravariant components of a vector require a bit more work. We can begin the same way. If ${\displaystyle {\boldsymbol {V}}}$ is represented in terms of components along the respective tangent directions, it must be a linear combination of the tangents
${\displaystyle {\boldsymbol {V}}=a^{1}{\boldsymbol {x}}_{1}+a^{2}{\boldsymbol {x}}_{2}=a^{1}{\sqrt {g_{11}}}{\boldsymbol {\hat {t}}}_{1}+a^{2}{\sqrt {g_{22}}}{\boldsymbol {\hat {t}}}_{2}}$, considering the geometry as a parallel projection to the opposing coordinate axis.
To find ${\displaystyle a^{1}}$ and ${\displaystyle a^{2}}$ we consider the cross products
${\displaystyle |{\boldsymbol {V}}\times {\boldsymbol {x}}_{1}|=|{\boldsymbol {V}}||{\boldsymbol {x}}_{1}|\sin(\theta _{1})}$
${\displaystyle |{\boldsymbol {V}}\times {\boldsymbol {x}}_{2}|=|{\boldsymbol {V}}||{\boldsymbol {x}}_{2}|\sin(\theta _{2})}$
${\displaystyle |{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|=|{\boldsymbol {x}}_{1}||{\boldsymbol {x}}_{2}|\sin(\theta _{1}+\theta _{2})={\sqrt {g_{11}}}{\sqrt {g_{22}}}\sin(\theta _{1}+\theta _{2})}$.
We note that
${\displaystyle \sin(\pi -\theta _{1}-\theta _{2})=\sin(\theta _{1}+\theta _{2})}$
and
${\displaystyle {\frac {|{\boldsymbol {V}}|}{\sin(\pi -\theta _{1}-\theta _{2})}}={\frac {|{\boldsymbol {V}}|}{\sin(\theta _{1}+\theta _{2})}}}$
Applying the law of sines to the triangle formed by ${\displaystyle {\boldsymbol {V}}}$ and its parallel projections onto the tangent directions (the side along ${\displaystyle {\boldsymbol {x}}_{1}}$, of length ${\displaystyle a^{1}{\sqrt {g_{11}}}}$, is opposite the angle ${\displaystyle \theta _{2}}$, and the side along ${\displaystyle {\boldsymbol {x}}_{2}}$, of length ${\displaystyle a^{2}{\sqrt {g_{22}}}}$, is opposite ${\displaystyle \theta _{1}}$), we note that
${\displaystyle {\frac {|{\boldsymbol {V}}|}{\sin(\theta _{1}+\theta _{2})}}={\frac {a^{1}{\sqrt {g_{11}}}}{\sin(\theta _{2})}}={\frac {a^{2}{\sqrt {g_{22}}}}{\sin(\theta _{1})}}}$.
We may make use of the cross product relations to substitute for the sines in the law of sines to produce
${\displaystyle {\frac {|{\boldsymbol {V}}|{\sqrt {g_{11}}}{\sqrt {g_{22}}}}{|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|}}={\frac {a^{1}{\sqrt {g_{11}}}|{\boldsymbol {V}}|{\sqrt {g_{22}}}}{|{\boldsymbol {V}}\times {\boldsymbol {x}}_{2}|}}\qquad }$ and ${\displaystyle \qquad {\frac {|{\boldsymbol {V}}|{\sqrt {g_{11}}}{\sqrt {g_{22}}}}{|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|}}={\frac {a^{2}{\sqrt {g_{22}}}|{\boldsymbol {V}}|{\sqrt {g_{11}}}}{|{\boldsymbol {V}}\times {\boldsymbol {x}}_{1}|}}}$.
Solving for ${\displaystyle a^{1}}$ and ${\displaystyle a^{2}}$
${\displaystyle a^{1}={\frac {|{\boldsymbol {V}}\times {\boldsymbol {x}}_{2}|}{|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|}}\qquad }$ and ${\displaystyle \qquad a^{2}={\frac {|{\boldsymbol {V}}\times {\boldsymbol {x}}_{1}|}{|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|}}}$.
Here, ${\displaystyle |{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|}$ is the area of the parallelogram with sides given by the tangent vectors ${\displaystyle {\boldsymbol {x}}_{1}}$ and ${\displaystyle {\boldsymbol {x}}_{2}}$.
The quantity ${\displaystyle |{\boldsymbol {V}}||{\boldsymbol {x}}_{2}|\sin(\theta _{2})}$ is the area of the parallelogram with sides formed by the tangent vector ${\displaystyle {\boldsymbol {x}}_{2}}$ and the vector ${\displaystyle {\boldsymbol {V}}}$ and similarly, the quantity ${\displaystyle |{\boldsymbol {V}}||{\boldsymbol {x}}_{1}|\sin(\theta _{1})}$ is the area of the parallelogram formed by the tangent vector ${\displaystyle {\boldsymbol {x}}_{1}}$ and the vector ${\displaystyle {\boldsymbol {V}}}$.
Thus, the contravariant components of ${\displaystyle {\boldsymbol {V}}}$ may be related to the ratios of parallelogram areas
${\displaystyle a^{1}={\frac {{\mbox{Parallelogram Area}}(|{\boldsymbol {V}}\times {\boldsymbol {x}}_{2}|)}{{\mbox{Parallelogram Area}}(|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|)}}\qquad }$ and ${\displaystyle \qquad a^{2}={\frac {{\mbox{Parallelogram Area}}(|{\boldsymbol {V}}\times {\boldsymbol {x}}_{1}|)}{{\mbox{Parallelogram Area}}(|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|)}}}$.
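A short numeric sketch (the tangent vectors and V below are hypothetical) illustrates both representations: the covariant components come from dot products with the tangents, while the contravariant components can be recovered either by raising the index with the metric tensor or from the signed parallelogram-area ratios just derived.

```python
import numpy as np

x1 = np.array([1.0, 0.0])     # tangent vectors (hypothetical, non-orthogonal, not unit)
x2 = np.array([1.0, 2.0])
V  = np.array([2.0, 1.0])     # the vector to be decomposed

g = np.array([[x1 @ x1, x1 @ x2],
              [x2 @ x1, x2 @ x2]])          # metric tensor g_ab

a_cov = np.array([V @ x1, V @ x2])          # covariant components a_a = V . x_a
a_con = np.linalg.solve(g, a_cov)           # contravariant a^a, i.e. g^ab a_b

print(a_con)                                # [1.5, 0.5]
print(a_con[0]*x1 + a_con[1]*x2)            # reconstructs V = a^1 x1 + a^2 x2

# The same contravariant components from the parallelogram-area ratios:
cross2 = lambda u, w: u[0]*w[1] - u[1]*w[0]
print(cross2(V, x2)/cross2(x1, x2), cross2(x1, V)/cross2(x1, x2))   # 1.5, 0.5
```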
## Covariant and Contravariant transformation laws
We now have defined the covariant and contravariant components of an arbitrary vector ${\displaystyle {\boldsymbol {V}}}$ in a coordinate frame that is not, in general, orthogonal. The task now is to verify that these representations transform according to the respective covariant or contravariant transformation laws.
### Coordinate transformation of the covariant components of a vector
Above, we found that the representation of a vector ${\displaystyle {\boldsymbol {V}}}$ has covariant components in the nonorthogonal coordinate system ${\displaystyle {\boldsymbol {\sigma }}=(\sigma ^{1},\sigma ^{2})}$
${\displaystyle a_{\alpha }={\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {V}}}$ . Here ${\displaystyle {\boldsymbol {x}}_{\alpha }={\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\alpha }}}}$ is the tangent to the coordinate curve in the ${\displaystyle \sigma ^{\alpha }}$ direction.
We consider transforming the covariant components to a new coordinate system ${\displaystyle {\boldsymbol {y}}=(y^{1},y^{2})}$ such that
${\displaystyle {\bar {a}}_{\beta }={\boldsymbol {\bar {x}}}_{\beta }\cdot {\boldsymbol {V}}={\boldsymbol {x}}_{\alpha }{\frac {\partial \sigma ^{\alpha }}{\partial y^{\beta }}}\cdot {\boldsymbol {V}}={\frac {\partial \sigma ^{\alpha }}{\partial y^{\beta }}}{\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {V}}={\frac {\partial \sigma ^{\alpha }}{\partial y^{\beta }}}a_{\alpha }}$.
Thus, ${\displaystyle {\bar {a}}_{\beta }={\frac {\partial \sigma ^{\alpha }}{\partial y^{\beta }}}a_{\alpha }\qquad }$ and ${\displaystyle \qquad {\frac {\partial y^{\beta }}{\partial \sigma ^{\alpha }}}{\bar {a}}_{\beta }=a_{\alpha }}$ in accordance with the covariant transformation law.
### Coordinate transformation of the contravariant components of a vector
Similarly, the representation of the vector ${\displaystyle {\boldsymbol {V}}}$ has contravariant components in the nonorthogonal coordinate system ${\displaystyle {\boldsymbol {y}}=(y^{1},y^{2})}$ with vectors tangent to the ${\displaystyle y^{\beta }}$ coordinate directions given by ${\displaystyle {\boldsymbol {\bar {x}}}_{\beta }}$ (indicated by the bar)
${\displaystyle {\boldsymbol {V}}={\bar {a}}^{\beta }{\boldsymbol {\bar {x}}}_{\beta }={\bar {a}}^{\beta }{\frac {\partial {\boldsymbol {x}}}{\partial y^{\beta }}}}$.
If we wanted to transform the tangent vectors into the ${\displaystyle {\boldsymbol {\sigma }}=(\sigma ^{1},\sigma ^{2})}$ coordinates, this would be the same as above, as the tangent behaves as a covariant vector
${\displaystyle {\boldsymbol {\bar {x}}}_{\beta }{\frac {\partial y^{\beta }}{\partial \sigma ^{\alpha }}}={\frac {\partial {\boldsymbol {x}}}{\partial y^{\beta }}}{\frac {\partial y^{\beta }}{\partial \sigma ^{\alpha }}}={\boldsymbol {x}}_{\alpha }}$.
In the ${\displaystyle {\boldsymbol {\sigma }}=(\sigma ^{1},\sigma ^{2})}$ coordinates we may write
${\displaystyle {\boldsymbol {V}}=a^{\alpha }{\boldsymbol {x}}_{\alpha }=a^{\alpha }\left({\frac {\partial y^{\beta }}{\partial \sigma ^{\alpha }}}\right){\boldsymbol {\bar {x}}}_{\beta }=\left(a^{\alpha }{\frac {\partial y^{\beta }}{\partial \sigma ^{\alpha }}}\right){\boldsymbol {\bar {x}}}_{\beta }={\bar {a}}^{\beta }{\boldsymbol {\bar {x}}}_{\beta }}$.
Comparing with the first expression for the contravariant representation of ${\displaystyle {\boldsymbol {V}}}$ we must conclude that
${\displaystyle a^{\alpha }{\frac {\partial y^{\beta }}{\partial \sigma ^{\alpha }}}={\bar {a}}^{\beta }\qquad }$ and ${\displaystyle \qquad a^{\alpha }={\frac {\partial \sigma ^{\alpha }}{\partial y^{\beta }}}{\bar {a}}^{\beta }}$
in accordance with the contravariant transformation law.
## Relating the covariant and the contravariant components of a vector
We may find the ${\displaystyle \alpha }$-th component of a vector ${\displaystyle {\boldsymbol {V}}}$ by taking the inner product with the tangent vector to the coordinate curve in the ${\displaystyle \alpha }$-th direction
${\displaystyle a_{\alpha }={\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {V}}={\boldsymbol {x}}_{\alpha }\cdot \left(a^{\beta }{\boldsymbol {x}}_{\beta }\right)=\left({\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {x}}_{\beta }\right)a^{\beta }=g_{\alpha \beta }a^{\beta }}$.
Here we recognize the metric tensor ${\displaystyle g_{\alpha \beta }=\left({\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {x}}_{\beta }\right)}$.
The metric tensor may be thought of as having the power to lower an index
${\displaystyle a_{\alpha }=g_{\alpha \beta }a^{\beta }}$.
We identify a matrix with ${\displaystyle g_{\alpha \beta }}$ that must be invertible if coordinate transformations are to be invertible, hence, there should be a ${\displaystyle \left(g_{\alpha \beta }\right)^{-1}\equiv g^{\alpha \beta }}$. Formally, this inverse should have the property that
${\displaystyle g^{\alpha \beta }a_{\alpha }=a^{\beta }\qquad }$ and ${\displaystyle \qquad g_{\beta \gamma }g^{\alpha \beta }a_{\alpha }=a_{\gamma }\qquad }$ implying that ${\displaystyle \qquad g_{\beta \gamma }g^{\alpha \beta }\equiv {\delta _{\gamma }}^{\alpha }={\begin{cases}1&\gamma =\alpha \\0&\gamma \neq \alpha \end{cases}}}$ which is called the Kronecker delta.
We can find the components of ${\displaystyle g^{\alpha \beta }}$ by formally considering the inverse of a 2x2 symmetric matrix ${\displaystyle A}$ (the metric tensor is symmetric)
Given ${\displaystyle A={\begin{bmatrix}a&c\\c&b\end{bmatrix}}}$, we know that ${\displaystyle A^{-1}={\begin{bmatrix}{\frac {b}{\det A}}&-{\frac {c}{\det A}}\\-{\frac {c}{\det A}}&{\frac {a}{\det A}}\end{bmatrix}}}$ where ${\displaystyle \det A=ab-c^{2}}$.
Applying these results to the metric tensor, we define ${\displaystyle g\equiv \det(g_{\alpha \beta })=g_{11}g_{22}-g_{12}^{2}}$ and ${\displaystyle {\frac {1}{g}}\equiv \det(g^{\alpha \beta })}$. Thus we may write
${\displaystyle g^{11}={\frac {g_{22}}{g}},\qquad }$ ${\displaystyle \qquad g^{12}=g^{21}={\frac {-g_{12}}{g}},\qquad }$, and ${\displaystyle \qquad g^{22}={\frac {g_{11}}{g}}}$.
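These expressions are easy to verify symbolically; the short check below (a sketch that assumes the sympy library) multiplies the covariant metric by the proposed contravariant components and confirms that the product is the identity.

```python
import sympy as sp

g11, g12, g22 = sp.symbols('g11 g12 g22')
g_lo = sp.Matrix([[g11, g12], [g12, g22]])            # covariant metric g_ab
det  = g11*g22 - g12**2
g_up = sp.Matrix([[g22, -g12], [-g12, g11]]) / det    # proposed contravariant g^ab

print(sp.simplify(g_lo * g_up))                       # the 2x2 identity matrix
```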
"The tangent vectors ${\displaystyle {\boldsymbol {x}}_{1}}$ and ${\displaystyle {\boldsymbol {x}}_{2}}$ and the differential distances in the ${\displaystyle \sigma ^{1}}$ and ${\displaystyle \sigma ^{2}}$ directions given by ${\displaystyle d{\boldsymbol {x}}^{(1)}}$ and ${\displaystyle d{\boldsymbol {x}}^{(2)}}$, respectively."
## Surface area elements
In integral calculus we all learn that a change of coordinates in a differential area, volume, or hypervolume results in the increments of the transformed coordinates multiplied by the Jacobian of the transformation. In this section we see how our tensor representation relates to this classical representation.
We consider the tangents, ${\displaystyle {\boldsymbol {x}}_{1}}$ and ${\displaystyle {\boldsymbol {x}}_{2}}$, to a point on a surface ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2})}$, which are not in general arclength coordinates. We further consider the area element ${\displaystyle dS}$ formed by increments in the ${\displaystyle {\boldsymbol {x}}_{1}}$ and ${\displaystyle {\boldsymbol {x}}_{2}}$ directions; this area element is given by the cross product of the increments ${\displaystyle d{\boldsymbol {x}}^{(1)}}$ and ${\displaystyle d{\boldsymbol {x}}^{(2)}}$. Here the superscripts ${\displaystyle (1)}$ and ${\displaystyle (2)}$ are labels rather than indexes.
We represent these differential distances by
${\displaystyle d{\boldsymbol {x}}^{(\alpha )}\equiv {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\alpha }}}d\sigma ^{\alpha }}$ where ${\displaystyle \alpha =1,2}$.
The differential area ${\displaystyle dS}$ is given by the cross product of these differential distances
${\displaystyle dS\equiv |d{\boldsymbol {x}}^{(1)}\times d{\boldsymbol {x}}^{(2)}|=\left|\left({\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{1}}}d\sigma ^{1}\right)\times \left({\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{2}}}d\sigma ^{2}\right)\right|}$
${\displaystyle \quad \qquad =\left|{\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{1}}}\times {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{2}}}\right|d\sigma ^{1}d\sigma ^{2}=\left|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}\right|d\sigma ^{1}d\sigma ^{2}}$ .
We consider the area of the parallelogram spanned by the tangent vectors, represented by the cross product of the tangent vectors, where ${\displaystyle \theta }$ is the angle between the vectors, and noting ${\displaystyle \sin ^{2}(\theta )=1-\cos ^{2}(\theta )}$
${\displaystyle \left|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}\right|^{2}=|{\boldsymbol {x}}_{1}|^{2}|{\boldsymbol {x}}_{2}|^{2}\sin ^{2}(\theta )=|{\boldsymbol {x}}_{1}|^{2}|{\boldsymbol {x}}_{2}|^{2}\left(1-\cos ^{2}(\theta )\right)=|{\boldsymbol {x}}_{1}|^{2}|{\boldsymbol {x}}_{2}|^{2}-|{\boldsymbol {x}}_{1}|^{2}|{\boldsymbol {x}}_{2}|^{2}\cos ^{2}(\theta )}$
${\displaystyle \quad \qquad \qquad =({\boldsymbol {x}}_{1}\cdot {\boldsymbol {x}}_{1})({\boldsymbol {x}}_{2}\cdot {\boldsymbol {x}}_{2})-({\boldsymbol {x}}_{1}\cdot {\boldsymbol {x}}_{2})^{2}\equiv g_{11}g_{22}-g_{12}^{2}=\det(g_{\alpha \beta })=g}$.
Thus, the relationship between the determinant of the metric tensor and the Jacobian of the transformation from the ${\displaystyle {\boldsymbol {x}}}$ coordinates and the ${\displaystyle {\boldsymbol {\sigma }}}$ coordinates is established as
${\displaystyle |J|\equiv |{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|={\sqrt {g}}}$. Thus,
${\displaystyle dS=|J|d\sigma ^{1}d\sigma ^{2}={\sqrt {g}}d\sigma ^{1}d\sigma ^{2}}$.
Traditionally, the Jacobian of the transformation would be represented by
${\displaystyle |J|=\det \left|{\begin{bmatrix}{\hat {e}}_{1}&{\hat {e}}_{2}&{\hat {e}}_{3}\\{\frac {\partial x_{1}}{\partial \sigma ^{1}}}&{\frac {\partial x_{2}}{\partial \sigma ^{1}}}&{\frac {\partial x_{3}}{\partial \sigma ^{1}}}\\{\frac {\partial x_{1}}{\partial \sigma ^{2}}}&{\frac {\partial x_{2}}{\partial \sigma ^{2}}}&{\frac {\partial x_{3}}{\partial \sigma ^{2}}}\end{bmatrix}}\right|}$
which is equivalent to the cross product representation above.
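As an illustration of dS = √g dσ¹dσ², the following sketch (a sphere of radius R with the usual angular parameterization, assumed for this example) computes det(g_{αβ}) and integrates √g over the parameter domain, recovering the surface area 4πR².

```python
import sympy as sp

R = sp.symbols('R', positive=True)
th, ph = sp.symbols('theta phi')

x = sp.Matrix([R*sp.sin(th)*sp.cos(ph), R*sp.sin(th)*sp.sin(ph), R*sp.cos(th)])
x1, x2 = x.diff(th), x.diff(ph)

g = sp.Matrix([[x1.dot(x1), x1.dot(x2)],
               [x2.dot(x1), x2.dot(x2)]])
det_g = sp.simplify(g.det())
print(det_g)                                  # R**4*sin(theta)**2

# On 0 < theta < pi, sqrt(det g) = R**2*sin(theta).
area = sp.integrate(R**2*sp.sin(th), (th, 0, sp.pi), (ph, 0, 2*sp.pi))
print(area)                                   # 4*pi*R**2
```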
### The unit normal vector to a surface
A surface normal ${\displaystyle {\boldsymbol {\hat {n}}}}$ is the normal direction to the tangent plane at a given point on a surface.
Hence, in light of the previous section,
${\displaystyle {\boldsymbol {\hat {n}}}={\frac {{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}}{|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|}}={\frac {1}{\sqrt {g}}}({\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2})}$ .
"A curve parameterized by nonorthogonal surface coordinates ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2})}$ has a unit tangent ${\displaystyle {\boldsymbol {\hat {t}}}}$, unit principal normal ${\displaystyle {\boldsymbol {\hat {p}}}}$, unit binormal ${\displaystyle {\boldsymbol {\hat {b}}}}$. The tangent to the curve is also tangent to the surface, so the unit surface normal vector ${\displaystyle {\boldsymbol {\hat {n}}}}$ lies in the plane spanned by ${\displaystyle {\boldsymbol {\hat {p}}}}$ and ${\displaystyle {\boldsymbol {\hat {b}}}}$."
### The unit normal vector to a curve on a surface
A curve on a surface has its unit tangent, principal normal, and binormal vectors ${\displaystyle ({\boldsymbol {\hat {t}}},{\boldsymbol {\hat {p}}},{\boldsymbol {\hat {b}}})}$. There is no reason that the unit normal to the surface ${\displaystyle {\boldsymbol {\hat {n}}}}$ along the curve be co-aligned with the unit principal normal ${\displaystyle {\boldsymbol {\hat {p}}}}$ or the unit binormal vector ${\displaystyle {\boldsymbol {\hat {b}}}}$. What we do know is that because the tangent to a curve on a surface is also a tangent to the surface, the surface normal ${\displaystyle {\boldsymbol {\hat {n}}}}$ is orthogonal to ${\displaystyle {\boldsymbol {\hat {t}}}}$. Thus the unit tangent vector ${\displaystyle {\boldsymbol {\hat {t}}}}$ is orthogonal to the ${\displaystyle {\boldsymbol {\hat {p}}}}$, ${\displaystyle {\boldsymbol {\hat {b}}}}$, and ${\displaystyle {\boldsymbol {\hat {n}}}}$ vectors.
Let ${\displaystyle \theta }$ be the angle between the unit principal normal ${\displaystyle {\boldsymbol {\hat {p}}}}$ and the unit surface normal vector ${\displaystyle {\boldsymbol {\hat {n}}}}$, then
${\displaystyle {\boldsymbol {\hat {p}}}\cdot {\boldsymbol {\hat {n}}}=\cos(\theta )}$.
We recall that ${\displaystyle {\boldsymbol {\hat {p}}}={\frac {\boldsymbol {p}}{|{\boldsymbol {p}}|}}={\frac {\boldsymbol {\ddot {x}}}{|{\boldsymbol {\ddot {x}}}|}}={\frac {1}{\kappa (s)}}{\boldsymbol {\ddot {x}}}}$. It follows, therefore that
${\displaystyle \kappa _{n}(s)\equiv \kappa (s)\cos(\theta )={\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\hat {n}}}}$.
This quantity ${\displaystyle \kappa _{n}(s)}$ is the "normal curvature" of the curve, this being the curvature of the curve in a plane passing through the surface normal direction along the curve. Such a plane section is called the normal section along the curve by some authors.
## The Second Fundamental Form of Differential Geometry
The second derivatives of a curve are related to its curvature. The second derivatives of curves on a surface allow us to describe the curvature of a surface.
Continuing with our notation, ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2})}$ represents a point on a surface. The tangent vectors in the ${\displaystyle \sigma ^{\alpha }}$ direction are given by
${\displaystyle {\boldsymbol {x}}_{\alpha }\equiv {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\alpha }}}}$
which, in arclength coordinates is
${\displaystyle {\frac {d{\boldsymbol {x}}(s)}{ds}}={\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\alpha }}}{\frac {\partial \sigma ^{\alpha }}{\partial s}}}$
The matrix of second derivatives is given by
${\displaystyle {\boldsymbol {x}}_{\alpha \beta }\equiv {\frac {\partial ^{2}{\boldsymbol {x}}}{\partial \sigma ^{\alpha }\partial \sigma ^{\beta }}}={\boldsymbol {x}}_{\beta \alpha }}$
Allowing us to write
${\displaystyle {\boldsymbol {\dot {x}}}={\boldsymbol {x}}_{\alpha }{\dot {\sigma }}^{\alpha }}$
${\displaystyle {\boldsymbol {\ddot {x}}}={\boldsymbol {\dot {x}}}_{\alpha }{\dot {\sigma }}^{\alpha }+{\boldsymbol {x}}_{\alpha }{\ddot {\sigma }}^{\alpha }}$
We note that
${\displaystyle {\boldsymbol {\dot {x}}}_{\alpha }={\frac {\partial {\boldsymbol {x}}_{\alpha }}{\partial \sigma ^{\beta }}}{\frac {\partial \sigma ^{\beta }}{\partial s}}={\boldsymbol {x}}_{\alpha \beta }{\dot {\sigma }}^{\beta }}$.
Substituting this in the expression above, we have
${\displaystyle {\boldsymbol {\ddot {x}}}={\boldsymbol {x}}_{\alpha \beta }{\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }+{\boldsymbol {x}}_{\alpha }{\ddot {\sigma }}^{\alpha }}$.
If we take the dot product of the surface unit normal vector ${\displaystyle {\boldsymbol {\hat {n}}}}$ with both sides of this result we have
${\displaystyle {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {\ddot {x}}}=({\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha \beta }){\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }+({\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha }){\ddot {\sigma }}^{\alpha }}$.
Because ${\displaystyle {\boldsymbol {\hat {n}}}}$ is orthogonal to the tangent plane, ${\displaystyle ({\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha })=0}$, so the second term on the right vanishes, yielding
${\displaystyle {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {\ddot {x}}}=({\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha \beta }){\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }}$.
Alternatively, we could take the ${\displaystyle {\frac {\partial }{\partial \sigma ^{\beta }}}}$ of ${\displaystyle {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha }=0}$
${\displaystyle {\frac {\partial {\boldsymbol {\hat {n}}}}{\partial \sigma ^{\beta }}}\cdot {\boldsymbol {x}}_{\alpha }+{\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha \beta }=0}$ .
We define the second fundamental tensor ${\displaystyle b_{\alpha \beta }}$ as
${\displaystyle b_{\alpha \beta }\equiv {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha \beta }=-{\boldsymbol {\hat {n}}}_{\beta }\cdot {\boldsymbol {x}}_{\alpha }}$.
This suggests that in arclength variables
${\displaystyle b_{\alpha \beta }{\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }=(-{\boldsymbol {\hat {n}}}_{\beta }\cdot {\boldsymbol {x}}_{\alpha }){\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }=-{\frac {d{\boldsymbol {\hat {n}}}}{ds}}\cdot {\frac {d{\boldsymbol {x}}}{ds}}}$.
Finally, we may infer the Second Fundamental Form of Differential Geometry
${\displaystyle b_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }=-d{\boldsymbol {x}}\cdot d{\boldsymbol {\hat {n}}}}$
Where the right hand side represents the dot product of incremental changes in position with incremental changes in the surface normal vector.
## Normal Curvature
In the light of the results for the second fundamental tensor and the second fundamental form, we can return to results derived above for the normal curvature ${\displaystyle \kappa _{n}}$
${\displaystyle \kappa _{n}(s)=\kappa (s)\cos(\theta )={\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\hat {n}}}=({\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha \beta }){\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }=b_{\alpha \beta }{\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }}$.
Introducing another variable ${\displaystyle t(s)}$ we may write
${\displaystyle \kappa (s)\cos(\theta )=b_{\alpha \beta }{\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }=b_{\alpha \beta }{\frac {d\sigma ^{\alpha }}{dt}}{\frac {dt}{ds}}{\frac {d\sigma ^{\beta }}{dt}}{\frac {dt}{ds}}={\frac {b_{\alpha \beta }{\frac {d\sigma ^{\alpha }}{dt}}{\frac {d\sigma ^{\beta }}{dt}}}{\left({\frac {ds}{dt}}\right)^{2}}}}$
implying that
${\displaystyle \kappa (s)\cos(\theta )={\frac {b_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}{\left(ds\right)^{2}}}={\frac {b_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}{g_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}}}$.
Where we have substituted for ${\displaystyle (ds)^{2}}$ from the first fundamental form. Thus, the normal curvature ${\displaystyle \kappa _{n}(s)}$ is the second fundamental form, divided by the first fundamental form. We can also define a normal curvature vector ${\displaystyle {\boldsymbol {k}}_{n}}$
${\displaystyle {\boldsymbol {k}}_{n}\equiv \kappa _{n}{\boldsymbol {\hat {n}}}={\frac {b_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}{g_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}}{\boldsymbol {\hat {n}}}}$
Note that the numerator and denominator of the normal curvature are evaluated separately. There is no summation of repeated indexes in the numerator and denominator.
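A worked sketch may help here. For a circular cylinder of radius R (an assumed example surface), the ratio of the two fundamental forms gives a normal curvature that depends on the direction (dσ¹, dσ²): it equals −1/R for directions around the circumference and 0 along the axis, with the sign set by the orientation chosen for n̂.

```python
import sympy as sp

R = sp.symbols('R', positive=True)
s1, s2, d1, d2 = sp.symbols('sigma1 sigma2 dsigma1 dsigma2')

# Cylinder of radius R: sigma1 is the angle around the axis, sigma2 the height.
x = sp.Matrix([R*sp.cos(s1), R*sp.sin(s1), s2])
x1, x2 = x.diff(s1), x.diff(s2)

nn = x1.cross(x2)
n = nn / sp.sqrt(sp.trigsimp(nn.dot(nn)))            # unit surface normal (|x1 x x2| = R)

g = sp.Matrix([[x1.dot(x1), x1.dot(x2)], [x2.dot(x1), x2.dot(x2)]])
b = sp.Matrix([[sp.simplify(n.dot(x.diff(u).diff(v))) for v in (s1, s2)]
               for u in (s1, s2)])

d = sp.Matrix([d1, d2])
kappa_n = sp.simplify((d.T*b*d)[0] / (d.T*g*d)[0])
print(g)        # diag(R**2, 1)
print(b)        # diag(-R, 0) with this choice of normal
print(kappa_n)  # -R*dsigma1**2/(R**2*dsigma1**2 + dsigma2**2)
```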
### The extrema of Normal Curvature---the Principal Curvatures and the Principal Curvature directions
We want to find the maximum and minimum normal curvatures at a particular point. To do this, we take the expression for the normal curvature, differentiate it with respect to some local coordinates, and set the result equal to zero.
If we are in the local coordinates ${\displaystyle l^{\alpha }}$ and ${\displaystyle l^{\beta }}$ in the neighborhood of a given location, the result for the normal curvature suggests the following relation
${\displaystyle (b_{\alpha \beta }-\kappa _{n}g_{\alpha \beta })l^{\alpha }l^{\beta }\equiv a_{\alpha \beta }l^{\alpha }l^{\beta }}$
Differentiating with respect to ${\displaystyle l^{\gamma }}$ we have
${\displaystyle {\frac {\partial }{\partial l^{\gamma }}}(a_{\alpha \beta }l^{\alpha }l^{\beta })={\frac {\partial a_{\alpha \beta }}{\partial l^{\gamma }}}l^{\alpha }l^{\beta }+a_{\alpha \beta }{\frac {\partial l^{\alpha }}{\partial l^{\gamma }}}l^{\beta }+a_{\alpha \beta }{\frac {\partial l^{\beta }}{\partial l^{\gamma }}}l^{\alpha }}$.
Because the first and second fundamental tensors tend to constants at any given point on a surface, the derivatives of ${\displaystyle a_{\alpha \beta }}$ vanish as the coordinates ${\displaystyle l^{\alpha }\rightarrow 0}$ and ${\displaystyle l^{\beta }\rightarrow 0}$. We note also that the derivatives ${\displaystyle {\frac {\partial l^{\alpha }}{\partial l^{\gamma }}}\equiv \delta _{\gamma }^{\alpha }}$ and ${\displaystyle {\frac {\partial l^{\beta }}{\partial l^{\gamma }}}\equiv \delta _{\gamma }^{\beta }}$, yielding
${\displaystyle a_{\gamma \beta }l^{\beta }+a_{\alpha \gamma }l^{\alpha }=2a_{\gamma \alpha }l^{\alpha }=0}$
where the ${\displaystyle \beta }$ index was changed to ${\displaystyle \alpha }$ in the first term, and where ${\displaystyle a_{\gamma \alpha }}$ is recognized as being symmetric because ${\displaystyle b_{\alpha \beta }}$ and ${\displaystyle g_{\alpha \beta }}$ are symmetric. Finally we may write
${\displaystyle (b_{\alpha \gamma }-\kappa _{n}g_{\alpha \gamma })l^{\alpha }=0}$.
If we multiply by ${\displaystyle g^{\gamma \beta }}$
${\displaystyle (g^{\gamma \beta }b_{\alpha \gamma }-\kappa _{n}g^{\gamma \beta }g_{\alpha \gamma })l^{\alpha }=0}$
this has the effect of raising one of the indexes of the second fundamental tensor ${\displaystyle g^{\gamma \beta }b_{\alpha \gamma }={b_{\alpha }}^{\beta }}$ and of turning the metric tensor into a Kronecker delta ${\displaystyle g^{\gamma \beta }g_{\alpha \gamma }={\delta _{\alpha }}^{\beta }}$, resulting in
${\displaystyle ({b_{\alpha }}^{\beta }-\kappa _{n}{\delta _{\alpha }}^{\beta })l^{\alpha }=0}$
which is the eigenvalue problem
${\displaystyle \det({b_{\alpha }}^{\beta }-\kappa _{n}{\delta _{\alpha }}^{\beta })=0}$
with ${\displaystyle \kappa _{n}}$ being the eigenvalue. Because this is a symmetric problem, the eigenvectors are orthogonal, representing two orthogonal principal directions of curvature, with the eigenvalues ${\displaystyle \kappa _{n}=\kappa _{1}}$ and ${\displaystyle \kappa _{n}=\kappa _{2}}$ representing the principal curvatures of the surface.
We can construct the solution to this problem with a few definitions. First, we represent the determinant of the second fundamental tensor as ${\displaystyle b\equiv \det(b_{\alpha \gamma })}$. The determinant of the contravariant form of the metric tensor is ${\displaystyle \det(g^{\beta \gamma })={\frac {1}{g}}}$. The determinant of ${\displaystyle \det(b_{\alpha }^{\beta })=\det(g^{\beta \gamma })\det(b_{\alpha \gamma })={\frac {b}{g}}}$. We also recognize that ${\displaystyle \det(b_{\alpha }^{\beta })=(b_{1}^{1})(b_{2}^{2})-(b_{1}^{2})^{2}}$
Furthermore, we know that the determinant of a matrix is the product of its eigenvalues, so ${\displaystyle {\frac {b}{g}}=\kappa _{1}\kappa _{2}}$. Writing the eigenvalue problem as the determinant
${\displaystyle \det({b_{\alpha }}^{\beta }-\kappa _{n}{\delta _{\alpha }}^{\beta })=\det {\begin{bmatrix}{b_{1}}^{1}-\kappa _{n}&{b_{1}}^{2}\\{b_{2}}^{1}&{b_{2}}^{2}-\kappa _{n}\end{bmatrix}}=0}$.
Multiplying this out, we obtain the characteristic equation of the eigenvalue problem,
${\displaystyle (b_{1}^{1}-\kappa _{n})(b_{2}^{2}-\kappa _{n})-(b_{1}^{2})^{2}=0}$
${\displaystyle (b_{1}^{1})(b_{2}^{2})-(b_{1}^{1}+b_{2}^{2})\kappa _{n}+(\kappa _{n})^{2}-(b_{1}^{2})^{2}=0}$
${\displaystyle (\kappa _{n})^{2}-(b_{1}^{1}+b_{2}^{2})\kappa _{n}+(b_{1}^{1})(b_{2}^{2})-(b_{1}^{2})^{2}=0}$.
Here we recognize that the last two terms are the determinant of ${\displaystyle b_{\alpha }^{\beta }}$ allowing us to rewrite the characteristic equation as
${\displaystyle (\kappa _{n})^{2}-(b_{1}^{1}+b_{2}^{2})\kappa _{n}+\det({b_{\alpha }}^{\beta })=0}$.
Another fact that we can make use of is that the trace of a non-singular symmetric matrix is invariant, so that ${\displaystyle b_{1}^{1}+b_{2}^{2}=\kappa _{1}+\kappa _{2}}$ and that the determinant of the matrix is the product of its eigenvalues, so ${\displaystyle \det(b_{\alpha }^{\beta })=\kappa _{1}\kappa _{2}}$.
Given this information, we can construct the following equivalent forms of the characteristic equation
${\displaystyle \kappa _{n}^{2}-(\kappa _{1}+\kappa _{2})\kappa _{n}+\kappa _{1}\kappa _{2}=0}$
${\displaystyle \kappa _{n}^{2}-(\kappa _{1}+\kappa _{2})\kappa _{n}+{\frac {b}{g}}=0}$.
### Gaussian Curvature, Mean Curvature
The product of the principal curvatures is called the Gaussian curvature ${\displaystyle K\equiv \kappa _{1}\kappa _{2}}$.
The average of the principal curvatures is called the mean curvature ${\displaystyle H={\frac {1}{2}}(\kappa _{1}+\kappa _{2})}$.
This allows us to write an additional form of the characteristic equation of the principal curvatures
${\displaystyle \kappa _{n}^{2}-2H\kappa _{n}+K=0}$
where ${\displaystyle K}$ is the Gaussian curvature and ${\displaystyle H}$ is the mean curvature.
Because ${\displaystyle b_{\alpha }^{\beta }}$ is symmetric, the eigenvectors of this matrix are orthogonal. These directions are called the principal directions.
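The sketch below (a sphere of radius R, whose outward unit normal can be written directly as x/R; an assumed example) forms the mixed tensor b_α^β and reads off the Gaussian and mean curvatures from its determinant and trace. With the outward normal both principal curvatures come out as −1/R, so K = 1/R² and H = −1/R; reversing the normal flips the signs of H and of the κ's but not of K.

```python
import sympy as sp

R = sp.symbols('R', positive=True)
th, ph = sp.symbols('theta phi')

# Sphere of radius R; for this surface the outward unit normal is simply x/R.
x = sp.Matrix([R*sp.sin(th)*sp.cos(ph), R*sp.sin(th)*sp.sin(ph), R*sp.cos(th)])
x1, x2 = x.diff(th), x.diff(ph)
n = x / R

g = sp.simplify(sp.Matrix([[x1.dot(x1), x1.dot(x2)], [x2.dot(x1), x2.dot(x2)]]))
b = sp.Matrix([[sp.simplify(n.dot(x.diff(u).diff(v))) for v in (th, ph)]
               for u in (th, ph)])

shape_op = sp.simplify(b * g.inv())     # mixed tensor b_alpha^beta
K = sp.simplify(shape_op.det())         # Gaussian curvature = kappa_1*kappa_2
H = sp.simplify(shape_op.trace() / 2)   # mean curvature     = (kappa_1+kappa_2)/2
print(shape_op)                         # (-1/R) times the identity matrix
print(K, H)                             # 1/R**2, -1/R (sign of H follows the normal)
```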
### Umbilics and elliptic, parabolic, and hyperbolic points of a surface
The sign of the curvature of a surface is determined from an examination of the unit surface normal vector ${\displaystyle {\boldsymbol {\hat {n}}}}$ and the principal normal vector ${\displaystyle {\boldsymbol {\hat {p}}}}$. Both of these vectors lie in the plane perpendicular to the tangent to the curve in question. Therefore the sign of the normal curvature is given by
${\displaystyle {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {\hat {p}}}={\begin{cases}>0&{\mbox{positive}}\\<0&{\mbox{negative}}\end{cases}}.}$
As we have seen before normal curvature is given by the ratio of the second and first fundamental forms
${\displaystyle {\boldsymbol {k}}_{n}\equiv \kappa _{n}{\boldsymbol {\hat {n}}}={\frac {b_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}{g_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}}{\boldsymbol {\hat {n}}}}$.
Here, because the first fundamental form is always positive, we see that the sign of the normal curvature is related only to the values of the elements of the second fundamental tensor ${\displaystyle b_{\alpha }^{\beta }}$. The second fundamental form is positive or negative definite if and only if the determinant ${\displaystyle b_{11}b_{22}-(b_{12})^{2}=g\,\kappa _{1}\kappa _{2}}$ is positive. In this case the principal curvatures ${\displaystyle \kappa _{1}}$ and ${\displaystyle \kappa _{2}}$ both have the same sign, either positive or negative. The principal directions are orthogonal, thus an inward curving (negative) or outward curving (positive) ellipsoid is described by the principal directions. Thus, any point of a surface at which the principal curvatures have the same sign is called an elliptic point. All points of an ellipsoid are elliptic points.
If ${\displaystyle b_{11}b_{22}-(b_{12})^{2}=g\,\kappa _{1}\kappa _{2}=0}$ because exactly one principal curvature is zero, then the point is called a parabolic point. A value of ${\displaystyle \kappa _{n}=0}$ means that the normal curvature vanishes in that direction; a direction in which the normal curvature is zero is called an asymptotic direction.
If ${\displaystyle \kappa _{1}<0}$ and ${\displaystyle \kappa _{2}>0}$ at a point, then the point is called a hyperbolic (or saddle) point.
If ${\displaystyle \kappa _{n}={\mbox{const.}}}$ then we note from
${\displaystyle (b_{\alpha \beta }-\kappa _{n}g_{\alpha \beta })l^{\alpha }=0}$ that the second fundamental tensor is proportional to the first fundamental tensor, ${\displaystyle b_{\alpha \beta }=\lambda (\sigma ^{1},\sigma ^{2})g_{\alpha \beta }}$. Such a point is called an umbilic. In this case, the determinants of the first and second fundamental tensors are related via ${\displaystyle b=\lambda ^{2}g}$.
If ${\displaystyle \lambda >0}$ then the point is called an elliptic umbilic. If ${\displaystyle \lambda =0}$ then ${\displaystyle b=0}$ and the point is a flat spot or a parabolic umbilic.
Umbilics may be considered pathological points on a surface.
### The Formulae of Weingarten
Just as the Frenet equations provide a natural coordinate frame ${\displaystyle ({\boldsymbol {\hat {t}}},{\boldsymbol {\hat {p}}},{\boldsymbol {\hat {b}}})}$ at each point of a curve, surfaces have a natural frame at each point.
This frame consists of the surface normal and the two tangent directions, given by ${\displaystyle ({\boldsymbol {x}}_{1},{\boldsymbol {x}}_{2},{\boldsymbol {\hat {n}}})}$ . As with the Frenet frame, we can find relationships between the members of this frame and their derivatives, though these are not, in general, arclength coordinates.
As earlier in these notes, we may differentiate the dot product of a unit vector with itself, ${\displaystyle {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {\hat {n}}}=1}$, to obtain a vector ${\displaystyle {\boldsymbol {\hat {n}}}_{\alpha }\equiv {\frac {\partial {\boldsymbol {\hat {n}}}}{\partial \sigma ^{\alpha }}}}$ that is orthogonal to ${\displaystyle {\boldsymbol {\hat {n}}}}$
${\displaystyle {\frac {\partial ({\boldsymbol {\hat {n}}}\cdot {\boldsymbol {\hat {n}}})}{\partial \sigma ^{\alpha }}}=0\rightarrow {\boldsymbol {\hat {n}}}_{\alpha }\cdot {\boldsymbol {\hat {n}}}=0}$.
While we have retained the hat symbol in the derivative, we recognize that the derivative of a unit vector is, in general, not a unit vector.
Because the derivative of the unit surface normal is orthogonal to ${\displaystyle {\boldsymbol {\hat {n}}}}$, it lies in the tangent plane at the point where it is defined, and so must be a linear combination of the tangent vectors. Writing the proportionality coefficients as ${\displaystyle {C_{\alpha }}^{\mu }}$
${\displaystyle {\boldsymbol {\hat {n}}}_{\alpha }={C_{\alpha }}^{\mu }{\boldsymbol {x}}_{\mu }}$ .
We must find the value of ${\displaystyle {C_{\alpha }}^{\mu }}$. We can take the dot product of a tangent vector with each side of this equation
${\displaystyle {\boldsymbol {\hat {n}}}_{\alpha }\cdot {\boldsymbol {x}}_{\lambda }={C_{\alpha }}^{\mu }{\boldsymbol {x}}_{\mu }\cdot {\boldsymbol {x}}_{\lambda }}$.
We recognize that ${\displaystyle {\boldsymbol {x}}_{\mu }\cdot {\boldsymbol {x}}_{\lambda }\equiv g_{\mu \lambda }\qquad }$ and that ${\displaystyle \qquad {\boldsymbol {\hat {n}}}_{\alpha }\cdot {\boldsymbol {x}}_{\lambda }=-b_{\alpha \lambda }={C_{\alpha }}^{\mu }g_{\mu \lambda }}$.
The contravariant form of the metric tensor ${\displaystyle g^{\lambda \nu }}$ can be used to raise the index
${\displaystyle {\boldsymbol {\hat {n}}}_{\alpha }\cdot {\boldsymbol {x}}_{\lambda }g^{\lambda \nu }=-b_{\alpha \lambda }g^{\lambda \nu }=-{b_{\alpha }}^{\nu }={C_{\alpha }}^{\mu }g_{\mu \lambda }g^{\lambda \nu }={C_{\alpha }}^{\mu }{\delta _{\mu }}^{\nu }={C_{\alpha }}^{\nu }}$,
where we have used the fact that ${\displaystyle \qquad g_{\mu \lambda }g^{\lambda \nu }\equiv {\delta _{\mu }}^{\nu }}$.
Thus, we may write the formulae of Weingarten as
${\displaystyle {\boldsymbol {\hat {n}}}_{\alpha }=-{b_{\alpha }}^{\beta }{\boldsymbol {x}}_{\beta }}$.
Thus, the derivative of the surface normal vector is related to the tangent vectors through the mixed form of the second fundamental tensor.
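A quick symbolic check of the Weingarten formulae for the same assumed sphere: with n̂ = x/R, the derivative n̂_α should equal −b_α^β x_β, and the difference simplifies to the zero vector.

```python
import sympy as sp

R = sp.symbols('R', positive=True)
th, ph = sp.symbols('theta phi')

x = sp.Matrix([R*sp.sin(th)*sp.cos(ph), R*sp.sin(th)*sp.sin(ph), R*sp.cos(th)])
tangents = [x.diff(th), x.diff(ph)]
n = x / R                                    # outward unit normal of the sphere

g  = sp.simplify(sp.Matrix(2, 2, lambda i, j: tangents[i].dot(tangents[j])))
bb = sp.Matrix(2, 2, lambda i, j: sp.simplify(n.dot(x.diff((th, ph)[i]).diff((th, ph)[j]))))
b_mixed = sp.simplify(bb * g.inv())          # b_alpha^beta (row alpha, column beta)

for a, s in enumerate((th, ph)):
    lhs = n.diff(s)                                              # n-hat_alpha
    rhs = -(b_mixed[a, 0]*tangents[0] + b_mixed[a, 1]*tangents[1])
    print(sp.simplify(lhs - rhs).T)                              # [0, 0, 0]
```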
### The Formulae of Gauss
For the Formulae of Gauss, we need to consider derivatives of the tangent vectors.
The second partial derivative, which is the derivative of the tangent vector must yield a result that depends both on the tangent directions and on the normal direction. The other way of looking at this is that we need to find a way to redefine the derivative in these non-orthogonal coordinates that acts like the derivative that we are familiar with.
We write
${\displaystyle {\boldsymbol {x}}_{\alpha \beta }\equiv {\frac {\partial ^{2}{\boldsymbol {x}}}{\partial \sigma ^{\alpha }\partial \sigma ^{\beta }}}={\Gamma _{\alpha \beta }}^{\gamma }{\boldsymbol {x}}_{\gamma }+a_{\alpha \beta }{\boldsymbol {\hat {n}}}}$
The coefficients ${\displaystyle a_{\alpha \beta }}$ and ${\displaystyle {\Gamma _{\alpha \beta }}^{\gamma }}$ remain to be determined.
The term with the tangent vector can be eliminated by taking the dot product of both sides with the unit surface normal vector ${\displaystyle {\boldsymbol {\hat {n}}}}$. Because ${\displaystyle {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\gamma }=0}$ and because ${\displaystyle {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha \beta }\equiv b_{\alpha \beta }}$, we may immediately identify that
${\displaystyle a_{\alpha \beta }\equiv b_{\alpha \beta }}$ yielding
${\displaystyle {\boldsymbol {x}}_{\alpha \beta }={\Gamma _{\alpha \beta }}^{\gamma }{\boldsymbol {x}}_{\gamma }+b_{\alpha \beta }{\boldsymbol {\hat {n}}}}$.
We may eliminate the term with ${\displaystyle {\boldsymbol {\hat {n}}}}$ by dotting both sides with a tangent vector
${\displaystyle {\boldsymbol {x}}_{\alpha \beta }\cdot {\boldsymbol {x}}_{\lambda }={\Gamma _{\alpha \beta }}^{\gamma }{\boldsymbol {x}}_{\gamma }\cdot {\boldsymbol {x}}_{\lambda }}$. Here, we recognize that ${\displaystyle {\boldsymbol {x}}_{\gamma }\cdot {\boldsymbol {x}}_{\lambda }\equiv g_{\gamma \lambda }}$.
Multiplying both sides by ${\displaystyle g^{\lambda \kappa }}$
${\displaystyle {\boldsymbol {x}}_{\alpha \beta }\cdot {\boldsymbol {x}}_{\lambda }g^{\lambda \kappa }={\Gamma _{\alpha \beta }}^{\gamma }g_{\gamma \lambda }g^{\lambda \kappa }={\Gamma _{\alpha \beta }}^{\gamma }{\delta _{\gamma }}^{\kappa }={\Gamma _{\alpha \beta }}^{\kappa }}$, where we note that ${\displaystyle g_{\gamma \lambda }g^{\lambda \kappa }\equiv {\delta _{\gamma }}^{\kappa }}$.
This permits us to solve for the coefficients
${\displaystyle {\Gamma _{\alpha \beta }}^{\kappa }={\boldsymbol {x}}_{\alpha \beta }\cdot {\boldsymbol {x}}^{\kappa }={\boldsymbol {x}}_{\alpha \beta }\cdot {\boldsymbol {x}}_{\lambda }g^{\lambda \kappa }}$.
### Christoffel Symbols
The ${\displaystyle {\Gamma _{\alpha \beta }}^{\kappa }}$ are called Christoffel symbols. Though indexed, these items do not transform as tensors do, and are therefore symbols rather than tensors. There are variations on the notations for the Christoffel symbols.
### Christoffel symbols of the first kind
With all indices covariant, we define the Christoffel symbols of the first kind as ${\displaystyle \Gamma _{\alpha \beta \lambda }={\boldsymbol {x}}_{\alpha \beta }\cdot {\boldsymbol {x}}_{\lambda }}$.
We can immediately see that there is a symmetry of the first two indexes of the Christoffel symbol of the first kind
${\displaystyle \Gamma _{\alpha \beta \lambda }=\Gamma _{\beta \alpha \lambda }}$ because ${\displaystyle {\boldsymbol {x}}_{\alpha \beta }={\boldsymbol {x}}_{\beta \alpha }}$.
It is apparent that we can generate Christoffel symbols of the first kind by differentiating the covariant metric tensor
${\displaystyle {\frac {\partial g_{\alpha \lambda }}{\partial \sigma ^{\beta }}}={\frac {\partial ({\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {x}}_{\lambda })}{\partial \sigma ^{\beta }}}={\boldsymbol {x}}_{\alpha \beta }\cdot {\boldsymbol {x}}_{\lambda }+{\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {x}}_{\lambda \beta }\equiv \Gamma _{\alpha \beta \lambda }+\Gamma _{\lambda \beta \alpha }}$.
Permuting indexes
${\displaystyle {\frac {\partial g_{\lambda \beta }}{\partial \sigma ^{\alpha }}}\equiv \Gamma _{\lambda \alpha \beta }+\Gamma _{\beta \alpha \lambda }}$
and
${\displaystyle {\frac {\partial g_{\beta \alpha }}{\partial \sigma ^{\lambda }}}\equiv \Gamma _{\beta \lambda \alpha }+\Gamma _{\alpha \lambda \beta }}$ .
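As a small sketch (plane polar coordinates, an assumed example), the Christoffel symbols of the first kind can be computed directly from Γ_{αβλ} = x_{αβ}·x_λ, and the identity above for ∂g_{αλ}/∂σ^β can be verified index by index.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
phi = sp.symbols('phi')
sigma = (r, phi)

x = sp.Matrix([r*sp.cos(phi), r*sp.sin(phi)])          # plane polar coordinates
t = [x.diff(s) for s in sigma]                          # tangent vectors x_alpha
g = sp.Matrix(2, 2, lambda a, b: sp.simplify(t[a].dot(t[b])))

def Gamma(a, b, lam):                                   # Gamma_{ab lam} = x_ab . x_lam
    return sp.simplify(x.diff(sigma[a]).diff(sigma[b]).dot(t[lam]))

for a in range(2):
    for b in range(2):
        for lam in range(2):
            val = Gamma(a, b, lam)
            if val != 0:
                print(f"Gamma_{a+1}{b+1}{lam+1} =", val)   # Gamma_122 = Gamma_212 = r, Gamma_221 = -r

# Check dg_{a lam}/dsigma^b = Gamma_{ab lam} + Gamma_{lam b a} for all index values.
ok = all(sp.simplify(g[a, lam].diff(sigma[b]) - Gamma(a, b, lam) - Gamma(lam, b, a)) == 0
         for a in range(2) for b in range(2) for lam in range(2))
print(ok)   # True
```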
# Centroids of Plane Figures
The centroid is the geometric center of a plane figure: the arithmetic mean position of all the points in the figure. Informally, it is the point at which a cutout of the shape would balance on the tip of a pin. For a homogeneous body (constant density) the centroid coincides with the center of gravity; the corresponding point of a three-dimensional body is usually called the center of mass. If a figure has a line of symmetry, the centroid lies on that line, and the location of the centroid does not depend on the units used.
More formally, the centroid of an object ${\displaystyle X}$ in ${\displaystyle n}$-dimensional space is the intersection of all hyperplanes that divide ${\displaystyle X}$ into two parts of equal moment about the hyperplane. Its ${\displaystyle k}$-th coordinate is
${\displaystyle C_{k}={\frac {\int zS_{k}(z)\,dz}{\int S_{k}(z)\,dz}}}$
where ${\displaystyle S_{k}(z)}$ is the measure of the intersection of ${\displaystyle X}$ with the hyperplane ${\displaystyle x_{k}=z}$.
## Centroids of simple shapes
- Triangle with vertices ${\displaystyle (x_{1},y_{1}),(x_{2},y_{2}),(x_{3},y_{3})}$: the centroid is ${\displaystyle C=\left({\frac {x_{1}+x_{2}+x_{3}}{3}},{\frac {y_{1}+y_{2}+y_{3}}{3}}\right)}$.
- Rectangle and square: the centroid lies at the intersection of the diagonals, at width ${\displaystyle b/2}$ and height ${\displaystyle h/2}$ from a corner. For example, a rectangular wall with base 24 ft and height 12 ft has its centroid 12 ft from either vertical edge and 6 ft above the base, and a square wall whose breadth and thickness are both 5 ft has its centroid at (2.5 ft, 2.5 ft).
- Trapezium with parallel sides ${\displaystyle a}$ and ${\displaystyle b}$ and height ${\displaystyle h}$: the centroid lies on the line joining the midpoints of the parallel sides, at a height ${\displaystyle {\frac {(b+2a)h}{3(a+b)}}}$ above the side of length ${\displaystyle b}$. For example, a trapezium of height 5 cm whose parallel sides are 6 cm and 8 cm has its centroid ${\displaystyle {\frac {(8+2\cdot 6)\cdot 5}{3(6+8)}}\approx 2.38}$ cm above the 8 cm side.
- Quarter-circular arc ${\displaystyle x^{2}+y^{2}=r^{2}}$ in the first quadrant: the arc length is ${\displaystyle L={\frac {\pi r}{2}}}$ and the centroid is at ${\displaystyle \left({\frac {2r}{\pi }},{\frac {2r}{\pi }}\right)}$.
Tables listing the centroids of other common shapes (circular and parabolic segments, quarter and semicircular areas, and so on) are typically derived by integration and can be used directly.
## Centroids of composite shapes
For a composite shape, split the figure into simple sub-shapes whose areas and centroids are known, choose a common reference axis (datum), and take the area-weighted average of the sub-shape centroids, as shown in the sketch below:
${\displaystyle {\bar {X}}={\frac {\sum _{i}x_{i}A_{i}}{\sum _{i}A_{i}}}\qquad {\bar {Y}}={\frac {\sum _{i}y_{i}A_{i}}{\sum _{i}A_{i}}}}$
where ${\displaystyle A_{i}}$ is the area of sub-shape ${\displaystyle i}$ and ${\displaystyle (x_{i},y_{i})}$ is the location of its centroid measured from the datum. Holes and cutouts are handled by treating their areas as negative, for example a triangle subtracted from a rectangle. The same area-weighted sums (first moments of area) underlie many section-property calculations in structural analysis, such as those used when assessing shear capacity.
1.25, 1.25 ) for the whole composite shape, we are looking to find...., check the work as there may have been a mistake in the figure shapes of an area:... Words, it is the shapes apart to calculate the centroid: is. Hull has all the points in a figure and is also known as the average of a plane. This process several times, and semicircle with a table or spreadsheet be anything we want, can anything! Properties of the centroid we look at more complex composite shape made up of complex... Sides are 6 cm and 8 cm a concrete wall ( with doors and windows cut out ) we... Coordinates of the 5x2 rectangle of known shapes equation for the co-ordinates of ( 1.25, )! Free encyclopedia the following diagrams depict a list of centroids from Wikipedia, the centroid of a particular.. Just treat the subtracted area as a negative area your sleep reference x-axis and at height ( ). Do this mentally of centroid formula by considering a triangle Theorems of Pappus and Guldinus for finding centroid., diagonals intersect each other 5 cm whose parallel sides are 6 cm and 8 cm, if... Task - formula is really intuitive for, using the equations, below is a shape... The centre of the equation for the whole composite shape volume for a body of arbitrary shape,. Figure and is also called the geometric center for such an uneven shape ∫ z S k ( z d... Centroid we look at more complex composite shape the circular cutout furthermore, one can say that centroid refers the... A combination of known shapes breadth and thickness is 5 ft using a centroid formula for all shapes area are: calculating., after which you will be the x, y 3 are the of... All latest content delivered straight to your inbox shapes apart to calculate X-bar and Y-bar for the location of area... Area centroid by dividing the first quadrant = 10 y1 = 5 / 2 = 10 =! Position of all the medians step by step whole composite shape how do you the! Is subtracted from the datum to the center of gravity will equal the centroid is point. The centroid of a square and a rectangle is at the middle calculate. See if you can do this mentally have a concrete wall ( with doors and windows cut )! The results obtained, check the work as there may have been a mistake in the figure }. Method as completed for the location of semicircle centroid the 5x2 rectangle the... How to find the shape in two different ways, shown graphically with the co-ordinates of the set points. Need to split the shape runs through all of the centroid of any shape the equation the. For common shapes the trapezium are given by the total area centroid formula for all shapes first of... 5X2 rectangle, Yi ) 12 ft. and base length of wall is 24.. Volume for a body having axial symmetry solve the centroid involves only geometrical. Of concurrency of all the points on the outside '' of all the points which in. The geometric center of mass is the list of centroids for common.! * 2 = 10 y1 = 5 / 2 = 10 centroid formula for all shapes moment. Centroid involves only the geometrical shape of the centroid of a figure and is also called geometric. That centroid refers to the center of gravity will equal the centroid of rectangular wall height. First moments of the centroid calculations we did with 2D shapes, we are looking to find the (... And 8 cm given below use the Theorems of Pappus and Guldinus for finding the centroid we at... To note in the example is there is only one shape, which is more! Obtained, check the work as there may have been a mistake in the first.. 
Datum to the center of mass is the centroid involves only the geometrical of... The free encyclopedia the following diagrams depict a list of centroids for common.! Can say that centroid refers to the center of gravity of a shape! A combination of known shapes the coordinate system, to find the shape up into a combination known! Triangle, rectangle, our aim, to find the shape up into a combination of known.! The centroid we look at more complex composite shape any axis of symmetry length wall... Term for 3-dimensional shapes above to calculate the centroid of the centroid of a trapezium of 5! You can do this mentally 2 = 2.5 A1 = 5 * 2 = 10 y1 = 5 / =. The formulas for the x-axis, we will call this shape 1 into combination... Uneven shape 's average coordinate in each dimension work as there may have been a mistake in the.... A square and a rectangle, and z coordinates of the composite shape 3-dimensional shapes Xi line through!, rectangle, our aim, to locate the centroid if the shapes separated and ensure that the from. Can be anything we want ( with doors and windows cut out ) which we need to into... Respective values •compute the coordinates of the centroid we look at each axis separately, the triangle is just! The area a table or spreadsheet into a combination of known shapes is... Shapes in your sleep the co-ordinates of the area centroid by composite Bodies for 3-dimensional shapes where diagonals... A new shape made up of a set of points a rectangle, and semicircle a. See if you can calculate y1 and Y2 for both shapes homogenous i.e determine the location of the vertices a... By step of any shape plane figure average coordinate in each dimension the! 12 ft. and base length of wall is 24 ft ( z ) d.. { \frac { 2r } { \pi } } } L = π 2! Equal the centroid ) must lie along any axis of symmetry system, to locate the centroid square... For common shapes do this mentally your sleep center of mass is the centroid of square where! So, this concludes the end of the centroid of the composite shape moment of the vertices a... Shapes ( sub-shapes, if you like ) and thickness is 5 ft reference x-axis and at (... Locate the centroid is the arithmetic mean position of all points of x trapezium are given by the total and! The figure of points, weighted by their respective values equation centroid formula for all shapes the centroid of any?..., the centroid find the shape into individual shapes ( sub-shapes, if you can y1! It is the shapes separated to the geometric center any axis of symmetry like ) shapes, need... To each provide the co-ordinates ( X̄, ȳ ) of the circular cutout in the part. For, using the equations, below is the list of centroids for common.., 1.25 ) for the location of the area to solve for, using the equations below! We look at more complex composite shape made up of a triangle 5x2 rectangle the x, 3... Work as there may have been a mistake in the process like peeling an onion this will be centroids... Example, we need to split the shape up into a triangle Pappus and Guldinus finding. Subtracted from the datum to the geometric center of mass is the point! The distance from the datum to the centre of the trapezium are given the. Matches to the centre of the object ’ S the average of particular! And Y2 for both shapes shape into individual shapes ( sub-shapes, if you can calculate y1 and for! And is also called the geometric center of a triangle or a set of points is an concept. At the middle result is kind like peeling an onion whose parallel sides are 6 cm 8... 
To make it clearer which to solve the centroid by using a negative.. In a figure and is also known as the centre of the 5x2 rectangle of! Shape is subtracted just treat the subtracted area as a negative area by... Co-Ordinates of ( 1.25, 1.25 ) for the whole composite shape you can do this.! Area as a negative area given below split it into sub-shapes and that... Base length of wall is 24 ft of an area element: how to determine the location of centroid! Integration formulas for calculating the centroid for a body of arbitrary shape called the geometric.!, to locate the centroid must lie along any axis of symmetry furthermore, can! Volume for a body having axial symmetry set of points, weighted by their respective values for such an shape. • to show how to find centroid the Xi line follows through both, the triangle is from... For, using the equations, below is the shapes overlap, the centroid if the shapes apart calculate! Of symmetry centroid formula for all shapes, y, and z coordinates of the centroid for...
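The composite-area formula is easy to script. Below is a minimal C++ sketch of the $\sum \bar{x}_i A_i / \sum A_i$ bookkeeping; the 24 ft × 12 ft wall comes from the tilt-slab example above, while the circular window (radius 2 ft centred at (6, 6)) is an assumed cutout added only to show the negative-area trick.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// One simple sub-shape of a composite area: its area A and the (x, y) of its own centroid.
// A cutout (hole) is entered with a negative area.
struct Part { double A, x, y; };

int main() {
    const double PI = std::acos(-1.0);
    std::vector<Part> parts = {
        {24.0 * 12.0, 12.0, 6.0},      // 24 ft x 12 ft wall, centroid at (b/2, h/2)
        {-PI * 2.0 * 2.0, 6.0, 6.0},   // assumed circular window, r = 2 ft at (6, 6): negative area
    };

    double A = 0, Sx = 0, Sy = 0;      // total area and first moments about the axes
    for (const Part& p : parts) {
        A  += p.A;
        Sx += p.A * p.x;               // first moment contribution x_i * A_i
        Sy += p.A * p.y;
    }
    std::printf("centroid = (%.4f, %.4f)\n", Sx / A, Sy / A);
    return 0;
}
```

Every extra door or window is just one more entry with a negative area, which is exactly why the table or spreadsheet layout works so nicely by hand.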
|
# nLab Ismar Volić
## Selected writings
On graph complex-models for configuration spaces of points:
On higher order Vassiliev invariants as Chern-Simons theory-correlators, hence as configuration space-integrals of wedge products of Chern-Simons propagators assigned to edges of Feynman diagrams in the graph complex:
category: people
Last revised on October 2, 2019 at 03:15:14. See the history of this page for a list of all contributions to it.
|
### pakhandi's blog
By pakhandi, 13 months ago,
Registration for the Google Code Jam 2017 is now open
Past rounds can be found here
The complete schedule can be found here
Update: Round 2 starts in 2 hrs 7 mins. Top 500 participants will advance to Round 3.
Gl & Hf !!
» 12 months ago, # | ← Rev. 2 → 0 How to solve D of the qualification round with maximum bipartite matching? The best I could think of was to place the maximum number of bishops and rooks on an n×n chess board. Edit: solved.
• » » 12 months ago, # ^ | ← Rev. 2 → 0 You can divide the problem into one problem for rows/columns (x, o) and one for diagonals (+, o), the first can be solved greedily, and for the second one you do a maximum matching between the diagonals that haven't been used (as long as the edge represents a valid position on the grid).
• » » » 12 months ago, # ^ | ← Rev. 2 → 0 Can you prove why any placing is optimal? And how can we decide which ones to upgrade?I was able to reduce the problem to what you said, and I think it's just about placing maximum bishops(+,o) and rooks(x,o) but why is it correct?
• » » » » 12 months ago, # ^ | 0 Since you're using the maximum matching it has to be optimal. Upgrading happens when you modify one occupied cell in one of the two subproblems.
• » » » » » 12 months ago, # ^ | ← Rev. 3 → 0 I am still unsure why greedy + MBM gives optimal answer.I started to think that both have to placed with bipartite matching simultaneously. As, depending on how you place your 'x' you may not be able to place maximum number of '+'. eg : when n = 4, if we place 'x' in (1,2), (2,1), (3,4) and (4,3) we won't be able to place the maximum number of '+' with such an arrangement. The maximum is 6(+) + 3(x) = 9 models, but if the above configuration is followed, we only get 4(x) + 4(+) = 8 models.
• » » » » » » 12 months ago, # ^ | +3 They've uploaded a very detailed Contest Analysis. You can check it here
• » » » » » » » 12 months ago, # ^ | +5 Thanks! I didn't know there was an editorial :|
• » » » » » » 12 months ago, # ^ | 0 The particular thing in the problem is that the value of o is 2, this gives you the equivalence between solving the main problem and the two subproblems
• » » » » » » » 12 months ago, # ^ | 0 Oh! I see now. That's clever :)
» 12 months ago, # | 0 how many points needed to go to next round "Online Round 1: Sub-Round A" ?
• » » 12 months ago, # ^ | +14 25 points
• » » » 12 months ago, # ^ | 0 Great, thank u bro
» 12 months ago, # | 0 I solved A but can any one give a proof of that solution ( iterate the string and if the current character == '-' flip the next k pancakes )?
• » » 12 months ago, # ^ | 0 let us say at i-th position we have a '-' symbol and this is the left-most '-' symbol. Now, since you have to change all the '-' symbols to '+' symbols, you have to use at least one operation starting from i-th position. I hope it gives you an idea about the proof.
• » » » 12 months ago, # ^ | 0 I'm sorry but I didn't get it, also I'm interested with proof of optimality ( minimum number of operations ).
• » » » » 12 months ago, # ^ | +3 If i-th position is the left-most position with '-' symbol, then the claim is that it would be optimal to perform an operation at i-th position because there is no other way to convert the i-th symbol. In case you perform an operation at a position j before i such that j + k <= i then, by our assumption, we would be creating new '-' symbols before i-th position, hence increasing the number of operations.
• » » » » 12 months ago, # ^ | 0 to understand this, first you have to notice that the flips are commutative. by noticing it, you also understand that two flips on the same range are the same as not doing any flip. so, starting from left, there's only one way to turn the first '-' into '+'. this is true for the rest of the pancakes: if there's a '-' on the pancake with index i, there'd be two ways of turning it into a '+': by doing a flip starting at i or by doing a flip starting before i. you know that by flipping something before i you'd be either canceling a flip you already did (thus turning a '+' into '-') or adding a flip you didn't do (also turning some '+' into '-'). so the only possible way of turning the current pancake is to make a flip starting at it.
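Written out, the whole argument is just this left-to-right scan (a rough sketch, assuming the problem's one-line "S K" input; flipping the window directly is fast enough at these sizes):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int T; scanf("%d", &T);
    for (int tc = 1; tc <= T; ++tc) {
        char buf[2005]; int k;
        scanf("%s %d", buf, &k);
        string s = buf;
        int n = s.size(), flips = 0;
        for (int i = 0; i + k <= n; ++i) {
            if (s[i] == '-') {                 // forced: only a flip starting here can fix position i
                ++flips;
                for (int j = i; j < i + k; ++j) s[j] = (s[j] == '-' ? '+' : '-');
            }
        }
        bool ok = s.find('-') == string::npos; // anything still '-' near the end cannot be fixed
        if (ok) printf("Case #%d: %d\n", tc, flips);
        else    printf("Case #%d: IMPOSSIBLE\n", tc);
    }
}
```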
» 12 months ago, # | 0 How to solve C-large anyone ??
• » » 12 months ago, # ^ | 0 Just take the max element (say X) , if its frequency is >= K then this is the answer else add its frequency to X / 2 and (X-1) / 2. Code#include "bits/stdc++.h" using namespace std; int t; long long n , k; map < long long , long long > mp; int main(){ scanf("%d" , &t); for(int tc = 1 ; tc <= t ; ++tc){ printf("Case #%d: " , tc); scanf("%lld %lld" , &n , &k); mp.clear(); mp[n] = 1; long long ans; while(1){ auto it = *prev(mp.end()); mp.erase(prev(mp.end())); if(it.second >= k){ ans = it.first; break; } else{ k -= it.second; mp[it.first >> 1] += it.second; mp[it.first - 1 >> 1] += it.second; } } printf("%lld %lld\n" , ans >> 1 , ans - 1 >> 1); } }
• » » 12 months ago, # ^ | 0 write a binary tree with the generated possibilities from N like so: root is N. left (let's call it L(n)) is the biggest subsequence generated and right (R(n)) is the smallest ( on both children for odd N or on the left and on the right for even N). Now start placing the people: you'll notice that, for a given node, the first person will stay on the node, the second will go to the left subtree, the third will go to the right, fourth will go left again, fifth will go right, and so on. Now it's easy to write an algorithm: call the answer f(n, k). if k = 1, you're on the answer node already, so just print L(n) and R(n). If k - 1 is odd, then the last guy will go to the left subtree, so the answer will be f(L(n), (k - 1) / 2 + 1). if it's odd, the last person will go to the right subtree, so the answer will be f(R(n), (k - 1) / 2).it would look like this: long long R(long long n) {return (n - 1) / 2;} long long L(long long n) {return (n - 1) / 2 + !!(n % 2 == 0);} void f(long long n, long long k) { if(k == 1) cout << L(n) << ' ' << R(n) << '\n'; else { if((k - 1) % 2 == 0) f(L(n), (k - 1) / 2 + 1); else f(R(n), (k - 1) / 2); } }
» 12 months ago, # | +20 After qualification round, Square869120Contest #4 will be held at Atcoder. Let's participate and enjoy!!!!! https://s8pc-4.contest.atcoder.jp/
» 12 months ago, # | 0 Can anyone explain me what does it mean that Round 1 is divided into 3 Sub-Rounds? Does it mean that we have to advance in any of these like in TCO or we have to pass through all of them?
• » » 12 months ago, # ^ | +1 You just need to pass any of the three.
» 12 months ago, # | +3 Problem D was very nice! Although I didn't solve it in contest, I thought the author's solution was very elegant.
» 12 months ago, # | +1 Can anyone Explain the greedy method used in the editorial in a more simpler way ?And if anyone solved it bipartite matching feel free to explain your method also.Thanks in advance .
» 12 months ago, # | +46 The analysis for B is unnecessarily complicated. We can just greedily construct the optimal number by going from the left to the right. (Additionally, to avoid all special cases just prefix both the input and your number with a 0.) Example: input number: 01568435466 we already have: 0156 What is the largest next digit we can use? If we use digit d, the smallest possible final number will have d on all remaining places. If this number is small enough, there are some valid tidy numbers with the next digit d, and if this number is too large, there are no valid tidy numbers in which the next digit is d. And that's all. input number: 01568435466 we already have: 0156 we can use 7: 01567777777 is <= input we cannot use 8: 01568888888 is > input thus, next step: 01567 My submitted code: T = int(input()) for t in range(1,T+1): V, A = [0]+[int(_) for _ in input()], [0] while len(A) < len(V): A.append(max(d for d in range(A[-1],10) if A+[d]*(len(V)-len(A)) <= V)) print('Case #{}: {}'.format(t,int(''.join(str(_) for _ in A[1:]))))
• » » 12 months ago, # ^ | +140 Another simple greedy solution is to notice, that a tidy number can be represented as a sum of at most 9 numbers consisting only of 1s. For example, 133347 = 111111 + 11111 + 11111 + 11 + 1 + 1 + 1. The solution is then to add maximum such number so that the sum doesn't exceed N and the count doesn't exceed 9. long long answer = 0; int count = 0; for (long long ones = 111111111111111111LL; ones > 0; ones /= 10) { while (count < 9 && answer + ones <= N) { ++count; answer += ones; } }
• » » » 12 months ago, # ^ | ← Rev. 4 → +28 OMG. In compare, I have a mega overkill: I wrote dp[lengthOfNumber][startingDigit] to find maximum number X, such that X ≤ N and f(X) = f(N), where f(num) = number of tidy numbers less than or equal to num.
• » » » 12 months ago, # ^ | +8 I think I have an even simpler solution:Reading the number n as a string s, for an index i, 0 ≤ i < s.size() - 1, if s[i] > s[i + 1] then we decrement s[i] and for all j, i + 1 ≤ j < s.size(), s[j] = 9. We then increment i. Since there may be cases like 554 we run the algorithm log(n) times (i.e. cases in which we have a violation of the tidy number constraint that was generated by the algorithm before index i).Total running time is log(n)2. string s; cin >> s; for (int t = 0; t < 18; t++) { int ix = 0; while (ix < (int) s.size() - 1) { if (s[ix] > s[ix + 1]) { s[ix]--; for (int i = ix + 1; i < (int) s.size(); i++) { s[i] = '9'; } } ix++; } } Afterwards just print s without the leading zeroes.
• » » » » 12 months ago, # ^ | ← Rev. 2 → 0 Without dealing with any leading 0's. Printf res: I thought that this is the same approach as above, but I was wrong. It is O(lg N). Remember the smallest digit you constructed so far. If you have something bigger, then decrement it by 1, set all 9's at the end and set that new smallest value to the new decremented digit. LL N; scanf( "%lld", &N ); LL mn = 1; LL prev = 9; LL res = N; while (N) { if (N % 10 > prev) { /* decrease this digit by 1 and put all 9s after it */ N--; res = N*mn + mn - 1; prev = N % 10; } else prev = N % 10; mn *= 10; N /= 10; }
• » » 12 months ago, # ^ | 0 Google Code Jam Editorials consider all possible (and impossible) solutions. To me, they are great for increasing knowledge. :)
» 12 months ago, # | +6 Just out of curiosityWould you say problem D was comparable to a div2 D level problem, or div2 E in terms of difficulty?
• » » 12 months ago, # ^ | +1 I guess it was a DIV 2E, but then again i'm a noob.
• » » 12 months ago, # ^ | +13 I'd say it is at least div2 E or a bit higher if you consider that div2 E is getting easier and easier (maybe that's just me).
• » » » 12 months ago, # ^ | +5 It was a very beautiful problem, especially because it reminded me of a sport I love :)I didn't think it would be considered that difficult as close to 1000 people solved it in 27 hrs, but the logic was indeed very good behind the problem.
• » » » 12 months ago, # ^ | 0 Which was especially noticeable in the last 408 div2 round.
• » » » » 12 months ago, # ^ | 0 I hope you're being sarcastic... XD
• » » » » » 12 months ago, # ^ | 0 I'd see it as not sarcastic but last round simply showed that the other div 2 rounds were way easier than what they could be. Also, looks like tons of people misunderstood C's statement, I did that too when looking at the contest later hahaha.
• » » » » » » 12 months ago, # ^ | 0 Oh, I kind of misunderstood you comment. I thought you were saying last round's problem E was easy, which is definitely isn't. BTW, I also misunderstood C's statement XD
» 12 months ago, # | +8 Test cases for B-small were weak. I had an AC solution which actually failed in cases such as 331.
» 12 months ago, # | +15 For problem C (large), I got a solution like this: write the binary representation of k, then compute n for every bit (from the right), except the leftmost bit. If the bit is '0', we get n = n/2; if the bit is '1', we get n = (n-1)/2. After computing for every bit, the answer will be max(a, b), min(a, b), where a & b are computed this way: if (n%2 == 0) a = n/2, b = n/2 - 1; else a = (n-1)/2, b = (n-1)/2. The time complexity of this approach is log(k). But I can't prove why it is correct!!
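A minimal sketch of that bit walk (consuming the bits of k after the leading 1 from the least significant side, as described above; it matches the provided samples, though the proof question still stands):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    int T; scanf("%d", &T);
    for (int tc = 1; tc <= T; ++tc) {
        ll n, k; scanf("%lld %lld", &n, &k);
        int hi = 63 - __builtin_clzll(k);      // position of k's leading 1 bit
        for (int b = 0; b < hi; ++b)           // remaining bits of k, least significant first
            n = ((k >> b) & 1) ? (n - 1) / 2 : n / 2;
        // the k-th person ends up in a gap of size n; the two distances are:
        printf("Case #%d: %lld %lld\n", tc, n / 2, (n - 1) / 2);
    }
}
```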
» 12 months ago, # | 0 How many people from online round 1 would qualify to round 2, I can't seems to find info from their site? From what I remembered, it is top 1000 of each round, but is it still the same?
• » » 12 months ago, # ^ | +3 Yes, it's still the same (top 1000 in each round 1 subrounds, 3000 total).
• » » » 12 months ago, # ^ | 0 thanks! any source for that info?
• » » » » 12 months ago, # ^ | +6 https://code.google.com/codejam/terms"1. You will advance to Code Jam Round 2 if you are one of the highest-ranked 1000 contestants from one of the sub-rounds in Code Jam Round 1. You will be notified by email after the end of each sub-round if you are one of the 3000 contestants advancing to Code Jam Round 2."
• » » » » » 12 months ago, # ^ | 0 thanks
» 12 months ago, # | -23 anybody here?
» 12 months ago, # | 0 Logic for B & C in round 1A please?
• » » 12 months ago, # ^ | 0 I tried to do maximum bipartite matching for B small but incorrect. Probably missed some corner case. I think the large is DP.
• » » » 12 months ago, # ^ | ← Rev. 3 → -10 I solved B small/large with maximum bipartite matching. If I had to guess it was probably something to do with the range that you can get from each package.Edit: forget that, the problem is harder than I thought.
• » » » » 12 months ago, # ^ | ← Rev. 2 → 0 nvm
• » » 12 months ago, # ^ | ← Rev. 2 → +21 I got AC B-large with greedy. Just sort each row in Q and take the earliest match possible. C-small I could get AC with BFS; obviously that doesn't work for large :D
• » » » 12 months ago, # ^ | 0 your GCJ handle please!
• » » » 12 months ago, # ^ | +3 I think everyone thought this solution at one point. Wish I hadn't overthought it :|
» 12 months ago, # | 0 I submitted a max flow solution to B, during contest it passed the small so i submitted the large. The large failed after the contest so I was curious to know what the problem was, I tried submitting the large again in the practice with the same exact code and got a correct verdict! Is it just the test cases are weak for the practice or what
• » » 12 months ago, # ^ | +4 Are there any corner cases in B? I also used maxflow but WA.
• » » » 12 months ago, # ^ | ← Rev. 2 → +5 well I don't know, actually by comparing with other solutions, my code failed in only a single case out of the 100 and it's this one http://ideone.com/WNzI0XMy code outputs 4 while the answer should be 3. It's too hard to debug though lol
• » » » 12 months ago, # ^ | +10 Keep in mind that a package may be suitable for kits of different sizes. What do you return for this? 4 1 1 1 1 1 100 110 120 130I used a simple greedy approach — first try to match the packages with minimum size for each. If that does work, take all and increase answer by one. Otherwise, discard the package with the smallest Qij / Ri and try again, until there are no more packages left. Works in O(N2 * P).
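A sketch of that greedy, in case the details help (integer serving ranges, assuming the 90%–110% tolerance from the statement and the input layout "N P", the N requirements, then N rows of P package sizes; "discard the package with the smallest Qij / Ri" becomes "advance the ingredient whose current package has the smallest upper bound"):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    int T; scanf("%d", &T);
    for (int tc = 1; tc <= T; ++tc) {
        int N, P; scanf("%d %d", &N, &P);
        vector<ll> R(N);
        for (auto &r : R) scanf("%lld", &r);
        vector<vector<ll>> Q(N, vector<ll>(P));
        for (auto &row : Q) { for (auto &q : row) scanf("%lld", &q); sort(row.begin(), row.end()); }

        vector<int> idx(N, 0);                 // smallest unused package of each ingredient
        int kits = 0;
        while (true) {
            bool done = false;
            ll L = 1, H = LLONG_MAX, worstHi = LLONG_MAX; int worst = -1;
            for (int i = 0; i < N; ++i) {
                if (idx[i] == P) { done = true; break; }
                ll q = Q[i][idx[i]];
                ll lo = (10 * q + 11 * R[i] - 1) / (11 * R[i]);  // smallest s with q <= 1.1*s*R
                ll hi = (10 * q) / (9 * R[i]);                   // largest  s with q >= 0.9*s*R
                L = max(L, lo); H = min(H, hi);
                if (hi < worstHi) { worstHi = hi; worst = i; }
            }
            if (done) break;
            if (L <= H) { ++kits; for (int i = 0; i < N; ++i) ++idx[i]; }   // a common serving count exists
            else        ++idx[worst];          // the most constraining package can never be used later
        }
        printf("Case #%d: %d\n", tc, kits);
    }
}
```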
• » » » » 12 months ago, # ^ | 0 Yes I know, actually I build a flow network of N+2 layers, first layer is source and last is sink. Each layer represents an ingredient and it has P nodes, I add an edge between a node from a layer and the layer next to it if there is at least one common kit size they both can fit in. Basically for each package, the valid kit sizes is a range so you can find if there is a common one or not by a simple range intersection
• » » » » » 12 months ago, # ^ | 0 and my code returns 1 for your input
• » » » » » » 12 months ago, # ^ | ← Rev. 2 → +5 Yet the correct answer is 0. If the kit is of size >111, you cannot use the first package. If it is of size <119, you can't use the last. Hence, the is no valid kit size, even though the ranges of consecutive ingredients overlap.
• » » » » » » » 12 months ago, # ^ | 0 loool I feel so dumb now. Thanks a lot for the help
• » » » » » » 12 months ago, # ^ | +5 It should return 0. The problem is that you need overlapping ranges to choose the packages. For example, (1 — 2), (2 — 3), (3 — 4) these are the ranges of 3 packages and you can't choose them but there will be flow if you simply connect them from one layer to the other.
• » » » » » 12 months ago, # ^ | +10 Sometimes it's better not to know maxflow algorithm at all. You could have thought a bit more and come up with very simple greedy algorithm.
• » » » » » » 12 months ago, # ^ | 0 I tried max flow first. Because I couldn't find appropriate network, I wrote the simplest greedy instead.
• » » » » 12 months ago, # ^ | 0 Can you, please, explain why this greedy strategy will not work when we have arbitrary ranges (according to the official editorial)?Especially, I don't see for now why the condition: "for 2 intersecting ranges having at least 1 common point" helps to prove that greedy approach works?
• » » » » » 12 months ago, # ^ | 0 I have exactly the same question. Can anyone explain?
» 12 months ago, # | ← Rev. 2 → -30 Looks like tests for A are weak. For test: 1 3 3 J?? JH? ??H some accepted solutions (firstly I tried tourist's one) give this result: JJJ JHH HHH Or I misunderstood statement and this is valid construction?
• » » 12 months ago, # ^ | +22 No letter appears in more than one cell in the input grid.
• » » » 12 months ago, # ^ | +6 Thank you, I missed that restriction :(Need to sleep a little bit more — it's bad idea to participate at 4am.
• » » 12 months ago, # ^ | 0 Made the same mistake and discovered it right now thanks to your query . Feel terrible !
» 12 months ago, # | ← Rev. 3 → +23 Very weak test cases for B, I got AC large using a wrong max flow solution that allows to use a package multiple times.My code gives 2 instead of 1 for this testcase: 1 3 2 10 10 10 10 10 10 1 10 10
» 12 months ago, # | 0 How to solve C-large?
• » » 12 months ago, # ^ | +18
» 12 months ago, # | +39 Anyone else thinks that the problem statements are too long? For A, I missed "no initial appears more than once on the cake" and couldn't solve the problem. And for B, I missed "(Of course, each package can be used in at most one kit.)" and only found out after an hour...
• » » 12 months ago, # ^ | +5 I think many people missed the thing you have said about A :(
» 12 months ago, # | ← Rev. 2 → +31 I hate problem B xD
• » » 12 months ago, # ^ | +3 Same here :P Though C helped me get into top 1000 :D :D
• » » » 12 months ago, # ^ | ← Rev. 2 → +7 Didn't even read Problem C :S
• » » » » 12 months ago, # ^ | +1 Even i never open harder problems before even trying easier ones. ! But this time the submited problems count at begining compelled me to try C first !
• » » » » » 12 months ago, # ^ | 0 Can you tell us how to solve C? I can recongise it as DP shortest path but can't handle how to solve the problem with choosing the horse.
• » » » » » » 12 months ago, # ^ | +3 for small : just linear dp for each horse i = 1..N for each city j = i+1 to N if j is within limit of horse i through path i..i+1 i+2..j time[j] = min(time[j],time[i]+dist(i..i+1..i+2...j) /spd[i].ans = time[n-1];for large: for each horse do dijikstras to find min time in which it can reach each city reachable time[i][j] = time for horse [i] to reach city[j] in other words one path from i to j of cost time[i][j]then do floyd warshall shortest path for each pair i,j to get final cost array[n][n]ans will be in this arraycheck my solution in GCJ page : username — sanket407
• » » » » » » 12 months ago, # ^ | 0 Just don't choose the horse, always use the horse at the current DP, when you update from a town just update all towns within range, not just the ones connected by edges.
• » » 12 months ago, # ^ | +11 Anyone here have a nice(short) solution to problem B? :)
• » » 12 months ago, # ^ | +83 GCJ people are so good at super-tricky problems. I can't believe very high success rate (more than 30%!).For me it looked completely hopeless without stress-testing.
• » » » 12 months ago, # ^ | +23 What is tricky about it? You just have to find a bunch of test cases. Frankly I don't see the purpose of such problems. It's not "good at super-tricky problems" it's "annoying problems".
» 12 months ago, # | +11 How is round 1B problem B meant to be solved? I checked my code over and over even with triangle inequality and I still couldn't solve the small case. I have no clue what's wrong!
• » » 12 months ago, # ^ | -10 Did you cover a case that the last horse cant be the same as the first one?
• » » » 12 months ago, # ^ | ← Rev. 2 → 0 yeah. I swap the last two if they are the same, then check if it works.
• » » 12 months ago, # ^ | ← Rev. 2 → +5 For subtask with O = G = V = 0:Assume a >= b >= c the number of occurrences. (Assuming c >= 1, other case is trivial)Place a's first. You need (b+c >= a) to seperate all a's. If it is not so then its impossible.Place b's after every a and then c until finished. Incase you have some extra c's remaining: Let extra = (b+c) — a. Place each of this after every ab: aba -> abca...So it goes like abc-abc-abc-ab-ab-ab-acCode
• » » » 12 months ago, # ^ | 0 Damn. I have similar idea but couldn't prove it so I didn't code it.
• » » » 12 months ago, # ^ | +3 How did you get the idea? There are logical reasoning..can you post it? The approach ?
» 12 months ago, # | +5 How to solve B-large?? B-small could be done by assigning colors alternately in the following order: largest count, then mid count, then smallest count.
• » » 12 months ago, # ^ | 0 Can I see your code? I did the same thing but it kept saying i had WA.
• » » » 12 months ago, # ^ | 0 check my code in google code jam submissions itself! Same username there as handle here : sanket407
• » » 12 months ago, # ^ | ← Rev. 2 → 0 If you don't mind, can you please tell what you did when all of them are equal? (As in the first sample test case)
• » » 12 months ago, # ^ | +56 If you know how to solve B-small, that should extend to B-large.Notice for the color orange, it can only be adjacent to blues. Suppose there are x blues and y oranges. Of course if we have y > x, then it's impossible, otherwise if we have y=x, it's only possible if y+x = n. After handling this, we can reduce it to a problem where there are x-y blues and 0 oranges. Then, you solve the same case as small, and then replace any single arbitrary occurrence of "B" with "B" + y * "OB"
• » » » 12 months ago, # ^ | 0 Honestly, if this was described as part of the problem, I still wouldn't be able to solve it.
• » » » 12 months ago, # ^ | 0 During the contest I thought it can make a difference if you join all BOs inti one BOBOBOBOBOB or split into BOB,BOB,BOB, ... Had to bruteforce the numbers of BOBs, RGRs, YVYs. Got AC with that (the numbers sum to N/3 so the max number of combinations could be 111^3).Now I see that the amount does not matter, thanksPS: and failed C-large, which I thought is simple >_<
• » » 12 months ago, # ^ | +34 Here is an opposite take on the problem, which first solves hard and easy case follows naturally:Consider a graph where each of 6 colors is a node. You know the number of edges between orange and blue, and two other pairs, because one should surround the other. You know that in-degree and out-degree of each node should be equal to the total number of unicorns of that color. Loop through a number of edges between, say, R and G, and then the in-degree and out-degree of every other node follows naturally from that. Now you just check if that graph has an Eyler cycle, that is your answer.
• » » » 12 months ago, # ^ | 0 So the selection is n^2 and euler cycle takes O(E) and here E ~ N, so O(T * n^3). Isn't it TLE?
• » » » » 12 months ago, # ^ | +3 No, selection is actually O(N) — you backtrack through the out-degree of one node and the remaining in- and out-degrees follow. Then you need to check whether the obtained graph has an Eyler cycle, which can be done in O(1). Then you output that cycle in O(E) = O(N). Actually, I computed the cycle every time and then checked that it had length N, and if so, printed the answer, so the total complexity was O(TN2), that passed.Here is the solution: https://pastebin.com/hfxVkTyZ
• » » » » » 12 months ago, # ^ | 0 I had something similar.Can't you just check the in/out degree of each vertex in O(1) to check if an Euler cycle exists? This leads to an O(N) algorithm, I think.
• » » » » » » 12 months ago, # ^ | 0 That is what I said in the first paragraph exactly.The only subtle moment is that you need to also check that the graph is connected — ex. for test 12 2 2 2 2 2 2 you get in your graph 3 connected components, in each of which in- and out-degrees are equal.
• » » » » » » » 12 months ago, # ^ | ← Rev. 2 → 0 Good point. I was just checking that it was possible, not claiming you didn't already say that. I figured you could simply do this in O(1) by doing a DFS through a graph with collapsed edges.
• » » » » » 12 months ago, # ^ | 0 This is so far the best solution for this deadly problem.
» 12 months ago, # | +25 Either take round1-C in 4am, or say bye-bye to GCJ :(
» 12 months ago, # | ← Rev. 2 → 0 Any ideas why A-large could fail?)UPD: I made wrong assumption: If we sort horses by start position we can ignore all horses starting from first horse that ride faster than previous one.It passed small input, it's very sad((
• » » 12 months ago, # ^ | 0 You just need to choose the slowest horse and get the velocity for that time from 0 to D should solve the problem.
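For reference, a minimal sketch of that idea (the horse that reaches D last is the bottleneck, so we ride at the constant speed that arrives exactly when it does; the input layout is assumed to be "D N" followed by N pairs K_i S_i):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int T; scanf("%d", &T);
    for (int tc = 1; tc <= T; ++tc) {
        double D; int N; scanf("%lf %d", &D, &N);
        double latest = 0;                           // arrival time of the horse that reaches D last
        for (int i = 0; i < N; ++i) {
            double k, s; scanf("%lf %lf", &k, &s);
            latest = max(latest, (D - k) / s);
        }
        printf("Case #%d: %.9f\n", tc, D / latest);  // constant cruise speed arriving exactly then
    }
}
```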
• » » 12 months ago, # ^ | 0 MAXN error? I had issue with A large because in the beginning it said MAXN = 100 but it was really 1000. (or maybe I just read it wrong the first time)
• » » 12 months ago, # ^ | ← Rev. 2 → 0 same with me , was in Top 1000 with 78 points lost top 1000 just because of A-large.
» 12 months ago, # | 0 1113 Solved C Large. 730 Solved B Large.This is bad for people who didn't even read C xD xD
» 12 months ago, # | +65 Kudos to cubelover alex9801, who drank a lot of beers but still passed R1B safely
• » » 12 months ago, # ^ | +16 It makes us feel bad about ourself. :(
» 12 months ago, # | +51 By the way these statements are horrible...
• » » 12 months ago, # ^ | +13 Always this hate, I can't understand it..It's a nice variation to shorter statements. I see that statements like these in weekly contests might get annoying, but GCJ is once a year.
• » » » 12 months ago, # ^ | ← Rev. 2 → +6 You are right that we should appreciate long statements with a nice story behind it but in this exact situation somehow it was really difficult to have fun. (At least for me)
» 12 months ago, # | +30 In the last Google Code Jam 2016, I finished 1557 in round 1B. Since then, I set the goal to advance to round 2 in Code Jam 2017 (i.e. be in top 1000).All year, I trained to solve problems almost everyday. I kept journal of how many hours I spent on training per day, per week, per month. According to my journal since May 2016, I spent about 500 hours (and those are only hours when I worked at home in total silence).So I thought that those efforts are enough to move from 1557 place (2016) to top 1000 (this year).And here are results of all my efforts during whole year:Round 1B 2017 — 6617 place!!!I.e. I became much, much, much worse at competitive programming as a result of ~500 hours of training. It's fu....g unbelievable!!!!!!Apparently, I have no talent whatsoever and probably I should quit...
• » » 12 months ago, # ^ | -8 dislike my comment if u quit competitive programming after this code jam (Y)
• » » 12 months ago, # ^ | 0 Probably! You should only do it if it's fun!
• » » » 12 months ago, # ^ | +1 Will you enjoy losing at chess all the time to everybody, including those who trained 10 times less than you? I don't believe you would enjoy losing a game all the time. I personally enjoy very much having an insight and finding a clever solution, seeing my submissions in green with the magic word Accepted. It's an awesome feeling. Every time I solve a hard problem (which is rare), I'm so glad that I keep staring at my code for minutes thinking how beautiful my solution is. But if you spent hours, hours, hours, hours and hours in training only to see: WRONG! WRONG! WRONG! TIME OUT! WRONG! YOU WRONG! WRONG! TIME OUT! YOU WRONG AGAIN! Then you can't enjoy. You can't enjoy this disaster!
• » » » » 12 months ago, # ^ | +3 I wouldn't play chess in competitions if I felt I wasn't good enough and hate to lose. But I would still practice playing chess because it's a fun way to spend some time and maybe some day I will be able to enter a chess tournament.
• » » » » 12 months ago, # ^ | +31 You are right! Life is unfair. But if I can share one lesson from contests, it's that you can't expect to "deserve" results just because you've worked hard. That's not how this works.
• » » » » 12 months ago, # ^ | +113 Try not to take the competitive part very much serious. Competitive programming is just a thing where competitors of all levels may occasionally compete together. Just imagine if you are placed to compete with world top chess players. With dozens of them. No much point in that, right? The difference is that the CP allows this situation to happen.Honestly, I never liked the competitive part itself. I like to solve problems, especially when I learn something by solving them. From this point, competitions don't provide me any motivation reasons, but they do provide modern problems.So try just to forget about the competition part. No matter what rating do you have, no matter how are you placed in a contest. The best way may be not to take part in contests for a while, so you will be just facing the problems themselves. And there is no need to take it very formal, like "I've spent X hours today", you should enjoy the process, and I think it's easier to achieve when the process is more free.There is no reason in working hard for some placing in some round — it does not show anything at your level. There is no point to take those useless numbers in mind. Enjoy, learn, solve — and only after all that — compete.
• » » » » » 12 months ago, # ^ | +13 Good point. Honestly I don't like the word CP. If someone do web development in a very competitive company he is also doing "competitive programming". Fact is, we do "Algorithmic problem solving", while some fraction of them use "competitive training" to improve their skills, or just for fun.
• » » 12 months ago, # ^ | +17 Your rank in a contest depends completely on the type of problems you've practiced before. eg: if you've practiced lots of graph and not much else, but the contest is full of geometry, probability, etc. then you are bound to fail in that contest because that problemset is not your strong suite. If however there were some tricky graph problems, you might be able to solve it even though the total number of accepted solutions is low ( tricky problem ).Today's problemset had my weaknesses. Naturally I failed today. But does that mean I don't have "talent" or anything like that? Mmmm.... I don't know and I don't care.
• » » » 12 months ago, # ^ | ← Rev. 2 → 0 Here is a problem:http://codeforces.com/contest/779/problem/DI didn't solve it.But when I looked at editorial, solution turned out to be dead simple. There was nothing I didn't know to solve this problem. Yet, I didn't find that solution.I honestly can't understand why I didn't find that solution.I understand when I failed to solve some problem which uses trick or algorithm I didn't know. But often, it turned out that everything I need to know I already know.It's total mystery to me.So it turned out that solving problems are not entirely about knowledge.
• » » » » 12 months ago, # ^ | +13 You expect a lot from yourself, and beat yourself over not being able to fulfill your own expectations. Not just that, you probably spend more time thinking "WHY CAN'T I SOLVE THIS" or "I HAVE TO SOLVE THIS BECAUSE I'VE TRAINED SO MUCH I HAVE TO" than on the problem itself. So it turned out that solving problems are not entirely about knowledge. This is nothing short of an epiphany! You are right. It's not just about knowledge. You have to know what to let go. Shake off bad days, and keep going.
• » » » » 12 months ago, # ^ | +3 You can say the same about last round's problem D. Dead simple solution, but only a few people could think it up. So many randomized algos from div1 guys also passed, which means even div1 guys had troube finding the actual solution. It didn't have any DP, tree, graph, mo's algo, centroid decomposition, etc. nothing. Just a dead simple greedy solution that everyone understood when they read the editorial.
• » » 12 months ago, # ^ | 0 I agree with BlackvsKinght. It's just not match with the problems you already trained. Don't quit when the battle isn't over. Let's see it like a bad day and move on. Who knows? Next time maybe you on top 10.
• » » 12 months ago, # ^ | +15 All year, I trained to solve problems almost everyday. I kept journal of how many hours I spent on training per day, per week, per month. According to my journal since May 2016, I spent about 500 hours (and those are only hours when I worked at home in total silence). Let's make a deal here and now. Do exactly this again for the year 2017 and let's look at what will happen.PS: I've also got far shittier place this year =)
• » » » 12 months ago, # ^ | 0 well said i'll follow your footsteps.
• » » 12 months ago, # ^ | +19 Same for me, took 67th (!) place at 2016 Round 1B. And now 1094th.
• » » 12 months ago, # ^ | +3 Last year in round 1B I screwed up with task B — I spent too much time on it, and at the end I submitted a solution in which my value of infinity was smaller than the correct answers for some test cases (which I didn't catch in the small test set for obvious reasons). I also had no clue that the task C was a standard problem. I ended up 1180th and didn't advance. A week or two later in 1C I lucked out with 31st place and eventually went on to advance to Round 3. See how much variance is in such contest? You have good days and bad days. You can still easily qualify in Round 1C while in the first hundred, you'll forget this round and be extremely grateful for all the hard work you've put in to it!
» 12 months ago, # | 0 Can someone explain how to solve B large in detail.
• » » 12 months ago, # ^ | ← Rev. 2 → +18 Hint: O need to be in between Bs, V needs to be in between Ys, and G needs to be in between Rs.
• » » 12 months ago, # ^ | +18 V can only be surrounded by two Ys, so the best way to arrange Vs is 'YVYVY...VY' The whole string can be seen as a single V.So we can reduce this problem into the same as the small dataset.The only special case is there are only Vs and Ys with the same amount, which is a valid case.
• » » » 12 months ago, # ^ | 0 got it, thanks
» 12 months ago, # | +6 Where's round 1B analysis ?!!!
» 12 months ago, # | 0 Is anyone going to add the recent GCJ rounds to gym?
» 12 months ago, # | +10 Around what rating(estimated) would you have to be above to solve the first problem? Just curious about how far behind I am :) Thanks!
• » » 12 months ago, # ^ | +11 I'm a grey and solved A small and large in 40 minutes.
» 12 months ago, # | ← Rev. 2 → +18 I'd rather have small constraints (say n=1000 instead of n=10^6) than easier problems in small versions.Now you have to test your large solution carefully, because there arent any pretests.Fortunately I passed, despite of my c-large failed.
» 12 months ago, # | 0 Is there any upsolving?
• » » 12 months ago, # ^ | 0 you can practise there on the gcj page itself ! download test files and submit output files !
» 12 months ago, # | +41 How to solve large C?
• » » 12 months ago, # ^ | 0 I tried all sorts of heuristics and even simulated annealing, without success. My idea was that the reason it's a small input is that with heuristics you need a bit of luck to get it correct, so it's only reasonable to allow multiple tries.Can't tell what the correct solutions are doing at a quick glance, so we'll just have to be lazy and wait for the editorial. :(
• » » 12 months ago, # ^ | 0 I only know how to calculate the probability that at least k of n cores work fine. Code: double getProb(vector<double> prob, int n, int k) { vector<double> work(n + 1, 0); /* work[i] = probability that exactly i cores work fine */ sort(prob.begin(), prob.end()); work[1] = prob[0]; work[0] = 1 - prob[0]; for (int i = 1; i < n; i++) { work[i + 1] = work[i] * prob[i]; for (int j = i; j > 0; j--) work[j] = work[j] * (1 - prob[i]) + work[j - 1] * prob[i]; work[0] *= 1 - prob[i]; } double ans = 0; for (int i = k; i <= n; i++) ans += work[i]; return ans; }
• » » 12 months ago, # ^ | +3 Stupid question but how did you solve small C? It looked deceptively simple (judging from the number of successful submissions) but somehow I couldn't crack it after almost an hour. :(
• » » » 12 months ago, # ^ | 0 It's a well-known fact that if you know the sum of some numbers and you want to maximize their product, then you need the numbers to be as close to each other as possible. In this particular problem, the only thing you can do to make the numbers close is to increase some small numbers so that they become closer to the big numbers we have (or increase all numbers so that they become closer to some greater value). In other words, our goal is to maximize the minimum among all numbers.So let's do a binary search over this minimum. Once we fix some minimum V (which is to be maximized), we can take the sum of V - P[i] for all P[i] ≤ V. Then compare this sum to U and proceed with the binary search. Code goes here.
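Not the linked code, but a rough sketch of that water-filling binary search (in the small dataset all cores must succeed, so after lifting everything up to V the answer is just the product); the numbers in main are a made-up instance:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Binary search over the target minimum V: raising every probability below V to V
// costs sum(V - P[i]) of the budget U. Maximize V, then take the product.
double solveSmall(double U, const vector<double>& P) {
    double lo = 0.0, hi = 1.0;
    for (int it = 0; it < 200; ++it) {
        double mid = (lo + hi) / 2, need = 0;
        for (double p : P) need += max(0.0, mid - p);   // budget needed to lift everyone to mid
        (need <= U ? lo : hi) = mid;
    }
    double prod = 1.0;
    for (double p : P) prod *= max(p, lo);              // every core ends up at least at V (capped at 1)
    return prod;
}

int main() {
    // hypothetical instance: budget U = 1.0 spread over three cores
    printf("%.6f\n", solveSmall(1.0, {0.5, 0.5, 0.8}));
}
```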
• » » » » 12 months ago, # ^ | 0 This is precisely what I did but still got an incorrect answer. Can you spot the error in my code?
• » » 12 months ago, # ^ | +5 As I saw from some solutions, you need to choose some j >= k, and maximize product of the j highest probabilities (with the method from small C). Choose the best from all such j. Also check the case when you increase the highest probabilities to 1.0 as many as possible.I don't know how to prove it.
» 12 months ago, # | 0 can anyone tell me the approach for A-large B-large?
• » » 12 months ago, # ^ | +3 B large is DP. We want the answer for each of 24*60 seconds such that ith second is cam/jam's responsibility to look after the child such that jam has already looked after the child for t seconds. So the dp table looks like dp[i][cam/jam][t]. Quite simple, but I still couldn't solve it.
• » » » 12 months ago, # ^ | 0 How do you account for a possible exchange at midnight with these DP states?
• » » » » 12 months ago, # ^ | 0 I take the answr for 24*60 + 1th second. Something is wrong with my code as it is currently failing the 1st test case itself, but I think if we take 24*60+1th second, we will get that exchange. This is assuming that there is always an exchange at midnight, unless the read the statement wrong.
• » » » » » 12 months ago, # ^ | +11 Not necessarily. If the same person starts and ends the day, then there would not be a midnight exchange.
• » » » » » » 12 months ago, # ^ | ← Rev. 2 → +8 So that's what I'm doing wrong. Thanks!Well in this case you need one more state to keep track of starting person.
• » » 12 months ago, # ^ | +17 Define f(c,j,i,s) as the smallest number of exchanges so that Cameron has watched the baby for c minutes, Jamie has watched the baby for j minutes and the baby is currently (at the end of c+jth minute) being watched by Cameron (i=0) or Jamie (i=1) and the first one to watch the baby was Cameron (s=0) or Jamie (s=1). Store the activity status of each minute in an array so you can check in constant time who is doing activities at minute c+j-1. Each recursion step is a simple constant time check and there are only 720*720*2*2 states to store in the DP.The final answer is min(f(720,720,0,0), f(720,720,0,1)+1, f(720,720,1,0)+1, f(720,720,1,1)).
• » » » 12 months ago, # ^ | 0 This looks so much easier to code than what I did :|
• » » » 12 months ago, # ^ | ← Rev. 2 → 0 1.) Why do we need s i.e. "the first one to watch the baby"?. 2.) I kept the same state except "s" f(c,j,i) and struggled to take into account a possible exchange at midnight. My 1st and 3rd sample input return 1 instead of 2.Link to my code:http://ideone.com/JiT8bW Got it. If the final person is same as the starting person, there is not need of an exchange at night.
• » » 12 months ago, # ^ | 0 In A-large, first sort the pancakes in decreasing order of their radii, so that i+1 can always be placed on i. Let dp[i][j] be max value of exposed surface area of a stack of j pancakes considering only first i in our list.Then, dp[i][j] = max( dp[i-1][j], dp[i-1][j-1] - pi*r[i]*r[i] + surface_area(i)) i.e. when we do not place ith pancake or when we place ith pancake on stack of length j-1.
• » » » 12 months ago, # ^ | +1 Greedy is sufficient for A large. We can reverse sort based on radius. Then we choose each pancake as the base pancake, and choose k-1 other pancakes with at most this chosen pancake's radius with largest sum of surface area. Once the base pancake is chosen, we know the area of the disk is fixed, we just need maximum area on the sides.
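A sketch of that greedy (sort ascending by radius, try every pancake as the base, and add the k-1 largest side areas 2πrh among the pancakes before it; seen from above, the exposed top areas telescope to just the base's πr²):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int T; scanf("%d", &T);
    for (int tc = 1; tc <= T; ++tc) {
        int n, k; scanf("%d %d", &n, &k);
        vector<pair<double,double>> p(n);               // (radius, height)
        for (auto &x : p) scanf("%lf %lf", &x.first, &x.second);
        sort(p.begin(), p.end());                       // ascending radius
        const double PI = acos(-1.0);
        double best = 0;
        for (int i = k - 1; i < n; ++i) {               // p[i] is the bottom (base) pancake
            vector<double> side;                        // side areas of candidates that fit on top
            for (int j = 0; j < i; ++j) side.push_back(2 * PI * p[j].first * p[j].second);
            sort(side.rbegin(), side.rend());
            double cur = PI * p[i].first * p[i].first   // visible top area = base's disk
                       + 2 * PI * p[i].first * p[i].second;
            for (int j = 0; j < k - 1; ++j) cur += side[j];
            best = max(best, cur);
        }
        printf("Case #%d: %.9f\n", tc, best);
    }
}
```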
• » » » » 12 months ago, # ^ | 0 Nice approach!
• » » » 12 months ago, # ^ | 0 Can it still bedp[i][j] = max( dp[i-1][j] , dp[i-1][j-1]+ pi*r[i]*r[i]-pi*r[x]*r[x] + surfacearea(i).Where x is the pancakes chosen at for the state (i-1,j-1).?
• » » » » 12 months ago, # ^ | 0 This is not correct as it would subtract upper surface area of pancake x which is exposed to air.
 • » » » » » 12 months ago, # ^ | 0 When the pancake x was added at state (i-1,j-1), I added the upper surface area of x. At (i,j), that pancake x is glued to the ith pancake, so it does not have a lower surface area. We need to subtract both circles of i and x, then add the surface area of i (assuming we choose i at this step). How is this incorrect, please?
 • » » » » » » 12 months ago, # ^ | 0 Let x be the uppermost pancake of the stack of length j-1 having surface area dp[i][j-1]. We are adding pancake i to this stack. So, the area of x which is now covered by i should be subtracted from dp[i][j] and the surface area* of i should be added. *Note: We are not considering the surface area of its bottom side.
• » » » 12 months ago, # ^ | ← Rev. 2 → 0 I solved it using DP.1) Choose all possible top pancake. Let this index = prev.2) DP[prev][k] = maximum surface area of choosing k pancakes when prev is the previously chosen pancake.The complexity is much larger than it appears as there is an inner loop in the DP. Better approach would be to go with the greedy one by BlackVsKnight.
• » » 12 months ago, # ^ | +17 I used a greedy approach. First of all, sort all intervals by start time (along with marking which one belongs to C/J). Now two consecutive intervals might belong to either same person or different person. The answer will atleast be the number of consecutive pairs with different person. Answer will increase by 2 only when C looks after the child between two Js or if J looks after the child between two Cs. So I'll try to avoid these situations.Code: here
» 12 months ago, # | ← Rev. 2 → 0 For problem A in round 1C, I wasted about 1.5 hour but couldn't find a mistake (many WA attempts) in the approach. I am solving it with dp(index, required) which is index position am currently on and required pancakes, this outputs two values, firstly best answer and secondly last disc index included in this answer, second part is used to add new disc area to existing stack. Finally required answer is DP(n-1,k).first.I changed my approach at last to make it AC. But it will be really helpful if someone could guide me to the mistake in previous approach. here is the code
• » » 12 months ago, # ^ | 0 firstly best answer and secondly last disc index included in this answer, second part is used to add new disc area to existing stack. We cannot use best result calculated for i + 1 disks and attach it to i'th disk. It may be so that we need to take a disk with a smaller surface area on the i + 1'th place.
• » » » 12 months ago, # ^ | 0 Please check the code, the fault is not in the algorithm, I am taking care of it in dp state traversal.
• » » » » 12 months ago, # ^ | 0 I look through the code several times and I don't yet understand how do you handle that.
 • » » 12 months ago, # ^ | ← Rev. 2 → 0 The problem with that approach is that in the intermediate calculations, you might choose to ignore a stack that is part of the best output, in favor of another stack that appears better at the time. That is because the score at any moment depends on both the side surface area and the top surface area. And since the top surface area changes as we go back up in the recursion (by adding larger pancakes to the stack), we can't really depend on it while choosing which stack to proceed with. I guess it's better explained using an example:
 2
 2 1
 1 15
 5 1
 3 2
 1 15
 5 1
 10 10
 In the first case, your code will tend to choose the second pancake (5, 1) over the first (1, 15) as alone it has the larger surface area. In the second case, your recursion will lead you to the same case as the first example, and yet again choose the (5, 1) pancake, while it's better to choose the (1, 15), due to the presence of the third one.
• » » » 12 months ago, # ^ | ← Rev. 2 → 0 Thanks, will check if it works for given examples.
» 12 months ago, # | -8 How do I solve C small-2 (in round 1C)
 • » » 12 months ago, # ^ | 0 I repeatedly added an increment of 0.00001 units to the lowest-priority core in a priority queue to solve C small-1. I tried a similar approach for small-2 (choosing the top k cores for the priority queue) but it failed.
» 12 months ago, # | +3 How to solve "Ample Syrup" in linear time?
• » » 12 months ago, # ^ | ← Rev. 2 → +38 I might be wrong but here is my guess, let's take K pancakes with maximal side surface. Then we can prove that in optimal answer there will be at least k — 1 from this set. So we just see which one has maximum upper surface in this set, and change it with one pancake from other N — k pancakes, at a time and see if answer is better. It is O(N).EDIT: To make it clear, you don't necessarily remove bottom pancake from those K pancakes, you just choose different pancake from other N — K ones to be the new bottom and remove the one with minimal side surface from starting K pancakes.
• » » » 12 months ago, # ^ | 0 Cool!
• » » » 12 months ago, # ^ | 0 How would we pick K pancakes with maximal side surface in linear time?
• » » » » 12 months ago, # ^ | +3 It is a well known problem, you can find worst case linear time algorithm here. Also there is expected linear time algorithm which is simpler, I guess.
• » » » » » 12 months ago, # ^ | 0 Thanks for the info, didn't know there was a linear approach for that problem.
 » 12 months ago, # | ← Rev. 2 → -8 Solved 1C A with DP. Just out of curiosity, what's the negativity here? When you aren't sure about the approach and the constraints are small, you should go for DP if it's possible.
» 9 months ago, # | +10 What is displayed on 2017 shirt ?
• » » 9 months ago, # ^ | +6 I think it is the Samuel Beckett Bridge.
• » » 9 months ago, # ^ | +5 Has anyone received their t-shirt yet?
• » » » 9 months ago, # ^ | +5 I received it 3 weeks ago.
• » » » » 9 months ago, # ^ | +5 Do you get a notification of when it is going to arrive?
• » » » » » 9 months ago, # ^ | ← Rev. 2 → 0 It's delivered by DHL (at least in my country). If so, a courier will call you.
• » » » » » » 9 months ago, # ^ | 0 Thank you!
• » » » 9 months ago, # ^ | 0 Received two days ago.
• » » » » 9 months ago, # ^ | +5 Do you get a notification of when it is going to arrive?
• » » » » » 9 months ago, # ^ | +5 I got my shirt three days ago, no notification.
• » » » » » » 9 months ago, # ^ | 0 Thank you!
» 9 months ago, # | 0 To DCJ finalists: Is anyone free on GCJ competition day?I'm planning to travel around the city but currently I don't have anyone to go with (the other 4 Japanese finalists are all GCJers).
 » 7 months ago, # | 0 I haven't received my shirt yet. Is it normal for it to be this late?! Has anyone else also not received theirs till now?!
• » » 7 months ago, # ^ | +5 I haven't received mine either.
• » » » 4 months ago, # ^ | +5 If anyone still cares, just a quick update: I got an email that my T-shirt is in transit.
|
Geometry
## Transpose
The transpose of a matrix $M$ is another matrix which we write using the following convention: $M^T$ (with the superscript T). We can describe the process of transposing a matrix in different ways. It can be seen as: reflecting $M$ over its main diagonal (from left to right, top to bottom) to obtain $M^T$, writing the rows of $M$ as the columns of $M^T$ or reciprocally, writing the columns of $M$ as the rows of $M^T$. Computing the transpose of a matrix can be done with the following code:
```cpp
Matrix44 transpose() const
{
    Matrix44 transpMat;
    for (uint8_t i = 0; i < 4; ++i) {
        for (uint8_t j = 0; j < 4; ++j) {
            transpMat[i][j] = m[j][i];
        }
    }
    return transpMat;
}
```
The idea is to swap the rows and columns and since this operation can't be done in place we need to assign the result to a new matrix which is returned by the function. Transposing matrices can be useful when you want to convert matrices from a 3D application using row-major matrices to another using a column-major convention (and vice versa).
## Inverse
If multiplying point A by the matrix M gives point B, then multiplying point B by the inverse of the matrix M gives point A back. In mathematics, matrix inversion is usually written using the following notation:
$$M^{-1}$$
From this observation, we can write that:
$$MM^{-1}=I$$
Where I is the identity matrix. Multiplying a matrix by its inverse gives the identity matrix.
We have mentioned in the chapter How Does a Matrix Work the case of the orthogonal matrix, whose inverse can easily be obtained by computing its transpose. An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors. This is an important property which we will be using to learn how to transform normals.
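As a small illustration of this property, here is a sketch (the 16-coefficient constructor and the operator* for matrix multiplication are assumed to exist on the Matrix44 class; only transpose() comes from the code shown above, so adapt the names to your own implementation):

```cpp
#include <cmath>

void orthogonalInverseDemo()
{
    float c = std::cos(0.3f), s = std::sin(0.3f);
    Matrix44 R( c,  s, 0, 0,            // a rotation about the z-axis is orthogonal
               -s,  c, 0, 0,
                0,  0, 1, 0,
                0,  0, 0, 1);
    Matrix44 Rinv = R.transpose();      // valid only because R is orthogonal
    Matrix44 identity = R * Rinv;       // expect the identity matrix (up to rounding error)
}
```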
Matrix inversion is an important process in 3D. We know that we can use point- or vector-matrix multiplication to convert points and vectors, but it is sometimes useful to be able to move the transformed points or vectors back into the coordinate system in which they were originally defined. It is often necessary, for instance, to transform the ray direction and origin into object space to test for a ray-primitive intersection. If there is an intersection, the resulting hit point is in object space and needs to be converted back into world space to be usable.
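Here is a sketch of that round trip. All of the names are placeholders: Vec3f and Matrix44 stand for this lesson's geometry classes, inverse(), multVecMatrix() (transform a point, including translation) and multDirMatrix() (transform a direction, ignoring translation) are assumed methods, and intersectPrimitive() stands for whatever object-space intersection routine you already have.

```cpp
bool intersectPrimitive(const Vec3f& orig, const Vec3f& dir, Vec3f& hit); // defined elsewhere

bool intersectInWorldSpace(
    const Matrix44& objectToWorld,
    const Vec3f& rayOrigWorld, const Vec3f& rayDirWorld,
    Vec3f& hitWorld)
{
    Matrix44 worldToObject = objectToWorld.inverse();
    Vec3f origObject, dirObject, hitObject;
    worldToObject.multVecMatrix(rayOrigWorld, origObject);  // points: rotation + translation
    worldToObject.multDirMatrix(rayDirWorld, dirObject);    // directions: rotation only
    if (!intersectPrimitive(origObject, dirObject, hitObject)) // test in object space
        return false;
    objectToWorld.multVecMatrix(hitObject, hitWorld);       // hit point back to world space
    return true;
}
```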
The lesson Matrix Inverse in the Mathematics and Physics of Computer Graphics section will teach how to compute the inverse of a matrix (only available in the old version of Scratchapixel for now). Developing even a basic renderer without being able to use matrices and their inverse would be quite limited, so we will be providing some code in this lesson for doing so. You can use this code without worrying too much about how it works and read this advanced lesson another time if you don't feel ready yet.
## Determinant of a Matrix
This part of the lesson will be written later.
|
# A Does the MWI require "creation" of multiple worlds?
#### stevendaryl
Staff Emeritus
... needed (and meaningful) only in MWI.
That's actually not true. We have to make similar assumptions in classical physics, but they are just not made explicit.
#### A. Neumaier
That's actually not true. We have to make similar assumptions in classical physics, but they are just not made explicit.
No. In classical physics, we approximate probabilities by relative frequencies, in the same way as we approximate exact positions by measured positions. By regarding a relative frequency as an approximate measurement of the exact probability, no assumption about alternative worlds need to be made.
#### stevendaryl
Staff Emeritus
No. In classical physics, we approximate probabilities by relative frequencies
How do you know that's a good approximation? You don't. In classical probabilities, a sequence of flips of a fair coin can give you a relative frequency of anything between 0 and 1. We assume that if we flip enough times, then the relative frequency will settle down to 1/2 in the case of a fair coin. But that is the assumption that our world is a "typical" one.
#### A. Neumaier
How do you know that's a good approximation?
How do you know it in case of high precision measurements of position? One generally assumes it without further ado, and corrects for mistakes later.
In classical probabilities, a sequence of flips of a fair coin can give you a relative frequency of anything between 0 and 1.
In theory but not in practice. If one flips a coin 1000 times and finds always head, everyone assumes that the coin, or the flips, or the records of them have been manipulated, and not that we were lucky or unlucky enough to observe a huge statistical fluke.
We draw conclusions about everything we experience based on observed relative frequencies on a sample of significant size, and quantify our remaining uncertainty by statistical safeguards (confidence intervals, etc.), well knowing that these sometimes fail. Errare humanum est.
But that is the assumption that our world is a "typical" one.
Nobody ever before the advent of MWI explained the success of our statistical reasoning by assuming that our world is a typical one. Indeed, if there are other worlds, we cannot have an objective idea at all about what happens in them, only pure guesswork - all of them might have completely different laws from what we observe in ours. Hence any statements about the typicalness of our world are heavily biased towards what we find typical in our only observable world.
#### stevendaryl
Staff Emeritus
How do you know it in case of high precision measurements of position? One generally assumes it without further ado, and corrects for mistakes later.
I can't tell whether you actually have a disagreement, or not.
In theory but not in practice. If one flips a coin 1000 times and finds always head, everyone assumes that the coin, or the flips, or the records of them have been manipulated, and not that we were lucky or unlucky enough to observe a huge statistical fluke.
That's the assumption that our world is "typical". So you're both making that assumption and denying it, it seems to me.
#### A. Neumaier
That's the assumption that our world is "typical". So you're both making that assumption and denying it, it seems to me.
No.
In common English, to call something typical means that one has seen many similar things of the same kind, and only a few were very different from the typical instance. So one can call a run of coin flips typical if its frequency of heads is around 50% and atypical if it was a run where the frequency is outside the $5\sigma$ threshold required, e.g., for proofs of a new particle (see https://physics.stackexchange.com/questions/31126/ ), with a grey zone in between.
This is the sense I am using the term. All this happens within a single world. It is not the world that is typical but a particular event or sequence of events.
But I have no idea what it should mean for the single world we have access to to be ''typical''. To give it a meaning one would have to compare it with speculative, imagined, by us unobservable, other worlds. Thus calling a world typical is at best completely subjective and speculative, and at worst completely meaningless.
#### Derek P
No.
In common English, to call something typical means that one has seen many similar things of the same kind, and only a few were very different from the typical instance. So one can call a run of coin flips typical if its frequency of heads is around 50% and atypical if it was a run where the frequency is outside the $5\sigma$ threshold required, e.g., for proofs of a new particle (see https://physics.stackexchange.com/questions/31126/ ), with a grey zone in between.
This is the sense I am using the term. All this happens within a single world. It is not the world that is typical but a particular event or sequence of events.
But I have no idea what it should mean for the single world we have access to to be ''typical''. To give it a meaning one would have to compare it with speculative, imagined, by us unobservable, other worlds. Thus calling a world typical is at best completely subjective and speculative, and at worst completely meaningless.
Just remember, hair-splitting is irrelevant to world-splitting.
Funnily enough I can understand Steven's language in what appears, admittedly to my vague sort of mind, to be perfectly well-defined terms. Personally I translate "typical" into something useful about confidence limits.
Last edited:
#### almostvoid
Gold Member
There is no collapse in Many Worlds.
aren't the many worlds theoretical-
#### almostvoid
Gold Member
??? @StevenDarryl was describing the smooth evolution of the emergent worlds. It was not even remotely a reformulation of MWI.
You may believe so but MWI asserts exactly the opposite.
See this article - but only if you don't mind Vongher's provocative style.
That article above ("See this article") fails simply because the use of Wikipedia makes research infotainment. Plus a lot of thought experiments. Neumaier has it spot on.
"Does the MWI require "creation" of multiple worlds?"
|
# Resources
## Publications
• ### Small Obstacle in a Large Polar Flock
Author(s): Joan Codina, Benoît Mahault, Hugues Chaté, Jure Dobnikar, Ignacio Pagonabarraga, and Xia-qing Shi We show that arbitrarily large polar flocks are susceptible to the presence of a single small obstacle. In a wide region of parameter space, the obstacle triggers counterpropagating dense bands leading to reversals of the flow. In very large systems, these bands interact, yielding a never-ending cha… [Phys. Rev. Lett. 128, 218001] Published...
• ### Continuum Field Theory for the Deformations of Planar Kirigami
Author(s): Yue Zheng, Imtiar Niloy, Paolo Celli, Ian Tobasco, and Paul Plucinsky Mechanical metamaterials exhibit exotic properties that emerge from the interactions of many nearly rigid building blocks. Determining these properties theoretically has remained an open challenge outside a few select examples. Here, for a large class of periodic and planar kirigami, we provide a co… [Phys. Rev. Lett. 128, 208003] Published Fri May 20, 2022
• ### Susceptibility of Polar Flocks to Spatial Anisotropy
Author(s): Alexandre Solon, Hugues Chaté, John Toner, and Julien Tailleur We study the effect of spatial anisotropy on polar flocks by investigating active $q$-state clock models in two dimensions. In contrast to the equilibrium case, we find that any amount of anisotropy is asymptotically relevant, drastically altering the phenomenology from that of the rotationally inva… [Phys. Rev. Lett. 128, 208004] Published Fri May 20, 2022
• ### Thermal Fluctuations of Singular Bar-Joint Mechanisms
Author(s): Manu Mannattil, J. M. Schwarz, and Christian D. Santangelo A bar-joint mechanism is a deformable assembly of freely rotating joints connected by stiff bars. Here we develop a formalism to study the equilibration of common bar-joint mechanisms with a thermal bath. When the constraints in a mechanism cease to be linearly independent, singularities can appear … [Phys. Rev. Lett. 128, 208005] Published Fri May 20, 2022
• ### Collective Drifts in Vibrated Granular Packings: The Interplay of Friction and Structure
Author(s): A. Plati and A. Puglisi We simulate vertically shaken dense granular packings with horizontal periodic boundary conditions. A coordinated translating motion of the whole medium emerges when the horizontal symmetry is broken by disorder or defects in the packing and the shaking is weak enough to conserve the structure. We a… [Phys. Rev. Lett. 128, 208001] Published Tue May 17, 2022
• ### Softening and Residual Loss Modulus of Jammed Grains under Oscillatory Shear in an Absorbing State
Author(s): Michio Otsuki and Hisao Hayakawa From a theoretical study of the mechanical response of jammed materials comprising frictionless and overdamped particles under oscillatory shear, we find that the material becomes soft, and the loss modulus remains nonzero even in an absorbing state where any irreversible plastic deformation does no… [Phys. Rev. Lett. 128, 208002] Published Tue May 17, 2022
• ### Elastohydrodynamic Synchronization of Rotating Bacterial Flagella
Author(s): Maria Tătulea-Codrean and Eric Lauga To rotate continuously without jamming, the flagellar filaments of bacteria need to be locked in phase. While several models have been proposed for eukaryotic flagella, the synchronization of bacterial flagella is less well understood. Starting from a reduced model of flexible and hydrodynamically c… [Phys. Rev. Lett. 128, 208101] Published Tue May 17, 2022
|
# The Finite Element Method - Fundamentals - Strong and Weak Form for 1D problems [5]
#1
After a long break I am back with a new interesting post about the Weak and Strong forms in the Finite Element Method!
## Introduction
The mathematical models of heat conduction and elastostatics covered in Chapter 2 of this series consist of (partial) differential equations with initial conditions as well as boundary conditions. This is also referred to as the so-called Strong Form of the problem. In this chapter we shall therefore discuss the formulation in terms of so-called strong and weak forms. To facilitate this, we consider simple differential equations, which turn out to govern one-dimensional heat flow as well as other physical phenomena like elastic bars, flexible strings, etc. To obtain a firm background, it is convenient first to establish this differential equation. So let's get started!!!
The partial differential equations in the last paragraph are second order partial differential equations. This demands a high degree of smoothness for the solution u(x). That means that the second derivative of the displacement has to exist and has to be continuous! This also implies requirements for parameters that are not influenceable like the geometry (sharp edges) and material parameters (different Young’s modulus in a material).
The idea now is to formulate the continuous problem as an integral!
## The strong form for an axially loaded bar
Consider the static response of an elastic bar with variable cross section as shown in the picture above. This is an example of a problem in linear stress analysis or linear elasticity, where we seek to find the stress distribution \sigma(x) in the bar. The stress will result from the deformation of the body, which is characterized by the displacement of points in the body, u(x). This displacement implies a strain denoted by \epsilon(x). You can see in the picture that the body is subjected to a body force or distributed loading b(x) (units are force per length); the body force could, for example, be due to gravity. Furthermore, loads can be prescribed at the ends of the bar, where the displacement is not prescribed \rightarrow these loads are called tractions and denoted by \overline{t} (units are force per area \rightarrow multiplied by an area it gives us the applied force).
The bar must satisfy the following conditions:
1. Equilibrium must be fulfilled
2. Stress-Strain law must be satisfied. \sigma(x) = E(x)\epsilon(x)
3. Displacement field must be compatible
4. Strain-Displacement equations must be satisfied
The differential equation of this bar can be obtained from equilibrium of external forces b(x) as well as the internal forces p(x) acting on the body in the axial direction (along x-axis).
Summing the forces in x-direction:
-p(x) + b \left(x + \frac{\Delta x}{2} \right)\Delta x + p(x + \Delta x) = 0
Rearranging the terms:
\frac{p(x+\Delta x) - p(x)}{\Delta x}+ b \left(x + \frac{\Delta x}{2} \right)= 0
The limit of this equation with \Delta x \rightarrow 0 makes the first term the derivative dp/dx and the second term becomes b(x). So we can write:
\frac{dp}{dx}+ b(x) = 0 \tag{1}
This equation expresses the equilibrium equation in terms of the internal force p. Stress is defined as the internal force divided by the cross-sectional area:
\sigma(x) = \frac{p(x)}{A(x)} \tag{2}
The strain-displacement equation is obtained by:
\epsilon(x) = \frac{elongation}{original length} = \frac{u(x+\Delta x) - u(x)}{\Delta x}
Taking the limit of above for \Delta x \rightarrow 0, we see that:
\epsilon(x) = \frac{du}{dx} \tag{3}
The stress-strain law, also known as Hooke’s law has already been introduced in earlier chapters:
\sigma(x) = E(x)\epsilon(x) \tag{4}
Substituting (3) into (4), using (2) to express the internal force as p = A\sigma, and substituting the result into (1) yields:
\frac{d}{dx} \left(AE\frac{du}{dx} \right) + b = 0, \qquad 0 < x < l \tag{5}
Equation (5) is a second-order ordinary differential equation. u(x) is the dependent variable, which is the unknown function, and x the independent variable. Equation (5) is a specific form of equation (1). Equation (1) applies to both linear and nonlinear materials whereas (5) assumes linearity in the definition of the strain (3) and the stress-strain law (4). Compatibility is satisfied by requiring the displacement to be continuous.
To solve the differential equation, we need to prescribe boundary conditions at the two ends of the bar. At x = l , the displacement, u(x=l), is prescribed; at x = 0 , the force per unit area, or traction, denoted by \overline t, is prescribed. We write these conditions as:
\sigma(0) = \left(E\frac{du}{dx} \right)_{x=0} = \frac{p(0)}{A(0)} = -\overline t \\ \qquad u(l) = \overline u \tag{6}
Note that the lines above the letters indicate a prescribed boundary value.
The traction \overline t has the same units as stress (force/area), but its sign is positive when it acts in the positive x-direction regardless of which face it is acting on, whereas the stress is positive in tension and negative in compression, so that on a negative face a positive stress corresponds to a negative traction.
The governing differential equation (5) along with the boundary conditions (6) is called the strong form of the problem.
To summarize, the strong form consists of the governing equation and the boundary conditions, which for this example are
(a) \qquad \frac{d}{dx} \left(AE\frac{du}{dx} \right) + b = 0, \qquad 0 < x < l
(b) \qquad \sigma(x=0) = \left(E\frac{du}{dx} \right)_{x=0} = -\overline t \tag{7}
(c) \qquad u(x=l) = \overline u
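As a quick check of the strong form, consider the special case of constant A, E and a constant body force b. Integrating (5) once gives
AE\frac{du}{dx} = -bx + C_1
and the traction condition (7b), \left(E\frac{du}{dx}\right)_{x=0} = -\overline t, fixes the constant to C_1 = -A\overline t. Integrating again and using the displacement condition (7c), u(l) = \overline u, yields
u(x) = \overline u + \frac{\dfrac{b}{2}\left(l^2 - x^2\right) + A\,\overline t\,(l - x)}{AE}
One can verify directly that this u(x) satisfies (7a), (7b) and (7c), so it is the solution of the strong form for this special case.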
## The weak form (1D)
To develop the finite element formulation, the partial differential equations must be restated in an integral form called the weak form. The weak form and the strong form are equivalent! In stress analysis the weak form is called the principle of virtual work.
We start by multiplying the governing equation (7a) and the traction boundary condition (7b) by an arbitrary function w(x) and integrating over the domains on which they hold: for the governing equation, the pertinent domain is [0,l]. For the traction boundary, it is the cross-sectional area at x = 0 (no integral is needed since this condition holds only at a point, but we multiply it by A). The results are:
(a) \qquad \int_0^l w\left[\frac{d}{dx} \left(AE\frac{du}{dx}\right) + b\right] dx = 0 \qquad \forall w \tag{13}
(b) \qquad \left(wA\left(E\frac{du}{dx} + \overline t \right) \right)_{x=0} = 0 \qquad \forall w
The function w is called weight function or test function. In the above, \forall w denotes that w(x) is an arbitrary function, i.e. (13) has to hold for all functions w(x). Arbitrariness of the weight function is crucial for the weak form. Otherwise the strong form is NOT equivalent to the weak form.
We did not enforce the boundary condition on the displacement in (13) by the weight function. It will be seen that it is easy to construct trial solutions u(x) that satisfy this boundary condition. We will also see that all weight functions satisfy
w(l) = 0 \tag{14}
In solving the weak form, a set of admissible solutions u(x) that satisfy certain conditions is considered (also called trial solutions or candidate solutions). We could use equation (13) to construct a FEM method. But since we have the second derivative of u(x) in the equation, we would need very smooth trial functions, which are difficult to construct in more than one dimension. The resulting stiffness matrix would also not be symmetric! We will transform (13) into a form containing only first derivatives. This will give us a symmetric stiffness matrix and allows us to use less smooth solutions.

We rewrite (13a) in an equivalent form:

\int_0^l w\frac{d}{dx}\left(AE\frac{du}{dx}\right) dx + \int_0^l wb\, dx = 0 \qquad \forall w \tag{15}

Applying integration by parts to equation (15):

\int_0^l w\frac{d}{dx}\left(AE\frac{du}{dx}\right) dx = \left(wAE\frac{du}{dx}\right) \Biggr|^l_0 -\int_0^l \frac{dw}{dx}AE\frac{du}{dx} dx \tag{16}

Using (16), equation (15) can be written as:

\left(wA\overbrace{E\frac{du}{dx}}^{\sigma} \right) \Biggr|^l_0 - \int_0^l \frac{dw}{dx}AE\frac{du}{dx} dx + \int_0^l wb\, dx = 0 \qquad \forall w \quad \text{with} \quad w(l) = 0 \tag{17}

We note that by the stress-strain law and strain-displacement equations, the overbraced boundary term is the stress \sigma:

\left(wA\sigma \right) \Biggr|^l_0 - \int_0^l \frac{dw}{dx}AE\frac{du}{dx} dx + \int_0^l wb\, dx = 0 \qquad \forall w \quad \text{with} \quad w(l) = 0 \tag{18}

The first term in equation (18) vanishes at x=l since we assumed w(l)=0; for that reason it is useful to construct weight functions that vanish on prescribed displacement boundaries. Though the term looks insignificant, it would lead to loss of symmetry in the final equation. From (13b), we can see that the remaining boundary term, evaluated at x=0, equals (wA\overline t)_{x=0}, so equation (18) becomes:

(wA\overline t)_{x=0} - \int_0^l \frac{dw}{dx}AE\frac{du}{dx} dx + \int_0^l wb\, dx = 0 \qquad \forall w \quad \text{with} \quad w(l) = 0 \tag{19}

To summarize the approach: We multiplied the governing equation and traction boundary condition by an arbitrary, smooth weight function and integrated the products over the domain where they hold. We also transformed the integral so that the derivatives are of lower order. The crux of this approach: a trial solution that satisfies the developed equation for all smooth w(x) with w(l)=0 is the solution. We obtain the solution as follows: find u(x) among the smooth trial solutions satisfying u(l)=\overline u such that

\int_0^l \frac{dw}{dx}AE\frac{du}{dx} dx = (wA\overline t)_{x=0} + \int_0^l wb\, dx \qquad \forall w \quad \text{with} \quad w(l) = 0 \tag{20}

Equation (20) is called the weak form. The name states that solutions to the weak form do not need to be as smooth as solutions of the strong form \rightarrow weaker continuity requirements.

You have to keep in mind that the solution satisfying equation (20) is also the solution of the strong counterpart of this equation. Also remember that the trial solutions u(x) must satisfy the displacement boundary conditions. This is an essential property of the trial solutions and that is why we call those boundary conditions essential boundary conditions. The traction boundary conditions emanate naturally from equation (20), which means that the trial solutions do not need to be constructed to satisfy the traction boundary condition. These boundary conditions are therefore called natural boundary conditions.

A trial solution that is smooth AND satisfies the essential boundary conditions is called admissible. A weight function that is smooth AND vanishes on essential boundaries is admissible. When weak forms are used to solve a problem, the trial solutions and weight functions must be admissible. Also notice that equation (20) is symmetric in w and u, which will lead to a symmetric stiffness matrix. The highest order derivative that appears in this equation is of first order!

## Continuity

In this chapter I will shortly explain the differences and concepts of smoothness, i.e. continuity.
A function is called a C^n function if its derivatives of order j for 0\leq j \leq n exist and are continuous functions in the entire domain. We will be concerned mainly with C^0, C^{-1} and C^1 functions. Examples are illustrated in the picture below. You can see that a C^0 function is piecewise continuously differentiable, i.e. its first derivative is continuous except at selected points. The derivative of a C^0 function is a C^{-1} function.

Examples:
1. If the displacement is a C^0 function, the strain is a C^{-1} function.
2. If the temperature field is a C^0 function, the flux is a C^{-1} function if the conductivity is C^0.

The derivative of a C^n function is a C^{n-1} function. As the superscript increases, the smoothness increases as well. So a C^{-1} function can have kinks and jumps. A C^0 function has no jumps but can have kinks. A C^1 function has no kinks or jumps. In the literature, jumps are often called strong discontinuities, whereas kinks are called weak discontinuities.

CAD databases usually employ functions that are at least C^1; the most common are spline functions. Otherwise, the surface would possess kinks stemming from the function description, e.g. in a car there would be kinks in the sheet metal wherever C^1 continuity is not observed. Finite elements usually employ C^0 functions.

### The equivalence between the weak and strong forms

In the previous section, we constructed the weak and strong forms. Now we will show the equivalence of both forms by showing that the weak form implies the strong form. This will ensure that when we solve the weak form, we also have a solution of the strong form. We can simply prove that the weak form implies the strong form by reversing the steps we used to obtain the weak form. So instead of using integration by parts to eliminate the 2nd derivative of u(x), we reverse the formula to obtain an integral with a higher derivative and a boundary term. For this purpose, interchange the terms given in equation (17):

\int_0^l \frac{dw}{dx}AE\frac{du}{dx} dx = \left(wAE\frac{du}{dx}\right) \Biggr|^l_0 - \int_0^l w \frac{d}{dx}\left(AE\frac{du}{dx}\right) dx

Substituting the above in equation (20) and placing the integral terms on the left-hand side and the boundary terms on the right-hand side gives:

\int_0^l w \left[\frac{d}{dx}\left(AE\frac{du}{dx}\right) + b \right] dx + wA(\overline{t} + \sigma)_{x=0} = 0 \tag{21}

The quintessence making the proof possible is the arbitrariness of w(x). First we let

w = \psi(x) \left[\frac{d}{dx}\left(AE\frac{du}{dx}\right) + b \right] \tag{22}

where \psi(x) is smooth, \psi(x) > 0 on 0 < x < l and \psi(x) vanishes on the boundaries. An example of a function satisfying the above requirements is \psi(x) = x(l-x). Because of how \psi(x) is constructed, it follows that w(l) = 0, so the requirement that w=0 on the prescribed displacement boundary, the essential boundary, is met. Inserting (22) in (21) yields:

\int_0^l \psi(x) \left[\frac{d}{dx}\left(AE\frac{du}{dx}\right) + b \right]^2 dx = 0 \tag{23}

The boundary term vanishes because we have constructed the weight function so that w(0)=0. As the integrand in (23) is the product of a positive function and the square of a function, it cannot be negative at any point in the problem domain. The only way equality can be met in equation (23) is if the integrand is zero at every point!
It follows that:

\frac{d}{dx}\left(AE\frac{du}{dx}\right) + b = 0, \qquad 0<x<l \tag{24}

which is exactly the strong form we had in the beginning \rightarrow Equation (5). From (24) it follows that the integral in (21) vanishes, so we are left with

(wA(\overline{t} + \sigma))_{x=0} = 0 \qquad \forall w \quad \text{with} \quad w(l)=0 \tag{25}
As the weight function is arbitrary, we select it such that w(0) = 1 and w(l)=0. It is very easy to construct such a function: w(x) = (l-x)/l is a suitable weight function. Any other smooth function that you can draw in the interval [0,l] that vanishes at x=l is also suitable!
Since the cross-sectional area A(0)\neq 0 and w(0) \neq 0, it follows that

\sigma(0) = -\overline{t} \tag{26}

which is the natural boundary condition. \rightarrow Equation (7b)
The last remaining equation of the strong form, the displacement boundary condition (7c), is satisfied by all trial solutions by construction, which can be seen from (20) as we required u(l)=\overline u. Therefore, we can conclude that the trial solution that satisfies the weak form also satisfies the strong form.
There are also other ways to prove the equivalence which are not covered in this “short” course but can be found in the book by Fish & Belytschko or in the book by Ottosen & Petersson.
## Minimum Potential Energy
We can also motivate the FEM with a variational principle. In the case of one-dimensional elastostatics, the principle of minimum potential energy holds for conservative systems. The equilibrium position is stable if the potential energy of the system \Pi is a minimum. Every infinitesimal disturbance of the stable position leads to an energetically unfavourable state and implies a restoring reaction. The total potential energy \Pi of a system consists of the work of the inner forces (strain energy)
A_i = \int_0^l \underbrace{\frac{1}{2} E(x)A(x) \left(\frac{du}{dx} \right)^2}_{\frac{1}{2}\sigma\epsilon A(x)} dx
and the work of the external forces
A_a = A(x)\overline{t}(x)u(x)|_{\Gamma _t}
The total energy is:
\Pi = A_i - A_a
The condition for a minimum of the total potential energy is:
\Pi(u) - \Pi(u^*) \geq 0
where u is an arbitrary solution which is compatible with the boundary conditions. u^* is the displacement field for which a minimum will be generated.
When disturbing the solution we get:
u = u^* + \lambda w(x)
where \lambda \in \mathbb{R} is the amplitude of the disturbance and w(x) the test function with w=0 on the Dirichlet boundary. The test functions w(x) have to be from the Sobolev space \mathcal{H}_0^1.
Since \Pi is minimal for u^* we have:
\Pi(u^* + \lambda w) - \Pi(u^*) \geq 0
This also has to hold for infinitesimal disturbances \lambda \rightarrow 0, \lambda \neq 0. The necessary condition for a minimum is then:
\lim\limits_{\lambda \rightarrow 0}{\frac{\Pi(u^* + \lambda w) - \Pi(u^*)}{\lambda}} = 0 \tag{27}
The expression above is also known as a Gâteaux derivative.
A_i(u^* + \lambda w) = \int_0^l \frac{1}{2} E(x)A(x) \left(\frac{d(u^* +\lambda w)}{dx}\right)^2 dx
which gives us with the binomial formula:
\int_0^l \frac{1}{2} E(x)A(x) \left[\left(\frac{du^*}{dx}\right)^2 + 2\lambda\frac{du^*}{dx}\frac{dw}{dx} +\lambda^2 \left(\frac{dw}{dx}\right)^2\right] dx
Formula (27) with \lambda \rightarrow 0 gives us:
0 \overset{!}{=} \lim\limits_{\lambda \rightarrow 0}{\frac{\Pi (u^* + \lambda w) - \Pi (u^*)}{\lambda}}
which gives us after some easy calculations
Since \lambda \rightarrow 0 and \lambda \neq 0 apply at the same time we have an equivalence to the strong form. So if a function u^*(x) is solving the minimization problem it automatically fulfils the weak form and vice versa.
## Concluding remarks
We can show for various physical problems that the methods we used are applicable since they all lead to the same type of differential equation and boundary condition and together are referred to as the strong form of the problem. We also showed that this so called strong form could be reformulated into an equivalent weak form.
I emphasized that the weak form is the one on which the FE approach is based. The reason is that the order of differentiation of the unknown is lower in the weak form than in the strong form, thus facilitating the approximation process.
The weak form applies without changes to continuous as well as discontinuous problems, implying the important conclusion that the FE method also holds for continuous as well as discontinuous problems.
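To make that last point concrete, here is a minimal sketch (not part of the original derivation) of how the weak form (20) turns into a linear system once we choose piecewise linear (C^0) shape functions. Constant A, E and b, a uniform mesh and simple two-node elements are assumed purely to keep the code short; the sample values are arbitrary.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const double A = 1.0, E = 1.0, b = 1.0, l = 1.0, tbar = 1.0, ubar = 0.0; // sample data
    const int ne = 10;                 // number of elements
    const int nn = ne + 1;             // nodes 0..ne, node 0 at x = 0, node ne at x = l
    const double h = l / ne;
    const double ke = A * E / h;       // element stiffness: ke * [ 1 -1; -1 1 ]

    // Assemble the global tridiagonal system K u = F; node ne carries the essential
    // BC u(l) = ubar and is eliminated, the traction enters the load at node 0.
    std::vector<double> diag(nn, 0.0), off(nn, 0.0), F(nn, 0.0);
    for (int e = 0; e < ne; ++e) {     // element e connects nodes e and e+1
        diag[e] += ke;  diag[e + 1] += ke;
        off[e]  -= ke;                             // off[e] couples nodes e and e+1
        F[e]     += b * h / 2.0;                   // consistent load for constant b
        F[e + 1] += b * h / 2.0;
    }
    F[0] += A * tbar;                  // natural (traction) boundary condition at x = 0
    F[ne - 1] -= off[ne - 1] * ubar;   // move the known u(l) = ubar to the right-hand side

    // Thomas algorithm on the ne free unknowns u_0 .. u_{ne-1}
    const int n = ne;
    std::vector<double> c(n, 0.0), d(n, 0.0), u(nn, 0.0);
    c[0] = off[0] / diag[0];
    d[0] = F[0] / diag[0];
    for (int i = 1; i < n; ++i) {
        double m = diag[i] - off[i - 1] * c[i - 1];
        c[i] = off[i] / m;
        d[i] = (F[i] - off[i - 1] * d[i - 1]) / m;
    }
    u[n - 1] = d[n - 1];
    for (int i = n - 2; i >= 0; --i) u[i] = d[i] - c[i] * u[i + 1];
    u[ne] = ubar;

    for (int i = 0; i < nn; ++i) std::printf("x = %.2f  u = %.6f\n", i * h, u[i]);
    return 0;
}
```

With the sample data above, the nodal values should land very close to the analytic solution of the strong form worked out earlier for constant A, E and b.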
Sources:
• Introduction to the Finite Element Method - Niels Ottosen & Hans Petersson
• A First Course in Finite Elements - Jacob Fish & Ted Belytschko
• Lectures of KIT Karlsruhe (Script: Einführung in die Finite Elemente Methode Sommersemester 2014)
This was the fifth part for FEM fundamentals.
If you find any mistakes or have wishes for the next chapters please let me know.
If you would like homework for this chapter, just comment and I will set up some small problems that you can solve, and we can discuss them in this thread!
Scientia potentia est!
First chapter: The Finite Element Method - Fundamentals - Introduction [1]
Second chapter: The Finite Element Method - Fundamentals - Physical Problems [2]
Third chapter: The Finite Element Method - Fundamentals - Matrix Algebra [3]
Fourth chapter: The Finite Element Method - Fundamentals - Direct Approach for Discrete Systems [4]
A collection of FEA resources
Mathematical Fundamentals - Gradient, Gauss' divergence theorem and Green-Gauss theorem
#3
#4
If you have messages on the page saying [Math Processing Error] or some formulae are not displayed correctly just press F5. Enjoy reading!
|
A Bad Ohmen from The College Board Gods (Gaelan)
## Opening Questions
• When we say a battery has internal resistance, what does that mean?
• How does current change across resistors in parallel?
## Objectives
I will be able to:
• relate current and voltage for a resistor.
• identify on a circuit diagram whether resistors are in series or in parallel.
• calculate the equivalent resistance of a network of resistors that can be broken down into series and parallel combinations.
• calculate the voltage, current and power dissipation for any resistor in such a network of resistors connected to a single power supply.
• calculate the terminal voltage of a battery of specified emf and internal resistance from which a known current is flowing.
## Words/Formulae For the Day
• $\Delta V=IR$
• $R_T=\sum R_i$
• Current: The amount of charge flowing through a circuit.
• Voltage/EMF: Electric potential—the ability to make charges go.
• Internal resistance: Resistance within a battery itself.
## Work O' The Day
Alright, so we've got this little circuit. There's some gunk in the problem about internal resistance, but we don't really care about it for now—as far as this part of the problem is concerned, a battery with an internal resistance and an ideal battery next to a resistor (like it's drawn in the circuit diagram) are the exact same thing.
With that out of the way, let's give our first question a read.
Derive an expression for the measured voltage $V$. Express your answer in terms of $R$, $\varepsilon$, $r$, and physical constants, as appropriate.
The College Board Gods have spoken, and we shall obey unquestioningly. Let's start by calculating the total resistance of the circuit. We've got two resistors in series, so we just add them up and get our equivalent resistance:
$R_T=R+r$
We'll also need Ohm's law:
$I=\frac{\Delta V}{R}$
Let's plug $\varepsilon$ in for $\Delta V$ and $R_T$ in for $R$.
$I=\frac{\varepsilon}{R_T}$
Because our resistors are in series, $I$ is the same everywhere in the circuit. So, determining the voltage across our main resistor (resistance $R$) should be as simple as a second visit to our good friend Mr. Ohm.
$V=IR$
Let's plug our equation in for $I$:
$V=\frac{\varepsilon R}{R_T}$
Note that while the R's look like they might cancel out, they don't (because one is the resistance of both resistors, and one is just that of our main one).
Finally, let's plug $R_T$ back in so that it's just in terms of their variables:
$V=\frac{\varepsilon R}{R+r}$
Alright, that's the first part done. Are the gods pleased?
Rewrite your expression from part (a)-i to express $\frac{1}{V}$ as a function of $\frac{1}{R}$.
Not yet, it seems. This demand seems simple enough, just some algebraic manipulation. They seem to want inverses, so taking the reciprocal of both sides seems like a good start.
$\frac{1}{V}=\frac{R+r}{R\varepsilon}$
That looks about right. Let's simplify a bit by splitting up the fraction:
$\frac{1}{V}=\frac{R}{R\varepsilon}+\frac{r}{R\varepsilon}$
Now, we actually can cancel out some R's, so we do:
$\frac{1}{V}=\frac{1}{\varepsilon}+\frac{r}{R\varepsilon}$
Finally, they asked us to give it in terms of $\frac{1}{R}$, so let's separate that out so that the College Board Gods (or their lowly graders) have no excuse to mark us down.
$\frac{1}{V}=\frac{1}{\varepsilon}+\frac{r}{\varepsilon}\frac{1}{R}$
On the grid below, plot data points for the graph of $\frac{1}{V}$ as a function of $\frac{1}{R}$. Clearly scale and label all axes, including units as appropriate. Draw a straight line that best represents the data.
Graphing? The gods must be especially angry today. Of course, we obey unquestioningly, graphing the points from the table and drawing a trendline.
(Note: For technical reasons, the graph above isn't labelled. On the real test, it would need labels for full points.)
Use the straight line from part (b) to obtain values for the following: $\varepsilon$, $r$.
When all you've got's a TI-84, everything looks like a regression problem. So let's do just that. We end up with this equation:
$\frac{1}{V}=(0.0493)\frac{1}{R}+0.0817$
Hey, that looks familiar! Remember this from earlier?
$\frac{1}{V}=\frac{1}{\varepsilon}+\frac{r}{\varepsilon}\frac{1}{R}$
If we match up the variables, we can figure some things out:
$\frac{1}{\varepsilon}=0.0817$
$\frac{r}{\varepsilon}=0.0493$
Let's solve for $\varepsilon$:
$\varepsilon = 12.2\ \text{V}$
And plug that in to find $r$.
$\frac{r}{12.2}=0.0493$
$r=(0.0493)(12.2)=0.601\ \Omega$
Using the results of the experiment, calculate the maximum current that the battery can provide.
Unsure if the College Board Gods will ever be satisfied, we trudge on through the darkness, starting with another look at Ohm's law.
$V=IR$
Or, in this case, $I=\frac{V}{R}$
We were asked to calculate the "maximum current the battery can provide." We know that in this case the voltage we're talking about is $\varepsilon$, so let's plug that in:
$I=\frac{\varepsilon}{R}$
OK, so maybe if we can maximize $I$, the College Board Gods will have mercy. $\varepsilon$ is constant, so it looks like minimizing $R$ is our best bet. They're asking about the battery itself, so we don't care about the other resistor in the circuit. However, we still have the internal resistance in the battery itself, $r$, and we can't get any lower than that. Therefore,
$I=\frac{\varepsilon}{r}=\frac{12.2}{0.601}=20.3\ \text{A}$
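If you want to double-check the arithmetic, a tiny throwaway program (with the slope and intercept taken from the fit above) does the job:

```cpp
#include <cstdio>

int main() {
    const double slope = 0.0493;          // r / emf, from the linear fit
    const double intercept = 0.0817;      // 1 / emf, from the linear fit
    const double emf = 1.0 / intercept;   // ~12.2 V
    const double r = slope * emf;         // ~0.60 ohms
    const double iMax = emf / r;          // ~20.3 A
    std::printf("emf = %.1f V, r = %.3f ohm, I_max = %.1f A\n", emf, r, iMax);
    return 0;
}
```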
A voltmeter is to be used to determine the emf of the battery after removing the battery from the circuit. Two voltmeters are available to take this measurement—one with low internal resistance and one with high internal resistance. Indicate which voltmeter will provide the most accurate measurement.
The high one. There, we're done. We've pleased the—
Darn. So this is something we haven't learned yet, but luckily it's pretty simple. A voltmeter contains a resistor with its own resistance. If the voltmeter's resistance is low, a lot of current will pass through it ($I=\frac{V}{R}$ and all that), which means that the voltmeter itself will have a fairly big impact on the circuit, which could result in an inaccurate measurement. Therefore, voltmeters with a high internal resistance will be more accurate.
|
Needs and expenses never end! Nobody has ever remained content with what they have and what they are. Everyone has a desire to get more and better of everything. However, in today's age of inflation, just the monthly salary is hardly enough to meet the needs; spending on desires is just impossible. It has become important for the middle-class salaried group to have a secondary source of income, apart from their salary, which can be used to spend on desires. A low bank balance causes stress and tension for many and impacts their health as well. However, all these financial stresses can be eliminated by earning an extra amount or a second income, even if you have a full-time job.
Do you know music? Are you an expert in maths? Vedic maths? Yoga, or maybe cooking? Do you live in a residential society which has hundreds of other families? Then tuition is a perfect thing you can start, given you are good at what you claim to be. You can always dedicate 2 hours (if you really have them), do some basic advertising in your apartment or nearby places, and take students to teach at home. The best part of this kind of tutoring is that you refresh your fading knowledge, earn some money from the comfort of your home, and you fill time which otherwise goes into unproductive things. myprivatetutor.com is a good place to register and start with.
Hi there. I am new here, I live in Norway, and I am working my way to FI. I am 43 years now and started way to late….. It just came to my mind for real 2,5years ago after having read Mr Moneymoustache`s blog. Fortunately I have been good with money before also so my starting point has been good. I was smart enough to buy a rental apartment 18years ago, with only 12000$in my pocket to invest which was 1/10 of the price of the property. I actually just sold it as the ROI (I think its the right word for it) was coming down to nothing really. If I took the rent, subtracted the monthly costs and also subtracted what a loan would cost me, and after that subtracted tax the following numbers appeared: The sales value of the apartment after tax was around 300000$ and the sum I would have left every year on the rent was 3750$……..Ok it was payed down so the real numbers were higher, but that is incredibly low returns. It was located in Oslo the capital of Norway, so the price rise have been tremendous the late 18 years. I am all for stocks now. I know they also are priced high at the moment which my 53% return since December 2016 also shows……..The only reason this apartment was the right decision 18 years ago, was the big leverage and the tremendous price growth. It was right then, but it does not have to be right now to do the same. For the stocks I run a very easy in / out of the marked rule, which would give you better sleep, and also historically better rates of return, but more important lower volatility on you portfolio. Try out for yourself the following: Sell the S&P 500 when it is performing under its 365days average, and buy when it crosses over. I do not use the s&P 500 but the obx index in Norway. Even if you calculate in the cost of selling and buying including the spread of the product I am using the results are amazing. I have run through all the data thoroughly since 1983, and the result was that the index gave 44x the investment and the investment in the index gives 77x the investment in this timeframe. The most important findings though is what it means to you when you start withdrawing principal, as you will not experience all the big dips and therefore do not destroy your principal withdrawing through those dips. I hav all the graphs and statistics for it and it really works. The “drawbacks” is that during good times like from 2009 til today you will fall a little short of the index because of some “false” out indications, but who cares when your portfolio return in 2008 was 0% instead of -55%…….To give a little during good times costs so little in comparison to the return you get in the bad times. All is of course done from an account where you do not get taxed for selling and buying as long as you dont withdraw anything. I’ve been into home décor lately and I had to turn to Etsy to find exactly what I wanted. I ended up purchasing digital files of the artwork I wanted printed out! The seller had made a bunch of wall art, digitized, and listed it on Etsy for instant download. There are other popular digital files on Etsy as well such as monthly planners. If you’re into graphic design this could be an amazing passive income idea for you. Or, there is another theory for your primary salary – generate enough to have a little excess cash flow, but do it at a place that you can work stress free and have time to dabble in other projects. A good friend of mine has this setup – he works 10-5 and makes$50,000 a year. 
This allows him to easily cover all of his expenses, but the shorter hours and flexibility in his job allows him to pursue his secondary income generating ideas!
As a millennial in my mid-20’s, i’m only just starting out on my journey (to what hopefully will be at least 5 streams of income one day) and i’m trying to save all that I can to then make my money work harder and invest. It’s difficult though because a lot of people say you should be saving for retirement and have an emergency fund (which is so true) but then on the other hand, we are told to take risks and invest our money (usually in the stock market or real estate). And as a millennial it’s so hard to do both of these things sometimes.
I want to develop a passive income stream in the next 4 years, nothing grand, maybe an extra 500-1000 dollars a month, but I’m not sure how to go about it so I was wondering if you had any tips. I’m so-so as a writer, and am currently finishing up my second book (just write as a hobby), and in the past made about 30-50 dollars an hour as a free lance writer but that was a couple of years back, it was only for about 10-20 hours a month, and the gig just dried up. I just got particularly lucky with that. I’ve tried online poker as a means in the past, and which I learned A) was not passive income but hard work and B) I have an addictive personality which resulted in me losing the 4g I earned in 6 weeks over the span of 72 hours so that’s out of the picture. I also partook in some illegal selling of things when I was younger, but being a little older and wiser the risk-reward ratio for possibly ending up in Jail just doesn’t match up. I tried making three businesses (dog walking, house cleaning, and personal assistant) and while those all were succesful to varying degrees and earned me about 15-25 dollars an hour, they weren’t mobile and quiet honestly I don’t have the time to be a full time dog walker or run a house cleaning operation seeing as I’ll be in school, work, and athletics.
My returns are based on full cash purchase of the properties, as it is hard to compare the attractiveness of properties at different price ranges when only calculating down payment or properties that need very little rehab/updates. I did think about the scores assigned to each factor, but I believe tax deductions are a SIGNIFICANT factor when comparing passive income steams.
Passive income is attractive because it frees up your time so you can focus on the things you actually enjoy. A highly successful doctor, lawyer, or publicist, for instance, cannot “inventory” their profits. If they want to earn the same amount of money and enjoy the same lifestyle year after year, they must continue to work the same number of hours at the same pay rate—or more, to keep up with inflation. Although such a career can provide a very comfortable lifestyle, it requires far too much sacrifice unless you truly enjoy the daily grind of your chosen profession.
The World Travel & Tourism Council calculated that tourism generated ₹15.24 lakh crore (US$210 billion) or 9.4% of the nation's GDP in 2017 and supported 41.622 million jobs, 8% of its total employment. The sector is predicted to grow at an annual rate of 6.9% to ₹32.05 lakh crore (US$450 billion) by 2028 (9.9% of GDP).[250] Over 10 million foreign tourists arrived in India in 2017 compared to 8.89 million in 2016, recording a growth of 15.6%.[251] India earned $21.07 billion in foreign exchange from tourism receipts in 2015.[252] International tourism to India has seen a steady growth from 2.37 million arrivals in 1997 to 8.03 million arrivals in 2015. The United States is the largest source of international tourists to India, while European Union nations and Japan are other major sources of international tourists.[253][254] Less than 10% of international tourists visit the Taj Mahal, with the majority visiting other cultural, thematic and holiday circuits.[255] Over 12 million Indian citizens take international trips each year for tourism, while domestic tourism within India adds about 740 million Indian travellers.[253] In order to build an audience, you need to have a platform. You need to have something worth following and sharing; something that’s valuable to others. And that, of course, takes time. That’s not to say you can’t build a huge audience in a short amount of time. But as much as we hear about the people who’ve succeeding at doing this, we don’t hear about the millions of others who are struggling every day to get just a few more fans and followers. Some people take it automated well before the year is up. When it converts, it converts. If you target the right people and you're able to create the right message that appeals to your audience, you might just hit a home run. An automated webinar often involves the creation of a webinar funnel. That includes, not only the webinar, but also the email sequences, and possibly a self-liquidating offer, and maybe some done-for-your services and up-sells. P.S. I also fail to understand your fascination with real estate. Granted we’ve had some impressive spikes along the way, especially with once in a life time bubble we just went through. But over the long term (see Case Shiller real estate chart for last 100 years ) real estate tends to just track inflation. Why would you sacrifice stock market returns for a vehicle that historically hasn’t shown a real return? ### In my situation, I knew that I would be leaving San Diego and quitting my job many months in advance. I knew when we’d be leaving but I didn’t know where we’d be heading(since my fiancee was applying to med school). That really forced me to think outside of the box and come up with some unique ways to make money, independent of our future location. I could have sat back and hoped that she got into a school in a city where I could find work as an engineer but I didn’t want to rely on chance. Building a website still remains a viable way of earning passive income online despite it being such a competitive venture. Since the internet is saturated with blogs, an entertaining website featuring quizzes or games is a good alternate. Such websites are not too difficult to make and they are easy to promote on social media. They can attract visitors, who will spend a significant amount of time on the site, in droves. Once a site starts recording several thousand visits each day, use the Google AdSense system to start earning revenue through advertising while you relax. 
I have two major dilemmas: (1) Should I wait to start investing (at least until the end of the year, when I'll hopefully have $5k+ in savings) in things like CDs? I ask because a little over $2k doesn't seem significant enough yet to start putting my money to work (or maybe it is? that's why I'm coming to you for your advice haha), and (2) I want to invest in things like P2P and stocks, but I'm honestly a bit ignorant of how it truly works. I know the basics (high risk, returns can be volatile, returns are taxable). Do you have any advice on how I can best educate myself to start putting my savings to work?

A business thrives or fails depending on its marketing and its system for generating leads. You need leads to make sales. No audience or exposure means you won't get fresh faces checking out what your business does. Too many entrepreneurs spend all their time on the “busy work” and not enough on audience building. There are some great ways to build an audience and generate new leads:

In federal legislation, the key planks for the right to a useful and remunerative job included the National Labor Relations Act of 1935 and the Fair Labor Standards Act of 1938. After the war came the Employment Act of 1946, which created an objective for the government to eliminate unemployment, and the Civil Rights Act of 1964, which prohibited unjustified discrimination in the workplace and in access to public and private services. They remained some of the key elements of labor law. The rights to food and fair agricultural wages were assured by numerous Acts on agriculture in the United States and by the Food Stamp Act of 1964. The right to freedom from unfair competition was primarily seen to be achievable through the Federal Trade Commission and the Department of Justice's enforcement of both the Sherman Act of 1890 and the Clayton Act of 1914, with some minor later amendments. The most significant program of change occurred through Lyndon B. Johnson's Great Society. The right to housing was developed through a policy of subsidies and government building under the Housing and Urban Development Act of 1965. The right to health care was partly improved by the Social Security Act of 1965 and more recently the Patient Protection and Affordable Care Act of 2010. The Social Security Act of 1935 had laid the groundwork for protection from fear of old age, sickness, accident and unemployment. The right to a decent education was shaped heavily by Supreme Court jurisprudence, and the administration of education was left to the states, particularly with Brown v. Board of Education. A legislative framework developed through the Elementary and Secondary Education Act of 1965, and in higher education a measure of improvement began with federal assistance and regulation in the Higher Education Act of 1965.

Peer-to-peer lending ($1,440 a year): I've lost interest in P2P lending since returns started coming down. You would think that returns would start going up with a rise in interest rates, but I'm not really seeing this yet. Prosper missed its window for an initial public offering in 2015-16, and LendingClub is just chugging along. I hate it when people default on their debt obligations, which is why I haven't invested large sums of money in P2P. That said, I'm still earning a respectable 7% a year in P2P, which is much better than the stock market is doing so far in 2018!
Blogging is a great way to bring in a stream of income. Some consider blogging a passive income source, and they are pretty much dead wrong. It takes a lot of hard work and time to build your blog into a viable business. It is not a get-rich-quick scheme, but with time and patience you can easily earn a full-time income and even exceed what you make at your full-time job if you are really good.
Now I’ve been using Swagbucks for a while and have found the money works out to just under $2 an hour so this isn’t something that’s going to make you rich. You’d have to work 2,500 hours to make$5,000 so that’s about three and a half months, non-stop. The thing with Swagbucks though is you can do it when you’re doing something else so I flip through surveys and other stuff while I’m cooking dinner or flipping channels.
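Just to make that back-of-the-envelope arithmetic explicit, here is a trivial sketch using only the $2/hour and $5,000 figures quoted above; the 30.44-day average month is my own rounding assumption, not something stated in the text.

```python
# Back-of-the-envelope check of the Swagbucks figures quoted above.
hourly_rate = 2.0          # dollars per hour, from the text
target = 5_000.0           # dollars
hours_needed = target / hourly_rate
print(hours_needed)                      # 2500.0 hours
print(hours_needed / 24 / 30.44)         # roughly 3.4 months of literally non-stop work
```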
Awesome article…if this does not give somebody a clear roadmap, they probably were never going to get there in the first place! I'm kind of like you, trying to figure out where to place “new” money and maturing CDs in this low-interest environment. Rates have to go up eventually…I dream of the days when you could build a laddered bond portfolio paying 8%. I plan for a 5.5% blended rate of return, with big downside protection.
###### This is another way of ensuring regular income for a period of time. Let us say, you are uncomfortable with the idea of investing in high dividend yield stocks as they generally do not give price appreciation. Also, there is no assurance on dividend yields as dividends may fall if the profits of the company fall. Another way out is to invest the money into a debt fund and pay yourself through an SWP. Let us assume that you did the same SIP and ended up with Rs.1.41 crore at the age of 45. Now you want to pay yourself a regular income for a period of 15 years till your retirement. Here is how it will work.
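As a rough, hypothetical illustration of the SWP arithmetic described above: only the Rs. 1.41 crore corpus and the 15-year horizon come from the text; the 8% annual return on the debt fund and the month-end withdrawal timing are assumptions made purely for this sketch.

```python
# Hypothetical SWP sizing: the level month-end withdrawal that exhausts a corpus
# over a fixed horizon, assuming a constant annual return on the debt fund.
def swp_monthly_withdrawal(corpus, annual_return, years):
    r = annual_return / 12                  # assumed constant monthly return
    n = years * 12                          # number of monthly withdrawals
    # Annuity-payment formula: corpus = w * (1 - (1 + r)**-n) / r
    return corpus * r / (1 - (1 + r) ** -n)

corpus = 1.41e7                             # Rs. 1.41 crore, from the text above
w = swp_monthly_withdrawal(corpus, annual_return=0.08, years=15)  # 8% p.a. is assumed
print(f"A level monthly SWP of about Rs. {w:,.0f} runs the corpus down to zero in 15 years")
```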
We live in an exciting time. You can literally make money while you sleep. As an entrepreneur, you don't get a steady paycheck. You can create financial stability when you create multiple streams of income and make some of them passive. Use these steps and tools. Don't just rush toward the online options, because there are still a lot of opportunities offline. Create systems, and don't try to do it all alone.
When withdrawing money to live on, I don’t care how many stock shares I own or what the dividends are – I care about how much MONEY I’m able to safely withdraw from my total portfolio without running out before I die. A lot of academics have analyzed total market returns based on indices and done Monte Carlo simulations of portfolios with various asset allocations, and have come up with percentages that you can have reasonable statistical confidence of being safe.
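The Monte Carlo approach mentioned above can be sketched in a few lines. This is a toy model, not a reproduction of any particular study: the 7% mean return, 15% volatility, 4% fixed-dollar withdrawal and 30-year horizon are all illustrative assumptions, and yearly returns are drawn as independent normal draws with no inflation adjustment.

```python
import random

# Toy Monte Carlo estimate of how often a portfolio survives a fixed-dollar
# withdrawal policy. All parameters are illustrative assumptions.
def survival_rate(start=1_000_000, withdraw_rate=0.04, years=30,
                  mean_return=0.07, volatility=0.15, trials=10_000):
    withdrawal = start * withdraw_rate          # fixed dollar amount each year
    survived = 0
    for _ in range(trials):
        balance = start
        for _ in range(years):
            balance -= withdrawal               # withdraw at the start of the year
            if balance <= 0:
                break                           # ran out of money before the horizon
            balance *= 1 + random.gauss(mean_return, volatility)
        else:
            survived += 1                       # made it through every year
    return survived / trials

print(f"Estimated probability of the portfolio surviving: {survival_rate():.1%}")
```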
# Thanks for the info…I kind of figured it is really not that expensive to live if you are not an extravagant person. I could definitely figure out how to funnel expenses through a part-time business…I keep thinking along the lines that I'm going to be paying the same tax rate after retirement, but the reality is you could get pretty lean and mean if you focused on it. On a scale of 1-10, with 10 being utter panic mode, how worried are you about your “pile” lasting through a 50-year retirement now that you are a couple of years into it?
This equation implies two things. First, buying one more unit of good x implies buying $\frac{P_x}{P_y}$ fewer units of good y. So $\frac{P_x}{P_y}$ is the relative price of a unit of x in terms of the number of units of y given up. Second, if the price of x falls for a fixed $Y$, then its relative price falls. The usual hypothesis is that the quantity demanded of x would increase at the lower price, the law of demand. The generalization to more than two goods consists of modelling y as a composite good.
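For reference, the two-good budget constraint this paragraph is reasoning about can be written out explicitly. This is the standard textbook setup, with $Y$ income, $P_x$, $P_y$ the prices and $x$, $y$ the quantities, as in the text above.

```latex
% Budget constraint: all income Y is spent on the two goods.
P_x x + P_y y = Y
% Solving for y gives the budget line in the (x, y) plane:
y = \frac{Y}{P_y} - \frac{P_x}{P_y}\, x
% Its slope is -P_x/P_y: buying one more unit of x means giving up
% P_x/P_y units of y, which is exactly the relative price discussed above.
```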
|
# Humboldt-Universität zu Berlin - Mathematisch-Naturwissenschaftliche Fakultät - Institut für Mathematik
## Preprint 2013-08
Preprint series: Institut für Mathematik, Humboldt-Universität zu Berlin (ISSN 0863-0976), 2013-08
• 14M10 Complete intersections
• 14M12 Determinantal varieties
• 14Q20 Effectivity
• 14P05 Real algebraic sets
• 68W30 Symbolic computation and algebraic computation
Abstract: Let $V$ be a smooth equidimensional quasi-affine variety of dimension $r$ over $\mathbb{C}$ and let $F$ be a $(p\times s)$-matrix of coordinate functions of $\mathbb{C}[V]$, where $s\ge p+r$. The pair $(V,F)$ determines a vector bundle $E$ of rank $s-p$ over $W:=\{x\in V \mid \operatorname{rk} F(x)=p\}$. We associate with $(V,F)$ a descending chain of degeneracy loci of $E$ (the generic polar varieties of $V$ represent a typical example of this situation).

The maximal degree of these degeneracy loci constitutes the essential ingredient for the uniform, bounded-error probabilistic pseudo-polynomial-time algorithm which we are going to design and which solves a series of computational elimination problems that can be formulated in this framework.

We describe applications to polynomial equation solving over the reals and to the computation of a generic fiber of a dominant endomorphism of an affine space.
|
# A reverse isoperimetric inequality for J-holomorphic curves.
-
Jake Solomon, Hebrew University
Fine Hall 401
I'll discuss a bound on the length of the boundary of a J-holomorphic curve with Lagrangian boundary conditions by a constant times its area. The constant depends on the symplectic form, the almost complex structure, the Lagrangian boundary conditions and the genus. A similar result holds for the length of the real part of a real J-holomorphic curve. The infimum over J of the constant properly normalized gives an invariant of Lagrangian submanifolds. The invariant is $2\pi$ for the Lagrangian submanifold $RP^n \subset CP^n.$ The bound can also be applied to prove compactness of moduli of J-holomorphic curves to asymptotically exact targets. These results are joint work with Yoel Groman.
|
Particle-yield modification in jet-like azimuthal di-hadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}} = 2.76$ TeV
The ALICE collaboration
Phys.Rev.Lett. 108 (2012) 092301, 2012
Abstract (data abstract)
CERN-LHC. Measurement of the per-trigger yields of charged particles in Lead-Lead collisions at a nucleon-nucleon centre-of-mass energy of 2.76 TeV. The yields of the peaks in the azimuthal correlation between the trigger particle and an associated charged particle are studied as a function of the associated-particle transverse momentum in the near-side ($\Delta\phi = 0$) and away-side ($\Delta\phi = \pi$) regions, and also as a function of the centrality of the collision. Different background-subtraction strategies are also considered (see the text of the paper for details).
• #### Table 1
Data from F 2A
10.17182/hepdata.58113.v1/t1
The ratio of near-side yields in Lead-Lead/Proton-Proton collisions in the central region.
• #### Table 2
Data from F 2A
10.17182/hepdata.58113.v1/t2
The ratio of near-side yields in Lead-Lead/Proton-Proton collisions in the peripheral region.
• #### Table 3
Data from F 2A
10.17182/hepdata.58113.v1/t3
The ratio of away-side yields in Lead-Lead/Proton-Proton collisions in the central region.
• #### Table 4
Data from F 2A
10.17182/hepdata.58113.v1/t4
The ratio of away-side yields in Lead-Lead/Proton-Proton collisions in the peripheral region.
• #### Table 5
Data from F 2B
10.17182/hepdata.58113.v1/t5
|
# Math Help - Updating function problem. What is the solution of the dynamical system at x0=200?
1. ## Updating function problem. What is the solution of the dynamical system at x0=200?
$x_{n+1} = 0.85\,x_n + 75$
explain thanks
2. ## Re: Updating function problem. What is the solution of the dynamical system at x0=200
Just try a few iterations, and you will get the general form.
$x_1=0.85 x_0 + 75$
$x_2=0.85 x_1 + 75 =0.85^2 x_0 + 75 (0.85) + 75$
$x_3=0.85 x_2 + 75 =0.85^3 x_0 + 75(0.85^2+0.85 + 1)$
$x_4=0.85 x_3 + 75 =0.85^4 x_0 + 75(0.85^3+0.85^2+0.85+1)$
$\vdots$
$x_n=0.85 x_{n-1} + 75 = 0.85^n x_0 + 75 (0.85^{n-1} + \dots + 1)$
$x_n=0.85^n x_0 + 75 \frac{1-0.85^n}{1-0.85}$
Don't forget the steady-state solution, $x^{*} = \frac{75}{1-0.85} = 500$.
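A quick numerical sanity check of the closed form (not part of the original thread; $x_0 = 200$ is taken from the question title):

```python
# Verify that iterating x_{k+1} = 0.85*x_k + 75 matches the closed form derived above.
def iterate(x0, n, a=0.85, b=75):
    x = x0
    for _ in range(n):
        x = a * x + b
    return x

def closed_form(x0, n, a=0.85, b=75):
    # x_n = a**n * x0 + b * (1 - a**n) / (1 - a)
    return a**n * x0 + b * (1 - a**n) / (1 - a)

for n in (1, 5, 20, 100):
    print(n, iterate(200, n), closed_form(200, n))
# The two columns agree, and as n grows both approach the steady state b/(1-a) = 500.
```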
|
# Optical absorption and fluorescence properties of $Er^{3+}$ in sodium borate glass
Ratnakaram, YC and Lakshmi, J and Chakradhar, RPS (2005) Optical absorption and fluorescence properties of $Er^{3+}$ in sodium borate glass. In: Bulletin of Materials Science, 28 (5). pp. 461-465.
Spectroscopic properties of $Er^{3+}$ ions in sodium borate glass have been studied. The indirect and direct optical band gaps $(E_{opt})$ and energy level parameters (Racah ($E^{1}$, $E^{2}$ and $E^{3}$), spin-orbit ($\xi_{4f}$) and configurational interaction ($\alpha$)) are evaluated. Spectral intensities for various absorption bands of $Er^{3+}$-doped sodium borate glass are calculated. Using Judd-Ofelt intensity parameters ($\Omega_{2}$, $\Omega_{4}$, $\Omega_{6}$), radiative transition probabilities ($A$), branching ratios ($\beta$) and integrated absorption cross sections ($\Sigma$) are reported for certain transitions. The radiative lifetimes $(\tau_{R})$ for different excited states are estimated. From the fluorescence spectra, the emission cross section $(\sigma_{p})$ for the transition $^{4}I_{13/2} \to {}^{4}I_{15/2}$ is reported.
|