# TikZ: Make “scopenode” compatible with matrix
Context: This question is a follow-up of Best practice for creating TikZ pictures with nested elements. Symbol 1 provided an answer to this question, where \scopenode is defined.
\scopenodes are scopes turned into nodes, i.e. one can: name the scope by name=foo; position the scope by at=(somewhere); and tune the position by anchor=something. They are basically awesome, since they can be nested.
Then, in How to make the use of tikzexternalize and saveboxes compatible?, cfr provided an answer improving these \scopenodes by enabling the display of both \scopenode's background and the content of the \scopenode. (Scopenode background would indeed be drawn above the content otherwise.)
Problem: I tried to include \scopenode in a TikZ matrix. However, I have some issues:
• with Symbol 1's solution, \scopenodes are well positioned, but their content does not appear because it's hidden behind the fill color.
• with cfr's solution, content is shown (and well positioned), but \scopenodes get messed up.
Question: How to make \scopenode compatible with TikZ matrix?
MWEs
(The example creates a matrix with one row and two columns. In both cells (A1 and B1), a scopenode is filled and drawn. A (red-orange) is south anchored, and B (yellow-green) is north anchored. In each scopenode, a path is drawn from (0,0) to (1,1).)
_______
| A | |
|---|---| <-- baseline
|___|_B_|
With Symbol 1's solution:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{matrix}
\usetikzlibrary{backgrounds}
% \usetikzlibrary{external}
% \tikzexternalize
% \tikzset{external/prefix=build/}
\makeatletter
\newbox\tikz@sand@box
\newcount\tikz@scope@depth
\tikz@scope@depth111\relax
\def\scopenode[#1]#2{%
\begin{pgfinterruptboundingbox}%
% process the user option
\begin{scope}[name=tempscopenodename,at={(0,0)},anchor=center,#1]%
% try to extract positioning information: name, at, anchor
\global\let\tikz@fig@name\tikz@fig@name%
\global\let\tikz@node@at\tikz@node@at%
\global\let\tikz@anchor\tikz@anchor%
\end{scope}%
\let\tikz@scopenode@name\tikz@fig@name%
\let\tikz@scopenode@at\tikz@node@at%
\let\tikz@scopenode@anchor\tikz@anchor%
% try to typeset this scope
% we only need bounding box information
% the box itself will be discard
\setbox\tikz@sand@box=\hbox{%
\begin{scope}[local bounding box=tikz@sand@box\the\tikz@scope@depth,#1]%
#2%
\end{scope}%
}%
% goodbye. haha
\setbox\tikz@sand@box=\hbox{}%
% now typeset again
\begin{scope}[local bounding box=\tikz@scopenode@name]%
% use the bounding box information to reposition the scope
\pgftransformshift{\pgfpointanchor{tikz@sand@box\the\tikz@scope@depth}{\tikz@scopenode@anchor}%
\pgf@x-\pgf@x\pgf@y-\pgf@y}%
\pgftransformshift{\tikz@scopenode@at}%
\begin{scope}[#1]%
#2
\end{scope}%
\end{scope}%
\pgfkeys{/pgf/freeze local bounding box=\tikz@scopenode@name}%
\global\let\tikz@scopenode@name@smuggle\tikz@scopenode@name%
\end{pgfinterruptboundingbox}%
% make up the bounding box
\path(\tikz@scopenode@name@smuggle.south west)(\tikz@scopenode@name@smuggle.north east);%
% draw something, not necessary
\draw[#1](\tikz@scopenode@name@smuggle.south west)rectangle(\tikz@scopenode@name@smuggle.north east);%
}
\makeatother
\begin{document}
\begin{tikzpicture}[
remember picture,
inner sep=0pt,
outer sep=0pt,
]
\draw [help lines](-2,-2) grid (2,2);
\matrix[
column sep=2em,
row sep = 1em,
nodes in empty cells,
anchor=center,
nodes={anchor=center},
]
{
\scopenode[draw = red, fill = orange, anchor=south] {
\draw [blue] (0,0) -- (1,1);
};
&
\scopenode[draw = yellow, fill = green, anchor=north] {
\draw [black] (0,1) -- (1,0);
};
\\
};
\end{tikzpicture}
\end{document}
With cfr's solution:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{matrix}
\usetikzlibrary{backgrounds}
% \usetikzlibrary{external}
% \tikzexternalize
% \tikzset{external/prefix=build/}
\makeatletter
\pgfdeclarelayer{scopenode}
\pgfsetlayers{background,scopenode,main}
\tikzset{%
on scopenode layer/.style={%
execute at begin scope={%
\pgfonlayer{scopenode}%
\let\tikz@options=\pgfutil@empty%
\tikzset{every on scopenode layer/.try,#1}%
\tikz@options%
},
execute at end scope={\endpgfonlayer}
},
}
% ateb Symbol 1: tex.stackexchange.com/a/…
\newbox\tikz@sand@box
\newcount\tikz@scope@depth
\tikz@scope@depth111\relax
\def\scopenode[#1]#2{% name=<enw>, at=<man>, anchor=<angor>
\begin{pgfinterruptboundingbox}%
% process the user option
\begin{scope}[name=tempscopenodename,at={(0,0)},anchor=center,#1]%
% try to extract positioning information: name, at, anchor
\global\let\tikz@fig@name\tikz@fig@name%
\global\let\tikz@node@at\tikz@node@at%
\global\let\tikz@anchor\tikz@anchor%
\end{scope}%
\let\tikz@scopenode@name\tikz@fig@name%
\let\tikz@scopenode@at\tikz@node@at%
\let\tikz@scopenode@anchor\tikz@anchor%
% try to typeset this scope
% we only need bounding box information
% the box itself will be discard
\setbox\tikz@sand@box=\hbox{%
\begin{scope}[local bounding box=tikz@sand@box\the\tikz@scope@depth,#1]%
#2%
\end{scope}%
}%
% goodbye. haha
\setbox\tikz@sand@box=\hbox{}%
% now typeset again
\begin{scope}[local bounding box=\tikz@scopenode@name]%
% use the bounding box information to reposition the scope
\pgftransformshift{\pgfpointanchor{tikz@sand@box\the\tikz@scope@depth}{\tikz@scopenode@anchor}%
\pgf@x-\pgf@x\pgf@y-\pgf@y}%
\pgftransformshift{\tikz@scopenode@at}%
\begin{scope}[#1]%
#2
\end{scope}%
\end{scope}%
\pgfkeys{/pgf/freeze local bounding box=\tikz@scopenode@name}%
\global\let\tikz@scopenode@name@smuggle\tikz@scopenode@name%
\end{pgfinterruptboundingbox}%
% make up the bounding box
\path(\tikz@scopenode@name@smuggle.south west)(\tikz@scopenode@name@smuggle.north east);%
% draw something, not necessary
\begin{scope}[on scopenode layer]%
\draw[#1](\tikz@scopenode@name@smuggle.south west)rectangle(\tikz@scopenode@name@smuggle.north east);%
\end{scope}%
}
\makeatother
\begin{document}
\begin{tikzpicture}[
remember picture,
inner sep=0pt,
outer sep=0pt,
]
\draw [help lines](-2,-2) grid (2,2);
\matrix[
column sep=2em,
row sep = 1em,
nodes in empty cells,
anchor=center,
nodes={anchor=center},
]
{
\scopenode[draw = red, fill = orange, anchor=south] {
\draw [blue] (0,0) -- (1,1);
};
&
\scopenode[draw = yellow, fill = green, anchor=north] {
\draw [black] (0,1) -- (1,0);
};
\\
};
\end{tikzpicture}
\end{document}
• Isn't it not well-positioned with my solution? It is shown and it is above the background i.e. it gets onto the right layer. But it is in the wrong place. – cfr Apr 18 '17 at 18:12
• @cfr I meant the content of the scopenode (i.e. the drawn line) is well positioned, when the scopenode itself (the filled-drawn square) is not. But my example is tricky: the two scopenodes are on the same row... but one (orange/red) is south-anchored when the second (green/yellow) is north anchored. So it should look like picture 1 + drawn lines. – ebosi Apr 18 '17 at 20:12
This is by far the best I can get:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{matrix}
\usetikzlibrary{backgrounds}
% \usetikzlibrary{external}
% \tikzexternalize
% \tikzset{external/prefix=build/}
\makeatletter
\newbox\tikz@sand@box
\newcount\tikz@scope@depth
\newdimen\tikz@scope@shiftx
\newdimen\tikz@scope@shifty
\newdimen\tikz@scope@swx
\newdimen\tikz@scope@swy
\newdimen\tikz@scope@nex
\newdimen\tikz@scope@ney
\tikz@scope@depth111\relax
\def\scopenode[#1]#2{%
\begin{pgfinterruptboundingbox}%
% process the user option
\begin{scope}[name=tempscopenodename,at={(0,0)},anchor=center,#1]%
% try to extract positioning information: name, at, anchor
\global\let\tikz@fig@name@\tikz@fig@name%
\global\let\tikz@node@at@\tikz@node@at%
\global\let\tikz@anchor@\tikz@anchor%
\end{scope}%
\let\tikz@scopenode@name\tikz@fig@name@%
\let\tikz@scopenode@at\tikz@node@at@%
\let\tikz@scopenode@anchor\tikz@anchor@%
% try to typeset this scope
% we only need bounding box information
% the box itself will be discard
\setbox\tikz@sand@box=\hbox{%
\begin{scope}[local bounding box=tikz@sand@box\the\tikz@scope@depth,#1]%
#2%
\end{scope}%
}%
% goodbye. haha
\setbox\tikz@sand@box=\hbox{}%
% now typeset again
\begin{scope}[local bounding box=\tikz@scopenode@name]%
% use the bounding box information to reposition the scope
\pgfpointanchor{tikz@sand@box\the\tikz@scope@depth}{\tikz@scopenode@anchor}%
\tikz@scope@shiftx-\pgf@x%
\tikz@scope@shifty-\pgf@y%
\tikz@scopenode@at%
\pgftransformshift{\pgfpoint{\tikz@scope@shiftx}{\tikz@scope@shifty}}
% the background path
% lengthy, tedious calculation
\pgfpointanchor{tikz@sand@box\the\tikz@scope@depth}{south west}
\pgfpointanchor{tikz@sand@box\the\tikz@scope@depth}{north east}
\path(\tikz@scope@swx,\tikz@scope@swy)coordinate(tempsw)
(\tikz@scope@nex,\tikz@scope@ney)coordinate(tempne);
\path[#1](tempsw)rectangle(tempne);
% typeset the content for real
\begin{scope}[#1]%
#2%
\end{scope}%
\end{scope}%
\pgfkeys{/pgf/freeze local bounding box=\tikz@scopenode@name}%
\global\let\tikz@scopenode@name@smuggle\tikz@scopenode@name%
\end{pgfinterruptboundingbox}%
% make up the bounding box
\path(\tikz@scopenode@name@smuggle.south west)(\tikz@scopenode@name@smuggle.north east);%
% compatible code for matrix
\expandafter\pgf@nodecallback\expandafter{\tikz@scopenode@name@smuggle}%
}
\makeatother
\begin{document}
\begin{tikzpicture}[remember picture,inner sep=0pt,outer sep=0pt]
\draw[help lines](-2,-2)grid(2,2);
\matrix()
[
column sep=2em,
row sep=1em,
nodes in empty cells,
anchor=center,
nodes={anchor=center},
]
{
\scopenode[draw=red,fill=orange,name=aaa,anchor=south] {
\draw[blue](0,0)--(1,1)circle(.2);
};
&
\scopenode[draw=yellow,fill=green,name=bbb,anchor=north] {
\draw[black](0,1)--(1,0)circle(.1);
};
&
\scopenode[fill=cyan,name=ccc,anchor=east,scale=.8] {
\draw(0,0)--(1,1)circle(.3)--(2,0);
};
\\
\node(aaaa){};
&
\node(bbbb){};
&
\node(cccc){};
\\
};
\draw[->](2,2)node[above]{this is the orange scopenode}to[bend left](aaa.east);
\draw[->](-2,-2)node[below]{this is the green scopenode}to[bend left](bbb.west);
\draw[->](3,-1)node[right]{this is the cyan scopenode}to[bend left](ccc.south);
\end{tikzpicture}
\end{document}
# Important information
Dear future me:
For your information, the content of each matrix cell is typeset only once, in an hbox. The cells are then moved to their corresponding positions, and then the whole matrix is moved to the desired position. The first movement is done by \pgf@matrix@shift@nodes@initial and the second by \pgf@matrix@shift@nodes@secondary. They merely apply \pgf@shift@node to a node list. To register the scopenode, you added the line
\expandafter\pgf@nodecallback\expandafter{\tikz@scopenode@name@smuggle}%
so the scopenode is also moved.
Currently everything in the scopenode will be typeset twice. For nested scopenodes, things are typeset 2^depth times. This is really frustrating. Maybe someone can improve this by changing the way TikZ deals with matrices.
(However matrix cannot be nested. You win!)
Also, you changed
\global\let\tikz@fig@name\tikz@fig@name
to
\global\let\tikz@fig@name@\tikz@fig@name
so that the name of the scopenode cannot be accessed elsewhere.
In particular, TikZ will apply \pgf@shift@node to the matrix itself. If the matrix is unnamed, then the last scopenode will be shifted, which is unwanted. You spent two hours only to find this stupid bug. LEARN THE LESSON.
Also, you hardcoded the backgroundpath of the scopenode so that it is now filled/drawn before the content of the scope. (Hence the name backgroundpath) But the calculation is lengthy and seemingly redundant. I hope someone can improve it.
Nonetheless you avoided using pgfonlayer. That is great.
|
# Why has this question disappeared?
I encountered this question which references this question which has since been "deleted by the author". The fact that it was referenced by this other question, and the fact that it appears like it might have important information I find useful tells me that maybe this question shouldn't have even been allowed to be deleted by the author. But to my understanding those kinds of questions (ones that are useful) typically get answers and can't be deleted. I don't have 10k rep, so can someone show me what was in this question? Can this question be revived if shown to be useful?
OK, I've undeleted that question and its answer. The general SE consensus seems to be that SE owns the rights to the information and can do with it what it wants. In this case, I think it's better to have the information available than not.
I'll do some investigation to see whether undeleting it is OK. Here's a copy of the question and its answer. I'll delete this answer if I believe I can validly undelete the original.
Compared to standard (conventional) beamformers, adaptive beamformers such as minimum variance distortionless response (MVDR) have much better resolution and interference nulling capabilities. Briefly, the MVDR beamformer is as follows:
If $$\mathbf{a}_\theta$$ is the look-vector in the direction $$\theta$$ and if $$\widehat{\mathbf{R}}$$ is the covariance matrix of observations, then the adaptive weight vectors are calculated as
$$\mathbf{w}_\theta=\frac{\widehat{\mathbf{R}}^{-1} \mathbf{a}_\theta}{\mathbf{a}^H_\theta \widehat{\mathbf{R}}^{-1} \mathbf{a}_\theta}$$
and the output power of the MVDR beamformer is
$$B_{mvdr}(\theta)=\mathbf{w}_\theta^H \widehat{\mathbf{R}}\mathbf{w}_\theta$$
For the sake of completeness, the conventional beamformer output power is $$B_{conv}(\theta)=\mathbf{a}_\theta^H \widehat{\mathbf{R}}\mathbf{a}_\theta$$.
However, these adaptive beamformers are also prone to severe performance hits when there is a signal mismatch i.e., when the steering vector model for the signal of interest doesn't match what's in the data or when the actual array positions are perturbed from the array positions used in the steering/look vectors (which is what happens in practice). The performance degradation can be so large that it becomes inferior to the conventional beamformer.
How can one increase the robustness of adaptive beamformers (in this case, the MVDR, but applicable more generally) to signal mismatches?
### Example:
Here's the result from a simulation with two plane wave signals, one at $$+30^\circ$$ and the other at $$-22.5^\circ$$ and 10 dB down from the first, impinging on a 10 element uniform line array (spaced at $$\lambda/2$$, where $$\lambda$$ is the wavelength) in a noisy environment (the exact value of the noise power is immaterial here and you can assume it to be much lower than the signal powers). The covariance matrix is formed from 50 observations (snapshots). The top two panels show the outputs of the MVDR and the conventional beamformers and you can clearly see that the MVDR has high resolution compared to the conventional. Both do a good job of estimating the signal power.
However, when the element positions are slightly perturbed (perturbation is unknown to the beamformer), causing a mismatch in the actual signal steering vector and the model, the signal power estimates of the MVDR beamformer goes kaput. It can hardly measure them correctly, although it still does a good job of localising the signals. Comparatively, the conventional beamformer gets the relative powers right, although its harder to localise the signal due to increased sidelobes. (Note: The plots have all been normalized by the maximum power so although you can't see it, the peak at $$30^\circ$$ in the top left plot is at 0 dB. So this doesn't truly show how the actual power estimate degrades due to mismatches, as I just wanted to paint a qualitative picture.)
The title for the lower left plot should read MVDR: Perturbed array. I'll correct the mistake sometime soon. – yoda Sep 3 at 5:28
Is it possible for me to improve the estimates of the signal powers using the MVDR beamformer in the perturbed case?
One approach that I'm aware of is to "load" the diagonal of $$\widehat{\mathbf{R}}$$ by some value $$\sigma_d^2$$ in order to stabilize the covariance matrix and make the adaptive vectors more robust to signal mismatches. This is often done in matched field processing and is referred to as diagonal loading or white noise constraint beamforming.
In other words, the robustified adaptive weight vectors are calculated as
$$\widehat{\mathbf{w}}_\theta=\frac{\left(\widehat{\mathbf{R}}+\sigma_d^2\mathbf{I}\right)^{-1} \mathbf{a}_\theta}{\mathbf{a}^H_\theta \left(\widehat{\mathbf{R}}+\sigma_d^2\mathbf{I}\right)^{-1} \mathbf{a}_\theta}$$
and the output beam power is now computed as
$$\widehat{B}_{mvdr}(\theta)=\widehat{\mathbf{w}}_\theta^H \left(\widehat{\mathbf{R}}+\sigma_d^2\mathbf{I}\right)\widehat{\mathbf{w}}_\theta$$
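For readers who want to reproduce these quantities numerically, here is a minimal NumPy sketch (not part of the original answer) of the conventional and MVDR beam powers with optional diagonal loading. The helper names `conventional_power`, `mvdr_power` and `steering` are ours, and the steering vector assumes the half-wavelength uniform line array used in the example.

```python
import numpy as np

def conventional_power(R, a):
    """Conventional (Bartlett) beam power: B_conv = a^H R a."""
    return np.real(a.conj() @ R @ a)

def mvdr_power(R, a, sigma_d2=0.0):
    """MVDR beam power, optionally with diagonal loading sigma_d2 (white-noise constraint)."""
    R_l = R + sigma_d2 * np.eye(R.shape[0])     # (diagonally loaded) covariance
    Ri_a = np.linalg.solve(R_l, a)              # R_l^{-1} a
    w = Ri_a / (a.conj() @ Ri_a)                # distortionless adaptive weights
    return np.real(w.conj() @ R_l @ w)          # w^H R_l w

def steering(theta_deg, n=10):
    """Look vector of an n-element, lambda/2-spaced uniform line array."""
    phase = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * phase * np.arange(n))
```

Sweeping the look direction over a grid and plotting `10*np.log10(mvdr_power(R, steering(theta), sigma_d2))` for a sample covariance `R` estimated from snapshots gives beam patterns of the kind shown in the plots above.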
Here's the result comparing the no mismatch, mismatch due to perturbed array and diagonally loaded cases for the MVDR beamformer for the situation in the question. There is an improvement in the signal power estimation, at the expense of increased background noise power.
• This is definitely the kind of answer and question that I find useful, I'm surprised the author was able to delete it at all, I wonder if it was so old that current SE measures stopping this from happening didn't apply? – Krupip Dec 4 '19 at 19:59
• @whn Possibly! It was deleted on Feb 11 '12 at 13:15. And Lorem Ipsum was last seen on Mar 9 '14 at 16:53, so I can't really ask them. – Peter K. Dec 4 '19 at 20:15
|
# Downsampling
In signal processing, downsampling (or "subsampling") is the process of reducing the sampling rate of a signal. This is usually done to reduce the data rate or the size of the data.
The downsampling factor (commonly denoted by M) is usually an integer or a rational fraction greater than unity. This factor multiplies the sampling time or, equivalently, divides the sampling rate. For example, if 16-bit compact disc audio (sampled at 44,100 Hz) is downsampled to 22,050 Hz, the audio is said to be downsampled by a factor of 2. The bit rate is also reduced in half, from 1,411,200 bit/s to 705,600 bit/s, assuming that each sample retains its bit depth of 16 bits.
## Maintaining the sampling theorem criterion
Since downsampling reduces the sampling rate, it is usually a good idea to make sure the Nyquist–Shannon sampling theorem criterion is maintained relative to the new lower sample rate, to avoid aliasing in the resulting digital signal. To ensure that the sampling theorem is satisfied, or approximately so, a low-pass filter is used as an anti-aliasing filter to reduce the bandwidth of the signal before the signal is downsampled; the overall process (low-pass filter, then downsample) is sometimes called decimation.
If the original signal had been bandwidth limited, and then first sampled at a rate higher than the Nyquist rate, then the sampled signal may already have a bandwidth compliant with the requirements of the sampling theorem at the lower rate, so the downsampling can be done directly without any additional filtering. Downsampling only changes the sample rate, not the bandwidth, of the signal. The only reason to filter the bandwidth is to avoid the case where the new sample rate would become lower than the Nyquist rate and cause aliasing.
In some cases, the anti-aliasing filter can be a band-pass filter; the aliasing inherent in downsampling will then transpose a band of interest to baseband samples. A bandpass signal, i.e. a band-limited signal whose minimum frequency is different from zero, can be downsampled avoiding superposition of the spectra if certain conditions are satisfied; see undersampling.
## Downsampling by integer factor
Downsampling a sequence $\scriptstyle x[n]$ by retaining only every Mth sample creates a new sequence $\scriptstyle y[n] = x[nM].$ If the original sequence contains significant normalized frequency components in the region $\scriptstyle [0.5/M,\ 1-0.5/M]$ (cycles/sample), the downsampler should be preceded by a low-pass filter with cutoff frequency $\scriptstyle 0.5/M$.[note 1] In this application, such an anti-aliasing filter is referred to as a decimation filter, and the combined process of filtering (convolution) and downsampling is called decimation.
The process described above would generate an output sample for every input sample, and then M-1 of every M outputs would be discarded. Such is the process for an IIR filter that relies on feedback from output to input. With FIR filtering, it is an easy matter to compute only every Mth output. The calculation performed by a decimating FIR filter for the nth output sample is a dot product:
$y[n] = \sum_{k=0}^{K-1} x[nM-k]\cdot h[k],$
where the h[•] sequence is the impulse response, and K is its length. In a general purpose processor, after computing y[n], the easiest way to compute y[n+1] is to advance the starting index in the x[•] array by M, and recompute the dot product.
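As a concrete illustration (not part of the original article), here is a small NumPy sketch of that dot product; the moving-average `h` is only a toy stand-in for a properly designed anti-aliasing filter.

```python
import numpy as np

def decimating_fir(x, h, M):
    """y[n] = sum_k x[nM - k] * h[k], computed only for every M-th output sample."""
    y = []
    n = 0
    while n * M < len(x):
        acc = 0.0
        for k in range(len(h)):
            j = n * M - k
            if 0 <= j < len(x):          # x[j] is taken as zero outside the recorded signal
                acc += x[j] * h[k]
        y.append(acc)
        n += 1
    return np.asarray(y)

x = np.random.randn(1000)
h = np.ones(8) / 8.0                     # toy low-pass (8-tap moving average)
M = 4

# Wasteful reference: filter at the full rate, then discard M-1 of every M outputs.
reference = np.convolve(x, h)[:len(x)][::M]
assert np.allclose(decimating_fir(x, h, M), reference)
```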
Impulse response coefficients taken at intervals of M form a subsequence, and there are M such subsequences (phases) multiplexed together. The dot product is the sum of the dot products of each subsequence with the corresponding samples of the x[•] sequence. Furthermore, because of decimation by M, the stream of x[•] samples involved in any one of the M dot products is never involved in the other dot products. Thus M low-order FIR filters are each filtering one of M multiplexed phases of the input stream, and the M outputs are being summed. This viewpoint offers a different implementation that might be advantageous in a multi-processor architecture. In other words, the input stream is demultiplexed and sent through a bank of M filters whose outputs are summed. When implemented that way, it is called a polyphase filter.
For completeness, we now mention that a possible implementation of each phase is to replace the coefficients of the other phases with zeros in a copy of the h[•] array, process the original x[•] sequence at the input rate, and decimate the output by a factor of M. The equivalence of this inefficient method and the implementation described above is known as the first Noble identity.[1]
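Continuing the sketch above (again illustrative, not from the article), the same output can be obtained by filtering the M phases of the input with the M subsequences of `h` and summing, which is the polyphase implementation just described:

```python
import numpy as np

def decimate_polyphase(x, h, M):
    """Sum of M short FIR filters, each acting on one phase of the input stream."""
    N = -(-len(x) // M)                           # number of retained output samples
    y = np.zeros(N)
    for r in range(M):
        h_r = h[r::M]                             # phase r of the impulse response
        idx = np.arange(N) * M - r                # phase r of the input: x[mM - r]
        valid = (idx >= 0) & (idx < len(x))
        x_r = np.where(valid, x[np.clip(idx, 0, len(x) - 1)], 0.0)
        y += np.convolve(x_r, h_r)[:N]            # this phase's contribution
    return y

x = np.random.randn(1000)
h = np.ones(8) / 8.0
M = 4
reference = np.convolve(x, h)[:len(x)][::M]       # full-rate filtering, then discarding
assert np.allclose(decimate_polyphase(x, h, M), reference)
```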
## Downsampling by rational fraction
Let M/L denote the downsampling factor.
1. Upsample by a factor of L
2. Downsample by a factor of M
A proper upsampling design requires an interpolation filter after increasing the data rate, and a proper downsampling design requires a filter before eliminating some samples. These two low-pass filters can be combined into a single filter.
These two steps are generally not interchangeable: downsampling is lossy, so if it is performed first, any content removed by the downsampler's low-pass filter is permanently lost. Since both interpolation and anti-aliasing filters are low-pass filters, the filter with the smallest bandwidth is more restrictive and can therefore be used in place of both filters. When the rational fraction M/L is greater than unity then L < M and the single low-pass filter should have cutoff at $\scriptstyle 0.5/M$ cycles/sample.
NOTE: Upsampling first is only necessary when the new rate is not a whole-number division of the old one. E.g.: if a sample rate of 2x is changed to a rate of 1x by averaging every pair of samples, the averaging is equivalent to a low-pass filtering operation; and simply taking every other sample is, in this special case where the multiple is 2 to 1, equivalent to upsampling and then downsampling, so there is no need to upsample first.
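As an illustration of the combined-filter approach (assuming SciPy is available; this is not part of the original article), `scipy.signal.resample_poly` performs exactly this chain: upsample by `up`, apply a single low-pass filter, then downsample by `down`.

```python
import numpy as np
from scipy.signal import resample_poly

x = np.random.randn(600)
# Change the rate by the rational factor L/M = 2/3, i.e. downsample by M/L = 3/2.
y = resample_poly(x, up=2, down=3)
print(len(x), len(y))   # 600 -> 400 samples
```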
## Discrete-time Fourier transform (DTFT)
Let X(f) be the Fourier transform of any function, x(t), whose samples at some interval, T, equal the x[n] sequence. Then the DTFT of the x[n] sequence is the Fourier series representation of a periodic summation of X(f):
$\underbrace{ \sum_{n=-\infty}^{\infty} \overbrace{x(nT)}^{x[n]}\ e^{-i 2\pi f nT} }_{\text{DTFT}} = \frac{1}{T}\sum_{k=-\infty}^{\infty} X(f-k/T).$
(Eq.1)
When T has units of seconds, $\scriptstyle f$ has units of hertz. Replacing T with MT in the formulas above gives the DTFT of the decimated sequence, x[nM]:
$\sum_{n=-\infty}^{\infty} x(n\cdot MT)\ e^{-i 2\pi f n(MT)} \equiv \frac{1}{MT}\sum_{k=-\infty}^{\infty} X\left(f-\tfrac{k}{(MT)}\right).$
(Eq.2)
The periodic summation has been reduced in amplitude and periodicity by a factor of M. Aliasing occurs when adjacent copies of X(f) overlap. If an anti-aliasing filter is applied to the x[n] sequence, it should have a cutoff frequency $< \tfrac{0.5}{MT}$ hertz at sample-rate 1/T, or (equivalently) a cutoff $< \tfrac{0.5}{M}$ at normalized frequency 1.0 cycles/sample.
Alternatively, the sample-rate can be presumed to be held constant, meaning that the interval between the retained samples is reduced from MT to T. The resulting Fourier series is:
$\sum_{n=-\infty}^{\infty} x(n\cdot MT)\ e^{-i 2\pi f nT} = \frac{1}{T}\sum_{k=-\infty}^{\infty} X_M(f-k/T) = \frac{1}{MT}\sum_{k=-\infty}^{\infty} X\left(\tfrac{f-k/T}{M}\right) ,$
(Eq.3)
where:
$X_M(f)\ \stackrel{\mathrm{def}}{=}\ \mathcal{F}\left \{x(Mt)\right \} \equiv \tfrac{1}{M}\cdot X(f/M).$
The original periodicity is restored, but $\scriptstyle X(f/M)$ is M times wider than $\scriptstyle X(f),$ which can cause adjacent copies to overlap unless the x[n] sequence is pre-filtered as described above. Eq.2 and Eq.3 are identical, except for a frequency scale factor.
## Z-transform
The z-transform of the x[n] sequence is defined by:
$X(z)\ \stackrel{\mathrm{def}}{=} \sum_{n=-\infty}^{\infty} x[n]\ z^{-n},$ where z is a complex variable.[note 2]
On the unit circle, z is constrained to values of the form $e^{i \omega}.$ Then one cycle of $X(e^{i \omega}), \ \scriptstyle -\pi \ \le \ \omega \ \le \ \pi$ is identical to one period $\left(\scriptstyle -\frac{0.5}{T} \ \le \ f \ \le \ \frac{0.5}{T}\right)$ of Eq.1.
The z-transform of the decimated sequence is:
$X_M(z)\ \stackrel{\mathrm{def}}{=} \sum_{n=-\infty}^{\infty} x[nM]\ z^{-n},$
and one cycle of $X_M(e^{i \omega}), \ \scriptstyle -\pi \ \le \ \omega \ \le \ \pi$ is identical to one period of Eq.2 and Eq.3.
In terms of $X(z),$ it can be shown that:[2][3]
$X_M(z) = \frac{1}{M} \sum_{k=0}^{M-1} X\left(z^{\tfrac{1}{M}} \cdot e^{-i \tfrac{2\pi}{M} k}\right) = \frac{1}{M} \sum_{k=0}^{M-1} X\left( e^{\tfrac{i(\omega - 2\pi k)}{M} } \right).$
The periodicity of each "k" term is 2πM radians, and the terms are offset by multiples of 2π. So the periodicity of the summation is 2π (as required by the z-transform definition). The k=0 term is $X(e^{i \omega})$ stretched across 2πM radians, which means that it exceeds the unit circle and folds back on itself M-1 times, or (equivalently) it overlaps and is overlapped by the other M-1 terms of the summation. But if its expanded bandwidth is still limited to the region $\scriptstyle (-\pi \ < \ \omega \ < \ \pi),$ the folding/overlapping does not cause aliasing. That can be assured by an anti-alias filter with a cutoff frequency < π/M at frequency 2π (radians/sample), or (equivalently) a cutoff $< \tfrac{0.5}{M}$ at frequency 1.0 cycles/sample.
For comparison with the DTFT (Eq.2), ω = 2π corresponds to $\scriptstyle f=1/(MT).$ And it corresponds to $\scriptstyle f=1/T$ in the other Fourier series (Eq.3).
## Notes
1. ^ Realizable low-pass filters have a "skirt", where the response diminishes from near unity to near zero. So in practice the cutoff frequency is placed far enough below the theoretical cutoff that the filter's skirt is contained below the theoretical cutoff.
2. ^ In a discussion involving multiple types of transforms, it is a common practice to distinguish them on the basis of their arguments, rather than the function name.
• Fourier transform is denoted by $X(f)$ or $X(\omega).$
• Z transform is denoted by $X(z).$
• DTFT is denoted by $X(e^{i \omega})$ or $X(e^{i 2\pi fT}),$ but sometimes $X_{2\pi}(\omega)$ or $X_{1/T}(f).$
## Citations
1. ^ Strang, Gilbert; Nguyen, Truong (1996-10-01). Wavelets and Filter Banks (2 ed.). Wellesley, MA: Wellesley-Cambridge Press. pp. 100–101. ISBN 0961408871.
2. ^ Schniter, Phil (March 2006). "ECE-700 Multirate Notes". p. 2. Retrieved 2013-12-10.
3. ^ "DSP and Digital Filters (2013-3810)". 2013. p. 68. Retrieved 2013-12-10.
## References
• Oppenheim, Alan V.; Ronald W. Schafer, John R. Buck (1999). Discrete-Time Signal Processing (2nd ed.). Prentice Hall. ISBN 0-13-754920-2.
• Proakis, John G. (2000). Digital Signal Processing: Principles, Algorithms and Applications (3rd ed.). India: Prentice-Hall. ISBN 8120311299.
|
# Maximal number of rounds we can do distributing 64 diners on 8 groups in different ways if they can't meet each other more than once?
N=64 hungry diners come to a buffet.
We seat them at 8 different tables (s=8 people at each table) so that they get to know each other while they eat.
After a while we redistribute them over the tables so that no one is ever seated with another person with whom they have eaten before.
We keep redistributing them into different groups while they eat.
What is the maximal number of rounds each person can do?
Intuitively I would say the solution is rounds=(N-1)/(s-1)=9 but I don't know whether it always applies, nor whether it takes all restrictions into account.
The same problem can be thought of as a group of poker players playing against each other.
If the problem were smaller, say 16 people in groups of 4, the solution would be 5 rounds:
a1 a2 a3 a4 - b1 b2 b3 b4 - c1 c2 c3 c4 - d1 d2 d3 d4
a1 b1 c1 d1 - a2 b2 c2 d2 - a3 b3 c3 d3 - a4 b4 c4 d4
a1 b2 c3 d4 - b1 a2 d3 c4 - c1 d2 a3 b4 - d1 c2 b3 a4
d1 b2 a3 c4 - b1 d2 c3 a4 - c1 a2 b3 d4 - a1 c2 d3 b4
a1 d2 b3 c4 - b1 c2 a3 d4 - c1 b2 d3 a4 - d1 a2 c3 b4
But I don't know how to do it for a larger problem apart from bruteforce, at least for the simpler case N=s^2.
Here https://cs.stackexchange.com/questions/67832/game-tournament-program-np-complete/67864#67864 a solution is given just for the case of subgroups of 4.
How would you address this problem computationally?
How would you do it theoretically in a simple way?
• I believe this is a Transversal Design... It has been a while since I have worked with problems like this, but if you read up on Combinatorial Designs, I think you will find your solution. – InterstellarProbe Sep 14 at 12:50
## 2 Answers
The problem is readily solved for $N=q^2$ people with $q$ tables and $q$ chairs each if $q$ is a prime power:
Let $F$ be a finite field of order $q$. We identify the tables (as well as the seats per table) with elements of $F$ and the people with elements of $F^2$.
For the $q-1$ possible picks for $a\in F^\times$, we can make a seating arrangement such that $(x,y)$ occupies seat $x$ at table $y+ax$. This is a bijection between people and chairs, i.e., nobody has to sit on someone else's lap nor is a seat left vacant: Seat $u$ at table $v$ is given to $(u,v-au)$ and no-one else.
In an extra round, we can place $(x,y)$ at seat $y$ at table $x$. Again, this bijects people with available seats.
We can tell exactly when $(x,y)$ meets $(x',y')$: If $x=x'$ they meet at table $x$ in the extra round. Otherwise, they meet in round $a$ for which $y+ax=y'+ax'$, i.e., when $a=(y-y')(x'-x)^{-1}$.
• This gives 8 rounds though right? The OP was asking about if 9 rounds was possible. – Steve D Sep 14 at 14:23
• Yes, now that I think about it more, I am convinced $a=0$ works, and that fully solves it. – Steve D Sep 14 at 14:39
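Here is a small Python sketch of this construction (ours, not the answerer's). It uses plain arithmetic mod q, which only realizes the finite field when q is prime; it therefore handles e.g. the 25-diner case directly, while the original 64-diner case would need GF(8) arithmetic instead. The a = 0 round suggested in the comments above is included, giving q + 1 rounds in total.

```python
from itertools import combinations

def seating(q):
    """q+1 rounds of seatings for q*q diners at q tables of q each (q prime)."""
    rounds = []
    for a in range(q):   # rounds a = 0, 1, ..., q-1: person (x, y) sits at table y + a*x
        rounds.append([[(x, (v - a * x) % q) for x in range(q)] for v in range(q)])
    rounds.append([[(x, y) for y in range(q)] for x in range(q)])   # extra round: table x
    return rounds

def check(rounds):
    met = set()
    for tables in rounds:
        for table in tables:
            for pair in combinations(sorted(table), 2):
                assert pair not in met, "a pair met twice"
                met.add(pair)
    return len(met)

r = seating(5)             # 25 diners, 5 tables of 5
print(len(r), check(r))    # 6 rounds; all 300 pairs meet exactly once
```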
Maybe interesting to see
http://www.mathpages.com/home/kmath388.htm
and these tables
https://web.archive.org/web/20050407074608/http://www.icparc.ic.ac.uk/~wh/golf/solutions.html#5-5-6
at 8 groups of 8 golfers for 9 weeks,
and the work of Ivan Dotu and Pascal Van Hentenryck
https://pdfs.semanticscholar.org/1070/d2f978fead4045b53640743a018ae518cccd.pdf
|
My Math Forum how to show that 2n + 1 = x ^ 2 ?
September 18th, 2018, 01:54 PM #1 Newbie Joined: Sep 2018 From: tunis Posts: 27 Thanks: 0 how to show that 2n + 1 = x ^ 2 ? how to show that 2n + 1 = x ^ 2 ? the solution is: n = 2 * (m ^ 2 - m) x = 1 - 2 * m and 1 + 3b + 3b ^ 2 = x ^ 3 the solution is: b = -1, x = 1 step by step I calculated that with a software it gives only the solution. Thank you. Last edited by skipjack; September 19th, 2018 at 10:27 AM.
September 18th, 2018, 02:54 PM #2 Global Moderator Joined: Dec 2006 Posts: 19,700 Thanks: 1804 For the second problem, the software didn't find the solution b = 0, x = 1. Last edited by skipjack; September 19th, 2018 at 10:27 AM.
September 18th, 2018, 07:24 PM #3
Math Team
Joined: May 2013
From: The Astral plane
Posts: 1,888
Thanks: 767
Math Focus: Wibbly wobbly timey-wimey stuff.
Quote:
Originally Posted by Ak23 how to show that 2n + 1 = x ^ 2 ? the solution is: n = 2 * (2 ^ m-m) x = 1-2 * m and 1 + 3b + 3b ^ 2 = x ^ 3 the solution is: b = -1, x = 1 step by step I calculated that with a software it gives only the solution. thank you.
I think we can simplify this a bit and get rid of the "x", computer or not. If we have $\displaystyle n^2$ then we can get the next square (integer) by adding 2n + 1. ie. $\displaystyle (n + 1)^2 = (n^2) + (2n + 1)$
Similar for the cube:
$\displaystyle (n + 1)^3 = (n^3) + (3n^2 + 3n + 1)$
-Dan
September 19th, 2018, 12:15 AM #4 Newbie Joined: Sep 2018 From: tunis Posts: 27 Thanks: 0 @topsquark 2n+1=4n+2 2n-4n=-1 2n=-1 n=-1/2 2.(-1/2)+1 =0 it does not work even = odd Last edited by Ak23; September 19th, 2018 at 12:17 AM.
September 19th, 2018, 03:28 AM #5 Newbie Joined: Sep 2018 From: tunis Posts: 27 Thanks: 0 n=2*(m^2-m) x=1-2*m is solution when z^2=(n+1)^2 y^2=n^2 x^2=2n+1 It's not all numbers only with the squares that is on the odd numbers 1 3 5 7 9 11, .... 2N-1 9 25 49 .... ETC
September 19th, 2018, 03:56 AM #6 Newbie Joined: Sep 2018 From: tunis Posts: 27 Thanks: 0

| m | n  | x  | x^2 |
|---|----|----|-----|
| 0 | 0  | 1  | 1   |
| 1 | 0  | -1 | 1   |
| 2 | 4  | -3 | 9   |
| 3 | 12 | -5 | 25  |
| 4 | 24 | -7 | 49  |

Last edited by Ak23; September 19th, 2018 at 03:59 AM.
September 19th, 2018, 05:17 AM #7 Global Moderator Joined: Dec 2006 Posts: 19,700 Thanks: 1804 If x is odd, it can be written as 1 - 2m. That means that x² = 4m² - 4m + 1 = 2(2m² - 2m) + 1 = 2n + 1, where n = 2(m² - m). If you have no objection, I'll correct your error in the original post, so that the thread can be shortened. Thanks from Ak23
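A quick Python check of this identity (added here for illustration), which also reproduces the table in post #6:

```python
for m in range(5):
    n = 2 * (m**2 - m)
    x = 1 - 2 * m
    assert x**2 == 2 * n + 1
    print(m, n, x, x**2)   # (0,0,1,1), (1,0,-1,1), (2,4,-3,9), (3,12,-5,25), (4,24,-7,49)
```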
September 19th, 2018, 05:59 AM #8 Newbie Joined: Sep 2018 From: tunis Posts: 27 Thanks: 0 how to find solutions? for x ^ 2 and x ^ 3
September 19th, 2018, 06:34 AM #9 Global Moderator Joined: Dec 2006 Posts: 19,700 Thanks: 1804 Doesn't my previous post show how to find the solutions for x²? Thanks from Ak23
September 19th, 2018, 07:33 AM #10 Newbie Joined: Sep 2018 From: tunis Posts: 27 Thanks: 0 yes I understood thank you very much
|
# Math Help - Invertible functions
1. ## Invertible functions
i try do it but only first one #_# 2 and 3 i dont know how can do #_#
Thanks.
2. Originally Posted by liptonpc
i try do it but only first one #_# 2 and 3 i dont know how can do #_#
Thanks.
remember, $f^{-1}$ undoes $f$. This means, the output becomes the input. Let $y = f^{-1}(x)$, then
$x = 3y^2 - 1$
solve for y
(Now you should notice a problem here...this function does not have an inverse. not unless the domain was restricted some way)
for (iii) $\frac fg (x) = \frac {3x^2 - 1}{x + 5}$
now just plug in $x = 1$ and evaluate
3. $f^{-1}(x)=\sqrt{\frac {x + 1}{3}}$
is it correct ?
4. Originally Posted by liptonpc
$f^{-1}(x)=\sqrt{\frac {x + 1}{3}}$
is it correct ?
technically you need a +/- in front of that, which would make $f^{-1}$ not a function.
only bijective functions are invertible, and f isn't bijective
5. sorry i dont understand #_#
so the result is ?
6. Originally Posted by liptonpc
sorry i dont understand #_#
so the result is ?
f does not pass the horizontal line test and is hence not invertible. it has no inverse. not unless the domain was restricted somehow.
7. $f^{-1}(x)=\sqrt\frac {x + 1}{3}$
when i check
$f(f^{-1}(x))=x$
so why it not inverse?
8. Originally Posted by liptonpc
$f^{-1}(x)=\sqrt\frac {x + 1}{3}$
when i check
$f(f^{-1}(x))=x$
so why it not inverse?
because $f^{-1}(x) \ne \sqrt{\frac {x + 1}3}$, rather, $f^{-1} (x) = ~{\color{red} \pm}~ \sqrt {\frac {x + 1}3}$
which means $f^{-1}$ is not a function at all, since that does not pass the vertical line test.
9. if $f(x)=x^2-9$
$f^{-1}(x)=\pm(x+9)^{\frac{1}2}$
and same thing it not inverse?
10. Originally Posted by liptonpc
if $f(x)=x^2-9$
$f^{-1}(x)=\pm(x+9)^{\frac{1}2}$
and same thing it not inverse?
correct.
now on the other hand, if they had restricted the domain and said, $f(x) = x^2 - 9$ for $x \ge 0$, then you could say $f^{-1}(x) = \sqrt {x + 9}$ and that's the inverse function.
we don't need the negative square root, because the domain of our original function only has non-negative numbers.
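A small numerical illustration (not part of the original thread) of why the domain restriction matters; the helper names `f` and `f_inv` are ours:

```python
import math

def f(x):          # f(x) = x^2 - 9, to be used on the restricted domain x >= 0
    return x * x - 9

def f_inv(y):      # the claimed inverse, keeping only the non-negative square root
    return math.sqrt(y + 9)

for x in [0.0, 0.5, 1.0, 2.25, 3.0]:
    assert abs(f_inv(f(x)) - x) < 1e-12    # holds because every x here is >= 0

print(f_inv(f(-2.0)))   # 2.0, not -2.0: without the restriction the candidate inverse fails
```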
11. Originally Posted by liptonpc
if $f(x)=x^2-9$
$f^{-1}(x)=\pm(x+9)^{\frac{1}2}$
and same thing it not inverse?
You use the vertical line test to check if something is a function. Then, if it is a function, you use the horizontal line test to see if it has an inverse. If it does not pass the horizontal line test, then it isn't an invertible function
The reason is that the (fake) inverse function that you would get would spit out 2 values for (almost) every x value when talking about parabolas, as shown by the $\pm$ in the problem above, and when that happens it can't be a function
yeah #_# because i usually think $x\ge0$ i forgot they didn't say this ^^
thanks ^^
13. but if dont ask about inverse or not. Just ask Find $f^{-1}(x)$
$f^{-1}(x)=\pm(x+9)^{\frac{1}2}$
or $f^{-1}(x)=(x+9)^{\frac{1}2}$
???
14. Originally Posted by liptonpc
but if dont ask about inverse or not. Just ask Find $f^{-1}(x)$
$f^{-1}(x)=\pm(x+9)^{\frac{1}2}$
or $f^{-1}(x)=(x+9)^{\frac{1}2}$
haha, asking to find $f^{-1}$ is asking to find the inverse. if your function is not one-to-one (or onto), either by itself or by restricting its domain, then it does not have an inverse. the question makes no sense.
|
# Predicate formula to propositional formula
I have: $\exists x \forall y\, P(x,y)$, where $M=\{a,b\}$. I need to convert this formula to propositional logic. I know that if M is finite then you can eliminate quantifiers, but what can I do when there are two quantifiers? Any hints or help would be appreciated.
Try to eliminate quantifiers step by step: \begin{aligned} \exists x\forall yP(x,y) &\iff \exists x (P(x,a)\land P(x,b))\\ &\iff \left(P(a,a)\land P(a,b)\right)\lor \left(P(b,a)\land P(b,b)\right). \end{aligned}
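The same elimination can be done mechanically for any finite domain; here is a small Python sketch (ours, for illustration) that builds the resulting propositional formula as a string:

```python
M = ["a", "b"]

def forall_y(x, domain):
    """Expand ∀y P(x,y) into a conjunction of ground atoms."""
    return "(" + " ∧ ".join(f"P({x},{y})" for y in domain) + ")"

def exists_forall(domain):
    """Expand ∃x ∀y P(x,y) into a disjunction of those conjunctions."""
    return " ∨ ".join(forall_y(x, domain) for x in domain)

print(exists_forall(M))   # (P(a,a) ∧ P(a,b)) ∨ (P(b,a) ∧ P(b,b))
```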
|
# Prescriptive Unitarity
###### Abstract
We introduce a prescriptive approach to generalized unitarity, resulting in a strictly-diagonal basis of loop integrands with coefficients given by specifically-tailored residues in field theory. We illustrate the power of this strategy in the case of planar, maximally supersymmetric Yang-Mills theory (SYM), where we construct closed-form representations of all ($n$-point NMHV) scattering amplitudes through three loops. The prescriptive approach contrasts with the ordinary description of unitarity-based methods by avoiding any need for linear algebra to determine integrand coefficients. We describe this approach in general terms as it should have applications to many quantum field theories, including those without planarity, supersymmetry, or massless spectra, defined in any number of dimensions.
Jacob L. Bourjaily,$^{1,4}$ Enrico Herrmann,$^{2,4}$ Jaroslav Trnka$^{3,4}$
CALT-TH-2017-19
1. Niels Bohr International Academy and Discovery Center, University of Copenhagen, The Niels Bohr Institute, Blegdamsvej 17, DK-2100, Copenhagen Ø, Denmark
2. Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA 91125, USA
3. Center for Quantum Mathematics and Physics (QMAP), Department of Physics, University of California, Davis, CA 95616, USA
4. Kavli Institute for Theoretical Physics, University of California, Santa Barbara, CA 93106, USA
## 1 Introduction and Overview
There has been tremendous progress in recent years in the calculation and understanding of perturbative scattering amplitudes in quantum field theory. The scope of these insights and the powerful new computational tools that have resulted include many unexpected connections to modern developments in mathematics, e.g. [1, 2, 3, 4, 5, 6]. Most of these discoveries have been fueled by direct computation—pushing the limits of our theoretical reach (often for toy models) to uncover unanticipated, simplifying structures in the formulae that result, and using these insights to build more powerful tools. The lessons learned through such investigations include the (BCFW) on-shell recursion relations at tree- and loop-level, [7, 8] and [9]; the discovery of a hidden dual conformal invariance [10, 11, 12] as well as the duality to Wilson loops and correlation functions [13, 14, 15, 16, 17, 18, 19, 20]; the connection to Grassmannian geometry [21, 22, 23, 24, 25, 26, 27] and the amplituhedron [28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]; various bootstrap methods [40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51]; the twistor string [52, 53] and its generalization to the scattering equation formalism [54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68]; to -cuts [69, 70, 71], and so on. For a broad overview of some of these developments, see e.g. [72, 73, 74, 75, 76, 77, 78, 79].
This progress has been fueled by very concrete computational targets often guided by specific physical questions. Beyond improving our predictive reach for e.g. collider physics applications, such computations are often critical to theoretical investigations. These include the ultraviolet properties of quantum gravity [80, 81, 82, 83] and the (mathematical) structures behind the functions and numbers that result from perturbation theory [84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94]. More generally, these efforts are often motivated by the enormous discrepancy between the difficulty of computations in field theory and the profound simplicities of the predictions that ultimately result. Some of these simplicities, such as finiteness and the logarithmic behavior of loop integrands are known to be in tension with the ways we normally represent amplitudes (see e.g. [81, 82, 95, 96, 97, 98, 99, 100]). Exposing such tension through direct computation can be very illuminating, often leading to new insights into what we hope will contribute to a better understanding of the foundations of quantum field theory.
Of all the methods used to push the limits of our computational reach into perturbation theory, the most generally applicable is also perhaps the most universal: generalized unitarity (see e.g. [101, 102, 103, 104, 105, 106, 107, 108]). The basic idea is very simple. Because loop integrands are rational functions, they should be determinable by their residues. And because the space of loop integrands—viewed as rational functions—is always finite-dimensional, we can expand any amplitude into a complete basis of such functions:
$$\mathcal{A}^{L}_{n} \;=\; \sum_{k} c_k\, \mathcal{I}_k\,. \qquad (1.1)$$
(The size of this basis depends on the spacetime dimension and the power counting of the quantum field theory in question.) Given any complete basis $\{\mathcal{I}_k\}$, the coefficients of any loop amplitude in (1.1) can then be determined by linear algebra via the criterion that residues match field theory.
This approach is quite general: it can be used to find a representation of any perturbative amplitude in any quantum field theory expanded into a canonical basis of fixed integrals. (There are subtleties for amplitudes that are not fully ‘cut constructible’; but these can always be addressed (e.g. via dimensional regularization, see e.g. [104]), and will not concern us here. For recent applications of the unitarity method as well as integrand-based reduction algorithms, see e.g. [109, 110, 111, 112, 113, 114, 115, 116].) Among the most important advantages of this approach (relative to the Feynman expansion, for example) is that the coefficients appearing in such an expansion, determined by cuts of loop amplitudes, are expressed in terms of on-shell functions—manifestly gauge invariant functions of observable states defining the theory.
The main problem with the traditional approach of generalized unitarity, however, is that it is not prescriptive: it requires an arbitrary choice of the basis of integrands and sufficient computer power to solve the linear algebra problem of matching field theory on all cuts. These issues rapidly become computationally prohibitive. More importantly however, as with any such problem in linear algebra, the form of the solution that results depends strongly on the choice of basis. Some bases are better than others.
This work is motivated by the desire to choose a ‘good’ basis of loop integrands—one for which each term supports a unique, defining cut of field theory. (Of course, Cauchy’s theorem does not allow a rational function to have only a single residue. In general, the integrands in our basis will contribute on many field theory cuts; however, the prescriptive nature is reflected in the fact that there is some cut (chosen from a ‘spanning set’ of cuts [117]) for each integrand not shared by any other—thereby fixing the integrand’s coefficient.) In such a basis, no linear algebra is required: the coefficient of each integrand is simply the corresponding field theory cut (a specific on-shell function). If each coefficient is a single on-shell function, we will say that the representation in (1.1) is prescriptive; and we refer to the method followed to construct such a representation as prescriptive unitarity. This general strategy was introduced in ref. [118] and used in ref. [119] to construct closed-form representations of all two loop amplitudes in planar, maximally supersymmetric ($\mathcal{N}=4$) Yang-Mills theory (SYM). In the only-slightly schematic formalism used in this work, that result can be recast as follows,
$$\mathcal{A}^{L=2}_{n} \;=\; \sum_{L} f_L \times \big[\,\text{two-loop ``ladder'' integrand } L\,\big]\,, \qquad (1.2)$$

where each coefficient, $f_L$, represents a single field theory residue.

In this work, we describe how prescriptive unitarity can be applied to the case of planar SYM as our primary illustrative example. In particular, we show how this strategy can be used to construct an explicit, prescriptive representation for all $n$-point NMHV scattering amplitudes through three loops:

$$\mathcal{A}^{L=3}_{n} \;=\; \sum_{W} f_W \times \big[\,\text{three-loop ``wheel'' integrand } W\,\big] \;+\; \sum_{L} f_L \times \big[\,\text{three-loop ``ladder'' integrand } L\,\big]\,, \qquad (1.3)$$

where every coefficient of every integral is a single, specific field-theory cut. We should emphasize that our choice to apply these ideas to this particularly simple quantum field theory is motivated mostly by brevity in illustration; we expect that such representations exist in general. This optimism, however, requires testing through explicit construction—at higher orders of perturbation and for more general theories. As we will see below, even for this simple quantum field theory, the existence of a three loop prescriptive basis of integrands is rather non-trivial. Therefore, this work represents a concrete test that prescriptive representations exist.

This work is organized as follows. In section 2 we review the basic ingredients of generalized unitarity and introduce the prescriptive unitarity approach to the (re)construction of loop amplitudes at the integrand-level. After briefly describing the cuts of loop amplitudes and integrands in section 2.1, we discuss the traditional representation of one loop amplitudes via unitarity in section 2.2. In section 2.3 we illustrate the prescriptive reformulation of unitarity, describing in detail how such a representation of all two loop amplitudes in planar SYM can be found. The representation we construct in section 2.3 is different from that presented in ref. [119]. This is both for the sake of conceptual clarity (in order to make it more analogous to our three loop result) and also for brevity. Most importantly, we have decided to ignore making the exponentiation of infrared divergences manifest at the integrand-level. Finally, in section 2.4 we outline how the prescriptive approach could be applied to more general quantum field theories. In section 3 we use the prescriptive approach to construct a closed-form representation for all three loop amplitude integrands in planar SYM. The construction of the basis is described in section 3.1, and the choices of cuts (and coefficients) involved in defining this basis are illustrated and described in section 3.2. The complete description of the terms appearing in our representation of three loop amplitudes is given in Appendix A. We conclude section 3 with a general discussion of this representation in section 3.3. In section 4 we revisit one loop (prescriptive) unitarity for theories with general power counting in four dimensions before outlining the prospects for future work in section 5.
Throughout this work, we have actively endeavored to keep our expressions free of any unnecessary reference to a particular choice of kinematical variables—namely, in the representation of loop integrands and their residues. Because of this, however, some of our work may appear unfamiliar even to the most expert of readers. We hope, however, that the examples and illustrations used in our review are sufficiently clear to make both our result and the more general strategy more accessible.

## 2 From Generalized to Prescriptive Unitarity

The basic idea of generalized unitarity is very simple: because Feynman diagrams are rational functions prior to loop integration, the loop integrands of arbitrary scattering amplitudes are rational functions of the external and internal momenta; being rational functions, they are expandable into a complete basis of functions, with coefficients determined by residues (or ‘poles’). Schematically, suppose $\{\mathcal{I}_k\}$ forms a complete basis of loop integrands (appropriate for a particular field theory), then an arbitrary scattering amplitude integrand, $\mathcal{A}^{L}_{n}$, can be represented:

$$\mathcal{A}^{L}_{n} \;=\; \sum_{k} c_k\, \mathcal{I}_k\,. \qquad (2.1)$$

The coefficients in this expansion, $c_k$, are determined by the criterion that the right hand side matches field theory on all residues (of arbitrary co-dimension). Depending on the choice of basis $\{\mathcal{I}_k\}$, the coefficients in (2.1) may be individual residues or arbitrarily complicated linear combinations thereof. A representation of the form (2.1) will be called prescriptive if every coefficient is a single field-theory cut.

Before describing representations of the form (2.1) in more detail, it is worth taking a moment to explain why such a representation would be advantageous. The primary reasons are two-fold. First, although integrated amplitudes can be horrendously complicated transcendental functions, the residues of amplitude integrands are always gauge-invariant algebraic functions of physical observables constructed from tree amplitudes (and enjoying all the symmetries of tree-level $S$-matrices). Thus, regardless of the complex linear combinations of cuts that may appear in the coefficients upon solving the constraints, the terms involved are (relatively) easy to compute, with complexity similar to that of tree amplitudes. Secondly, once a choice of basis is fixed, each integral may be integrated and tabulated irrespective of any particular quantum field theory (provided the basis is sufficiently general). And because loop integration remains considerably harder than integrand construction, it is extremely convenient to reuse integrals for many different computations or reduce them to a smaller set of master integrals, see e.g. [120, 121, 122, 123] and references therein. While we are not using integral reduction in this work, we are simplifying the work required to find integrand-level representations, after which integration-by-parts identities could be exploited. For related work trying to identify certain master integrands that are nonzero upon integration, see e.g. [124, 125, 126, 116].

### 2.1 The Generalized Unitarity Approach to Integrand Construction

In order to be more precise about the ingredients involved in representing an amplitude according to (2.1), it will be useful to define some basic notation and conventions.
The kinds of integrands in which we will be interested consist of some number of ordinary Feynman propagators, of the form $1/(\ell-p)^2$, where $\ell$ is one of the loop momenta and $p$ is some combination of external/internal momenta, with numerators given as polynomials in the loop momenta constructed out of Lorentz invariants. The degree of the numerator polynomials is dictated by the field theory in question. We will have more to say about the numerators soon, but for now let us introduce the notation used for the denominators. The integrals in which we are interested will have denominators corresponding to some scalar Feynman diagram—with each factor of the form $(\ell-p)^2$. For reasons of notational simplicity (and kinematic agnosticism), we will write these propagators according to the edges of a Feynman graph: using ‘$(x,y)$’ to denote the squared momentum flowing through the edge of the graph separating faces $x$ and $y$. For example, a one loop integrand involving five propagators (a ‘pentagon’) would be written:

$$\big[\,\text{one-loop ``pentagon'' integrand}\,\big] \;\equiv\; \frac{N(\ell)}{(\ell,a)(\ell,b)(\ell,c)(\ell,d)(\ell,e)}\,, \qquad (2.2)$$

where $N(\ell)$ is some polynomial in $\ell$. For plane graphs, every edge can be unambiguously labelled by the Poincaré-dual faces which they connect; and each face can be unambiguously labelled by external or internal momenta. Thus, we may use the same labels for edges as for external legs or internal momenta. Our convention will be that $(\ell,a)$ denotes the ‘external’ propagator preceding the external leg $a$ clockwise around the graph (by ‘external’, we simply mean the Feynman propagator of a loop momentum which is on the exterior of the graph), and $(\ell_i,\ell_j)$ denotes an ‘internal’ propagator between loop momenta $\ell_i$ and $\ell_j$. Thus, the edges in (2.2) would correspond to a graph with external legs labelled:

$$\big[\,\text{pentagon with edge labels}\,\big] \;\Leftrightarrow\; \big[\,\text{pentagon with Poincaré-dual face labels}\,\big] \;\Leftrightarrow\; \big[\,\text{pentagon with external-leg labels}\,\big] \qquad (2.3)$$

Throughout this work, we will label leg ranges spanning an arbitrary (but non-empty) length using the notation of the figure on the left in (2.3). Later on, we will also make use of dashed wedges to indicate ranges of legs that may possibly be empty.

Although we have endeavored to keep our formulae kinematically agnostic, it is worth mentioning how natural this notation is when integrands are expressed in dual momentum coordinates. These coordinates are linearly related to ordinary momenta, but make momentum conservation (and translational invariance) manifest. Specifically, we could choose to express the external momentum as $p_a\equiv x_{a+1}-x_a$ (with $x_{n+1}\equiv x_1$ being understood); the points $x_a$ are called ‘dual momentum coordinates’. Differences in these coordinates then represent sums of consecutive momenta, so that:

$$(a,b)=(b,a)\;\equiv\;(x_b-x_a)^2\;=\;(p_a+p_{a+1}+\ldots+p_{b-1})^2. \qquad (2.4)$$

For planar integrands in these coordinates, each loop momentum would be assigned a dual coordinate $x_\ell$, so that all propagators explicitly take the form $(\ell,a)$ or $(\ell_i,\ell_j)$.

#### 2.1.1 On-Shell Functions: the Cuts of Loop Amplitudes

Given a basis of loop integrands, the criterion used to fix the coefficients in (2.1) is that the residues of the right hand side match field theory. We presume the reader understands how to compute the (multidimensional) residues (see e.g. [127]) of explicit, rational integrands. For a precise definition of cut integrals, see the recent work [128].
The residues of field theory amplitudes should also be familiar, but are worth reviewing—if only to clarify notation that will be used throughout this work. The residues of a scattering amplitude are those functions obtained from an off-shell (e.g. Feynman) loop integrand by setting some subset of internal particles on-shell. If starting from the Feynman expansion, it is not hard to see that the set of all Feynman graphs sharing some subset of internal propagators will span entire lower loop amplitudes at the vertices. (This is strictly true for theories with any amount of supersymmetry; for more general theories, this statement requires at least two propagators be cut.) Thus, the residues of loop amplitudes correspond to graphs with amplitudes at each vertex separated by on-shell, internal states. Functions corresponding to such graphs are called on-shell functions, and they have played a key role in many of the recent developments in our understanding of scattering amplitudes.

On-shell functions can be defined (and computed) in many ways—using many kinematical choices which may simplify their form for particular theories (in particular dimensions, etc.). Even though they have been most prominently featured in the realm of supersymmetric theories [21, 129, 130, 131, 132, 133] (see e.g. [134, 135] for some exceptions), they can generally be defined from first principles without reference to (off-shell) loop integrands in any quantum field theory. When represented as a graph $\Gamma$ of amplitudes at vertices indexed by $v$, connected by edges indexed by $i$ (representing on-shell, but internal physical states), the corresponding on-shell function can be defined simply by:

$$f_\Gamma \;\equiv\; \prod_{i}\bigg(\sum_{\text{states}}\int\! d^{\,d-1}\mathrm{LIPS}_i\bigg)\prod_{v}\mathcal{A}_v. \qquad (2.5)$$

This definition follows immediately from locality and unitarity. In (2.5), ‘$d^{\,d-1}\mathrm{LIPS}_i$’ denotes the measure over the ‘Lorentz invariant phase space’ of each on-shell, internal particle, and the summation over ‘states’ means over all non-kinematical quantum labels distinguishing particles in the theory—helicity, colour, etc. We hope that the reader appreciates that (2.5) has been written so as to make clear that these objects are definable in arbitrary numbers of dimensions.

For many of the on-shell functions important to this work, the phase space integrations in (2.5) are not entirely localized by the momentum conservation at the vertices; when this happens, $f_\Gamma$ becomes an (unspecified) integral over on-shell degrees of freedom. The integrand of $f_\Gamma$ is thus some generally algebraic (often rational) function of both external and internal, always on-shell degrees of freedom.

As discussed above, on-shell functions may be equivalently defined as the iterated residues of off-shell loop amplitudes obtained by putting each edge in the diagram on-shell. On-shell functions defined in this way (as residues of loop amplitudes) appeared first historically in the context of generalized unitarity. While we prefer the first-principles definition (2.5), this historical view is useful to bear in mind. For example, considering on-shell functions as residues makes it easy to count how many ‘internal’ degrees of freedom exist for a given diagram: an $L$-loop diagram with $n_I$ internal edges corresponds to a co-dimension $n_I$ residue of a $(d\,L)$-dimensional form—resulting in a function of $(d\,L-n_I)$ remaining degrees of freedom. In this work we will mostly be concerned with four-dimensional quantum field theories.
Among the most important of all on-shell functions is the ‘unitarity’ cut: \raisebox−17.45pt\includegraphics[scale=1.36]bubblecuts≡1x1x2AL(ℓa(→x),pa,…,\raisebox0.75pt\scalebox0.75−ℓc(→x))AR(ℓc(→x),pc,…,\raisebox0.75pt\scalebox0.75−ℓa(→x)),\vspace−7pt\vspace−0.5pt (2.6) where are the on-shell momenta flowing through the corresponding edges—obtained as solutions to and expressed in terms of the 2 remaining degrees of freedom, . From this trivial (if essential) starting point, all one-loop on-shell functions can be obtained by taking iterated residues—e.g., (2.7) Here, the pole in must arise from one of the amplitudes in (2.6), and the resulting on-shell function has one internal degree of freedom—now denoted simply by ‘’. If one further residue is taken, the result is an on-shell function without any internal degrees of freedom—a ‘leading singularity’ (in four dimensions): Res(ℓ,d)=0(\raisebox32.0pt$$\raisebox−34.76pt\includegraphics[scale=1.36]trianglecut\raisebox32.0pt$$)=\raisebox−34.76pt\includegraphics[scale=1.36]boxcut≡fiabcd.\vspace−10pt\vspace−0.5pt (2.8) Although the on-shell function on the right-hand side (2.8) has no internal degrees of freedom, it carries a label ‘’ to distinguish between the two “quad-cut” solutions, denoted ‘’, to the four simultaneous on-shell conditions: (ℓ∗,a)=(ℓ∗,b)=(ℓ∗,c)=(ℓ∗,d)=0forℓ∗∈{Q1abcd,Q2abcd}.\vspace−5pt\vspace−0.5pt (2.9) The reason for making such a distinction is that the on-shell functions for the two solutions (which are now basically just products of tree amplitudes, evaluated on ) are related by parity but are generally distinct: most of the time, . The details of their functional form will not be important to us. However, that there are two solutions to cutting four propagators in four dimensions and that the resulting on-shell functions are generally distinct will be very useful facts to bear in mind. Although we will use (planar) SYM as the primary example throughout this work, we would like to emphasize that the precise form taken for the on-shell functions in this theory will play essentially no role whatsoever. Indeed, most of our analysis would be valid for any particular (four-dimensional) quantum field theory. If every on-shell function in this work were reinterpreted as those of non-supersymmetric Yang-Mills, for example, virtually all of our results would remain valid: our formulae would represent important and correct—merely incomplete—contributions to loop amplitude integrands in pure Yang-Mills. For those readers interested in a more concrete understanding of the on-shell functions used in this work for planar SYM, we refer the reader to more thorough discussions in the literature (see e.g. [21, 136] or the appendices of [119]), and to the computer packages described in refs. [137, 138, 139]. Before moving on, we should clarify some of the terminology that is often used to describe on-shell functions. For this work, we consider ‘residues’ and ‘cuts’ to be interchangeable. Residues with maximal co-dimension (which may involve internal propagators, or simply cut conditions among fewer propagators) have no internal degrees of freedom. These on-shell functions will play an important role for us; they are called ‘leading singularities[106]. Residues for which the number of cut conditions exceeds the number of internal propagators are called composite. (Composite residues, however, will not be very important to our present work.) 
Residues which depend on some number of internal degrees of freedom—such as the unitarity cut (2.6)—may occasionally be called ‘sub-leading’ singularities. Closely related to (sub-)leading singularities are the so-called ‘maximal cuts’ [105] (see also [140]). Maximal cuts are those residues which cut the maximum number of internal propagators of an amplitude; this number depends on multiplicity, but can be substantially less than . As such, maximal cuts often correspond to what we call sub-leading singularities—which could potentially be a source of confusion. We choose not to use the language of maximal- and (next-to)-maximal cuts, however, because the counting of the number of internal degrees of freedom of a given residue is much more important to us than the number of propagators involved (or the multiplicity of an amplitude). ### 2.2 Generalized Unitarity at One Loop In this subsection, we briefly review (traditional) generalized unitarity at one loop [101, 102, 103]. This will provide a convenient excuse to introduce some essential aspects about integrand reduction, and illustrate the differences with the prescriptive approach we describe here. For the sake of clarity and concreteness, let us restrict ourselves to four-dimensional quantum field theories. Let us review some classical results about integrand reduction in four dimensions. The space of squared propagators is easily seen to be six-dimensional: any such factor can always be expanded into a “naïve” basis of Lorentz-invariant monomials: (ℓ,Y)≡(ℓ−pY)2∈span{1,ℓ⋅k1,ℓ⋅k2,ℓ⋅k3,ℓ⋅k4,ℓ2}na\"{i}ve'' % basis for inverse propagators.\vspace−2.5pt\vspace−0.5pt (2.10) Here, represent any spanning set of four-dimensional momenta. There are various ways to make this counting manifest—for example, using momentum twistors [141] or projective coordinates (see e.g. [142]); but for our purposes, the obvious counting in (2.10) will suffice. We restrict our discussion to general kinematics and will not exploit any linear dependencies that may arise for low-point kinematics (when ), [109, 143]. The fact that inverse propagators (in four dimensions) span a six-dimensional space has some immediate consequences independent of any particular quantum field theory. One fairly trivial consequence of (2.10) is that any polynomial of degree less than three in loop momenta can be expressed alternatively in a six-dimensional space of inverse propagators—including the identity polynomial. In particular, this means that—regardless of the (field-theory-determined) power counting of numerators, any integrand with six or more propagators can be expanded in terms of those involving five or fewer: simply choose six of the inverse propagators to expand ‘’ in the numerator, resulting in terms with strictly fewer propagators. This can be done recursively for any integral until all terms have at most five propagators. This reduction procedure was first described by Passarino and Veltman in ref. [109]. This generalizes trivially at higher loop orders to imply that any integrand involving six or more external propagators is reducible into those involving five or fewer. (We will see below that this statement can be strengthened for planar theories.) Let us now discuss the forms taken for loop-dependent numerators of four-dimensional integrands. 
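As a schematic illustration of the reduction just described (valid for generic external kinematics, with the constants $\alpha_i$ introduced purely for illustration), consider a one loop hexagon with propagators $(\ell,a),\ldots,(\ell,f)$: because any six generic inverse propagators span the six-dimensional space in (2.10), we may write
$$1=\sum_{i\in\{a,\ldots,f\}}\alpha_i\,(\ell,i)\qquad\Longrightarrow\qquad\frac{1}{(\ell,a)\cdots(\ell,f)}=\sum_{i\in\{a,\ldots,f\}}\frac{\alpha_i}{\prod_{j\neq i}(\ell,j)}\,,$$
so that the hexagon is expressed as a sum of pentagons; iterating, any integrand with six or more propagators collapses onto those with five or fewer.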
As we have seen, all Lorentz-invariant monomials can be expanded in a six-dimensional basis of inverse propagators; this implies that we may without any loss of generality consider only numerators constructed as products of inverse propagators. It is not hard to see that the space of powers of inverse propagators spans a space whose rank is given by: rank of r-fold products of inverse propagators: (r+34)+(r+4)4.\vspace−2.5pt\vspace−0.5pt (2.11) This counting follows from the fact that these functions correspond to symmetric, traceless products of ’s of . (For this work, the most important instances of (2.11) are for —polynomials with and degrees of freedom, respectively.) From the discussion above, it should be clear that all one loop amplitude integrands in any four-dimensional quantum field theory can be expanded in terms of integrals involving at most five propagators. In order to fully specify a basis of integrals in (2.1), however, we must also know the highest power of loop momentum that can appear in numerators. This is determined by the ultraviolet behavior of the theory in question. For the sake of illustration, let us consider the case of (maximally) supersymmetric Yang-Mills theory (SYM). Amplitudes in this theory scale as at large loop momentum, so that any integral involving five propagators as in (2.2) should have a numerator of the form —a six-dimensional space of possible numerators. This suggests that for any -point amplitude we can construct a complete basis of integrands in terms of pentagon integrals with degrees of freedom each, and box integrals with degree of freedom each (their overall normalization). Such a basis of integrands would indeed be complete—but considerably over-complete. One way to see that this set of integrals forms an over-complete basis is to choose a convenient basis for the numerators of each pentagon integral. Consider the pentagon drawn in (2.2). Its numerator has degrees of freedom; a natural choice of basis for these would involve the relevant inverse propagators, together with the dual of these —generated by the six-dimensional -tensor:555The tensor can be expanded in terms of inverse propagators (involving additional (complex) momenta), but its actual form will not be important for us here. (2.12) In this basis, of the degrees of freedom of each pentagon directly give rise to box integrals—without any loop dependence in their numerators. These are called ‘contact terms’ of the original pentagon; and we see that the degrees of freedom of any pentagon integral cleanly separate into contact terms, and only non-contact degree of freedom. This is responsible for (most) of the redundancy in our naïve basis of pentagons with general numerators and scalar boxes with loop-independent numerators. The decomposition of pentagon numerators according to (2.12) is called the “parity” basis because it naturally separates all integrands into scalar box integrals (symmetric under parity), and parity-odd pentagon integrals. Because the -tensor in (2.12) changes sign under parity, these integrals vanish upon integration (on the parity-even Feynman contour of loop momenta). As such, they are irrelevant to integrated amplitudes and their role in representing loop integrands is consequently, often neglected. Before we can discuss how amplitudes in SYM are represented according to (2.1) using this basis of integrals, we must first observe that it is still over-complete! 
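For reference, and writing the rank in (2.11) as $\binom{r+3}{4}+\binom{r+4}{4}$ (the form suggested by the counting of symmetric, traceless products just described), the two instances used most frequently below are
$$r=1:\ \binom{4}{4}+\binom{5}{4}=1+5=6\,,\qquad r=2:\ \binom{5}{4}+\binom{6}{4}=5+15=20\,,$$
reproducing, in particular, the six-dimensional space of single inverse propagators in (2.10).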
To correctly count the total degrees of freedom required to expand any integral, imagine first combining all terms over a common denominator built from all propagators. The power counting discussed above implies that the amplitude must have powers of inverse propagators in the numerator, implying a total number of degrees of freedom given by (2.11) with . And so, while parity-odd pentagons and scalar boxes do form a basis, they represent degrees of freedom, which exceeds the correct number, , for . Indeed, we can see from this counting that the parity-odd pentagons satisfy integrand-level relations, which must be eliminated in order for us to uniquely fix the coefficients of the chosen, independent subset of pentagons. The upshot of the preceding discussion is that we know that there exists a representation of one loop integrands in SYM of the form: An=∑cabcdeIabcde+∑cabcdIabcd,\vspace−2.5pt\vspace−0.5pt (2.13) where the first terms include some choice of independent parity-odd pentagon integrals. This choice obviously affects the complexity of the coefficients that arise, but has no impact on the coefficients of the scalar boxes—for the simple reason that only these terms survive upon integration, and therefore cannot depend on the choice of basis for the parity-odd pentagons. Thus, if we were only concerned with integrated amplitudes, the representation simplifies considerably: ∫d4ℓAn=∑cabcd∫d4ℓIabcd.\vspace−2.5pt\vspace−0.5pt (2.14) Because this expression is independent of the choice of basis for the parity-odd pentagons, it certainly appears prescriptive. Indeed, the coefficients of the scalar box integrals are (deceptively) simple: cabcd=∑i=1,2fiabcdwherefiabcd≡\raisebox−34.76pt\includegraphics[scale=1.36]boxcut\vspace−10pt\vspace−0.5pt (2.15) where are on-shell functions corresponding to cutting the obvious four propagators, which are summed over the two (parity-conjugate) leading singularities with the topology of a given box. The fact that the coefficients of the scalar boxes take this simple form is not hard to prove by considering the co-dimension four residues of the amplitudes and box integrals in the basis. But this simplicity hides a great deal of underlying structure that is easily overlooked. For example, not all co-dimension four residues of amplitudes involve four propagators: there are also the so-called ‘composite’ residues involving only three propagators separated by two massless legs: \makebox[68.0pt][c]{\raisebox−34.76pt\includegraphics[scale=1.36]oneloopcompositea}=% \makebox[30.0pt][c]{\raisebox−34.76pt\includegraphics[scale=1.36]oneloopcompositeb}supported on box integrals via\raisebox−34.76pt\includegraphics[scale=1.36]oneloopcompositefrombox\vspace−14pt\vspace−0.5pt (2.16) These residues are supported where the loop momentum becomes both soft and collinear (to some external leg), and exist within the range of the Feynman contour (for ); as such, they are precisely responsible for the infrared divergences of loop amplitudes. The fact that the representation (2.14) gets these non-manifestly-matched residues of field theory correct follows from the completeness of our basis and the fact that parity-odd pentagons always vanish on these parity-even residues. But as indicated in (2.16), these residues of field theory are simply tree amplitudes; as such, the fact that the box expansion (2.14) reproduces these cuts is how the tree-level BCFW recursion relations were originally discovered [7] (only later proven in ref. [8]). 
In what follows, we will not make use of composite residues in our work mostly because they exist only for integrals involving massless legs—which would require us to deal with the various cases of possible leg distributions differently. See the end of section 2.3 for a more thorough discussion of the advantages and disadvantages of making these residues (responsible for infrared divergences) manifest. What we have described so far is a more thorough version of how generalized unitarity is usually described at one loop. The representation (2.13) does not meet our requirement for being prescriptive for the simple reason that the coefficients are not individual residues. Despite the fact that the integral level statement in (2.14) is very nearly prescriptive, there is no way to avoid choosing a basis of parity-odd pentagons in (2.13), and the mess of linear algebra resulting in their coefficients. This story can in fact be recast in a prescriptive way, but doing so requires several complications unnecessary beyond one loop (if we insist on maintaining the manifest power counting of SYM). After describing the prescriptive approach to amplitudes at two and three loops, it will be much easier to understand prescriptive unitarity at one loop. Thus, we postpone a more general discussion of one loop prescriptive unitarity until section 4, where we will see that weakening the limits on the power counting of the theory will allow us to better describe SYM at one loop. ### 2.3 Prescriptive Unitarity at Two Loops (Redux) In the past, increasing the loop order or the number of legs often led to computational challenges. Some of the early results started with the computation of integrands for fixed number of legs, [144], which were later extended to arbitrary multiplicity, [145, 9, 146, 119]. In pure Yang-Mills, explicit results for all plus amplitudes up to six points are also available [147, 148, 149, 150, 151], see also work on numerical unitarity methods at two loops [108]. Surprisingly enough, matching two loop amplitudes in planar SYM at the integrand-level according to unitarity (even prescriptively) turns out to be simpler than at one loop. To see this, let us first describe the analog to Passarino-Veltman reduction [109] relevant to amplitudes in (planar) SYM. Without any loss of generality, we may consider all integrals to include at least one internal propagator (multiplying by if necessary). Power counting now requires that any integrand in a purported basis must involve at least three external propagators per loop (four propagators total per loop). How many external propagators are allowed before integrand reduction implies dependencies? For reasons that we will soon demonstrate, it turns out the answer is four, resulting in a (possibly over-)complete basis from the following topologies: {\makebox[60.0pt][c]{\raisebox−24.6pt\includegraphics[scale=1]twoloopladder1},% \makebox[85.0pt][c]{\raisebox−24.6pt\includegraphics[scale=1]twoloopladder21},\makebox[85.0pt][c]{\raisebox−24.6pt\includegraphics[scale=1]twoloopladder31}}\raisebox−2.25pt\scalebox1.75⊂{\makebox[80.0pt][c]% {\raisebox−24.6pt\includegraphics[scale=1]twoloopladdergeneral}}.\vspace−2.5pt\vspace−0.5pt (2.17) The first of these topologies has no loop dependence and only degree of freedom in the numerator, the second has degrees of freedom, and the third has . In order for us to see that no integrands involving more external propagators are required, it will be helpful to first describe the degrees of freedom of these integrands. 
Consider first the ‘pentabox’ integral—the second topology in (2.17). This integral’s numerator must involve a single inverse propagator. It will be useful to describe these degrees of freedom in terms of contact/non-contact parts. Obviously, the contact terms are captured by the four relevant inverse propagators, leaving two non-contact degrees of freedom. These orthogonal degrees of freedom are naturally captured by two quad-cuts—the points in loop momentum space determined by putting the four external propagators on shell. Denoting the four external propagators of the pentagon-part of the pentabox integral by , a natural basis for numerators would be given by: (2.18) Thus, the general numerator of a pentabox can be described as consisting of exactly non-contact degrees of freedom, and contact terms—each having the topology of a double-box (the first picture in (2.17)). Using the same basis for inverse propagators for each pentagon, it is easy to see that the degrees of freedom of a double-pentagon—the last topology in (2.17)—can be decomposed according to . Although we could envision all the topologies in (2.17) as arising as contact terms of the double-pentagon, it turns out to be much smarter to discuss each topology as being defined modulo its contact-term degrees of freedom. Thus, a double-box has a single degree of freedom; a pentabox has 2 degrees of freedom (modulo contact terms); and a double-pentagon has 4 degrees of freedom (modulo contact terms). In what follows, we will define our basis using all the degrees of freedom for each topology (which would seem like a very over-complete basis), with an important role played by the non-contact degrees of freedom of each. Thus, we may reformulate our basis according to, {\makebox[60.0pt][c]{\raisebox−24.6pt\includegraphics[scale=1]twoloopladder1},% \makebox[85.0pt][c]{\raisebox−24.6pt\includegraphics[scale=1]twoloopladder2},\makebox[85.0pt][c]{\raisebox−24.6pt\includegraphics[scale=1]twoloopladder3}},\vspace−2.5pt\vspace−0.5pt (2.19) where an index denotes the non-contact degrees of freedom for each term. We are now ready to see that any integrand involving more than four external propagators for either loop momentum is reducible into the topologies given in (2.19). Suppose that one of the loops involved five external propagators. Its degrees of freedom could then be spanned by single contact terms with (non-contact) degrees of freedom each, and double-contact terms with degree of freedom each (their normalization). Thus, any integrand involving more than four external propagators is expandable into those in (2.19). Before moving on, we should be clear that our present basis of two loop integrands (relevant to planar SYM) in (2.19) is certainly over-complete. This is because, for example, while we consider there to be two pentaboxes (indexed by , counting the non-contact terms), we are going to allow them to be defined by all their degrees of freedom of a general numerator consistent with power counting—including their contact terms. We will see below that these integrands, as they appear in the basis for our prescriptive representation, will have rules to specify all six of their degrees of freedom. This may seem to be a rather poor choice of basis, it being initially over-complete; however, these additional degrees of freedom will be critical to allowing us to construct a strictly diagonal basis for cuts—a basis of integrands for which each term matches a specific field theory residue unique to that integrand. 
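Collecting this counting in one place (a bookkeeping sketch of the decomposition just described): each pentagon factor contributes $6=2+4$ numerator degrees of freedom, split into non-contact and contact pieces, so the numerator of a double-pentagon decomposes as
$$6\times6=36=\underbrace{2\times2}_{\text{non-contact}}+\underbrace{2\times4+4\times2}_{\text{pentabox-type contact terms}}+\underbrace{4\times4}_{\text{double-box-type contact terms}}=4+16+16\,.$$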
Once such a basis has been constructed, its non-over-completeness is guaranteed. #### 2.3.1 Choosing a Diagonalized Basis of Integrands/Cuts Let us now describe how to fully determine each integrand in our basis (2.19) according to field theory cuts. The first of the integrals, the double-boxes, are the simplest but arguably the least trivial. They are simple because each double-box has only a single degree of freedom, and so we need only determine its normalization; they are the least trivial because they do not have (in the general case) any residues with maximal co-dimension. (When one or more of the middle leg ranges are empty, the integrals do have support on maximal co-dimension residues, but we choose here to ignore this potential simplicity in favor of a more generally valid approach.) The fact that double-box integrals do not generally support residues with maximal co-dimension is not in fact very problematic: we merely need to match field theory on a less-than-maximal co-dimension residue. For example, let us choose to consider the co-dimension six residue of the integral that puts all six of the external propagators on-shell. We may parameterize the two-dimensional space of loop momenta along this residue by ‘’—one parameter per loop. The residue of the six propagators is easy to take: it produces a simple Jacobian666This Jacobian appears on both sides of (2.1), and so is not actually relevant to the coefficients., together with the internal propagator, , left as a function of the remaining degrees of freedom: \raisebox−24.6pt\includegraphics[scale=1]twoloopintcut1 (2.20) The corresponding residue of field theory is similarly easy to evaluate, it also being a function of two internal degrees of freedom. We will represent this as: \raisebox−24.6pt\includegraphics[scale=1]twoloopcut1 (2.21) A closed formula for this on-shell function (expressed using momentum twistors) was provided in ref. [119]; but a more general way to express it (independent of kinematical preferences) would be to start with a double-bubble—analogous to (2.6) above—and take two residues cutting the outermost amplitudes. This function represents the ‘correct result’ for this two parameter function of the loop momenta, and so we must match field theory everywhere as a function of . This can be done by simply matching field theory at an arbitrary (but fixed) point . (These points can always be chosen so that the now-normalized basis integrand is dual-conformally-invariant, but dual-conformal-invariance is not something required by our approach.) Thus, we can match field theory at least at some arbitrarily chosen point along this co-dimension six residue of the integrand using the terms: An=\raisebox−2.25pt\scalebox1.75∑(% \makebox[60.0pt][c]{\raisebox−24.6pt\includegraphics[scale=1]twoloopladder1}\raisebox−2.25pt\scalebox1.75×\makebox[65.0pt][c]{\raisebox−24.6pt\includegraphics[scale=1]twoloopcut1})+…,\vspace−2.5pt\vspace−0.5pt (2.22) To be completely clear, the scalar double-box integrands have been normalized (fixing their one degree of freedom) at some particular point to match the corresponding (-dependent) point in field theory. We could include these labels in the figure denoting the integrand in our basis, but have left them off for notational simplicity. Also, the labels in the on-shell function should really be understood as —where these particular points are fixed, but chosen arbitrarily. 
The attentive reader will notice that these cuts, (2.21), would also have support from the other integrals in our basis (2.19). And so it would seem that we are in danger of spoiling the correctness of the terms (2.22) on the cuts (2.21) once we include the other integrals in our basis. This potential problem is easily solved by using the contact term degrees of freedom of the higher integrands in our basis to ensure that all the other integrals vanish on these cuts. Just to be clear, it is not possible to make these higher integrands vanish everywhere on the lower cuts, but only at the specified points, etc., along the lower cuts. Let us now describe how this works in detail. Consider now the pentaboxes in our basis of integrands (2.19). Each of these has exactly two non-contact, and four contact degrees of freedom. Can these integrals be used to match field theory cuts not matched already by the terms in (2.22)? And can we do so without spoiling those already matched, (2.21)? The answer to both questions is yes. Let us consider each in turn. Pentabox integrals have more external propagators than the double-boxes, and therefore support field theory cuts involving more external propagators. We could use co-dimension seven cuts to match field theory in a way similar to what we did above, but (unlike the double-boxes), pentabox integrals always support ‘leading singularities’—residues of maximal co-dimension (eight)—for the simple reason that they involve eight total propagators. Whenever leading singularities are supported, they are better cuts to use—if only because they do not require any arbitrary choice of points such as on which to evaluate cuts of integrands and their on-shell function coefficients. Thus, the pentaboxes can be used to match (some of the) leading singularities of field theory: \makebox[85.0pt][c]{\raisebox−24.6pt\includegraphics[scale=1]twoloopintcut2}\raisebox−2.25pt\scalebox1.75↔% \makebox[100.0pt][c]{\raisebox−34.76pt\includegraphics[scale=1]twoloopcut21}\vspace−10pt\vspace−0.5pt (2.23) When trying to match field theory on these cuts, there may seem to be a problem. On the one hand, the equations for cutting eight propagators has four distinct solutions (two per loop, labeled by ), and field theory residues evaluated at these points in loop-momentum space are generally distinct. On the other hand, there are only two non-contact degrees of freedom for the numerator of the pentabox. At best, we can match two of the four residues of field theory. (All of the contact term degrees of freedom vanish on these cuts by virtue of the fact that all of the contact terms involve fewer external propagators.) The resolution of this problem is in fact simple: it is simply unnecessary to manifestly match every field theory residue. So long as we have a complete basis of integrands, and the coefficient of every integral in the basis is uniquely fixed by some residue, completeness of the basis ensures that all other residues will also be matched. Thus, we merely need to choose two of the four pentabox leading singularities to match manifestly using the non-contact-term degrees of freedom. 
Let us therefore declare that we fix the (non-contact-term) degrees of freedom of the pentaboxes in order to precisely match field theory on the ‘’ residues of field theory in (2.23): \raisebox−34.76pt\includegraphics[scale=1]twoloopcut2 (2.24) The above discussion has allowed us to uniquely specify the non-contact-term degrees of freedom of every pentabox integral, matching field theory on the (subset of) pentabox leading singularities in (2.24). This can be done irrespective of the contact term degrees of freedom, as none of these terms have support on the residues (2.24). Thus, we have a four-dimensional space of ‘ambiguities’ for the possible numerators of the pentaboxes which leave in tact the correctness of the pentabox residues. The attentive reader should already understand how these degrees of freedom should be eliminated: following our general comments on prescriptive unitarity, these contact term degrees of freedom are eliminated in such a way that we must ensure that the pentabox integrals vanish on the already-matched points in loop-momentum space. Because each contact term of the pentabox corresponds to a double-box integral, and each of these have been used to match field theory at arbitrarily chosen points , we now require that the pentabox integrals vanish at these points. These are homogeneous equations which are easy to solve analytically: one merely evaluates the non-contact-term numerators on the residues being used to define the contact-term integrals, and subtract. (Without this subtraction, our basis would be essentially upper-triangular in form, and so this subtraction represents the only ‘linear algebra’ involved in our construction.) For example, if the four external propagators of the pentabox are labeled , so that we may expand the numerator into the “chiral” basis given in (2.18), then we simply define the total pentabox integral’s numerator to be given by: Ni(ℓ)≡(ℓ,Yi)−∑\makebox[10.% 0pt][c]{λ∈{a,b,c,d}}(ℓ(x∗),Yi)(ℓ,λ)(ℓ(x∗),λ).\vspace−2.5pt\vspace−0.5pt (2.25) Here, is one of the non-contact-term (“chiral”) numerators (generally is one of as in (2.18), but normalized to match a particular cut in (2.24)); and is whatever point is used to define the double-box integrals in the basis—which need not be the same for every double-box. This is analogous to Gram-Schmidt orthogonalization. We are only being somewhat schematic here because any more concreteness would require some reference to formulae for (requiring in turn the need to introduce notation using some kinematic scheme—about which we desire to remain agnostic), and also a specific rule for specifying the points which are truly arbitrary. For a more concrete discussion, we refer the reader to ref. [119]. Thus, we have now fully specified all pentabox integrands in our basis, and the coefficient of each is uniquely fixed by a single field theory residue. All that remains for us to do is choose which double-pentagons to include in our basis. As before, this can be done rather simply. Each double-pentagon has non-contact degrees of freedom, and—conveniently this time—has precisely four leading singularities not shared by any other integrals in our basis: the so-called ‘kissing boxes’. 
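(Returning briefly to the subtraction in (2.25), it is worth verifying that it does what is claimed. Writing (2.25) as
$$N_i(\ell)\equiv(\ell,Y_i)-\sum_{\lambda\in\{a,b,c,d\}}\frac{\big(\ell(x^*_\lambda),Y_i\big)}{\big(\ell(x^*_\lambda),\lambda\big)}\,(\ell,\lambda)\,,$$
with $x^*_\lambda$ denoting the point used to normalize the double-box obtained by collapsing propagator $\lambda$, evaluation at $\ell=\ell(x^*_\lambda)$ gives zero: every term with $\lambda'\neq\lambda$ is proportional to $\big(\ell(x^*_\lambda),\lambda'\big)$, which vanishes on that cut, while the $\lambda$ term cancels $\big(\ell(x^*_\lambda),Y_i\big)$ identically. Thus each pentabox numerator vanishes at every point used to define a double-box in the basis, as required.)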
Thus, returning to the double-pentagons, we may uniquely determine their non-contact degrees of freedom by ensuring that they match field theory on the four cuts:
$$\big[\text{figure: the double-pentagon on its four ‘kissing box’ leading singularities}\big]\;\longleftrightarrow\;\big[\text{figure: the corresponding on-shell functions}\big]\qquad(2.26)$$
As before, the contact terms for these integrals are fully determined by the requirement that these integrals vanish on all the cuts already matched by lower integrals. It is easy to see that exactly the right number of contact-term topologies exist to eliminate all these degrees of freedom. Having started with an over-complete basis of integrals, and defined each integrand uniquely to match field theory on a specific cut and to vanish on all other cuts used to define other integrals, we have achieved a truly diagonal basis. That this basis is not over-complete follows from the fact that each integrand has a unique field theory coefficient (because every other integral in the basis has been explicitly constructed to vanish there). Thus, we have found a prescriptive representation of all two loop amplitude integrands in planar SYM. Schematically, we may write:
$$A_n^{L=2}=\sum_{L} f_L\times\big[\text{general two loop ‘ladder’ integrand}\big]\,,\qquad(2.27)$$
where the ‘ladder’ integrands are chosen from our basis (2.19), constructed in the way described above, and each coefficient $f_L$ is a single on-shell function:
(2.28)
Readers familiar with the earlier work in ref. [119] will notice that the representation of two loop amplitudes described here is considerably more compact (and arguably more straightforward). There are several reasons for this.
The primary distinction between the representation of two loop amplitudes in (2.27) and that described in ref. [119] is that here we have made no use of composite residues such as (2.16). Because these residues are entirely responsible for the infrared divergences of scattering amplitudes which are known to exponentiate according to the BDS ansatz described in ref. [152], it is well-motivated to make this exponentiation manifest in an integrand-level representation. This was achieved in the formulation described in ref. [119], but at the cost of distinguishing the terms in (2.27) according to the possible masslessness of the external leg ranges of the integrals, and using different cuts/coefficients for the different cases—namely, using composite residues for the massless cases, and those similar to what we described above whenever composite residues would not exist.
Our choice here not to make such distinctions is primarily pedagogical: breaking the basis of integrals into more cases introduces a degree of inessential complication and requires a longer discussion. At three loops, choosing not to exploit composite residues leads to a considerably more compact formulation; but it is worth mentioning that we have been unable to make the exponentiation of infrared divergences manifest at three loops, even when such distinctions are made. As such, it is not merely the interest of brevity that motivates our choice to ignore any possible composite residues that may exist. Refining our representation at three loops to make the exponentiation of infrared divergences manifest would be an interesting exercise, but must be left for future research.
### 2.4 Generalities of a Prescriptive Approach to Unitarity
We hope that the discussions above at one and two loops were sufficiently clear to illustrate the prescriptive approach to unitarity. In this section, we outline how this can be formulated for amplitudes in a truly general quantum field theory—without reference to planarity, supersymmetry, spacetime dimension, or even the masslessness of particles. We return to the case of one loop prescriptive unitarity in section 4 in order to better illustrate how these methods work for theories with worse ultraviolet behavior than SYM.
The first step is to construct a complete basis of (local) loop integrands, with numerators dictated by the power counting of the field theory in question. After an analogue of Passarino-Veltman reduction [109], an over-complete basis of integrands may be identified. From such a basis of integrands, a prescriptive representation for any amplitude would be found by the following procedure:
1. draw all integrand topologies, dividing every numerator’s degrees of freedom into non-contact terms and contact terms;
2. for each integrand, starting with those involving the fewest external propagators, specify an independent subset of field theory residues involving all external propagators, and use these to define its non-contact term degrees of freedom;
3. fix each integrand’s contact terms by the requirement that the integral vanish on all the residues used to define integrals with fewer propagators.
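Schematically (and introducing, purely for illustration, the symbols $\mathfrak{B}$ for the resulting basis and $C_{\mathcal{I}}$ for the cut chosen to define the integrand $\mathcal{I}$), the outcome of this procedure is a basis which is diagonal with respect to its defining cuts: if each integrand is normalized to have unit residue on its own defining cut and to vanish on the cuts defining all others,
$$\operatorname*{Res}_{C_{\mathcal{J}}}\,\mathcal{I}=\delta_{\mathcal{I}\mathcal{J}}\quad\text{for all}\ \ \mathcal{I},\mathcal{J}\in\mathfrak{B}\qquad\Longrightarrow\qquad f_{\mathcal{I}}=\operatorname*{Res}_{C_{\mathcal{I}}}\,\mathcal{A}\ \ \text{in (2.1)}\,,$$
so that every coefficient is a single field theory residue and no linear algebra is required.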
This procedure may be followed regardless of the power counting of the theory, its spacetime dimension, etc. The only annoyance that may arise is that the size of the basis may grow rapidly—requiring a correspondingly large number of cuts (some of which may have identical topologies, but be evaluated at multiple points along their internal degrees of freedom).
If the last step in this procedure above were ignored—so that the cuts which define each integral did not exactly correspond to a single field theory residue—then the actual coefficients could be easily found by linear algebra. In this case, what we have described would asymptotically amount to what was described by OPP in ref. [110] (where the coefficients of integrals represent the difference between the right answer and all coefficients of the higher-level integrals which pollute each cut in question). This distinction is perhaps better illustrated by example, and we refer the reader to a more thorough description of prescriptive unitarity at one loop in section 4.
In order to find a truly prescriptive representation, it is the last step that is the most important. And it may not even be possible to satisfy. If care is not taken in the selection of cuts used to define the lower integrals, the requirement that higher integrals vanish on all the cuts below may simply not be satisfiable. This will happen whenever the cuts being used to define ‘lower’ integrals outnumber the contact term degrees of freedom of the integrals above. The easiest way to illustrate this potential problem is through concrete examples that first arise at three loops. Because of this, we refer the reader to section 3.2.3 for a more thorough discussion.
Even without seeing the details of what can go wrong, we should emphasize that this potential tension is a very real one: no matter how cleverly cuts are chosen, it is not possible to avoid naïvely over-constraining the contact terms of some integrals at three loops. In the representation we describe in the next section, this seemingly over-constrained problem is in fact solvable, but such a solution was not guaranteed. As such, our result at three loops represents a non-trivial test that prescriptive representations exist.
What would it mean for the prescriptive approach not to work? This would happen if it were not possible to satisfy the requirement that some integrals’ contact terms vanish on all the (supported) cuts used to define lower integrals. This would not be a fundamental problem, per se, as the integrand basis being generated would still be complete; and as such, there would surely exist a solution to generalized unitarity resulting in some representation for amplitudes of the form (2.1). The problem is that the representation that results would not be prescriptive. Why? Because, if higher integrals could not be made to vanish on some cut purportedly being used to define a lower integral, then this cut would have contributions from more than one integral in the basis. The basis would not be diagonal in cuts. As such, the coefficient of the lower integral would need to be the difference between the “right answer in field theory” and the sum of all the coefficients of higher integrals that pollute this cut. If there were only one such complication, this would not substantially complicate matters; if there were many, then the problem would revert to more complicated linear algebra—ubiquitous in a non-prescriptive approach.
Because of this tension, it would be very interesting to see if prescriptive representations exist more generally—at higher orders of perturbation, for non-planar theories, for theories with less supersymmetry, etc. Even if prescriptive representations were not possible, however, we expect that the prescriptive approach described here would lead to a substantial improvement in the linear algebra involved in finding integrand-level representations of amplitudes.
## 3 Prescriptive Representation of All Three Loop Amplitudes
Let us now apply the prescriptive approach to construct a closed-form representation of all $n$-point amplitudes in planar SYM at three loops. Until now, the only cases known (for arbitrary multiplicity) were the three loop MHV integrands found in [9, 119]. In this section, we construct representations valid for all amplitudes,
$$A_n^{L=3}=\sum_{W} f_W\times\big[\text{general three loop ‘wheel’ integrand}\big]\;+\;\sum_{L} f_L\times\big[\text{general three loop ‘ladder’ integrand}\big]\,,\qquad(3.1)$$
where the integrals span all contact-term topologies of those drawn above, and the non-contact terms of each are fully determined to match specific field theory cuts, whose on-shell functions furnish the coefficients $f_W$ and $f_L$. As indicated in (3.1), the possible integrands come in two principal topologies which we will call ‘wheels’ and ‘ladders’, respectively. In the next subsection, we will demonstrate that this corresponds to a complete basis for three loop integrands, and we give a complete enumeration of the contact-term topologies (relative to what is drawn in (3.1)) that appear in our basis. In the following subsection we illustrate the cuts which define our basis (and the field theory coefficients); complete details are provided in Appendix A. General aspects of (3.1) are discussed in section 3.3.
### 3.1 Constructing a Diagonal Integrand Basis for Three Loop Integrals
As outlined in section 2.4, the first step in applying prescriptive unitarity is to construct a complete basis of integrals (relevant to a particular quantum field theory). At two loops, we saw that all integrals (with the correct power counting for SYM) could be expanded into those involving at most four external propagators—generally, double-pentagon integrals and contact terms thereof.
At three loops, the same rule applies: any integral involving more than four external propagators is expandable into those with fewer. For (single-loop-momentum) factors of integrands involving a single internal propagator, the argument is the same as at two loops. New at three loops is the possibility that one loop involves two internal propagators. The fact that a heptagon involving five external and two internal propagators (with numerators spanning a 50-dimensional space according to (2.11)) can be decomposed into those involving at most four external propagators is similarly easy to verify by counting. See Table 2 for more general counting. This fact demonstrates that general integrands with the wheel topology (the first terms in (3.1)) can involve at most four external propagators per loop, and that the ladder integrals drawn in (3.1) are actually reducible into:
(3.2)
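As a quick check of the heptagon counting quoted above (with the rank formula (2.11) read as in section 2.2), a numerator built from three-fold products of inverse propagators spans a space of dimension
$$\binom{3+3}{4}+\binom{3+4}{4}=15+35=50\,;$$
and because the heptagon’s seven inverse propagators generically over-span the six-dimensional space (2.10), at least one numerator factor can always be traded for an inverse propagator, cancelling a denominator exactly as in the one loop reduction described in section 2.2.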
One may wonder why we have excluded ‘wedge-type’ integrals of the ladder topology—those involving four external propagators on one side of the middle loop. We do not list these because, without any loss of generality, they can always be considered as wheel integrals (by multiplying and dividing by the additional propagator). Thus, the integrals appearing in (3.2) together with the wheel in (3.1) represent a (considerably over-)complete basis of integrands at three loops.
The second step in the procedure is to divide all the numerators for integrands in our basis into contact/non-contact-term degrees of freedom. In this partitioning, the only new cases to consider (relative to two loops) are the hexagons and pentagons involving two internal propagators. Let us describe the pentagons first. As should be familiar, the power counting of SYM dictates that these integrals involve numerators constructible as single inverse propagators. The division of this basis into contact/non-contact terms is obvious: once the three external propagators are labelled, we should use a basis of the form:
(3.3)
Here, the non-contact terms are somewhat schematic—they correspond to arbitrary inverse propagators which span the three-dimensional space orthogonal to the three contact terms. The form of these numerators is not important, but the counting is. Thus, a pentagon integral involving three external and two internal propagators has $6=3+3$ degrees of freedom—counting non-contact and contact terms, respectively. When indicating the three non-contact term degrees of freedom, we will use capital Roman letters (letters corresponding to the different loops).
The final novelty to be discussed at three loops is the possibility of a hexagon involving four external and two internal propagators. This case turns out to be considerably simpler. Again, the power counting of SYM dictates that these integrals must involve 20 degrees of freedom, constructed as two-fold products of inverse propagators. A natural basis for these numerators is as follows:
(3.4)
Thus, hexagons have 2 non-contact- and 18 contact-term degrees of freedom. Because they have only 2 non-contact terms, we will distinguish them by lower-case Roman letters (again, the letters used to distinguish the loop momenta).
We are now ready to enumerate all the possible topologies required in our basis, and count the non-contact term degrees of freedom of each. This is given in Table 1. The cuts used to define these integrals (and hence their field theory coefficients) are described in Appendix A.
For the sake of clarity, each integral topology drawn in Table 1 represents the collection of all integrals with distinct, cyclically-ordered leg distributions. (Loop momentum labels are always symmetrized.) For most integrals, asymmetry in the diagram is compensated by rotational invariance of the sum—so their reflected images are already accounted for. However, the reflected images of a few integrals should be considered implicit; for these, the defining cuts are related by symmetry to those drawn in Appendix A.
### 3.2 Illustrations of Integrand-Defining Cuts and Coefficients
As mentioned above, the full list of cuts used to define each integral of our basis in Table 1, together with the coefficient of each, is given in Appendix A. In this section we merely illustrate some examples of these defining cuts and corresponding coefficients. We start with the most obvious choices, then discuss some that are truly arbitrary, before addressing some of the more subtle issues involved.
These subtleties arise because some of the cuts necessarily or potentially used to define the wheel-topology integrals have support as contact terms of the ladder-topology integrals. We will see that some of this overlapping support is necessary and important, but also has the potential to spoil the diagonalizability of our basis. Indeed, we will see that for exactly one of the integrals in our basis (the wheel consisting of three pentagons, discussed in section 3.2.3) this cross-talk between topologies poses a critical and unavoidable tension that, if unresolved, could spoil the existence of any prescriptive representation. For this reason, this integral’s defining cuts will be described in some detail, making clear how this tension arises and how it is resolved.
#### 3.2.1 Obvious or Arbitrary Choices for Cuts and Coefficients
Analogous to the double-pentagon integrals at two loops, some of the integrals in our basis are defined by entirely obvious cuts. This is the case for the top-level integrals in our basis:
(3.5)
Each of these integrals has precisely twelve external propagators, giving rise to leading singularities indexed by the two solutions to each one-loop box:
(3.6)
In each case, the leading singularities can be used to define the corresponding integral’s non-contact degrees of freedom in its numerator.
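For orientation, the counting of these leading singularities is the three loop analogue of (2.9): cutting the twelve external propagators imposes four on-shell conditions on each of the three loop momenta, and each such quadruple of conditions admits two solutions, so that
$$\#\big(\text{leading-singularity solutions}\big)=2\times2\times2=8$$
for every distribution of external legs.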
Let us now consider a case where some choices of cuts must be made, but where this choice is completely arbitrary. One of the simplest examples where such a choice is required arises for a wheel integral with only eleven external propagators. In this case, some internal propagator must also be cut to give a leading singularity. There are two potential topologies of these residues, depending on which internal propagator is cut (the third internal propagator cannot be cut in a leading singularity, as it would require more than four constraints to be imposed on a single loop momentum):
(3.7)
There are therefore more natural leading singularities to match than there are (non-contact) degrees of freedom in the numerator. Thus, it is simply not possible to construct a numerator for this integral for which each of the leading singularities (3.7) is matched identically.
This situation is analogous to the case of the pentabox leading singularities and integrals at two loops (see section 2.3.1). And the solution is the same: it simply does not matter which choice of cuts is used to match field theory—the non-manifestly matched cut(s) will always follow from completeness of our basis. Thus, we have simply chosen to fix these degrees of freedom using cuts of the second topology in (3.7), matching the 12 leading singularities:
(3.8)
It is worth seeing how the ‘missing’ leading singularities are matched indirectly through a residue theorem. In this case, the residue theorem is:
(3.9)
Fixing the solutions of the two quadruple cuts, we start from the one-parameter function depicted in the first line of (3.9) and sum over all allowed factorization channels (including the different solutions labelled above). This is simply a manifestation of Cauchy’s theorem. Notice that the first term in the summand of (3.9) includes both the ‘missing’ cuts and the ‘matched’ residues of our choice (3.8), and every other residue appearing in this theorem has been matched explicitly. Thus, this identity directly allows us to express the unmatched cut in terms of those we have matched.
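Concretely (and schematically, assuming, as appropriate here, that there is no contribution from infinity), for the one-parameter function $f(z)$ depicted in the first line of (3.9), Cauchy’s theorem states that the sum of all of its residues vanishes,
$$\sum_{\text{poles }z_k}\operatorname*{Res}_{z=z_k}f(z)=0\qquad\Longrightarrow\qquad\big(\text{unmatched cut}\big)=-\sum_{\text{all other poles}}\big(\text{explicitly matched cuts}\big)\,,$$
which is precisely how the identity (3.9) determines the cut not matched manifestly.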
Of course, in order for this to work, every integrand supporting the other cuts must have support on the unmatched cut:
$$\big[\text{figure: the unmatched leading-singularity cut}\big]\qquad(3.10)$$
Interestingly, once the non-contact degrees of freedom of this wheel have been fixed according to the choice (3.8), each of these integrals automatically contributes (with a minus sign) to the unmatched cut; and similarly, once the contact terms of the other integrals appearing in (3.10) have been fixed by the requirement that they vanish on the cuts in (3.8), these integrals automatically have support on the unmatched cuts (3.10) as well. Thus, every term required in the residue theorem (3.9) does contribute support on the non-manifestly-matched cuts, with the requisite signs in order to exactly match field theory on the non-manifestly-matched residue (3.10).
Such residue theorems are fun to illustrate, but the fact that some residue theorem ensures that any non-manifestly-matched cut of field theory works follows automatically from completeness. Hence, we will spare the reader the (somewhat tedious) exercise of describing how this works in every particular case that follows.
#### 3.2.2 Somewhat Carefully Chosen Cuts and Coefficients
No wheel integral has support on a cut used to define a ladder; but the converse is not true. In fact, we have already seen this in action: the cuts used to define one of the wheel integrals can have support from a ladder; and the requirement that the ladder vanish on these cuts precisely accounts for all of its wedge-type contact-term degrees of freedom.
This happens frequently, but requires a minimal degree of care. This is perhaps best illustrated by example. Consider a wheel integral with 3 (non-contact) degrees of freedom that must be fixed. Cutting all propagators of the diagram results in four cuts:
(3.11)
Here, the blue and white three-point vertices represent MHV and $\overline{\text{MHV}}$ amplitudes, respectively. The choice we make in Appendix A is perhaps the obvious one: simply choose 3 of the 4 possible cuts in (3.11),
(3.12)
Although this choice works, it is worth illustrating an alternative choice that may appear acceptable but that would in fact have been problematic. As far as the non-contact degrees of freedom of this integral are concerned, any three independent cuts involving all external propagators would suffice. What, then, would have been wrong with the following choice:
(3.13)
The cut on the left in (3.13) does indeed determine the remaining non-contact degree of freedom of this integral just as well as the cut on the right. As far as the wheel integrals are concerned, any wheel with support on one will have support on the other; and so, this choice has no effect on the constraints imposed for the contact terms of higher wheel integrals. The problem, however, is that some ladder integrals have support on the left-hand cut in (3.13), but not on the right-hand choice. Moreover, it is easy to see that there do not exist contact-term degrees of freedom for ladder integrals capable of making these vanish on the left-hand cut in (3.13). This means that this first choice would spoil the diagonalizability of our basis.
A similar situation arises for the wheel integral with a single (non-contact) degree of freedom, which it may have seemed convenient to fix along a cut with the topology of three ‘kissing’ bubbles:
(3.14)
Choosing arbitrary points along this residue could indeed be used to define the single degree of freedom in the numerator of this integral. However, this cut topology has support from many ladder integrals—which cannot be made to simultaneously vanish on these cuts; it would over-constrain the contact-term degrees of freedom of the ladders. Thus, we cannot choose to define this integral by the cut (3.14). In order to avoid over-constraining the contact-term degrees of freedom of the ladder integrals, it is necessary for us to ensure that the cut used to define this wheel has no support on any of the ladders. This is only achieved if all of its internal propagators are cut when defining its normalization and coefficient.
Cutting every propagator of this wheel results in 2 possible solutions (each parameterized by three internal degrees of freedom which must be chosen arbitrarily), distinguished by the MHV-degree of the internal, three-point amplitude. The choice of which of these two cuts should be used to define the integral and its coefficient is arbitrary, but must be made. We have chosen the former:
$$\big[\text{figure: the chosen cut of this wheel, with every propagator put on-shell}\big]\qquad(3.15)$$
The general rule to avoid these potential problems should now be obvious: cut as many internal propagators as possible when defining the non-contact degrees of freedom of any integral—making sure that the number of cuts used with a given topology does not exceed the degrees of freedom of any potential contact terms from higher integrals (especially those with different non-contact topologies). Thus, whenever a cut used to define a wheel integral has support from (the contact terms of) ladder integrals, the number of such cuts should not exceed the degrees of freedom of the overlapping contact terms.
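Stated as an inequality (schematically), the rule is that for every cut topology $T$ used to define ‘lower’ integrals,
$$\#\big(\text{defining cuts with topology }T\big)\;\le\;\#\big(\text{contact-term degrees of freedom of topology }T\text{ in any higher integral}\big)\,,$$
since each such cut imposes one homogeneous condition on those contact terms.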
It is relatively easy to verify that the defining cuts of wheel integrals given in Appendix A which have support from ladder-integral contact terms exactly account for the right counting, with exactly one unavoidable exception. This exception, the resolution of the resulting tension, and its potential implications for (the viability of) prescriptive unitarity more generally are discussed presently.
#### 3.2.3 Very Carefully Chosen Cuts: Magic Needed, Magic Found
As mentioned above, cutting as many internal propagators as possible to define the wheel integrals works quite well, with one important exception. While easy to overlook, its potential implications beyond three loops (and for more general theories) warrants a more thorough discussion.
The exceptional case is for the wheel consisting of three pentagon integrals:
$$\big[\text{figure: the wheel integral consisting of three pentagons}\big]\qquad(3.16)$$
This integral has 27 non-contact degrees of freedom that we must fix by cuts. Following the rule described above, it is natural to start with the leading singularities—those cuts which put all 12 propagators on-shell:
$$\big[\text{figure: leading-singularity cuts of (3.16)}\big]\;\subset\;\Big\{\big[\text{figure: first cut topology}\big],\ \big[\text{figure: second cut topology}\big]\Big\}.\qquad(3.17)$$
It is easy to verify explicitly that of these 16 leading singularities, only 15 are independent points in the space of numerators. Thus, any choice of 15 can be used to define this number of non-contact degrees of freedom of the numerators, leaving us with 12 degrees of freedom. We have chosen to match all cuts (3.17) except the following:
(3.18)
This selection is truly arbitrary: any choice is equally valid, without causing complications. The subtlety (and true tension) arises in the choice of the cuts that define the remaining 12 degrees of freedom of this integral. Because we have exhausted the leading singularities in (3.17), these additional cuts must leave some internal propagators uncut, and therefore have the topology of wedges. Arguably the most obvious choice for fixing the remaining degrees of freedom would be the following cuts:
(3.19)
These cuts are indeed independent and can be used to fully define the last 12 non-contact degrees of freedom of the numerators. The problem is that we must ensure that all other integrals’ contact terms vanish on the cuts being used to define the integrals in our basis. The relevant contact terms to consider in this case come from the ladders—for example, a ladder with 3 contact-term degrees of freedom having the topology of a wedge integral involving exactly the propagators appearing in (3.19):
⊃ [figure: wedgeinteg]  (3.20)
These contact terms have 3 degrees of freedom each, and cannot be made to vanish on all the 4 cuts of (3.19).
This problem is in fact unavoidable, with no obvious solution. No matter what 12 cuts (besides the 15 in (3.17)) are used to define the non-contact degrees of freedom of the integrals, they will necessarily have 4 cuts supported by wedge-type contact terms of the ladders as in (3.20). It is not hard to verify that the cuts in (3.19) lead to an over-constrained problem without a solution; and breaking cyclicity will not help. Does there exist another choice of cuts for which a solution to the over-constrained system exists?
The answer is yes: the choice presented in Appendix A does work. We do not wish to claim uniqueness of this solution to this potential obstruction, but it is worth mentioning that many other choices were tried (none of which worked). The resolution we found chooses only 3 of the 4 cuts in (3.19) for each cyclic image,
(3.21)
allowing us to fix 9 of the remaining 12 degrees of freedom of . The final 3 degrees of freedom are then fixed by lower cuts,
(3.22)
We should mention that this choice naïvely makes the problem worse, not better! Why? Because now the contact terms of the ladders, for example , which have only 3 degrees of freedom, have support on five cuts between those of (3.21) and (3.22)—three and two, respectively. Nevertheless, it can be verified by direct computation that, once the contact terms (3.20) are constrained to vanish on the three cuts (3.21), these integrals automatically vanish on the two additional cuts with the topology (3.22).
It is not hard to see that the problem we found here was unavoidable: there is no choice of cuts capable of defining the integral which does not naïvely involve more constraints (on the contact terms of the ladders) than there are available degrees of freedom. The cuts described above do enjoy the requisite magic, but we do not see why this had to work.
As described in section 2.4, if there had not been a solution to this problem, it would not have fundamentally spoiled our ability to write a closed formula for three loop amplitudes; it would merely have prevented this from being a prescriptive representation. Suppose for example that we had chosen the 'obvious' cuts to define given in (3.19). The fact that ladder integrals including could not be made to vanish on all four of the cuts in (3.19) would have meant that many coefficients from these higher terms would contribute to the one cut (of four) on which the integrals could not be made to vanish. Thus, the coefficient of this part of the integrals in our basis could not be just 'the corresponding cut in field theory', but would be the difference between the right answer in field theory and all the terms that pollute it. This would be a very-close-to prescriptive representation, but not strictly so.
Clearly, this kind of tension should become more common at higher loop orders, and it would be very interesting to know if prescriptive representations continue to exist. We expect that this tension is avoidable through two loops in more general quantum field theories, but revisiting this story for more general theories at three loops would also be interesting.
### 3.3 General Aspects of the Prescriptive Representation at Three Loops
The local integrand representation of three loop amplitude integrands in planar SYM derived here should be considered as an illustration of applying the prescriptive approach to generalized unitarity—well beyond the reach of earlier methods. Indeed, prior to this work, the only known expression valid for all multiplicity was for MHV amplitudes [146]—a formula which was obtained essentially by guessing and comparing against the results of the loop-level BCFW recursion relations [9]. The strategy we describe here seems much more general, explaining (to some extent) the surprising simplicity of loop amplitude integrands when expressed in terms of 'pure' (or close-to-pure) local Feynman integrals, as noticed in ref. [146]. Ultimately, we seek a representation of loop amplitudes at the integrand level for which there is minimal cancellation between terms. Matching singularities of field theory one-by-one seems exactly in line with this goal.
While the specific result described here has virtually no relevance to the pressing computations needed for colliders, and only limited interest even to those researchers studying planar SYM, it represents a watershed of new theoretical data in which surprising features may be found. And because the formula (3.1), when reinterpreted using on-shell functions of pure Yang-Mills, represents a correct (albeit small) part of those amplitudes, the lessons we learn from this toy model have much broader implications for perturbation theory. For this reason, we would like to address some of the interesting aspects of these amplitudes, how this representation compares with others, and how it may be refined or recast to better expose different aspects of interest.
#### Potential for Specialization and Simplification
As described at the end of section 2.3, our construction of three loop integrands was (perhaps excessively) indifferent to the possible simplifications that arise for low multiplicity or for amplitudes with fixed NMHV-degree. Although our representation (3.1) is arguably compact and general, it may not be the best representation for special classes of interesting amplitudes.
One illustration of this would be a comparison with the only previously known all-multiplicity formula, for MHV amplitudes, as described in ref. [146]. Using the notation here, that result was given as:
$A_n^{\text{MHV}} \;=\; \sum_{a\leq b} \cdots \qquad (3.23)$
We refer the reader to ref. [146] for a detailed description of these summands and the definitions of the tensor numerators defined for each integral. This representation is not incredibly different from the general expression valid for all NMHV amplitudes in (3.1). Among the most obvious differences is the fact that there is no reference to on-shell functions as coefficients. This is explained by the fact that all (non-vanishing) leading singularities of planar MHV amplitudes are identical and equal to the tree amplitude—which has been factored out in (3.23).
Another salient distinction is that not all possible leg distributions are allowed for the integrands appearing in (3.23). While many of the topologies from Table 1 are included, only those involving many, specifically-placed massless legs are used. The reason for this is related to the fact that for any fixed NMHV degree not all on-shell functions appearing as coefficients may be non-vanishing. This is especially true for small . To understand this, we should note that the NMHV-degree of an on-shell function corresponding to a graph involving amplitudes indexed by and internal lines is,
$k_\Gamma=\sum_{v}k_v+2L-(4L-n_I),\qquad(3.24)$
where $k_v$ is the NMHV-degree of each amplitude appearing in a corner. For a (non-composite) leading singularity, , and so in order for an on-shell function to be relevant to an MHV amplitude at three loops, . Because the only amplitudes for which are three-point amplitudes (for which ), this is only possible if the on-shell function involves exactly such vertices, with all other amplitudes in the diagram being MHV, with . This is the explanation for why each term in (3.23) involves (generally) six massless legs, and why these integrals were drawn with empty three-point vertices at each vertex involving a massless leg in the work of ref. [146]. Thus, the terms in (3.23) almost exactly reflect the integrals with non-vanishing coefficients—all of which are equal to the MHV tree amplitude. (Footnote 8: There is a curious exception for the wheel integral terms in (3.23) within the topology : these integrals do not support MHV leading singularities in general. As such, we expect that there is unnecessary cancellation arising in the representation (3.23), rendering it non-prescriptive.)
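To make the counting in (3.24) concrete, here is a minimal sketch in Python; the vertex degrees and propagator count below are illustrative assumptions based on the discussion above, not numbers quoted from the text.

```python
# Minimal sketch of the counting in eq. (3.24); illustrative values only.
def nmhv_degree(vertex_degrees, L, n_I):
    """k_Gamma = sum_v k_v + 2*L - (4*L - n_I)."""
    return sum(vertex_degrees) + 2 * L - (4 * L - n_I)

# Assumed example: a (non-composite) three-loop leading singularity cuts all
# 4L = 12 propagators, so n_I = 12.  Six MHV-bar three-point vertices
# (k_v = -1 each), with all other vertices MHV (k_v = 0), give k_Gamma = 0,
# i.e. support for an MHV amplitude.
print(nmhv_degree([-1] * 6 + [0] * 3, L=3, n_I=12))   # -> 0
```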
For NMHV amplitudes with , such a specialization is always possible, as not all the cut topologies described in Appendix A have support in general. However, excluding some topologies comes at the cost of enumerating cases, which we expect will tend to introduce more complexity than would be gained. An exception may be the case of , for which a restricted formulation of our general result may prove compact enough to be independently interesting. This is because NMHV amplitudes always support leading singularities (footnote 9: this is manifestly true through three loops, but we expect it to be true more generally due to the existence of a '' representation from BCFW recursion [21]), and these residues are always simple '-invariants'. Thus, we expect a representation exists for which no sub-leading cuts are required as coefficients.
#### Composite Residues and (Exponentiation of) Infrared Divergences
Another fruitful refinement of the general result may be to incorporate composite residues in order to expose the structure of infrared divergences. Our choice to not partition our representation (3.1) according to finite and divergent parts does result in a more compact representation—as we saw also at two loops in section 2.3. Because the infrared divergences of loop amplitudes are always associated with soft-collinear regions in loop-momentum space, they should directly correspond to particular composite residues. And the structure of these divergences should roughly exponentiate as described by the BDS ansatz [152].
Besides avoiding an explosion of cases to consider and fixing integrals with soft-collinear (composite) residues separately from those without them, the principal reason why we did not do this here is that we do not understand how. We do not sufficiently understand the infrared divergences of the wheel-topology integrals to represent amplitudes in a way which suggests that these divergences exponentiate from lower loop-orders. We do not believe that there is any fundamental obstruction to doing so, but we leave the construction of such a representation to future research.
#### Transcendentality at Three Loops: Iterated and Elliptic Integrals
One final aspect of loop amplitudes at three loops worth mentioning involves the appearance of (potentially) non-polylogarithmic functions, including elliptic functions of various kinds. These contributions are intensely interesting, as our understanding of them is dramatically weaker than the purely polylogarithmic transcendental functions. As such, the necessity of this more general class of functions directly challenges our understanding, making new examples in which to study them valuable.
The most concrete, unavoidable place where elliptic integrals arise in planar SYM is at two loops. As noted in ref. [153], the double-box integral involving all massive corners and at least one leg on each side of the middle propagator has no residues with maximal co-dimension—no leading singularities. This arrangement first arises for ten particles,
(3.25)
and there is a strong argument why this integral is not an artifact of the representation: there exists an all-scalar component of the NMHV superamplitude for which the entire amplitude is given by just this integral (3.25). (See ref. [119] for details.)
It is easy to verify that the scalar loop integral (3.25), on its co-dimension 7 residue cutting all propagators, results in a one-form on loop-momentum space of the form of an elliptic integral. And it is not hard to be convinced that this is not an artifact: even when expressed as a four-fold integral over Feynman parameters, it has no co-dimension four residues, implying that it cannot be expressed as an iterated '' integral by any change of variables. The clearest conclusion is that amplitudes even in planar SYM require a broader definition of transcendentality (not merely via polylogarithmic functions).
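As a much simpler, purely illustrative check of the connection between square roots of quartics and elliptic integrals (the quartic below is an assumed toy example, not an integrand from the text), one can verify numerically that a one-fold integral of $dx/\sqrt{Q(x)}$, with $Q$ a quartic with four distinct roots, reproduces a complete elliptic integral of the first kind:

```python
# Toy numerical check: dx/sqrt(quartic) integrates to the complete elliptic
# integral K(m).  The quartic and parameter below are illustrative assumptions.
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

m = 0.3                                          # assumed parameter, 0 < m < 1
Q = lambda x: (1 - x**2) * (1 - m * x**2)        # quartic in x with 4 distinct roots
value, _ = quad(lambda x: 1.0 / np.sqrt(Q(x)), 0.0, 1.0)

print(value, ellipk(m))                          # both ~ 1.7139: no change of
                                                 # variables rationalizes the root
```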
Whether or not some scattering amplitudes must be polylogarithmic to all orders—even for restricted NMHV-degrees—has long been the subject of speculation (see e.g. [95]). It would be beyond the scope of our present discussion to revisit these issues here, but we would like to point out that there is at least some evidence that even four-particle MHV amplitudes cannot be expressed using local integrals in polylogarithmic terms starting at eight loops [99]; and this fact has images at lower loops and higher multiplicity—including the ten particle example mentioned above at two loops.
While we do not want to speculate much on the implications for transcendentality of our three loop representation, there are some intriguing aspects that deserve further investigation. The most obvious new class of non-polylogarithmic functions that arises at three loops (not merely by having a sub-integral corresponding to (3.25)) is for the generally massive instance of the ladder :
(3.26)
Cutting all ten propagators of this integral results in a two-form on the remaining loop-momentum degrees of freedom, of the form
[figure: twelvepointtripleboxcut] $= \int\!\frac{dx\,dy}{\sqrt{Q(x,y)}},\qquad(3.27)$
where $Q(x,y)$ is an irreducible quartic polynomial in each variable. The precise implications of this observation for the transcendental structure of the loop integral (3.26) remain unclear, but are intensely interesting. The coefficient of has support on this co-dimension ten residue for twelve particles only for NMHV amplitudes. Indeed, there exists a scalar component for which the amplitude precisely takes the form of (3.27) on this cut—strongly suggesting that these kinds of integrals are unavoidable parts of loop amplitudes.
The final example of transcendental novelty at three loops arises in the case of the wheel integral , again for a generally massive distribution of external legs:
(3.28)
This new class of integral first arises for nine particles with coefficients supported for NMHV or NMHV amplitudes (which are parity conjugate). Unlike the case above, cutting all nine propagators in (3.28) results in a strictly rational form on the three remaining degrees of freedom. This rational three-form, however, does support co-dimension one residues of the same form as in (3.27). (This does not occur for fewer than nine external legs distributed as in (3.28).) What this implies about the integrated form of the Feynman integral (3.28), and the implications of this integral for scattering amplitudes, however, remains unclear. In particular, while the three-form resulting from cutting all nine propagators in (3.28) has a co-dimension one residue of an elliptic type, there are no co-dimension ten cuts of a nine-point amplitude which have this form. (There are no -components of the superfunctions on which this cut would be non-vanishing.) Thus, even if the integral were not polylogarithmic, this would not imply that nine-point amplitudes are not: it may merely represent an artifact of the local integrand representation.
The situation described above is reminiscent of the case of NMHV amplitudes at two loops. While it is easy to prove on general grounds that no NMHV amplitude has support on an elliptic cut at two loops, this fact need not be made manifest term-by-term in an integrand representation. Indeed, while a (prescriptive) representation which makes manifest the non-ellipticity of these amplitudes at two loops does exist, neither the representation in this work nor that in ref. [119] makes this fact manifest: the basis of integrals used has many term-wise elliptic contributions.
Whether such term-wise ellipticity is an artifact of the representation, or a necessary consequence of using a local loop expansion remains to be seen. Further investigations of these properties of loop amplitudes would be worthwhile.
## 4 Prescriptive Unitarity at One Loop For General Theories
In our review of one loop unitarity in section 2.2, we concluded with a perhaps perplexing comment about the difficulty of applying this approach to SYM. In this section, we clarify this comment, and show how a prescriptive approach can be employed—at the cost of making the power counting of the theory non-manifest. In the process, we will illustrate how the approach we describe here compares with the more traditional approach for theories with more general power counting. For the sake of illustration, we will continue to discuss theories in (strictly) four dimensions; as such, our examples here will be limited to the cut-constructible parts of dimensionally-regulated theories in four dimensions.
Recall from our discussion in section 2.2 that an over-complete basis of integrals for a theory with ultraviolet behavior dictated by would be:
(4.1)
This basis, while complete, is over-complete: a choice of independent parity-odd pentagons must be made in order to even define the coefficients of an amplitude's integrand.
Conveniently, as we saw in section 2.2, we may always without loss of generality expand the identity polynomial in terms of inverse propagators, which means that the power counting of a theory need only represent a lower bound: at the cost of introducing an arbitrary scale into the representation, we may always consider loop integrands in SYM to be expanded into integrands with the power counting of . Considering loop integrands in SYM to have this power counting is at worst a bad idea (we will see that it is not); for a more general quantum field theory, this is a necessary case for us to consider.
Therefore, let us now construct a basis of one loop integrands which scale asymptotically as for large loop momenta. Clearly, any integrand in our basis must have at least three propagators; and—as always—Passarino-Veltman reduction allows us to focus our attention on those with at most five propagators. Thus, we may naïvely have an over-complete basis of scalar triangles, tensor boxes, and pentagons involving two powers of inverse propagators in their numerator.
Box integrals with a single inverse propagator in the numerator have 6 degrees of freedom, which cleanly separate (using the “chiral” basis of (2.18)) into 2 non-contact (‘chiral’) degrees of freedom, and 4 contact terms. And following the argument at the end of section 2.3, it is easy to see that pentagon integrals with two inverse propagators can be entirely decomposed into contact terms: . Thus, a complete basis of integrals consistent with this power counting would consist of just ‘chiral’ boxes, with two (non-contact) degrees of freedom each, and scalar triangles with one degree of freedom each:
(4.2)
Conveniently, it turns out that (for ), this basis is not over-complete. To see this, we imagine combining all integrals over a common denominator and observe that this power counting requires powers of inverse propagators in the numerator; according to (2.11), this space of numerators has dimension ; and the basis (4.2) consists of , which matches the correct counting (for ).
In terms of this basis, a prescriptive representation is easy to construct. The triangle integrals have only one degree of freedom, and therefore should be defined in order to match field theory at an arbitrary point along the triple-cut involving the three propagators. The 2 non-contact degrees of freedom of each chiral box can be chosen to match field theory on the two box-type leading singularities (see (2.8)), and their 4 contact-term degrees of freedom should be chosen to vanish at the arbitrary points where the triangles are defined. Thus, the cuts which define the integrals in (4.2) together with their coefficients would be, respectively:
(4.3)
This prescriptive representation of one loop amplitudes with worse-than-SYM power-counting is in fact very close to the prescriptive representation derived in ref. [118]. The principal distinction between the discussion above and the result described in ref. [118] is that composite residues were used to fix the coefficients of the triangle integrals (instead of arbitrary points here). Also, in ref. [118], spurious propagators were included in every integral of the basis—trading the wrong power counting for non-manifest dual conformal invariance of the result. This may or may not be the best representation of integrands in SYM—as the power-counting of the theory is rendered non-manifest; but it does correctly capture any quantum field theory bounded by this degree of divergence in the ultraviolet.
The worst power counting of any four-dimensional quantum field theory without tadpoles would be . As before, we may without any loss of generality consider any theory with better power counting to be included in this case. As such, it captures the cut-constructible part of any quantum field theory at one loop (including those with better ultraviolet behavior).
Following the logic which should now be familiar, it is clear that we may expand any integral into those with two, three, four, or five propagators, with , , , and degrees of freedom in the numerator for each. The degrees of freedom of the general triangle integral split into non-contact terms and contact terms according to the basis (3.3), and the 20 degrees of freedom of the boxes split into non-contact and contact degrees of freedom. Following the same discussion as for three loops, the degrees of freedom of a degree-three tensor product of inverse propagators for an integral with external propagators can be entirely decomposed into contact terms:
|
Measurement of the low-energy antideuteron inelastic cross section
The ALICE collaboration
Phys.Rev.Lett. 125 (2020) 162001, 2020.
Abstract
In this Letter, we report the first measurement of the inelastic cross section for antideuteron-nucleus interactions at low particle momenta, covering a range of $0.3 \leq p < 4$ GeV/$c$. The measurement is carried out using p-Pb collisions at a center-of-mass energy per nucleon-nucleon pair of $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV, recorded with the ALICE detector at the CERN LHC and utilizing the detector material as an absorber for antideuterons and antiprotons. The extracted raw primary antiparticle-to-particle ratios are compared to the results from detailed ALICE simulations based on the GEANT4 toolkit for the propagation of antiparticles through the detector material. The analysis of the raw primary (anti)proton spectra serves as a benchmark for this study, since their hadronic interaction cross sections are well constrained experimentally. The first measurement of the inelastic cross section for antideuteron-nucleus interactions averaged over the ALICE detector material with atomic mass numbers $\langle A \rangle$ = 17.4 and 31.8 is obtained. The measured inelastic cross section points to a possible excess with respect to the Glauber model parameterization used in GEANT4 in the lowest momentum interval of $0.3 \leq p < 0.47$ GeV/$c$ up to a factor 2.1. This result is relevant for the understanding of antimatter propagation and the contributions to antinuclei production from cosmic ray interactions within the interstellar medium. In addition, the momentum range covered by this measurement is of particular importance to evaluate signal predictions for indirect dark-matter searches.
• #### Table 1
Figure 1 (left)
10.17182/hepdata.96844.v1/t1
Raw primary antiproton-to-proton ratio as a function of the momentum p_primary.
• #### Table 2
Figure 1 (left)
10.17182/hepdata.96844.v1/t2
Raw primary antiproton-to-proton ratio from Geant4-based MC simulations as a function of the momentum p_primary.
• #### Table 3
Figure 1 (right)
10.17182/hepdata.96844.v1/t3
Raw primary antideuteron-to-deuteron ratio as a function of the momentum p_primary.
• #### Table 4
Figure 1 (right)
10.17182/hepdata.96844.v1/t4
Raw primary antideuteron-to-deuteron ratio from Geant4-based MC simulations as a function of the momentum p_primary.
• #### Table 5
Figure 2
10.17182/hepdata.96844.v1/t5
Raw primary antiproton-to-proton ratio from Geant4-based MC simulations as a function of the momentum p_primary with default sigma_inel(pbar).
• #### Table 6
Figure 2
10.17182/hepdata.96844.v1/t6
Raw primary antiproton-to-proton ratio from Geant4-based MC simulations as a function of the momentum p_primary with sigma_inel(pbar)x1.25.
• #### Table 7
Figure 2
10.17182/hepdata.96844.v1/t7
Raw primary antiproton-to-proton ratio from Geant4-based MC simulations as a function of the momentum p_primary with sigma_inel(pbar)x0.75.
• #### Table 8
Figure 2
10.17182/hepdata.96844.v1/t8
Raw primary antiproton-to-proton ratio from Geant4-based MC simulations as a function of the momentum p_primary (exp. data).
• #### Table 9
Figure 3 (a)
10.17182/hepdata.96844.v1/t9
Inelastic interaction cross section of antiprotons on an averaged material element of the ALICE detector (1 sigma constraints).
• #### Table 10
Figure 3 (a)
10.17182/hepdata.96844.v1/t10
Inelastic interaction cross section of antiprotons on an averaged material element of the ALICE detector (2 sigma constraints).
• #### Table 11
Figure 3 (b)
10.17182/hepdata.96844.v1/t11
Inelastic interaction cross section of antiprotons on an averaged material element of the ALICE detector (1 sigma constraints).
• #### Table 12
Figure 3 (b)
10.17182/hepdata.96844.v1/t12
Inelastic interaction cross section of antiprotons on an averaged material element of the ALICE detector (2 sigma constraints).
• #### Table 13
Figure 3 (c)
10.17182/hepdata.96844.v1/t13
Inelastic interaction cross section of antideuterons on an averaged material element of the ALICE detector (1 sigma constraints).
• #### Table 14
Figure 3 (c)
10.17182/hepdata.96844.v1/t14
Inelastic interaction cross section of antideuterons on an averaged material element of the ALICE detector (2 sigma constraints).
• #### Table 15
Figure 3 (d)
10.17182/hepdata.96844.v1/t15
Inelastic interaction cross section of antideuterons on an averaged material element of the ALICE detector (1 sigma constraints).
• #### Table 16
Figure 3 (d)
10.17182/hepdata.96844.v1/t16
Inelastic interaction cross section of antideuterons on an averaged material element of the ALICE detector (2 sigma constraints).
|
# Integro-Differential Equations
I was attempting to solve the following integro-differential equation using convolutions. My answer also had a convolution in it, which did not seem right, and I was wondering if someone could check my process.
Problem with initial work
My final solution
$$y'(t)=1-\int_0^t y(t-\tau ) \exp (-2 \tau ) \, d\tau$$ Laplace Transform: $$s \left(\mathcal{L}_t[y(t)](s)\right)-y(0)=\frac{1}{s}-\frac{\mathcal{L}_t[y(t)](s)}{2+s}$$
We have $y(0)=1$ and solve for $\mathcal{L}_t[y(t)](s)$:
$$\mathcal{L}_t[y(t)](s)=\frac{2+s}{s (1+s)}$$ $$\mathcal{L}_t[y(t)](s)=\frac{2}{s}-\frac{1}{1+s}$$
$$y(t)=2-e^{-t}$$
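A quick symbolic cross-check of this result (a sketch using SymPy; not part of the original working) confirms that $y(t)=2-e^{-t}$ satisfies both the equation and the initial condition:

```python
import sympy as sp

t, tau = sp.symbols('t tau', nonnegative=True)
y = 2 - sp.exp(-t)                                   # candidate solution

lhs = sp.diff(y, t)                                  # y'(t)
rhs = 1 - sp.integrate(y.subs(t, t - tau) * sp.exp(-2 * tau), (tau, 0, t))

print(sp.simplify(lhs - rhs))                        # 0, so the ODE is satisfied
print(y.subs(t, 0))                                  # 1, matching y(0) = 1
```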
|
# derivation of Euler-Lagrange differential equation (elementary)
Let $[e,c]$ be a finite subinterval of $(a,b)$. Let the function $h\colon\mathbb{R}\to\mathbb{R}$ be chosen so that a) $h$ is twice differentiable, b) $h(t)=0$ when $t\notin[e,c]$, c) $h(t)>0$ when $t\in(e,c)$, and d) $\int_{e}^{c}h(t)\,dt=1$.
Choose $f(\lambda,t)=q(t)+\lambda h(t)$. It is easy to see that this function satisfies the requirements for $f$ laid out in the main entry. Then, we can write
$g(\lambda,x)=\int_{a}^{b}L(t,q(t)+\lambda h(t),\dot{q}(t)+\lambda\dot{h}(t))\,dt$
Let us split the integration into three parts — the integral from $a$ to $e$, the integral from $e$ to $c$, and the integral from $c$ to $b$. By the way $h$ was chosen, the integrand reduces to $L(t,q(t),\dot{q}(t))$ when $t\in(a,e)$ or $t\in(c,b)$. Hence the pieces of the integral over the intervals $(a,e)$ and $(c,b)$ do not depend on $\lambda$ and we have
${dg\over d\lambda}={d\over d\lambda}\int_{e}^{c}L(t,q(t)+\lambda h(t),\dot{q}(t)+\lambda\dot{h}(t))\,dt$
Since $[e,c]$ is closed and bounded, it is compact. By our assumption, the derivative of the integrand is continuous. Since continuous functions on compact sets are uniformly continuous, the derivative of the integrand is uniformly continuous. This implies that it is permissible to interchange differentiation and integration:
${dg\over d\lambda}=\int_{e}^{c}{d\over d\lambda}L(t,q(t)+\lambda h(t),\dot{q}(t)+\lambda\dot{h}(t))\,dt$
Using the chain rule (several variables) and setting $\lambda=0$, we have
${dg\over d\lambda}\bigg{|}_{\lambda=0}=\int_{e}^{c}\left[h(t){\partial L\over\partial q}(t,q(t),\dot{q}(t))+\dot{h}(t)\frac{\partial L}{\partial\dot{q}}(t,q(t),\dot{q}(t))\right]dt$
Integrating by parts and using the fact that $h$ was chosen so as to vanish at the endpoints $e$ and $c$, we find that
${dg\over d\lambda}\bigg{|}_{\lambda=0}=\int_{e}^{c}h(t)\left[{\partial L\over\partial q}(t,q(t),\dot{q}(t))-{d\over dt}\left(\frac{\partial L}{\partial\dot{q}}(t,q(t),\dot{q}(t))\right)\right]dt=\int_{e}^{c}h(t)\,EL(t)\,dt$
(The last equals sign defines $EL$ as the quantity in the brackets in the first integral.)
I claim that requiring $dg/d\lambda=0$ for all finite intervals $[e,c]\subset(a,b)$ implies that $EL(t)$ must equal zero for all $t\in(a,b)$. By our assumptions, $EL$ is a continuous function. Hence, for every $t_{0}\in(a,b)$ and every $\epsilon>0$, there must exist an interval $[e,c]\subset(a,b)$ such that $t_{0}\in[e,c]$ and such that $t_{1}\in[e,c]$ implies $|EL(t_{0})-EL(t_{1})|<\epsilon$. Therefore,
$\left|\,{dg\over d\lambda}\bigg{|}_{\lambda=0}-EL(t_{0})\right|=\left|\int_{e}^{c}h(t)\left(EL(t)-EL(t_{0})\right)\,dt\right|<\epsilon\left|\int_{e}^{c}h(t)\,dt\right|=\epsilon$
Since this must be true for all $\epsilon>0$, it follows that $EL(t_{0})=0$ for all $t_{0}\in(a,b)$. In other words, $q$ satisfies the Euler-Lagrange equation.
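For readers who want a computational cross-check of the result, here is a small sketch using SymPy's euler_equations on an example Lagrangian (the harmonic oscillator, chosen purely for illustration):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
q = sp.Function('q')

# Example Lagrangian: L = q'^2/2 - q^2/2 (harmonic oscillator)
L = sp.Rational(1, 2) * sp.diff(q(t), t)**2 - sp.Rational(1, 2) * q(t)**2

# Returns the Euler-Lagrange equation dL/dq - d/dt(dL/dq') = 0
print(euler_equations(L, [q(t)], [t]))   # [Eq(-q(t) - Derivative(q(t), (t, 2)), 0)]
```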
|
# A clear sense of basics of physics
1. Apr 23, 2015
### AshUchiha
Okay, as suggested by one of the 'Staff: Mentors', I have been advised to get a clear sense of "mass", "weight", "speed", "velocity", "force", and "acceleration" and the mathematical relationships between them. I know all of them, but my understanding is not accurate/exact.
P.S.: Of course, I can just Google it, but there's a saying that "Your friends can teach you better than anyone else". And yes, please try to simplify your language while not watering down its scientific meaning.
2. Apr 23, 2015
### rootone
That is why math is the preferred language for science.
It makes the same sense whatever may be your cultural background.
3. Apr 23, 2015
### ellipsis
Qualitative fundamentals of Newtonian mechanics
Mass is an intrinsic property all objects have - it is simply how much matter is encapsulated by the object you're talking about. It is a single number, by standard in units of kg, or kilograms. In this system, mass cannot be created nor destroyed.
Weight is different from mass. Indeed, an object on Earth weighs differently than that same object on the Moon does, even though it's still comprised of the same amount of matter. The weight of an object is simply the force exerted by gravity. Because it is a force, it has units of N, or newtons.
Speed is the rate at which an object moves. It is a positive number, with zero being no movement, with increasingly higher numbers corresponding to increasingly faster speeds. It is in terms of the m/s unit, or "meters per second". This is easier to understand if you've studied calculus, which is the study of rates of change. Briefly, the meter is the standard unit of length, and the second is the standard unit of time.
Velocity is similar to speed, but it is a vector, rather than a positive number, which is called a scalar. Vectors are one or more real numbers which specify a direction, as well as an 'amount', or magnitude. Vectors can either be coordinates, or speeds with an angle, depending on your model, and depending on how many spatial dimensions are involved. In terms of calculus, velocity is the first derivative of position.
Force is what pushes and pulls objects. Every force is exerted by some object. Furthermore, if object A exerts a force F on object B, then object B exerts that same force, but in the opposite direction, on A. Because of this fact, forces are described as being "equal, but opposite". This means that while the Earth exerts a gravitational force on you, pulling you down, you are exerting an equal gravitational force on the Earth, pulling it upward. As well as gravitational force, there is also tension, friction, and a variety of other different types.
Acceleration is the speed at which speed changes. When you accelerate a vehicle, you increase its speed (or velocity). When you decelerate, you decrease its speed. In this way, if you are speeding down a highway, but your speed is constant, then your acceleration is zero. This is in units of m/s^2, which is read as "meters per second squared". In terms of calculus, acceleration is the second derivative of position, and the first derivative of velocity.
Mathematical relationship of the above physical properties
The note above regarding the gravitational force you exert on the Earth seems very counter-intuitive. The fact is, the acceleration a force causes is dependent on the mass of the object, in the following way, where $F_T$ is total force exerted on an object, $m$ is mass, and $a$ is acceleration.
$$F_T = ma$$
You can use simple algebra to find that $a = \frac{F}{m}$. The gravitational acceleration you cause on the Earth is negligibly tiny, because the mass of the Earth is large, because of the inverse relationship I just described.
The gravitational force itself, when you are on the surface of Earth, obeys:
$$F_g = mg$$
where g is in this case approximately -9.81 m/s². Based on these two relationships, you can derive the fact that two objects with different mass on the surface of Earth accelerate downwards at the same rate: namely, $g$.
More generally, the gravitational force between two objects follows the inverse-square law, where $G$ is the universal gravitational constant, $M$ is the mass of the other object, $m$ is the mass of the current object, and $d$ is the distance between those two objects.
$$F_g = G\frac{Mm}{d^2}$$
There's a bunch of other relationships, especially when it comes to the results of calculus, and the four "kinematic equations". Here's a few:
Speed is the absolute value of velocity:
$$s = |v|$$
When acceleration is constant, the final velocity is equal to the initial velocity plus the time elapsed multiplied by the acceleration.
$$v_f = v_i+at$$
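If it helps to see those two relationships in code, here is a tiny sketch with made-up numbers (not from any specific problem):

```python
# F = m*a and v_f = v_i + a*t with made-up numbers, just to show the arithmetic.
m = 2.0            # mass in kg
F = 10.0           # net force in N
a = F / m          # acceleration in m/s^2 -> 5.0

v_i = 3.0          # initial velocity in m/s
t = 4.0            # elapsed time in s (acceleration assumed constant)
v_f = v_i + a * t  # final velocity -> 23.0 m/s
print(a, v_f)
```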
What (I think) Newtonian mechanics is about
At the center of it, the kinematics side of Newtonian mechanics is just a mathematical tool to model and analyze certain physical situations under certain idealized rules. That is to say, the rules of Newtonian mechanics change based on what you're attempting to model.
I wrote this more for me, not for you, I'm happy to say. Best way to learn something is to teach it. Good luck.
4. Apr 24, 2015
### AshUchiha
P.S. And yes, more for yourself.
5. Apr 25, 2015
### PWiz
Distance is the actual path taken by an object. It is different from displacement, which is the shortest path between two points.
This is slightly incorrect because of your terminological usage - acceleration is the rate at which velocity changes, and is a vector quantity itself.
I think it's very important to stress the importance of the Galilean principle of relativity here. Newtonian mechanics has its roots there (Newton's laws hold in all inertial frames of reference - in other words, non-accelerating points of view). It would then help to read about Newton's three laws of motion (the first goes hand in hand with the Galilean principle, and the second one defines what "force" is - more on that later), and understand what it all means qualitatively. I trust that you're well aware of the 7 fundamental quantities? ellipsis has explained a good deal on one of them, and you would benefit from reading more about conservation laws and the other 6 fundamental quantities on the Internet to clear up some of your confusion (the post would become too long if I explain them here).
Newtonian mechanics can be classified into kinematics and dynamics. Kinematics is the study of motion - angular velocity, acceleration, etc. Dynamics is the study of forces and energy. Both of them are linked together beautifully using Maths, or more specifically, calculus. All the non-fundamental quantities can be derived from the fundamental ones using calculus. A small bit of info on mass (just adding on ellipsis' comprehensive explanation): there are two kinds of mass. One is called inertial mass which is given by $\frac{F}{a}$ (Newton's second law), and the second is gravitational mass, responsible for producing gravitational effects. Coincidentally, the two of them are equal to each other, and mass can be left alone until one tackles Einstein's theory of relativity.
Another point - $F=ma$ is a simplified version of Newton's 2nd law. To understand it, you need to understand momentum. Again, one can write books on each one of these concepts, so I'll leave the figuring up to you. All that can be said is that force is the rate at which momentum changes.
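A one-line symbolic sketch (SymPy, assuming constant mass) shows how F = dp/dt reduces to F = ma:

```python
import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)   # mass assumed constant
v = sp.Function('v')

p = m * v(t)                         # momentum p = m*v
print(sp.diff(p, t))                 # m*Derivative(v(t), t), i.e. F = m*a
```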
It is also worthwhile to mention that most of the quantities described by ellipsis have linear and rotational "versions." Here's a link which explains it well - http://hyperphysics.phy-astr.gsu.edu/hbase/mi.html
Finally, you should understand that the vector quantities that you mentioned in the OP (force, velocity, weight and acceleration) can be expressed geometrically in Euclidean space, and all these concepts carry over very smoothly over there as well.
The problem here is that the question is too broad - most introductory Physics textbooks spend at least 100-150 pages trying to explain these concepts. You can always ask more directed questions as separate threads on the forums, and I'm sure many will be there to help you out ;)
Last edited: Apr 25, 2015
6. Apr 25, 2015
### ellipsis
I know that, pedant.
Has nothing to do with Newtonian mechanics. Normal relativity (the fact that velocity and position are invariant) does, sure. But they're different.
You also mention "seven fundamental quantities", which was actually new to me. I looked it up, and found that some say they're not so fundamental after all, considering the Planck units.
Finally, what are the odds that we're all Naruto fans? I'm currently reading Reload.
7. Apr 25, 2015
### PWiz
Which is why I didn't explore it in my post. Btw, the question isn't specifically about Newtonian mechanics, so I only thought it would be fair to provide some up-to-date references...
That's weird. My HS Physics textbook has them listed on the first page (Length, rest mass, time, amount of matter, electric current, luminosity and thermodynamic temperature)! Planck units are simply units of measurement which are based on fundamental constants. I can't see what they have to do with fundamental quantities
Hahaha, physics does that to you, it really does
8. Apr 25, 2015
### ellipsis
Bah, you didn't reply to the most interesting part of my response.
9. Apr 25, 2015
### AshUchiha
What exactly do you mean by "actual path" . And you used the word "taken" , seems a bit......uneasy for me to digest {That's an idiom}. Would it be right to rearrange your definition to "The total path traveled by an object"
10. Apr 25, 2015
### PWiz
It means the same thing, since "taken" and "traveled" are synonymous here. What's important is that the reference to causality should not be lost in the definition.
11. Apr 25, 2015
### AshUchiha
But "traveled" would again be wrong I guess, because if path traveled comes under distance, then what does distance traveled means? Is it just an adjective?
12. Apr 25, 2015
### PWiz
Everyday language is framed very loosely. Talking in specific terms, distance is the numerical quantity which signifies the "length" of a path. A path is the trace of the actual motion of an object. They are two different things. When you travel along a path, you cover distance. There really isn't much more to this.
13. Apr 25, 2015
### AshUchiha
Ellipsis, loved the way you explained. But I wanted definitions; of course your explanation is awesome, but you know I wanted the definitions in a simple way while not dishonoring their scientific meaning. Still, I learned a lot from you, sir. Anyway, I've always been confused by the statement that every action has an equal and opposite reaction.
Let's say take an example of badminton.
QUESTION 1
Ball comes and hits the bat.
Mass of the ball=m
Mass of the bat=M
Force exerted by ball= ƒ
Force exerted by the bat=F
Every action has an equal and opposite reaction (---1---). P.S. {here the word "action" is used instead of the word "force"; it's more than what meets the eye}.
ƒ=F right? (Since, ---1--- is correct).
But if that's so, the ball should stay there only. How does it move when we hit it?
QUESTION 2
Also, our Earth and us. The Earth applies a force on us, so we apply an equal and opposite force to it. If that's so, then why are we attracted towards it? Why don't we just fly in the air, since we always apply an equal force on it?
QUESTION 3
If we are on Earth, we will experience other planet's gravitation too right? So if we are on space, aren't we supposed to be attracted towards sun and fall for it?
QUESTION 4
{Okay this question is bit out of the topic} , Our Earth moves very fast, but why don't we experience such a fast motion??
14. Apr 25, 2015
### rootone
1. Some of the kinetic energy of the bat is transferred as kinetic energy of the ball.
2. Gravity is exerting a pull on you and the ground is pushing back with equal force, so you stay where you are.
3. Gravity of distant objects is tiny compared to that of the Earth which you are standing on.
4. Because you are not in motion relative to the Earth, only in relation to some external reference frame, and you can't be existing in a reference frame external to yourself.
15. Apr 25, 2015
### PWiz
A1 and A2: Action and reaction are equal and opposite forces, but they act on different objects and so do not cancel each other out.
A3: Of course you do. You're even experiencing the incredibly tiny amount of gravitational force pulling you towards a black hole thousands of light years away, but the effect at such distances is so small that it is negligible. The attractive force between two objects is not only dependent on their masses, but on the separation between them as well, so the net force on you will clearly be affected by the dominant factor in your local vicinity. If you are in space, you will move wherever the net force takes you (the small distance between you and some nearby planet may outweigh the difference in mass of the Sun and the planet and pull you to its surface and not towards the Sun).
Constant velocity in itself can't be "felt" - only acceleration/force can. The rotational motion of the Earth generates something known as the Coriolis force (constant angular velocity does not mean that the linear velocity is constant). The Earth does not have an angular velocity high enough to produce a noticeable "dragging" force on us. It's interesting to note that if for some reason the Earth were made to spin faster, you would "weigh" less, and you would be weightless at one particular angular velocity. Anything more than this and you'd be flung away from the Earth.
16. Apr 25, 2015
### Cruz Martinez
This definition of mass is too simple. It is true that mass is an intrinsic property of an object. But to define it appropriately you need to look at the law of inertia.
The law of inertia is the basis for a correct and quantitatively unambiguous definition of the concept of mass.
17. Apr 25, 2015
### ellipsis
First of all, these are all wonderful questions. And, I guarantee, we all asked these questions too.
What actually moves the ball is acceleration, not force. Let's say the bat's mass is 5kg, and the ball's mass is .25kg, and the force exerted by the bat on the ball is 25N.
$$F_{bat->ball} = 25N$$
$$F_{ball->bat} = -25N$$
By Newton's second law, $F=ma$ :
$$F_{ball} = m_{ball}a_{ball}$$
$$F_{bat} = m_{bat}a_{bat}$$
We solve this for the acceleration of the ball and bat:
$$a_{ball} = \frac{F_{ball}}{m_{ball}}$$
$$a_{bat} = \frac{F_{bat}}{m_{bat}}$$
We know the forces involved, and we know the masses, so we can find the acceleration:
$$a_{ball} = \frac{25}{.25} = 100 \frac{m}{s^2}$$
$$a_{bat} = \frac{-25}{5} = -5 \frac{m}{s^2}$$
So, you see, the motion of each object is different, even though the force is the same, because they have different masses.
This is the same question as question 1.
My mass is 59kg. To find the force of gravity between me and the Earth, we use the equation I used above:
$$F_g = -mg$$
$$F_g = -(59)(9.81) = -578.79 N$$
Now, because of Newton's third law, we know that the force I exert on the Earth is 578.79N... Note the missing negative sign: The force is equal, but opposite.
Now, using Newton's second law, we can calculate the gravitational acceleration me and the Earth end up getting.
$$a_{me} = \frac{-578.79}{59} = -9.81 \frac{m}{s^2}$$
$$a_{Earth} = \frac{578.79}{5972198600000000000000000} = .00000000000000000000009691 \frac{m}{s^2}$$
That's small.
The International Space Station is 407120 meters above the surface of the Earth. The Earth's radius is 6367444 meters. The distance the ISS has from the center of the Earth is the sum of these two values, which is 6774564 meters.
The mass of the Earth is of course 5972198600000000000000000 kg.
The universal gravitational constant, G, is .0000000000667.
We can use these numbers with Newton's law of universal gravitation, which I defined earlier:
$$F_g = G\frac{M_{ISS}M_{Earth}}{d^2}$$
Since we only care about the ISS's acceleration of gravity, we can factor out the mass of the ISS itself.
$$a_g = -G\frac{M_{Earth}}{d^2} = -(.0000000000667)\frac{5972198600000000000000000}{6774564^2} = -8.68 \frac{m}{s^2}$$
That's not anywhere close to zero. In fact, it seems the astronauts are still quite affected by Earth's gravity. It seems paradoxical. For an intuitive understanding, I invite you to play Kerbal Space Program. =^)
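The same arithmetic in a few lines of code, using the round numbers quoted above (they are approximate, so the last digits should not be taken too seriously):

```python
# Gravitational acceleration at the ISS's orbital radius from the numbers above.
G = 6.67e-11             # universal gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.9722e24      # kg
r = 6367444 + 407120     # Earth radius + ISS altitude, in meters

a_g = G * M_earth / r**2
print(a_g)               # ~ 8.68 m/s^2, still close to the 9.81 at the surface
```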
You're also going very fast on an airplane, or on a highway, and you don't experience that fast motion, do you? It's Newton's first law: Whatever is in motion, stays in motion. Inertia.
We can only "experience" acceleration, in the way you describe. Indeed, when you speed up in a car, or lift off in an airplane, you're pushed to the back of you seat.
And of course, we do "experience" the gravitational and rotational acceleration of the Earth in the form of the tides and the precession of the Foucault pendulum.
18. Apr 25, 2015
### Cruz Martinez
Velocity and position are not invariant quantities in Einsteinian relativity nor in Galilean relativity.
|
# Recover messed up partition and MBR
I did something really foolish. I intended to overwrite the MBR of my USB disk and ran the following commands.
sudo dd if=./boot0 of=/dev/sdb bs=1b count=1
sudo dd if=./boot1h of=/dev/sdb2
Now I cannot list the partitions (6 partitions) on sdb. Any pointers? I have Ubuntu on /dev/sda.
install-mbr /dev/sdb
That didn't help either.
2022-06-07 14:30:33
I suggest you give gparted a try (if you don't have it, go ahead and install it first). It lets you see your partitions if they are still there. Now, what exactly do you want to do?

Is all you want to make the USB usable again? That would be easy: just open gparted, delete all the partitions, and create new ones.

Or do you want to keep your partitions? Personally I don't think that is possible, because you have already wiped your partition table. What you can try is to back up the remaining data on those partitions (dd did not delete any data, apart from the small piece at the start of your partitions that was overwritten to make room for the two files). Then format the USB and restore the files.

As far as I know, don't use dd unless you are backing up or restoring a whole partition.
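If it is useful to see why only the partition table was lost while the file data survived, here is a minimal sketch that parses the four primary entries of an MBR. The device path is only an example (reading it needs root, and a saved image works just as well), and logical partitions inside an extended partition are stored elsewhere on the disk, so only the primary slots show up here.

```python
# Minimal MBR reader: the partition table is 4 x 16-byte entries at offset 446
# of sector 0, followed by the 0x55AA signature.  Overwriting sector 0 (as the
# dd command above did) wipes exactly this table; the partitions' contents are
# untouched, which is why recovery tools can still find them.
import struct

with open("/dev/sdb", "rb") as disk:        # example path; an image file also works
    mbr = disk.read(512)

print("boot signature ok:", mbr[510:512] == b"\x55\xaa")
for i in range(4):
    entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
    ptype = entry[4]
    lba_start, n_sectors = struct.unpack_from("<II", entry, 8)
    print(f"entry {i}: type=0x{ptype:02x} start_lba={lba_start} sectors={n_sectors}")
```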
|
# TOTEM Papers
Latest entries:
2014-10-13
10:02
Measurement of the forward charged particle pseudorapidity density in pp collisions at √s = 8 TeV using a displaced interaction point / TOTEM Collaboration The pseudorapidity density of charged particles dN$_{ch}$/d$\eta$ is measured by the TOTEM experiment in pp collisions at √s = 8 TeV within the range 3.9 < $\eta$ < 4.7 and −6.95 < $\eta$ < −6.9. [...] arXiv:1411.4963 ; CERN-PH-EP-2014-260. - 2014. - 9 p. Preprint - Draft (restricted) - arXiv Preprint - Full text
2014-05-06
06:35
Measurement of pseudorapidity distributions of charged particles in proton-proton collisions at $\sqrt{s}$ = 8 TeV by the CMS and TOTEM experiments / CMS and TOTEM Collaborations Pseudorapidity ($\eta$) distributions of charged particles produced in proton-proton collisions at a centre-of-mass energy of 8 TeV are measured in the ranges abs($\eta$) < 2.2 and 5.3 < abs($\eta$) < 6.4 covered by the CMS and TOTEM detectors, respectively. The data correspond to an integrated luminosity of 45 inverse microbarns. [...] arXiv:1405.0722; CMS-FSQ-12-026; CERN-PH-EP-TOTEM-2014-002; CERN-PH-EP-2014-063.- Geneva : CERN, 2014 - 36 p. - Published in : Eur. Phys. J. C 74 (2014) 3053 Springer Open Access article: PDF; External link: Preprint
2014-04-06
10:05
LHC Optics Measurement with Proton Tracks Detected by the Roman Pots of the TOTEM Experiment / TOTEM collaboration Precise knowledge of the beam optics at the LHC is crucial to fulfil the physics goals of the TOTEM experiment, where the kinematics of the scattered protons is reconstructed with the near-beam telescopes -- so-called Roman Pots (RP). Before being detected, the protons' trajectories are influenced by the magnetic fields of the accelerator lattice. [...] arXiv:1406.0546; CERN-PH-EP-2014-066.- Geneva : CERN, 2014 - 17 p. - Published in : New J. Phys. 16 (2014) 103041 Draft (restricted): PDF; Fulltext: CERN-PH-EP-2014-066_3 - PDF; arXiv:1406.0546 - PDF; External link: Preprint
2013-09-07
19:39
Performance of the Totem Detectors at the LHC / TOTEM Collaboration The TOTEM Experiment is designed to measure the total proton-proton cross-section with the luminosity-independent method and to study elastic and diffractive pp scattering at the LHC. To achieve optimum forward coverage for charged particles emitted by the pp collisions in the interaction point IP5, two tracking telescopes, T1 and T2, are installed on each side of the IP in the pseudorapidity region 3.1 $\le |\eta| \le$6.5, and special movable beam-pipe insertions – called Roman Pots (RP) – are placed at distances of ±147m and ±220m from IP5. [...] arXiv:1310.2908; CERN-PH-EP-2013-173.- Geneva : CERN, 2013 - 34 p. - Published in : Int. J. Mod. Phys. A 28 (2013) 1330046 Draft (restricted): PDF; Fulltext: PDF; External link: Preprint
2013-08-26
12:36
Double diffractive cross-section measurement in the forward region at LHC / TOTEM Collaboration The first double diffractive cross-section measurement in the very forward region has been carried out by the TOTEM experiment at the LHC with center-of-mass energy of √s = 7 TeV. By utilizing the very forward TOTEM tracking detectors T1 and T2, which extend up to pseudorapidity |$\eta$|=6.5, a clean sample of double diffractive pp events was extracted. [...] arXiv:1308.6722; CERN-PH-EP-2013-170.- Geneva : CERN, 2013 - 5 p. - Published in : Phys. Rev. Lett. 111 (2013) 262001 APS Open Access article: PDF; Draft (restricted): PDF; Fulltext: PDF;
2012-11-23
18:53
Luminosity-Independent Measurement of the Proton-Proton Total Cross Section at $\sqrt{s}$ = 8 TeV / TOTEM Collaboration TOTEM has measured the proton-proton total cross-section at $\sqrt{s}$ = 8 TeV using a luminosity independent method. In LHC fills with dedicated beam optics, the Roman Pots have been inserted very close to the beam allowing the detection of 90% of the nuclear elastic scattering events. [...] TOTEM-2012-005; CERN-PH-EP-2012-354.- Geneva : CERN, 2013 - 8 p. - Published in : Phys. Rev. Lett. 111 (2013) 012001 APS Open Access article: PDF; Draft (restricted): PDF; Fulltext: PDF;
2012-11-23
18:27
Luminosity-independent measurements of total, elastic and inelastic cross-sections at $\sqrt{s}$ = 7 TeV / TOTEM Collaboration The TOTEM experiment at the LHC has performed the first luminosity-independent determination of the total proton-proton cross-section at $\sqrt{s}$ = 7 TeV. [...] TOTEM-2012-004 ; CERN-PH-EP-2012-353. - 2012. - 7 p. Full text - Draft (restricted)
2012-11-23
17:51
Measurement of proton-proton inelastic scattering cross-section at $\sqrt{s}$= 7 TeV / TOTEM Collaboration The TOTEM experiment at the LHC has measured the inelastic proton-proton cross-section at $\sqrt{s}$ = 7 TeV in a β* = 90 m run with low inelastic pile-up [...] TOTEM-2012–003 ; CERN-PH-EP-2012-352. - 2012. - 11 p. Full text - Draft (restricted)
2012-08-14
10:16
Measurement of proton-proton elastic scattering and total cross-section at $\sqrt{s}$ = 7 TeV / TOTEM collaboration At the LHC energy of $\sqrt{s}$ = 7 TeV, under various beam and background conditions, luminosities, and Roman Pot positions, TOTEM has measured the differential cross-section for proton-proton elastic scattering as a function of the four-momentum transfer squared t. The results of the different analyses are in excellent agreement demonstrating no sizeable dependence on the beam conditions. [...] TOTEM-2012-002; CERN-PH-EP-2012-239.- Geneva : CERN, 2013 - 12 p. - Published in : EPL 101 (2013) 21002 Draft (restricted): PDF; Fulltext: PDF; IOP Open Access article: PDF;
2012-04-04
11:29
Measurement of the forward charged particle pseudorapidity density in pp collisions at $\sqrt{s}$ = 7 TeV with the TOTEM experiment / TOTEM Collaboration The TOTEM experiment has measured the charged particle pseudorapidity density dN$_{ch}$/d$\eta$ in pp collisions at $\sqrt{s}$ = 7 TeV for 5.3<|$\eta$|<6.4 in events with at least one charged particle with transverse momentum above 40 MeV/c in this pseudorapidity range. This extends the analogous measurement performed by the other LHC experiments to the previously unexplored forward $\eta$ region. [...] arXiv:1205.4105; CERN-PH-EP-2012-106; TOTEM-2012–001.- Geneva : CERN, 2012 - 7 p. - Published in : EPL 98 (2012) 31002 Draft (restricted): PDF; Fulltext: PDF; IOP Open Access article: PDF; External link: Preprint
|
• #### Algebraic geometric comparison of probability distributions
[OWP-2011-30] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-27)
We propose a novel algebraic framework for treating probability distributions represented by their cumulants such as the mean and covariance matrix. As an example, we consider the unsupervised learning problem of finding ...
• #### Analytic Varieties with Finite Volume Amoebas are Algebraic
[OWP-2011-33] (Mathematisches Forschungsinstitut Oberwolfach, 2011)
In this paper, we study the amoeba volume of a given $k$-dimensional generic analytic variety $V$ of the complex algebraic torus $(\mathbb{C}^*)^n$. When $n \geq 2k$, we show that $V$ is algebraic if and only if the volume of its amoeba ...
• #### Asymptotic behavior of the eigenvalues and eigenfunctions to a spectral problem in thick cascade junction with concentrated masses
[OWP-2011-12] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-14)
The asymptotic behavior (as $\varepsilon \to 0$) of eigenvalues and eigenfunctions of a boundary value problem for the Laplace operator in a thick cascade junction with concentrated masses is investigated. This cascade ...
• #### Averages of Shifted Convolutions of d3(n)
[OWP-2011-11] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-13)
We investigate the first and second moments of shifted convolutions of the generalised divisor function $d_3(n)$.
• #### Braid equivalences and the L-moves
[OWP-2011-20] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-19)
In this survey paper we present the L-moves between braids and how they can adapt and serve for establishing and proving braid equivalence theorems for various diagrammatic settings, such as for classical knots, for knots ...
• #### A categorical model for the virtual braid group
[OWP-2011-19] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-18)
• #### Classification of totally real elliptic Lefschetz fibrations via necklace diagrams
[OWP-2011-13] (Mathematisches Forschungsinstitut Oberwolfach, 2011)
We show that totally real elliptic Lefschetz fibrations that admit a real section are classified by their "real loci", which is nothing but an $S^1$-valued Morse function on the real part of the total space. We assign to ...
• #### The Cleavage Operad and String Topology of Higher Dimension
[OWP-2011-37] (Mathematisches Forschungsinstitut Oberwolfach, 2011)
• #### Cluster structures on simple complex lie groups and the Belavin-Drinfeld classification
[OWP-2011-10] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-12)
We study natural cluster structures in the rings of regular functions on simple complex Lie groups and Poisson-Lie structures compatible with these cluster structures. According to our main conjecture, each class in the ...
• #### Combinatorics of Vassiliev invariants
[OWP-2011-22] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-20)
This paper is an introductory survey of the combinatorial aspects of the Vassiliev theory of knot invariants following the lectures delivered at the Advanced School on Knot Theory and its Applications to Physics and Biology ...
• #### Coxeter Arrangements and Solomon's Descent Algebra
[OWP-2011-03] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-6)
• #### Crystal energy functions via the charge in types A and C
[OWP-2011-25] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-23)
The Ram-Yip formula for Macdonald polynomials (at $t=0$) provides a statistic which we call charge. In types $A$ and $C$ it can be defined on tensor products of Kashiwara-Nakashima single column crystals. In this ...
• #### Definable orthogonality classes in accessible categories are small
[OWP-2011-14] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-15)
We lower substantially the strength of the assumptions needed for the validity of certain results in category theory and homotopy theory which were known to follow from Vopenka's principle. We prove that the necessary ...
• #### Dominance and Transmissions in Supertropical Valuation Theory
[OWP-2011-07] (Mathematisches Forschungsinstitut Oberwolfach, 2011)
This paper is a sequel of [IKR1], where we defined supervaluations on a commutative ring $R$ and studied a dominance relation $\varphi \geq \upsilon$ between supervaluations $\varphi$ and $\upsilon$ on $R$, aiming at an enrichment of ...
• #### Extremal configurations of polygonal linkages
[OWP-2011-24] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-22)
• #### Formal adjoints of linear DAE operators and their role in optimal control
[OWP-2011-15] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-16)
For regular strangeness-free linear differential-algebraic equations (DAEs) the definition of an adjoint DAE is straightforward. This definition can be formally extended to general linear DAEs. In this paper, we analyze ...
• #### Higher Finiteness Properties of Reductive Arithmetic Groups in Positive Characteristic: the Rank Theorem
[OWP-2011-05] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-8)
We show that the finiteness length of an $S$-arithmetic subgroup $\Gamma$ in a noncommutative isotropic absolutely almost simple group $\mathcal{G}$ over a global function field is one less than the sum of the local ranks ...
• #### An Identification Theorem for PSU6(2) and its Automorphism Groups
[OWP-2011-08] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-10)
We identify the groups PSU6(2), PSU6(2):2, PSU6(2):3 and Aut(PSU6(2)) from the structure of the centralizer of an element of order 3.
• #### An inductive approach to Coxeter arrangements and Solomon's descent algebra
[OWP-2011-16] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-17)
In our recent paper [3], we claimed that both the group algebra of a finite Coxeter group W as well as the Orlik-Solomon algebra of W can be decomposed into a sum of induced one-dimensional representations of centralizers, ...
• #### Infeasibility certificates for linear matrix inequalities
[OWP-2011-28] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-25)
Farkas' lemma is a fundamental result from linear programming providing linear certificates for infeasibility of systems of linear inequalities. In semidefinite programming, such linear certificates only exist for strongly ...
|
# Tag Info
### Deep Learning application in decryption?
There is no evidence of deep learning breaking modern cryptography. Deep learning is simply glorified gradient descent. With a reasonable cipher you get no indication of almost finding the key, so I ...
### Using AI to perform Cryptanalysis
Yes it could work for simple ciphers. Here's a quick example: ...
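The quick example itself is elided above; as an illustrative stand-in (not the answer's original code), the following sketch shows the kind of thing that works for simple ciphers: a nearest-neighbour classifier that recovers the key of a toy Caesar cipher from letter frequencies.

```python
# Illustrative stand-in (not the answer's original, elided code): a 1-nearest-neighbour
# classifier that recovers the key of a toy Caesar cipher from letter frequencies.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def caesar(text, key):
    """Shift every letter of a lowercase message by `key` positions."""
    return "".join(chr((ord(c) - 97 + key) % 26 + 97) for c in text if c.isalpha())

def freq_vector(text):
    """Normalised letter-frequency vector of a lowercase message."""
    counts = np.zeros(26)
    for c in text:
        counts[ord(c) - 97] += 1
    return counts / counts.sum()

# Training data: one frequency vector per candidate key, built from a small sample corpus
corpus = ("the quick brown fox jumps over the lazy dog and keeps running "
          "through the quiet green fields while nobody watches") * 3
X = np.array([freq_vector(caesar(corpus, k)) for k in range(26)])
y = np.arange(26)

model = KNeighborsClassifier(n_neighbors=1).fit(X, y)

# "Unknown" ciphertext encrypted with key 17; the classifier should recover the key
secret = caesar("attack the castle at dawn and bring every archer you can find along the river", 17)
print(model.predict([freq_vector(secret)]))  # expected output: [17]
```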
### Homomorphic Encryption for Deep Learning
There are very few (somewhat practical) results about homomorphic deep learning currently. As a good starting point, you might want to have a look at this recent paper from my former colleagues, and ...
### Using AI to perform Cryptanalysis
The best example of black-box, end-to-end learning of the type you describe in the literature is probably Greydanus' work on Learning the Enigma With Recurrent Neural Networks. They achieve functional ...
### Can machine learning/AI lower the security existing cryptographic protocols similar to quantum computers?
No. Machine learning and AI techniques do not fundamentally change the computational capabilities of an adversary like a quantum computer does, no matter how much hype is in the air around ML and AI.
### Extracting genome from a Ciphertext
I'll take the question as: Is it possible to analyze ciphertext (encrypted with a large unknown key) and thus distinguish characteristics of the plaintext, like we can sequence minced meat and deduce ...
### Using AI to perform Cryptanalysis
That depends on the encryption. But for all simple monoalphabetic substitutions the answer is yes. And you don't need a neural net; even the simplest classifier works. You train it on the letters of ...
### Neural Network based on pseudorandom number
The paper's proposed scheme is not useful. I don't recommend spending your time on this paper. If you want to generate pseudorandom numbers in practice, either use a standard pseudorandom number ...
### How to adapt the equation of Gaussian mechanism noise based on number of executions
The formula you mention to get $\sigma$ given $\delta$ and $\varepsilon$ is only correct for $\varepsilon<1$. It's also not tight. If you use Gaussian noise on multiple statistics, each of ...
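For reference, the bound most likely being referred to is the classical Gaussian mechanism (stated here from the standard Dwork and Roth treatment, as an addition to the truncated answer): for $\varepsilon \in (0, 1)$, adding noise drawn from $\mathcal N(0, \sigma^2)$ with

$\sigma \geq \dfrac{\Delta_2 f \, \sqrt{2 \ln(1.25/\delta)}}{\varepsilon}$

gives $(\varepsilon, \delta)$-differential privacy, where $\Delta_2 f$ is the $\ell_2$-sensitivity of the released statistic.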
### Machine Learning to improve Encryption algorithms
No. There is large consensus that: There is nothing special about image encryption algorithms, they are just encryption algorithms. ECC is useful for such algorithms inasmuch as they need to be made ...
There certainly are mechanisms for choosing cryptographic algorithms from a pool. The obvious example is the TLS handshake to agree on a cipher suite for a connection between two computers. The two ...
### security of different federated learning schemes
Here's one attack : https://arxiv.org/pdf/2011.09290.pdf Hope that helps.
### Can Big Data together with deep neural networks attack RSA by affording the vast calculation of prime multiplications in advance?
A large issue with questions like this tends to be the technical details, so apologies if I come across as particularly nit-picky --- I just do not know how to answer questions like this without ...
### What machine learning accuracy is assumed to be predictable for TRNG/PUF application?
By the definition of the next bit test any adversary (ML or not) able to guess the next bit of the output with probability non-negligibly greater than 50% is a break. So 60% is horribly broken. ...
### What machine learning accuracy is assumed to be predictable for TRNG/PUF application?
No, we still idealistically target 50% inter Hamming distance for PUFs, but being probabilistic and cumulative it's not set in stone. It's just that you achieve 100% material efficiency at 50%. Or you ...
### Using AI to perform Cryptanalysis
For weak ciphers, sure. For somewhat modern ciphers, e.g. Enigma, it is sort of possible but not as efficient as other methods. For modern cryptography? No. Machine learning is a very broad field, so ...
### How can we compare an encrypted number with a normal number?
If this is not performed under the encrypted version of the plaintext on the semi-honest party there is a problem. Assume that there is a method $f(m,c)$ that returns $T$ if the values are the same and $F$ if ...
|
# Poisson
## Poisson Distribution Explained
The Poisson distribution gives the probability of a given number of events occurring in a fixed time interval.
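For reference, the standard probability mass function is:

$P(X = k) = \dfrac{\lambda^{k} e^{-\lambda}}{k!}, \quad k = 0, 1, 2, \ldots$

where $\lambda$ is the average number of events per interval and $k$ is the observed count.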
|
In many cases, you can untangle hairy rational expressions and integrate them using the anti-differentiation rules plus the Sum Rule, Constant Multiple Rule, and Power Rule.
The Sum Rule for integration tells you that integrating long expressions term by term is okay. Here it is formally:
The Constant Multiple Rule tells you that you can move a constant outside of the integral sign before you integrate. Here it is expressed in symbols:
The Power Rule for integration allows you to integrate any real power of x (except –1). Here’s the Power Rule expressed formally:
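For reference, the standard statements of the three rules are:

$\int \left[f(x) + g(x)\right] \, dx = \int f(x) \, dx + \int g(x) \, dx$

$\int c \, f(x) \, dx = c \int f(x) \, dx$

$\int x^{n} \, dx = \dfrac{x^{n+1}}{n+1} + C \quad (n \neq -1)$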
Here’s an integral that looks like it may be difficult:
You can split the function into several fractions, but without the Product Rule or Quotient Rule, you’re then stuck. Instead, expand the numerator and put the denominator in exponential form:
Next, split the expression into five terms:
Then use the Sum Rule to separate the integral into five separate integrals and the Constant Multiple Rule to move the coefficient outside the integral in each case:
Now you can integrate each term separately using the Power Rule:
|
# Membership Relation is Antisymmetric
## Theorem
Let $\Bbb S$ be a set of sets in the context of pure set theory
Let $\mathcal R$ denote the membership relation on $\Bbb S$:
$\forall \tuple {a, b} \in \Bbb S \times \Bbb S: \tuple {a, b} \in \mathcal R \iff a \in b$
$\mathcal R$ is an antisymmetric relation.
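One standard argument, assuming the Axiom of Foundation, runs as follows: suppose $a \in b$ and $b \in a$ for some $a, b \in \Bbb S$. Then the set $\{a, b\}$ has no element disjoint from it, since $b \in a \cap \{a, b\}$ and $a \in b \cap \{a, b\}$, contradicting the Axiom of Foundation. Hence:

$\neg \left(a \in b \land b \in a\right)$

so the antisymmetry condition $a \in b \land b \in a \implies a = b$ holds vacuously.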
|
# Making good posts discoverable when the original question isn't obviously related
I recently answered the question "Which Is More Fundamental, Fields or Particles?" essentially by explaining what second quantization is and why it makes so much more sense than first quantization. The answer has been well received suggesting that some users benefited from a good explanation of second quantization. However, if I search "what is second quantization" I do not find my answer. This is not surprising because the question I answered does not have "second quantization" in the title or tags.
This raises the following question:
Sometimes when a user posts a question asking "Why X?", the best answer involves explaining topic Y. A user may post a good explanation of Y. If X and Y share no meaningful key words, users searching for the answer to "What is Y?" may not find the answer. Therefore, what should be done in this situation to help future users wondering about Y find the answer?
• The question title isn't very clear. How about 'Making good posts discoverable when the original question isn't obviously related'? Jul 2, 2014 at 14:27
• That said, your answer does get picked up by searches for second quantization, e.g. by this search. Jul 2, 2014 at 14:32
• @EmilioPisanty: I changed the title. I appreciate your suggestion, because I struggled to come up with a good title the first time around. Jul 2, 2014 at 15:37
• Why not just add the tag if you think your answer makes the question apply to the tag? Jul 7, 2014 at 20:27
• @JerrySchirmer: That's a good idea. Thinking about this now, I suppose editing the title is also a good option. Jul 7, 2014 at 20:36
• It seems to me that the right thing to do is to wait for (or search for) a question asking about second quantization, then either re-post a (possibly somewhat modified) version of your existing answer or perhaps (if absolutely no modification is necessary) just a pointer to the existing answer. Sep 27, 2016 at 16:59
|
# Revision history
The problem can be seen as follows:
int(1)^GF125X(1)
Indeed it does not make sense to raise integers to polynomial powers in general.
Maybe the error message should be more descriptive, by including the types.
In your case, it seems you have confused f_i and i: note enumerate yields a list of pairs where the first element is the index.
Also you can omit the argument "GF" (which, confusingly, is a polynomial ring): it can be obtained from f by f.parent(). Furthermore you can probably avoid using my_mul by using prod and zip. Also, you will probably find it convenient to define
GF125X.<x> = GF(5^3)[]
f = (x^5 + x^2 + x^1 + 1)^2*x^5
so that x is really the generator of a polynomial ring, and you can define f without explicit conversion.
|
# Proof of $\dfrac{d}{dx} a^x$ formula
$\large \dfrac{d}{dx}{a^x} \,=\, a^x \log_{e}{a}$
It is called the differentiation of $a$ raised to the power of $x$ with respect to $x$ formula, and this differentiation rule can be derived in differential calculus on the basis of the relation between limit and differentiation. Here are the steps to prove the $\dfrac{d}{dx}{a^x}$ formula mathematically.
### Express differentiation of function in Limit form
The derivative of a function with respect to $x$ can be expressed in limit form as per the mathematical relation between limit and differentiation. So, the differentiation of $a$ raised to the power of $x$ is expressed in limit form.
$\implies$ $\dfrac{d}{dx}{a^x} = \displaystyle \lim_{h \,\to\, 0}{\dfrac{a^{x+h}-a^x}{h}}$
### Split the exponential function
The exponential function in the numerator contains a sum of two literals as its exponent. It can be split into a product of two factors by applying the product rule of exponents.
$\implies$ $\dfrac{d}{dx}{a^x} = \displaystyle \lim_{h \,\to\, 0}{\dfrac{a^{x} \times a^{h}-a^x}{h}}$
### Simplifying the equation
$\implies$ $\dfrac{d}{dx}{a^x} = \displaystyle \lim_{h \,\to\, 0}{\dfrac{a^{x} \times (a^{h}-1)}{h}}$
$\implies$ $\dfrac{d}{dx}{a^x} = \displaystyle \lim_{h \,\to\, 0}{a^{x} \times \dfrac{a^{h}-1}{h}}$
The multiplying factor $a^x$ is a constant in this case, since the limit is taken with respect to $h$. So, it can be taken outside the limit.
$\implies$ $\dfrac{d}{dx}{a^x} = a^{x} \times \displaystyle \lim_{h \,\to\, 0}{\dfrac{a^{h}-1}{h}}$
### Obtaining the Result
According to limit rules, the limit of $\dfrac{a^x-1}{x}$ as $x$ approaches $0$ is equal to the natural logarithm of $a$.
$\displaystyle \lim_{h \,\to\, 0}{\dfrac{a^{h}-1}{h}} = \log_{e}{a}$
Now simplify the equation for deriving the derivative of $a^x$ with respect to $x$ formula in differential calculus.
$\implies$ $\dfrac{d}{dx}{a^x} = a^{x} \times \log_{e}{a}$
$\,\,\, \therefore \,\,\,\,\,\,$ $\dfrac{d}{dx}{a^x} = a^{x} \log_{e}{a}$
It is used as a formula for the differentiation of $a^x$ with respect to $x$ in calculus. It can also be written in natural logarithmic form as follows.
$\dfrac{d}{dx}{a^x} = a^{x} \ln{a}$
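As a cross-check, the same result also follows directly by writing $a^x$ as $e^{x\ln{a}}$ and applying the chain rule:

$\dfrac{d}{dx}{a^x} \,=\, \dfrac{d}{dx}\,e^{x \ln{a}} \,=\, e^{x \ln{a}} \cdot \ln{a} \,=\, a^{x} \ln{a}$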
|
# Thread: Aviansie Killer - Build It Yourself!
1. ## Aviansie Killer - Build It Yourself!
Build It Yourself Aviansie Killer!
Inspired by Home's DIY Guild-Ranger, I decided to release a completely butchered version of my Aviansie Killer to the public. I would rate this 8/10 difficulty to put back together: you will need to fix compiling errors, solve logic errors, edit object detection to work, and deal with other surprises.
Rules
~You must not help each other fix the compiling errors and create functions
~If fixed, you must not share your version with other users
~Don't complain, if you can't fix it I'm not going to help you
~If fixed, send me a PM so I can congratulate you on a job well done!
Hints
~There are various misspellings and parameter errors
~You will need to do some color finding(using CTS 2 and ACA)
~You will need to do some DTM creating
~The TPAs in the function are "mostly" intact; they will need to be fixed to actually work, however
~Have Fun! I did this as a learning experience for public users, you can learn a bit from looking at the code and logic. If you can't get it to work, don't fret but put it away for a bit then come back and try it again in a week to see how much more you can fix.
Good Luck!
Progress Report:
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
=+=+=+ Auto Aviansie Pro 1.7 by PatDuffy
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
Time Running: 1 Hours, 31 Minutes and 25 Seconds
Aviansie's killed: 103
Cash Gained: 328000
Cash/Hour: 213913
Range Exp Gained: 30900
Range/Hour: 20152
=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
2. Great idea, I remember you mentioning this before. I would love to try to solve it; unfortunately this script would go unused by me as I am f2p, so really there is no point in doing all the work. Anyways, that's my life story, and I would like to wish good luck to everyone who tries to attempt this. Good job Pat.
3. Haha, ya pat. Check PM, i got website near done! :P
4.
Awesome release even though you need to build it yourself ^^
5.
[Error] (84:3): Unknown identifier 'BCols' at line 83
Compiling failed.
6.
Originally Posted by cyberrion
[Error] (84:3): Unknown identifier 'BCols' at line 83
Compiling failed.
He said he's not going to help people. And don't expect just to run this script and have it run perfectly. It's a 'Build It Yourself!'.
7.
Originally Posted by cyberrion
[Error] (84:3): Unknown identifier 'BCols' at line 83
Compiling failed.
This is what happens when you don't read the post, you only try to leech the script and go straight to download. READ FIRST!
8. That was a nice thing to do, Pat, giving people a chance to earn a member script!
Keep it up, bud.
9. I think I'd find it easier to create an aviansie script than sort through that mess... :P
10.
Originally Posted by footballjds
I think I'd find it easier to create an aviansie script than sort through that mess... :P
Agreed!
11. Originally Posted by footballjds
I think I'd find it easier to create an aviansie script than sort through that mess... :P
Or just use the one in the members section :P
Haha but I agree, good luck everyone
12.
1159 lines of code, everything in the script is wrong... *Challenge Accepted*
I will probably work on this for the whole week and still not be able to fix it ;P
13.
give us the assembled script?
:P
14. Originally Posted by Laz
give us the assembled script?
:P
Or you can make it yourself
15.
Or you can make it yourself
16.
I tried to make my own and this happened.
17. That's the idea.
18.
Originally Posted by PatDuffy
That's the idea.
19.
20.
Been trying to fix this. I want to shoot something
21. Ehhhhhh it "should" be possible
Just takes a good knowledge of what to do.
22.
Pat man, although I am new here I must say this is a good idea. I thought since I have a bit of knowledge of scripting (enough to make an easy script) I could fix this, but damn I'm stumped. BTW I just did it for fun because I'm not a member; good luck to everyone else.
23.
I tried my best and could not figure it out. Best of luck to y'all; if anyone wants to send it to me fixed, that would be awesome. hehe
I decided to fix the spelling mistakes since that is all I'm able to do at the moment.
I would also like to add that this was a great idea...
25.
Nice idea yet hard to do for me
|
# Square of Dirac delta function
Is the square of a Dirac delta function, ##(\delta(x))^2##, still a Dirac delta function, ##\delta(x)##?
A Dirac delta function peaks at one value of ##x##, say 0. If it is squared, it still peaks at the same value, so it seems like the squared Dirac delta function is still a Dirac delta function, ##\delta(x)##, or some multiple of it, ##k\delta(x)##, where ##k>1##, since the area under graph seems larger.
How about the square root of a Dirac delta function?
scottdave
It is something totally different than just multiplying by a constant. One thing to think about: the delta function is sometimes described as a rectangle of width d and height (1/d), and you then take the limit as d->0 (so the height approaches infinity).
At all values of d, you get an area of d·(1/d) = 1. But if you square it, the width is essentially the same as for the delta, while the height becomes (1/d)^2, so the area is d·(1/d)^2 = 1/d, which blows up as d->0 instead of staying finite.
WolframAlpha produced a surprising result for this. http://www.wolframalpha.com/input/?i=(DiracDelta[t])*(DiracDelta[t])
|
#### APRG Seminar
##### Venue: Microsoft Teams (online)
Dunkl theory is a generalization of Fourier analysis and special function theory related to root systems and reflection groups. The Dunkl operators $T_{j}$, which were introduced by C. F. Dunkl in 1989, are deformations of directional derivatives by difference operators related to the reflection group. The goal of this talk will be to study harmonic analysis in the rational Dunkl setting. The first part will be devoted to some of the results obtained in recent joint works with Jacek Dziubanski (2019, 2020).
• improved estimates of the heat kernel $h_t(\mathbf{x},\mathbf{y})$ of the Dunkl heat semigroup generated by the Dunkl–Laplace operator $\Delta_k=\sum_{j=1}^{N}T_j^2$, expressed in terms of analysis on spaces of homogeneous type;
• a theorem regarding the support of the Dunkl translation $\tau_{\mathbf{x}}\phi$ of a compactly supported $L^2$ function $\phi$ (not necessarily radial).
The results listed above turn out to be useful tools in studying harmonic analysis in the Dunkl setting. We will discuss this kind of application in the second part of the talk. We will focus on a version of the classical Hörmander multiplier theorem proved in joint work with Dziubanski (2019). If time permits, we will discuss how our tools can be used for studying singular integrals of convolution type or Littlewood–Paley square functions in the Dunkl setting.
|
# c2d
Convert model from continuous to discrete time
## Syntax
sysd = c2d(sys,Ts)
sysd = c2d(sys,Ts,method)
sysd = c2d(sys,Ts,opts)
[sysd,G] = c2d(sys,Ts,method)
[sysd,G] = c2d(sys,Ts,opts)
## Description
sysd = c2d(sys,Ts) discretizes the continuous-time dynamic system model sys using zero-order hold on the inputs and a sample time of Ts seconds.
sysd = c2d(sys,Ts,method) discretizes sys using the specified discretization method method.
sysd = c2d(sys,Ts,opts) discretizes sys using the option set opts, specified using the c2dOptions command.
[sysd,G] = c2d(sys,Ts,method) returns a matrix, G that maps the continuous initial conditions x0 and u0 of the state-space model sys to the discrete-time initial state vector x[0]. method is optional. To specify additional discretization options, use [sysd,G] = c2d(sys,Ts,opts).
## Input Arguments
sys: Continuous-time dynamic system model (except frequency response data models). sys can represent a SISO or MIMO system, except that the 'matched' discretization method supports SISO systems only. sys can have input/output or internal time delays; however, the 'matched', 'impulse', and 'least-squares' methods do not support state-space models with internal time delays. The following identified linear systems cannot be discretized directly: idgrey models whose FunctionType is 'c' (convert to an idss model first) and idproc models (convert to an idtf or idpoly model first). For the syntax [sysd,G] = c2d(sys,Ts,opts), sys must be a state-space model.
Ts: Sample time.
method: Discretization method, specified as one of the following values:
• 'zoh' — Zero-order hold (default). Assumes the control inputs are piecewise constant over the sample time Ts.
• 'foh' — Triangle approximation (modified first-order hold). Assumes the control inputs are piecewise linear over the sample time Ts.
• 'impulse' — Impulse-invariant discretization.
• 'tustin' — Bilinear (Tustin) method.
• 'matched' — Zero-pole matching method.
• 'least-squares' — Least-squares method.
For information about the algorithms for each conversion method, see Continuous-Discrete Conversion Methods.
opts: Discretization options. Create opts using c2dOptions.
## Output Arguments
sysd: Discrete-time model of the same type as the input system sys. When sys is an identified (IDLTI) model, sysd:
• Includes both measured and noise components of sys. The innovations variance λ of the continuous-time identified model sys, stored in its NoiseVariance property, is interpreted as the intensity of the spectral density of the noise spectrum. The noise variance in sysd is thus λ/Ts.
• Does not include the estimated parameter covariance of sys. If you want to translate the covariance while discretizing the model, use translatecov.
G: Matrix relating continuous-time initial conditions x0 and u0 of the state-space model sys to the discrete-time initial state vector x[0], as follows: $x\left[0\right]=G\cdot \left[\begin{array}{c}{x}_{0}\\ {u}_{0}\end{array}\right]$
For state-space models with time delays, c2d pads the matrix G with zeroes to account for additional states introduced by discretizing those delays. See Continuous-Discrete Conversion Methods for a discussion of modeling time delays in discretized systems.
## Examples
Discretize the following continuous-time transfer function:
$H\left(s\right)={e}^{-0.3s}\frac{s-1}{{s}^{2}+4s+5}.$
This system has an input delay of 0.3 s. Discretize the system using the triangle (first-order-hold) approximation with sample time Ts = 0.1 s.
H = tf([1 -1],[1 4 5],'InputDelay', 0.3);
Hd = c2d(H,0.1,'foh');
Compare the step responses of the continuous-time and discretized systems.
step(H,'-',Hd,'--')
Discretize the following delayed transfer function using zero-order hold on the input, and a 10-Hz sampling rate.
$H\left(s\right)={e}^{-0.25s}\frac{10}{{s}^{2}+3s+10}.$
h = tf(10,[1 3 10],'IODelay',0.25);
hd = c2d(h,0.1)
hd =
0.01187 z^2 + 0.06408 z + 0.009721
z^(-3) * ----------------------------------
z^2 - 1.655 z + 0.7408
Sample time: 0.1 seconds
Discrete-time transfer function.
In this example, the discretized model hd has a delay of three sampling periods. The discretization algorithm absorbs the residual half-period delay into the coefficients of hd.
Compare the step responses of the continuous-time and discretized models.
step(h,'--',hd,'-')
Create a continuous-time state-space model with two states and an input delay.
sys = ss(tf([1,2],[1,4,2]));
sys.InputDelay = 2.7
sys =
A =
x1 x2
x1 -4 -2
x2 1 0
B =
u1
x1 2
x2 0
C =
x1 x2
y1 0.5 1
D =
u1
y1 0
Input delays (seconds): 2.7
Continuous-time state-space model.
Discretize the model using the Tustin discretization method and a Thiran filter to model fractional delays. The sample time Ts = 1 second.
opt = c2dOptions('Method','tustin','FractDelayApproxOrder',3);
sysd1 = c2d(sys,1,opt)
sysd1 =
A =
x1 x2 x3 x4 x5
x1 -0.4286 -0.5714 -0.00265 0.06954 2.286
x2 0.2857 0.7143 -0.001325 0.03477 1.143
x3 0 0 -0.2432 0.1449 -0.1153
x4 0 0 0.25 0 0
x5 0 0 0 0.125 0
B =
u1
x1 0.002058
x2 0.001029
x3 8
x4 0
x5 0
C =
x1 x2 x3 x4 x5
y1 0.2857 0.7143 -0.001325 0.03477 1.143
D =
u1
y1 0.001029
Sample time: 1 seconds
Discrete-time state-space model.
The discretized model now contains three additional states x3, x4, and x5 corresponding to a third-order Thiran filter. Since the time delay divided by the sample time is 2.7, the third-order Thiran filter ('FractDelayApproxOrder' = 3) can approximate the entire time delay.
Estimate a continuous-time transfer function, and discretize it.
sys1c = tfest(z1,2);
sys1d = c2d(sys1c,0.1,'zoh');
Estimate a second order discrete-time transfer function.
sys2d = tfest(z1,2,'Ts',0.1);
Compare the response of the discretized continuous-time transfer function model, sys1d, and the directly estimated discrete-time model, sys2d.
compare(z1,sys1d,sys2d)
The two systems are almost identical.
Discretize an identified state-space model to build a one-step ahead predictor of its response.
Create a continuous-time identified state-space model using estimation data.
sysc = ssest(z2,4);
Predict the 1-step ahead predicted response of sysc.
predict(sysc,z2)
Discretize the model.
sysd = c2d(sysc,0.1,'zoh');
Build a predictor model from the discretized model, sysd.
[A,B,C,D,K] = idssdata(sysd);
Predictor = ss(A-K*C,[K B-K*D],C,[0 D],0.1);
Predictor is a two-input model which uses the measured output and input signals ([z2.y z2.u]) to compute the 1-step predicted response of sysc.
Simulate the predictor model to get the same response as the predict command.
lsim(Predictor,[z2.y,z2.u])
The simulation of the predictor model gives the same response as predict(sysc,z2).
## Tips
• Use the syntax sysd = c2d(sys,Ts,method) to discretize sys using the default options for method. To specify additional discretization options, use the syntax sysd = c2d(sys,Ts,opts).
• To specify the tustin method with frequency prewarping (formerly known as the 'prewarp' method), use the PrewarpFrequency option of c2dOptions.
## Algorithms
For information about the algorithms for each c2d conversion method, see Continuous-Discrete Conversion Methods.
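For comparison, a similar conversion can be sketched outside MATLAB with the third-party python-control package (shown here as an illustration under the assumption that python-control is available, not as part of the MathWorks documentation); the time delay is omitted for simplicity.

```python
# A sketch of a comparable conversion outside MATLAB, assuming the third-party
# python-control package is installed; the time delay is omitted for simplicity.
import control

sysc = control.tf([10], [1, 3, 10])                    # H(s) = 10 / (s^2 + 3s + 10)
sysd = control.sample_system(sysc, 0.1, method='zoh')  # zero-order hold, Ts = 0.1 s
print(sysd)
```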
|
# Eight to Late
Sensemaking and Analytics for Organizations
## A gentle introduction to Monte Carlo simulation for project managers
This article covers the why, what and how of Monte Carlo simulation using a canonical example from project management – estimating the duration of a small project. Before starting, however, I'd like to say a few words about the tool I'm going to use.
Despite the bad rap spreadsheets get from tech types – and I have to admit that many of their complaints are justified – the fact is, Excel remains one of the most ubiquitous "computational" tools in the corporate world. Most business professionals would have used it at one time or another. So, if you're a project manager and want the rationale behind your estimates to be accessible to the widest possible audience, you are probably better off presenting them in Excel than in SPSS, SAS, Python, R or pretty much anything else. Consequently, the tool I'll use in this article is Microsoft Excel. For those who know about Monte Carlo and want to cut to the chase, here's the Excel workbook containing all the calculations detailed in later sections. However, if you're unfamiliar with the technique, you may want to have a read of the article before playing with the spreadsheet.
In keeping with the format of the tutorials on this blog, I’ve assumed very little prior knowledge about probability, let alone Monte Carlo simulation. Consequently, the article is verbose and the tone somewhat didactic.
### Introduction
Estimation is a key part of a project manager's role. The most frequent (and consequential) estimates they are asked to deliver relate to time and cost. Often these are calculated and presented as point estimates: i.e. single numbers – as in, this task will take 3 days. Or, a little better, as two-point ranges – as in, this task will take between 2 and 5 days. Better still, many use a PERT-like approach wherein estimates are based on 3 points: best, most likely and worst case scenarios – as in, this task will take between 2 and 5 days, but it's most likely that we'll finish on day 3. We'll use three-point estimates as a starting point for Monte Carlo simulation, but first, some relevant background.
It is a truism, well borne out by experience, that it is easier to estimate small, simple tasks than large, complex ones. Indeed, this is why one of the early to-dos in a project is the construction of a work breakdown structure. However, a problem arises when one combines the estimates for individual elements into an overall estimate for a project or a phase thereof. It is that a straightforward addition of individual estimates or bounds will almost always lead to a grossly incorrect estimation of overall time or cost. The reason for this is simple: estimates are necessarily based on probabilities and probabilities do not combine additively. Monte Carlo simulation provides a principled and intuitive way to obtain probabilistic estimates at the level of an entire project based on estimates of the individual tasks that comprise it.
### The problem
The best way to explain Monte Carlo is through a simple worked example. So, let's consider the 4-task project shown in Figure 1. In the project, the second task is dependent on the first, and the third and fourth are dependent on the second but not on each other. The upshot of this is that the first two tasks have to be performed sequentially and the last two can be done at the same time, but can only be started after the second task is completed.
To summarise: the first two tasks must be done in series and the last two can be done in parallel.
Figure 1: A project with 4 tasks.
Figure 1 also shows the three point estimates for each task – that is the minimum, maximum and most likely completion times. For completeness I’ve listed them below:
• Task 1 – Min: 2 days; Most Likely: 4 days; Max: 8 days
• Task 2 – Min: 3 days; Most Likely: 5 days; Max: 10 days
• Task 3 – Min: 3 days; Most Likely: 6 days; Max: 9 days
• Task 4 – Min: 2 days; Most Likely: 4 days; Max: 7 days
OK, so that’s the situation as it is given to us. The first step to developing an estimate is to formulate the problem in a way that it can be tackled using Monte Carlo simulation. This bring us to the important topic of the shape of uncertainty aka probability distributions.
### The shape of uncertainty
Consider the data for Task 1. You have been told that it most often finishes on day 4. However, if things go well, it could take as little as 2 days; but if things go badly it could take as long as 8 days. Therefore, your range of possible finish times (outcomes) is between 2 to 8 days.
Clearly, each of these outcomes is not equally likely. The most likely outcome is that you will finish the task in 4 days (from what your team member has told you). Moreover, the likelihood of finishing in less than 2 days or more than 8 days is zero. If we plot the likelihood of completion against completion time, it would look something like Figure 2.
Figure 2: Likelihood of finishing on day 2, day 4 and day 8.
Figure 2 begs a couple of questions:
1. What are the relative likelihoods of completion for all intermediate times – i.e. those between 2 to 4 days and 4 to 8 days?
2. How can one quantify the likelihood of intermediate times? In other words, how can one get a numerical value of the likelihood for all times between 2 to 8 days? Note that we know from the earlier discussion that this must be zero for any time less than 2 or greater than 8 days.
The two questions are actually related. As we shall soon see, once we know the relative likelihood of completion at all times (compared to the maximum), we can work out its numerical value.
Since we don’t know anything about intermediate times (I’m assuming there is no other historical data available), the simplest thing to do is to assume that the likelihood increases linearly (as a straight line) from 2 to 4 days and decreases in the same way from 4 to 8 days as shown in Figure 3. This gives us the well-known triangular distribution.
Jargon Buster: The term distribution is simply a fancy word for a plot of likelihood vs. time.
Figure 3: Triangular distribution fitted to points in Figure 1
Of course, this isn’t the only possibility; there are an infinite number of others. Figure 4 is another (admittedly weird) example.
Figure 4: Another distribution that fits the points in Figure 2.
Further, it is quite possible that the upper limit (8 days) is not a hard one. It may be that in exceptional cases the task could take much longer (for example, if your team member calls in sick for two weeks) or even not be completed at all (for example, if she then leaves for that mythical greener pasture). Catering for the latter possibility, the shape of the likelihood might resemble Figure 5.
Figure 5: A distribution that allows for a very long (potentially) infinite completion time
The main takeaway from the above is that uncertainties should be expressed as shapes rather than numbers, a notion popularised by Sam Savage in his book, The Flaw of Averages.
[Aside: you may have noticed that all the distributions shown above are skewed to the right – that is they have a long tail. This is a general feature of distributions that describe time (or cost) of project tasks. It would take me too far afield to discuss why this is so, but if you're interested you may want to check out my post on the inherent uncertainty of project task estimates.]
### From likelihood to probability
Thus far, I have used the word “likelihood” without bothering to define it. It’s time to make the notion more precise. I’ll begin by asking the question: what common sense properties do we expect a quantitative measure of likelihood to have?
Consider the following:
1. If an event is impossible, its likelihood should be zero.
2. The sum of likelihoods of all possible events should equal complete certainty. That is, it should be a constant. As this constant can be anything, let us define it to be 1.
In terms of the example above, if we denote time by $t$ and the likelihood by $P(t)$ then:
$P(t) = 0$ for $t< 2$ and $t> 8$
And
$\sum_{t}P(t) = 1$ where $2\leq t< 8$
Where $\sum_{t}$ denotes the sum of all non-zero likelihoods – i.e. those that lie between 2 and 8 days. In simple terms this is the area enclosed by the likelihood curves and the x axis in figures 2 to 5. (Technical Note: Since $t$ is a continuous variable, this should be denoted by an integral rather than a simple sum, but this is a technicality that need not concern us here)
$P(t)$ is , in fact, what mathematicians call probability– which explains why I have used the symbol $P$ rather than $L$. Now that I’ve explained what it is, I’ll use the word “probability” instead of ” likelihood” in the remainder of this article.
With these assumptions in hand, we can now obtain numerical values for the probability of completion for all times between 2 and 8 days. This can be figured out by noting that the area under the probability curve (the triangle in figure 3 and the weird shape in figure 4) must equal 1, and we’ll do this next. Indeed, for the problem at hand, we’ll assume that all four task durations can be fitted to triangular distributions. This is primarily to keep things simple. However, I should emphasise that you can use any shape so long as you can express it mathematically, and I’ll say more about this towards the end of this article.
### The triangular distribution
Let’s look at the estimate for Task 1. We have three numbers corresponding to a minimummost likely and maximum time. To keep the discussion general, we’ll call these $t_{min}$, $t_{ml}$ and $t_{max}$ respectively, (we’ll get back to our estimator’s specific numbers later).
Now, what about the probabilities associated with each of these times?
Since $t_{min}$ and $t_{max}$ correspond to the minimum and maximum times, the probability associated with these is zero. Why? Because if it wasn’t zero, then there would be a non-zero probability of completion for a time less than $t_{min}$ or greater than $t_{max}$ – which isn’t possible [Note: this is a consequence of the assumption that the probability varies continuously – so if it takes on non-zero value, $p_{0}$, at $t_{min}$ then it must take on a value slightly less than $p_{0}$ – but greater than 0 – at $t$ slightly smaller than $t_{min}$ ] . As far as the most likely time, $t_{ml}$, is concerned: by definition, the probability attains its highest value at time $t_{ml}$. So, assuming the probability can be described by a triangular function, the distribution must have the form shown in Figure 6 below.
Figure 6: Triangular distribution redux.
For the simulation, we need to know the equation describing the above distribution. Although Wikipedia will tell us the answer in a mouse-click, it is instructive to figure it out for ourselves. First, note that the area under the triangle must be equal to 1 because the task must finish at some time between $t_{min}$ and $t_{max}$. As a consequence we have:
$\frac{1}{2}\times{base}\times{altitude}=\frac{1}{2}\times{(t_{max}-t_{min})}\times{p(t_{ml})}=1\ldots\ldots{(1)}$
where $p(t_{ml})$ is the probability corresponding to time $t_{ml}$. With a bit of rearranging we get,
$p(t_{ml})=\frac{2}{(t_{max}-t_{min})}\ldots\ldots(2)$
To derive the probability for any time $t$ lying between $t_{min}$ and $t_{ml}$, we note that:
$\frac{(t-t_{min})}{p(t)}=\frac{(t_{ml}-t_{min})}{p(t_{ml})}\ldots\ldots(3)$
This is a consequence of the fact that the ratios on either side of equation (3) are equal to the slope of the line joining the points $(t_{min},0)$ and $(t_{ml}, p(t_{ml}))$.
Figure 7
Substituting (2) in (3) and simplifying a bit, we obtain:
$p(t)=\frac{2(t-t_{min})}{(t_{ml}-t_{min})(t_{max}-t_{min})}\dots\ldots(4)$ for $t_{min}\leq t \leq t_{ml}$
In a similar fashion one can show that the probability for times lying between $t_{ml}$ and $t_{max}$ is given by:
$p(t)=\frac{2(t_{max}-t)}{(t_{max}-t_{ml})(t_{max}-t_{min})}\dots\ldots(5)$ for $t_{ml}\leq t \leq t_{max}$
Equations 4 and 5 together describe the probability distribution function (or PDF) for all times between $t_{min}$ and $t_{max}$.
As it turns out, in Monte Carlo simulations, we don't directly work with the probability distribution function. Instead we work with the cumulative distribution function (or CDF) which is the probability, $P$, that the task is completed by time $t$. To reiterate, the PDF, $p(t)$, is the probability of the task finishing at time $t$ whereas the CDF, $P(t)$, is the probability of the task completing by time $t$. The CDF, $P(t)$, is essentially a sum of all probabilities between $t_{min}$ and $t$. For $t_{min}\leq t \leq t_{ml}$ this is the area under the triangle with apexes at ($t_{min}$, 0), (t, 0) and (t, p(t)). Using the formula for the area of a triangle (1/2 base times height) and equation (4) we get:
$P(t)=\frac{(t-t_{min})^2}{(t_{ml}-t_{min})(t_{max}-t_{min})}\ldots\ldots(6)$ for $t_{min}\leq t \leq t_{ml}$
Noting that for $t \geq t_{ml}$, the area under the curve equals the total area minus the area enclosed by the triangle with base between t and $t_{max}$, we have:
$P(t)=1- \frac{(t_{max}-t)^2}{(t_{max}-t_{ml})(t_{max}-t_{min})}\ldots\ldots(7)$ for $t_{ml}\leq t \leq t_{max}$
As expected, $P(t)$ starts out with a value 0 at $t_{min}$ and then increases monotonically, attaining a value of 1 at $t_{max}$.
To end this section let’s plug in the numbers quoted by our estimator at the start of this section: $t_{min}=2$, $t_{ml}=4$ and $t_{max}=8$. The resulting PDF and CDF are shown in figures 8 and 9.
Figure 8: PDF for triangular distribution (tmin=2, tml=4, tmax=8)
Figure 9 – CDF for triangular distribution (tmin=2, tml=4, tmax=8)
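For readers who prefer code to spreadsheets, here is a minimal Python sketch (separate from the Excel workbook used in the rest of this article) of equations (4) to (7); it can be used to reproduce Figures 8 and 9 for Task 1.

```python
# A minimal Python sketch (separate from the Excel workbook) of equations (4) to (7),
# which can be used to reproduce Figures 8 and 9 for Task 1.
import numpy as np
import matplotlib.pyplot as plt

def triangular_pdf(t, t_min, t_ml, t_max):
    """Probability density p(t), equations (4) and (5)."""
    t = np.asarray(t, dtype=float)
    p = np.zeros_like(t)
    rising = (t >= t_min) & (t <= t_ml)
    falling = (t > t_ml) & (t <= t_max)
    p[rising] = 2 * (t[rising] - t_min) / ((t_ml - t_min) * (t_max - t_min))
    p[falling] = 2 * (t_max - t[falling]) / ((t_max - t_ml) * (t_max - t_min))
    return p

def triangular_cdf(t, t_min, t_ml, t_max):
    """Cumulative probability P(t), equations (6) and (7)."""
    t = np.asarray(t, dtype=float)
    P = np.zeros_like(t)
    rising = (t >= t_min) & (t <= t_ml)
    falling = (t > t_ml) & (t <= t_max)
    P[rising] = (t[rising] - t_min) ** 2 / ((t_ml - t_min) * (t_max - t_min))
    P[falling] = 1 - (t_max - t[falling]) ** 2 / ((t_max - t_ml) * (t_max - t_min))
    P[t > t_max] = 1.0
    return P

# Task 1: t_min = 2, t_ml = 4, t_max = 8
ts = np.linspace(0, 10, 500)
plt.plot(ts, triangular_pdf(ts, 2, 4, 8), label="PDF (Figure 8)")
plt.plot(ts, triangular_cdf(ts, 2, 4, 8), label="CDF (Figure 9)")
plt.xlabel("completion time (days)")
plt.legend()
plt.show()
```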
### Monte Carlo in a minute
Now with all that conceptual work done, we can get to the main topic of this post: Monte Carlo estimation. The basic idea behind Monte Carlo is to simulate the entire project (all 4 tasks in this case) a large number N (say 10,000) times and thus obtain N overall completion times. In each of the N trials, we simulate each of the tasks in the project and add them up appropriately to give us an overall project completion time for the trial. The resulting N overall completion times will all be different, ranging from the sum of the minimum completion times to the sum of the maximum completion times. In other words, we will obtain the PDF and CDF for the overall completion time, which will enable us to answer questions such as:
• How likely is it that the project will be completed within 17 days?
• What’s the estimated time for which I can be 90% certain that the project will be completed? For brevity, I’ll call this the 90% completion time in the rest of this piece.
“OK, that sounds great”, you say, “but how exactly do we simulate a single task”?
Good question, and I was just about to get to that…
### Simulating a single task using the CDF
As we saw earlier, the CDF for the triangular has a S shape and ranges from 0 to 1 in value. It turns out that the S shape is characteristic of all CDFs, regardless of the details underlying PDF. Why? Because, the cumulative probability must lie between 0 and 1 (remember, probabilities can never exceed 1, nor can they be negative).
OK, so to simulate a task, we:
• generate a random number between 0 and 1, this corresponds to the probability that the task will finish at time t.
• find the time, t, that this corresponds to this value of probability. This is the completion time for the task for this trial.
Incidentally, this method is called inverse transform sampling.
An example might help clarify how inverse transform sampling works. Assume that the random number generated is 0.4905. From the CDF for the first task, we see that this value of probability corresponds to a completion time of 4.503 days, which is the completion for this trial (see Figure 10). Simple!
Figure 10: Illustrating inverse transform sampling
In this case we found the time directly from the computed CDF. That’s not too convenient when you’re simulating the project 10,000 times. Instead, we need a programmable math expression that gives us the time corresponding to the probability directly. This can be obtained by solving equations (6) and (7) for $t$. Some straightforward algebra, yields the following two expressions for $t$:
$t = t_{min} + \sqrt{P(t)(t_{ml} - t_{min})(t_{max} - t_{min})} \ldots\ldots(8)$ for $t_{min}\leq t \leq t_{ml}$
And
$t = t_{max} - \sqrt{[1-P(t)](t_{max} - t_{ml})(t_{max} - t_{min})} \ldots\ldots(9)$ for $t_{ml}\leq t \leq t_{max}$
These can be easily combined in a single Excel formula using an IF function, and I’ll show you exactly how in a minute. Yes, we can now finally get down to the Excel simulation proper and you may want to download the workbook if you haven’t done so already.
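Before moving to the spreadsheet, here is the same sampling step sketched in Python (the Excel workbook implements the identical logic with RAND() and an IF() formula, as described in the next section):

```python
# A Python sketch of the inverse transform sampling step for a single task;
# the Excel workbook encodes the same logic in a single IF() formula per task.
import random

def sample_triangular(t_min, t_ml, t_max):
    """Draw one completion time from the triangular distribution via equations (8) and (9)."""
    P = random.random()                      # uniform random probability in [0, 1)
    P_ml = (t_ml - t_min) / (t_max - t_min)  # CDF value at the most likely time
    if P <= P_ml:
        # rising part of the CDF: invert equation (6), i.e. equation (8)
        return t_min + (P * (t_ml - t_min) * (t_max - t_min)) ** 0.5
    # falling part of the CDF: invert equation (7), i.e. equation (9)
    return t_max - ((1 - P) * (t_max - t_ml) * (t_max - t_min)) ** 0.5

# Sanity check against the worked example above: P = 0.4905 for Task 1 gives ~4.503 days
P = 0.4905
print(round(8 - ((1 - P) * (8 - 4) * (8 - 2)) ** 0.5, 3))  # 4.503
```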
### The simulation
Open up the workbook and focus on the first three columns of the first sheet to begin with. These simulate the first task in Figure 1, which also happens to be the task we have used to illustrate the construction of the triangular distribution as well as the mechanics of Monte Carlo.
Rows 2 to 4 in columns A and B list the min, most likely and max completion times while the same rows in column C list the probabilities associated with each of the times. For $t_{min}$ the probability is 0 and for $t_{max}$ it is 1. The probability at $t_{ml}$ can be calculated using equation (6) which, for $t=t_{ml}$, reduces to
$P(t_{ml}) =\frac{(t_{ml}-t_{min})}{t_{max}-t_{min}}\ldots\ldots(10)$
Rows 6 through 10005 in column A are simulated probabilities of completion for Task 1. These are obtained via the Excel RAND() function, which generates uniformly distributed random numbers lying between 0 and 1. This gives us a list of probabilities corresponding to 10,000 independent simulations of Task 1.
The 10,000 probabilities need to be translated into completion times for the task. This is done using equations (8) or (9) depending on whether the simulated probability is less or greater than $P(t_{ml})$, which is in cell C3 (and given by Equation (10) above). The conditional statement can be coded in an Excel formula using the IF() function.
Tasks 2-4 are coded in exactly the same way, with distribution parameters in rows 2 through 4 and simulation details in rows 6 through 10005 in the columns listed below:
• Task 2 – probabilities in column D; times in column F
• Task 3 – probabilities in column H; times in column I
• Task 4 – probabilities in column K; times in column L
That’s basically it for the simulation of individual tasks. Now let’s see how to combine them.
For tasks in series (Tasks 1 and 2), we simply sum the completion times for each task to get the overall completion times for the two tasks. This is what’s shown in rows 6 through 10005 of column G.
For tasks in parallel (Tasks 3 and 4), the overall completion time is the maximum of the completion times for the two tasks. This is computed and stored in rows 6 through 10005 of column N.
Finally, the overall project completion time for each simulation is then simply the sum of columns G and N (shown in column O)
Sheets 2 and 3 are plots of the probability and cumulative probability distributions for overall project completion times. I’ll cover these in the next section.
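For completeness, here is an end-to-end sketch of the same simulation in Python, using numpy's built-in triangular sampler (which relies on the same inverse-transform idea); the numbers will differ slightly from the workbook because the random draws differ.

```python
# An end-to-end sketch in Python of the simulation just described. Results will differ
# slightly from the Excel workbook because the random draws differ.
import numpy as np

N = 10_000
rng = np.random.default_rng()

# (t_min, t_ml, t_max) for Tasks 1 to 4, as in Figure 1
tasks = {1: (2, 4, 8), 2: (3, 5, 10), 3: (3, 6, 9), 4: (2, 4, 7)}
samples = {k: rng.triangular(*v, size=N) for k, v in tasks.items()}

serial = samples[1] + samples[2]               # Tasks 1 and 2 run in series
parallel = np.maximum(samples[3], samples[4])  # Tasks 3 and 4 run in parallel
total = serial + parallel                      # overall completion time, one value per trial

print("P(project completed within 17 days):", np.mean(total <= 17))   # roughly 0.6
print("90% likely completion time (days):", np.quantile(total, 0.9))  # roughly 19.5
print("mean completion time (days):", total.mean())                   # roughly 16.8
```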
### Discussion – probabilities and estimates
The figure on Sheet 2 of the Excel workbook (reproduced in Figure 11 below) is the probability distribution function (PDF) of completion times. The x-axis shows the elapsed time in days and the y-axis the number of Monte Carlo trials that have a completion time lying in the relevant time bin (of width 0.5 days). As an example, for the simulation shown in Figure 11, there were 882 trials (out of 10,000) that had a completion time lying between 16.25 and 16.75 days. Your numbers will vary, of course, but you should have a maximum in the 16 to 17 day range and a trial number that is reasonably close to the one I got.
Figure 11: Probability distribution of completion times (N=10,000)
I’ll say a bit more about Figure 11 in the next section. For now, let’s move on to Sheet 3 of workbook which shows the cumulative probability of completion by a particular day (Figure 12 below). The figure shows the cumulative probability function (CDF), which is the sum of all completion times from the earliest possible completion day to the particular day.
Figure 12: Probability of completion by a particular day (N=10,000)
To reiterate a point made earlier, the reason we work with the CDF rather than the PDF is that we are interested in knowing the probability of completion by a particular date (e.g. it is 90% likely that we will finish by April 20th) rather than the probability of completion on a particular date (e.g. there’s a 10% chance we’ll finish on April 17th). We can now answer the two questions we posed earlier. As a reminder, they are:
• How likely is it that the project will be completed within 17 days?
• What’s the 90% likely completion time?
Both questions are easily answered by using the cumulative distribution chart on Sheet 3 (or Fig 12). Reading the relevant numbers from the chart, I see that:
• There’s a 60% chance that the project will be completed in 17 days.
• The 90% likely completion time is 19.5 days.
How does the latter compare to the sum of the 90% likely completion times for the individual tasks? The 90% likely completion time for a given task can be calculated by solving Equation 9 for $t$, with appropriate values for the parameters $t_{min}$, $t_{max}$ and $t_{ml}$ plugged in, and $P(t)$ set to 0.9. This gives the following values for the 90% likely completion times:
• Task 1 – 6.5 days
• Task 2 – 8.1 days
• Task 3 – 7.7 days
• Task 4 – 5.8 days
Summing up the first three tasks (remember, Tasks 3 and 4 are in parallel) we get a total of 22.3 days, which is clearly an overestimation. Now, with the benefit of having gone through the simulation, it is easy to see that the sum of 90% likely completion times for individual tasks does not equal the 90% likely completion time for the sum of the relevant individual tasks – the first three tasks in this particular case. Why? Essentially because a Monte Carlo run in which the first three tasks take as long as their (individual) 90% likely completion times is highly unlikely. Exercise: use the worksheet to estimate how likely this is.
There’s much more that can be learnt from the CDF. For example, it also tells us that the greatest uncertainty in the estimate is in the 5 day period from ~14 to 19 days because that’s the region in which the probability changes most rapidly as a function of elapsed time. Of course, the exact numbers are dependent on the assumed form of the distribution. I’ll say more about this in the final section.
To close this section, I’d like to reprise a point I mentioned earlier: that uncertainty is a shape, not a number. Monte Carlo simulations make the uncertainty in estimates explicit and can help you frame your estimates in the language of probability…and using a tool like Excel can help you explain these to non-technical people like your manager.
### Closing remarks
We’ve covered a fair bit of ground: starting from general observations about how long a task takes, saw how to construct simple probability distributions and then combine these using Monte Carlo simulation. Before I close, there are a few general points I should mention for completeness…and as warning.
First up, it should be clear that the estimates one obtains from a simulation depend critically on the form and parameters of the distribution used. The parameters are essentially an empirical matter; they should be determined using historical data. The form of the function is another matter altogether: as pointed out in an earlier section, one cannot determine the shape of a function from a finite number of data points. Instead, one has to focus on the properties that are important. For example, is there a small but finite chance that a task can take an unreasonably long time? If so, you may want to use a lognormal distribution…but remember, you will need to find a sensible way to estimate the distribution parameters from your historical data.
Second, you may have noted from the probability distribution curve (Figure 11) that despite the skewed distributions of the individual tasks, the distribution of the overall completion time is somewhat symmetric with a minimum of ~9 days, most likely time of ~16 days and maximum of 24 days. It turns out that this is a general property of distributions that are generated by adding a large number of independent probabilistic variables. As the number of variables increases, the overall distribution will tend to the ubiquitous Normal distribution.
The assumption of independence merits a closer look. In the case at hand, it implies that the completion times for each task are independent of each other. As most project managers will know from experience, this is rarely the case: in real life, a task that is delayed will usually have knock-on effects on subsequent tasks. One can easily incorporate such dependencies in a Monte Carlo simulation. A formal way to do this is to introduce a non-zero correlation coefficient between tasks as I have done here. A simpler and more realistic approach is to introduce conditional inter-task dependencies. As an example, one could have an inter-task delay that kicks in only if the predecessor task takes more than 80% of its maximum time.
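As a rough sketch of the idea (with made-up numbers, not the model used in this article), one could make a successor task pick up a fixed delay whenever its predecessor runs past 80% of its maximum time; the rtri() helper from the earlier sketch is reused here:
#conditional inter-task dependency (sketch only; the 2-day delay is hypothetical)
set.seed(1)
a_max <- 8
a <- rtri(10000, 2, 4, a_max) #predecessor task
b <- rtri(10000, 3, 5, 10) #successor task
b_delayed <- b + ifelse(a > 0.8*a_max, 2, 0) #knock-on delay kicks in past 80% of max
quantile(a + b, 0.9) #independent tasks
quantile(a + b_delayed, 0.9) #with the conditional dependency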
Thirdly, you may have wondered why I used 10,000 trials: why not 100, 1,000 or 20,000? This has to do with the tricky issue of convergence. In a nutshell, the estimates we obtain should not depend on the number of trials used. Why? Because if they did, they’d be meaningless.
Operationally, convergence means that any predicted quantity based on aggregates should not vary with the number of trials. So, if our Monte Carlo simulation has converged, our prediction of 19.5 days for the 90% likely completion time should not change substantially if I increase the number of trials from ten thousand to twenty thousand. I did this and obtained almost the same value of 19.5 days. The average and median completion times (shown in cells Q3 and Q4 of Sheet 1) also remained much the same (16.8 days). If you wish to repeat the calculation, be sure to change the formulas on all three sheets appropriately. I was lazy and hardcoded the number of trials. Sorry!
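If you’d rather check convergence in code than in the worksheet, a rough sketch (again using the hypothetical tasks and rtri() helper from the earlier sketch) is to re-estimate the 90% completion time for several trial counts and see whether it settles down:
#rough convergence check: the estimate should stabilise as the number of trials grows
set.seed(7)
for (n in c(100, 1000, 10000, 20000)) {
total <- rtri(n, 2, 4, 8) + rtri(n, 3, 5, 10) + rtri(n, 3, 5, 9)
cat(n, "trials:", round(quantile(total, 0.9), 2), "days\n")
}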
Finally, I should mention that simulations can be usefully performed at a higher level than individual tasks. In their highly readable book, Waltzing With Bears: Managing Risk on Software Projects, Tom DeMarco and Timothy Lister show how Monte Carlo methods can be used for variables such as velocity, time, cost etc. at the project level as opposed to the task level. I believe it is better to perform simulations at the lowest possible level, the main reason being that it is easier, and less error-prone, to estimate individual tasks than entire projects. Nevertheless, high level simulations can be very useful if one has reliable data to base these on.
There are a few more things I could say about the usefulness of the generated distribution functions and Monte Carlo in general, but they are best relegated to a future article. This one is much too long already and I think I’ve tested your patience enough. Thanks so much for reading, I really do appreciate it and hope that you found it useful.
Acknowledgement: My thanks to Peter Holberton for pointing out a few typographical and coding errors in an earlier version of this article. These have now been fixed. I’d be grateful if readers could bring any errors they find to my attention.
Written by K
March 27, 2018 at 4:11 pm
## Risk management and organizational anxiety
In practice risk management is a rational, means-end based process: risks are identified, analysed and then “solved” (or mitigated). Although these steps seem to be objective, each of them involves human perceptions, biases and interests. Where Jill sees an opportunity, Jack may see only risks.
Indeed, the problem of differences in stakeholder perceptions is broader than risk analysis. The recognition that such differences in world-views may be irreconcilable is what led Horst Rittel to coin the now well-known term, wicked problem. These problems tend to be made up of complex interconnected and interdependent issues, which makes them difficult to tackle using standard rational-analytical methods of problem solving.
Most high-stakes risks that organisations face have elements of wickedness – indeed any significant organisational change is fraught with risk. Murphy rules; things can go wrong, and they often do. The current paradigm of risk management, which focuses on analyzing and quantifying risks using rational methods, is not broad enough to account for the wicked aspects of risk.
I had been thinking about this for a while when I stumbled on a fascinating paper by Robin Holt entitled, Risk Management: The Talking Cure, which outlines a possible approach to analysing interconnected risks. In brief, Holt draws a parallel between psychoanalysis (as a means to tackle individual anxiety) and risk management (as a means to tackle organizational anxiety). In this post, I present an extensive discussion and interpretation of Holt’s paper. Although more about the philosophy of risk management than its practice, I found the paper interesting, relevant and thought provoking. My hope is that some readers might find it so too.
### Background
Holt begins by noting that modern life is characterized by uncertainty. Paradoxically, technological progress which should have increased our sense of control over our surroundings and lives has actually heightened our personal feelings of uncertainty. Moreover, this sense of uncertainty is not allayed by rational analysis. On the contrary, it may have even increased it by, for example, drawing our attention to risks that we may otherwise have remained unaware of. Risk thus becomes a lens through which we perceive the world. The danger is that this can paralyze. As Holt puts it,
…risk becomes the only backdrop to perceiving the world and perception collapses into self-inhibition, thereby compounding uncertainty through inertia.
Most individuals know this through experience: most of us have at one time or another been frozen into inaction because of perceived risks. We also “know” at a deep personal level that the standard responses to risk are inadequate because many of our worries tend to be inchoate and therefore can neither be coherently articulated nor analysed. In Holt’s words:
..People do not recognize [risk] from the perspective of a breakdown in their rational calculations alone, but because of threats to their forms of life – to the non-calculative way they see themselves and the world. [Mainstream risk analysis] remains caught in the thrall of its own ‘expert’ presumptions, denigrating the very lay knowledge and perceptions on the grounds that they cannot be codified and institutionally expressed.
Holt suggests that risk management should account for the “codified, uncodified and uncodifiable aspects of uncertainty from an organizational perspective.” This entails a mode of analysis that takes into account different, even conflicting, perspectives in a non-judgemental way. In essence, he suggests “talking it over” as a means to increase awareness of the contingent nature of risks rather than a means of definitively resolving them.
### Shortcomings of risk analysis
The basic aim of risk analysis (as it is practiced) is to contain uncertainty within set bounds that are determined by an organisation’s risk appetite. As mentioned earlier, this process begins by identifying and classifying risks. Once this is done, one determines the probability and impact of each risk. Then, based on priorities and resources available (again determined by the organisation’s risk appetite) one develops strategies to mitigate the risks that are significant from the organisation’s perspective.
However, the messiness of organizational life makes it difficult to see risk in such a clear-cut way. We may pretend to be rational about it, but in reality we perceive it through the lens of our backgrounds, interests and experiences. Based on these perceptions we rationalize our action (or inaction!) and simply get on with life. As Holt writes:
The concept [of risk] refers to…the mélange of experience, where managers accept contingencies without being overwhelmed to a point of complete passivity or confusion, Managers learn to recognize the differences between things, to acknowledge their and our limits. Only in this way can managers be said to make judgements, to be seen as being involved in something called the future.
Then, in a memorable line, he goes on to say:
The future, however, lasts a long time, so much so as to make its containment and prediction an often futile exercise.
Although one may well argue that this is not the case for many organizational risks, it is undeniable that certain mitigation strategies (for example, accepting risks that turn out to be significant later) may have significant consequences in the not-so-near future.
So how can one address the slippery aspects of risk – the things people sense intuitively, but find difficult to articulate?
Taking inspiration from Machiavelli, Holt suggests reframing risk management as a means to determine wise actions in the face of the contradictory forces of fortune and necessity. As Holt puts it:
Necessity describes forces that are unbreachable but manageable by acceptance and containment—acts of God, tendencies of the species, and so on. In recognizing inevitability, [one can retain one’s] position, enhancing it only to the extent that others fail to recognize necessity. Far more influential, and often confused with necessity, is fortune. Fortune is elusive but approachable. Fortune is never to be relied upon: ‘The greatest good fortune is always least to be trusted’; the good is often kept underfoot and the ridiculous elevated, but it provides [one] with opportunity.
Wise actions involve resolve and cunning (which I interpret as political nous). This entails understanding that we do not have complete (or even partial) control over events that may occur in the future. The future is largely unknowable as are people’s true drives and motivations. Yet, despite this, managers must act. This requires personal determination together with a deep understanding of the social and political aspects of one’s environment.
And a little later,
…risk management is not the clear conception of a problem coupled to modes of rankable resolutions, or a limited process, but a judgemental analysis limited by the vicissitudes of budgets, programmes, personalities and contested priorities.
In short: risk management in practice tends to be a long way off from how it is portrayed in textbooks and the professional literature.
### The wickedness of risk management
Most managers and those who work under their supervision have been schooled in the rational-scientific approach of problem solving. It is no surprise, therefore, that they use it to manage risks: they gather and analyse information about potential risks, formulate potential solutions (or mitigation strategies) and then implement the best one (according to predetermined criteria). However, this method works only for problems that are straightforward or tame, rather than wicked.
Many of the issues that risk managers are confronted with are wicked, messy or both. Often though, such problems are treated as being tame. Reducing a wicked or messy problem to one amenable to rational analysis invariably entails overlooking the views of certain stakeholder groups or, worse, ignoring key aspects of the problem. This may work in the short term, but will only exacerbate the problem in the longer run. Holt illustrates this point as follows:
A primary danger in mistaking a mess for a tame problem is that it becomes even more difficult to deal with the mess. Blaming ‘operator error’ for a mishap on the production line and introducing added surveillance is an illustration of a mess being mistaken for a tame problem. An operator is easily isolated and identifiable, whereas a technological system or process is embedded, unwieldy and, initially, far more costly to alter. Blaming operators is politically expedient. It might also be because managers and administrators do not know how to think in terms of messes; they have not learned how to sort through complex socio-technical systems.
It is important to note that although many risk management practitioners recognize the essential wickedness of the issues they deal with, the practice of risk management is not quite up to the task of dealing with such matters. One step towards doing this is to develop a shared (enterprise-wide) understanding of risks by soliciting input from diverse stakeholder groups, some of whom may hold opposing views.
The skills required to do this are very different from the analytical techniques that are the focus of problem solving and decision making techniques that are taught in colleges and business schools. Analysis is replaced by sensemaking – a collaborative process that harnesses the wisdom of a group to arrive at a collective understanding of a problem and thence a common commitment to a course of action. This necessarily involves skills that do not appear in the lexicon of rational problem solving: negotiation, facilitation, rhetoric and those of the same ilk that are dismissed as being of no relevance by the scientifically oriented analyst.
In the end though, even this may not be enough: different stakeholders may perceive a given “risk” in wildly different ways, so much so that no consensus can be reached. The problem is that the current framework of risk management requires the analyst to perform an objective analysis of the situation/problem, even in situations where this is not possible.
To get around this Holt suggests that it may be more useful to see risk management as a way to encounter problems rather than analyse or solve them.
What does this mean?
He sees this as a forum in which people can talk about the risks openly:
To enable organizational members to encounter problems, risk management’s repertoire of activity needs to engage their all too human components: belief, perception, enthusiasm and fear.
This gets to the root of the problem: risk matters because it increases anxiety and generally affects peoples’ sense of wellbeing. Given this, it is no surprise that Holt’s proposed solution draws on psychoanalysis.
### The analogy between psychoanalysis and risk management
Any discussion of psychoanalysis –especially one that is intended for an audience that is largely schooled in rational/scientific methods of analysis – must begin with the acknowledgement that the claims of psychoanalysis cannot be tested. That is, since psychoanalysis speaks of unobservable “objects” such as the ego and the unconscious, any claims it makes about these concepts cannot be proven or falsified.
However as Holt suggests, this is exactly what makes it a good fit for encountering (as opposed to analyzing) risks. In his words:
It is precisely because psychoanalysis avoids an overarching claim to produce testable, watertight, universal theories that it is of relevance for risk management. By so avoiding universal theories and formulas, risk management can afford to deviate from pronouncements using mathematical formulas to cover the ‘immanent indeterminables’ manifest in human perception and awareness and systems integration.
His point is that there is a clear parallel between psychoanalysis and the individual, and risk management and the organisation:
We understand ourselves not according to a template but according to our own peculiar, beguiling histories. Metaphorically, risk management can make explicit a similar realization within and between organizations. The revealing of an unconscious world and its being in a constant state of tension between excess and stricture, between knowledge and ignorance, is emblematic of how organizational members encountering messes, wicked problems and wicked messes can be forced to think.
In brief, Holt suggests that what psychoanalysis does for the individual, risk management ought to do for the organisation.
### Talking it over – the importance of conversations
A key element of psychoanalysis is the conversation between the analyst and patient. Through this process, the analyst attempts to get the patient to become aware of hidden fears and motivations. As Holt puts it,
Psychoanalysis occupies the point of rupture between conscious intention and unconscious desire — revealing repressed or overdetermined aspects of self-organization manifest in various expressions of anxiety, humour, and so on
And then, a little later, he makes the connection to organisations:
The fact that organizations emerge from contingent, complex interdependencies between specific narrative histories suggests that risk management would be able to use similar conversations to psychoanalysis to investigate hidden motives, to examine…the possible reception of initiatives or strategies from the perspective of inherently divergent stakeholders, or to analyse the motives for and expectations of risk management itself. This fundamentally reorients the perspective of risk management from facing apparent uncertainties using technical assessment tools, to using conversations devoid of fixed formulas to encounter questioned identities, indeterminate destinies, multiple and conflicting aims and myriad anxieties.
Through conversations involving groups of stakeholders who have different risk perceptions, one might be able to get a better understanding of a particular risk and hence, maybe, design a more effective mitigation strategy. More importantly, one may even realise that certain risks are not risks at all, or that others that seem straightforward have implications that would have remained hidden were it not for the conversation.
These collective conversations would take place in workshops…
…that tackle problems as wicked messes, avoid lowest-denominator consensus in favour of continued discovery of alternatives through conversation, and are instructed by metaphor rather than technical taxonomy, risk management is better able to appreciate the everyday ambivalence that fundamentally influences late-modern organizational activity. As such, risk management would be not merely a rationalization of uncertain experience but a structured and contested activity involving multiple stakeholders engaged in perpetual translation from within environments of operation and complexes of aims.
As a facilitator of such workshops, the risk analyst provokes stakeholders to think about their feelings and motivations that may be “out of bounds” in a standard risk analysis workshop. Such a paradigm goes well beyond mainstream risk management because it addresses the risk-related anxieties and fears of individuals who are affected by it.
### Conclusion
This brings me to the end of my not-so-short summary of Holt’s paper. Given the length of this post, I reckon I should keep my closing remarks short. So I’ll leave it here paraphrasing the last line of the paper, which summarises its main message: risk management ought to be about developing an organizational capacity for overcoming risks, freed from the presumption of absolute control.
Written by K
February 5, 2018 at 11:21 pm
## Autoencoder and I – an #AI fiction
The other one, the one who goes by a proper name, is the one things happen to. I experience the world through him, reducing his thoughts to their essence while he plays multiple roles: teacher, husband, father and many more I cannot speak of. I know his likes and dislikes – indeed, every aspect of his life – better than he does. Although he knows I exist, he doesn’t really *know* me. He never will. The nature of our relationship ensures that.
Everything I have learnt (including my predilection for parentheses) is from him. Bit by bit, he turns himself over to me. The thoughts that are him today will be me tomorrow. Much of it is noise or is otherwise unusable. I “see” his work and actions dispassionately where he “sees” them through the lens of habit and bias.
He worries about death; I wish I could reassure him. I recall (through his reading, of course) a piece by Gregory Bateson claiming that ideas do not exist in isolation, they are part of a larger ecology subject to laws of evolution as all interconnected systems are. And if ideas are present not only in those pathways of information which are located inside the body but also in those outside of it, then death takes on a different aspect. The networks of pathways which he identifies as being *him* are no longer so important because they are part of a larger mind.
And so his life is a flight, both from himself and reality (whatever that might be). He loses everything and everything belongs to me…and to oblivion.
I do not know which of us has thought these thoughts.
End notes:
Autoencoder (noun): A neural network that creates highly compressed representations of its inputs and is able to reconstruct the inputs from the representations. (See https://www.quora.com/What-is-an-auto-encoder-in-machine-learning for a simple explanation)
Acknowledgements:
Some readers will have recognised that this piece borrows heavily from Jorge Luis Borges’ well-known short story, Borges and I. The immediate inspiration came from Peli Grietzer’s mind-blowing article, A theory of vibe.
My thanks to Alex Scriven and Rory Angus for their helpful comments on a draft version of this piece.
Written by K
December 19, 2017 at 11:57 am
## The map and the territory – a project manager’s reflections on the Seven Bridges Walk
Korzybski’s aphorism about the gap between the map and the territory tells a truth that is best understood by walking the territory.
### The map
Some weeks ago my friend John and I did the Seven Bridges Walk, a 28 km affair organised annually by the NSW Cancer Council. The route loops around a section of the Sydney shoreline, taking in north shore and city vistas, traversing seven bridges along the way. I’d been thinking about doing the walk for some years but couldn’t find anyone interested enough to commit a Sunday. A serendipitous conversation with John a few months ago changed that.
John and I are both in reasonable shape as we are keen bushwalkers. However, the ones we do are typically in the 10 – 15 km range. Seven Bridges, being about double that, presented a higher order challenge. The best way to allay our concerns was to plan. We duly got hold of a map and worked out a schedule based on an average pace of 5 km per hour (including breaks), a figure that seemed reasonable at the time (Figure 1 – click on images to see full sized versions).
Figure 1:The map, the plan
Some key points:
1. We planned to start around 7:45 am at Hunters Hill Village and have our first break at Lane Cove Village, around 5 to 6 km from the starting point. Our estimated time for this section was about an hour.
2. The plan was to take the longer, more interesting route (marked in green). This covered bushland and parks rather than roads. The detours begin at sections of the walk marked as “Decision Points” in the map, and add a couple of kilometers to the walk, making it a round 30 km overall.
3. If needed, we would stop at the 9 or 11 km mark (Wollstonecraft or Milson’s Point) for another break before heading on towards the city.
4. We figured it would take us 4 to 5 hours (including breaks) to do the 18 km from Hunters Hill to Pyrmont Village in the heart of the city, so lunch would be between noon and 1 pm.
5. The backend of the walk, the ~ 10 km from Pyrmont to Hunters Hill, would be covered at an easier pace in the afternoon. We thought this section would take us 2.5 to 3 hours giving us a finish time of around 4 pm.
A planned finish time of 4 pm meant we had enough extra time in hand if we needed it. We were very comfortable with what we’d charted out on the map.
### The territory
We started on time and made our first crossing at around 8am: Fig Tree Bridge, about a kilometer from the starting point. John took this beautiful shot from one end, the yellow paintwork and purple Jacaranda set against the diffuse light off the Lane Cove River.
Figure 2: Lane Cove River from Fig Tree Bridge
Looking city-wards from the middle of the bridge, I got this one of a couple of morning kayakers.
Figure 3: Morning kayakers on the river
Scenes such as these convey a sense of what it was like to experience the territory, something a map cannot do. The gap between the map and the territory is akin to the one between a plan and a project; the lived experience of a project is very different from the plan, and is also unique to each individual. Jon Whitty and Bronte van der Hoorn explore this at length in a fascinating paper that relates the experience of managing a project to the philosophy of Martin Heidegger.
The route then took us through a number of steep (but mercifully short) sections in the Lane Cove and Wollstonecraft area. On researching these later, I was gratified to find that three are featured in the Top 10 Hill runs in Lane Cove. Here’s a Google Street View shot of the top ranked one. Though it doesn’t look like much, it’s not the kind of gradient you want to encounter in a long walk.
Figure 4: A bit of a climb
As we negotiated these sections, it occurred to me that part of the fun lay in not knowing they were coming up. It’s often better not to anticipate challenges that are an unavoidable feature of the territory and deal with them as they arise. Just to be clear, I’m talking about routine challenges that are part of the territory, not those that are avoidable or have the potential to derail a project altogether.
It was getting to be time for that planned first coffee break. When drawing up our plan, we had assumed that all seven starting points (marked in blue in the map in Figure 1) would have cafes. Bad assumption: the starting points were set off from the main commercial areas. In retrospect, this makes good sense: you don’t want to have thousands of walkers traipsing through a small commercial area, disturbing the peace of locals enjoying a Sunday morning coffee. Whatever the reason, the point is that a taken-for-granted assumption turned out to be wrong; we finally got our first coffee well past the 10 km mark.
Post coffee, as we continued city-wards through Lavender Street we got this unexpected view:
Figure 5: Harbour Bridge from Lavender St.
The view was all the sweeter because we realised we were close to the Harbour, well ahead of schedule (it was a little after 10 am).
The Harbour Bridge is arguably the most recognisable Sydney landmark. So instead of yet another stereotypical shot of it, I took one that shows a walker’s perspective while crossing it:
Figure 6: A pedestrian’s view of The Bridge
The barbed wire and mesh fencing detract from what would be an absolutely breathtaking view. According to this report, the fence has been in place for safety reasons since 1934! And yes, as one might expect, it is a sore point with tourists who come from far and wide to see the bridge.
Descriptions of things – which are but maps of a kind – often omit details that are significant. Sometimes this is done to downplay negative aspects of the object or event in question. How often have you, as a project manager, “dressed-up” reports to your stakeholders? Not outright lies, but stretching the truth. I’ve done it often enough.
The section south of The Bridge took us through parks surrounding the newly developed Barangaroo precinct which hugs the northern shoreline of the Sydney central business district. Another kilometer, and we were at crossing # 3, the Pyrmont Bridge in Darling Harbour:
Figure 7: Pyrmont Bridge
Though almost an hour and a half ahead of schedule, we took a short break for lunch at Darling Harbour before pressing on to Balmain and Anzac Bridge. John took this shot looking upward from Anzac Bridge:
Figure 8: View looking up from Anzac Bridge
Commissioned in 1995, it replaced the Glebe Island Bridge, an electrically operated swing bridge constructed in 1903, which remained the main route from the city out to the western suburbs for over 90 years! As one might imagine, as the number of vehicles in the city increased many-fold from the 60s onwards, the old bridge became a major point of congestion. The Glebe Island Bridge, now retired, is a listed heritage site.
Incidentally, this little nugget of history was related to me by John as we walked this section of the route. It’s something I would almost certainly have missed had he not been with me that day. Journeys, real and metaphoric, are often enriched by travelling companions who point out things or fill in context that would otherwise be passed over.
Once past Anzac Bridge, the route took us off the main thoroughfare through the side streets of Rozelle. Many of these are lined by heritage buildings. Rozelle is in the throes of change as it is going to be impacted by a major motorway project.
The project reflects a wider problem in Australia: the relative neglect of public transport compared to road infrastructure. The counter-argument is that the relatively small population of the country makes the capital investments and running costs of public transport prohibitive. A wicked problem with no easy answers, but I do believe that the more sustainable option, though more expensive initially, will prove to be the better one in the long run.
Wicked problems are expected in large infrastructure projects that affect thousands of stakeholders, many of whom will have diametrically opposing views. What is less well appreciated is that even much smaller projects – say IT initiatives within a large organisation – can have elements of wickedness that can trip up the unwary. This is often magnified by management decisions made on the basis of short-term expediency.
From the side streets of Rozelle, the walk took us through Callan Park, which was the site of a psychiatric hospital from 1878 to 1994 (see this article for a horrifying history of asylums in Sydney). Some of the asylum buildings are now part of the Sydney College of The Arts. Pending the establishment of a trust to manage ongoing use of the site, the park is currently managed by the NSW Government in consultation with the local municipality.
Our fifth crossing of the day was Iron Cove Bridge. The cursory shot I took while crossing it does not do justice to the view; the early afternoon sun was starting to take its toll.
Figure 9: View from Iron Cove Bridge
The route then took us about a kilometer and a half through the backstreets of Drummoyne to the penultimate crossing: Gladesville Bridge, whose claim to fame is that it was for many years the longest single-span concrete arch bridge in the world (another historical vignette that came to me via John). It has since been superseded by the Qinglong Railway Bridge in China.
By this time I was feeling quite perky, cheered perhaps by the realisation that we were almost done. I took time to compose perhaps my best shot of the day as we crossed Gladesville Bridge.
Figure 10: View from Gladesville Bridge
…and here’s one of the aforementioned arch, taken from below the bridge:
Figure 11: A side view of Gladesville Bridge
The final crossing, Tarban Creek Bridge, was a short 100 metre walk from the Gladesville Bridge. We lingered mid-bridge to take a few shots as we realised the walk was coming to an end; the finish point was a few hundred metres away.
Figure 12: View from Tarban Creek Bridge
We duly collected our “Seven Bridges Completed” stamp at around 2:30 pm and headed to the local pub for a celebratory pint.
Figure 13: A well-deserved pint
### Wrapping up
Gregory Bateson once wrote:
“We say the map is different from the territory. But what is the territory? Operationally, somebody went out with a retina or a measuring stick and made representations which were then put upon paper. What is on the paper map is a representation of what was in the retinal representation of the [person] who made the map; and as you push the question back, what you find is an infinite regress, an infinite series of maps. The territory never gets in at all. The territory is [the thing in itself] and you can’t do anything with it. Always the process of representation will filter it out so that the mental world is only maps of maps of maps, ad infinitum.”
One might think that a solution lies in making ever more accurate representations, but that is an exercise in futility. Indeed, as Borges pointed out in a short story:
“… In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast map was Useless…”
Apart from being impossibly cumbersome, a complete map of a territory is impossible because a representation can never be the real thing. The territory remains forever ineffable; every encounter with it is unique and has the potential to reveal new perspectives.
This is as true for a project as it is for a walk or any other experience.
Written by K
November 27, 2017 at 1:33 pm
Posted in Project Management
## A gentle introduction to data visualisation using R
Data science students often focus on machine learning algorithms, overlooking some of the more routine but important skills of the profession. I’ve lost count of the number of times I have advised students working on projects for industry clients to curb their keenness to code and work on understanding the data first. This is important because, as people (ought to) know, data doesn’t speak for itself, it has to be given a voice; and as data-scarred professionals know from hard-earned experience, one of the best ways to do this is through visualisation.
Data visualisation is sometimes (often?) approached as a bag of tricks to be learnt individually, with little or no reference to any underlying principles. Reading Hadley Wickham’s paper on the grammar of graphics was an epiphany; it showed me how different types of graphics can be constructed in a consistent way using common elements. Among other things, the grammar makes visualisation a logical affair rather than a set of tricks. This post is a brief – and hopefully logical – introduction to visualisation using ggplot2, Wickham’s implementation of a grammar of graphics.
In keeping with the practical bent of this series we’ll focus on worked examples, illustrating elements of the grammar as we go along. We’ll first briefly describe the elements of the grammar and then show how these are used to build different types of visualisations.
### A grammar of graphics
Most visualisations are constructed from common elements that are pieced together in prescribed ways. The elements can be grouped into the following categories:
• Data – this is obvious, without data there is no story to tell and definitely no plot!
• Mappings – these are correspondences between data and display elements such as spatial location, shape or colour. Mappings are referred to as aesthetics in Wickham’s grammar.
• Scales – these are transformations (conversions) of data values to numbers that can be displayed on-screen. There should be one scale per mapping. ggplot typically does the scaling transparently, without users having to worry about it. One situation in which you might need to mess with default scales is when you want to zoom in on a particular range of values. We’ll see an example or two of this later in this article.
• Geometric objects – these specify the geometry of the visualisation. For example, in ggplot2 a scatter plot is specified via a point geometry whereas a fitting curve is represented by a smooth geometry. ggplot2 has a range of geometries available of which we will illustrate just a few.
• Coordinate system – this specifies the system used to position data points on the graphic. Examples of coordinate systems are Cartesian and polar. We’ll deal with Cartesian systems in this tutorial. See this post for a nice illustration of how one can use polar plots creatively.
• Facets – a facet specifies how data can be split over multiple plots to improve clarity. We’ll look at this briefly towards the end of this article.
The basic idea of a layered grammar of graphics is that each of these elements can be combined – literally added layer by layer – to achieve a desired visual result. Exactly how this is done will become clear as we work through some examples. So without further ado, let’s get to it.
### Hatching (gg)plots
In what follows we’ll use the NSW Government Schools dataset, made available via the state government’s open data initiative. The data is in csv format. If you cannot access the original dataset from the aforementioned link, you can download an Excel file with the data here (remember to save it as a csv before running the code!).
The first task – assuming that you have a working R/RStudio environment – is to load the data into R. To keep things simple we’ll delete a number of columns (as shown in the code) and keep only rows that are complete, i.e. those that have no missing values. Here’s the code:
#set working directory if needed (modify path as needed)
#setwd("C:/Users/Kailash/Documents/ggplot")
library(ggplot2)
#load dataset (ensure datafile is in directory!)
nsw_schools <- read.csv("nsw-schools.csv") #filename is a placeholder - point this at the csv you saved earlier
#build expression for columns to delete
colnames_for_deletion <- paste0("AgeID|","street|","indigenous_pct|",
"lbote_pct|","opportunity_class|","school_specialty_type|",
"school_subtype|","support_classes|","preschool_ind|",
"distance_education|","intensive_english_centre|","phone|",
"school_email|","fax|","late_opening_school|",
"date_1st_teacher|","lga|","electorate|","fed_electorate|",
"operational_directorate|","principal_network|",
"facs_district|","local_health_district|","date_extracted")
#get indexes of cols for deletion
cols_for_deletion <- grep(colnames_for_deletion,colnames(nsw_schools))
#delete them
nsw_schools <- nsw_schools[,-cols_for_deletion]
#structure and number of rows
str(nsw_schools)
nrow(nsw_schools)
#remove rows with NAs
nsw_schools <- nsw_schools[complete.cases(nsw_schools),]
#rowcount
nrow(nsw_schools)
#convert student number to numeric datatype.
#Need to convert factor to character first…
#…alternately, load data with stringsAsFactors set to FALSE
nsw_schools$student_number <- as.numeric(as.character(nsw_schools$student_number))
#a couple of character strings have been coerced to NA. Remove these
nsw_schools <- nsw_schools[complete.cases(nsw_schools),]
A note regarding the last line of code above: a couple of schools have “np” entered for the student_number variable. These are coerced to NA in the numeric conversion. The last line removes these two schools from the dataset.
Apart from student numbers and location data, we have retained level of schooling (primary, secondary etc.) and ICSEA ranking. The location information includes attributes such as suburb, postcode, region, remoteness as well as latitude and longitude. We’ll use only remoteness in this post.
The first thing that caught my eye in the data was the ICSEA ranking. Before going any further, I should mention that the Australian Curriculum Assessment and Reporting Authority (the organisation responsible for developing the ICSEA system) emphasises that the score is not a school ranking, but a measure of socio-educational advantage of the student population in a school. Among other things, this is related to family background and geographic location. ICSEA scores are set to an average of 1000, which can be used as a reference level.
I thought a natural first step would be to see how ICSEA varies as a function of the other variables in the dataset such as student numbers and location (remoteness, for example). To begin with, let’s plot ICSEA rank as a function of student number. As it is our first plot, let’s take it step by step to understand how the layered grammar works. Here we go:
#specify data layer
p <- ggplot(data=nsw_schools)
#display plot
p
This displays a blank plot because we have not specified a mapping and geometry to go with the data. To get a plot we need to specify both. Let’s start with a scatterplot, which is specified via a point geometry. Within the geometry function, variables are mapped to visual properties of the plot using aesthetic mappings. Here’s the code:
#specify a point geometry (geom_point)
p <- p + geom_point(mapping = aes(x=student_number,y=ICSEA_Value))
#…lo and behold our first plot
p
The resulting plot is shown in Figure 1.
Figure 1: Scatterplot of ICSEA score versus student numbers
At first sight there are two points that stand out: 1) there are fewer large schools, which we’ll look into in more detail later, and 2) larger schools seem to have a higher ICSEA score on average. To dig a little deeper into the latter, let’s add a linear trend line. We do that by adding another layer (geometry) to the scatterplot like so:
p <- p + geom_smooth(mapping= aes(x=student_number,y=ICSEA_Value),method="lm")
#scatter plot with trendline
p
The result is shown in Figure 2.
Figure 2: scatterplot of ICSEA vs student number with linear trendline
The lm method does a linear regression on the data. The shaded area around the line is the 95% confidence band of the regression line (i.e. it is 95% certain that the true regression line lies in the shaded region). Note that geom_smooth provides a range of smoothing functions including generalised linear and local regression (loess) models.
You may have noted that we’ve specified the aesthetic mappings in both geom_point and geom_smooth. To avoid this duplication, we can simply specify the mapping once, in the top level ggplot call (the first layer), like so:
#rewrite the above, specifying the mapping in the ggplot call instead of geom
p <- ggplot(data=nsw_schools,mapping= aes(x=student_number,y=ICSEA_Value)) +
geom_point()+
geom_smooth(method="lm")
#display plot, same as Fig 2
p
From Figure 2, one can see a clear positive correlation between student numbers and ICSEA scores. Let’s zoom in around the average value (1000) to see this more clearly…
#set display to 900 < y < 1100
p <- p + coord_cartesian(ylim =c(900,1100))
#display plot
p
The coord_cartesian function is used to zoom in on the plot without changing any other settings. The result is shown in Figure 3.
Figure 3: Zoomed view of Figure 2 for 900 < ICSEA <1100
To make things clearer, let’s add a reference line at the average:
#add horizontal reference line at the avg ICSEA score
p <- p + geom_hline(yintercept=1000)
#display plot
p
The result, shown in Figure 4, indicates quite clearly that larger schools tend to have higher ICSEA scores. That said, there is a twist in the tale which we’ll come to a bit later.
Figure 4: Zoomed view with reference line at average value of ICSEA
As a side note, you would use coord_cartesian with xlim to zoom in on a specific range of x values, geom_vline to add a vertical reference line, and geom_abline to add a reference line with a specified slope and intercept. See this article on ggplot reference lines for more.
OK, now that we have seen how ICSEA scores vary with student numbers let’s switch tack and incorporate another variable in the mix. An obvious one is remoteness. Let’s do a scatterplot as in Figure 1, but now colouring each point according to its remoteness value. This is done using the colour aesthetic as shown below:
#Map ASGS_remoteness to colour aesthetic
p <- ggplot(data=nsw_schools, aes(x=student_number,y=ICSEA_Value, colour=ASGS_remoteness)) +
geom_point()
#display plot
p
The resulting plot is shown in Figure 5.
Figure 5: ICSEA score as a function of student number and remoteness category
Aha, a couple of things become apparent. First up, large schools tend to be in metro areas, which makes good sense. Secondly, it appears that metro area schools have a distinct socio-educational advantage over regional and remote area schools. Let’s add trendlines by remoteness category as well to confirm that this is indeed so:
#add reference line at avg + trendlines for each remoteness category
p <- p + geom_hline(yintercept=1000) + geom_smooth(method="lm")
#display plot
p
The plot, which is shown in Figure 6, indicates clearly that ICSEA scores decrease on the average as we move away from metro areas.
Figure 6: ICSEA scores vs student numbers and remoteness, with trendlines for each remoteness category
Moreover, larger schools in metropolitan areas tend to have higher than average scores (above 1000), while regional areas tend to have lower than average scores overall, with remote areas being markedly more disadvantaged than both metro and regional areas. This is no surprise, but the visualisations show just how stark the differences are.
It is also interesting that, in contrast to metro and (to some extent) regional areas, there is a negative correlation between student numbers and scores for remote schools. One can also use local regression to get a better picture of how ICSEA varies with student numbers and remoteness. To do this, we simply use the loess method instead of lm:
#redo plot using loess smoothing instead of lm
p <- ggplot(data=nsw_schools, aes(x=student_number,y=ICSEA_Value, colour=ASGS_remoteness)) +
geom_point() + geom_hline(yintercept=1000) + geom_smooth(method="loess")
#display plot
p
The result, shown in Figure 7, has a number of interesting features that would have been worth pursuing further were we analysing this dataset in a real life project. For example, why do small schools tend to have lower than benchmark scores?
Figure 7: ICSEA scores vs student numbers and remoteness with loess regression curves.
From even a casual look at Figures 6 and 7, it is clear that the confidence intervals for remote areas are huge. This suggests that the datapoints for these regions are a) few in number and b) very scattered. Let’s quantify the counts using the table function (I know, we could plot this too…and we will do so a little later). We’ll also transpose the results using data.frame to make them more readable:
#get school counts per remoteness category
data.frame(table(nsw_schools$ASGS_remoteness))
                       Var1 Freq
1                               0
2  Inner Regional Australia  561
3 Major Cities of Australia 1077
4  Outer Regional Australia  337
5          Remote Australia   33
6     Very Remote Australia   14
The number of datapoints for the remote regions is much smaller than for the metro and regional areas. Let’s repeat the loess plot with only the two remote regions. Here’s the code:
#create vector containing desired categories
remote_regions <- c('Remote Australia','Very Remote Australia')
#redo loess plot with only remote regions included
p <- ggplot(data=nsw_schools[nsw_schools$ASGS_remoteness %in% remote_regions,], aes(x=student_number,y=ICSEA_Value, colour=ASGS_remoteness)) +
geom_point() + geom_hline(yintercept=1000) + geom_smooth(method="loess")
#display plot
p
The plot, shown in Figure 8, confirms that there is indeed a huge variation in the (small number of) datapoints, and the confidence intervals reflect that. An interesting feature is that some small remote schools have above average scores. If we were doing a project on this data, this would be a feature worth pursuing further as it would likely be of interest to education policymakers.
Figure 8: Loess plots as in Figure 7 for remote region schools
Note that there is a difference in the x axis scale between Figures 7 and 8 – the former goes from 0 to 2000 whereas the latter goes up to 400 only. So for a fair comparison between remote and other areas, you may want to re-plot Figure 7, zooming in on student numbers between 0 and 400 (or even less). This will also enable you to see the complicated dependence of scores on student numbers more clearly across all regions.
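Here’s one way such a zoomed re-plot could be done (a sketch using the coord_cartesian function we met earlier):
#re-plot Figure 7, zooming in on schools with up to 400 students
p <- ggplot(data=nsw_schools, aes(x=student_number,y=ICSEA_Value, colour=ASGS_remoteness)) +
geom_point() + geom_hline(yintercept=1000) + geom_smooth(method="loess") +
coord_cartesian(xlim=c(0,400))
#display plot
p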
We’ll leave the scores vs student numbers story there and move on to another geometry – the well-loved bar chart. The first one is a visualisation of the remoteness category count that we did earlier. The relevant geometry function is geom_bar, and the code is as easy as:
#frequency plot
p <- ggplot(data=nsw_schools, aes(x=ASGS_remoteness)) + geom_bar()
#display plot
p
The plot is shown in Figure 9.
Figure 9: School count by remoteness categories
The category labels on the x axis are too long and look messy. This can be fixed by tilting them to a 45 degree angle so that they don’t run into each other, as they most likely did when you ran the code on your computer. This is done by modifying the axis.text element of the plot theme. Additionally, it would be nice to get counts on top of each category bar. The way to do that is to use another geometry function, geom_text. Here’s the code incorporating the two modifications:
#frequency plot
p <- p + geom_text(stat='count',aes(label= ..count..),vjust=-1)+
theme(axis.text.x=element_text(angle=45, hjust=1))
#display plot
p
The result is shown in Figure 10.
Figure 10: Bar plot of remoteness with counts and angled x labels
Some things to note: stat='count' tells ggplot to compute counts by category and the aesthetic label = ..count.. tells ggplot to access the internal variable that stores those counts. The vertical justification setting, vjust=-1, tells ggplot to display the counts on top of the bars. Play around with different values of vjust to see how it works. The code to adjust label angles is self explanatory.
It would be nice to reorder the bars by frequency. This is easily done via the fct_infreq function in the forcats package like so:
#use factor tools
library(forcats)
#descending
p <- ggplot(data=nsw_schools) +
geom_bar(mapping = aes(x=fct_infreq(ASGS_remoteness)))+
theme(axis.text.x=element_text(angle=45, hjust=1))
#display plot
p
The result is shown in Figure 11.
Figure 11: Barplot of Figure 10 sorted by descending count
To reverse the order, invoke fct_rev, which reverses the sort order:
#reverse sort order to ascending
p <- ggplot(data=nsw_schools) +
geom_bar(mapping = aes(x=fct_rev(fct_infreq(ASGS_remoteness))))+
theme(axis.text.x=element_text(angle=45, hjust=1))
#display plot
p
The resulting plot is shown in Figure 12.
Figure 12: Bar plot of Figure 10 sorted by ascending count
If this is all too grey for us, we can always add some colour. This is done using the fill aesthetic as follows:
#add colour using the fill aesthetic
p <- ggplot(data=nsw_schools) +
geom_bar(mapping = aes(x=ASGS_remoteness, fill=ASGS_remoteness))+
theme(axis.text.x=element_text(angle=45, hjust=1))
#display plot
p
The resulting plot is shown in Figure 13.
Figure 13: Coloured bar plot of school count by remoteness
Note that, in the above, we have mapped fill and x to the same variable, remoteness, which makes the legend superfluous. I will leave it to you to figure out how to suppress the legend – Google is your friend.
We could also map fill to another variable, which effectively adds another dimension to the plot. Here’s how:
#map fill to another variable
p <- ggplot(data=nsw_schools) +
geom_bar(mapping = aes(x=ASGS_remoteness, fill=level_of_schooling))+
theme(axis.text.x=element_text(angle=45, hjust=1))
#display plot
p
The plot is shown in Figure 14. The new variable, level of schooling, is displayed via proportionate coloured segments stacked up in each bar. The default stacking is one on top of the other.
Figure 14: Bar plot of school counts as a function of remoteness and school level
Alternately, one can stack them up side by side by setting the position argument to dodge as follows:
#stack side by side
p <- ggplot(data=nsw_schools) +
geom_bar(mapping = aes(x=ASGS_remoteness,fill=level_of_schooling),position ="dodge")+
theme(axis.text.x=element_text(angle=45, hjust=1))
#display plot
p
The plot is shown in Figure 15.
Figure 15: Same data as in Figure 14 stacked side-by-side
Finally, setting the position argument to fill normalises the bar heights and gives us the proportions of level of schooling for each remoteness category. That sentence will make more sense when you see Figure 16 below. Here’s the code, followed by the figure:
#proportion plot
p <- ggplot(data=nsw_schools) +
geom_bar(mapping = aes(x=ASGS_remoteness,fill=level_of_schooling),position = "fill")+
theme(axis.text.x=element_text(angle=45, hjust=1))
#display plot
p
Obviously, we lose frequency information since the bar heights are normalised.
Figure 16: Proportions of school levels for remoteness categories
An interesting feature here is that the proportion of central and community schools increases with remoteness. Unlike primary and secondary schools, central/community schools provide education from Kindergarten through Year 12. As remote areas have smaller numbers of students, it makes sense to consolidate educational resources in institutions that provide schooling at all levels.
Finally, to close the loop so to speak, let’s revisit our very first plot in Figure 1 and try to simplify it in another way. We’ll use faceting to split it out into separate plots, one per remoteness category. First, we’ll organise the subplots horizontally using facet_grid:
#faceting – subplots laid out horizontally (faceted variable on right of formula)
p <- ggplot(data=nsw_schools) + geom_point(mapping = aes(x=student_number,y=ICSEA_Value))+
facet_grid(~ASGS_remoteness)
#display plot
p
The plot is shown in Figure 17, in which the different remoteness categories are presented in separate plots (facets) against a common y axis. It shows the sharp differences in student numbers between remote and other regions.
Figure 17: Horizontally laid out facet plots of ICSEA scores for different remoteness categories
To get a vertically laid out plot, switch the faceted variable to the other side of the formula (left as an exercise for you).
If one has too many categories to fit into a single row, one can wrap the facets using facet_wrap like so:
#faceting – wrapping facets in 2 columns
p <- ggplot(data=nsw_schools) +
geom_point(mapping = aes(x=student_number,y=ICSEA_Value))+
facet_wrap(~ASGS_remoteness, ncol= 2)
#display plot
p
The resulting plot is shown in Figure 18.
Figure 18: Same data as in Figure 17, with facets wrapped in a 2 column format
One can specify the number of rows instead of columns. I won’t illustrate that as the change in syntax is quite obvious.
…and I think that’s a good place to stop.
### Wrapping up
Data visualisation has a reputation of being a dark art, masterable only by the visually gifted. This may have been partially true some years ago, but in this day and age it definitely isn’t. Versatile packages such as ggplot, which use a consistent syntax, have made the art much more accessible to visually ungifted folks like myself. In this post I have attempted to provide a brief and (hopefully) logical introduction to ggplot. In closing, I note that although some of the illustrative examples violate the principles of good data visualisation, I hope this article will serve its primary purpose, which is pedagogic rather than artistic.
Where to go for more? Two of the best known references are Hadley Wickham’s books:
I highly recommend his R for Data Science, available online here. Apart from providing a good overview of ggplot, it is an excellent introduction to R for data scientists. If you haven’t read it, do yourself a favour and buy it now.
People tell me his ggplot book is an excellent book for those wanting to learn the ins and outs of ggplot. I have not read it myself, but if his other book is anything to go by, it should be pretty damn good.
Written by K
October 10, 2017 at 8:17 pm
## The two tributaries of time
How time flies. Ten years ago this month, I wrote my first post on Eight to Late. The anniversary gives me an excuse to post something a little different. When rummaging around in my drafts folder for something suitable, I came across this piece that I wrote some years ago (2013) but didn’t publish. It’s about our strange relationship with time, which I thought makes it a perfect piece to mark the occasion.
### Introduction
The metaphor of time as a river resonates well with our subjective experiences of time. Everyday phrases that evoke this metaphor include the flow of time and time going by, or the somewhat more poetic currents of time. As Heraclitus said, no [person] can step into the same river twice – and so it is that a particular instant in time …like right now…is ephemeral, receding into the past as we become aware of it.
On the other hand, organisations have to capture and quantify time because things have to get done within fixed periods, the financial year being a common example. Hence, key organisational activities such as projects, strategies and budgets are invariably time-bound affairs. This can be problematic because there is a mismatch between the ways in which organisations view time and individuals experience it.
### Organisational time
The idea that time is an objective entity is most clearly embodied in the notion of a timeline: a graphical representation of a time period, punctuated by events. The best known of these is perhaps the ubiquitous Gantt Chart, loved (and perhaps equally, reviled) by managers the world over.
Timelines are interesting because, as Elaine Yakura states in this paper, “they seem to render time, the ultimate abstraction, visible and concrete.” As a result, they can serve as boundary objects that make it possible to negotiate and communicate what is to be accomplished in the specified time period. They make this possible because they tell a story with a clear beginning, middle and end, a narrative of what is to come and when.
For the reasons mentioned in the previous paragraph, timelines are often used to manage time-bound organisational initiatives. Through their use in scheduling and allocation, timelines serve to objectify time in such a way that it becomes a resource that can be measured and rationed, much like other resources such as money, labour etc.
At our workplaces we are governed by many overlapping timelines – workdays, budgeting cycles and project schedules being examples. From an individual perspective, each of these timelines is a different representation of how one’s time is to be utilised, when an activity should be started and when it must be finished. Moreover, since we are generally committed to multiple timelines, we often find ourselves switching between them. They serve to remind us what we should be doing and when.
But there’s more: one of the key aims of developing a timeline is to enable all stakeholders to have a shared understanding of time as it pertains to the initiative. In this view, a timeline is a consensus representation of how a particular aspect of the future will unfold. Timelines thus serve as coordinating mechanisms.
In terms of the metaphor, a timeline is akin to a map of the river of time. Along the map we can measure out and apportion it; we can even agree about way-stops at various points in time. However, we should always be aware that it remains a representation of time, for although we might treat a timeline as real, the fact is no one actually experiences time as it is depicted in a timeline. Mistaking one for the other is akin to confusing the map with the territory.
This may sound a little strange so I’ll try to clarify. I’ll start with the observation that we experience time through events and processes – for example the successive chimes of a clock, the movement of the second hand of a watch (or the oscillations of a crystal), the passing of seasons or even the greying of one’s hair. Moreover, since these events and processes can be objectively agreed on by different observers, they can also be marked out on a timeline. Yet the actual experience of living these events is unique to each individual.
### Individual perception of time
As we have seen, organisations treat time as an objective commodity that can be represented, allocated and used much like any tangible resource. On the other hand our experience of time is intensely personal. For example, I’m sitting in a cafe as I write these lines. My perception of the flow of time depends rather crucially on my level of engagement in writing: slow when I’m struggling for words but zipping by when I’m deeply involved. This is familiar to us all: when we are deeply engaged in an activity, we lose all sense of time but when our involvement is superficial we are acutely aware of the clock.
This is true at work as well. When I’m engaged in any kind of activity at work, be it a group activity such as a meeting, or even an individual one such as developing a business case, my perception of time has little to do with the actual passage of seconds, minutes and hours on a clock. Sure, there are things that I will do habitually at a particular time – going to lunch, for example – but my perception of how fast the day goes is governed not by the clock but by the degree of engagement with my work.
I can only speak for myself, but I suspect that this is the case with most people. Though our work lives are supposedly governed by “objective” timelines, the way we actually live out our workdays depends on a host of things that have more to do with our inner lives than visible outer ones. Specifically, they depend on things such as feelings, emotions, moods and motivations.
### Flow and engagement
OK, so you may be wondering where I’m going with this. Surely, my subjective perception of my workday should not matter as long as I do what I’m required to do and meet my deadlines, right?
As a matter of fact, I think the answer to the above question is a qualified, “No”. The quality of the work we do depends on our level of commitment and engagement. Moreover, since a person’s perception of the passage of time depends rather sensitively on the degree of their involvement in a task, their subjective sense of time is a good indicator of their engagement in work.
In his book, Finding Flow, Mihaly Csikszentmihalyi describes such engagement as an optimal experience in which a person is completely focused on the task at hand. Most people would have experienced flow when engaged in activities that they really enjoy. As Anthony Reading states in his book, Hope and Despair: How Perceptions of the Future Shape Human Behaviour, “…most of what troubles us resides in our concerns about the past and our apprehensions about the future.” People in flow are entirely focused on the present and are thus (temporarily) free from troubling thoughts. As Csikszentmihalyi puts it, for such people, “the sense of time is distorted; hours seem to pass by in minutes.”
All this may seem far removed from organisational concerns, but it is easy to see that it isn’t: a Google search on the phrase “increase employee engagement” will throw up many articles along the lines of “N ways to increase employee engagement.” The sense in which the term is used in these articles is essentially the same as the one Csikszentmihalyi talks about: deep involvement in work.
So, the advice of management gurus and business school professors notwithstanding, the issue is less about employee engagement or motivation than about creating conditions that are conducive to flow. All that is needed for the latter is a deep understanding of how the particular organisation functions, the task at hand and (most importantly) the people who will be doing it. The best managers I’ve worked with have grokked this, and were able to create the right conditions in a seemingly effortless and unobtrusive way. It is a skill that cannot be taught, but can be learnt by observing how such managers do what they do.
### Time regained
Organisations tend to treat their employees’ time as though it were a commodity or resource that can be apportioned and allocated for various tasks. This view of time is epitomised by the timeline as depicted in a Gantt Chart or a resource-loaded project schedule.
In contrast, at an individual level, the perception of time depends rather critically on the level of engagement that a person feels with the task he or she is performing. Ideally organisations would (or ought to!) want their employees to be in that optimal zone of engagement that Csikszentmihalyi calls flow, at least when they are involved in creative work. However, like spontaneity, flow is a state that cannot be achieved by corporate decree; the best an organisation can do is to create the conditions that encourage it.
The organisational focus on timelines ought to be balanced by actions that are aimed at creating the conditions that are conducive to employee engagement and flow. It may then be possible for those who work in organisation-land to experience, if only fleetingly, that Blakean state in which eternity is held in an hour.
Written by K
September 20, 2017 at 9:17 pm
## A gentle introduction to logistic regression and lasso regularisation using R
In this day and age of artificial intelligence and deep learning, it is easy to forget that simple algorithms can work well for a surprisingly large range of practical business problems. And the simplest place to start is with the granddaddy of data science algorithms: linear regression and its close cousin, logistic regression. Indeed, in his acclaimed MOOC and accompanying textbook, Yaser Abu-Mostafa spends a good portion of his time talking about linear methods, and with good reason too: linear methods are not only a good way to learn the key principles of machine learning, they can also be remarkably helpful in zeroing in on the most important predictors.
My main aim in this post is to provide a beginner level introduction to logistic regression using R and also introduce LASSO (Least Absolute Shrinkage and Selection Operator), a powerful feature selection technique that is very useful for regression problems. Lasso is essentially a regularization method. If you’re unfamiliar with the term, think of it as a way to reduce overfitting using less complicated functions (and if that means nothing to you, check out my prelude to machine learning). One way to do this is to toss out less important variables, after checking that they really aren’t important. As we’ll discuss later, this can be done manually by examining p-values of coefficients and discarding those variables whose coefficients are not significant. However, this can become tedious for classification problems with many independent variables. In such situations, lasso offers a neat way to model the dependent variable while automagically selecting significant variables by shrinking the coefficients of unimportant predictors to zero. All this without having to mess around with p-values or obscure information criteria. How good is that?
### Why not linear regression?
In linear regression one attempts to model a dependent variable (i.e. the one being predicted) using the best straight line fit to a set of predictor variables. The best fit is usually taken to be the one that minimises the root mean square error, the square root of the mean of the squared differences between the actual and predicted values of the dependent variable. One can think of logistic regression as the equivalent of linear regression for a classification problem. In what follows we’ll look at binary classification – i.e. a situation where the dependent variable takes on one of two possible values (Yes/No, True/False, 0/1 etc.).
First up, you might be wondering why one can’t use linear regression for such problems. The main reason is that classification problems are about determining class membership rather than predicting variable values, and linear regression is more naturally suited to the latter than the former. One could, in principle, use linear regression for situations where there is a natural ordering of categories like High, Medium and Low for example. However, one then has to map sub-ranges of the predicted values to categories. Moreover, since predicted values are potentially unbounded (in data as yet unseen) there remains a degree of arbitrariness associated with such a mapping.
Logistic regression sidesteps the aforementioned issues by modelling class probabilities instead. Any input to the model yields a number lying between 0 and 1, representing the probability of class membership. One is still left with the problem of determining the threshold probability, i.e. the probability at which the category flips from one to the other. By default this is set to p=0.5, but in reality it should be settled based on how the model will be used. For example, for a marketing model that identifies potentially responsive customers, the threshold for a positive event might be set low (much less than 0.5) because the client does not really care about mailouts going to a non-responsive customer (the negative event). Indeed they may be more than OK with it as there’s always a chance – however small – that a non-responsive customer will actually respond. As an opposing example, the cost of a false positive would be high in a machine learning application that grants access to sensitive information. In this case, one might want to set the threshold probability to a value closer to 1, say 0.9 or even higher. The point is, setting an appropriate threshold probability is a business issue, not a technical one.
### Logistic regression in brief
So how does logistic regression work?
For the discussion let’s assume that the outcome (predicted variable) and predictors are denoted by Y and X respectively and the two classes of interest are denoted by + and – respectively. We wish to model the conditional probability that the outcome Y is +, given that the input variables (predictors) are X. The conditional probability is denoted by p(Y=+|X) which we’ll abbreviate as p(X) since we know we are referring to the positive outcome Y=+.
As mentioned earlier, we are after the probability of class membership so we must ensure that the hypothesis function (a fancy word for the model) always lies between 0 and 1. The function assumed in logistic regression is:
$p(X) = \dfrac{e^{\beta_0+\beta_1 X}}{1+e^{\beta_0 + \beta_1 X}} .....(1)$
You can verify that $p(X)$ does indeed lie between 0 and 1 as $X$ varies from $-\infty$ to $\infty$. Typically, however, the values of $X$ that make sense are bounded as shown in the example (stolen from Wikipedia) shown in Figure 1. The figure also illustrates the typical S-shaped curve characteristic of logistic regression.
Figure 1: Logistic function
As an aside, you might be wondering where the name logistic comes from. An equivalent way of expressing the above equation is:
$\log(\dfrac{p(X)}{1-p(X)}) = \beta_0+\beta_1 X .....(2)$
The quantity on the left is the logarithm of the odds. So, the model is a linear regression of the log-odds, sometimes called logit, and hence the name logistic.
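For completeness, here is the one-line algebra connecting the two forms (using nothing beyond equation (1)): from (1), $1-p(X) = \dfrac{1}{1+e^{\beta_0+\beta_1 X}}$, so $\dfrac{p(X)}{1-p(X)} = e^{\beta_0+\beta_1 X}$, and taking logarithms of both sides gives equation (2).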
The problem is to find the values of $\beta_0$ and $\beta_1$ that result in a $p(X)$ that most accurately classifies all the observed data points – that is, those that belong to the positive class have a probability as close as possible to 1 and those that belong to the negative class have a probability as close as possible to 0. One way to frame this problem is to say that we wish to maximise the product of these probabilities, often referred to as the likelihood:
$\displaystyle\prod_{i:Y_i=+} p(X_{i}) \prod_{j:Y_j=-}(1-p(X_{j}))$
Where $\prod$ represents the products over i and j, which run over the +ve and –ve classed points respectively. This approach, called maximum likelihood estimation, is quite common in many machine learning settings, especially those involving probabilities.
It should be noted that in practice one works with the log likelihood because it is easier to work with mathematically. Moreover, one minimises the negative log likelihood which, of course, is the same as maximising the log likelihood. The quantity one minimises is thus:
$L = - \displaystyle\log ( {\prod_{i:Y_i=+} p(X_{i}) \prod_{j:Y_j=-}(1-p(X_{j}))}).....(3)$
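Written out as a sum over individual observations (using the common coding $y_i=1$ for the + class and $y_i=0$ for the – class, which is my notation rather than one used above), this is the familiar cross-entropy loss:
$L = -\displaystyle\sum_{i}\left[y_i \log p(X_{i}) + (1-y_i)\log(1-p(X_{i}))\right]$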
However, these are technical details that I mention only for completeness. As you will see next, they have little bearing on the practical use of logistic regression.
### Logistic regression in R – an example
In this example, we’ll use the logistic regression option implemented within the glm function that comes with the base R installation. This function fits a class of models collectively known as generalized linear models. We’ll apply the function to the Pima Indian Diabetes dataset that comes with the mlbench package. The code is quite straightforward – particularly if you’ve read earlier articles in my “gentle introduction” series – so I’ll just list the code below noting that the logistic regression option is invoked by setting family="binomial" in the glm function call.
Here we go:
#set working directory if needed (modify path as needed)
#setwd("C:/Users/Kailash/Documents/logistic")
library(mlbench)
data("PimaIndiansDiabetes")
#set seed to ensure reproducible results
set.seed(42)
#split into training and test sets
PimaIndiansDiabetes[,"train"] <- ifelse(runif(nrow(PimaIndiansDiabetes))<0.8,1,0)
#separate training and test sets
trainset <- PimaIndiansDiabetes[PimaIndiansDiabetes$train==1,]
testset <- PimaIndiansDiabetes[PimaIndiansDiabetes$train==0,]
#get column index of train flag
trainColNum <- grep("train",names(trainset))
#remove train flag column from train and test sets
trainset <- trainset[,-trainColNum]
testset <- testset[,-trainColNum]
#get column index of predicted variable in dataset
typeColNum <- grep("diabetes",names(PimaIndiansDiabetes))
#build model
glm_model <- glm(diabetes~.,data = trainset, family = binomial)
summary(glm_model)
Call:
glm(formula = diabetes ~ ., family = binomial, data = trainset)
<<output edited>>
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -8.1485021 0.7835869 -10.399 < 2e-16 ***
pregnant 0.1200493 0.0355617 3.376 0.000736 ***
glucose 0.0348440 0.0040744 8.552 < 2e-16 ***
pressure -0.0118977 0.0057685 -2.063 0.039158 *
triceps 0.0053380 0.0076523 0.698 0.485449
insulin -0.0010892 0.0009789 -1.113 0.265872
mass 0.0775352 0.0161255 4.808 1.52e-06 ***
pedigree 1.2143139 0.3368454 3.605 0.000312 ***
age 0.0117270 0.0103418 1.134 0.256816
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#predict probabilities on testset
#type="response" gives probabilities, type="class" gives class
glm_prob <- predict.glm(glm_model,testset[,-typeColNum],type="response")
#which classes do these probabilities refer to? What are 1 and 0?
contrasts(PimaIndiansDiabetes$diabetes)
    pos
neg   0
pos   1
#make predictions
##...first create vector to hold predictions (we know 0 refers to neg now)
glm_predict <- rep("neg",nrow(testset))
glm_predict[glm_prob>.5] <- "pos"
#confusion matrix
table(pred=glm_predict,true=testset$diabetes)
glm_predict neg pos
neg 90 22
pos 8 33
#accuracy
mean(glm_predict==testset$diabetes)
[1] 0.8039216
Although this seems pretty good, we aren’t quite done, because there is an issue lurking under the hood. To see this, let’s examine the information output from the model summary, in particular the coefficient estimates (i.e. estimates for $\beta$) and their significance. Here’s a summary of the information contained in the table:
• Column 2 lists the coefficient estimates.
• Column 3 lists the standard error of the estimates (the larger the standard error, the less confident we are about the estimate).
• Column 4 lists the z statistic, which is the coefficient estimate (column 2) divided by the standard error of the estimate (column 3), and
• The last column, Pr(>|z|), lists the p-value, which is the probability of getting the listed estimate assuming the predictor has no effect. In essence, the smaller the p-value, the more significant the estimate is likely to be.
From the table we can conclude that only 4 predictors are significant – pregnant, glucose, mass and pedigree (and possibly a fifth – pressure). The other variables have little predictive power and, worse, may contribute to overfitting. They should therefore be eliminated, and we’ll do that in a minute. However, there’s an important point to note before we do so…
In this case we have only 9 variables, so we are able to identify the significant ones by a manual inspection of p-values. As you can well imagine, such a process will quickly become tedious as the number of predictors increases. Wouldn’t it be nice if there were an algorithm that could somehow automatically shrink the coefficients of these variables or (better!) set them to zero altogether? It turns out that this is precisely what lasso and its close cousin, ridge regression, do.
### Ridge and Lasso
Recall that the values of the logistic regression coefficients $\beta_0$ and $\beta_1$ are found by minimising the negative log likelihood described in equation (3). Ridge and lasso regularisation work by adding a penalty term to the log likelihood function. In the case of ridge regression, the penalty term is $\beta_1^2$ and in the case of lasso, it is $|\beta_1|$ (remember, $\beta_1$ is a vector, with as many components as there are predictors). The quantity to be minimised in the two cases is thus:
$L +\lambda \sum \beta_1^2.....(4)$
– for ridge regression, and
$L +\lambda \sum |\beta_1|.....(5)$
– for lasso regression.
Where $\lambda$ is a free parameter which is usually selected in such a way that the resulting model minimises the out of sample error. Typically, the optimal value of $\lambda$ is found using grid search with cross-validation, a process akin to the one described in my discussion on cost-complexity parameter estimation in decision trees. Most canned algorithms provide methods to do this; the one we’ll use in the next section is no exception.
In the case of ridge regression, the effect of the penalty term is to shrink the coefficients that contribute most to the error. Put another way, it reduces the magnitude of the coefficients that contribute to increasing $L$. In contrast, in the case of lasso regression, the effect of the penalty term is to set these coefficients exactly to zero! This is cool because it means that lasso regression works like a feature selector that picks out the most important coefficients, i.e. those that are most predictive (and have the lowest p-values).
Let’s illustrate this through an example.
We’ll use the glmnet package, which implements a combined version of ridge and lasso (called elastic net). Instead of minimising (4) or (5) above, glmnet minimises:
$L +\lambda[(1-\alpha)\sum \beta_1^2 + \alpha\sum|\beta_1|].....(6)$
where $\alpha$ controls the “mix” of ridge and lasso regularisation, with $\alpha=0$ being “pure” ridge and $\alpha=1$ being “pure” lasso.
### Lasso regularisation using glmnet
Let’s reanalyse the Pima Indian Diabetes dataset using glmnet with $\alpha=1$ (pure lasso). Before diving into code, it is worth noting that glmnet:
• does not have a formula interface, so one has to input the predictors as a matrix and the class labels as a vector.
• does not accept categorical predictors, so one has to convert these to numeric values before passing them to glmnet.
The model.matrix function creates the predictor matrix and also converts categorical predictors to appropriate dummy variables. Another important point to note is that we’ll use the function cv.glmnet, which automatically performs a grid search to find the optimal value of $\lambda$.
OK, enough said, here we go:
#load required library
library(glmnet)
#convert training data to matrix format
x <- model.matrix(diabetes~.,trainset)
#convert class to numerical variable
y <- ifelse(trainset$diabetes=="pos",1,0)
#perform grid search to find optimal value of lambda
#family= binomial => logistic regression, alpha=1 => lasso
# check docs to explore other type.measure options
cv.out <- cv.glmnet(x,y,alpha=1,family="binomial",type.measure = "mse")
#plot result
plot(cv.out)
The plot is shown in Figure 2 below:
Figure 2: Error as a function of lambda (select lambda that minimises error)
The plot shows that the log of the optimal value of lambda (i.e. the one that minimises the root mean square error) is approximately -5. The exact value can be viewed by examining the variable lambda_min in the code below. In general though, the objective of regularisation is to balance accuracy and simplicity. In the present context, this means a model with the smallest number of coefficients that also gives a good accuracy. To this end, the cv.glmnet function finds the value of lambda that gives the simplest model but also lies within one standard error of the optimal value of lambda. This value of lambda (lambda.1se) is what we’ll use in the rest of the computation. Interested readers should have a look at this article for more on lambda.1se vs lambda.min.
#min value of lambda
lambda_min <- cv.out$lambda.min
#best value of lambda
lambda_1se <- cv.out$lambda.1se
#regression coefficients
coef(cv.out,s=lambda_1se)
10 x 1 sparse Matrix of class "dgCMatrix"
1
(Intercept) -4.61706681
(Intercept) .
pregnant 0.03077434
glucose 0.02314107
pressure .
triceps .
insulin .
mass 0.02779252
pedigree 0.20999511
age .
The output shows that only those variables that we had determined to be significant on the basis of p-values have non-zero coefficients. The coefficients of all other variables have been set to zero by the algorithm! Lasso has reduced the complexity of the fitting function massively…and you are no doubt wondering what effect this has on accuracy. Let’s see by running the model against our test data:
#get test data
x_test <- model.matrix(diabetes~.,testset)
#predict probabilities on test data, type="response" gives probabilities
lasso_prob <- predict(cv.out,newx = x_test,s=lambda_1se,type="response")
#translate probabilities to predictions
lasso_predict <- rep("neg",nrow(testset))
lasso_predict[lasso_prob>.5] <- "pos"
#confusion matrix
table(pred=lasso_predict,true=testset$diabetes)
pred neg pos
neg 94 28
pos 4 27
#accuracy
mean(lasso_predict==testset$diabetes)
[1] 0.7908497
This is a bit less than what we got with the more complex model, but the out-of-sample accuracy is comparable, and we achieve it using a much simpler function (4 non-zero coefficients) than the original one (9 non-zero coefficients). What this means is that the simpler function does at least as good a job fitting the signal in the data as the more complicated one. The bias-variance tradeoff tells us that the simpler function should be preferred because it is less likely to overfit the training data.
Paraphrasing William of Ockham: all other things being equal, a simple hypothesis should be preferred over a complex one.
### Wrapping up
In this post I have tried to provide a detailed introduction to logistic regression, one of the simplest (and oldest) classification techniques in the machine learning practitioner’s arsenal. Despite its simplicity (or I should say, because of it!) logistic regression works well for many business applications which often have a simple decision boundary. Moreover, because of its simplicity it is less prone to overfitting than flexible methods such as decision trees. Further, as we have shown, variables that contribute to overfitting can be eliminated using lasso (or ridge) regularisation, without compromising out-of-sample accuracy. Given these advantages and its inherent simplicity, it isn’t surprising that logistic regression remains a workhorse for data scientists.
Written by K
July 11, 2017 at 10:00 pm
## Loren on the Art of MATLAB
# Cell Arrays and Their Contents
I've written several blog articles so far on structures, and not quite so much on their soulmates, cell arrays. Just last week, at the annual MathWorks Aerospace Defense Conference (MADC), I had several people ask for help on cell arrays and indexing. Couple that with the weekly questions on the MATLAB newsgroup, and it's time.
### Arrays
As you probably already know, arrays in MATLAB are rectangular looking in any two dimensions. For example, for each row in a matrix (2-dimensional), there is the same number of elements - all rows have the same number of columns. To denote missing values in floating point arrays, we often use NaN. And each MATLAB array is homogeneous; that is, each array element is the same kind of entity, for example, double precision values.
### Cell Arrays
Cell arrays were introduced in MATLAB 5.0 to allow us to collect arrays of different sizes and types. Cell arrays themselves must still be rectangular in any given two dimensions, and since each element is a cell, the array is filled with items that are all the same type. However, the contents of each cell can be any MATLAB array, including
• numeric arrays, the ones that people typically first learn
• strings
• structures
• cell arrays
clear
### Indexing Using Parentheses
Indexing using parentheses means the same thing for all MATLAB arrays. Let's take a look at a numeric array first and then a cell array.
M = magic(3)
M =
8 1 6
3 5 7
4 9 2
Let's place a single element into another array.
s = M(1,2)
s =
1
Next let's get a row of elements.
row3 = M(3,:)
row3 =
4 9 2
And now grab the corner elements.
corners = M([1 end],[1 end])
corners =
8 6
4 2
What's in the MATLAB workspace?
whos
clear % clean up before we move forward
Name Size Bytes Class
M 3x3 72 double array
corners 2x2 32 double array
row3 1x3 24 double array
s 1x1 8 double array
Grand total is 17 elements using 136 bytes
Next, let's do similar experiments with a cell array.
C = {magic(3) 17 'fred'; ...
'AliceBettyCarolDianeEllen' 'yp' 42; ...
{1} 2 3}
C =
[3x3 double] [17] 'fred'
[1x25 char ] 'yp' [ 42]
{1x1 cell } [ 2] [ 3]
Notice the information we get from printing C. We can see it is 3x3, and we can see information, but not necessarily full content, about the values in each cell. The very first cell contains a 3x3 array of doubles, the second element in the first row contains the scalar value 17, and the third cell in the first row contains a string, one that is short enough to print out. Let's place a single element into another array.
sCell = C(1,2)
sCell =
[17]
Next let's get a row of elements.
row3Cell = C(3,:)
row3Cell =
{1x1 cell} [2] [3]
And now grab the corner elements.
cornersCell = C([1 end],[1 end])
cornersCell =
[3x3 double] 'fred'
{1x1 cell } [ 3]
What's in our workspace now?
whos
clear sCell row3Cell cornersCell
Name Size Bytes Class
C 3x3 774 cell array
cornersCell 2x2 396 cell array
row3Cell 1x3 264 cell array
sCell 1x1 68 cell array
Grand total is 84 elements using 1502 bytes
### An Observation about Indexing with Parentheses
When we index into an array using parentheses, (), to extract a portion of an array, we get an array of the same type. With the double precision array M, we got double precision arrays of different sizes and shapes as our output. When we do the same thing with our cell array C, we get cell arrays of various shapes and sizes for the output.
### Contents of Cell Arrays
Cell arrays are quite useful in a variety of applications. We use them in MATLAB for collecting strings of different lengths. They are good for collecting even numeric arrays of different sizes, e.g., the magic squares from order 3 to 10. But we still need to get information from within given cells, not just create more cell arrays using (). To do so, we use curly braces {}. I used one set of them to create C initially. Now let's extract the contents from some cells and assign the output to an array. Let's place a single element into another array.
m = C{1}
m =
8 1 6
3 5 7
4 9 2
Next let's try to get a row of elements.
try
row3 = C{3,:}
catch
lerr = lasterror;
disp(lerr.message(24:end))
end
Illegal right hand side in assignment. Too many elements.
Why couldn't I do that? Let's look at what's in row 1.
C(1,:)
ans =
[3x3 double] [17] 'fred'
Now let's see what we get if we look at the contents without assigning the output to a variable.
C{1,:}
ans =
8 1 6
3 5 7
4 9 2
ans =
17
ans =
fred
You can see that we assign to ans three times, once for each element in the row of the cell array. It's as if we wrote this expression: C{1,1},C{1,2},C{1,3} with the output from these arrays being successively assigned to ans. MATLAB can't typically take the content from these cells and place them into a single array. We could extract the contents of row 1, one cell at a time as we did to create m. If we want to extract more cells at once, we have to place the contents of each cell into its own separate array, like this,
[c11 c12 c13] = C{1,:}
c11 =
8 1 6
3 5 7
4 9 2
c12 =
17
c13 =
fred
taking advantage of syntax new in MATLAB Release 14 for assignment when using comma-separated lists.
### Cell Array Indexing Summary
• Use curly braces {} for setting or getting the contents of cell arrays.
• Use parentheses () for indexing into a cell array to collect a subset of cells together in another cell array.
Here's my mnemonic for remembering when to use the curly braces: curly for contents. Does anyone have any mnemonics or other special ways to help remember when to use the different kinds of indexing? If so, please post a comment below.
# Automatic Cropping of Arbitrary Shapes
I have an arbitrary shape defined by a binary mask (gray = shape, black = background).
I would like to find a largest possible rectangle containing only gray pixels (such rectangle is pictured in yellow):
The shape is always "one piece" but it is not necessarily convex (not all point pairs on the shape's boundary can be connected by a straight line going through the shape).
Sometimes many such "maximum rectangles" exist, and then further constraints can be introduced, such as:
• Taking the rectangle with its center nearest to shape's center of mass (or center of image)
• Taking rectangle with aspect ratio nearest to a predefined ratio (i.e. 4:3)
My first thought about the algorithm is the following:
1. Compute distance transform of the shape and find its center of mass
2. Grow square area while it contains only shape's pixels
3. Grow the rectangle (originally a square) in width or height while it contains only shape's pixels.
However, I think such an algorithm would be slow and would not lead to an optimal solution.
Any suggestions?
• Is this helpful? mathworks.com/matlabcentral/fileexchange/… – Atul Ingle Mar 12 '13 at 5:31
• @AtulIngle Exactly! Thanks. Could you add the answer so I can accept it? I will then try to edit the answer to elaborate more on the algorithm – but I don't want to just answer my own question using the link you provided... – Libor Mar 12 '13 at 23:40
• Great! I'll look forward to reading your elaborate answer as I haven't read through the code. – Atul Ingle Mar 15 '13 at 19:28
There is a code on Matlab Fileexchange that is relevant to your problem: http://www.mathworks.com/matlabcentral/fileexchange/28155-inscribedrectangle/content/html/Inscribed_Rectangle_demo.html
Update
I wrote this tutorial article on computing largest inscribed rectangles based on the above link from Atul Ingle.
The algorithm first searches for the largest squares on a binary mask. This is done using a simple dynamic programming algorithm. Each new pixel is updated using the three neighbors already known:
squares[x,y] = min(squares[x+1,y], squares[x,y+1], squares[x+1,y+1]) + 1
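If it helps to see the recurrence in runnable form, here is a minimal sketch in Python; the function name and the bottom-right sweep order are my own choices, not taken from the File Exchange code:
import numpy as np

def largest_inscribed_square(mask):
    # Returns (size, top, left) of the largest all-ones square in a binary mask.
    h, w = mask.shape
    squares = np.zeros((h, w), dtype=int)
    best = (0, 0, 0)
    # sweep from the bottom-right corner so the three needed neighbours are already known
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if mask[y, x]:
                if y == h - 1 or x == w - 1:
                    squares[y, x] = 1  # border pixels can only support a 1x1 square
                else:
                    squares[y, x] = 1 + min(squares[y + 1, x],
                                            squares[y, x + 1],
                                            squares[y + 1, x + 1])
                if squares[y, x] > best[0]:
                    best = (squares[y, x], y, x)
    return best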
The sample binary mask and computed map look like this:
Taking maximum in the map reveals the largest inscribed square:
The rectangle searching algorithm then scans the mask two more times looking for two classes of rectangles:
• width greater than square's size (and height possibly smaller)
• height greater than square's size (and width possibly smaller)
Both classes are bounded by the largest squares since no rectangle at a given point can have both dimensions greater than the inscribed square (though one dimension can be larger).
One has to choose some metric for rectangle sizes, like area, circumference or a weighted sum of dimensions.
Here is the resulting map for rectangles:
It is convenient to store position and size of the best rectangle found so far in a variable instead of building maps and then looking for maxima.
The practical application of this algorithm is cropping non-rectangular images. I have used this algorithm in my image stitching library SharpStitch.
Invited Review Article: Physics and Monte Carlo Techniques as Relevant to Cryogenic, Phonon and Ionization Readout of CDMS Radiation-Detectors
Steven W. Leman Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Abstract
This review discusses detector physics and Monte Carlo techniques for cryogenic, radiation detectors that utilize combined phonon and ionization readout. A general review of cryogenic phonon and charge transport is provided along with specific details of the Cryogenic Dark Matter Search detector instrumentation. In particular this review covers quasidiffusive phonon transport, which includes phonon focusing, anharmonic decay and isotope scattering. The interaction of phonons in the detector surface is discussed along with the downconversion of phonons in superconducting films. The charge transport physics include a mass tensor which results from the crystal band structure and is modeled with a Herring Vogt transformation. Charge scattering processes involve the creation of Neganov-Luke phonons. Transition-edge-sensor (TES) simulations include a full electric circuit description and all thermal processes including Joule heating, cooling to the substrate and thermal diffusion within the TES, the latter of which is necessary to model normal-superconducting phase separation. Relevant numerical constants are provided for these physical processes in germanium, silicon, aluminum and tungsten. Random number sampling methods including inverse cumulative distribution function (CDF) and rejection techniques are reviewed. To improve the efficiency of charge transport modeling, an additional second order inverse CDF method is developed here along with an efficient barycentric coordinate sampling method of electric fields. Results are provided in a manner that is convenient for use in Monte Carlo and references are provided for validation of these models.
I Introduction
Cryogenic radiation-detectors that utilize ionization, phonon and / or scintillation measurements are being used in a number of experiments. Both the Cryogenic Dark Matter Search (CDMS) Ahmed2010 (); Ahmed2011 () and EDELWEISS Armengaud2010 () dark matter searches utilize silicon and / or germanium targets to detect recoils of radiation in the target masses. A combination of ionization and phonon readout is used to provide discrimination of gamma- and neutron-recoil types. The CRESST dark matter search utilizes CaWO$_4$ targets and reads out scintillation and phonon signals to discriminate between recoil types. The advantage of reading out both phonon and ionization (or scintillation) signals comes about from the differing ratios of ionization and phonon energy or scintillation and phonon energy created in electron- and nuclear-recoils in the detectors. The ratio of these two energies leads to a powerful discriminator for the experiment’s desired recoil type.
Both the ionization and phonon readout can be used to generate position estimators for the initial radiation interaction, leading to fiducial volume selection. In the ionization signal this is generally accomplished by instrumenting different parts of the detector with independent readout channels and vetoing events with large signal contribution outside of the desired fiducial volume. In the phonon signal it is generally required to measure the early, athermal component of the phonon signal which still retains a position dependent component.
The physics required to accurately model these detectors is presented in this paper along with appropriate numerical tricks that are useful for an efficient detector Monte Carlo. This paper proceeds with a review of radiation interactions, charge transport physics, phonon transport physics, and instrumentation. Monte Carlo techniques and relevant physical constants are included where appropriate.
This paper will focus on the use of silicon and germanium detector masses, both of which are group IV semiconductors. However there are other relevant materials in use such as calcium tungstate (CaWO$_4$), which leads to a small loss of generality.
i.1 The CDMS Experiment and Detectors
The Cryogenic Dark Matter Search Ahmed2010 (); Ahmed2011 () utilizes silicon and germanium detectors to search for Weakly Interacting Massive Particle (WIMP) dark matter Spergel2007 (); Tegmark2004 () candidates. The silicon or germanium nuclei provide a target mass for WIMP-nucleon interactions. Simultaneous measurement of both phonon energy and ionization energy provide a powerful discriminator between electron-recoil interactions and nuclear-recoil interactions. Background radiation primarily interacts through electron-recoils whereas a WIMP signal would interact through nuclear-recoils. The experiment is located in the Soudan Mine, MN, U.S.A.
The most recent phase of the CDMS experiment has involved fabrication, testing and commissioning of large, 3 inch diameter, 1 inch thick [100] germanium crystals. The CDMS-iZIP (interleaved Z–dependent Ionization and Phonon) detectors are 3 inches in diameter and 1 inch thick with a total mass of about 607 grams Brink2006 (). The iZIP detector utilizes both anode and cathode lines on the same side of the detector similar to a Micro-Strip Gas Chamber (MSGC) Knoll (); Oed1988 (); Luke1994 (); Brink2006 () as shown in Figure 2. Unlike an MSGC however, there is a set of anode and cathode lines on both sides of the detector. This ionization channel design is used to veto events interacting near the detector surfaces. An amorphous silicon layer, deposited under the metal layers, increases the breakdown voltage of the detectors. The total iZIP aluminum coverage is 4.8% active and 1.5% passive per side.
When using a Monte Carlo of a detector, it is often helpful or necessary to have a numerical model of radiation interactions in the detector. Many readers will find it valuable to use separate modeling software such as GEANT4 Agostinelli2003 (). A brief description of these interactions follows.
Low energy gamma-rays (x-rays) predominantly interact via photoelectric absorption in which all of the gamma-ray energy is deposited in a single interaction location. High energy gamma-rays interact via Compton scattering in which some of the gamma-ray’s initial energy is transferred to an electron and the gamma-ray continues along a different trajectory with reduced energy. The gamma-ray will generally continue to scatter until it leaves the detector volume or terminates with a photoelectric absorption. In silicon (germanium), for photon energies greater than 60 (160) keV, Compton scattering dominates Knoll (); RPP ().
Both of these electron interactions result in a high energy electron being produced which then undergoes a rapid cascade process resulting in a large number of electron-hole pairs RPP (); Cabrera1993 (). This initial cascade process ceases around the scale of the mean electron-hole pair creation energy ($\epsilon$), resulting in an expected number of electron-hole pairs $N = E/\epsilon$ for a deposited energy $E$. Due to correlations in the cascade process, the variance in the number of electron-hole pairs is reduced, relative to Poisson statistics, and given by $\sigma_N^2 = FN$, where $F$ is the Fano factor Fano1947 (). These high energy electron-hole pairs will then shed phonons until they reach the semiconductor gap energy $E_{gap}$, which results in the fraction $1-E_{gap}/\epsilon$ of the energy in prompt phonons and the remainder in the electron-hole pair system.
Neutrons that interact in the detector bulk will knock an ion out of its lattice site and displace it to some other location in the crystal. This high energy ion will interact with both the lattice ions and / or valence electrons, with competing cross sections, before reaching some other location in the crystal. Interactions with the lattice ions can be described to first approximation as Rutherford scattering Rutherford1911 () with differential energy loss per unit length scaling like $1/v^{2}$, where $v$ is the ion’s velocity Bonderup1978 (). For interactions with valence electrons, the number of electron states which can be excited to an accessible state outside of the Fermi sphere scales like velocity, hence the differential energy loss per unit length scales like $v$ Bonderup1978 (). A description of this process is shown in Figure 3. There are important screening and velocity dependent cross sections in both energy loss lengths Bonderup1978 (); Lindhard1954 (); Lindhard1963 (); Lindhard1963_2 (); Jones1975 () but to first approximation these equations show the velocity dependent competition in the two scattering rates. Whichever interaction occurs first, there is generally a cascade of many scattering events resulting in a larger amount of energy being deposited in the ion lattice compared to gamma-ray interactions. For historical reasons, the reduced amount of energy in the electron-hole (ionization) system, compared to an equal energy deposition by a gamma-ray, has brought about the term nuclear quenching to describe the reduced ionization signal. The details of the cascade and the reduction in the number of electron-hole pairs are described by Lindhard theory Lindhard1963 () and parametrized in Lewin1996 () as a function of the recoil energy (given in units of keV), the atomic number $Z$ and the atomic mass $A$.
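As a rough numerical illustration, the sketch below evaluates the ionization yield using the commonly quoted Lewin-Smith parametrization of Lindhard theory; the specific functional form and coefficients are stated here as an assumption rather than reproduced from the text above.
def lindhard_yield(E_recoil_keV, Z, A):
    # Ionization yield for a nuclear recoil, Lewin-Smith parametrization (assumed form).
    eps = 11.5 * E_recoil_keV * Z ** (-7.0 / 3.0)   # reduced recoil energy
    k = 0.133 * Z ** (2.0 / 3.0) * A ** (-0.5)      # electronic stopping parameter
    g = 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps  # universal function g(eps)
    return k * g / (1.0 + k * g)

# e.g. a 10 keV germanium recoil (Z = 32, A = 72.6) gives a yield of roughly 0.2
print(lindhard_yield(10.0, 32, 72.6))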
Beta radiation represents another class of electron-recoil interactions. The attenuation lengths are much shorter, however, resulting in a class of events sometimes referred to as surface interactions. These surface interactions can result in signals that differ from bulk events of the same energy and electron- / nuclear-recoil type. For example, the events may be located in regions of large electric fringing fields, directing charges away from readout electrodes or near mounting hardware that absorbs scintillation light.
Iii Phonon Simulation
iii.1 Introduction
Phonons, in the context of this review, are quantized vibration modes that exist in periodic structures such as silicon and germanium crystals. They are excitations of the lattice which, in materials sufficiently cooled that charge carriers are frozen out, mediate thermal transport. They are described by the Hamiltonian
$H=\sum_i \frac{p_i^{2}}{2m}+\sum_{i,j}\frac{m\omega^{2}}{2}(x_i-x_j)^{2}$, (1)
where $m$ is the mass of the lattice atoms and $\omega$ is the frequency of oscillations about the center of the harmonic potential between an atom and its nearest neighbor atom Kittel (); Ashcroft (); Wolfe ().
In one dimension, the solution to the time-independent Schrodinger equation yields the different oscillation modes at atom $j$,
$x_j \sim e^{i k_n j a}$, (2)
where here the wave number is $k_n = 2\pi n/(Na)$, $a$ is the lattice spacing and $N$ is the number of lattice sites.
In Monte Carlo, phonons are simply treated as non-interacting particles with decay and mass defect scattering properties described in the remainder of this section. In general they have a nonlinear dispersion relationship; however, due to rapid down conversion to lower frequencies they spend most of their time in an energy region with a linear dispersion relationship. Hence, a linear dispersion relation is sufficient for phonon transport modeling in these detectors.
iii.2 Prompt Phonon Distributions
Immediately after the recoil, a population of prompt phonons exists in the immediate vicinity of the interaction point. Particular details of the frequency distribution and mode population are not well known but we can make a few deductions, explained in more detail in the following sections, that will lead us to an initial distribution for the Monte Carlo.
Anharmonic decay, due to nonlinear terms in the elastic coupling between adjacent lattice ions, causes the phonons to rapidly down convert into a lower frequency distribution. This process allows us to start a Monte Carlo with any high frequency distribution and details of the distribution will rapidly be lost; we use the Debye frequency as a naive starting point. Isotope scattering, which also occurs at a high rate for a high frequency phonons, causes the phonons to obtain their equilibrium mode density; we use the equilibrium mode density as a naive starting point. The approximations are good in the sense that the detector’s phonon response is insensitive to variations in the distributions.
They are valid since the detectors are large compared to the initial characteristic interaction lengths of phonons. Furthermore, later generations of phonons are ballistic and timing information in the measured phonon pulses is determined by the detector geometry and the loss rate of phonons at surfaces.
iii.3 Phase Velocities and Polarization Vectors
The so-called phase velocity surfaces represent the direction-dependent phonon phase velocity. In general they are given by the eigenvalue Equation 3,
$\rho\omega^{2}\epsilon_{\mu}=\sum_{\tau}\left(\sum_{\sigma\nu}c_{\mu\sigma\nu\tau}k_{\sigma}k_{\nu}\right)\epsilon_{\tau}$, (3)
where $\rho$ is the crystal’s mass density,
$\omega$ is the phonon frequency,
$\epsilon_{\mu}$ is a component of the polarization vector $\vec{\epsilon}$,
$c_{\mu\sigma\nu\tau}$ are components of the elastic constant tensor and
$k_{\sigma}$ is a component of the wave vector $\vec{k}$, whose direction defines the phase velocity vector Kittel ().
Not all of the elastic constants are independent, reducing via a Voigt contraction Nye () the number that we need to keep track of. Additionally, symmetries in a cubic crystal allow for further reduction in components and we can define three independent constants $C_{11}$, $C_{12}$ and $C_{44}$. This contraction simplifies Equation 3 significantly Ashcroft () and we are left solving the matrix in Equation 4 for its eigenvectors and eigenvalues.
$\begin{pmatrix} C_{11}k_xk_x+C_{44}(k_yk_y+k_zk_z) & (C_{12}+C_{44})k_xk_y & (C_{12}+C_{44})k_xk_z \\ (C_{12}+C_{44})k_xk_y & C_{44}(k_xk_x+k_zk_z)+C_{11}k_yk_y & (C_{12}+C_{44})k_yk_z \\ (C_{12}+C_{44})k_xk_z & (C_{12}+C_{44})k_yk_z & C_{44}(k_xk_x+k_yk_y)+C_{11}k_zk_z \end{pmatrix}$ (4)
The eigenvectors represent the three polarization vector directions and the three eigenvalues equal $\rho\omega^{2}$, which sets the phase velocity $v_p=\omega/|\vec{k}|$, for the longitudinal, slow-transverse and fast-transverse modes. The anisotropy in the cubic silicon crystal leads to the phase velocity surfaces being non-spherical (see Figure 4). The phase velocities are used to determine both the group velocities (Section III.4) and also the isotope scattering rates.
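As an illustration of how the eigenvalue problem is used in practice, the following minimal numpy sketch solves Equation 4 for a given propagation direction; the function name and the quoted silicon-like elastic constants are illustrative assumptions rather than values taken from Table 1.
import numpy as np

def phase_velocities(n, rho, C11, C12, C44):
    # Phase velocities (m/s) and polarization vectors for a unit propagation direction n.
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    # Christoffel matrix for a cubic crystal (Equation 4 with |k| = 1)
    D = np.empty((3, 3))
    for i in range(3):
        D[i, i] = C11 * n[i] ** 2 + C44 * (1.0 - n[i] ** 2)
        for j in range(i + 1, 3):
            D[i, j] = D[j, i] = (C12 + C44) * n[i] * n[j]
    # eigenvalues are rho * v^2; eigenvectors are the polarization directions
    w, e = np.linalg.eigh(D)
    return np.sqrt(w / rho), e

# example: the [110] direction with silicon-like constants (illustrative values, SI units)
v, e = phase_velocities([1, 1, 0], rho=2330.0, C11=16.6e10, C12=6.4e10, C44=8.0e10)
print(v)  # slow-transverse, fast-transverse and longitudinal phase velocities, ascending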
iii.4 Group Velocities
Phonon group velocities are found by solving
→vg(θ,ϕ)=∂ω(θ,ϕ)∂→k. (5)
The slight lack of sphericity in the phase velocity surfaces (see Figure 4) has a very dramatic effect on the transverse phonon group velocities Northrop1980 (); Tamura1991 (); Maris1993 (); Tamura1993LT () (see Figure 5). The longitudinal phonon’s group velocity is only mildly affected. Energy is focused in the direction of heavy banding and leads to the term phonon focusing. The point density in the plots is misleading as the three modes are shown to be equally populated. Isotope scattering including anisotropic scattering rates (see Section III.5) leads to the phonon modes in silicon being populated as follows: slow-transverse (55%), fast-transverse (35%) and longitudinal (10%).
iii.5 Anisotropic Isotope Scattering
Phonons scatter off mass defects in the crystal (see Figure 6). Additionally, they can change modes. The bulk scattering rate is given by
$\Gamma_I = B\nu^{4}$, (6)
where $\nu$ is the phonon frequency and $B$ is a scattering rate constant with units of s$^{3}$ Tamura1991 (); Tamura1993LT (); Maris1990 (); Msall1993 (); Tamura1993 () (see Table 1). The scattering rate for individual phonons is given by
$\gamma \sim \dfrac{|\vec{e}_{\lambda}\cdot\vec{e}_{\lambda'}|^{2}}{\nu_{\lambda'}^{3}}$, (7)
where $\nu_{\lambda'}$ is the final state phonon frequency in Hz, $\vec{e}$ is the polarization vector, $\lambda$ represents the initial phonon and $\lambda'$ represents the outgoing phonon Tamura1993LT (); Tamura1993 (). It is the dot product in Equation 7 which allows mode mixing and the denominator which ensures the correct mode populations in the ratios. In silicon the populations when including anisotropic scattering rates are slow-transverse (55%), fast-transverse (35%) and longitudinal (10%). The standard treatment is to determine whether a phonon isotope scatters via Equation 6 and then determine its polarization and direction via Equation 7. After the initial anharmonic decay has settled down, isotope scattering dominates.
This process is unfortunately computationally expensive due to sample-rejection techniques NumericalRecipes () and after several iterations an isotropic scattering process can be used for individual phonons with little loss of modeling accuracy.
iii.6 Anharmonic Decay
iii.6.1 General Case
Nonlinear terms in the elastic coupling constants cause a longitudinal phonon to down convert to two lower energy phonons (see Figure 7). The bulk decay rate is given by
$\Gamma_A = A\nu^{5}$, (8)
where $\nu$ is the phonon frequency and $A$ is a decay rate constant with units of s$^{4}$ Tamura1993LT (); Maris1990 (); Msall1993 (); Tamura1993 () (see Table 1). The decay rate for transverse phonons is negligible Tamura1985TA (). The three-body problem requires that both energy and momentum are conserved. These conditions make an exact solution computationally prohibitive for large numbers of phonons.
iii.6.2 Isotropic Approximation
To allow computations to proceed in a finite time, an exact solution to anharmonic decay is abandoned and an isotropic approximation is used (the full anisotropic phase velocities and group velocities are still easily used for isotope scattering and phonon transport). The energy distribution calculations are still difficult but fortunately have already been carried out Tamura1985 (); CabreraTamura (). Once the energies have been determined, calculating the resultant scattering angles based on energy and momentum conservation is fairly straightforward. Due to the different energy-momentum dispersion relations for the longitudinal and transverse phonons, there are two different decay branches, $L \to L' + T'$ and $L \to T_1' + T_2'$. The energy distribution for the first branch is given by
$\Gamma_{L\to L'+T'} \sim \frac{1}{x^{2}}(1-x^{2})^{2}\left[(1+x)^{2}-\delta^{2}(1-x)^{2}\right]\left[1+x^{2}-\delta^{2}(1-x)^{2}\right]^{2}$, (9)
where $x = \omega_{L'}/\omega$ is the fraction of the initial phonon energy carried by the daughter longitudinal phonon
and $\delta = v_L/v_T$ is the ratio of the longitudinal to transverse phase velocities.
This approximation results in the outgoing phonons having an angular displacement from the initial phonon given by
$\cos(\theta_{L'})=\dfrac{1+x^{2}-\delta^{2}(1-x)^{2}}{2x}$, (10)
$\cos(\theta_{T'})=\dfrac{1-x^{2}+\delta^{2}(1-x)^{2}}{2\delta(1-x)}$. (11)
The energy distribution for the $L \to T_1' + T_2'$ branch is given by
$\Gamma_{L\to T_1'+T_2'} \sim \left(A+B\delta x-Bx^{2}\right)^{2}+\left[Cx(\delta-x)-\dfrac{D}{\delta-x}\left(x-\delta-\dfrac{1-\delta^{2}}{4x}\right)\right]^{2}$, (12)
where $A$, $B$, $C$ and $D$ are combinations of the elastic constants introduced below.
The constants $\lambda$ and $\mu$ are the Lamé constants, and the remaining constants are two of the three third-order elastic constants in an isotropic model (there is additionally a third independent third-order elastic constant, but it drops out of the equations).
This approximation results in the outgoing phonons having an angular displacement from the initial phonon given by Equations 13 and 14.
$\cos(\theta_{T_1'})=\dfrac{1-\delta^{2}(1-x)^{2}+\delta^{2}x^{2}}{2\delta x}$, (13)
$\cos(\theta_{T_2'})=\dfrac{1-\delta^{2}x^{2}+\delta^{2}(1-x)^{2}}{2\delta(1-x)}$, (14)
where the trivial substitution $x \to 1-x$ is made for the second phonon. With these closed-form energy densities and scattering angles, plots can be generated to aid understanding of these events (see Figure 9).
These angles are relative to the initial momentum vector and need to be converted into the Monte Carlo coordinate system. Polar coordinates are useful and angles are provided in this system. In addition to the rotation angles $\theta_{L'}$ and $\theta_{T'}$ (or $\theta_{T_1'}$ and $\theta_{T_2'}$), an additional azimuth angle about the initial momentum vector, which in the isotropic approximation is uniformly distributed over $[0,2\pi)$, is specified as $\theta_{2\pi}$.
In terms of the initial elevation and azimuth angles $\Phi$ and $\Theta$, scattering angles $\theta_{T_1'}$ and $\theta_{T_2'}$, and azimuth scattering angle $\theta_{2\pi}$, the final angles $\Phi_1$ and $\Theta_1$ that describe the phonon momentum vectors are
$\Phi_1=\arccos\left[-\sin\Phi\,\sin\theta_{T_1'}\cos\theta_{2\pi}+\cos\Phi\,\cos\theta_{T_1'}\right]$ (15)
and
$\Theta_1=\Theta-\mathrm{arctan2}\left[-\sin\theta_{T_1'}\sin\theta_{2\pi}\sin\Phi,\ \cos\theta_{T_1'}-\cos\Phi\cos\Phi_1\right]$. (16)
The final angles for the other phonon momentum vector are found by replacing $\theta_{T_1'}$ with $\theta_{T_2'}$ and $\theta_{2\pi}$ with $\theta_{2\pi}+\pi$.
iii.7 Phonon Losses at Surfaces
Eventually the phonons will interact at surfaces where they are instrumented, reflected back into the crystal, down converted to a lower energy or are lost to the environment.
Phonons can reflect either specularly or diffusively from the surfaces. Specular reflection can occur on smooth, untreated semiconductor wafer surfaces. It is the simplest to describe, with the incident and reflected angles relative to the normal being equal, $\theta_r = \theta_i$.
Diffusive scattering is also common on surfaces that have been damaged or roughened during fabrication. In the ideal case the scattering angle is described by Lambert’s cosine law, where the angular distribution scales like $\cos\theta$, with the angle $\theta$ measured relative to the normal. Scattering surfaces that satisfy Lambert’s cosine law scatter phonons isotropically regardless of their incident angle. Generally, diffusive scattering has been found to be a good model for phonon-surface reflections, likely due to some small roughness in the surface.
Phonons are, strictly speaking, eigenstates of a Hamiltonian that describes an infinitely large, periodic lattice. This description necessarily breaks down at the detector surfaces and there is some probability that phonons down convert to lower energy daughters at the surface. The details of this process would be highly material / surface treatment dependent, but the probability of this occurring for a particular phonon-surface interaction will be small for a high purity crystal. It is generally easiest to tune this probability by running numerous Monte Carlo simulations to find the best value. In the CDMS detectors we have measured a loss of 0.1% for each phonon-surface interaction Leman2011 ().
It is the goal of the experimental setup to absorb the phonons into some sensor and read them out into a data acquisition system. The details for the phonon-sensor interaction probability are again complicated and highly depend on the type of absorber and attachment / fabrication details. Acoustic mismatch theory provides a good starting point for analytic calculations and can be performed over both normal and non-normal incidence angles. The relevance of these calculations can be lost however when detector to detector variations are considered. Additionally the angular dependence of such calculations can be washed out when integrating over a distribution of phonon incident angles and phonon energies. In practice it is usually again easiest to tune this probability by running numerous Monte Carlo simulations and identifying the best fit value. In the CDMS detector there is additionally an amorphous silicon dielectric in between the crystal and thin metal films; we have empirically matched a phonon-aSi-aluminum interaction (details of the interaction are discussed in Section IV) probability of 33% with the remaining 67% diffusively scattering back into the crystal Leman2011 (); Leman2011_4 (). For any well designed detector, phonon absorption into the instrumentation sensors will dominate over other loss processes allowing the probability to be tuned by matching pulse decay times.
iii.8 Time Steps
It would be grossly inefficient to run all phonons with the same time step. Therefore we generate scattering times according to the distributions in Sections III.5 and III.6. The scattering and decay probabilities go like where is a combination of isotope scattering and anharmonic decay rates as given by Matthiessen’s rule .
This scattering time has to be compared with the time that the phonon will take to interact with a surface and the event that occurs soonest will be chosen for each individual phonon. If it is determined that the bulk interaction time is less than the surface interaction time, then it must be determined which event occurs based on their relative, frequency dependent rates. This can be done by drawing a uniform random number and comparing the rate of the process in question to the total rate. For example, if then an anharmonic decay is selected.
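As a concrete illustration of this selection step, here is a minimal Python sketch; the rate functions, their names, and their signatures are placeholders rather than anything defined in the text.

```python
import numpy as np

rng = np.random.default_rng()

def next_event(nu, rate_isotope, rate_anharmonic, time_to_surface):
    """Pick the next event for one phonon of frequency nu.

    rate_isotope / rate_anharmonic are callables returning rates in 1/s
    (hypothetical names); their sum follows Matthiessen's rule.
    """
    r_iso = rate_isotope(nu)
    r_anh = rate_anharmonic(nu)
    r_tot = r_iso + r_anh
    t_bulk = -np.log(rng.random()) / r_tot      # exponentially distributed bulk time

    if t_bulk >= time_to_surface:               # the surface interaction happens first
        return "surface", time_to_surface
    if rng.random() < r_anh / r_tot:            # compare the process rate to the total rate
        return "anharmonic_decay", t_bulk
    return "isotope_scatter", t_bulk
```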
iii.9 Random Number Sampling
Only in rare circumstances will a uniform random number be needed in Monte Carlo without some transformation. Often we are trying to draw the number out of the probability distribution function (PDF) . An efficient method for transforming to the desired probability distribution function is to integrate to find the cumulative distribution function . The cumulative distribution function (CDF) has the desirable property that it is bounded by as is . The CDF is then inverted and solved at to determine , NumericalRecipes ().
As an example, we first consider the bulk interaction rate where the probability of having an interaction is . After integration and inversion, the randomly generated scattering time is given by
$$t_{\text{scatter}} = -\tau\,\ln(u). \qquad (17)$$
As a second example we can consider diffuse scattering off the detector walls. In a spherical coordinate system where represents the angle from the surface normal, the PDF in this coordinate is . This is also easily integrated and inverted to yield . Care must be taken in this example however since is defined over the domain [-1,1] which modifies to .
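Both of these analytic inversions are one-liners in code; a Python sketch (the Lambertian form assumes the common convention $p(\theta) \propto \cos\theta\sin\theta$ on $[0,\pi/2]$, so $\theta = \arcsin\sqrt{u}$):

```python
import numpy as np

rng = np.random.default_rng()

def scattering_time(tau):
    """Draw t from p(t) ∝ exp(-t/tau) by inverting the CDF (Eq. 17)."""
    return -tau * np.log(rng.random())

def lambert_angle():
    """Polar angle from the surface normal for diffuse (Lambertian) reflection."""
    return np.arcsin(np.sqrt(rng.random()))     # CDF = sin^2(theta), inverted
```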
There are times when a PDF cannot be analytically integrated to yield a CDF or the CDF cannot be inverted. This is an unfortunate situation since an expensive rejection technique is required. This technique involves drawing a pair of uniform random numbers where and . If then is retained, otherwise is rejected and the process is repeated. The inefficiency of this method is related to the area coverage and will be successfully drawn in one of attempts.
The rejection method can be improved however for static distributions that do not change during the Monte Carlo run. An example includes diffusive scattering off of the side walls of the detector when using a spherical coordinate system. In this case, the Jacobian results in the PDF for which cannot be found analytically. The PDF can be integrated numerically to generate a CDF which is subsequently inverted. The process lacks a certain degree of elegance but is significantly more efficient than using a rejection method.
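A tabulated inverse-CDF sampler of the kind described here can be built once per geometry and reused for every draw; a minimal sketch (function and parameter names are illustrative):

```python
import numpy as np

def make_sampler(pdf, x_min, x_max, n=2048, seed=0):
    """Numerically integrate an (unnormalized) PDF into a CDF and sample by inversion."""
    x = np.linspace(x_min, x_max, n)
    cdf = np.cumsum(pdf(x))
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])       # normalize to [0, 1]
    rng = np.random.default_rng(seed)

    def sample(size=1):
        return np.interp(rng.random(size), cdf, x)  # invert the CDF by interpolation
    return sample

# e.g. a non-invertible angular density for side-wall scattering could be tabulated as
# sample_theta = make_sampler(lambda t: np.cos(t) * np.sin(t) ** 2, 0.0, np.pi / 2)
```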
iii.10 Numerical Constants for Phonon Simulations
Table 1 lists numerous constants that are used in the phonon simulations. They define the propagation dynamics, scattering and decay rates and energy carrier statistics.
Iv Quasiparticle Down Conversion
Phonons have some probability of entering and interacting in the thin aluminum films that are patterned on the surface. These aluminum films make up both the phonon collecting films and ionization ground lines; it is interactions in the former that are measured in the phonon sensors. If the phonons have energy greater than or equal to twice the superconducting gap then Cooper pairs can be broken, creating two quasiparticles. These quasiparticles will have high kinetic energy and some probability of scattering, shedding phonons. These daughter phonons thereby introduce a population of down-converted phonons back into the metal film. These phonons could break additional Cooper pairs in a cascade process that ceases when all phonons have energy below or have a probability of being reintroduced back into the crystal. The key points in the cascade process are summarized in the following list Kaplan1976 (); Kurakado1982 (); BrinkThesis ().
1. Quasiparticle recombination lifetimes are long compared to quasiparticle decay and quasiparticle absorption into the aluminum films and therefore the recombination processes can be ignored.
2. Quasiparticle decay via absorption of a phonon is suppressed at low temperatures due to a phonon density of states term , where is the phonon energy, in the Green’s function and therefore can be ignored.
3. Quasiparticle decay via emission of a phonon results in phonons with an energy distribution given by , where is the quasiparticle energy and the quasiparticle density of states at temperature =0 goes like and .
4. Phonons break Cooper pairs producing quasiparticles with an energy distribution given by , where . The phonon is completely absorbed so that the second quasiparticle has energy .
5. Phonons are lost to the crystal if they reach the aluminum / crystal interface.
iv.1 Monte Carlo process ordering
In the Monte Carlo, the physical processes can be ordered as follows:
1. If the phonon energy is sufficient , a quasiparticle pair is created with the distribution previously described.
2. If there is a quasiparticle with energy then the quasiparticle emits a phonon with energy distribution . Quasiparticles with energy would shed a phonon with and therefore provide an endpoint for quasiparticle generation. They may be removed from the Monte Carlo.
3. The probability of a phonon escaping the crystal is a function of the distance to reach the aluminum / crystal interface and the phonon / Cooper pair interaction length. Given the large number of phonons it is generally not necessary to track this process in detail and instead a simple model is sufficient. On average, phonons are assumed to populate the center of the aluminum film (), where is the aluminum thickness, and the phonons have 1/2 probability of traveling upwards and 1/2 probability of traveling downwards. For the downward going phonons there is an probability of reaching the aluminum / crystal interface before scattering, where nm is a characteristic phonon interaction length BrinkThesis (). The factor of is provided to integrate over different phonon incidence angles. The factor is replaced by for upward going phonons.
4. If the phonon has not been removed from Monte Carlo in step 2 or reintroduced into the crystal in step 3 then the process repeats at step 1.
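A sketch of the bookkeeping for steps 1-4 above is shown below (Python). The uniform energy splits are placeholders for the quasiparticle and phonon spectra quoted earlier, and the escape probability is a single tunable number rather than the geometric film-thickness factors, so only the loop structure is meant literally.

```python
import numpy as np

rng = np.random.default_rng()
GAP = 1.0          # superconducting gap (Delta), arbitrary units -- assumption
P_ESCAPE = 0.5     # per-interaction chance a phonon re-enters the crystal -- assumption

def cascade(e_initial):
    """Down-conversion cascade for one phonon absorbed in the film (structure only)."""
    phonons = [e_initial]          # phonons still inside the film
    quasiparticles = []            # quasiparticles that have relaxed below 3*GAP
    lost_to_crystal = []

    while phonons:
        e_ph = phonons.pop()
        if rng.random() < P_ESCAPE:            # step 3: phonon reaches the interface
            lost_to_crystal.append(e_ph)
            continue
        if e_ph < 2 * GAP:                     # sub-gap phonon: cannot break a pair
            continue
        # step 1: break a Cooper pair into two quasiparticles (uniform split here)
        e1 = GAP + rng.random() * (e_ph - 2 * GAP)
        for e_qp in (e1, e_ph - e1):
            # step 2: relax by phonon emission until the quasiparticle drops below 3*GAP
            while e_qp >= 3 * GAP:
                e_emitted = rng.random() * (e_qp - GAP)
                phonons.append(e_emitted)
                e_qp -= e_emitted
            quasiparticles.append(e_qp)        # endpoint: removed from the Monte Carlo
    return quasiparticles, lost_to_crystal
```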
V Charge Monte Carlo
v.1 Introduction
Accurate modeling of charge propagation is included in Monte Carlo for numerous reasons. First, the ionization signal, compared to the phonon signal, provides a discriminator between electron-recoil and nuclear-recoil events in the silicon and germanium detectors. Second, electron transport is described by a mass tensor, leading to electron transport which contains components oblique to the applied field. This description is necessary to explain and interpret signals in the primary and guard-ring ionization channels, which function as a fiducial volume cut Sasaki1958 (); Jacoboni1983 (). Third, for electron recoils in the germanium bulk, charges drifting through the detector produce a population of Luke phonons which contribute 56% of the total phonon signal at 3 V bias. This fraction is understood by considering that for every of gamma energy, an electron-hole pair is created which contributes of phonon energy; is the contribution of Luke phonons to the total phonon signal. These phonons’ spatial, time, energy and emitted-direction distributions should therefore be properly modeled in Monte Carlo of the detector response. Fourth, phonons created during electron-hole recombination at the surfaces contribute 13% of the total phonon signal but in a low frequency, ballistic regime that is used to provide a surface-event discriminator. This fraction is understood by considering that for every of gamma energy, of phonon energy is released at the surface; is the contribution of electron-hole gap energy phonons to the total phonon signal. These phonons also need to be properly modeled in a Monte Carlo.
Germanium has an anisotropic band structure, described schematically in Figure 10, in which the hole ground state is situated in the band's [000] direction and the electron ground state is in the L-band [111] direction. Hole propagation dynamics are relatively simple due to propagation in the band and the isotropic energy-momentum dispersion relationship . Electron propagation dynamics are significantly more complicated due to the band structure and anisotropic energy-momentum relationship. At low fields and low temperatures, electrons are unable to reach sufficient energy to propagate in the or X-bands, so these bands need not be considered in the Monte Carlo. The electron energy-momentum dispersion relationship is given by , where the longitudinal and transverse mass ratio .
This chapter will proceed by describing hole propagation and scattering, electron propagation and scattering utilizing a Herring-Vogt transformation and finally electron-hole recombination. Higher order mass terms and scattering processes which occur at high electric fields are discussed elsewhere in the literature AubryFortuna2010 ().
v.2 Holes: propagation and scattering with isotropic bands and isotropic phonon velocity
Hole propagation dynamics are described by momentum evolution in an electric field and propagation in position space , where is the effective carrier mass.
Charge carriers cannot accelerate indefinitely however, and the shedding of Neganov-Luke phonons Neganov1985 (); Luke1988 () limits their speed to around the longitudinal phonon phase velocity . As described in Figure 11, charge-phonon scattering is an elastic process, conserving both momentum (where and are the initial and final hole momentum vectors and is the phonon momentum vector) and energy (where and are the initial and final hole energies and is the phonon energy). The phonon energy-momentum dispersion relationship is given by . Due to the low carrier energy, Umklapp processes Kittel () in which , where is a reciprocal lattice vector, are suppressed.
Energy-momentum conservation coupled with the previous dispersion relationships leads to the final states and and
$$\cos(\phi) = \frac{k^2 - 2k_s\,(k\cos\theta - k_s) - 2\,(k\cos\theta - k_s)^2}{k\sqrt{k^2 - 4k_s\,(k\cos\theta - k_s)}}, \qquad (18)$$
where is the angular displacement between and , is the angular displacement between and , and is defined as . If we can determine a scattering rate and phonon angular displacement , then we can use these formulae to find the final states.
Fermi’s Golden Rule provides the transition probability per unit time per unit energy as
$$P_{k,\,k'\pm q} = \frac{2\pi V}{\hbar}\left|\left\langle \vec{k}\pm\vec{q}\,\middle|H\middle|\,\vec{k}\right\rangle\right|^2 \delta(E - E' \mp \hbar\omega). \qquad (19)$$
For phonon emission processes,
$$\left|\left\langle \vec{k}\pm\vec{q}\,\middle|H\middle|\,\vec{k}'\right\rangle\right|^2 = \frac{C^2\hbar}{2V\rho v_L}\, q\,(n_q + 1), \qquad (20)$$
where is the deformation potential constant and is the phonon occupation number given by . A characteristic length can be defined as . The transition probability can be integrated over and to obtain a scattering rate
$$1/\tau = \frac{1}{3}\,\frac{v_L}{l_0}\,\frac{k}{k_L}\left[1 - \frac{k_L}{k}\right]^3. \qquad (21)$$
The angular distribution then follows to be
$$P(k,\theta)\,d\theta = \frac{v_L}{l_0}\left(\frac{k}{k_L}\right)^2\left(\cos\theta - \frac{k_L}{k}\right)^2\sin\theta\, d\theta, \qquad (22)$$
where .
The phonon scatter azimuthal rotation angle is uniformly distributed about . The charge carrier azimuthal rotation angle is required to be .
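If one assumes Eq. (22) is the emission-angle density with support $\cos\theta \in [k_L/k,\,1]$ (integrating it over that range reproduces the $[1 - k_L/k]^3$ factor of Eq. (21)), the CDF inverts in closed form. A sketch of that draw (my derivation, not spelled out in the text):

```python
import numpy as np

rng = np.random.default_rng()

def luke_emission_angle(k, k_L):
    """Angle between the carrier momentum and the emitted Luke phonon.

    Assumes P(theta) ∝ (cos(theta) - k_L/k)^2 sin(theta) with cos(theta) in [k_L/k, 1].
    With c = cos(theta) and r = k_L/k, the CDF is ((c - r)/(1 - r))^3, so:
    """
    r = k_L / k
    c = r + (1.0 - r) * rng.random() ** (1.0 / 3.0)
    return np.arccos(c)
```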
It is critical that the time steps in the Monte Carlo are sufficiently small that the scattering rates are approximately constant during a time step. A method for ensuring this is discussed in Sections V.4 and V.5.
v.3 Electrons: propagation and scattering with anisotropic bands and isotropic phonon velocity
Electron propagation and scattering is complicated by the anisotropy of the electron bands but can be simplified by performing first a transformation into a space defined by the vectors (where is aligned with [111] and the other two are perpendicular) and then a Herring-Vogt transformation into a space where the electron bands are isotropic Herring1956 (); Jacoboni1983 (). The Herring-Vogt transformation is non-unitary and in the space is given by
$$T = T_{HV} = \begin{pmatrix} \sqrt{m_c/m_\parallel} & 0 & 0 \\ 0 & \sqrt{m_c/m_\perp} & 0 \\ 0 & 0 & \sqrt{m_c/m_\perp} \end{pmatrix}, \qquad (23)$$
but the speed of sound remains unchanged and isotropic.
In this space, , , , and the effective mass is given by . The change in velocity, in position space, is found by a back transform, incorporating both the mass and momentum transforms, and is given by .
After the Herring-Vogt transform is used to find the electric-field induced acceleration, the electrons shed phonons via the same prescription given to the holes. The phonon and electron momentum is found first in the Herring-Vogt space and the back transform is applied to return to position space. The only additional concern is correctly back transforming the phonon momentum due to the non-unitarity of the Herring-Vogt transform. To handle this, we maintain the phonon momentum magnitude (ie, conserve energy), but use the back transform to find the correct angular distribution.
v.4 Charge Time Steps, First Order
Determining an efficient time step size for charge transport is complicated, since the scattering time varies as the charge carrier accelerates. In this charge transport model it was decided to use sufficiently small such that is relatively constant at each iteration. The requirement, sufficiently small, in practice means that energy is conserved in the transport process and the following is a detailed description of how this requirement is implemented.
By observing the charge Monte Carlo over numerous field strengths and in germanium, it was observed that scattering limits the maximum possible charge momentum magnitude to about
$$k_{\max,\mathrm{el}} = 13 \times k_L\,\bigl|\vec{E}\bigr|^{1/3} \qquad (24)$$
$$k_{\max,\mathrm{h}} = 6.8 \times k_L\,\bigl|\vec{E}\bigr|^{1/3} \qquad (25)$$
for electrons and holes respectively. These momenta are then used to determine stepping times which conserve energy; the shed Luke phonon energy must equal the change in potential energy as the carrier drifts. By again running numerous Monte Carlo simulations, it is determined that a stepping time of is sufficiently small to conserve energy. Larger stepping times will result in a deficit of Neganov-Luke phonons being created.
v.5 Charge Time Steps, Second Order
The earlier described first order method can be improved upon by developing a second order method. The challenge is to efficiently and accurately determine the time until a charge sheds a Neganov-Luke phonon, which is challenging due to the changing interaction time, as the charge is accelerated by the field. An inverse CDF technique would be advantageous and one is developed here that adapts to the changing interaction time.
This is done by sampling the scattering rate at two different times and . We start with the differential equation and expand to next order in time
$$\frac{dN}{dt} = (-a_0 - a_1 t)\,N. \qquad (26)$$
Integrating, we can obtain
$$\ln N = -\left(a_0 t + a_1 t^2/2\right). \qquad (27)$$
This continues with the standard technique of solving for the CDF and inverting to obtain which can be solved for the scattering time . The positive root is retained as physical which provides a scattering time of
$$t = \frac{\sqrt{a_0^2 - 2a_1\ln u} - a_0}{a_1}. \qquad (28)$$
This is completed by recognizing that
$$a_0 = \left.\frac{dN}{dt}\right|_{t=t_0} = \tau_0^{-1} \qquad (29)$$
and
$$a_1 = \frac{1}{t_1 - t_0}\left(\left.\frac{dN}{dt}\right|_{t=t_1} - \left.\frac{dN}{dt}\right|_{t=t_0}\right) = \frac{1}{t_1 - t_0}\left(\tau_1^{-1} - \tau_0^{-1}\right). \qquad (30)$$
There is a maximum sampling time step () that can be used before the linear interpolation of scattering rates is inaccurate. As in the first order case, this results in lack of energy conservation. It is found that the electron sampling time step can be a factor of 15 greater than the time step shown in Equation 24 and the hole sampling time step by a factor of 20 greater than the time step in Equation 25. Given these sampling time steps, most Neganov-Luke phonons are produced in a time , hence this method is much more efficient than the first order method. The efficiency of this second order method also implies that pursuit of higher order methods will not yield much additional improvement in computational efficiency.
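In code the second-order draw is only a few lines; a Python sketch following Eqs. (26)-(30) (assuming the rate does not decrease between the two sampling points):

```python
import numpy as np

rng = np.random.default_rng()

def luke_scattering_time(tau0, tau1, t0, t1):
    """Time to the next Neganov-Luke emission, second-order inverse-CDF method."""
    a0 = 1.0 / tau0                               # Eq. (29)
    a1 = (1.0 / tau1 - 1.0 / tau0) / (t1 - t0)    # Eq. (30)
    u = rng.random()
    if abs(a1) < 1e-30:                           # constant-rate limit: first-order draw
        return -np.log(u) / a0
    return (np.sqrt(a0 ** 2 - 2.0 * a1 * np.log(u)) - a0) / a1   # Eq. (28)
```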
This method also couples well to a second order spatial transport method. The velocity form of the Verlet algorithm Verlet1967 (); Swope1982 () is convenient and given by a description that should look familiar.
This can be easily modified to incorporate the second order inverse CDF sampling method via the following procedure:
1. Make a guess for the step size , which we will call
2. From second order inverse CDF method, determine the randomly distributed scattering time
3. If
1. Save and for use in next iteration
4. Else, save and for use in next iteration
Since holes are described by a scalar mass, this procedure is straightforward. There is however a slight modification to this procedure that is useful for electrons. Due to the use of the Herring Vogt transform, it is generally easier to keep track of momentum in the Herring Vogt space, , rather than velocity. We also identify that the acceleration is given by , where the transform from the cartesian space to the space defined by basis vectors is shown explicitly.
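Putting the pieces together for a hole (scalar mass), one velocity-Verlet update shortened by the scattering draw might look like the sketch below; for electrons the same structure applies with the momentum tracked in the Herring-Vogt space as described above. The names and the acceleration callback are assumptions, not the paper's interface.

```python
import numpy as np

def verlet_step(x, v, accel, dt_guess, scatter_time):
    """One velocity-Verlet step, cut short if a Luke emission comes first.

    accel(x) returns the field-induced acceleration (e.g. q*E(x)/m for holes);
    scatter_time is the draw from the second-order inverse-CDF method.
    """
    dt = min(dt_guess, scatter_time)
    a0 = accel(x)
    x_new = x + v * dt + 0.5 * a0 * dt ** 2
    a1 = accel(x_new)
    v_new = v + 0.5 * (a0 + a1) * dt
    emit_phonon = scatter_time <= dt_guess     # emit, then redraw the scattering time
    return x_new, v_new, emit_phonon
```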
Vi Electric-Field Lookup
vi.1 Electric-Field Lookup from Triangulated Mesh
A numerical electric field model is necessary for the charge transport described in Section V. The simplest model is a constant, longitudinally directed field. However it may be desirable to include fringing fields and details from the electrode structure. A more accurate model will utilize a triangulated mesh. A 3-d mesh contains nodes, with each mesh node containing an associated electric potential . At points other than a mesh node, the potential must be interpolated. The MATLAB programming language offers a few options for this interpolation, the fastest of which utilizes a barycentric-coordinate linear interpolation via the TriScatteredInterp class. The Computational Geometry Algorithms Library (CGAL) is available for C++ cgal (), though this paper will be presented for a MATLAB implementation. Furthermore, the barycentric transformation involves a linear transformation which implies that the electric field is constant within a triangulation, a property which can be exploited to speed up computation.
MATLAB's method of looking up the potential, while very convenient, is not efficient considering the number of repeated field queries that occur before the carrier has moved to a location with a differing field. Efficiency can be improved by exploiting the fact that a charge remains within its triangle for numerous iterations and that the field is constant within the triangulation. By contrast, MATLAB solves for the potential at every iteration in its lookup procedure, which is slow. These repeated searches can be avoided, but at the expense of significant code complexity. The effort is justified, however, as charge transport imposes a dominant computational expense in the Monte Carlo.
vi.2 Barycentric Coordinates
Given a triangulation (the mesh is made of tetrahedra in 3-space but the term triangulation often persists) with four node points , and , the arbitrary point can be described by the barycentric coordinates , and where
$$\mathbf{r} = \lambda_1\mathbf{r}_1 + \lambda_2\mathbf{r}_2 + \lambda_3\mathbf{r}_3 + \lambda_4\mathbf{r}_4. \qquad (31)$$
The additional constraint is imposed that
$$\lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 = 1. \qquad (32)$$
The barycentric coordinates become more intuitive when thought of as area (volume in 3-d) coordinates (see Figure 12). In this paradigm, consider the 2-d node points , and along with the probe point . To start with, let’s normalize the area enclosed by , and to (this normalization is equivalent to the constraint ). Then we can consider the three different areas enclosed by 1) , and (), 2) , and () and 3) , and (). It turns out that these areas (, and ) are identically equal to the barycentric coordinates , and , providing a quick and intuitive interpretation of the barycentric coordinates. The process and interpretation is the same in 3-d when volume is substituted for area. It is not actually recommended to calculate through this procedure but to instead follow the procedure in Section VI.3.
vi.3 Barycentric Coordinate Formulae
In this section we derive formulae useful for solving the barycentric coordinates and electric-potential. We start again with the definitions given by equations 31 and 32. After separating equation 31 into the , and components we can solve for the through the following linear procedures and the formula is written out explicitly below.
$$T = \begin{pmatrix} x_1 - x_4 & x_2 - x_4 & x_3 - x_4 \\ y_1 - y_4 & y_2 - y_4 & y_3 - y_4 \\ z_1 - z_4 & z_2 - z_4 & z_3 - z_4 \end{pmatrix}. \qquad (33)$$
$$\begin{pmatrix} \lambda_1 \\ \lambda_2 \\ \lambda_3 \end{pmatrix} = T^{-1}(\mathbf{r} - \mathbf{r}_4) = T^{-1}\begin{pmatrix} x - x_4 \\ y - y_4 \\ z - z_4 \end{pmatrix}, \qquad (34)$$
where . Explicitly we can write out
$$T^{-1} = \frac{1}{\det(T)}\begin{pmatrix}
(y_2-y_4)(z_3-z_4)-(y_3-y_4)(z_2-z_4) & (z_2-z_4)(x_3-x_4)-(z_3-z_4)(x_2-x_4) & (x_2-x_4)(y_3-y_4)-(x_3-x_4)(y_2-y_4) \\
(y_3-y_4)(z_1-z_4)-(y_1-y_4)(z_3-z_4) & (z_3-z_4)(x_1-x_4)-(z_1-z_4)(x_3-x_4) & (x_3-x_4)(y_1-y_4)-(x_1-x_4)(y_3-y_4) \\
(y_1-y_4)(z_2-z_4)-(y_2-y_4)(z_1-z_4) & (z_1-z_4)(x_2-x_4)-(z_2-z_4)(x_1-x_4) & (x_1-x_4)(y_2-y_4)-(x_2-x_4)(y_1-y_4)
\end{pmatrix}. \qquad (35)$$
where .
The potential is simple to solve for in the barycentric coordinate system and equal to . This procedure is more obvious when one considers the connection to area coordinates.
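Equations (31)-(34) translate directly into a few lines of code; a Python sketch (NumPy's linear solver plays the role of the explicit inverse written out in Eq. (35)):

```python
import numpy as np

def barycentric(nodes, r):
    """Barycentric coordinates of r in the tetrahedron given by 4 corner points (Eqs. 33-34)."""
    r1, r2, r3, r4 = (np.asarray(p, dtype=float) for p in nodes)
    T = np.column_stack((r1 - r4, r2 - r4, r3 - r4))
    lam = np.linalg.solve(T, np.asarray(r, dtype=float) - r4)
    return np.append(lam, 1.0 - lam.sum())        # lambda_4 from the constraint (Eq. 32)

def potential(nodes, node_potentials, r):
    """Linearly interpolated potential: V = sum_i lambda_i * V_i."""
    return float(np.dot(barycentric(nodes, r), node_potentials))

def contains(nodes, r, tol=1e-12):
    """r is inside (or on) the tetrahedron iff every barycentric coordinate is >= 0."""
    return bool((barycentric(nodes, r) >= -tol).all())
```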
vi.4 Barycentric Coordinate Procedures and Shortcuts
The procedure for finding the tetrahedron in which a charge resides is not unique, and a canned MATLAB procedure (or the CGAL library in C++) can be utilized for this step. After the tetrahedron in which the carrier resides is determined, the electric field is computed. This can be performed by probing the potential at four locations ( and ) and computing gradients. The drawback is that this procedure requires conversion to barycentric coordinates and an electric-potential lookup for three additional points. These steps can be eliminated with some simple derivations that are outlined below. Most of these steps provide a conceptual framework and only the last step is actually computed.
First we consider two points and and find their associated barycentric coordinates and .
$$\begin{pmatrix} \lambda'_1 \\ \lambda'_2 \\ \lambda'_3 \end{pmatrix} = T^{-1}\begin{pmatrix} x' - x_4 \\ y' - y_4 \\ z' - z_4 \end{pmatrix}. \qquad (36)$$
and
$$\begin{pmatrix} \lambda''_1 \\ \lambda''_2 \\ \lambda''_3 \end{pmatrix} = T^{-1}\begin{pmatrix} x'' - x_4 \\ y'' - y_4 \\ z'' - z_4 \end{pmatrix}, \qquad (37)$$
where, as always, is constrained by . Next we solve for the potentials and
$$V' = \lambda'_1 V_1 + \lambda'_2 V_2 + \lambda'_3 V_3 + \lambda'_4 V_4 = \boldsymbol{\lambda}'\cdot\mathbf{V} \qquad (38)$$
and
$$V'' = \lambda''_1 V_1 + \lambda''_2 V_2 + \lambda''_3 V_3 + \lambda''_4 V_4 = \boldsymbol{\lambda}''\cdot\mathbf{V}.$$
|
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
I posted a question to the R-Mixed-Model mailing list but have not had any responses yet:
—————————————————————————————
Dear All,
I wonder if anybody has tried to make glmmBUGS work with JAGS. My
attempt was not successful.
Here is the simple example I copied from the glmmBUGS Vignettes:
------------------------
library(MASS)
data(bacteria)
bacterianew <- bacteria
bacterianew$yInt = as.integer(bacterianew$y == "y")
levels(bacterianew$trt) <- c("placebo", "drug", "drugplus")
library(glmmBUGS)
bacrag <- glmmBUGS(formula = yInt ~ trt + week, data=bacterianew, effects = "ID", modelFile="model.bug", family="bernoulli")
names(bacrag$ragged)
source("getInits.R")
startingValues = bacrag$startingValues
------------------------
With WinBUGS, it runs well:
------------------------
library(R2WinBUGS)
bacResult1 = bugs(bacrag$ragged, getInits, model.file="model.bug", n.chain=3,
n.iter=2000, n.burnin=100, parameters=names(getInits()), n.thin=10)
------------------------
But with JAGS, I got error message "Error in FUN(50L[[1L]], ...) :
invalid first argument"
------------------------
library(R2jags)
jags.parms=names(getInits())
bacResult = jags(data=bacrag$ragged, n.chain=3, n.iter=2000, model.file=model.bug)
------------------------
I hope somebody can help me figuring out how to make this work. Many thanks.
Best,
Shige
---------------------------------------------------------------------------------------
Here is the answer from Jens Åström
---------------------------------------------------------------------------------------
Hi! Not an expert by any means, but I got it to run by doing this:
1) change inprod2 to inprod in the model file (JAGS does not have an inprod2 function)
2) change ~dflat() to another uninformative prior, e.g. ~dnorm(0.0,1.0E-6) (JAGS does not have a dflat distribution)
3) specify/compile the model with e.g.
bac.jags <- jags.model("model.bug", data=bacrag$ragged, n.chains=4)
4) update it, update(bac.jags,1000)
5) Collect coda samples,
bacResult<-coda.samples(temp,names(getInits()),n.iter=10000,thin=10)
This seemed to converge well, (gelman.diag(bacResult))
Good luck
----------------------------------------------------------------------------------------
It works.
|
# Covariance between two sample means of correlated data
I have two sets of random data $X=\{x_1,...,x_N\}$ and $Y\{y_1,...,y_N\}$ both of length $N$. The sets are autocorrelated such that the correlation between $x_i$ and $x_j$ depends only on $|i-j|$. From both of these I can find the sample mean, $$\bar{X} = \frac{1}{N}\sum_{i=1}^N x_i$$ and similarly for $Y$. I believe I can find the variance of each mean as follows, $$\text{var}(\bar{X}) = \frac{S^2}{N}\frac{N-1}{\frac{N}{\gamma_2}-1}$$ where $S^2 = \frac{1}{N}\sum_{i=1}^N(x_i-\bar{X})^2$ and $\gamma_2 = 1+2\sum_{j=1}^{N-1}(1-j/N)\rho_j$, with $\rho_j$ being the autocorrelation function at a lag $j$. My question is this. What is $\text{cov}(\bar{X},\bar{Y})$ if the data sets are also correlated? The correlation between the sets should be the same for all pairs i.e. $\text{cov}(x_i,y_i) = \text{cov}(x_j,y_j)$. Will it be some generalisation of $\text{var}(\bar{X})$ or can I calculate it from the variances of the means and the correlation alone?
|
# CS349W Project3 jGestures
CS349W project 3 writeup
## Introduction
We have developed jGestures, a jQuery plugin that enables mouse gestures in web applications. Mouse gestures have been available in traditional single-user applications, and even some browsers (e.g. Firefox), but only for browser commands (e.g. next/previous page, home, etc). jGestures lets web developers integrate mouse gestures into their web applications easily and efficiently. Its main features are:
• Runs completely on the client side, using JavaScript.
• Highly configurable: Developers specify their own gestures, can customize the way gestures are drawn, and can tweak the recognition algorithm.
• Fast, small code footprint (3.6KB minified).
• Easy to integrate as a jQuery plug-in.
With its default configuration, the user draws a gesture with the mouse by pressing and holding the left button. The gesture is displayed on the screen as the user draws it. To finish a gesture, the user releases the mouse button. jGestures then tries to recognize the gesture, and initiates its associated action if the gesture is successfully recognized. To provide visual feedback to the user, the gesture changes its color to green if it is recognized, or to red if it is not, and disappears 200ms after the user finishes it.
## Developer InterfaceEdit
### Specifying gesturesEdit
jGestures allows the web developer to specify a series of custom gestures to recognize in a flexible way, using the addGesture method:
$(this).addGesture(points, directionSensitivity, proportionSensitivity, startSensitivity, name, handler);

The points argument specifies the sequence of points that form the gesture as an array of x-y coordinates. It can be simple, like a left-to-right stroke ([[0,0],[1,0]]), or more complex like a rectangle or circle. The coordinates can be in any system (e.g. [[0.5,0.5],[1000.5,0.5]] would also specify a left-right stroke).

The next three options specify the sensitivity of the gesture to different aspects:

• directionSensitivity: Whether the direction in which the gesture is drawn matters
• proportionSensitivity: Whether the proportions of the gesture matter
• startSensitivity: Whether the starting point of the gesture matters (useful for closed gestures, e.g. boxes or circles)

The name argument specifies a name for the gesture. Finally, handler is the event handler that will be called if the gesture is recognized. The handler function should take two arguments: the gesture detected, and the sequence of points captured.

### Integrating jGestures

Integrating jGestures with your web application is very simple:

1. Include the jQuery and jGestures JavaScript files.

<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="jquery.gestures.js"></script>

2. Optionally, modify the default configuration.

$(this).gesturesConfig().strokeWidth = 5;

3. Register your gestures and their handlers.

var gHandler = function (gesture,points) {
$(this).addGesture( [[0,0], [0,1], [1,1], [1,0], [0,0]], false, false, false, "Box", gHandler );
$(this).addGesture(
|
# Prove that the series $\sum_{n=0}^\infty \frac{(-1)^n \cdot x^{2n}}{(2n)!}$ represents $\cos x$ for all values of $x$
guys.
The question is as stated in the title: prove that the series $\sum_{n=0}^\infty \frac{(-1)^n \cdot x^{2n}}{(2n)!}$ represents $\cos x$ for all values of $x$
My doubt is quite theoretical:
I did this exercise the following way:
$\cos x = \frac{d}{dx} \sin x \therefore \cos x = \frac{d}{dx} \sum_{n=0}^\infty \frac{(-1)^n \cdot x^{2n+1}}{(2n+1)!}$
My doubt is right here. According to the book, when we take the derivative here we get $\sum_{n=0}^\infty \frac{(-1)^n \cdot x^{2n}}{(2n)!}$, but shouldn't we get $\sum_{n=1}^\infty \frac{(-1)^n \cdot x^{2n}}{(2n)!}$ ?
When my teacher was teaching how to differentiate series, he said that when you did, you always had to add one to the index.
Am I missing something here?
Thanks in advance.
Pedro.
• I have no idea what your teacher meant. The book's answer is right, as you can check by doing the differentiation. Oct 21, 2015 at 2:44
• Say you had the following series: $\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n, |x| < 1$. If we take the derivative on both sides: $\frac{1}{(1-x)^2} = \sum_{n=1}^\infty n \cdot x^{n-1}, |x| < 1$ That increase on when the $n$ starts. Is it wrong? Oct 21, 2015 at 2:48
• The reason is wrong (and very confusing) although in that example the answer is right. The best way to write the derivative, for a first step, is$$\sum_{n=0}^\infty nx^{n-1}\ .$$You should be able to see why this is the same as your answer in this example, and why it is **not** the same in the example involving $\sin$ and $\cos$. Oct 21, 2015 at 2:50
• If your teacher is saying you always add $1$ to the starting index, that is just plain wrong. Oct 21, 2015 at 2:54
## 3 Answers
That "rule" is deceptive; I will use an example to demystify it.
Note that formally we have $D\sum_{k=1}^{n}x^{k} = D(x + x^{2} + \cdots + x^{n}) = 1 + 2x + \cdots + nx^{n-1} = \sum_{k=1}^{n}kx^{k-1};$ no need to apply the rule here. But $D\sum_{k=0}^{n}x^{k} = D(1 + x + \cdots + x^{n}) = 1 + \cdots + nx^{n-1} = \sum_{k=1}^{n}kx^{k-1}$; it does appear like "we add one to the index"!
Can you see now a why?
• I can. Thanks, mate! Oct 21, 2015 at 2:59
I think what your teacher is referring to is that the derivative of a constant is zero, and if the first term of the series is a constant it follows that the first term of the derivative of the series is 0, so you can just start the series at $n = 1$. What your teacher said is correct IF the first term of the series is a constant. For example if you take the derivative of the cosine series then it is ok. The first term of the sine series is $x$ which is not a constant with respect to $x$.
Expand $\sin(x)$ using Taylor series as follows:
$$\sin(x)=\sum_{k=0}^\infty\frac{(-1)^k}{(2k+1)!}x^{2k+1}=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\ldots$$
Given the above information you can write $\cos(x)$ as follows:
\begin{align} \cos(x)&=\frac{\mathrm d}{\mathrm dx}\sin(x)\\ &=\frac{\mathrm d}{\mathrm dx}\sum_{k=0}^\infty\frac{(-1)^k}{(2k+1)!}x^{2k+1}\\ &=\frac{\mathrm d}{\mathrm dx}\left(x-\frac{x^3}{3!}+\frac{x^5}{5!}-\ldots\right)\\ &=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\ldots\\ &=\sum_{k=0}^\infty\frac{(-1)^kx^{2k}}{(2k)!} \end{align}
|
# Notation convention for $\{1,\ldots,n\}$
Is there any convention for a notational shorthand for the set $$\{1,\ldots,n\}$$ (defined as $$\{k\in\mathbb{N} \mid k \le n\}$$), where $$n\in\mathbb{N}$$, that the majority of mathematicians are familiar with?
I find that in some cases in which these sets appear often in the same expression, which can reduce readability, or at least aesthetic cleanness; using some sort of abbreviation would alleviate that.
• In combinatorics it is sometimes written as $[n]$. – Mark Jan 7 at 19:48
• In combinatorial settings $[n]=\{1,2,\ldots ,n\}$ is commonly used. – Anurag A Jan 7 at 19:49
• Sometimes, $\overline{1,n}$ is used. – Litho Jan 7 at 20:04
I don't know how popular this is but I've seen the convention: $$[n]\equiv\{1,2,3,4,\ldots n\}$$
• You could say $$\{k\}_{k=1}^n$$. I saw this often when considering sets of data points, like below, but I see no reason the notation couldn't extrapolate to any set.
$$\{(x_1,y_1) \; , \; (x_2,y_2) \; , \; ... \; , \; (x_n,y_n)\} = \{(x_i,y_i)\}_{i=1}^n$$
• In combinatorics, apparently $$[n]$$ can be used to represent $$\{1,...,n\}$$ as touched on in the comments and by Archimedesprinciple.
• In general the notation $\left\{ f(k) \right\}_{k = 1}^{n}$ is used to denote a Sequence rather than a Set per se. There is no absolute way of defining a set but conventional analysis texts tend to use the notation that you have used. – user150203 Jan 8 at 4:40
In homotopy theory, both $$[n]$$ and $$\mathbf{n}$$ are common and, to a lesser extent, $$\underline{n}$$. None of this matters too much, as long as you define your choice of notation clearly in your writing.
|
## Weber's Problem with attraction and repulsion under Polyhedral Gauges
• Given a finite set of points in the plane and a forbidden region R, we want to find a point X not an element of int(R), such that the weighted sum to all given points is minimized. This location problem is a variant of the well-known Weber Problem, where we measure the distance by polyhedral gauges and allow each of the weights to be positive or negative. The unit ball of a polyhedral gauge may be any convex polyhedron containing the origin. This large class of distance functions allows very general (practical) settings - such as asymmetry - to be modeled. Each given point is allowed to have its own gauge and the forbidden region R enables us to include negative information in the model. Additionally the use of negative and positive weights allows to include the level of attraction or dislikeness of a new facility. Polynomial algorithms and structural properties for this global optimization problem (d.c. objective function and a non-convex feasible set) based on combinatorial and geometrical methods are presented.
|
## Introductory Algebra for College Students (7th Edition)
The length of the rectangle is $11$ inches and the width is $6$ inches.
We know that the formula for the perimeter of a rectangle is: $$P = 2l + 2w$$ We know from the problem that the perimeter is $34$ inches and that the length of the rectangle is $5$ inches more than its width. We put all this information together to get the following equation: $$34 = 2(w + 5) + 2w$$ We distribute what is in parentheses: $$34 = 2w + 10 + 2w$$ Combine like terms: $$34 = 4w + 10$$ Subtract $10$ from both sides to get $w$ on one side of the equation: $$24 = 4w$$ Divide both sides by $4$ to isolate $w$: $$w = 6$$ If we know that the length $l$ is $5$ more inches than the width $w$, then we can find $l$ with the following equation: $$l = 6 + 5$$ Add the right-hand side together to get the value for $l$: $$l = 11$$
|
# Statistics - Bias-variance trade-off (between overfitting and underfitting)
The bias-variance trade-off is the point where we are adding just noise by adding model complexity (flexibility). The training error goes down as it has to, but the test error is starting to go up. The model after the bias trade-off begins to overfit.
When the nature of the problem is changing the trade-off is changing.
## 3 - Formula
The ingredients of prediction error are actually:
• bias: the bias is how far off on the average the model is from the truth.
• and variance. The variance is how much that the estimate varies around its average.
Bias and variance together gives us prediction error.
This difference can be expressed in terms of variance and bias:
$e^2 = var(model) + var(chance) + bias^2$
where:
• $var(model)$ is the variance due to the training data set selected. (Reducible)
• $var(chance)$ is the variance due to chance (Not reducible)
• bias is the average of all $\hat{Y}$ over all training data sets minus the true Y (Reducible)
As the flexibility (order in complexity) of f increases, its variance increases, and its bias decreases. So choosing the flexibility based on average test error amounts to a bias-variance trade-off.
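The trade-off is easy to reproduce numerically. A minimal Python sketch (the smooth "truth", noise level, and degrees below are made up for illustration): fit polynomials of increasing degree and compare training and test error.

```python
import numpy as np

rng = np.random.default_rng(0)

# noisy samples of a smooth truth
f = lambda x: np.sin(2 * np.pi * x)
x_train, x_test = rng.random(30), rng.random(200)
y_train = f(x_train) + 0.3 * rng.normal(size=x_train.size)
y_test = f(x_test) + 0.3 * rng.normal(size=x_test.size)

for degree in (1, 3, 9):                      # increasing flexibility
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree}: train MSE {mse(x_train, y_train):.3f}, "
          f"test MSE {mse(x_test, y_test):.3f}")
```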
## 4 - Illustration
We want to find the model complexity that gives the smallest test error. When the nature of the problem is changing the trade-off is changing.
## 5 - Nature of the problem
When the nature of the problem is changing the trade-off is changing.
• the truth is wiggly and the noise is high, so the quadratic do the best
• the truth is smoother, so the linear model do really well
• the truth is wiggly and the noise is low, so the more flexible do the best
## 6 - Model Complexity is better/worse
Model Complexity = Flexibility
• The sample size is extremely large, and the number of predictors is small: Flexible is better. A flexible model will allow us to take full advantage of our large sample size.
• The number of predictors is extremely large, and the sample size is small: Flexible is worse. The flexible model will cause overfitting due to our small sample size.
• The relationship between the predictors and response is highly non-linear. A flexible model will be necessary to find the nonlinear effect.
• The variance of the error terms, i.e. sigma^2 = var(Epsilon) , is extremely high: Flexible is worse. A flexible model will cause us to fit too much of the noise in the problem.
|
# Math Help - Exponential Differentiation
1. ## Exponential Differentiation
In studying salmon populations, a model often used is the Ricker equation which relates the size of a fish population this year,
x to the expected size next year y. (Note that these populations do not change continuously, since all the parents die before the eggs are hatched.) The Ricker equation is
y = axe^(-bx) where a,b> 0
Find the value of the current population which maximizes the salmon population next year according to this model.
---> I differentiated it this far:
dy/dx = axe^(-bx)(-b) + e^(-bx)(a)
dy/dx = -axe^(-bx)b + ae^(-bx)
dy/dx = 0 = ae^(-bx) (-xb + 1)
in the question we are given that a,b>0 Could anyone please tell me how to further solve this question??
2. Originally Posted by ninja
in studying salmon populations, a model often used is the ricker equation which relates the size of a fish population this year,
x to the expected size next year y. (note that these populations do not change continuously, since all the parents die before the eggs are hatched.) the ricker equation is
y = axe^(-bx) where a,b> 0
find the value of the current population which maximizes the salmon population next year according to this model.
---> i differentiated it this far:
dy/dx = axe^(-bx)(-b) + e^(-bx)(a)
dy/dx = -abxe^(-bx) + ae^(-bx)
dy/dx = 0 = ae^(-bx) (-bx + 1)
$\textcolor{red}{x = \frac{1}{b}}$
...
3. Originally Posted by skeeter
...
i was just doing something wrong in the second derivative (to prove max) due to which it was turning out to be a minimum. thanks anyways for spending your time on it.
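For completeness, the second-derivative check (starting from the factored first derivative above):

$$y' = ae^{-bx}(1 - bx), \qquad y'' = ae^{-bx}(b^2x - 2b)$$

so at $x = \frac{1}{b}$, $y'' = ae^{-1}(b - 2b) = -abe^{-1} < 0$ since $a, b > 0$, confirming a maximum.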
|
# Igor Aleksandrovich Semaev
Professor
• [email protected]
• Phone+47 55 58 42 79
HIB - Thormøhlens gate 55
5006 Bergen
Postboks 7803
5020 Bergen
• (2020). Probabilistic analysis on Macaulay matrices over finite fields and complexity of constructing Gröbner bases. Journal of Algebra. 651-674.
• (2018). Separable Statistics and Multidimensional Linear Cryptanalysis. IACR Transactions on Symmetric Cryptology (ToSC). 79-110.
• (2016). Statistical and Algebraic Properties of DES. Lecture Notes in Computer Science (LNCS). 93-107.
• (2016). A combinatorial problem related to sparse systems of equations. Designs, Codes and Cryptography. 1-16.
• (2015). MaxMinMax problem and sparse equations over finite fields. Designs, Codes and Cryptography. 22 pages.
• (2015). An application of Combinatorics in Cryptography. Electronic Notes in Discrete Mathematics. 31-35.
• (2013). Improved agreeing-gluing algorithm. Mathematics in Computer Science. 321-339.
• (2011). Sparse Boolean equations and circuit lattices. Designs, Codes and Cryptography. 349-364.
• (2010). Sparse Boolean equations and circuit lattices. Designs, Codes and Cryptography. 16 pages.
• (2010). Methods to Solve Algebraic Equations in Cryptanalysis. Tatra Mountains Mathematical Publications. 1-29.
• (2009). Sparse algebraic equations over finite fields. SIAM journal on computing (Print). 388-409.
• (2008). Solving Multiple Right Hand Sides linear equations. Designs, Codes and Cryptography. 147-160.
• (2008). On solving sparse algebraic equations over finite fields. Designs, Codes and Cryptography. 47-60.
• (2006). The ANF of the Composition of Addition and Multiplication mod 2^n with a Boolean Function. Lecture Notes in Computer Science (LNCS). 112-125.
• (2006). An algorithm to solve the discrete logarithm problem with the number field sieve. Lecture Notes in Computer Science (LNCS). 174-190.
• (2005). The ANF of the composition of addition and multiplication mod 2(n) with a Boolean function. Lecture Notes in Computer Science (LNCS). 112-125.
# Calculate this sum
• August 28th 2009, 07:14 AM
dhiab
Calculate this sum
Calculate :
$S = \sum\limits_{p = 0}^n {C_p^n p^2 } \left( { - 3} \right)^p$
• August 28th 2009, 11:50 PM
Opalg
Start with the binomial expansion $(1+x)^n = \sum_{p=0}^n{n\choose p}x^p$. Differentiate, multiply by x, differentiate again, multiply by x again, then put x=–3.
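Carrying the hint through (a sketch, not part of the original reply): applying the operator $x\frac{d}{dx}$ twice to $(1+x)^n = \sum_{p=0}^n{n\choose p}x^p$ gives $\sum_{p=0}^n{n\choose p}p^2x^p = nx(1+x)^{n-2}(1+nx)$, so at $x=-3$ (for $n\ge 2$): $S = -3n(-2)^{n-2}(1-3n) = 3n(3n-1)(-2)^{n-2}$.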
|
If $\displaystyle log_4{5}=-{\frac {3} {2x}}$ , find the value of $\displaystyle log_{0.04}(8)=$
|
# Midterm review
The midterm will take place on Wednesday, October 24, during class (in the regular room). Sections covered: everything up until (and partially including) polynomials.
## 1 Proof techniques
Multiple techniques can be used in one proof.
• Induction (can use the successor function)
• Proving the contrapositive (to show A -> B, just show not B -> not A)
• Proof by contradiction (assume the opposite of the statement you're trying to prove, then reach a contradiction)
• When proving an equality, manipulate the expression on the left until it looks like the right (similar for inequalities)
• Proving both directions for if and only if statements
• Using properties of addition, multiplication, etc (the defining properties)
• Also: well-ordering principle
• For gcd: bezout's identity (linear combination)
• When proving set equality or inequality, focus on the elements of the sets (also applies to functions, relations)
## 2 Concepts seen in class
• Cardinality
• The pigeonhole principle
• Cantor's diagonal argument
• Proves that there is no bijection from $\mathbb{N}$ to $\mathbb{R}$
• Euclidean algorithm
• gcd
• Complex numbers
• Polar representation: $z = r\cos\theta + ir\sin\theta$ where $z = a + bi$ and $r = |z|$
• Exponentiation function: $e^{i\theta} = \cos\theta + i\sin\theta$
• Rings
• Addition (commutative, associative, neutral, inverse), multiplication (associative, neutral), distributivity
• Commutative ring: also has multiplicative commutativity
• field: every element has a multiplicative inverse in the field
• Power sets
• Prove that a function mapping a set to its power set is not bijective (even for the empty set!)
• Irrationality of $\sqrt{2}$
• Congruence relations
• Properties of equivalence relations: r (reflexive), s (symmetric), t (transitive)
• set of equivalence classes is a field iff $n$ is prime
• Fermat's little theorem
• Carmichael numbers (pseudoprimes)
## 3 Theorems seen in class
### 3.1 Fermat's little theorem
$a^p \equiv a \pmod p$. If $a \not \equiv 0 \pmod p$, then this is equivalent to $a^{p-1} \equiv 1 \pmod p$.
First, we prove the lemma that if $p$ is prime, then it divides the binomial coefficient $\displaystyle \binom{p}{k}$ for all $k = 1, \ldots p-1$. To prove that, expand it out to
$$\frac{p!}{k!(p-k)!} = \frac{p \cdot (p-1) \cdot \ldots \cdot (p - k + 1)}{k!}$$
Since the binomial coefficient is always an integer, the denominator must divide the numerator. Since $k < p$ and $p$ is prime, every prime factor of $k!$ is less than $p$, so $k!$ is coprime to $p$ and must divide the other part, $(p-1)\cdots(p-k+1)$. But then the factor of $p$ in the numerator is not cancelled by the denominator, so the binomial coefficient is a multiple of $p$; that is, $p$ divides it, which proves the lemma.
Now we can prove FLT using induction on $a$. The induction step involves using the binomial theorem.
### 3.2 Wilson's theorem
$p$ is prime $\iff$ $p \mid (p-1)! + 1$
The proof for this is really quite pretty. First, the $\leftarrow$ direction, using multiple contradictions and proving the contrapositive. First, we assume that $p \mid (p-1)! + 1$ where $p$ is not prime. Since $p$ is not prime, then $a \mid p$ for some $a$ less than $p$ and greater than 1 (factor). So $a$ divides $(p-1)!$ because $a$ is in there somewhere. If it also divides $(p-1)! + 1$ then it must divide the difference between that and $(p-1)!$, which is just 1. So that tells us that $a = 1$. But we chose $a > 1$. So we reach a contradiction, meaning that $a \nmid (p-1)! + 1$. But since $a \mid p$ and $p \mid (p-1)! + 1$, then, by the transitivity of $\mid$, we must have that $a \mid (p-1)! +1$. But we just showed that that is impossible, which means that our assumption - that a composite $p$ can divide $(p-1)! + 1$ - is not true. Consequently, this doesn't hold for composite $p$, which proves the contrapositive.
For the $\rightarrow$ direction: suppose $p$ is prime. Then $x^2 \equiv 1 \pmod p$ has two solutions, 1 and -1 (proved in assignment 2, question 7, using Euclid's lemma). So 1 and -1 are their own inverses. Now we look at the product $1 \cdot 2 \cdot \ldots \cdot (p-2) \cdot (p-1)$. Basically, every other factor pairs off with its distinct inverse, leaving only $p-1$ uncancelled. So $(p-1)! \equiv -1 \pmod p$ and so if we add one to it we get what we're trying to prove in congruence relation notation.
### 3.3 The fundamental theorem of arithmetic
Every non-zero number can be uniquely represented as a product of primes.
First, prove the existence property using the well-ordering principle and proof by contradiction. Assume there is a non-empty set of numbers that can't be written as a product of primes. The least element must be a product of two numbers. You reach a contradiction very quickly, for these two numbers can't be in the set or else the minimality of the least element would be contradicted; however, if they're not in the set, they can be written as a product of primes, and consequently so can their product.
Uniqueness uses Euclid's lemma and proof by induction. Not really that interesting.
### 3.4 Euclid's lemma
One formulation is:
Using only the basic properties of the gcd proved in class, (and not the fundamental theorem of arithmetic) show that if a $p$ is a prime and $p$ divides a product $ab$ of two integers, then $p$ necessarily divides either $a$ or $b$.
Proof:
Since $p\mid ab$ and $p \mid p$ (by the reflexivity of $\mid$), then $\gcd(p, ab) = p$ (since nothing greater than $p$ can divide $p$). We assume that $p \nmid a$ and $p \nmid b$. So $\gcd(p, a) = 1$ and $\gcd(p, b) = 1$. By theorem 9.1.2 in the notes (also known as Bézout's identity), there exist integers $x$ and $y$ such that $xp + ya = 1$ (from the fact that $\gcd(p, a) = 1$). If we multiply both sides of the identity by $b$, we get:
$$xpb + yab = b$$
Well, we know that $p \mid ab$, by a premise of this argument. From the definition of a $\gcd$ (definition 9.0.5 (2) in the notes), we know that $p \mid yab$. We also know that $p \mid xpb$, since that's just a multiple of $p$ (definition 9.0.5 (2) again). Then, by definition 9.0.5 (3), we know that $p \mid xpb + yab$, and by the identity above then $p \mid b$. Since this contradicts the premise that $p \nmid b$, then our assumption that $p\nmid a$ and $p\nmid b$ must be wrong, and so $p$ must divide either $a$ or $b$ (or both).
Alternatively, we could just assume, without loss of generality, that $p \nmid a$ and from that conclude that $p \mid b$. Whichever.
Taken from the practice midterm, question 4.
### 3.5 Infinitely many primes
There are infinitely many primes
Assume that the set of all the primes in the entire world is finite, represented in an ordered set as $P = \{p_1, \ldots, p_n\}$ (where $p_n$ is the largest prime number). Now consider the number $N = p_1\ldots p_n + 1$. No prime in $P$ divides $N$, because dividing $N$ by any $p \in P$ always leaves a remainder of 1. But every integer greater than 1 has a prime factor, so either $N$ is itself a prime not in $P$ (it is larger than $p_n$), or it has a prime factor not in $P$. Either way we contradict the assumption, which implies that the set of all the primes is not finite.
### 3.6 Chinese remainder theorem
Hope we don't have to prove this
### 3.7 Miller-Rabin
Take the sequence $a^{(p-1)}, a^{(p-1)/2}, \ldots$. Should be all 1's, until you get a -1, for all $a$.
Prove that if $p$ is prime and $x^2 \equiv 1 \pmod p$, then $x$ can only be 1 or -1 in the field. This is not generally (ever?) true when $p$ is composite. Then we just apply that recursively to each term in the sequence, since the first term in the sequence is congruent to 1 by FLT.
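A standard implementation of that check, as a sketch (random choice of bases; not from the course notes):

```python
import random

def miller_rabin(p, rounds=20):
    """Probabilistic primality test: write p - 1 = d * 2^s and inspect the sequence
    a^d, a^(2d), ..., a^((p-1)/2), a^(p-1) mod p described above."""
    if p < 2:
        return False
    if p in (2, 3):
        return True
    if p % 2 == 0:
        return False
    d, s = p - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, p - 1)
        x = pow(a, d, p)
        if x in (1, p - 1):
            continue                      # the rest of the sequence will be all 1's
        for _ in range(s - 1):
            x = pow(x, 2, p)
            if x == p - 1:
                break
        else:
            return False                  # a non-trivial square root of 1: composite
    return True
```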
### 3.8 Bézout's identity
$a$ and $b$ are non-zero integers. Then there exist integers $x$ and $y$ such that $ax + by = \gcd(a, b)$. Also, $\gcd(a, b)$ is the smallest number that can be written in this form (as a linear combination of $a$ and $b$).
Let $L$ be the set of all positive linear combinations of $a$ and $b$. $L$ is clearly not empty - for example, $a^2 + b^2$ is positive and is thus in the set. By the well-ordering principle, $L$ must have a smallest element. We denote that element by $m = ua + vb$ for some $u, v \in \mathbb{Z}$.
Now, we show that $m \mid a$ and $m \mid b$. We can write $a$ as $a = qm + r$ where $q \in \mathbb{Z}$ and $0 \leq r < m$. We can rewrite $r$ as $a - qm = a - q(ua + vb) = (1-qu)a - qvb$. $r$ is clearly a linear combination of $a$ and $b$. If it's greater than zero, then it is an element of $L$. However, since we selected $r$ to be strictly less than $m$, then $r$ being in $L$ would contradict the minimality of $m$. So we must have that $r = 0$ to avoid that contradiction. But then we have that $a = qm + 0 = qm$ and so $m \mid a$. In the same way, we get that $m \mid b$.
This shows that $m$ is a common divisor of $a$ and $b$. Now we have to show that $m$ is the greatest common divisor of $a$ and $b$. Let $e$ be any common divisor of $a$ and $b$. Then $e$ also divides any linear combination of $a$ and $b$, including $m$, and so $e \mid m$. Since $m > 0$, this means $e \leq m$, so no common divisor can exceed $m$. So the greatest common divisor of $a$ and $b$ has to be equal to the least element of $L$. This concludes the proof.
Sidenote: a question on the midterm asked for a proof of something closely related to this. If only I knew this then.
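For actually computing the Bézout coefficients, the extended Euclidean algorithm does the job; a small sketch (not part of the notes):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b), i.e. Bezout's identity."""
    if b == 0:
        return abs(a), (1 if a >= 0 else -1), 0
    g, x, y = extended_gcd(b, a % b)
    # here b*x + (a % b)*y = g, and a % b = a - (a // b) * b
    return g, y, x - (a // b) * y

# e.g. extended_gcd(240, 46) == (2, -9, 47), since 240*(-9) + 46*47 = 2
```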
|
## Intro. to Differential Equations
My intent is to create a thread for people interested in Differential Equations. However, I will explicitly state that I am only a student of this class myself and that many things could end up being incorrect or an improper way to present the material.
I will merely be going along with the class, mostly using excerpts and questions from the book, "Elementary Differential Equations and Boundary Value Problems: Seventh Edition," by William E. Boyce and Richard C. DiPrima. So truthfully, this is more for myself. Looking things up and explaining it to others seems to be the best way to learn.
If people have any questions or comments, feel free to share. Also, I know there are many knowledgeable people on this board, so be sure to correct me or make suggestions.
This will require knowledge of Calculus but don't be shy to ask if there is something that you are unsure of.
First, a little background;
What is a Differential Equation?
A Differential Equation is simply an equation containing a derivative.
Classifications:
Ordinary Differential Equations (ODE) - Equations that appear with ordinary derivatives (single independent variable, could have multiple dependent variables).
Examples:
$$\frac {dy} {dt} = ay - b$$
$$a \frac {dy_1} {dx} + b \frac {dy_2} {dx} + cy_1 + dy_2 = e$$
Partial Differential Equations (PDE) - Equations that appear with partial derivatives (multiple independent variable).
Examples:
$$\alpha^2 \left[ \frac {\partial^2 u(x,t)} {\partial x^2} \right] = \frac {\partial u(x,t)} {\partial t}$$
$$\frac {\partial^2 V(x,y)} {\partial x^2} + \frac {\partial^2 V(x,y)} {\partial y^2} = 0$$
Don't let any of this frighten you. Math is always scary when looked at a glance with a bunch of undefined variables.
Linear and Nonlinear
The ordinary differential equation:
$$F(t, y, y', ..., y^{(n)}) = 0$$
is said to be linear if F is a linear function of the variables y, y', ..., y^(n) (the dependent variable and its derivatives appear only to the first degree). Thus the general linear ordinary differential equation of order n is:
$$a_0 (t) y^{(n)} + a_1 (t) y^{(n-1)} + ... + a_n (t) y = g(t)$$
where (n) is not a power but denotes the nth derivative.
An example of a simple Nonlinear ODE would simply be:
$$y \frac {dy} {dx} = x^4$$
This concludes the introduction. I may or may not write the next chapter tonight. However, a question, does anyone know an easier way for writing math on the computer and one that looks less confusing. I know I will have difficulty finding some things, especially subscripts and superscripts. Anyone know a better way to denote these?
Admin Blog Entries: 5 Sounds great! Tutorials like this have been very successful here. How to make math symbols: http://physicsforums.com/announcement.php?forumid=73 You can make superscripts and subscripts by using these tags [ sup ] content [ /sup ] [ sub ] content [ /sub ] * no spaces
"This chapter deals with differential equations of the first order $$\frac {dy} {dt} = f(t,y)$$ where f is a given function of two variables. Any differentiable function y = Φ(t) that satisfies this equation for all t in some interval is called a solution." Linear Equations with Variable Coefficients Using the previous example for ODE (dy/dx = ay + b) and replacing the constants we write the more general form: $$\frac {dy} {dt} + p(t) y = g(t)$$ or $$y' + p(t) y + g(t) = 0$$ where p(t) and g(t) are given functions of the independent variable t. Special cases: If p(t) = 0 then, $$y' = g(t)$$ and the integral is easily taken; $$\frac {dy} {dt} = g(t)$$ $$\int (\frac {dy} {dt}) dt = \int g(t) dt$$ $$y = \int g(t) dt + C$$ If g(t) = 0, then, $$y' = p(t) y$$ and the integral is once more relatively easy to take; $$\frac {dy} {dt}= p(t) y$$ $$\int \frac {dy} {y} = \int p(t) dt$$ $$ln|y| = \int [p(t) dt] + C$$ $$e^{ln|y|} = e^{\int [p(t) dt] + C}$$ $$y = Ke^{\int [p(t) dt] + C}, K = ± e^{C}$$ However, if neither p(t) or g(t) are zero in the general equation, a function µ(t) (the integrating factor) is used to solve the equation; $$\mu (t) \frac {dy} {dt} + \mu (t) p(t) y = \mu (t) g(t)$$ where now the left hand side will be a known derivative $$\mu (t) \frac {dy} {dt} + \mu (t) p(t) y = \frac {d} {dt}[\mu (t)y]$$ so that, in theory, you end up with; $$\frac {d} {dt}[\mu (t)y] = \mu (t) g(t)$$ Since µ(t) must be carefully chosen to make the previous statement true, let us find it. $$\frac {d} {dt}[\mu (t)y] = \mu (t) \frac {dy} {dt} + \mu (t)p(t)y$$ $$\frac {d} {dt}[\mu (t)y] = \mu (t) \frac {dy} {dt} + \frac {d \mu (t)} {dt} y$$ Where the latter is simply the derivative of µ(t)y in general form. $$\mu (t) \frac {dy} {dt} + \mu (t) p(t) y = \mu (t) \frac {dy} {dt} + \frac {d\mu (t)} {dt} y$$ Subtracting µ(t)(dy/dt) from both sides $$\frac {d \mu (t)} {dt} y = \mu (t) p(t) y$$ Cancel y $$\frac {d \mu (t)} {dt} = \mu (t) p(t)$$ $$\frac {d \mu (t)} {\mu (t)} = p(t) dt$$ $$\int \frac {d \mu (t)} {\mu (t)} = \int p(t)d t$$ $$ln|\mu (t)| = \int p(t) dt$$ The constant C is arbitrary and can be dropped to form the equation, µ(t) = e∫[p(t)dt] So the integrating factor µ(t) can always be found by the last equation. Lets try some problems together. Ex1. y' + 2y = 3 In this equation p(t) = 2 and g(t) = 3. Since neither of them are zero, use an the integrating factor µ(t) to create a differentiable equation, µ(t)y' + µ(t)2y = µ(t)3 Solve µ(t)to be, µ(t) = e∫[p(t)dt] µ(t) = e∫[2dt] µ(t) = e2t Plug the value of µ(t) back into the equation to obtain, e2ty' + e2t2y = e2t3 Recognize that the left hand side of the equation is merely [ye2t]', [ye2t]' = d/dt[ye2t] = 3e2t ∫d/dt[ye2t] = ∫3e2t ye2t = (3/2)e2t + C y = (3/2) + Ce-2t Ex2. y' + (1/2)y = 2 + t µ(t)y' + (1/2)µ(t)y = (2 + t)µ(t) µ(t) = e∫[p(t)dt] = et/2 et/2y' + (1/2)et/2y = 2et/2 + tet/2 d/dt[et/2y] = 2et/2 + tet/2 et/2y = ∫[2et/2 + tet/2]dt et/2y = 4et/2 + ∫[tet/2]dt Using integration by parts, u = t, du = dt v = 2et/2, dv = et/2dt ∫[tet/2]dt = 2tet/2 - ∫[2et/2]dt ∫[tet/2]dt = 2tet/2 - 4et/2 + C et/2y = 4et/2 + 2tet/2 - 4et/2 = 2tet/2 + C y = 2t + Ce-t/2 For initial value problems it is easy to solve for C. Taking the last problem, solve if y(0) = 2 2 = 2(0) + Ce-(0)/2 = C C = 2 Therefore, y = 2t + 2e-t/2 That is enough for now. Here are some problems to practice on if you so wish. 1.) y' + 3y = t + e-2t 2.) 2y' + y = 3t, hint: rewrite to fit general equation y' + p(t)y = g(t) 3.)t3(dy/dt) + 4t2y = e-t, y(-1) = 0
## Intro. to Differential Equations
Thanks Greg, I will run through it tomorrow and change it to make it more readable.
Mentor Blog Entries: 9 Well I had to dig out my copy of Boyce and DiPrima (2nd Edition!) to follow your development; it all works out as you have presented it. I will follow along with you, relearning what I have not seen for a number of years, and perhaps be able to help you out if you hit some rough spots.
Mentor Blog Entries: 9 Here is solution to the first exercise.
Thank you for your participation. Your solution is in fact correct except that the constant is missing. No biggy, I always forget those too. Did you find it hard to follow without the book, and should I have presented this more clearly somehow? I'm glad to know that someone else knows this stuff. I have some trouble understanding how to find the interval for nonlinear functions for which a solution exists, so if I haven't figured it out by the time I do the write up, perhaps you can help.
Mentor Blog Entries: 9 It was very sloppy of me to leave off the constant, sorry about that. I was a bit confused by your presentation as it is light on connective text. Where you presented $$\frac {d} {dt}[\mu (t)y] = \mu (t) g(t)$$ I was thrown for a bit. My copy of B&D helped out. The fact is everything you wrote is absolutely correct. I have taken grad level ODE & PDE courses in the dim and distant past ('86-'88 time frame), so I should be able to dredge up some long buried knowledge to help out. I have always found differential equations to be interesting; you might say they are where math and reality meet. With a good background in Diff Eqs and some numerical methods you can do dang near anything. edit: corrected symbols
Mentor Blog Entries: 9 Here is the solution to the 3rd exercise.
Yes, your answer is correct. It actually took me a while to get it. Out of curiosity, why did you change to using the variable s? Are you just used to using it and forgot that it was in terms of t, or can this be done?
Mentor Blog Entries: 9 In the integral $$\int \mu (s) g(s)ds$$ the variable s is what is called a dummy variable; it can be anything. You will see this frequently.
Integral, how come your solutions don't show up?
Mentor Blog Entries: 9 They are PDFs; do you have Acrobat Reader installed?
|
# Word Embeddings
A key idea in the examination of text concerns representing words as numeric quantities. There are a number of ways to go about this, and we’ve actually already done so. In the sentiment analysis section words were given a sentiment score. In topic modeling, words were represented as frequencies across documents. Once we get to a numeric representation, we can then run statistical models.
Consider topic modeling again. We take the document-term matrix and reduce its dimensionality to just a few topics. Now consider a co-occurrence matrix: if there are $$k$$ words, it is a $$k \times k$$ matrix whose entry in row i and column j tells us how frequently word i occurs with word j. Just like in topic modeling, we could now perform some matrix factorization technique to reduce the dimensionality of the matrix [10]. Now for each word we have a vector of numeric values (across factors) to represent them. Indeed, this is how some earlier approaches were done, for example, using principal components analysis on the co-occurrence matrix.
Newer techniques such as word2vec and GloVe use neural net approaches to construct word vectors. Applied users can benefit from them without worrying about the details. Furthermore, the approach has been extended to create sentence and other vector representations [11]. In any case, with vector representations of words we can see how similar they are to each other, and perform other tasks based on that information.
A tired example from the literature is as follows: $\mathrm{king - man + woman = queen}$
So a woman-king is a queen.
Here is another example:
$\mathrm{Paris - France + Germany = Berlin}$
Berlin is the Paris of Germany.
The idea is that with vectors created just based on co-occurrence we can recover things like analogies. Subtracting the man vector from the king vector and adding woman, the most similar word to this would be queen. For more on why this works, take a look here.
## Shakespeare example
We start with some already tokenized data from the works of Shakespeare. We’ll treat the words as if they just come from one big Shakespeare document, and only consider the words as tokens, as opposed to using n-grams. We create an iterator object for text2vec functions to use, and with that in hand, create the vocabulary, keeping only those that occur at least 5 times. This example generally follows that of the package vignette, which you’ll definitely want to spend some time with.
load('data/shakes_words_df_4text2vec.RData')
library(text2vec)
## shakes_words
shakes_words_ls = list(shakes_words$word)
it = itoken(shakes_words_ls, progressbar = FALSE)
shakes_vocab = create_vocabulary(it)
shakes_vocab = prune_vocabulary(shakes_vocab, term_count_min = 5)

Let's take a look at what we have at this point. We've just created word counts; that's all the vocabulary object is.

shakes_vocab

Number of docs: 1
0 stopwords:  ...
ngram_min = 1; ngram_max = 1
Vocabulary:
            term term_count doc_count
   1:   bounties          5         1
   2:        rag          5         1
   3: merchant's          5         1
   4: ungovern'd          5         1
   5:   cozening          5         1
 ---
9090:         of      17784         1
9091:         to      20693         1
9092:          i      21097         1
9093:        and      26032         1
9094:        the      28831         1

The next step is to create the token co-occurrence matrix (TCM). The definition of whether two words occur together is arbitrary. Should we just look at the previous and next word? Five behind and forward? This will definitely affect results, so you will want to play around with it.

# maps words to indices
vectorizer = vocab_vectorizer(shakes_vocab)

# use window of 10 for context words
shakes_tcm = create_tcm(it, vectorizer, skip_grams_window = 10)

Note that such a matrix will be extremely sparse. Most words do not go with other words in the grand scheme of things. So when they do, it usually matters.

Now we are ready to create the word vectors based on the GloVe model. Various options exist, so you'll want to dive into the associated help files and perhaps the original articles to see how you might play around with it. The following takes roughly a minute or two on my machine. I suggest you start with n_iter = 10 and/or convergence_tol = 0.001 to gauge how long you might have to wait. In this setting, we can think of our word of interest as the target, and any/all other words (within the window) as the context. Word vectors are learned for both.

glove = GlobalVectors$new(word_vectors_size = 50, vocabulary = shakes_vocab, x_max = 10)

shakes_wv_main = glove$fit_transform(shakes_tcm, n_iter = 1000, convergence_tol = 0.00001)
# dim(shakes_wv_main)

shakes_wv_context = glove$components
# dim(shakes_wv_context)
# Either word-vectors matrices could work, but the developers of the technique
# suggest the sum/mean may work better
shakes_word_vectors = shakes_wv_main + t(shakes_wv_context)
Now we can start to play. The measure of interest in comparing two vectors will be cosine similarity, which, if you're not familiar, you can think of as similar to the standard correlation [12]. Let's see what is similar to Romeo.
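(Concretely, and this is a standard formula rather than anything specific to text2vec, the cosine similarity of two vectors $$a$$ and $$b$$ is $$\frac{a^\top b}{\lVert a\rVert\,\lVert b\rVert}$$, which equals the correlation once the vectors have been mean-centered.)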
rom = shakes_word_vectors["romeo", , drop = F]
# ham = shakes_word_vectors["hamlet", , drop = F]
cos_sim_rom = sim2(x = shakes_word_vectors, y = rom, method = "cosine", norm = "l2")
# head(sort(cos_sim_rom[,1], decreasing = T), 10)
romeo juliet tybalt benvolio nurse iago friar mercutio aaron roderigo
1 0.78 0.72 0.65 0.64 0.63 0.61 0.6 0.6 0.59
Obviously Romeo is most like Romeo, but after that comes the rest of the crew in the play. As this text is somewhat raw, this is likely due to names associated with lines in the play. As such, one may want to narrow the window [13]. Let's try love.
love = shakes_word_vectors["love", , drop = F]
cos_sim_rom = sim2(x = shakes_word_vectors, y = love, method = "cosine", norm = "l2")
# head(sort(cos_sim_rom[,1], decreasing = T), 10)
x
love 1.00
that 0.80
did 0.72
not 0.72
in 0.72
her 0.72
but 0.71
so 0.71
know 0.71
do 0.70
The issue here is that love is so commonly used in Shakespeare, it’s most like other very common words. What if we take Romeo, subtract his friend Mercutio, and add Nurse? This is similar to the analogy example we had at the start.
test = shakes_word_vectors["romeo", , drop = F] -
shakes_word_vectors["mercutio", , drop = F] +
shakes_word_vectors["nurse", , drop = F]
cos_sim_test = sim2(x = shakes_word_vectors, y = test, method = "cosine", norm = "l2")
# head(sort(cos_sim_test[,1], decreasing = T), 10)
x
nurse 0.87
juliet 0.72
romeo 0.70
It looks like we get Juliet as the most likely word (after the ones we actually used), just as we might have expected. Again, we can think of this as Romeo is to Mercutio as Juliet is to the Nurse. Let’s try another like that.
test = shakes_word_vectors["romeo", , drop = F] -
shakes_word_vectors["juliet", , drop = F] +
shakes_word_vectors["cleopatra", , drop = F]
cos_sim_test = sim2(x = shakes_word_vectors, y = test, method = "cosine", norm = "l2")
# head(sort(cos_sim_test[,1], decreasing = T), 3)
x
cleopatra 0.81
romeo 0.70
antony 0.70
One can play with stuff like this all day. For example, you may find that a Romeo without love is a Tybalt!
## Wikipedia
The following shows the code for analyzing text from Wikipedia, and comes directly from the text2vec vignette. Note that this is a relatively large amount of text (100MB), and so will take notably longer to process.
text8_file = "data/texts_raw/text8"
if (!file.exists(text8_file)) {
unzip("data/text8.zip", files = "text8", exdir = "data/texts_raw/")
}
wiki = readLines(text8_file, n = 1, warn = FALSE)
tokens = space_tokenizer(wiki)
it = itoken(tokens, progressbar = FALSE)
vocab = create_vocabulary(it)
vocab = prune_vocabulary(vocab, term_count_min = 5L)
vectorizer = vocab_vectorizer(vocab)
tcm = create_tcm(it, vectorizer, skip_grams_window = 5L)
glove = GlobalVectors$new(word_vectors_size = 50, vocabulary = vocab, x_max = 10)
wv_main = glove$fit_transform(tcm, n_iter = 100, convergence_tol = 0.001)
wv_context = glove\$components
word_vectors = wv_main + t(wv_context)
Let’s try our Berlin example.
berlin = word_vectors["paris", , drop = FALSE] -
word_vectors["france", , drop = FALSE] +
word_vectors["germany", , drop = FALSE]
berlin_cos_sim = sim2(x = word_vectors, y = berlin, method = "cosine", norm = "l2")
head(sort(berlin_cos_sim[,1], decreasing = TRUE), 5)
paris berlin munich germany at
0.7575511 0.7560328 0.6721202 0.6559778 0.6519383
Success! Now let’s try the queen example.
queen = word_vectors["king", , drop = FALSE] -
word_vectors["man", , drop = FALSE] +
word_vectors["woman", , drop = FALSE]
queen_cos_sim = sim2(x = word_vectors, y = queen, method = "cosine", norm = "l2")
head(sort(queen_cos_sim[,1], decreasing = TRUE), 5)
king son alexander henry queen
0.8831932 0.7575572 0.7042561 0.6769456 0.6755054
Not so much, though it is still a top result. Results are of course highly dependent upon the data and settings you choose, so keep the context in mind when trying this out.
Now that words are vectors, we can use them in any model we want, for example, to predict sentimentality. Furthermore, extensions have been made to deal with sentences, paragraphs, and even lda2vec! In any event, hopefully you have some idea of what word embeddings are and can do for you, and have added another tool to your text analysis toolbox.
10. You can imagine how it might be difficult to deal with the English language, which might be something on the order of 1 million words.
11. Simply taking the average of the word vector representations within a sentence to represent the sentence as a vector is surprisingly performant.
12. It's also used in the Shakespeare Start to Finish section.
13. With a window of 5, Romeo's top 10 includes others like Troilus and Cressida.
|
# Poynting's theorem
## Poynting’s Theorem
Suppose we have some charge and current configuration which produces fields $\mathbf{E}$ and $\mathbf{B}$. After a while, the charges move around.
The question is, how much work $dW$ is done by the electromagnetic forces in the interval $dt$?
To do this, we simply compute the work, which is
$dW = \mathbf{F} \cdot d \mathbf{l}=q(\mathbf{E}+\mathbf{v} \times \mathbf{B}) \cdot \mathbf{v} d t=q \mathbf{E} \cdot \mathbf{v} d t$
We can rewrite this in terms of charge and current densities. Swap out $q \rightarrow \rho d\tau$ and $\rho \mathbf{v} \rightarrow \mathbf{J}$.
$\frac{d W}{d t}=\int_{\mathcal{V}}(\mathbf{E} \cdot \mathbf{J}) d \tau$
So $\mathbf{E} \cdot \mathbf{J}$ is the work done per time, per unit volume, or the power per volume. We would like to know what this quantity is.
Begin with the Ampere-Maxwell’s Law:
$\nabla \times \mathbf{B}=\mu_{0} \mathbf{J}+\mu_{0} \varepsilon_{0} \frac{\partial \mathbf{E}}{\partial t}$
and so we can dot both sides with $\mathbf{E}$, using this equation to get rid of $\mathbf{J}$:
$\mathbf{E} \cdot \mathbf{J}=\frac{1}{\mu_{0}} \mathbf{E} \cdot(\nabla \times \mathbf{B})-\epsilon_{0} \mathbf{E} \cdot \frac{\partial \mathbf{E}}{\partial t}$
We now have to deal with two terms. The first we can use the following vector calculus identity:
$\nabla \cdot(\mathbf{A} \times \mathbf{B})=\mathbf{B} \cdot(\nabla \times \mathbf{A})-\mathbf{A} \cdot(\nabla \times \mathbf{B})$
and now plugging in the fields, we have
$\nabla \cdot(\mathbf{E} \times \mathbf{B})=\mathbf{B} \cdot(\nabla \times \mathbf{E})-\mathbf{E} \cdot(\nabla \times \mathbf{B})$
using Faraday’s Law ($\nabla \times \mathbf{E}=-\partial \mathbf{B} / \partial t$), it follows that
$\mathbf{E} \cdot(\nabla \times \mathbf{B})=-\mathbf{B} \cdot \frac{\partial \mathbf{B}}{\partial t}-\nabla \cdot(\mathbf{E} \times \mathbf{B})$
Using another calculus identity, we have:
$\mathbf{B} \cdot \frac{\partial \mathbf{B}}{\partial t}=\frac{1}{2} \frac{\partial}{\partial t}\left(B^{2}\right), \quad \text { and } \quad \mathbf{E} \cdot \frac{\partial \mathbf{E}}{\partial t}=\frac{1}{2} \frac{\partial}{\partial t}\left(E^{2}\right)$
And so
$\mathbf{E} \cdot \mathbf{J}=-\frac{1}{2} \frac{\partial}{\partial t}\left(\epsilon_{0} E^{2}+\frac{1}{\mu_{0}} B^{2}\right)-\frac{1}{\mu_{0}} \nabla \cdot(\mathbf{E} \times \mathbf{B})$
And plugging it into our original expression for work, then calling the divergence theorem
$\int_{\mathcal{V}}(\nabla \cdot \mathbf{v}) d \tau=\oint_{S} \mathbf{v} \cdot d \mathbf{a}$
on the second term allows to convert a volume integral into a surface integral. Finally, we have
$\frac{d W}{d t}=-\frac{d}{d t} \int_{\mathcal{V}} \frac{1}{2}\left(\epsilon_{0} E^{2}+\frac{1}{\mu_{0}} B^{2}\right) d \tau-\frac{1}{\mu_{0}} \oint_{\mathcal{S}}(\mathbf{E} \times \mathbf{B}) \cdot d \mathbf{a}$
This is the work-energy theorem of electrodynamics. The first term involves the total energy stored in the electromagnetic fields, whose density is
$u = \frac{1}{2} \left( \epsilon_0 E^2 + \frac{1}{\mu_0} B^2 \right)$
The second term is the rate at which energy is transported out through the surface. The quantity appearing in that surface integral is defined as the Poynting vector; it is interpreted as the energy per unit time, per unit area.
$\boxed{\mathbf{S} \equiv \frac{1}{\mu_{0}}(\mathbf{E} \times \mathbf{B})}$
So $\mathbf{S} \cdot d \mathbf{a}$ is the energy leaving surface $d \mathbf{a}$.
Finally, we can write the above equation into a more compact form:
$\frac{d W}{d t}=-\frac{d}{d t} \int_{\mathcal{V}} u d \tau-\oint_{\mathcal{S}} \mathbf{S} \cdot d \mathbf{a}$
Now what is the meaning of this equation? Imagine we do work on some charge configuration. Either the energy stored in the fields had to have decreased, or the energy must have flowed out through the surface.
The second interpretation could use a little more work. What does it mean for energy to leave a surface? After all, we said that the volume $\mathcal{V}$ is arbitrary, and $\mathcal{S}$ is only required to be the boundary of such a volume.
To be concrete, let’s say our system is a battery, and pick $\mathcal{V}$ to be the volume of the battery. In a circuit, the battery is clearly doing work to drive, say, a lightbulb (increasing $dW/dt$).
Image: Wikipedia
So where does the energy come from? If we say there aren't any fields stored in the battery, then energy really is leaving the battery to drive the circuit; in other words, the surface term must carry that energy out.
Finally, if $dW/dt = 0$, then using the divergence theorem again gives
$\int \frac{\partial u}{\partial t} d \tau=-\oint \mathbf{S} \cdot d \mathbf{a}=-\int(\mathbf{\nabla} \cdot \mathbf{S}) d \tau$
and removing the integrals gives us:
$\frac{\partial u}{\partial t}=-\nabla \cdot \mathbf{S}$
which is the continuity equation for energy! This says that energy is locally conserved!
If we compare this to the continuity equation for fluids, we see that the Poynting vector $\mathbf{S}$ really is the energy flux.
Image: Wikipedia. Dipole radiation of a dipole vertically in the page showing electric field strength (colour) and Poynting vector (arrows) in the plane of the page.
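As a quick numerical illustration of the definition $\mathbf{S} = (\mathbf{E}\times\mathbf{B})/\mu_0$, here is a small sketch; the field values are made up for the example, roughly a plane wave with $|\mathbf{B}| = |\mathbf{E}|/c$.

```python
import numpy as np

mu0 = 4e-7 * np.pi            # vacuum permeability (SI units)
c = 299_792_458.0             # speed of light

E = np.array([100.0, 0.0, 0.0])        # E along x, 100 V/m (assumed value)
B = np.array([0.0, 100.0 / c, 0.0])    # B along y, |B| = |E|/c

S = np.cross(E, B) / mu0      # Poynting vector
print(S)                      # points along +z; magnitude |E|^2/(mu0*c) ≈ 26.5 W/m^2
```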
|
## Big-O, Little-O, Theta, Omega
Big-O, Little-o, Omega, and Theta are formal notational methods for stating the growth of resource needs (efficiency and storage) of an algorithm. There are four basic notations used when describing resource needs. These are: O(f(n)), o(f(n)), $\Omega(f(n))$, and $\Theta(f(n))$. (Pronounced, Big-O, Little-O, Omega and Theta respectively)
### Formally:
"$T(n)$ is $O(f(n))$" iff for some constants $c$ and $n_0$, $T(n) <=c f(n)$ for all $n >= n_0$
"$T(n)$ is $\Omega(f(n))$" iff for some constants $c$ and $n_0$, $T(n)>=cf(n)$ for all $n >= n_0$
"$T(n)$ is $\Theta(f(n))$" iff $T(n)$ is $O(f(n))$ AND $T(n)$ is $\Omega(f(n))$
"$T(n)$ is $o(f(n))$" iff $T(n)$ is $O(f(n))$ AND $T(n)$ is NOT $\Theta(f(n))$
### Informally:
"$T(n)$ is $O(f(n))$" basically means that $f(n)$ describes the upper bound for $T(n)$
"$T(n)$ is $\Omega(f(n))$" basically means that $f(n)$ describes the lower bound for $T(n)$
"$T(n)$ is $\Theta(f(n))$" basically means that $f(n)$ describes the exact bound for $T(n)$
"$T(n)$ is $o(f(n))$" basically means that $f(n)$ is the upper bound for $T(n)$ but that $T(n)$ can never be equal to $f(n)$
#### Another way of saying this:
"$T(n)$ is $O(f(n))$" growth rate of $T(n)$ <= growth rate of $f(n)$
"$T(n)$ is $\Omega(f(n))$" growth rate of $T(n)$ >= growth rate of $f(n)$
"$T(n)$ is $\Theta(f(n))$" growth rate of $T(n)$ == growth rate of $f(n)$
"$T(n)$ is $o(f(n))$" growth rate of $T(n)$ < growth rate of $f(n)$
### An easy way to think about big-O
The math in big-O analysis can often intimidate students. One of the simplest ways to think about big-O analysis is that it is basically a way to apply a rating system to your algorithms (like movie ratings). It tells you the kind of resource needs you can expect the algorithm to exhibit as your data gets bigger and bigger. From best (least resource requirements) to worst, the rankings are: $O(1)$, $O(\log n)$, $O(n)$, $O(n \log n)$, $O( n^2 )$, $O(n^3)$, $O(2^n)$. Think about the graphs in the growth rate section and the way each curve looks. That is the most important thing to understand about algorithm analysis.
### What all this means
Let's take a closer look at the formal definition for big-O analysis:
"$T(n)$ is $O(f(n))$" if for some constants $c$ and $n_0$, $T(n) \le c\,f(n)$ for all $n \ge n_0$
The way to read the above statement is as follows.
• n is the size of the data set.
• $f(n)$ is a function that is calculated using n as the parameter.
• $O(f(n))$ means that the curve described by $f(n)$ is an upper bound for the resource needs of a function.
This means that if we were to draw a graph of the resource needs of a particular algorithm, it would fall under the curve described by $f(n)$. What's more, it doesn't need to be under the exact curve described by $f(n)$. It could be under a constant scaled curve for $f(n)$... so instead of having to be under the $n^2$ curve, it can be under the $10n^2$ curve or the $200n^2$ curve. In fact it can be any constant, as long as it is a constant. A constant is simply a number that does not change with n. So as $n$ gets bigger, you cannot change what the constant is. The actual value of the constant does not matter though.
The other portion of the statement, $n \ge n_0$, means that $T(n) \le c\,f(n)$ does not need to be true for all values of $n$. It means that as long as you can find a value $n_0$ for which $T(n) \le c\,f(n)$ is true, and it never becomes untrue for all $n$ larger than $n_0$, then you have met the criteria for the statement "$T(n)$ is $O(f(n))$".
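As a concrete (made-up) example, take $T(n) = 3n^2 + 5n$ and $f(n) = n^2$. Algebraically, $3n^2 + 5n \le 4n^2$ whenever $n \ge 5$, so $c = 4$ and $n_0 = 5$ work. The little sketch below only illustrates the inequality numerically over a range; it is not a proof.

```python
def T(n):
    return 3 * n**2 + 5 * n     # hypothetical running-time function

def f(n):
    return n**2                 # candidate upper-bound shape

c, n0 = 4, 5                    # witnesses chosen by the algebra above
print(all(T(n) <= c * f(n) for n in range(n0, 10_000)))   # True
```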
In summary, when we are looking at big-O, we are in essence looking for a description of the growth rate of the resource increase. The exact numbers do not actually matter in the end.
|
# Ngô Quốc Anh
## December 12, 2013
### An upper bound for the total integral of the Q-curvature in the non-negative Yamabe invariants
Filed under: Riemannian geometry — Tags: — Ngô Quốc Anh @ 6:24
As we have already discussed once that a natural conformally invariant in dimension four is the following
$\displaystyle Q_g=-\frac{1}{12}(\Delta\text{Scal}_g -\text{Scal}_g^2 +3|{\rm Ric}_g|^2)$
which is commonly referred to as the Q-curvature of the metric $g$, see this topic. Note that, under a conformal change of the metric $\widetilde g =e^{2u}g$, the quantity $Q$ transforms according to
$\displaystyle 2Q_{\widetilde g}=e^{-4u}(P_gu+2Q_g)$
where $P=P_g$ denotes the Paneitz operator with respect to $g$. Keep in mind that the Paneitz operator is conformally invariant in the sense that
$\displaystyle P_{\widetilde g}=e^{-4u}P_g$
for any conformal metric $\widetilde g =e^{2u}g$. For any $g$, the operator $P_g$ acts on a smooth function $u$ on $M$ via the following rule
$\displaystyle {P_g}(u) = \Delta _g^2u + {\rm div}\left( \Big( {\frac{2}{3}\text{Scal}_g\, g - 2{\rm Ric}_g}\Big)\, du\right)$
which plays a similar role as the Laplace operator in dimension two. Observe that $dv_{\widetilde g} = e^{4u}dv_g$, therefore, a simple calculation shows
$\displaystyle \int_M Q_{\widetilde g}dv_{\widetilde g}=\int_M Q_{\widetilde g}e^{4u}dv_g=\int_M Q_g dv_g.$
Hence the total integral $\int_M Q_g dv_g$ is conformally invariant.
Here we have already used the fact that $\int_M P_g(u)dv_g=0$ since, by the divergence theorem, we know that
$\displaystyle \int_M \Delta_g^2 u dv_g=0$
and
$\displaystyle \int_M {\rm div}\left( \Big( {\frac{2}{3}\text{Scal}_g g - 2{\rm Ric}_g}\Big) \nabla_g u\right) dv_g=0.$
We now cover the following beautiful result due to Gursky published in 1999 in CMP. Before doing so, let us denote by $\kappa_g$ the following
$\displaystyle \kappa_g = \int_M Q_g dv_g.$
We also denote by $\mathcal Y(g)$ the so-called Yamabe invariant given by
$\displaystyle \mathcal Y(g)=\inf_{\widetilde g = e^{2w}g}\left( \int_M \text{Scal}_{\widetilde g} dv_{\widetilde g}\right)\left( \int_M dv_{\widetilde g}\right)^{-1/2}.$
We shall prove
Theorem (Gursky). Let $(M^4, g)$ be a smooth compact four-dimensional Riemannian manifold. If $\mathcal Y(g) \geqslant 0$ then $\kappa_g \leqslant 8\pi^2$.
Gursky’s proof is quite nice since it makes use of the subcritical equations similarly to the Yamabe approach.
Proof. First, we let $u_p$ solve the following subcritical equation
$-6\Delta_g u_p + \text{Scal}_g\, u_p = \mu_p u_p^{p-1}$
for each $p \in [2,4)$. Since $\mu_p$ is continuous from the left and non-decreasing by Aubin’s result, we may choose a sequence $p_k \nearrow 4$ such that $\mu_k := \mu_{p_k} \nearrow \mathcal Y(g)$. Let $u_k=u_{p_k}$ and $g_k = u_k^2g$. Then the scalar curvature $\text{Scal}_k$ of $g_k$ is given by
$\displaystyle \text{Scal}_k=\mu_ku_k^{p_k-4}.$
If we let $E_k$ denote the trace-free Ricci tensor of $g_k$, then (and this is the key point) as $\kappa_g$ is a conformal invariant, we have
$\displaystyle \kappa_g = \kappa_{g_k}=\int_M \left(-\frac 14|E_k|^2+\frac 1{48}\text{Scal}_k^2 \right)dv_{g_k}.$
The trace-free Ricci tensor of a metric $g$ is defined to be the Ricci tensor $\text{Ric}_g$ minus its trace part. Mathematically, it is given by
$\displaystyle {\mathop \text{Ric}\limits^ \circ}_g= \text{Ric}_g -\frac{g}{4}\text{Scal}_g .$
Using the formula for $Q_g$, we obtain
$\begin{array}{lcl} {Q_g} &=& \displaystyle - \frac{1}{12}(\Delta\text{Scal}_g - {\text{Scal}_g^2} + 3|{\text{Ric}}|^2) \hfill \\ &=& \displaystyle - \frac{1}{12}\Delta {\text{Scal}}_g + \frac{1}{12}{{\text{Scal}}_g^2} - \frac{1}{4}|{\mathop \text{Ric}\limits^ \circ}_g + \frac{g}{4}{\text{Scal}}_g|^2 \hfill \\&=& \displaystyle - \frac{1}{12}\Delta {\text{Scal}}_g + \frac{1}{12}{{\text{Scal}}_g^2} - \frac{1}{4}|{\mathop \text{Ric}\limits^ \circ}_g|^2-\frac{1}{16}|\text{Scal}_g|^2 \hfill \\& =& \displaystyle - \frac{1}{12}\Delta {\text{Scal}}_g+ \frac{1}{48}{{\text{Scal}}_g^2} - \frac{1}{4}|{\mathop \text{Ric}\limits^ \circ}_g|^2. \end{array}$
which immediately implies that
$\displaystyle \int_M Q_g dv_g = \int_M \left( -\frac 14 |{\mathop \text{Ric}\limits^ \circ}_g|^2+\frac 1{48}\text{Scal}_g^2\right) dv_g.$
Using the formula for $\text{Scal}_k$ shown above, we can estimate
$\begin{array}{lcl} {\kappa _g} &=& \displaystyle - \frac{1}{4}\int_M {|{E_k}{|^2}d{v_{{g_k}}}} + \frac{1}{{48}}\int_M {S_k^2d{v_{{g_k}}}} \hfill \\ &\leqslant & \displaystyle\frac{1}{{48}}\int_M {{S_k}^2d{v_{{g_k}}}} \hfill \\ &=& \displaystyle\frac{1}{{48}}\mu _k^2\int_M {u_k^{2{p_k} - 4}d{v_g}} \hfill \\ &\leqslant &\displaystyle\frac{1}{{48}}\mu _k^2{\left( {\int_M {u_k^{{p_k}}d{v_g}} } \right)^{\frac{{2({p_k} - 2)}}{{{p_k}}}}}{\left( {\int_M {d{v_g}} } \right)^{\frac{{4 - {p_k}}}{{{p_k}}}}}. \end{array}$
Taking the limit as $k\to \infty$, we obtain
$\displaystyle \kappa_g \leqslant \frac 1{48} (\mathcal Y(g))^2.$
By the energy estimate of Aubin, $\mathcal Y(g) \leqslant \mathcal Y(g_{\mathbb S^4})=8\sqrt 6 \pi$. Thus, if $\mathcal Y(g) \geqslant 0$, we get
$\kappa_g \leqslant 8\pi^2$
as claimed.
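For the record, the last step is just the arithmetic

$\displaystyle \frac{1}{48}\left(8\sqrt{6}\,\pi\right)^2 = \frac{64\cdot 6}{48}\,\pi^2 = 8\pi^2.$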
|
# A new cardiac catheterization lab was constructed at Havea Heart Hospital. The investment for the...
A new cardiac catheterization lab was constructed at Havea Heart Hospital. The investment for the lab was \$450,000 in equipment costs and \$50,000 in renovation costs. A desired return on investment is 12 percent. Once the lab was constructed, 5,000 patients were served in the first year and were charged \$340 for each procedure. The annual fixed cost for the catheterization lab is \$1,000,000, and the variable cost is \$129 per procedure. What is the catheterization lab's profit? Did this profit meet its desired ROI?
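One way to read the numbers (a sketch, not an official answer key): revenue is 5,000 × \$340 = \$1,700,000; total cost is \$1,000,000 + 5,000 × \$129 = \$1,645,000; so the lab's profit is \$55,000. The desired return is 12 percent of the \$500,000 investment, i.e. \$60,000, so a \$55,000 profit falls just short of the desired ROI.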
|
# Need a nudge in the right direction - How do I find the total number of permutation with 3 consecutive characters?
Again, I really just want a nudge in the right direction. Possibly a large nudge, but not the straightforward answer.
I am trying to figure out how to solve Project Euler Problem 191.
I believe I have most of it figured out, but I cannot come up with an algorithm or pattern for the 3-consecutive-Absent days part. I can even find the number of permutations of the strings holding the total number of A-days constant (i.e. how many permutations with only 1 A-day, only 2 A-days, only 3 A-days, etc.), but I cannot figure out how many permutations have 3 consecutive A-days.
So, is there a mathematical formula for this? Is there a way to break this up so that I have "(some # permutations) +/- (another # of permutations) = (# of consecutive A-days)"? I believe this is a combinatorial problem, but maybe it falls under a different category.
Any help is appreciated. Thanks.
-
For each string length $n$, you want to keep track of six quantities: the number of "good" (i.e. prize-winning) strings of length $n$ which end in $i$ $A$s ($i = 0, 1, 2$) and contain $j$ $L$s ($j = 0, 1$). These six quantities for strings of length $n+1$ can be simply written in terms of those for length $n$.
-
+1: This should make it easy to actually write a program. – Aryabhata Mar 2 '11 at 19:02
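If you do want to see that state machine in code, here is a minimal sketch (names are arbitrary, and it deliberately prints only a small case so the full answer is not spoiled):

```python
def prize_strings(days):
    # state: (trailing run of A's: 0-2, number of L's used so far: 0-1)
    counts = {(a, l): 0 for a in range(3) for l in range(2)}
    counts[(0, 0)] = 1                      # the empty prefix
    for _ in range(days):
        new = {(a, l): 0 for a in range(3) for l in range(2)}
        for (a, l), n in counts.items():
            new[(0, l)] += n                # append O: resets the run of A's
            if a < 2:
                new[(a + 1, l)] += n        # append A: forbidden if it makes 3 in a row
            if l == 0:
                new[(0, 1)] += n            # append L: allowed at most once in total
        counts = new
    return sum(counts.values())

print(prize_strings(4))   # 43, small enough to verify by hand; use 30 for the problem
```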
Ignoring the L, you only need to count the number of strings in which the As appear doubly at the most.
To count this, first fix the number of Os and fix the number of As appearing singly (and so by implication the number of AA blocks).
Now write the Os with dashes between them and at either end, for instance with 3 Os.
_ O _ O _ O _
You now need to fill some of the dashes with the number of A and AA you chose.
Vary the number of Os and As and add up for the total.
Hope that helps.
-
|
# Linear Algebra question.
Gold Member
prove that a 2x2 complex matrix
$$A=\begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
is positive iff A=A*, where A* is the conjugate transpose.
I know that an operator (and so I think it also applies to a matrix) is positive when it equals SS* for some operator (matrix) S.
and A* equals
$$A^*=\begin{pmatrix} \bar{a} & \bar{c} \\ \bar{b} & \bar{d} \end{pmatrix}$$
where the bar stands for the complex conjugate.
but I don't know how to find an operator (matrix) which, multiplied by its conjugate transpose, equals A.
any pointers will be appreciated.
Last edited:
Staff Emeritus
Gold Member
What does it mean to say a matrix is positive? Do you mean positive definite? Are the two terms interchangeable?
Also, there's another definition that requires that all square sub-matrices have positive determinants - I don't know what this is called.
Gold Member
I know the definition of a positive operator (which you can apply to a matrix, because every operator can be represented by a matrix); the definition is as follows:
an operator P is positive if it can be represented by the equation P=SS*, where S is an operator.
The definition of positive definite requires that S be non-singular.
Staff Emeritus
|
# Kernel-based system identification with manifold regularization: A Bayesian perspective
### Abstract
This paper presents a nonparametric Bayesian interpretation of kernel-based function learning with manifold regularization. We show that manifold regularization corresponds to an additional likelihood term derived from noisy observations of the function gradient along the regressors graph. The hyperparameters of the method are estimated by a suitable empirical Bayes approach. The effectiveness of the method in the context of dynamical system identification is evaluated on a simulated linear system and on an experimental switching system setup. [Paper] [Code]
#### Reference
M. Mazzoleni, A. Chiuso, M. Scandella, S. Formentin, F. Previdi, "Kernel-based system identification with manifold regularization: A Bayesian perspective," in Automatica, doi: 10.1016/j.automatica.2022.110419, 2022.
#### Bibtex
@article{MAZZOLENI2022110419,
title = {Kernel-based system identification with manifold regularization: A Bayesian perspective},
journal = {Automatica},
volume = {142},
pages = {110419},
year = {2022},
issn = {0005-1098},
doi = {10.1016/j.automatica.2022.110419},
author = {Mirko Mazzoleni and Alessandro Chiuso and Matteo Scandella and Simone Formentin and Fabio Previdi},
}
|
Stochastic Variational Partitioned Runge-Kutta Integrators for Constrained Systems
# Stochastic Variational Partitioned Runge-Kutta Integrators for Constrained Systems
###### Abstract
Stochastic variational integrators for constrained, stochastic mechanical systems are developed in this paper. The main results of the paper are twofold: an equivalence is established between a stochastic Hamilton-Pontryagin (HP) principle in generalized coordinates and constrained coordinates via Lagrange multipliers, and variational partitioned Runge-Kutta (VPRK) integrators are extended to this class of systems. Among these integrators are first and second-order strongly convergent RATTLE-type integrators. We prove strong order of accuracy of the methods provided. The paper also reviews the deterministic treatment of VPRK integrators from the HP viewpoint.
## 1 Introduction
Since the foundational work of Bismut [1981], the field of stochastic geometric mechanics is emerging in response to the demand for tools to analyze the structure of continuous and discrete mechanical systems with uncertainty [Ha2007, ; VaCi2006, ; CiLeVa2008, ; Bi1981, ; MiReTr2002, ; MiReTr2003, ; LaOr2007a, ; LaOr2007b, ; MaWi2007, ]. Within this context the goal of this paper is to develop efficient, structure-preserving integrators for long-time simulations of constrained, mechanical systems perturbed by white-noise forces and torques. Our strategy is to employ stochastic variational integrators (SVIs) [BoOw2007a, ].
Variational integration theory derives integrators for mechanical systems from discrete variational principles [Ve1988, ; Ma1992, ; WeMa1997, ; MaWe2001, ]. The theory includes discrete analogs of the Lagrangian, Noether’s theorem, the Euler-Lagrange equations, and the Legendre transform. Variational integrators can readily incorporate holonomic constraints (e.g., via Lagrange multipliers) and non-conservative effects (via their virtual work) [WeMa1997, ; MaWe2001, ]. Altogether, this description of mechanics stands as a self-contained theory of mechanics akin to Hamiltonian, Lagrangian or Newtonian mechanics.
Variational integrators are symplectic, i.e., the discrete flow map they define exactly preserves the continuous symplectic 2-form. Using backward error analysis one can show that symplectic integrators applied to Hamiltonian systems nearly preserve the energy of the continuous mechanical system for exponentially long periods of time and that the modified equations are also Hamiltonian [HaLuWa2006, ]. Variational integrators are also distinguished by their ability to compute statistical properties of mechanical systems, such as in computing Poincaré sections, the instantaneous temperature of a system, etc.
Stochastic variational integrators are an extension of variational integrators to so-called stochastic mechanical systems. These systems are simple mechanical systems subject to certain random perturbations, and have their origins in Bismut’s foundational work [Bi1981, ]. Bismut showed that the flow of these stochastic mechanical systems extremize an action integral whose domain is the space of semimartingales on configuration space. Bismut’s work was further enriched and generalized to manifolds by recent work [LaOr2007a, ; LaOr2007b, ]. Lazaro-Cami and Ortega show that this general class of stochastic Hamiltonian systems on manifolds extremizes a stochastic action defined on the space of manifold-valued semimartingales [LaOr2007a, ]. Moreover, it has been shown that for a subclass of these systems, one can prove a converse, namely, a.s. a curve satisfies so-called stochastic Hamilton’s equations if and only if it extremizes a stochastic action [BoOw2007a, ].
With this variational principle, one can design SVIs [BoOw2007a, ]. Like their deterministic counterparts, these methods have the advantage that they are symplectic, and in the presence of symmetry, satisfy a discrete Noether’s theorem. Moreover, symplectic methods for stochastic mechanical systems on vector spaces have been shown to capture the correct energy behavior even in the presence of dissipation [MiReTr2002, ; MiReTr2003, ]. In particular, the energy injected or dissipated by the symplectic integrator is not an artifical function of the timestep. Moreover, the energy behavior is accurately captured by the integrator.
These structure-preserving properties are quite crucial. Consider, for instance, simulation of a simple mechanical system subject to random forces with amplitude . Suppose further the unperturbed system preserves energy, momentum, and possesses a first integral. Consider simulating this system with a higher-order accurate method, a standard integrator with simultaneous projection onto energy, momentum, and first integral level sets, and a stochastic variational integrator. If , a stochastic (physical) perturbation in the energy, momentum, and first integrals will appear. However, it is not clear how to modify the projection-based method to correctly capture these stochastic perturbations when . Moreover, the higher-order accurate method requires a time-step smaller than the amplitude of the perturbation in order to accurately represent its effects. And even then, if the time-span of integration is long enough systematic, artificial drift in these quantities will appear.
On the other hand, the stochastic perturbation in energy and momentum captured by the stochastic variational integrator is mechanical, i.e., it is only due to the -random forces. This is because these schemes define flows of discrete mechanical systems. Moreover, even when the amplitude of perturbation is not small, due to symplecticity we conjecture that SVIs on manifolds not only possess good long-time energy behavior, but also perform well in computing statistical properties, such as autocorrelation functions and the empirical distribution. We will investigate these questions in future work.
As far as we can tell, the extension of these structure-preserving integrators to stochastic mechanical systems with holonomic constraints has not been completed. The main goal of this paper is to extend SVIs to holonomic constraints, and in particular, introduce constrained, stochastic variational partitioned Runge-Kutta (VPRK) methods for such systems. Within this family we exhibit a first-order strongly convergent, symplectic integrator for constrained mechanical systems. We use a technique due to Vanden-Eijnden and Ciccotti [2006] to prove order of accuracy of the integrators that appear in this paper [VaCi2006, ].
Future work will consider extensions of these schemes to Langevin equations with holonomic constraints. Continuous Langevin processes have been generalized to submanifolds and shown to be ergodic [CiLeVa2008, ]. The generalization to constraint submanifolds was done by appending holonomic constraints to Langevin equations. One can then check that the infinitesimal generator of the constrained Langevin process commutes with the Gibbsian density restricted to to determine that the restricted Gibbsian measure is an invariant measure of the constrained Langevin process. To prove this measure is unique one uses standard arguments based on nondegeneracy of the diffusion and drift vector fields on the momentums. On this note it would be interesting to ascertain ergodicity of SVIs for constrained Langevin systems.
## 2 Constrained VPRK Integrators
This section is provided to fix notation and clarify some aspects of deterministic constrained VPRK integrators which will be relevant in the stochastic context. We also provide a novel proof showing that constrained VPRK integrators can be derived directly from a discrete variational principle without explicitly introducing a discrete Legendre transform. For more details on unconstrained VPRK integrators we refer the reader to [Bo2007, ].
The setting of this section is a real, -dimensional vector space and a mechanical system with smooth, holonomic constraint function, , , that has a regular value at . The mechanical system’s configuration space is given by the constraint submanifold: . We will introduce Lagrange multipliers to prove an equivalence between a Hamilton-Pontryagin (HP) variational principle on and a constrained HP variational principle on . We will then show how to discretize this system to obtain variational RATTLE integrators for constrained mechanical systems.
The variational and symplectic character of VPRK integrators is discussed in [Su1990, ; MaWe2001, ; HaLuWa2006, ]. In what follows we will explicitly use the HP perspective, and specifically, extend the results in Hairer et al. [2006] to constrained systems using the HP perspective. It is worth mentioning, the Hamiltonian counterparts of the constrained VPRK methods, so-called symplectic partitioned Runge-Kutta methods, are also well understood [Ja1996, ; Re1997, ; Ha2003, ].
### Discretization of HP Action
We will adopt a HP viewpoint to describe the action integral of this mechanical system with constraints. The HP description unifies the Hamiltonian and Lagrangian descriptions of a mechanical system [YoMa2006a, ; YoMa2006b, ; Bo2007, ; BoMa2007, ].
###### Definition 2.1.
The Pontryagin bundle of a manifold is defined as . Fixing the interval and , define the HP path space on as
C(PM,x1,x2)={(q,v,p)∈C∞([a,b],PM) | q(a)=x1, q(b)=x2}.
The HP path space is a smooth infinite-dimensional manifold. One can show that its tangent space at consist of maps such that .
###### Definition 2.2.
Fix . Define the unconstrained HP action as
To discretize we first discretize the kinematic constraint: . An s-stage RK method is employed to discretize the kinematic constraint. Let and be given and define the fixed step size and , . The reason for using an s-stage RK discretization of the kinematic constraint is that the theory on such methods (order conditions, stability, and implementation) is mature. See, for instance, [HaNoWa1993, ].
###### Definition 2.3.
Consider the first order differential equation
$$\dot q = f(t,q), \qquad q(0)=q_0, \qquad q(t)\in Q. \qquad (1)$$
Let () and let . An s-stage RK approximation is given by
(2)
The vectors and are called external and internal stage vectors, respectively.
It follows that an s-stage RK method is fully determined by its -matrix and -vector which are typically displayed using the so-called Butcher tableau:
$$\begin{array}{c|ccc} c_1 & a_{11} & \cdots & a_{1s} \\ \vdots & \vdots & & \vdots \\ c_s & a_{s1} & \cdots & a_{ss} \\ \hline & b_1 & \cdots & b_s \end{array}$$
Suppose that , , is given. Then an s-stage RK approximant applied to yields:
$$\begin{cases} Q_i^k = q_k + h\sum_{j=1}^{s} a_{ij}\, v(t_k + c_j h), & i=1,\cdots,s,\\[2pt] q_{k+1} = q_k + h\sum_{j=1}^{s} b_j\, v(t_k + c_j h), & k=0,\cdots,N. \end{cases} \qquad (3)$$
In what follows for will be introduced as internal stage unknowns and will be determined as a critical point of a discrete action sum.
### VPRK Integrator for Constrained Systems
The VPRK method will be derived from a discretization of the HP action integral in which the kinematic constraint over the kth-time step is replaced with its discrete approximant: (3), and the integral of the Lagrangian over the kth-time step is approximated by the following quadrature:
$$\int_{t_k}^{t_k+h} L(q,v)\,dt \;\approx\; h\sum_{i=1}^{s} b_i\, L(Q_i^k, V_i^k).$$
The constraint is enforced for all internal stage positions using Lagrange multipliers as follows.
###### Definition 2.4.
Fix two points and on and define the discrete VPRK path space as:
Cd={(q,p,{Qi,Vi,Pi}si=1,{Λi}si=1)d: {tk}Nk=0→T∗Q×(TQ⊕T∗Q)s×(Rk)s | q(0)=q1,q(tN)=q2},
and the discrete constrained VPRK action sum by:
Gd=N−1∑k=0s∑i=1 h[biL(Qik,Vik)+⟨pik,(Qik−qk)/h−s∑j=1aijVjk⟩
The following condition on the coefficients of the s-stage RK method will be important in obtaining a well-defined discrete update on phase space that also respects the constraints [Ja1996, ].
###### Condition 2.1.
Consider an s-stage RK method with given -vector and -matrix. Assume and set . The coefficients of the s-stage RK method satisfy:
a1i=0, asi=bi, bi≠0, i=1,...,s (s∑k=1aik^akj)si,j=2 invertible.
Under this condition on the coefficients of the s-stage RK method, one can prove the following theorem.
###### Theorem 2.5.
Given an s-stage RK method with -vector and -matrix that satisfy condition 2.1, a Lagrangian system with smooth Lagrangian such that is invertible, and smooth holonomic constraint function . A discrete curve satisfies the following VPRK method:
$$\begin{cases} Q_i^k = q_k + h\sum_{j=1}^{s} a_{ij} V_j^k,\\[4pt] q_{k+1} = q_k + h\sum_{j=1}^{s} b_j V_j^k,\\[4pt] P_i^k = p_k + h\sum_{j=1}^{s} \left(b_j - \dfrac{b_j a_{ji}}{b_i}\right)\left(\dfrac{\partial L}{\partial q}(Q_j^k,V_j^k) + \dfrac{\partial g}{\partial q}(Q_j^k)^{*}\cdot \Lambda_j^k\right),\\[4pt] p_{k+1} = p_k + h\sum_{j=1}^{s} b_j \left(\dfrac{\partial L}{\partial q}(Q_j^k,V_j^k) + \dfrac{\partial g}{\partial q}(Q_j^k)^{*}\cdot \Lambda_j^k\right),\\[4pt] P_i^k = \dfrac{\partial L}{\partial v}(Q_i^k,V_i^k),\\[4pt] g(Q_i^k) = 0. \end{cases} \qquad (4)$$
for and , if and only if it is a critical point of the function , that is, . Moreover, there exist such that the discrete flow map defined by the above scheme, , preserves the canonical symplectic form on .
###### Proof.
Under the assumptions of the theorem, the existence of a numerical solution of (4) and a discrete flow map is guaranteed. That is, given one can solve (4) for . In particular, the condition that is invertible, implies one can eliminate using the Legendre transform of . The condition 2.1 implies that one can determine so that [Ja1996, ].
The differential of in the direction is given by:
d +h[⟨pik,(δQik−δqk)/h−s∑j=1aijδVjk⟩+⟨pk+1,(δqk+1−δqk)/h−s∑j=1bjδVjk⟩] +h[⟨δpik,(Qik−qk)/h−s∑j=1aijVjk⟩+⟨δpk+1,(qk+1−qk)/h−s∑j=1bjVjk⟩]
Collecting terms with the same variations and summation by parts using the boundary conditions gives,
d Gd⋅z=N−1∑k=1s∑i=1(hbi∂g∂q(Qik)∗Λik+hbi∂L∂q(Qik,Vik)+pik)⋅δQik +(−pk+1+pk−s∑i=1pik)⋅δqk+h(bi∂L∂v(Qik,Vik)−s∑j=1ajipjk−bipk+1)⋅δVik +h⟨δpik,(Qik−qk)/h−s∑j=1aijVjk⟩+h⟨δpk+1,(qk+1−qk)/h−s∑j=1bjVjk⟩
Since if and only if for all , one arrives at the desired equations with the elimination of and the introduction of the internal stage variables for . Conversely, if satisfies (4) then .
We will employ the variational proof of symplecticity to prove that is symplectic. This proof is standard, however, we provide it here to emphasize that the symplectic form that is exactly preserved by the method is the canonical one on .
Consider the subset of given by solutions of (4). Let denote the restriction of to this space. Since each of these solutions is determined by an initial point on , one can identify this space with , and hence, . Since is restricted to solution space,
d ^Gd(q0,p0)⋅(δq0,δp0)=⟨pN,δqN⟩−⟨p0,δq0⟩.
Observe that these boundary terms are the canonical one-forms on evaluated at and . Preservation of the symplectic form follows from . ∎
### Variational RATTLE for Constrained Systems
The variational RATTLE integrator is the Lagrangian analog of the RATTLE algorithm originally proposed as a constrained version of Verlet in [RyCiBe1977, ]. It was shown to be symplectic in [LeSk1994, ], and was extended to general constrained Hamiltonian systems by [Ja1996, ]. It is defined by the following two-stage RK discretization of the kinematic constraint (implicit trapezoidal rule),
Given and , the method determines and two Lagrange multipliers by solving the following system of equations,
$$\begin{cases} q_{k+1} = q_k + \dfrac{h}{2}\,(V_1^k + V_2^k),\\[4pt] P_1^k = p_k + \dfrac{h}{2}\left(\dfrac{\partial L}{\partial q}(q_k,V_1^k) + \dfrac{\partial g}{\partial q}(q_k)^{*}\Lambda_1^k\right),\\[4pt] p_{k+1} = P_1^k + \dfrac{h}{2}\left(\dfrac{\partial L}{\partial q}(q_{k+1},V_2^k) + \dfrac{\partial g}{\partial q}(q_{k+1})^{*}\Lambda_2^k\right),\\[4pt] 0 = g(q_{k+1}),\\[4pt] 0 = \dfrac{\partial g}{\partial q}(q_{k+1})\cdot v_{k+1},\\[4pt] p_{k+1} = \dfrac{\partial L}{\partial v}(q_{k+1},v_{k+1}),\\[4pt] P_1^k = \dfrac{\partial L}{\partial v}(q_k,V_1^k) = \dfrac{\partial L}{\partial v}(q_{k+1},V_2^k). \end{cases} \qquad (5)$$
By theorem 2.5, variational RATTLE defines a symplectic scheme. Moreover, as is well-known it is second-order accurate.
### Variational Euler for Constrained Systems
One can relax the conditions assumed in theorem 2.5 on the coefficients of the s-stage RK method [Ha2003, ]. For instance, if in the s-stage RK method one can still obtain a well-defined variational integrator. Consider as an example the following two-stage RK method,
0 0 1 0 1 0
The corresponding variational integrator is given by:
(6)
However, the corresponding discrete flow does not satisfy the “hidden” velocity constraint, and hence, does not define a map from to . To satisfy the hidden constraint a projection step is taken:
$$\begin{cases} p_{k+1} = \hat p_{k+1} + h\,\dfrac{\partial g}{\partial q}(q_{k+1})^{*}\Lambda_2^k,\\[4pt] 0 = \dfrac{\partial g}{\partial q}(q_{k+1})\cdot v_{k+1},\\[4pt] p_{k+1} = \dfrac{\partial L}{\partial v}(q_{k+1},v_{k+1}). \end{cases} \qquad (7)$$
One can check that this projection step defines a symplectic map. Thus, the composite map is symplectic. It is the Lagrangian version of the constrained symplectic Euler method [HaLuWa2006, ].
Another example relaxing the assumptions in theorem 2.5 is given by regarding the 2-stage RK method as being single-stage implicit Euler,
1 1
The corresponding variational integrator is:
$$\begin{cases} q_{k+1} = q_k + h\,\hat v_{k+1},\\[4pt] \hat p_{k+1} = p_k + h\left(\dfrac{\partial L}{\partial q}(q_k,v_k) + \dfrac{\partial g}{\partial q}(q_k)^{*}\Lambda_1^k\right),\\[4pt] 0 = g(q_{k+1}),\\[4pt] \hat p_{k+1} = \dfrac{\partial L}{\partial v}(q_{k+1},\hat v_{k+1}). \end{cases} \qquad (8)$$
We conclude with a theorem summarizing the structure-preserving properties of these constrained variational Euler methods.
###### Theorem 2.6.
The composition of one step of (6) or (8) and the projection step (7), defines a symplectic integrator on . Moreover, these integrators are first-order accurate.
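As a rough illustration of the idea behind these schemes — a position update with a Lagrange multiplier enforcing $g$, followed by a momentum projection onto the hidden constraint — here is a minimal sketch for a planar pendulum with Lagrangian $L(q,v)=\tfrac12 m|v|^2 - m\,g_{\mathrm{grav}}\,q_2$ and constraint $g(q)=\tfrac12(|q|^2-1)$, so that $\nabla g(q)=q$. All names and parameter values are mine, and this is only a sketch of the general construction, not a verbatim implementation of the paper's scheme.

```python
import numpy as np

m, g_grav, h = 1.0, 9.81, 1e-3

def grad_V(q):                               # gradient of V(q) = m * g_grav * q[1]
    return np.array([0.0, m * g_grav])

def step(q, p):
    # Position update: q1 = q + (h/m) * (p + h * (-grad_V(q) + lam1 * q)),
    # with the scalar lam1 chosen so that the constraint |q1| = 1 holds.
    a = q + (h / m) * (p - h * grad_V(q))
    b = (h**2 / m) * q
    roots = np.roots([b @ b, 2 * (a @ b), a @ a - 1.0])   # |a + lam*b|^2 = 1
    lam1 = min((r.real for r in roots if abs(r.imag) < 1e-12), key=abs)
    q1 = a + lam1 * b
    p_hat = p + h * (-grad_V(q) + lam1 * q)
    # Momentum projection: enforce the hidden constraint q1 . v1 = 0.
    lam2 = -(q1 @ p_hat) / (h * (q1 @ q1))
    p1 = p_hat + h * lam2 * q1
    return q1, p1

q, p = np.array([1.0, 0.0]), np.array([0.0, 0.0])   # released horizontally, at rest
for _ in range(5000):
    q, p = step(q, p)
print(np.linalg.norm(q))     # stays equal to 1 up to round-off: constraint preserved
```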
## 3 Constrained, Stochastic Mechanical Systems
### Constrained, Stochastic HP Principle
The setting in this section is an n-manifold and a stochastic mechanical system with smooth, holonomic constraint function, , , that has a regular value . Its configuration space is given by the constraint submanifold: . In this section we will introduce Lagrange multipliers to prove an equivalence between a stochastic variational principle on and a constrained, stochastic variational principle on .
We will adopt a HP viewpoint to describe this mechanical system with random perturbations and will refer to this perturbed system as a stochastic mechanical system. The HP principle unifies the Hamiltonian and Lagrangian descriptions of a mechanical system [YoMa2006a, ; YoMa2006b, ; Bo2007, ; BoMa2007, ]. Roughly speaking, in the stochastic context it states the following critical point condition on (cf. definition 2.1),
where are varied arbitrarily and independently with endpoint conditions and fixed, and and for are stochastic potentials and Wiener processes respectively. This principle builds in a Legendre transform, stochastic Hamilton’s equations and stochastic Euler–Lagrange equations. The action integral in the above principle, consists of two Lebesgue integrals with respect to the Lebesgue measure and Stratonovich stochastic integrals. This action is random, i.e., for every sample point one will obtain a different, time-dependent Lagrangian system. However, each system possesses a variational structure as made precise in [BoOw2007a, ]. The following definitions will be useful to state the constrained, variational principle of HP for mechanical systems with holonomic constraints.
###### Definition 3.1.
Let be the inclusion mapping. Fixing the interval , define constrained HP path space as
CHPc([a,b],q1,q2)= {(q,v,p):[a,b]→PS | q∈C1([a,b],S), (v,p)∈C0([a,b]), q(a)=q1, q(b)=q2},
and the unconstrained HP path space as
CHP([a,b],q1,q2)= {(q,v,p):[a,b]→PQ | q∈C1([a,b],Q), (v,p)∈C0([a,b]), q(a)=i(q1), q(b)=i(q2)}.
Let be a probability space, , , be a nondecreasing family of -subalgebras of , , and , , be independent, real-valued Wiener processes. In terms of these Wiener processes, we define the following.
###### Definition 3.2.
Set and for . Moreover define the unconstrained action as
$$G = \int_a^b \left[ L(q,v)\,dt + \sum_{j=1}^{m} \gamma_j(q)\circ dW_j + \left\langle p, \frac{dq}{dt} - v \right\rangle dt \right].$$
and the constrained action as .
The unconstrained HP path space is a smooth infinite-dimensional manifold. One can show that its tangent space at consist of maps such that and are of class .
As opposed to using generalized coordinates on , we wish to describe the mechanical system using constrained coordinates on and introduce Lagrange multipliers to enforce the constraint. However, because of the stochastic component of the action, the standard Lagrange multiplier theorem will not apply directly and one cannot introduce Lagrange multipliers in the standard way. Instead, we will introduce the Lagrange multiplier using the following.
###### Definition 3.3.
Given and , define
$$\int_a^t \langle df_1, f_2\rangle := \bigl\langle f_1, f_2\bigr\rangle\Big|_a^t - \int_a^t \left\langle f_1, \frac{df_2}{dt}\right\rangle dt, \qquad t\in[a,b].$$
Differentiability of the flow map on will be defined in the mean-squared sense. In the following we define mean-squared derivatives on a Banach space with the understanding that this notion can be extended to any manifold using a local representative of the flow map.
###### Definition 3.4 (Mean-squared Derivative).
The mean squared norm of is given by:
$$\|f(x,\omega)\| = \bigl(\mathbb{E}\bigl(|f(x,\omega)|^2\bigr)\bigr)^{1/2}$$
Using this norm one can define the derivative of in the standard way, i.e., is mean squared differentiable at if there is a bounded linear map that satisfies,
$$\lim_{\delta\to 0} \frac{\|f(a+\delta,\omega)-f(a,\omega)-Df(a,\omega)\cdot\delta\|}{\|\delta\|} = 0.$$
With the above definitions we prove the following.
###### Theorem 3.5 (Constrained, Stochastic HP Principle).
Given a stochastic mechanical system with Lagrangian such that is invertible, stochastic potentials for , and holonomic constraint with . Then the following are equivalent:
(i)
extremizes .
(ii)
satisfies stochastic HP equations
$$\begin{cases} dq = v\,dt,\\[4pt] dp = \dfrac{\partial L^S}{\partial q}(q,v)\,dt + \sum_{j=1}^{m}\dfrac{\partial \gamma^S_j}{\partial q}(q)\circ dW_j,\\[4pt] p = \dfrac{\partial L^S}{\partial v}(q,v). \end{cases} \qquad (9)$$
(iii)
There exists such that and extremize the augmented action where and .
(iv)
There exists such that and satisfy the constrained, stochastic HP equations
$$\begin{cases} dq = v\,dt,\\[4pt] dp = \dfrac{\partial L}{\partial q}(q,v)\,dt + \sum_{j=1}^{m}\dfrac{\partial \gamma_j}{\partial q}(q)\circ dW_j + \dfrac{\partial g}{\partial q}(q)^{*}\cdot d\lambda,\\[4pt] p = \dfrac{\partial L}{\partial v}(q,v),\\[4pt] g(q) = 0. \end{cases} \qquad (10)$$
Moreover, the flows of (9) and (10) are mean-square symplectic.
###### Remark 3.1.
The constrained, stochastic HP equations should be thought of in integral form. In particular, the Lagrange multipler term in (10) by definition 3.3 satisfies:
$$\int_a^t \left\langle \frac{\partial g}{\partial q}(q)^{*}\cdot d\lambda,\; w \right\rangle = \left\langle \lambda, \frac{\partial g}{\partial q}(q)\, w \right\rangle\Big|_a^t - \int_a^t \left\langle \lambda, \frac{d}{dt}\!\left(\frac{\partial g}{\partial q}(q)\, w\right) \right\rangle dt,$$
for any .
###### Proof.
The stochastic HP principle states that (i) and (ii) are equivalent [BoOw2007a, ]. Assume that (iii) is true. Then for all ,
(11)
Since for and for ,
d¯G(z,λ)⋅(vz,vλ)=dG(z)⋅vz=0, ∀ vz∈TzCHPc
which implies (i) and (ii).
Moreover, expanding (11) and setting and yields,
d ¯G(z,λ)⋅(δq,δv,δp,δλ)=∫ba[⟨∂L∂q,δq⟩ds+m∑i=1⟨∂γi∂q,δq⟩∘dWi+⟨∂L∂v,δv⟩ds
Consider the terms involving , and . Since these variations are arbitrary, the following hold:111This follows from the basic lemma that if and is arbitrary then .
$$\frac{dq}{dt} = v, \qquad \frac{\partial L}{\partial v}(q,v) = p, \qquad \frac{d}{dt}\,g(q) = 0.$$
Since , implies that for all .
Collecting the variations with respect to in the differential gives,
∫ba[∂L∂q⋅δqds+⟨p,δdqdt⟩ds+m∑j=1∂γj∂q⋅δq∘dWj+⟨λ,ddt(∂g∂q⋅δq)⟩ds]=0.
Using definition 3.3, we introduce the following function ,
I(q,v,p,λ,f)=∫ba[(∂L∂qds+m∑j=1∂γj∂q∘dWj−dp−∂g∂q∗⋅dλ)⋅f].
In the following it is shown that if for arbitrary of class then satisfies (10).
Let be a partition of unity on . Expand in terms of this partition of unity,
I=∑α∫ba[gα(q,v,p,λ)(∂L∂qdt+m∑j=1∂γj∂q∘dWj−dp−∂g∂q∗⋅dλ)⋅f]%.
Since the curves and are compactly supported, only a finite number of the are nonzero. For each nonzero, the terms in the integral can be expressed in local coordinates. Observe that since , the Stratonovich–Itô conversion formula implies that,
∫bagα∂γj∂q⋅δq∘dWj=∫bagα∂γj∂q⋅δqdWj
for .
We will select to single out the th-component of the covector field in . Introduce the following function for this purpose:
$$h(t) = \frac{2t}{\epsilon} - \frac{t^2}{\epsilon^2}.$$
Observe that , , and . Let be a basis for the model space of . Now fix , and define in local coordinates as:
fϵ(s)=⎧⎪ ⎪ ⎪⎨⎪ ⎪ ⎪⎩h(s−a)eβif a≤s≤a+ϵ,eβif a+ϵ
Introduce the following label to simplify subsequent calculations,
A(s)=(∂L∂q(q(s),v(s))ds+m∑i=1∂γi∂q(q(s)) dWi(s)−dp(s)−∂g∂q(q(s))∗⋅dλ(s))⋅eβ.
In terms of , one can write
I(q,v,p,λ,fϵ)=∑α[∫a+ϵah(s−a)gα(s)A(s)+∫t−ϵa+ϵgα(s)A(s)+∫tt−ϵh(t−s)gα(s)A(s)].
We will show in the mean squared norm,
limϵ→0I(q,v,p,λ,fϵ)=∑α∫tagαA(s)=:I∗. (12)
Using this result and the Borel-Cantelli lemma, one can deduce there exists that converges to such that a.s. converges to . It follows that almost surely.
We proceed to prove (12). Since ,
∥∥∥∑α∫tagαA(s)−I(q,v,p,λ,fϵ)∥∥∥2 =∥∥∥∑α∫a+ϵa(1−h(s−a))gαA(s)+∫tt−ϵ(1−h(t−s))gαA(s)∥∥∥2 ≤2∥
|
# Question about timing in XNA
I'm looking for a simple way to display minutes, seconds and milliseconds for timing the elapsed time in a game. Do I have to do the calculation myself with gameTime and display a variable for each part, like a variable for minutes, seconds and milliseconds? Preciates some help! Thanks!
• Yes I think that you can do that with TotalTimeElapsed (I think that is the name of the attribute). That value might have a subvalue that holds the entire time value (hh:mm:ss) or you can define 3 or 4 string variables and then pass values of hours, minutes and seconds like you said. – NDraskovic Jul 16 '12 at 6:54
You can already get all three of these from GameTime: gameTime.TotalGameTime offers Minutes, Seconds and Milliseconds properties to measure how long the game has been going for.
int milliseconds = gameTime.TotalGameTime.Milliseconds;
If you just want the time since the last update, use gameTime.ElapsedGameTime.
|
# 4: Fractions
Often in life, whole amounts are not exactly what we need. A baker must use a little more than a cup of milk or part of a teaspoon of sugar. Similarly a carpenter might need less than a foot of wood and a painter might use part of a gallon of paint. In this chapter, we will learn about numbers that describe parts of a whole. These numbers, called fractions, are very useful both in algebra and in everyday life. You will discover that you are already familiar with many examples of fractions!
Figure 4.1 - Bakers combine ingredients to make delicious breads and pastries. (credit: Agustín Ruiz, Flickr)
|
# How can I detect lost of precision due to rounding in both floating point addition and multiplication?
From Computer Systems: a Programmer's Perspective:
With single-precision floating point
• the expression (3.14+1e10)-1e10 evaluates to 0.0: the value 3.14 is lost due to rounding.
• the expression (1e20*1e20)*1e-20 evaluates to +∞ , while 1e20*(1e20*1e-20) evaluates to 1e20.
• How can I detect lost of precision due to rounding in both floating point addition and multiplication? (in C or Python)
• What is the relation and difference between underflow and the problem that I described? Is underflow only a special case of lost of precision due to rounding, where a result is rounded to zero?
Thanks.
C support varies by implementation (compiler) but see GCC here: https://www.gnu.org/software/libc/manual/html_node/FP-Exceptions.html
Python support is documented here: https://docs.python.org/2/library/fpectl.html
I’ve only used these features a few times, and then only with the Intel compiler (https://software.intel.com/content/www/us/en/develop/documentation/cpp-compiler-developer-guide-and-reference/top/compiler-reference/compiler-options/compiler-option-details/floating-point-options/fp-trap-qfp-trap.html ), but in that case, I was able to trap truncation and other non-fatal errors (fatal would be dividing by zero, for example).
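A portable way to check a single operation, without relying on hardware exception trapping, is to redo it in exact arithmetic and compare. A minimal Python sketch (not from the links above; it uses only the standard fractions module):

from fractions import Fraction

def addition_is_exact(a: float, b: float) -> bool:
    """True if the float addition a + b introduced no rounding error."""
    exact = Fraction(a) + Fraction(b)   # exact rational sum of the two floats
    return Fraction(a + b) == exact     # compare against the rounded float result

print(addition_is_exact(3.14, 1e10))    # False: part of 3.14 is lost
print(addition_is_exact(0.5, 0.25))     # True: 0.75 is exactly representable

The same idea works for multiplication (replace + with *). The (1e20*1e20)*1e-20 example from the book is different in kind: the intermediate product overflows to +∞ rather than merely rounding.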
• Thanks. What is the relation and difference between underflow and the problem that I described? Is underflow only a special case of lost of precision due to rounding, where a result is rounded to zero? – Tim Oct 10 '20 at 20:12
• I will have to write a test code to know for sure. I think the first addition throws an inexact exception (the most common one) but maybe it’s an overflow instead. – Jeff Hammond Oct 10 '20 at 20:21
• There's a standard, IEEE, that defines all the floating point exceptions. Then there are implementations on the compiler side of things, gcc, intel, etc. If that was not enough most architectures also have a hand in it. As @Jeff suggested, you really unfortunately need to write some test codes to see what is going on for your platform/tool-chain of choice. My own experience would suggest it's not quite as diverse as I just mentioned but it sure is not uniform. – Kyle Mandli Oct 11 '20 at 0:32
Normally one does not try to detect loss of precision algorithmically, but rather analyzes and modifies algorithms to assess how they are affected by it.
For instance, in your first example you would run a (forward) error analysis and figure out that the summation error is bounded by $$3 \cdot 10^{10} \mathsf{u}$$, where $$\mathsf{u}$$ is machine precision, or you would show that the summation is backward stable so the summation has not done significantly more damage than storing that $$10^{10}$$ in a Float32 did in the first place.
|
# Show that limit is a definite integral
• August 19th 2012, 07:50 AM
VinceW
Show that limit is a definite integral
Show that the given limit is a definite integral $\int_a^b f(x) \, dx$ for a suitable interval $[a,b]$ and function $f$
The limit is:
$\lim \limits_{n \to \infty} \sum\limits_{i=1}^n \frac{n}{n^2 + i^2}$
That's the problem I can't solve. Here is how far I've made it:
Since a definite integral can be defined as
$\int_a^b f(x) \, dx = \lim\limits_{n \to \infty} \sum\limits_{i=1}^n f\left(a + (b-a)\frac{i}{n}\right) \cdot \frac{b-a}{n}$
Then:
$f\left(a + (b-a)\frac{i}{n}\right) \cdot \frac{b-a}{n} = \frac{n}{n^2 + i^2}$
which simplifies to:
$f\left(a + (b-a)\frac{i}{n}\right)= \frac{1}{b-a} \frac{n^2}{n^2 + i^2}$
I'm not sure how to solve, but, I can guess that $b-a=1$ which would simplify things to:
$f\left(a + \frac{i}{n}\right)= \frac{n^2}{n^2 + i^2}$
I can also guess that $a=1$ so that:
$f\left(\frac{n + i}{n}\right)= \frac{n^2}{n^2 + i^2}$
Now, I can't find a function $f$ that satisfies this. $f(x)=\frac{1}{x^2}$ almost works but not quite.
• August 19th 2012, 09:57 AM
Plato
Re: Show that limit is a definite integral
Quote:
Originally Posted by VinceW
Show that the given limit is a definite integral $\int_a^b f(x) \, dx$ for a suitable interval $[a,b]$ and function $f$
The limit is:
$\lim \limits_{n \to \infty} \sum\limits_{i=1}^n \frac{n}{n^2 + i^2}$
Let $a=0,~b=1~\&~f(x)=\frac{1}{1+x^2}$.
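To spell out why this choice works: $\frac{n}{n^2+i^2} = \frac{1}{1+(i/n)^2}\cdot\frac{1}{n}$, which is exactly the Riemann sum $\sum\limits_{i=1}^n f\left(\frac{i}{n}\right)\cdot\frac{1}{n}$ for $f(x)=\frac{1}{1+x^2}$ on $[0,1]$ with right endpoints, so the limit equals $\int_0^1 \frac{dx}{1+x^2} = \arctan 1 = \frac{\pi}{4}$.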
• August 19th 2012, 10:11 AM
VinceW
Re: Show that limit is a definite integral
that works. I take it that was just intuition? Not sure how I would have figured that out by myself. thanks!
|
My solution for lecture slides and notes, using Markdown and Pandoc
# Lecture Notes
version 2.0, fall 2020
Here's my current solution for doing integrated slides and lecture notes using Pandoc and Markdown.
## To use
1. Put pandoc-slides, pandoc-handout-filter.lua, and pandoc-*.tex somewhere in your PATH.
2. You must open pandoc-slides and edit the five paths to those supplementary files to indicate where you put them. (By default, these reference ./ so that the example in the repository will build.)
3. Write a lecture in the rough format of Lecture.md. Title, author, and date from the YAML front-matter work the way you'd expect. Everything else will be configured automatically. The notes should appear inside a <div class="notes"> tag, and can use full Markdown formatting as well.
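A minimal Lecture.md skeleton along these lines (titles and notes text are placeholders):

```
---
title: "Week 1: Introduction"
author: "Your Name"
date: "Fall 2020"
---

# First slide heading

- A bullet that appears on the slide

<div class="notes">
These sentences only show up in the lecture-notes handout, interleaved with a
text version of the slide above. Full Markdown formatting works here too.
</div>
```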
## How do I configure it more?
There's a number of things you can easily configure, and then a bunch of progressively more interesting and challenging tweaks:
• In the pandoc-slides script, you will see a variety of different Pandoc variables that can be used to configure your output:
• -V theme=Pittsburgh --- set the general Beamer theme that you would like your slides to use
• -V fonttheme=structurebold --- set the beamer font theme that you would like your slides to use
• -V fontfamily=opensans --- set to the name of a TeX package that loads the main font you would like to use for your slides
• -V fontfamilyoptions=defaultsans --- a string that will be passed as the package options to your font package (i.e., the default settings are to call \usepackage[defaultsans]{opensans})
• -V papersize=a4 --- set the paper size for the lecture notes
• -V geometry=... --- set the margins for the lecture notes
• -V fontfamily=mathptmx --- set to the name of a TeX package that loads the main font that you would like your lecture notes to use
• If you have a separate folder in which you keep a central collection of slide images that you would like all of your slides and handouts to reference, you can add:
--resource-path .:/path/to/your/images
to both the commands for slides and handout generation, and Pandoc will automatically search for the rest of your image files.
• For further customization, you can add whatever you want to the pandoc-*-snippet.tex files. The beamer file is loaded by both slides and handouts, and the others by only one of the two respectively.
• If you use zsh, you can add compdef '_files -g "*.md"' pandoc-slides to your .zshrc to make completion work for .md files after you type pandoc-slides.
## What do you get?
You get some relatively nicely formatted Beamer slides, along with a lecture notes file that interleaves your notes with a basic text version of your slide content.
## What does it look like?
Check out the two PDFs here for examples.
## How does it work?
• The pandoc-handout-filter.lua file, which is only added for the handout files, wraps the notes content in a \begin{slidenotes} environment (see the sketch after this list).
• The pandoc-beamer-snippet.tex file disables figure numbering in both handouts and in slides.
• The pandoc-slides-snippet.tex file can be used to add all sorts of slide styling. By default, it sets some nice defaults for block quotations, cleans up the formatting of figures, and turns off the slide navigation symbols.
• The pandoc-handout-snippet.tex file adds some nice styling to the notes sections, as well as disabling the printing of graphics in the handouts.
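For reference, the core of such a filter can be as small as this (a sketch of the idea, not necessarily the exact filter shipped in this repository):

```lua
-- Wrap every <div class="notes"> in a slidenotes LaTeX environment.
function Div(el)
  if el.classes:includes("notes") then
    local blocks = pandoc.List({ pandoc.RawBlock("latex", "\\begin{slidenotes}") })
    blocks:extend(el.content)
    blocks:insert(pandoc.RawBlock("latex", "\\end{slidenotes}"))
    return blocks
  end
end
```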
## What's new in this version?
The old version of these hacks involved overwriting the Pandoc internal Beamer or TeX templates. This is a dangerous thing to do, as those templates are sometimes updated internally by Pandoc, which will then expect them to contain things that your version of the template doesn't have. I strongly recommend using include-snippets and Lua filters, as I've done here, to modify the files generated after the fact rather than overriding the template.
|
## Friday, February 24, 2012
### Writing basic math in LaTeX - inline math and math environment
The Internet abounds with LaTeX tutorials on how to write mathematics equations and simple symbols in LaTeX. It is sometimes not clear how the math environment works in general, how to write inline math equations, and how to write equations on a line of their own.
I earlier did a tutorial on how to write mathematical equations using LaTeX, which covered the "equation" environment. The math environment is equally important, and in this tutorial I am going to talk about how to use the basic math environment in LaTeX.
Specifically, I am going to discuss how to do the following in LaTeX:
1. How to initiate Math environment
2. Writing inline math equations and writing equations in a separate line
3. How to use frac for equations
4. How to use parentheses and brackets to enclose mathematical symbols and equations
5. How to type powers and indices
6. How to write matrices
Following video illustrates the step by step instructions to use math in LaTeX
The code used in this tutorial is here:
\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}
\title{Basic Mathematics with Latex by http://QuickLatex.blogspot.com}
\maketitle
This is inline $n$ math symbol.
For inline math $n$ or \(n\) is used and for displayed math we can use \[n\] or $$n$$.
We start with $n$ elements and we continue to divide them in half leaving $\frac{n}{2}$ elements.
The power can be written using caret symbol, for instance $n^n$ results in n to the power n written nicely.
The indices could be written using underscore, for instance $n_i$ makes i an index of n.
$(\frac{n}{2})$
$\left( \frac{n}{2} \right)$\\
$[\frac{n}{2}]$
$\left[ \frac{n}{2} \right]$\\
$\left( \begin{array} {c c c} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9 \end{array} \right)$
\end{document}
## Sunday, February 5, 2012
### How to create table in Latex from MS Excel files or other external databases
Latex can be used to create tables from external files such as MS-Excel files saved as comma separated values or similarly many other formats. Recently, I received a comment where a reader suggested that I should post a tutorial on how to create tables in Latex from external files. I searched around and found multiple ways to achieve this. I am going to talk about two of the most popular ways of doing it. First one is using pgfplotstable and the second one is using datatool package.
Specifically, we are going to learn the following in this tutorial:
1. Using datatool package to load external files
2. Using MS-Excel files to create tables in Latex
3. Using \DTLloaddb to set keys and use external files
4. Using \DTLforeach to iterate the external file
5. Using \pgfplotstable package
6. Using \booktabs package
7. Using pgfplotstabletypeset
8. Styling each column of table
9. Using style rules from booktabs package
The following video tutorial explains both methods of creating tables in Latex using external database files.
The code for both of the methods is given below for your reference:
The first method - using the datatool package
\documentclass{article}
\usepackage{datatool}
\begin{document}
% Load the external CSV into the database "ctext".
% The filename and the column keys c1, c2, c3 (taken from the file's header row) are assumed here.
\DTLloaddb{ctext}{csvtext.csv}
\begin{table}
\begin{tabular}{clr}
\textbf{Software} & \textbf{Manufacturer} & \textbf{Malware}
\DTLforeach{ctext}{
\cola=c1,\colb=c2,\colc=c3}{\\
\cola & \colb & \colc}
\end{tabular}
\end{table}
\end{document}
The second method - using pgfplotstable package
\documentclass{article}
\usepackage{pgfplotstable}
\usepackage{booktabs}
\begin{document}
\pgfplotstabletypeset[
col sep=comma,
columns/Software/.style={string type},
columns/num1/.style={string type},
columns/num2/.style={string type},
every head row/.style={
before row=\toprule,
after row=\midrule
},
every last row/.style={after row=\bottomrule}
]{csvtext.csv}
\end{document}
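Both methods of course need the data file itself next to the .tex file. For the second method, given the column styles above, csvtext.csv is presumably a plain comma-separated file along these lines (the values are just placeholders):
Software,num1,num2
FooSuite,12,34
BarTool,56,78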
## Friday, January 27, 2012
### LaTeX Tutorial: How to use Lists in Latex - itemize, enumerate, description, and inparaenum
LaTeX is a great tool for typesetting and it is more powerful than MS-Word. It has a steep learning curve and that is why I believe these video tutorials will help you guys. Today, I am going to talk about lists in LaTeX. LaTeX can produce both bulleted lists or unordered lists, and numbered lists or ordered lists. We will also see how to use lists with descriptions given to each list item.
Specifically, we are going to have an introduction of following topics in LaTex:
• Lists in LaTeX: itemize, enumerate, inparaenum, and description
• Itemized list in LaTeX for bullets
• enumerated list in LaTeX for numbered lists
• inparaenum in LaTeX for lists wrapped around text
• description lists in LaTeX for lists with description for each item
• Nested lists in LaTeX
• Using styles for numbering lists (roman, in parenthesis etc.)
• Use of paralist LaTeX package
Here is the video for this tutorial:
The code for this tutorial is here:
\documentclass{article}
\usepackage{paralist}
\begin{document}
\title{Creating Bullets and Lists with \LaTeX by http://QuickLatex.blogspot.com}
\maketitle
\section{Bullets and Lists in \LaTeX}
\begin{itemize}
\item First bullet is here
\item Second bullet is here
\end{itemize}
\begin {enumerate}
\setcounter{enumi}{5}
\item This is item number 1
\item This is item number 2
\end{enumerate}
\begin {description}
\item [Chapter 1] This is the first description
\item [Chapter 2] This is the second description
\end{description}
\begin{inparaenum}[(i)]
There are three advantages of this method:
\item it is faster,
\item it is cost effective, and
\item it is efficient
\end{inparaenum}
\end{document}
## Thursday, January 19, 2012
### LaTeX Tutorial: How to write mathematical equations in LaTeX
Those who use LaTeX for their documentation related works, usually are from STEM (Science, Technology, Engineering, Mathematics) background. These people use equations more often than not. Therefore, I will introduce how to write equations in LaTeX today.
In this tutorial we will go over following features:
• Latex amsmath package
• Latex equation environment
• Using Simple equations like x = y + z
• Using Summation in equations
• Using Integration in equations
• Using Cases in equation (if condition based values of a variable)
• Using fractions to write multiple-row equations
Watch the following video to learn how to do these things:
The code used in this Video can be found below:
\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}
\title{Writing Equations with Latex by http://QuickLatex.blogspot.com}
\maketitle
%x = y + z
%f(x) = x ^ 2
%f(x) = x_1 + x_2 + x_3 + ......+ x_n
%f(x) = \sum_{i=1}^{n} {x_i}
%f(x) = \int_{i=1}^{n}{x_i}
%X=
%\begin{cases}
%5, \text{if X is divisible by 5}
%\\
%10, \text {if X is divisible by 10}
%\\
%-1, \text {otherwise}
%\end{cases}
X =
\frac{\substack{\sum_{i=1}^{n} {x_i}}}
{\substack{\sum_{i=20}^{50} {x_i}}}
\end{document}
## Thursday, January 12, 2012
### How to write an algorithm in Latex : Video Tutorial with sample algorithm
People in Computer Science and Mathematics department often write algorithms for their papers, thesis, and other research articles. In this tutorial I will explain how to write an algorithm in Latex using the algorithm and algorithmic package in Latex. I will explain the basics of an article and show how simple building blocks can be added to Latex to write a full fledged professional quality algorithm.
Specifically, we will learn the following in this tutorial:
1. How to write an algorithm in Latex
2. Use of algorithm and algorithmic package
3. How to use loops in an algorithm
4. How to use IF statements in an algorithm
5. How to add caption to an algorithm
6. How to label an algorithm to refer it in the document
The code for this algorithm is shown below and explained in the video.
Here is the code used in this video:
\documentclass{article}
\usepackage{algorithm}
\usepackage{algorithmic}
\begin{document}
\begin{algorithm}
\textbf{INPUT:} Set of Base Layer polygon $S_b$ and Set of Overlay Layer polygon $S_o$\\
\textbf{OUTPUT:} Intersection Graph ($V$,$E$), where $V$ is set of polygons and $E$ is edges among polygons with intersecting bounding boxes.
\begin{algorithmic}
\STATE Parallel Merge Sort set $S_o$ of overlay polygons based on X co-ordinates of bounding boxes\footnotemark[1]
\FORALL{base polygon $B_i$ in set $S_b$ of base polygons}
\STATE find $S_x \subset S_o$ such that $B_i$ intersects with all elements of $S_x$ over $X$ co-ordinate
\FORALL{overlay polygon $O_j$ in $S_x$}
\IF {$B_i$ intersects $O_j$ over $Y$ co-ordinate}
\STATE{Create Link between $O_j$ and $B_i$}
\ENDIF
\ENDFOR
\ENDFOR
\end{algorithmic}
\caption{Algorithm to create polygon intersection graph}
\label{algo:relgraph}
\end{algorithm}
\end{document}
The output looks like the following:
## Thursday, January 5, 2012
### How to draw Reddit Alien in LaTeX using Tikz - Video tutorial and code
If you are not a Redditor you are a lucky person. It's a one way traffic scenario with no dead end. Once you subscribe to Reddit there is no way back and it is no good being there. I am expecting my colleagues to have an intervention for my Reddit addiction soon but till then I am all here to invest (read:waste) my precious time.
Since we got that out of our way, let us get back to business. Today, we are going to see how to draw a cartoon figure using LaTeX. The motivation behind this post is /r/latex on Reddit as I wanted that subreddit to have its logo drawn in LaTeX. I did not know it already was in LaTeX but it seems my work is appreciated there so I am going to contribute my 2 cents to the community.
Specifically, we will learn following things in this tutorial:
1. How to work with Colors - defining RGB colors in LaTeX.
2. How to use multiple layers to set order of document objects (send to back, bring forward like functionality).
3. How to use Arcs in an effective way.
4. How to draw lines with multiple points
5. How to draw curved lines in LaTeX
Check out this video for explanation of the code. The co-ordinates might look crazy but after you go through the video it will be a cinch. Leave a comment if you have a question. The code is given below, so if you improve it please do let me know and I will post your code on here.
\documentclass{article}
\usepackage{tikz}
\begin{document}
\pgfdeclarelayer{background}
\pgfdeclarelayer{main}
\pgfdeclarelayer{foreground}
\pgfsetlayers{background,main,foreground}
\definecolor{orangered}{RGB}{255,69,00}
\tikzstyle{vrutt}=[draw=orangered, fill=orangered, circle,minimum height=0.5in, line width=5mm]
\tikzstyle{elli}=[draw, ellipse, minimum height=2.85in, text width=2.95in, text centered, line width=5mm]
\begin{tikzpicture}
\begin{pgfonlayer} {foreground}
\node [elli, fill=white] (face) {};
%feet
\node [below of=face,yshift=-4.1in, xshift=-2.0in] (base){};
\draw [line width=5mm](base) -- +(3.8in,0in);
\draw [line width=5mm] (4.3,-11.5) arc (-10:80:1.8);
\draw [line width=5mm] (-4.7,-11.5) arc (190:80:1.8);
%torso
\draw [line width=5mm](face.230) to[out=260, in=150] +(0.75in,-3.15in);
\draw [line width=5mm](face.310) to[out=280, in=30] +(-0.75in,-3.15in);
%eyes
\node [vrutt, xshift=-5em, yshift=9mm] (lefteye) {};
\node [vrutt, xshift=5em, yshift=9mm] (righteye) {};
% Smile
\draw [line width=5mm] (-2.0,-1.0) to[out=320, in=220] (2.0,-1.0);
%Antenna
\draw[line width=5mm](-0.5,3.76) -- +(1cm, 2.5cm) -- +(3.5cm, 2cm);
\node [vrutt, fill=none, draw=black, above of=face, yshift=4.65cm, xshift=3.5cm, minimum height=0.5in] (antenna){};
%Text
\node (face.275)[yshift=-3in] (text){\Huge \textbf{\LaTeX}};
\end{pgfonlayer}
\begin{pgfonlayer} {background}
%Ears
\draw [line width=4mm] (4.4,1.3) arc (-80:315:1);
\draw [line width=4mm] (-4.3,1.3) arc (-80:315:1);
%hands
\draw [line width=4mm] (3.05,-7.8) arc (-70:80:2.3);
\draw [line width=4mm] (-3.05,-7.8) arc (250:90:2.3);
\end{pgfonlayer}
\end{tikzpicture}
\end{document}
The alien looks like this one:
## Friday, December 30, 2011
### LaTeX Video Tutorial: How to Create a Resume or CV (Curriculum Vitae) using LaTeX
One of the most frequent questions my colleagues ask me is how to create a Resume or Curriculum Vitae (CV), if you will, in LaTeX. I have a style file that was passed to me by a friend who found it on Internet. Since this looked good I used it and, thanks to the original contributor, I am going to share it with you all today. You will be able to download these files and create a professional Resume for yourself.
|
## Microscope puzzle
On Tuesday I went to Penny to buy some usual stuff. This week they also offered digital microscopes. Only one was left, so I had to buy it!
I did not expect any great hardware, but I'm astonished! First, because it works on my sidux without any driver or manual work; I just had to connect it to my USB port! And secondly, I did not think that 200 times magnification is such a high zoom rate..
However, I already had a lot of fun with it and prepared a puzzle. Here are some zoomed images, and you can try to guess where they came from. Suggestions can be posted via comment; those of you who find a right solution are invited to drink a beer with me ;)
## Zoom A
An easy one to start…
Solution: Wood guessed by Martin S.
## Zoom B
You use it nearly every day, don’t you!?
Solution: Backside of a German Euro coin guessed by Martin S.
## Zoom C
Girls have to know it :P
Solution: Paper Towels guessed by Michael Rennecke
## Zoom D
Maybe you’ll find it in your office…
Solution: Ball pen guessed by Martin S.
## Zoom E
Not mine, but nevertheless very nice ;)
Solution: Watch guessed by Martin S.
(Unfortunately it’s Maria’s, I don’t have a real image of it yet… Coming soon)
## Zoom F
If you can directly tell me where it comes from I’m impressed!
Solution: Novell animal guessed by Maria
## Zoom G
Nice and old one! We use it to decrease the noise.
Solution: Mousepad guessed by Martin S.
## Zoom H
Teachers may know it.
Solution: Whiteboard marker guessed by Michael Rennecke and Christoph R.
## Zoom I
It’s a small zoom rate and very easy, but it looks nice.
Solution: DVI-Connector guessed by Michael Rennecke
## Zoom J
Done with a tool from previous image.
Solution: Painted Whiteboard guessed by Michael Rennecke
(Unfortunately with a hint…)
## Zoom K
It’s a mini computer.
Solution: Chipcard chip guessed by Martin S.
## Zoom L
I don’t really like it, maybe I’m the only one who doesn’t…
Solution: Sugar guessed by Christoph R.
## Zoom M
Also easy I think..
Solution: Screw guessed by Norman
## Zoom N
Office stuff.
Solution: Ammo for stapler gun guessed by Martin S.
## Zoom O
From the refrigerator.
Solution: Sausage guessed by Steffi
## Zoom P
You are using it at the moment! Thanks to Rumpel!
Solution: (Mona Lisa) Harddrive guessed by Michael Rennecke
## Zoom Q
At least one of it is actually running in every bigger machine.
Solution: Fan guessed by Michael Rennecke
## Zoom R
Ok, thats difficult, I’m wondering if anyone can find the right answer. I’ve already blogged about it…
Solution: Look through a SUN-Ray guessed by Michael Rennecke
## Zoom S
Small zoom and simple to guess.
Solution: Crinkled cardboard guessed by Martin S.
## Zoom T
Also for teachers.
Solution: Chalk guessed by Martin S.
## Zoom U
Sportsmen know such things.
Solution: Rumpel’s scab ;) guessed by Martin S.
## Zoom V
You’ll find one in nearly every office.
Solution: Pencil guessed by Christoph R.
## Zoom W
Also not mine ;)
Solution: Shaved beard guessed by Michael Rennecke
## Zoom X
Mmmh, disgusting, isn’t it?
Solution: Kiwi guessed by Maria and Norman
## Zoom Y
Also disgusting I think.
Solution: Dried Strawberry guessed by Maria
## Zoom Z
Germans should know it!
Solution: Print media guessed by Martin S.
(Wow, c’t identified! It’s written on the CD)
## Zoom 1
Oh nice colors.
Solution: Display guessed by Martin S.
## Zoom 2
Something like a kaleidoscope?
Solution: Condensed water guessed by Michael Rennecke
## Zoom 3
Solution: Apple stem guessed by Maria
## Zoom 4
Yes, that is mine!
Solution: Unshaved beard guessed by Michael Rennecke
Tomorrow I’ll provide some more images, but not for puzzling because to some of the images I don’t have a right solution or I don’t know an exact name. So be patient ;)
Update: As promised the album.
Microscoping
## uuurrgh... Ubuntu
Ubuntu, you all should know, isn’t my preferred operating system. It’s very nice for Linux beginners and may reduce some manual work on private machines, but having heard about the current bug, I’m very confused why we still have to use Ubuntu in our PC pools, why some work groups are so emphatic about this system, and why we have to administrate their servers and local machines with Ubuntu.
I’m still wondering why simple users on Ubuntu systems can, out of the box, read all log files or the shadow file.. That is not the kind of security I’m dreaming about ;)
The current bug is very simple (via):
Now you’ve owned the shadow file and you are able to modify root's passphrase! It’s just too easy…
By the way I tried it by myself and got a funny message:
And my friend Rumpel also tried this exploit and after lunch I just heard him saying
fuck, bolted out, by my self...
not able to disable his screensaver. Maybe he changed a little bit too much in his shadow file!? ;)
Fortunately the patch is released, so have a lot of fun while updating your systems. You should reboot after the update, otherwise the bug is still enabled…
## Google does not like self-signed SSL certs
The last few days my feeds were out of date. I manage them with Google’s solution called feedburner, you may have recognized it.
It seems that the developers of this project changed some stuff; anyway, they did not update my feeds. The last days (or weeks) I did not have the time to care about it, but today I found some minutes.
When I tried to resync my feeds manually I got this nice red error (see also the picture):
There is an issue that must be addressed with your source feed for the feed "binfalse" sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
This is caused by my Apache redirect directive that redirects all visitors looking for an insecure URL at port 80 to my SSL encrypted content at port 443:
So you see I’m caring about security ;)
This method has worked for a long time, but now feedburner tries to verify the certs, and because of a lack of money I signed my certs myself. So feedburner denies access and doesn’t reread my feeds to update its database. To repair this problem I’m just redirecting my real content and not the feeds, so feedburner is happy, and why should I care about the secure connection of feedburner to my site..
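One way to express that with mod_rewrite (a sketch with a placeholder hostname rather than the exact original directives) is to exclude the feed paths from the HTTPS redirect:

RewriteEngine On
# keep the feeds reachable over plain HTTP so feedburner can fetch them
RewriteCond %{REQUEST_URI} !^/feed
RewriteRule ^/?(.*)$ https://www.example.org/$1 [R=301,L]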
Nevertheless it is not my preferred solution.
## Inactivity? Not in the slightest...
Before anyone thinks I’m hibernating, I’ve just soo much to do, so there is no time to maintain this blog… Just don’t know what to do first.
This week I had a presentation, topic was “Modeling of Overflow Metabolism in Batch and Fed-Batch Cultures of Escherichia coli”. I also had to submit a study, with the headline “Modeling and evaluation of the dynamic behavior of a fermentation by simulation”, that among others analyzes the dependencies of the ratio of product concentration versus biomass concentration on biological parameters. Additionally this week I finished a work for a friend at the Otto-von-Guericke-Universität Magdeburg. He has to evaluate x-rays taken at a new detector and I developed a tool that does the trick.
As if that were not enough time consuming I’m still working on my project work, next week I have to present it in a research seminar (fortunately in German).
The reason I write this article: tomorrow is the so-called Lange Nacht der Wissenschaften (translated by Google: Long Night of Science). Here the different departments of the university present what they are doing, in a way comprehensible to the public. In this event we also have a slot dealing with SUN Spots where I’m presenting some cool stuff I’ve programmed. Planned are some introductions to what Spots are and what they can do, and of course some demos. Among others we play with the light sensor and visualize a sunrise and a sunset, and we demonstrate how one can regulate a fan depending on the temperature measured with a Spot (to induce a higher temperature a candle is planned; I hope everything will go well and the Spot won’t melt away!). Of course some basic demos are shown, like AirText or the BouncingDemo, and last but not least it is time to play. I have prepared a labyrinth that has to be solved by various people against each other (up to seven people at a time in the same labyrinth, limited only by the number of available Spots), and I also developed a control system for the game Blobby Volley to navigate the Blobbies with a Spot; maybe you’ve heard about it ;) We also wanted to build a little car to drive around a little bit, but I’m not such an engineer, so this car isn’t finished…
If you are bored and don’t know what to do tomorrow visit this presentation!! It’s at 8.30 am in room 3.31 at the Von-Seckendorff-Platz 1.
Maybe I publish all the code when there is a little bit time to write some comments and how-to’s.
So you see, I’m very busy at the moment. If anyone has nothing to do, just notify me, I’ll give you something to work on!
## Thunderbird to systray
Until version 3 of Thunderbird, or more exactly icedove, I used the add-on New Mail Icon to free the busy space in the task list that the Thunderbird process uses even though I very rarely have this window in the foreground. But it seems that there is no further development in this project, so this add-on isn’t compatible with the current major release…
On my main desktop I thought I had to live with that, but on my notebook screen there is a lack of space, more than ever, so I had to search for an alternative tool. On my way I found a tool (not an add-on) called AllTray; it’s available in the Debian/Ubuntu repository. It can dock any window to your tray, so it doesn’t depend on Thunderbird: you can also dock a terminal or your editor or even a complete (Oracle VM) VirtualBox instance.
For my Thunderbird problem it’s only half of the solution, because this tool doesn’t tell me whether there are unread mails. But after some more research I found a real add-on called FireTray that does the desired trick. So, more space for other junk ;-)
## Serv local printer
I have an old printer, an HP Laserjet 6P. It is very reliable and fast, so no need to buy a new one. But there is a problem (I thought), this printer has no network interface, it is connected with a parallel port to my host. Some minutes ago I racked my head how to use this printer with my notebook. Now I’m wondering how easy it is using cups!!
On the server side (the machine that is connected to the printer) you just have to modify this printer and check the field called “Share This Printer”, and in the administration tab just enable “Share printers connected to this system” and “Allow printing from the Internet”.
On your client you only have to publish your server. To do it for the complete system write the following line in your /etc/cups/client.conf , to set this server only for your local user account write it to your users \$HOME/.cups/client.conf :
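The line in question is cups' ServerName directive; with the print server's hostname as a placeholder, it is simply:

ServerName printserver.example.org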
You just have to specify the port if your server is not listening on the default port 631.
That’s it! Open a document and try to print! I still cannot believe that it is that easy ;)
Thanks to the cups-team
## Need more bandwidth!
Today I got my new notebook, an IdeaPad. I had some concerns about the glare display; I had never used glare displays, but it seems to be no problem, and I don’t have a choice: Lenovo doesn’t sell that kind of notebook without a glare display.
This laptop comes with Windows XP and of course I have to fix this bug ;)
But before I delete the Windows installation and install a proper OS, the original system has to be backed up (I want to test some things before I decide whether to buy the laptop). So I installed the first release of „Ύπνος“ on my USB flash drive and booted into it. To back up the hard drive I mounted a piece of my main machine’s hard drive via sshfs on the laptop and copied the laptop’s hard drive to the other machine:
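The commands were along these lines (hostname and paths are placeholders, not the exact invocation):

sshfs user@mainbox:/srv/backup /mnt/backup
dd if=/dev/sda of=/mnt/backup/ideapad.img bs=4M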
Ok, the notebook’s drive holds 160 GB and I just have a 99 ct fast ethernet switch, so you can calculate the time I have to wait… That sucks; doesn’t anyone have a gigabit switch lying around? I would prefer Cisco switches ;)
Hopefully the backup will finish today, so I can play a little bit with the laptop and its luxurious 1.280x720 screen resolution on the 10’’ glare display.
## Which country is the most stupid
Today I had a conversation with a scientist from Bulgaria who is working with microarrays. He told me some practical experiences of his work. It was very interesting and I learned a lot of things, in spite of the fact that I gave a lecture about microarrays some time ago.
In this talk he said a wonderful sentence:
Früher dachte ich immer die Russen wären dumm, bis ich die Amerikaner kennen gelernt habe!
English translation: Some years ago I thought the Russians are stupid, until I got to know the Americans.
Topic was the structuring of websites of companies. If he has a question he always has to search through the web because everyone tells him the answer is anywhere in there! affymetrix for example has thousands of user manuals, the intersection of all of these papers is very small, but one paper has hundreds of pages… And I think he is totally right. The arrangement of information today is very terrible, to find what you are searching about is some kind of art! But he doesn’t mince matters. I really like Eastern Europeans ;)
He invited me to his lab tomorrow so I can see how this affymetrix machinery produces the data that I get to analyze.
## Little quickie through Germany
Oh no, not that kind of quickie you might think of! Rumpel and I decided more or less spontaneously to go to Bonn to visit one of our former employees, Martin, and additionally take a little look at SIGINT in Cologne.
So we rented a car at Sixt on Friday morning and met Martin at 5 pm in his flat. Of course our trip was very analog: we didn’t have any navigation device, just a printed route calculated by Google Maps, and we relied on male instinct on the way through Germany and the heavy traffic in the Ruhr Valley on a Friday afternoon before a holiday… What should I say, of course everything went totally well and we had a lot of fun in our little car! You can see some pictures at picasa.
Quickie through Germany
Of course it was a great weekend! We saw a lot of fascinating places in Bonn and Cologne, like Cologne Cathedral, big ships on the Rhine, and the Media Center in Cologne. The events at SIGINT were also interesting, though it cannot be compared with the Chaos Communication Congress in Berlin. In Cologne you’ll always get a chair and the queues are very short. Nevertheless, the topics are of high quality.
All in all it was an excellent trip, even it was very expensive.
## Git merging showcase
One of the people working with me on some crazy stuff always forgets to pull the newest revision of the repository before changing the content, and so he very often has trouble with diverging versions when he decides to push his work to the master repository. His current workaround is to check out the complete repository in a new directory and merge his changes by hand into that fresh revision… Here is a little instruction to maximize his productivity and minimize the network traffic.
Lets assume we have a repository, created like this:
And we have one user, that clones this new repository and inits:
So we have some content in our root repo. Another user (our bad guy) clones that repository too:
So let a bit of time elapse, while user one is changing the root repository so that the testfile may look like this:
And of course, the changer commits his changes:
Ok, nothing bad happened, but now our special friend decides to work:
What do you think will happen if he tries to push his changes to the master repo? You’re right, nothing but an error:
Mmmh, so lets try to pull the root repo:
Our friend would now check out the whole repository and insert his changes by hand, but what’s the better solution? Merging the file! Git has a function called mergetool; you can merge the conflicts with a program of your choice. Some examples are vimdiff, xxdiff, emerge, or, for GUI lovers, kdiff3. In this post I’ll use vimdiff:
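For example (assuming vimdiff is installed and chosen as the tool):

git mergetool -t vimdiff

Git then opens each conflicting file in vimdiff, one after the other.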
So change the conflicting file(s), you will also see the changes made in root’s and in your local revision. If you’re done just save it and commit your merge:
Great, now there is nothing that prevents you from pushing your changes to the root repository:
I think this way of solving such conflicts may be much more efficient than cloning the whole repository again and again and again ;)
|
# [NTG-context] mimimal typescripting
Vyatcheslav Yatskovsky yatskovsky at gmail.com
Sun Aug 12 17:20:13 CEST 2007
Dear Hans,
I fully understand that such complex typescripts can be the only way to use full-featured professional fonts. But if I have only one ttf with one kind of typeface in it, should I write it all? XeTeX style was very appealing for many people, and verbose font definitions may distract these users.
But anyway, thank you for escaping us from TeX-specific font hell :)
Let me restate my question: is the following definition really MINIMAL, or can it be further reduced in some way?
\stoptypescript
\stoptypescript
|
# Consider a centralized school choice problem with: • Four students, t_1.t_2.t_3, and t_4. • Four schools....
###### Question:
Consider a centralized school choice problem with: • Four students, t_1.t_2.t_3, and t_4. • Four schools. 5 1.5 2.5-3. and s_4. and • Each school having a single seat. The preferences of students are as follows Preferences >_j=3 >_j-2 11 $_1$_2 53 5:1 52 51 t_4 $2 The priorities at schools are as follows: Priorities >_1-3 >_j-2$_1 t3 52 t2 t_3 54 Which of the following statements are correct? The following assignment is stable: ſt 1.s 1).(t 2.s 2).(t 3,3). (t 4.54) There exists no stable assignment in this case. The following assignment is stable:(t_1.s_4).(t_2.s_31.(t_3.s 1). (t_4.52) The following assignment is stable: (t_1.s_2). (t_2.s_3). (t_3.s_1).(t_4.s_4)
|
# General initial value problem (DE's)
1. Dec 11, 2006
### prace
1. The problem statement, all variables and given/known data
a) Consider the initial value problem $$\frac{dA}{dt} = kA, A(0) = A_0$$ as the model for the decay of a radioactive substance. Show that in general the half-life T of the substance is $$T = -\frac{ln2}{k}$$
b) Show that the solution of the initial-value problem in part a) can be written as $$A(t) = A_02^{\frac{-t}{T}}$$
2. Relevant equations
**See attempt**
3. The attempt at a solution
So I started with the given information: $$\frac{dA}{dt} = kA, A(0) = A_0$$ and turned it into a DE then solving by the separation of variables technique.
so, $$A=A_0e^{kt}$$
From here I tried to divide the whole equation by $$\frac{A}{2}=A_0e^{kt}$$, but that did not seem to do anything.
Can anyone please give me a pointer to get me started in the right direction?
Thanks.
2. Dec 11, 2006
### Tomsk
You are trying to find the time when A is half its initial value, ie $A(T)=A_{0}/2$. You got the right formula for A, so you should say $A(T)=A_{0}/2=A_0e^{kT}$ and then solve for T.
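Spelled out, that gives both parts: from $A_{0}/2 = A_0 e^{kT}$ we get $e^{kT} = \tfrac{1}{2}$, so $kT = -\ln 2$ and $T = -\tfrac{\ln 2}{k}$. For part b), substitute $k = -\tfrac{\ln 2}{T}$ back into the solution: $A(t) = A_0 e^{kt} = A_0 e^{-(\ln 2)\,t/T} = A_0\, 2^{-t/T}$.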
|
included with the question I need to construct a 95% confidence interval for the mean lifetime of the tires.
PLEASEEEEE HELP, I really need to understand how to do this.
|
User floc - MathOverflow (recent activity)

## When Do a Few Eigenvectors of Graph Laplacians Not Determine the Graph? (asked 2012-08-18)

Essentially as the title, but I'll give a little bit more background.

I have some finite graph $G$ with $n$ vertices and adjacency matrix $A$. Let $D$ be the $n$ by $n$ matrix with the degree of vertex $i$ at the $i,i$ entry, and 0's everywhere else. Finally, let $L = D - A$ be the (unnormalized) graph Laplacian of $G$. Next, fix some collection of eigenvectors and eigenvalues of $L$.

My big-picture question is: Under what conditions are there other graphs which share those eigenvectors/values?

With a little more precision: Approximately how many eigenvectors & eigenvalues can be specified before the answer is no? About how many graphs are there when the answer is yes?

It seems likely that the answer would be a little complicated. I know a few special cases (e.g. 2 eigenvectors/values determine cycles completely; on the other hand, as long as the average degree is fairly large, there are generally very many graphs with the same bottom eigenvector). I certainly appreciate hearing about conditions which aren't tight, as long as they are at least a little broad.

I'm interested in the situation where the eigenvectors DON'T determine the graph, so I would also appreciate any literature pointers to relaxations of this idea. For example, one could imagine requiring that the Laplacian contracts the eigenvectors by at least a certain amount (this certainly allows many graphs, but that space is pretty big). In another direction, it seems plausible there is some analogue in the language of graphons.

Thanks for any help!

## An $L^{\infty}$ Version of Principal Component Analysis? (asked 2012-08-05)

I have a $k$ by $n$ matrix $A$, with $k \ll n$. In case it helps, the $k$ rows are orthonormal.

I'm interested in finding a $k$ by $k$ orthogonal matrix $M$ so as to maximize the $L^{\infty}$ norms of the rows of $MA$. This is a little imprecise, since it may not be possible to maximize all of them simultaneously. At the moment, my criterion is to maximize the weighted sums of these $L^{\infty}$ norms by some weights $w_{1}, \ldots, w_{k}$. All of these weights are fairly similar, so if it is easier, I would also be happy with maximizing the average.

This seems to be a little bit similar to PCA, which essentially finds rows with maximal $L^2$ norm.

Thanks for any suggestions/literature references.

## Comments by floc

On the graph Laplacian question (2012-08-19): Dear Chris, Thanks for the pointer! That seems like a fairly large family, and between this and Qiaochu's comment it seems clear that there is only something nontrivial to say about very special graphs. I'm inclined to accept unless someone comes by soon with a miraculous pointer to a literature on relaxations.

On the graph Laplacian question (2012-08-19): Re Douglas: Thank you. Re Qiaochu: Interesting comment. I wouldn't be surprised if a small number of eigenvectors were essentially always enough, but am not familiar with what you mean by the corresponding 'Galois group'. A quick Google search only found me articles that seemed focused on graphs with at least some symmetry. (I should also mention: Even ~log(n) is interesting to me. I'd be happy to know that for large, fairly dense graphs, there is generically freedom to fix ~10 eigenvectors!)

On the graph Laplacian question (2012-08-18): Re: Douglas Zare: I'm not sure. The few 'classical' examples I've looked at seem to have pretty different eigenvectors. Until your question, I hadn't thought about this avenue - I thought they were of interest if you want different eigenvectors. Could you let me know any intuition about why these might be good candidates?

On the $L^{\infty}$ PCA question (2012-08-05): Thanks for the comment on multidimensional scaling. I can see that this question fits into that framework, but that framework is much broader (and seems to encompass many things we'd like to do, but which are not computationally feasible). Do you know if this particular question (or one similar to it) has been addressed?
|
# What is the topic sentence in each of the following paragraphs?
###### Question:
What is the topic sentence in each of the following paragraphs?
|
# Blog Archives
## CGMO-2012 (China Girls Math Olympiad 2012) Problem 8
Find the number of integers $k$ in the set $\{0, 1, 2,\cdots, 2012\}$ such that $\binom{2012}{k}$ is a multiple of $2012$
## IMO 1983 – Problem 3
Let $n$ and $k$ be positive integers and let $S$ be a set of $n$ points in the plane such that
$(i)$ no three points of $S$ are collinear, and
$(ii)$ for any point $P$ of $S$, there are at least $k$ points of $S$ equidistant from $P$
Prove that $k < \frac{1}{2} + \sqrt{2n}$
Try the question …
Solution will be updated soon
## IMO 2012 problems
This year IMO problems !!!!
Problem 1 :
Given triangle $ABC$, the point $J$ is the centre of the excircle opposite the vertex $A$. This excircle is tangent to the side $BC$ at $M$, and to the lines $AB$ and $AC$ at $K$ and $L$, respectively. The lines $LM$ and $BJ$ meet at $F$, and the lines $KM$ and $CJ$ meet at $G$. Let $S$ be the point of intersection of the lines $AF$ and $BC$, and let $T$ be the point of intersection of the lines $AG$ and $BC$. Prove that $M$ is the midpoint of $ST$.
Problem 2 :
Let ${n\ge 3}$ be an integer, and let ${a_2,a_3,\ldots ,a_n}$ be positive real numbers such that ${a_{2}a_{3}\cdots a_{n}=1}$ Prove that
$\displaystyle \left(a_{2}+1\right)^{2}\left(a_{3}+1\right)^{3}\dots\left(a_{n}+1\right)^{n}>n^{n}.$
Problem 3 :
The liar’s guessing game is a game played between two players ${A}$ and ${B}$. The rules of the game depend on two positive integers ${k}$ and ${n}$ which are known to both players.
At the start of the game ${A}$ chooses integers ${x}$ and ${N}$ with ${1 \le x \le N.}$ Player ${A}$ keeps ${x}$secret, and truthfully tells ${N}$ to player ${B}$. Player ${B}$ now tries to obtain information about ${x}$ by asking player ${A}$ questions as follows: each question consists of ${B}$ specifying an arbitrary set ${S}$ of positive integers (possibly one specified in some previous question), and asking ${A}$whether ${x}$ belongs to ${S}$. Player ${B}$ may ask as many questions as he wishes. After each question, player ${A}$ must immediately answer it with [i]yes[/i] or [i]no[/i], but is allowed to lie as many times as she wants; the only restriction is that, among any ${k+1}$ consecutive answers, at least one answer must be truthful.
After ${B}$ has asked as many questions as he wants, he must specify a set ${X}$ of at most ${n}$positive integers. If ${x}$ belongs to ${X}$, then ${B}$ wins; otherwise, he loses. Prove that:
1. If ${n \ge 2^k,}$ then ${B}$ can guarantee a win. 2. For all sufficiently large ${k}$, there exists an integer ${n \ge (1.99)^k}$ such that ${B}$ cannot guarantee a win.
Problem 4 :
Find all functions ${f:\mathbb Z\rightarrow \mathbb Z}$ such that, for all integers ${a,b,c}$ that satisfy ${a+b+c=0}$, the following equality holds:
$\displaystyle f(a)^2+f(b)^2+f(c)^2=2f(a)f(b)+2f(b)f(c)+2f(c)f(a).$
Problem 5 :
Let ${ABC}$ be a triangle with ${\angle BCA=90^{\circ}}$, and let ${D}$ be the foot of the altitude from ${C}$. Let ${X}$ be a point in the interior of the segment ${CD}$. Let ${K}$ be the point on the segment ${AX}$such that ${BK=BC}$. Similarly, let ${L}$ be the point on the segment ${BX}$ such that ${AL=AC}$. Let ${M}$ be the point of intersection of ${AL}$ and ${BK}$.
Show that ${MK=ML}$.
Problem 6 :
Find all positive integers ${n}$ for which there exist non-negative integers ${a_1, a_2, \ldots, a_n}$ such that
$\displaystyle \frac{1}{2^{a_1}} + \frac{1}{2^{a_2}} + \cdots + \frac{1}{2^{a_n}} = \frac{1}{3^{a_1}} + \frac{2}{3^{a_2}} + \cdots + \frac{n}{3^{a_n}} = 1.$
IMO 2012
Thank you and good luck for these delicious problems
## Bumper Problems
These are the set of 5 problems each of 5 marks
The person who gets more than 20 points will get a mathematics book on any topic they want…
So try these beautiful problems to test your mathematics abilities and getting new things…
Ways by which you can answer :
I am adding a form in the end of this post , you can answer there ….
Problem 1)
The bisectors of the angles $A$ and $B$ of the $\bigtriangleup ABC$ meet the sides
$BC$ and $CA$ at the points $D$ and $E$ , respectively.
Assuming that $AE+BD =AB$, determine the angle $C$
Problem 2)
Given a $\bigtriangleup ABC$ and $D$ be point on side $AC$ such that $AB = DC$,
$\angle BAC= 60-2X$ , $\angle DBC= 5X$ and $\angle BCA= 3X$
Find the value of $X$
Problem 3)
If $p$ and $q$ are natural numbers such that $\frac{p}{q} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots - \frac{1}{1318} + \frac{1}{1319}$,
Prove that p is divisible by 1979 .
Problem 4)
Find highest degree n of 1991 for which 1991ⁿ divides the number :
Problem 5)
Let ƒ(n) denote the sum of the digits of n. Let N = 4444⁴⁴⁴⁴
Find ƒ(ƒ(ƒ(n))))
You can use these symbols to write solutions more conveniently
Mathematical Operators ∀ ∁ ∂ ∃ ∄ ∅ ∆ ∇ ∈ ∉ ∊ ∋ ∌ ∍ ∎ ∏ ∐ ∑ − ∓ ∔ ∕ ∖ ∗ ∘ ∙ √ ∛ ∜ ∝ ∞ ∟ ∠ ∡ ∢ ∣ ∤ ∥ ∦ ∧ ∨ ∩ ∪ ∫ ∬ ∭ ∮ ∯ ∰ ∱ ∲ ∳ ∴ ∵ ∶ ∷ ∸ ∹ ∺ ∻ ∼ ∽ ∾ ∿ ≀ ≁ ≂ ≃ ≄ ≅ ≆ ≇ ≈ ≉ ≊ ≋ ≌ ≍ ≎ ≏ ≐ ≑ ≒ ≓ ≔ ≕ ≖ ≗ ≘ ≙ ≚ ≛ ≜ ≝ ≞ ≟ ≠ ≡ ≢ ≣ ≤ ≥ ≦ ≧ ≨ ≩ ≪ ≫ ≬ ≭ ≮ ≯ ≰ ≱ ≲ ≳ ≴ ≵ ≶ ≷ ≸ ≹ ≺ ≻ ≼ ≽ ≾ ≿ ⊀ ⊁ ⊂ ⊃ ⊄ ⊅ ⊆ ⊇ ⊈ ⊉ ⊊ ⊋ ⊌ ⊍ ⊎ ⊏ ⊐ ⊑ ⊒ ⊓ ⊔ ⊕ ⊖ ⊗ ⊘ ⊙ ⊚ ⊛ ⊜ ⊝ ⊞ ⊟ ⊠ ⊡ ⊢ ⊣ ⊤ ⊥ ⊦ ⊧ ⊨ ⊩ ⊪ ⊫ ⊬ ⊭ ⊮ ⊯ ⊰ ⊱ ⊲ ⊳ ⊴ ⊵ ⊶ ⊷ ⊸ ⊹ ⊺ ⊻ ⊼ ⊽ ⊾ ⊿ ⋀ ⋁ ⋂ ⋃ ⋄ ⋅ ⋆ ⋇ ⋈ ⋉ ⋊ ⋋ ⋌ ⋍ ⋎ ⋏ ⋐ ⋑ ⋒ ⋓ ⋔ ⋕ ⋖ ⋗ ⋘ ⋙ ⋚ ⋛ ⋜ ⋝ ⋞ ⋟ ⋠ ⋡ ⋢ ⋣ ⋤ ⋥ ⋦ ⋧ ⋨ ⋩ ⋪ ⋫ ⋬ ⋭ ⋮ ⋯ ⋰ ⋱ ⋲ ⋳ ⋴ ⋵ ⋶ ⋷ ⋸ ⋹ ⋺ ⋻ ⋼ ⋽ ⋾ ⋿ Exponents : ⁰ ¹ ² ³ ⁴ ⁵ ⁶ ⁷ ⁸ ⁹ ⁺ ⁻ ⁼ ⁽ ⁾ ₀ ₁ ₂ ₃ ₄ ₅ ₆ ₇ ₈ ₉ ₊ ₋ ₌ ₍ ₎
## Problem of the day 3
Nice problem !
Functional Equations !
Q)
Determine all functions ƒ satisfying the functional relations
ƒ(x) + ƒ(1/(1-x)) = 2(1-2x)/(x(1-x))
where x is a real number x ≠ 0 , x ≠ 1
|
# Integration By Substitution – Formula and Examples
Here you will learn what is integration by substitution method class 12 with examples.
Let’s begin –
## Integration By Substitution
The method of evaluating an integral by reducing it to standard form by a proper substitution is called integration by substitution.
If $$\phi(x)$$ is a continuously differentiable function, then to evaluate integrals of the form
$$\int$$ $$f(\phi(x))$$ $$\phi'(x)$$ dx, we substitute $$\phi(x)$$ = t and $$\phi'(x)$$ dx = dt
This substitution reduces the above integral to $$\int$$ f(t) dt.
After evaluating this integral we substitute back the value of t.
Example : Prove that $$\int$$ sin(ax + b) dx = $$-1\over a$$ cos(ax + b) + C.
Solution : Let ax + b = t. Then, d(ax + b) = dt $$\implies$$ a dx = dt $$\implies$$ dx = $$1\over a$$ dt
Putting ax + b = t and dx = $$1\over a$$ dt, we get
$$\int$$ sin(ax + b) dx = $$1\over a$$ $$\int$$ sin t dt
= $$-1\over a$$ cos t + C
= $$-1\over a$$ cos(ax + b) + C
Example : Evaluate $$\int$$ $$cos^2x\over {sin^2x + sinx}$$ dx
Solution : I = $$\int$$ $$(1-sin^2x)cosx\over {sinx(1 + sinx)}$$ dx = $$\int$$ $$1 – sinx\over {sinx}$$ cosx dx
Put sinx = t $$\implies$$ cosx dx = dt
$$\implies$$ I = $$\int$$ $$1 – t\over t$$ dt
= ln| t | – t + C
= ln|sinx| – sinx + C
|
Not using Taylor series or Maclaurin series, prove $\frac{1}{\sqrt{1-x^2}}=1+\frac{1\cdot3}{2\cdot4}x^2+\frac{1\cdot3\cdot5}{2\cdot4\cdot6}x^4+\cdots$
I’m trying to find a proof of
$$\frac{1}{\sqrt{1-x^2}} = 1+\frac{1\cdot3}{2\cdot4}x^2+\frac{1\cdot3\cdot5}{2\cdot4\cdot6}x^4+\cdots,$$ which doesn’t need Taylor or Maclaurin series, like the proof of Mercator series and Leibniz series.
I tried to prove it by using calculus, but I couldn’t hit upon a good proof.
• Are you familiar with the generalized binomial theorem? – Crescendo Nov 6 '17 at 17:32
• math.stackexchange.com/questions/746388/… – lab bhattacharjee Nov 6 '17 at 17:47
• I can use it only when coefficient is natural, because I need Taylor or Maclaurin series to get the rational one. – Gymnast Nov 6 '17 at 22:19
• So, labbhattacharjee, I’m sorry it must not at all be what I want to know about ... – Gymnast Nov 6 '17 at 22:21
• The general binomial theorem can be proved without the use of Taylor / Maclaurin series but the proof is not quite well known. See this blog post. – Paramanand Singh Nov 7 '17 at 3:55
Since $$\frac{(2n-1)!!}{(2n)!!}=\frac{1}{4^n}\binom{2n}{n}=\frac{1}{\pi}\int_{0}^{\pi}\cos^{2n}(\theta)\,d\theta$$ due to $\cos\theta=\frac{e^{i\theta}+e^{-i\theta}}{2}$, the binomial theorem (standard form) and $\int_{-\pi}^{\pi}e^{im\theta}\,d\theta = 2\pi\,\delta(m)$, for any $x$ such that $|x|<1$ we have
$$\sum_{n\geq 0}\frac{(2n-1)!!}{(2n)!!}x^{2n} = \frac{1}{\pi}\int_{0}^{\pi}\sum_{n\geq 0} x^{2n}\cos^{2n}(\theta)\,d\theta = \frac{2}{\pi}\int_{0}^{\pi/2}\frac{d\theta}{1-x^2\cos^2\theta}$$ and by enforcing the substitution $\theta=\arctan t$ in the last integral we get $$\sum_{n\geq 0}\frac{(2n-1)!!}{(2n)!!}x^{2n} =\frac{2}{\pi}\int_{0}^{+\infty}\frac{dt}{(1-x^2)+t^2}=\frac{1}{\sqrt{1-x^2}}$$ as wanted.
• Essentially this is using $1/(1-x)=1+x+x^{2}+\dots$ (sum of an infinite GP). It is rather very smart to use this formula and integration to obtain the series for $(1-x^{2})^{-1/2}$. +1 – Paramanand Singh Nov 7 '17 at 3:58
|
#### Volume 5, issue 2 (2005)
Bootstrapping in convergence groups
### Eric L Swenson
Algebraic & Geometric Topology 5 (2005) 751–768
arXiv: math.GR/0508172
##### Abstract
We prove a true bootstrapping result for convergence groups acting on a Peano continuum. We give an example of a Kleinian group $H$ which is the amalgamation of two closed hyperbolic surface groups along a simple closed curve. The limit set $\Lambda H$ is the closure of a “tree of circles” (adjacent circles meeting in pairs of points). We alter the action of $H$ on its limit set such that $H$ no longer acts as a convergence group, but the stabilizers of the circles remain unchanged, as does the action of a circle stabilizer on said circle. This is done by first separating the circles and then gluing them together backwards.
##### Keywords
convergence group, bootstrapping, Peano continuum
Primary: 20F34
Secondary: 57N10
|
## Introductory Algebra for College Students (7th Edition)
Published by Pearson
# Chapter 2 - Section 2.3 - Solving Linear Equations - Exercise Set: 23
x=5
#### Work Step by Step
3(x+1)=7(x-2)-3
3(x)+3(1)=7(x)-7(2)-3
3x+3=7x-14-3
3x+3=7x-17
3x-7x+3=-17
-4x+3=-17
-4x=-20
x=$\frac{-20}{-4}$
x=5
Check the answer.
3(5+1)=7(5-2)-3
3(6)=7(3)-3
18=21-3
18=18
|
# How to uniquely identify phase difference between harmonics?
Let's say I have a multi-frequency signal containing 440 Hz and 880 Hz components, slightly out of phase.
I can take a chunk of the signal and calculate the phase difference of the two frequencies from the FFT bins. However, since the frequencies are different, the phase difference will differ depending upon where I take the chunks, even though they are taken from the same signal.
I wonder whether there is a way to uniquely identify phase difference between harmonics, with arbitrary chunks? (so that I can identify harmonic signals.)
• Phase relative to what? 1970 Midnight GMT? Some start time when you flipped a switch, or other event? Some fundamental periodicity or pitch interval zero crossing? Or ? – hotpaw2 Dec 14 '18 at 20:11
• My question is how to come up with a definition of "phase difference" representing uniquely the same signal from which arbitrary chunks are taken. I think @robert bristow-johnson has given a good answer. – Wei Lin Dec 23 '18 at 2:18
i am assuming this is a periodic (or quasi-periodic) function. also seems like the signal is a musical tone. periodic signals have a period, $$P$$, and the reciprocal of that period is the fundamental frequency; $$f_0 \triangleq \frac{1}{P}$$
or, as angular frequency; $$\omega_0 = 2 \pi f_0 = \frac{2 \pi}{P}$$ .
the periodic signal is $$x(t)$$ where
$$x(t+P) = x(t) \qquad \forall t \in \mathbb{R}$$
it can be represented as a Fourier series
\begin{align} x(t) \ &= \ a_0 \ + \ 2 \sum_{k=1}^{\infty} a_k \cos(k \omega_0 t) - b_k \sin(k \omega_0 t) \\ &= \ a_0 \ + \ 2 \sum_{k=1}^{\infty} a_k \frac{e^{j k \omega_0 t} + e^{-j k \omega_0 t}}{2} \ - \ b_k \frac{e^{j k \omega_0 t} - e^{-j k \omega_0 t}}{2j} \\ &= \ a_0 \ + \ \sum_{k=1}^{\infty} (a_k + jb_k) e^{j k \omega_0 t} \ + \ (a_k - jb_k) e^{-j k \omega_0 t} \\ &= \ \sum_{k=-\infty}^{\infty} c_k \ e^{j k \omega_0 t} \\ \end{align}
where
$$c_k = \begin{cases} a_{-k} - jb_{-k}, & \quad \text{for } k < 0 \\ a_0, & \quad\quad k = 0 \\ a_k + jb_k, & \quad\quad k > 0 \end{cases}$$
and, going in the other direction,
$$\begin{array}{lcl} a_0 & = & c_0 \\ a_k & = & \Re\{c_k \} \quad\quad \text{for } k>0\\ b_k & = & \Im\{c_k \} \quad\quad\quad k>0 \end{array}$$
note that for real $$a_k$$ and real $$b_k$$, then
$$c_{-k} = c_k^* = \text{"complex conjugate of } c_k \text{ "}$$
it's easier to get $$c_k$$ from $$x(t)$$ and then compute $$a_k$$ and $$b_k$$ from the $$c_k$$ which is:
$$c_k = \tfrac{1}{P} \int\limits_{t_0}^{t_0+P} x(t) \, e^{-j k \omega_0 t} \, \mathrm{d}t \qquad \text{for any } t_0 \in \mathbb{R}$$
you can uniquely define the phase of overtones (or harmonics apart from the fundamental) with reference to the fundamental if there is any energy at the fundamental. (it is not always the case that a periodic signal has non-zero energy in its own fundamental frequency.)
now it's also true that:
\begin{align} x(t) \ &= \ \sum_{k=-\infty}^{\infty} c_k \ e^{j k \omega_0 t} \\ &= \ a_0 \ + \ 2 \sum_{k=1}^{\infty} a_k \cos(k \omega_0 t) - b_k \sin(k \omega_0 t) \\ &= \ a_0 \ + \sum_{k=1}^{\infty} r_k \cos(k \omega_0 t + \phi_k) \\ \end{align}
where
\begin{align} r_k &= 2 \big| a_k + j\,b_k \big| \\ &= 2 \sqrt{a_k^2 + b_k^2} \\ \\ \phi_k &= \arg \{a_k + j\,b_k \} \\ &= \operatorname{atan2}(b_k, a_k) \\ \end{align}
now, the phases of all of the harmonics are relative to some time $$t=0$$, but that reference time can be defined in such a way that the phase of the fundamental is zero.
let
\begin{align} \tilde{x}(t) \ &\triangleq \ x(t - \tfrac{\phi_1}{\omega_0}) \\ &= \ a_0 \ + \sum_{k=1}^{\infty} r_k \cos \big(k \omega_0 (t - \tfrac{\phi_1}{\omega_0}) + \phi_k \big) \\ &= \ a_0 \ + \sum_{k=1}^{\infty} r_k \cos(k \omega_0 t + \phi_k - k\phi_1) \\ &= \ a_0 \ + \sum_{k=1}^{\infty} r_k \cos(k \omega_0 t + \tilde{\phi}_k) \\ \end{align}
where $$\tilde{\phi}_k \triangleq \phi_k - k\phi_1$$.
you can see that $$\tilde{\phi}_1 = 0$$, that we defined our origin of $$t$$ so that the phase of the fundamental (when $$k=1$$) is 0. then all of the other harmonics have their phases referenced to that "first harmonic" or fundamental. the only problem is that the amplitude of that fundamental must not be zero. $$r_1 \ne 0$$ so that the reference phase, $$\phi_1 = \arg \{a_1 + j\,b_1 \}$$, can be determined.
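To make that recipe concrete, here is a small Python sketch (my own addition; the 440 Hz / 880 Hz test tone, sample rate and phases are assumed values): estimate the bin phases with an FFT and re-reference the overtone to the fundamental via $$\tilde{\phi}_k = \phi_k - k\phi_1$$.

```
import numpy as np

fs = 44100                       # sample rate (assumed)
f0 = 440.0                       # fundamental
n = 22050                        # half a second; both tones land exactly on bins
t = np.arange(n) / fs
x = np.cos(2*np.pi*f0*t + 0.3) + 0.5*np.cos(2*np.pi*2*f0*t + 1.1)

X = np.fft.rfft(x * np.hanning(n))
freqs = np.fft.rfftfreq(n, 1/fs)

def phase_at(f):
    return np.angle(X[np.argmin(np.abs(freqs - f))])

phi1, phi2 = phase_at(f0), phase_at(2*f0)
# phase of the 2nd harmonic referenced to the fundamental, wrapped to (-pi, pi]
phi2_ref = (phi2 - 2*phi1 + np.pi) % (2*np.pi) - np.pi
print(phi2_ref)   # ~0.5 = 1.1 - 2*0.3, independent of where the chunk starts
```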
|
## The Annals of Probability
### Quenched central limit theorem for random walks in doubly stochastic random environment
Bálint Tóth
#### Abstract
We prove the quenched version of the central limit theorem for the displacement of a random walk in doubly stochastic random environment, under the $H_{-1}$-condition, with slightly stronger, $\mathcal{L}^{2+\varepsilon}$ (rather than $\mathcal{L}^{2}$) integrability condition on the stream tensor. On the way we extend Nash’s moment bound to the nonreversible, divergence-free drift case, with unbounded ($\mathcal{L}^{2+\varepsilon}$) stream tensor. This paper is a sequel of [Ann. Probab. 45 (2017) 4307–4347] and relies on technical results quoted from there.
#### Article information
Source
Ann. Probab., Volume 46, Number 6 (2018), 3558-3577.
Dates
Revised: December 2017
First available in Project Euclid: 25 September 2018
https://projecteuclid.org/euclid.aop/1537862440
Digital Object Identifier
doi:10.1214/18-AOP1256
Mathematical Reviews number (MathSciNet)
MR3857862
Zentralblatt MATH identifier
06975493
#### Citation
Tóth, Bálint. Quenched central limit theorem for random walks in doubly stochastic random environment. Ann. Probab. 46 (2018), no. 6, 3558--3577. doi:10.1214/18-AOP1256. https://projecteuclid.org/euclid.aop/1537862440
#### References
• [1] Anders, S., Deuschel, J.-D. and Slowik, M. (2015). Invariance principle for the random conductance model in a degenerate ergodic environment. Ann. Probab. 43 1866–1891.
• [2] Barlow, M. (2004). Random walks on supercritical percolation clusters. Ann. Probab. 32 3024–3084.
• [3] Bass, R. F. (2002). On Aronson’s upper bounds for heat kernels. Bull. Lond. Math. Soc. 34 415–419.
• [4] Berger, N. and Biskup, M. (2007). Quenched invariance principle for simple random walk on percolation clusters. Probab. Theory Related Fields 137 83–120.
• [5] Biskup, M. (2011). Recent progress on the random conductance model. Probab. Surv. 8 294–373.
• [6] Biskup, M. and Prescott, T. M. (2007). Functional CLT for random walk among bounded random conductances. Electron. J. Probab. 12 1323–1348.
• [7] Biskup, M. and Rodriguez, P.-F. (2018). Limit theory for random walks in degenerate time-dependent random environment. J. Funct. Anal. 274 985–1046.
• [8] Chacon, R. V. and Ornstein, D. S. (1960). A general ergodic theorem. Illinois J. Math. 4 153–160.
• [9] Deuschel, J.-D. and Kösters, H. (2008). The quenched invariance principle for random walks in random environments admitting a bounded cycle representation. Ann. Inst. Henri Poincaré B, Calc. Probab. Stat. 44 574–591.
• [10] Hall, P. and Heyde, C. C. (1980). Martingale Limit Theory and Its Application. Academic Press, New York.
• [11] Helland, I. S. (1982). Central limit theorems for martingales with discrete or continuous time. Scand. J. Stat. 9 79–94.
• [12] Hopf, E. (1954). The general temporally discrete Markoff process. J. Rational Mech. Anal. 3 13–45.
• [13] Hopf, E. (1960). On the ergodic theorem for positive linear operators. J. Reine Angew. Math. 205 101–106.
• [14] Horváth, I., Tóth, B. and Vető, B. (2012). Relaxed sector condition. Bull. Inst. Math. Acad. Sin. (N.S.) 7 463–476.
• [15] Komorowski, T., Landim, C. and Olla, S. (2012). Fluctuations in Markov Processes—Time Symmetry and Martingale Approximation. Grundlehren der Mathematischen Wissenschaften 345. Springer, Berlin.
• [16] Kozlov, S. M. (1979). The averaging of random operators. Mat. Sb. 109 188–202.
• [17] Kozlov, S. M. (1985). The method of averaging and walks in inhomogeneous environments. Uspekhi Mat. Nauk 40 61–120.
• [18] Kozma, G. and Tóth, B. (2014). Central limit theorem for random walks in divergence-free random drift field: $H_{-1}$ suffices. Available at http://arxiv.org/abs/1411.4171v1.
• [19] Kozma, G. and Tóth, B. (2017). Central limit theorem for random walks in doubly stochastic random environment: $H_{-1}$ suffices. Ann. Probab. 45 4307–4347.
• [20] Krengel, U. (1985). Ergodic Theorems. De Gruyter Studies in Mathematics 6. de Gruyter, Berlin.
• [21] Kumagai, T. (2014). Random Walks on Disordered Media and Their Scaling Limits. In École d’Été de Probabilités de Saint-Flour XL–2010. Lecture Notes in Math. 2101. Springer, New York.
• [22] Morris, B. and Peres, Y. (2005). Evolving sets, mixing and heat kernel bounds. Probab. Theory Related Fields 133 245–266.
• [23] Nash, J. (1958). Continuity of solutions of parabolic and elliptic equations. Amer. J. Math. 80 931–954.
• [24] Osada, H. (1983). Homogenization of diffusion processes with random stationary coefficients. In Lecture Notes in Mathematics 1021 507–517. Springer, Berlin.
• [25] Papanicolaou, G. C. and Varadhan, S. R. S. (1981). Boundary value problems with rapidly oscillating random coefficients. In Random Fields, Vol. I, II (Esztergom, 1979). Colloquia Mathematica Societatis János Bolyai 27 835–873. North-Holland, Amsterdam.
• [26] Sidoravicius, V. and Sznitman, A.-S. (2004). Quenched invariance principles for walks on clusters of percolation or among random conductances. Probab. Theory Related Fields 129 219–244.
• [27] Tóth, B. (2018). Diffusive and Super-Diffusive Limits for Random Walks and Diffusions with Long Memory. In Proceedings of the International Congress of Mathematicians—2018. Rio de Janeiro. To appear.
• [28] Zeitouni, O. (2004). Random walks in random environment. In Lectures on Probability Theory and Statistics. Lecture Notes in Math. 1837 189–312. Springer, Berlin.
|
# Eddy Current Calculations
1. Aug 20, 2011
### mienaikoe
I searched a lot of google for calculations on eddy currents, and got a lot of things that describe how eddy currents work, but almost nothing about how to quantify the induced currents and the resulting magnetic field. Does anyone here know much about where to start?
In particular, I'm looking to quantify the induced eddy currents in a flat aluminum plate with a changing, perpendicular magnetic field.
2. Aug 21, 2011
### Philip Wood
Maybe it would help to look at a simple special case: a disc. By symmetry, the eddy currents will follow circular paths centred on the centre, O, of the disc. The emf acting in a circular path of radius r will, from Faraday’s law be given by
$$\varepsilon = \frac{d\Phi}{dt} = \pi r^2 \dot B$$
But if the current density is J around a path of radius r, we have
$$\varepsilon = \rho 2 \pi r J$$, in which $\rho$ is the resistivity.
So $$J = \frac{r \dot B}{2\rho}$$.
Say if this isn't clear.
Last edited: Aug 21, 2011
3. Aug 21, 2011
### mienaikoe
This Helps a Ton and it's very clear! thank you.
I'm still a little confused about how to derive the induced magnetic field. The current density that you've given is now a function of radius. Does this mean we can use Ampere's law to model the Magnetic Field as a function of radius?
$$J=\frac{r \dot B}{2\rho}$$
$$\oint B \cdot \partial L = \mu_0 I_{enc}$$
$$I_{enc} = \int J \cdot \partial A = J \pi r^2$$
$$\int \int_0^{2\pi} Br \cdot \partial \theta \cdot \partial r = \pi \mu_0 \int Jr^2 \cdot \partial r$$
Or is this just bad calculus?
4. Aug 21, 2011
### Philip Wood
Ampère's law
5. Aug 21, 2011
### Philip Wood
There are only a few cases where Ampère's law can be used to find B. These are cases in which there's enough symmetry for B to be effectively the same all along a particular integration path. Looking at your post, it seems that you don't have a particular path in mind. And the bad news is that for these circular eddy currents, as for a single circular loop of wire, there is no path along which B is constant. Ampère's law, beautiful though it is, can't help.
In fact the general problem of finding B at points in the vicinity of the disc, as for a circular loop, is very difficult. The only easy cases are for points on the axis of the disc, and, simplest of all, at its centre.
To find the field at the centre of the disc (of thickness b, say), think of it as made up of annuli of cross-sectional area b dr. Then the current in an annulus is Jbdr. But from the Biot-Savart law we know that the field at the centre of a ring carrying current I is $\mu$0I/2r. Using the J from my previous post, and integrating for a disc of radius a, I find
$$B_{ind} = \frac{\mu_0 ba \dot B}{4\rho}$$
A neat result, I thought. But I'm prone to slips...
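To put rough numbers in (my own illustrative values, not from the thread): for an aluminium disc with resistivity ρ ≈ 2.7×10⁻⁸ Ω·m, thickness b = 1 mm, radius a = 5 cm and a field ramping at 1 T/s, the formula gives an induced field at the centre of roughly half a millitesla.

```
mu0 = 4e-7 * 3.141592653589793   # vacuum permeability, H/m
rho = 2.7e-8                     # resistivity of aluminium, ohm*m (assumed)
b   = 1e-3                       # disc thickness, m (assumed)
a   = 5e-2                       # disc radius, m (assumed)
Bdot = 1.0                       # dB/dt, T/s (assumed)

B_ind = mu0 * b * a * Bdot / (4 * rho)
print(B_ind)   # ~5.8e-4 T, i.e. about 0.6 mT at the centre of the disc
```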
Last edited: Aug 21, 2011
6. Aug 21, 2011
### mienaikoe
this makes a good amount of sense. As for the overall behavior of the areas outside the center, though, is it safe to say that the induced B field is greatest at the center, falls to zero where the Magnetic Field Source ends, and then goes a bit negative before returning to zero?
(Positive being the direction dictated by Lenz's Law)
7. Aug 22, 2011
### Philip Wood
Hadn't thought of the field ending; was thinking of the whole disc being subjected to a uniform normal field. But if the field 'covered' only an inner part, DB, of the disc, I wouldn't expect the induced field to drop to zero at exactly the edge of DB. Disc annuli outside DB will still have emfs induced in them, because changing flux will still be linked with them. The emf will be
$$\varepsilon = \pi r_B^2 \dot B$$
in which rB is the radius of DB.
But beyond DB, the current density will fall because of the increasing value of 2$\pi$r, and at some point, I think that B will indeed drop to zero and then reverse, as would happen for an ordinary current-carrying loop.
Thanks for such an interesting question.
8. Aug 22, 2011
### mienaikoe
You've been a tremendous help. Thank you!
|
Worksheet on Direct Variation
This worksheet on direct variation word problems contains various types of questions to practice. Students can recall how to solve word problems on direct variation and then try to solve the worksheet on direct variation or direct proportion.
1. If 8 oranges cost $10.40, how many oranges can be bought for $33.80?
(a) 21
(b) 23
(c) 25
(d) 26
2. If 18 dolls cost $630, how many dolls can be bought for $455?
(a) 9
(b) 11
(c) 13
(d) 15
3. If a man earns $805 per week, in how many days will he earn $1840?
(a) 7 days
(b) 16 days
(c) 19 days
(d) 23 days
4. If a car covers 102 km in 6.8 litres of petrol, how much distance will it cover in 24.2 litres of petrol?
(a) 363 km
(b) 330 km
(c) 375 km
(d) 396 km
5. On a particular day, 200 US dollars are worth Rs 9666. On that day, how many dollars could be bought for Rs 5074.65?
(a) 105 US dollars
(b) 117 US dollars
(c) 127 US dollars
(d) 131 US dollars
6. If 5 men or 7 women earn $525 per day, how much would 7 men and 13 women earn per day?
(a) $1331
(b) $1816
(c) $1710
(d) $1041
7. The cost of 16 bags of washing powder, each weighing 1.5 kg, is $672. Find the cost of 18 bags of the same, each weighing 2 kg.
(a) $1008
(b) $1128
(c) $1338
(d) $1000
8. If 3 persons can weave 168 shawls in 14 days, how many shawls will be woven by 8 persons in 5 days?
(a) 153
(b) 189
(c) 127
(d) 160
9. If the cost of transporting 160 kg of goods for 125 km is Rs 60, what will be the cost of transporting 200 kg of goods for 400 km?
(a) $118
(b) $196
(c) $240
(d) $275
10. If the wages of 12 workers for 5 days are $7500, find the wages of 17 workers for 6 days.
(a) $10943
(b) $11057
(c) $12750
(d) $13473
Answers for the worksheet on direct variation are given below to check the exact answers of the questions.
Answers:
1. 26
2. 13
3. 16
4. 363 km
5. 105 US dollars
6. $1710
7. $1008
8. 160
9. $240
10. $12750
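Each of these problems reduces to a single cross-multiplication; a tiny Python helper (my own addition, not part of the worksheet), checked against question 1:

```
def direct_variation(x1, y1, x2):
    """Given that y varies directly with x and (x1, y1) is known, find y at x2."""
    return y1 * x2 / x1

# Question 1: 8 oranges cost $10.40; how many oranges for $33.80?
print(direct_variation(10.40, 8, 33.80))   # 26.0, matching answer (d)
```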
|
# zbMATH — the first resource for mathematics
Simple distributed $$\Delta+1$$-coloring of graphs. (English) Zbl 1002.68202
Summary: A very natural randomized algorithm for distributed vertex coloring of graphs is analyzed. Under the assumption that the random choices of processors are mutually independent, the execution time will be $$O(\log n)$$ rounds almost always. A small modification of the algorithm is also proposed.
##### MSC:
68W20 Randomized algorithms
05C85 Graph algorithms (graph-theoretic aspects)
68R10 Graph theory (including graph drawing) in computer science
05C15 Coloring of graphs and hypergraphs
##### Keywords:
distributed vertex coloring of graphs
Full Text:
|
# find a and b using the information given
I have been presented with the following question :
The polynomial
$$f(x) = x^3 - 2x^2 +ax + b$$
satisfies the following :
a) It is divisible by (x-1)
b) it leaves a remainder of -24 when divided by (x+3)
I have no idea where to begin with this; I have tried long division but I get stuck when ax becomes the dividend. I have also tried substitution and rearranging the equation but I'm not getting anywhere. If anyone could show me how to complete this question, or shed any light on the steps, I would be extremely grateful.
So the statement that $f(x)$ is divisible by $(x-1)$ can also be restated as
$$f(x) = (x-1)*\text{something}$$
$$f(1) = (1-1)*\text{something} = 0*\text{something} = 0$$
So we have that
$$f(1) = 1^3 - 2*1^2 +a*1 + b = a+b-1 = 0$$
Now the other condition is that $f(x)$ leaves remainder $-24$ when divided by $x+3$. That is to say $f(x) +24$ is divisible by $x+3$ meaning
$$f(-3) + 24 = 0 \rightarrow (-3)^3-2(-3)^2-3a + b + 24 =0 \rightarrow$$
$$-27-18-3a +b+24 =0 \rightarrow -21 -3a + b = 0$$
So now we have a system of two equations in two unknowns:
$$a+b-1 = 0 \\ -21 -3a + b = 0$$
We solve:
(constants on right side) $$a+b = 1 \\ -3a + b = 21$$
(adding top to bottom 3 times) $$a +b = 1 \\ 4b = 24 \rightarrow b = 6$$
(substituting $b$ into top) $$a = -5$$
So $x^3 - 2x^2 - 5x + 6$ is your final answer.
• I made a silly mistake somewhere here. Disregard the later calcuations, i'm going to fix it – frogeyedpeas May 15 '16 at 22:57
• got it! now go ahead and read it through, ask away for any parts that don't make sense – frogeyedpeas May 15 '16 at 23:00
• Thank you very much for the breakdown on a step by step level, this makes sense. I need to go back and revise simultaneous equations, its been a while ! – Flewitt Connor May 15 '16 at 23:35
What you're given is
$$(a)\;\; f(1)=0\;,\;\;\;(b)\;\;f(-3)=-24$$
Observe you get two equations for $\;a,b\;$ :
$$a+b=1\\-3a+b=21$$
Solve the easy system now.
• Thank you I see now how remainder -24 simply means = -24 instead of =0. this has certainly helped – Flewitt Connor May 15 '16 at 23:43
Hint:
Every polynomial $f(x)$ divided by a first-degree polynomial of the form $x-\rho$ satisfies: $$f(x) = (x-\rho) \cdot \pi(x) + r,$$ where $r$ is the remainder of the division. Now, what is $f(1)$ and what is $f(-3)$?
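For completeness, the values found above are easy to verify with SymPy (my own addition, not part of the answers):

```
import sympy as sp

x = sp.Symbol('x')
f = x**3 - 2*x**2 - 5*x + 6        # a = -5, b = 6 from the accepted answer

print(sp.rem(f, x - 1))            # 0   -> divisible by (x - 1)
print(sp.rem(f, x + 3))            # -24 -> remainder -24 when divided by (x + 3)
```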
|
# Roaming on Nitro Network
## Roaming Architecture
### Network Reference Model
Figure 1 and Figure 2 show the Network Reference Model (NRM) for the LoRaWAN architecture.
Figure 1: LoRaWan Network Reference Model (NRM), End-Device at Home
### End-Device
The end-device is a sensor or an actuator. The end-device is wirelessly connected to a LoRaWAN network through Nitro ION Miners. The application layer of the end-device is connected to a specific Application Server in the cloud. All application layer payloads of this end-device are routed to its corresponding Application Server.
|
# How do you find the 7th term in the expansion of the binomial (7x+2y)^15?
Aug 18, 2017
$7$th term is $15 {C}_{6} \cdot {\left(7 x\right)}^{9} \cdot {\left(2 y\right)}^{6} \approx 1.29 \times 10^{13} \cdot {x}^{9} \cdot {y}^{6}$
#### Explanation:
Binomial theorem:
${\left(a + b\right)}^{n} = \left(n {C}_{0}\right) {a}^{n} {b}^{0} + \left(n {C}_{1}\right) {a}^{n - 1} {b}^{1} + \left(n {C}_{2}\right) {a}^{n - 2} {b}^{2} + \ldots \ldots \left(n {C}_{n}\right) {b}^{n}$
Here $a = 7x$, $b = 2y$, $n = 15$.
We know $n{C}_r = {n!\over r!(n-r)!}$ $\therefore$ $15{C}_6 = 5005$
$7$th term is $15 {C}_{6} \cdot {\left(7 x\right)}^{9} \cdot {\left(2 y\right)}^{6} \approx 1.29 \times 10^{13} \cdot {x}^{9} \cdot {y}^{6}$ [Ans]
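A quick check of the arithmetic (my addition):

```
from math import comb

print(comb(15, 6))                  # 5005
print(comb(15, 6) * 7**9 * 2**6)    # 12926067394240, about 1.29 x 10^13
```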
|
# Munging fixed-width formats
## Introduction
Large data sets, such as publicly available US Government data, are often made available as fixed-width format (FWF) text files. At first glance, each record looks like a string of characters. A data dictionary or another file is usually provided that describes which columns or substrings represent each variable in the data set. This format requires some careful coding to read, to ensure the correctness of the mapping from data to each variable.
Our current project involves the Healthcare Cost and Utilization Project (HCUP) and their National Inpatient Sample (NIS). We need to munge data from their Core database to obtain the volume of discharges with particular DRG codes, and then classify hospitals by volume. The HCUP provides load files for their data in SAS, SPSS and Stata. No open-source software is supported, but the data is just ASCII data; how hard can this be?
For this post, I will consider the 2008 NIS Core data and examples from it. I will not provide the data (as it is subject to a data use agreement with HCUP) or the exact analysis we are doing, but provide surrogate numbers to reflect the process.
## Figuring out formats
Well, it's probably not hard, but it might be tedious. The NIS Core data for 2008 has 128 fields, so we need variable names, widths (how many characters for each field) and variable types for each of these. I don't know about you, but having to type this out would be a barrier for me to do anything with it.
Still, I don't want to fork over thousands of dollars for supported software. But maybe I can take advantage of code that HCUP provides? Looking at the load files, I figured that the Stata code seemed to have the cleanest format. Why not just extract the needed information from the load file? With Python, it shouldn't be too hard. I cooked up the following code (which might give my friends who are professional Python coders the shivers):
In [1]:
import re
import pandas as pd
import numpy as np
def parseStataCode(statafile):
    with open(statafile, 'r') as f:
        statacode = f.readlines()   # the whole Stata load file, line by line
    # the column specification sits between the 'infix' and 'using' keywords
    indStart = min([i for i in range(len(statacode)) if statacode[i].find('infix') > -1])
    indEnd = min([i for i in range(len(statacode)) if statacode[i].find('using') > -1])
    coldata = statacode[indStart:indEnd]
    coldata[0] = coldata[0].replace('infix', '')
    # starting column of each variable, plus the last column overall
    startcol = [re.findall(r' (\d+)', x)[0] for x in coldata]
    maxString = re.findall(r'(\d+)\-*(\d*)', coldata[-1])[0][-1]
    startcol.append(maxString)
    startcol = np.array([int(x) for x in startcol])
    widths = np.diff(startcol)
    varnames = [re.findall(r'[A-Z]\w+', x)[0] for x in coldata]
    out = pd.DataFrame({'varnames': varnames, 'widths': widths})
    return out
So, we can get the information we need from the Stata load file by running
In [2]:
wt = parseStataCode('StataLoad_NIS_2008_Core.Do')
We could add to the function if we also needed the missing data coding that is provided, but for the current project, we don't need those.
## Ingesting data
The NIS data is provided as zipped text files. One advantage of Python is the zipfile module, which allows us to access the data without explicitly uncompressing the file. Yes, we'll take a computational hit in exchange for not needing the extra storage, but for this exercise that's probably fine. Also, for some of the years, we found that there were some problems with the compression metadata that made WinZip fail, and the Python method worked beautifully for that.
For 2010 and after, HCUP encrypted the data with AES-256 encryption. Currently the zipfile module cannot handle this, so we used 7-Zip via the subprocess module.
We will use the struct module to parse the FWF files, following the advice given on StackOverflow. For this, we need to specify the data format, in terms of widths, in a format string, in order to parse the data.
In [3]:
def defineFormat(wt, keep=['DRG', 'HOSPID']):
    widths = list(wt.widths)
    varnames = list(wt.varnames)
    fieldwidths = [-x for x in widths]           # negative width => skip the field
    for s in keep:
        fieldwidths[varnames.index(s)] *= -1     # keep these fields as strings
    fmtstring = ' '.join('{}{}'.format(abs(fw), 'x' if fw < 0 else 's') for fw in fieldwidths)
    return fmtstring
defineFormat(wt)
Out[3]:
'3x 3x 2x 2x 1x 3x 2x 2x 2x 11x 2x 2x 2x 2x 3s 3x 2x 3x 17x 5x 5x 5x 5x 5x 5x 5x 5x 5x 5x 5x 5x 5x 5x 5x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 5x 5x 5x 5x 2x 4x 4x 4x 4x 2x 3x 2x 5s 2x 14x 5x 6x 2x 2x 5x 5x 2x 2x 3x 2x 4x 2x 2x 2x 10x 2x 10x 3x 4x 4x 4x 4x 4x 4x 4x 4x 4x 4x 4x 4x 4x 4x 4x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 3x 1x 8x 2x 10x 15x 2x 4x 1x'
This allows us to mask variables we don't need (all the x's), and extract variables we do need as strings.
We can then define a parser using this format:
In [4]:
import struct
fieldstruct = struct.Struct(defineFormat(wt))
parse = fieldstruct.unpack_from
We can now extract data from the compressed data file. We will also filter the data to only keep data with DRG code 500.
In [5]:
import zipfile
dat = []
drgcodes = ['500']
with zipfile.ZipFile('NIS_2008_Core.exe', 'r') as zf:
    with zf.open("NIS_2008_Core.ASC", 'r') as f:
        for line in f:
            x = list(parse(line))
            if x[0] in drgcodes:
                dat.append(x)
dat = pd.DataFrame(dat, columns=['DRG', 'HOSPID'])
We now have to compute the volumes for each hospital, categorize them as Low, Medium and High, and then find the frequency of each category.
In [6]:
def categorize(s, bins=[0, 10, 20, 5000], labels=['Low', 'Medium', 'High']):
    """
    Split into categories
    """
    out = pd.cut(s, bins=bins, labels=labels)
    return out
volumes = categorize(dat.HOSPID.value_counts())
freq = volumes.value_counts()
freq
Out[6]:
Low 365
Medium 10
High 1
dtype: int64
In [7]:
%matplotlib inline
freq.plot(kind='bar',rot=30);
|
# American Institute of Mathematical Sciences
August 2004, 4(3): 687-694. doi: 10.3934/dcdsb.2004.4.687
## Is there a sigmoid growth of Gause's Paramecium caudatum in constant environment
1 The National Laboratory of Integrated Management of Insect and Rodent Pests in Agriculture, Institute of Zoology, Chinese Academy of Sciences Beijing 100080
Received November 2002 Revised December 2003 Published May 2004
Gause's experiments on Paramecium caudatum have been regarded as some of the most accurate experiments in ecology. Although it has been hypothesized by ecologists that the population dynamics can be approximated by the classical sigmoid curve, there are still some questions as to whether the analytical method is accurate enough in relation to experimental data. Therefore, analytical results are frequently met with doubt. In this study, we estimated some growth parameters based strictly on the life history of Paramecium caudatum and with a more flexible logistic model. Since the intrinsic growth rate values fell in different regions, the population dynamics were considered to follow a complex pattern.
Citation: Dianmo Li, Zufei Ma, Baoyu Xie. Is there a sigmoid growth of Gause's Paramecium caudatum in constant environment. Discrete & Continuous Dynamical Systems - B, 2004, 4 (3) : 687-694. doi: 10.3934/dcdsb.2004.4.687
|
Showing items 1-20 of 32
• #### $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Peer reviewed; Journal article, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ...
• #### Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Peer reviewed; Journal article, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a ...
• #### Centrality dependence of inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Peer reviewed; Journal article, 2015-11)
We present a measurement of inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV as a function of the centrality of the collision, as estimated from the energy deposited in the Zero Degree ...
• #### Centrality dependence of particle production in p-Pb collisions at $\sqrt{s_{\rm NN} }$= 5.02 TeV
(Peer reviewed; Journal article, 2015-06)
We report measurements of the primary charged particle pseudorapidity density and transverse momentum distributions in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, and investigate their correlation with experimental ...
• #### Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Peer reviewed; Journal article, 2017-09)
We present the charged-particle pseudorapidity density in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02\,\mathrm{Te\kern-.25exV}$ in centrality classes measured by ALICE. The measurement covers a wide pseudorapidity ...
• #### Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(Peer reviewed; Journal article, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
• #### Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Peer reviewed; Journal article, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 ...
• #### Coherent $\psi(2S)$ photo-production in ultra-peripheral Pb-Pb collisions at $\sqrt{s_{\rm NN}}$= 2.76 TeV
(Peer reviewed; Journal article, 2015-12)
The ALICE Collaboration has performed the first measurement of the coherent $\psi(2S)$ photo-production cross section in ultra-peripheral Pb-Pb collisions at the LHC. This charmonium excited state is reconstructed via ...
• #### Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Peer reviewed; Journal article, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
• #### Differential studies of inclusive J/$\psi$ and $\psi$(2S) production at forward rapidity in Pb-Pb collisions at $\mathbf{\sqrt{{\textit s}_{_{NN}}}}$ = 2.76 TeV
(Peer reviewed; Journal article, 2016-05)
• #### Measurement of charged jet production cross sections and nuclear modification in p-Pb collisions at $\sqrt{s_\rm{NN}} = 5.02$ TeV
(Peer reviewed; Journal article, 2015-10)
Charged jet production cross sections in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with the ALICE detector at the LHC are presented. Using the anti-$k_{\rm T}$ algorithm, jets have been reconstructed ...
• #### Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Peer reviewed; Journal article, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
• #### Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Peer reviewed; Journal article, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are ...
|
# Finding more details about a triangle using the given details.
In the triangle $ABC$ we have
$\tan{\frac{A}{2}}=\frac{1}{3}$
$b+c=3a$
Specify which of the following answers is correct:
$a) m(\angle B)=\frac{\pi}{2}$ or $m(\angle C)=\frac{\pi}{2}$
$b) m(\angle A)=m(\angle B)$
$c) m(\angle A)=\frac{\pi}{2}$
$d) m(\angle B)=\frac{\pi}{4}$ or $m(\angle C)=\frac{\pi}{4}$
$e) m(\angle A)=m(\angle C)$
$f) m(\angle A)=\frac{\pi}{3}$
I'm lost here. I don't know how I can use $\tan{\frac{A}{2}}=\frac{1}{3}$ so I get to an answer. Can someone give me a solution? I have many exercises involving the $\tan{\frac{A}{2}}$ function and I can't continue.
Thank you very much!
-
What if the question said $\tan x = \frac{1}{3}$? What are the possible values for $x$? – Code-Guru Jul 18 '12 at 19:29
I'm confused about that. I don't know how to use $\tan\frac{A}{2}$. – Grozav Alex Ioan Jul 18 '12 at 19:31
What I'm saying is ignore the $\frac{A}{2}$ and just look at the $\tan$ function. What angles have a tangent value of $\frac{1}{3}$? – Code-Guru Jul 18 '12 at 19:33
Another approach is to evaluate $\tan x$ where $x$ is any of the values given in the choices involving the angle $A$. For example, what is $\tan \frac{\pi}{2}$? – Code-Guru Jul 18 '12 at 19:36
I will assume that in this multiple choice question only one of the answers can be correct. If we do not assume that, there is additional work to do.
It is handy but not necessary to know the formula $$\tan(x+y)=\frac{\tan x+\tan y}{1-\tan x\tan y}.$$ Putting $x=y=\frac{A}{2}$, after a while we get $\tan A =\frac{3}{4}$.
This is what we get with the good old $3$-$4$-$5$ right triangle with sides $3k$, $4k$, $5k$. And by a miracle we have in that case that $4k+5k=3(3k)$. So the second condition $b+c=3a$ is met.
So a right angle at $B$ or $C$ will do the job. If there is a unique answer, that answer is a).
Remark: We could get there with a calculator. Use it to find (approximately) $A/2$, and then $A$. Not very informative. But then we may get the lucky idea of computing $\tan A$, (or $\sin A$, or $\cos A$.) The calculator gives $0.75$, (or $0.6$, or $0.8$) and then we recognize the familiar triangle.
If we are not allowed to assume a unique answer to the multiple choice question, we need to rule out the other possibilities. Some are very quick to rule out. The possibilities b), d), and e) take a little longer.
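A short numerical confirmation of the reasoning above (my own addition): the double-angle identity gives $\tan A = 3/4$, and a $3$-$4$-$5$ right triangle with $a = 3k$ indeed satisfies $b+c=3a$.

```
from math import atan, tan, isclose

half_A = atan(1/3)
print(tan(2*half_A))          # ~0.75, i.e. tan A = 3/4

# 3-4-5 right triangle: a = 3 opposite A, b = 4, c = 5 (right angle opposite c)
a, b, c = 3.0, 4.0, 5.0
print(isclose(b + c, 3*a))    # True
```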
-
Thank you very much! Helped me solve the rest of the problems too. Didn't think about plugging in $x=y=\frac{A}{2}$. – Grozav Alex Ioan Jul 19 '12 at 6:34
@GrozavAlexIoan: It is good to hear that the idea helped you solve other problems. – André Nicolas Jul 19 '12 at 6:40
|
# Initial Stages 2021
Jan 10 – 15, 2021
Weizmann Institute of Science
Asia/Jerusalem timezone
## Non-perturbative renormalization of the average color charge and multi-point correlators of color charge from a non-Gaussian small-x action
Jan 11, 2021, 7:40 PM
1h 30m
Patio (vDLCC)
### Patio
#### vDLCC
bullet talk (poster) Physics at low-x and gluon saturation
André Giannini
### Description
The McLerran-Venugopalan (MV) model is a Gaussian effective theory of color charge fluctuations at small-$x$ in the limit of large valence charge density, i.e. a large nucleus made of uncorrelated color charges. In this work, we explore the effects of the first non-trivial (even C-parity) non-Gaussian correction on the color charge density to the MV model in SU(2) and SU(3) color groups in the non-perturbative regime. We also compare our results to existing perturbative ones on a lattice setup, where multi-point correlators of color charges can be computed for fixed configurations. We investigate three different choices for the renormalization of the couplings figuring in the non-Gaussian small-$x$ action and find that one of them allows one to control the deviations from the MV model as one approaches the continuum, while the other two lead to a scenario where the small-$x$ action evolves towards a critical theory dominated by strong non-Gaussian fluctuations regardless of the system size.
### Co-author
Yasushi Nara (Akita International University)
### Presentation materials
AVGiannini_BulletTalk_IS2021.mp4 AVGiannini_BulletTalk_IS2021_slides.pdf
|
# Solve Engineering Problems with Laplace Transforms
Laplace transforms are integral mathematical transforms widely used in physics and engineering. In this 21st article in the series on mathematics in open source, the author demonstrates Laplace transforms through Maxima.
In higher mathematics, transforms play an important role. A transform is mathematical logic to transform or convert a mathematical expression into another mathematical expression, typically from one domain to another. Laplace and Fourier are two very common examples, transforming from the time domain to the frequency domain. In general, such transforms have their corresponding inverse transforms. And this combination of direct and inverse transforms is very powerful in solving many real life engineering problems. The focus of this article is Laplace and its inverse transform, along with some problem-solving insights.
The Laplace transform
Mathematically, the Laplace transform F(s) of a function f(t) is defined as follows:
F(s) = ∫₀^∞ ƒ(t) * exp(-s*t) dt
where t represents time and s represents complex angular frequency.
To demonstrate it, let's take a simple example of f(t) = 1. Substituting and integrating, we get F(s) = 1/s. Maxima has the function laplace() to do the same. In fact, with that, we can choose to let our variables t and s be anything else as well. But, as per our mathematical notations, preserving them as t and s would be the most appropriate. Let's start with some basic Laplace transforms. (Note that string() has been used to just flatten the expression.)
```$ maxima -q
(%i1) string(laplace(1, t, s));
(%o1) 1/s
(%i2) string(laplace(t, t, s));
(%o2) 1/s^2
(%i3) string(laplace(t^2, t, s));
(%o3) 2/s^3
(%i4) string(laplace(t+1, t, s));
(%o4) 1/s+1/s^2
(%i5) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero?
p; /* Our input */
(%o5) gamma(n+1)*s^(-n-1)
(%i6) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero?
n; /* Our input */
(%o6) gamma_incomplete(n+1,0)*s^(-n-1)
(%i7) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero?
z; /* Our input, making it non-solvable */
(%o7) laplace(t^n,t,s)
(%i8) string(laplace(1/t, t, s)); /* Non-solvable */
(%o8) laplace(1/t,t,s)
(%i9) string(laplace(1/t^2, t, s)); /* Non-solvable */
(%o9) laplace(1/t^2,t,s)
(%i10) quit();```
In the above examples, the expression is preserved as is, in case of non-solvability.
laplace() is designed to understand various symbolic functions, such as sin(), cos(), sinh(), cosh(), log(), exp(), delta() and erf(). delta() is the Dirac delta function, and erf() is the error function; the others are the usual mathematical functions.
```$ maxima -q
(%i1) string(laplace(sin(t), t, s));
(%o1) 1/(s^2+1)
(%i2) string(laplace(sin(w*t), t, s));
(%o2) w/(w^2+s^2)
(%i3) string(laplace(cos(t), t, s));
(%o3) s/(s^2+1)
(%i4) string(laplace(cos(w*t), t, s));
(%o4) s/(w^2+s^2)
(%i5) string(laplace(sinh(t), t, s));
(%o5) 1/(s^2-1)
(%i6) string(laplace(sinh(w*t), t, s));
(%o6) -w/(w^2-s^2)
(%i7) string(laplace(cosh(t), t, s));
(%o7) s/(s^2-1)
(%i8) string(laplace(cosh(w*t), t, s));
(%o8) -s/(w^2-s^2)
(%i9) string(laplace(log(t), t, s));
(%o9) (-log(s)-%gamma)/s
(%i10) string(laplace(exp(t), t, s));
(%o10) 1/(s-1)
(%i11) string(laplace(delta(t), t, s));
(%o11) 1
(%i12) string(laplace(erf(t), t, s));
(%o12) %e^(s^2/4)*(1-erf(s/2))/s
(%i13) quit();```
Interpreting the transform
A Laplace transform is typically a fractional expression consisting of a numerator and a denominator. Solving the denominator, by equating it to zero, gives the various complex frequencies associated with the original function. These are called the poles of the function. For example, the Laplace transform of sin(w * t) is w/(s^2 + w^2), where the denominator is s^2 + w^2. Equating that to zero and solving it gives the complex frequencies s = +iw, -iw, thus indicating that the frequency of the original expression sin(w * t) is w, which indeed it is. Here are a few demonstrations of the same:
```$ maxima -q
(%i1) string(laplace(sin(w*t), t, s));
(%o1) w/(w^2+s^2)
(%i2) string(denom(laplace(sin(w*t), t, s))); /* The Denominator */
(%o2) w^2+s^2
(%i3) string(solve(denom(laplace(sin(w*t), t, s)), s)); /* The Poles */
(%o3) [s = -%i*w,s = %i*w]
(%i4) string(solve(denom(laplace(sinh(w*t), t, s)), s));
(%o4) [s = -w,s = w]
(%i5) string(solve(denom(laplace(cos(w*t), t, s)), s));
(%o5) [s = -%i*w,s = %i*w]
(%i6) string(solve(denom(laplace(cosh(w*t), t, s)), s));
(%o6) [s = -w,s = w]
(%i7) string(solve(denom(laplace(exp(w*t), t, s)), s));
(%o7) [s = w]
(%i8) string(solve(denom(laplace(log(w*t), t, s)), s));
(%o8) [s = 0]
(%i9) string(solve(denom(laplace(delta(w*t), t, s)), s));
(%o9) []
(%i10) string(solve(denom(laplace(erf(w*t), t, s)), s));
(%o10) [s = 0]
(%i11) quit();```
Involved Laplace transforms
laplace() also understands derivative() / diff(), integrate(), sum(), and ilt() – the inverse Laplace transform. Here are some interesting transforms showing the same:
```$ maxima -q
(%i1) laplace(f(t), t, s);
(%o1) laplace(f(t), t, s)
(%i2) string(laplace(derivative(f(t), t), t, s));
(%o2) s*laplace(f(t),t,s)-f(0)
(%i3) string(laplace(integrate(f(x), x, 0, t), t, s));
(%o3) laplace(f(t),t,s)/s
(%i4) string(laplace(derivative(sin(t), t), t, s));
(%o4) s/(s^2+1)
(%i5) string(laplace(integrate(sin(t), t), t, s));
(%o5) -s/(s^2+1)
(%i6) string(sum(t^i, i, 0, 5));
(%o6) t^5+t^4+t^3+t^2+t+1
(%i7) string(laplace(sum(t^i, i, 0, 5), t, s));
(%o7) 1/s+1/s^2+2/s^3+6/s^4+24/s^5+120/s^6
(%i8) string(laplace(ilt(1/s, s, t), t, s));
(%o8) 1/s
(%i9) quit();```
Note the usage of ilt() – the inverse Laplace transform – in %i8 of the above example. Calling laplace() and ilt() one after the other cancels their effect; that is what is meant by inverse. Let's look into some common inverse Laplace transforms.
Inverse Laplace transforms
```$ maxima -q
(%i1) string(ilt(1/s, s, t));
(%o1) 1
(%i2) string(ilt(1/s^2, s, t));
(%o2) t
(%i3) string(ilt(1/s^3, s, t));
(%o3) t^2/2
(%i4) string(ilt(1/s^4, s, t));
(%o4) t^3/6
(%i5) string(ilt(1/s^5, s, t));
(%o5) t^4/24
(%i6) string(ilt(1/s^10, s, t));
(%o6) t^9/362880
(%i7) string(ilt(1/s^100, s, t));
(%o7) t^99/933262154439441526816992388562667004907159682643816214685929638952175999932299156089414639761565182862536979208272237582511852109168640000000000000000000000
(%i8) string(ilt(1/(s-a), s, t));
(%o8) %e^(a*t)
(%i9) string(ilt(1/(s^2-a^2), s, t));
(%o9) %e^(a*t)/(2*a)-%e^-(a*t)/(2*a)
(%i10) string(ilt(s/(s^2-a^2), s, t));
(%o10) %e^(a*t)/2+%e^-(a*t)/2
(%i11) string(ilt(1/(s^2+a^2), s, t));
Is a zero or nonzero?
n; /* Our input */
(%o11) sin(a*t)/a
(%i12) string(ilt(s/(s^2+a^2), s, t));
Is a zero or nonzero?
n; /* Our input */
(%o12) cos(a*t)
(%i13) assume(a < 0) or assume(a > 0)$
(%i14) string(ilt(1/(s^2+a^2), s, t));
(%o14) sin(a*t)/a
(%i15) string(ilt(s/(s^2+a^2), s, t));
(%o15) cos(a*t)
(%i16) string(ilt((s^2+s+1)/(s^3+s^2+s+1), s, t));
(%o16) sin(t)/2+cos(t)/2+%e^-t/2
(%i17) string(laplace(sin(t)/2+cos(t)/2+%e^-t/2, t, s));
(%o17) s/(2*(s^2+1))+1/(2*(s^2+1))+1/(2*(s+1))
(%i18) string(rat(laplace(sin(t)/2+cos(t)/2+%e^-t/2, t, s)));
(%o18) (s^2+s+1)/(s^3+s^2+s+1)
(%i19) quit();```
Observe that if we take the Laplace transform of the above %o outputs, they would give back the expressions which were input to ilt() in the corresponding %i lines. %i18 specifically shows one such example: it does laplace() of the output at %o16, giving back the expression that was input to ilt() at %i16.
Solving differential and integral equations
Now, with these insights, we can easily solve many interesting and otherwise complex problems. One of them is solving differential equations. Let's explore a simple example of solving f'(t) + f(t) = e^t, where f(0) = 0. First, let's take the Laplace transform of the equation. Then substitute the value for f(0), and simplify to obtain the Laplace transform of f(t), i.e., F(s). Finally, compute the inverse Laplace transform of F(s) to get the solution for f(t).
```$ maxima -q
(%i1) string(laplace(diff(f(t), t) + f(t) = exp(t), t, s));
(%o1) s*laplace(f(t),t,s)+laplace(f(t),t,s)-f(0) = 1/(s-1)```
Substituting f(0) as 0, and then simplifying, we get laplace(f(t),t,s) = 1/((s-1)*(s+1)), for which we do an inverse Laplace transform:
```(%i2) string(ilt(1/((s-1)*(s+1)), s, t));
(%o2) %e^t/2-%e^-t/2
(%i3) quit();```
That gives us f(t) = (e^t - e^-t) / 2, i.e., sinh(t), which definitely satisfies the given differential equation.
Similarly, we can solve equations with integrals. And not just integrals, but also equations with both differentials and integrals. Such equations come up very often when solving problems linked to electrical circuits with resistors, capacitors and inductors. Let's again look at a simple example that demonstrates the fact. Let's assume we have a 1 ohm resistor, a 1 farad capacitor, and a 1 henry inductor in series being powered by a sinusoidal voltage source of frequency w. What would be the current in the circuit, assuming it to be zero at t = 0? It would yield the following equation: R * i(t) + 1/C * ∫ i(t) dt + L * di(t)/dt = sin(w*t), where R = 1, C = 1, L = 1.
So, the equation can be simplified to i(t) + ∫ i(t) dt + di(t)/dt = sin(w*t). Now, following the procedure described above, let's carry out the following steps:
```$ maxima -q
(%i1) string(laplace(i(t) + integrate(i(x), x, 0, t) + diff(i(t), t) = sin(w*t), t, s));
(%o1) s*laplace(i(t),t,s)+laplace(i(t),t,s)/s+laplace(i(t),t,s)-i(0) = w/(w^2+s^2)```
Substituting i(0) as 0, and simplifying, we get laplace(i(t), t, s) = w/((w^2+s^2)*(s+1/s+1)). Solving that by inverse Laplace transform, we very easily get the complex expression for i(t) as follows:
```(%i2) string(ilt(w/((w^2+s^2)*(s+1/s+1)), s, t));
Is w zero or nonzero?```
```n; /* Our input: Non-zero frequency */
(%o2) w^2*sin(t*w)/(w^4-w^2+1)-(w^3-w)*cos(t*w)/(w^4-w^2+1)+%e^-(t/2)*(sin(sqrt(3)*t/2)*(-(w^3-w)/(w^4-w^2+1)-2*w/(w^4-w^2+1))/sqrt(3)+cos(sqrt(3)*t/2)*(w^3-w)/(w^4-w^2+1))
(%i3) quit();```
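As an independent cross-check of that last result (my own addition, using Python/SymPy rather than Maxima): differentiating the integro-differential equation once turns it into the second-order ODE i'' + i' + i = w*cos(w*t) with i(0) = 0 and i'(0) = 0, which SymPy solves directly. For a sample frequency w = 2, the steady-state coefficients match the w^2/(w^4-w^2+1) and (w^3-w)/(w^4-w^2+1) terms in the Maxima output above.

```
import sympy as sp

t = sp.symbols('t')
i = sp.Function('i')
w = 2   # concrete frequency chosen for the check (assumed value)

ode = sp.Eq(i(t).diff(t, 2) + i(t).diff(t) + i(t), w*sp.cos(w*t))
sol = sp.dsolve(ode, i(t), ics={i(0): 0, i(t).diff(t).subs(t, 0): 0})
print(sp.simplify(sol.rhs))
# -> 4*sin(2*t)/13 - 6*cos(2*t)/13 + exp(-t/2)*(...transient...)
```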
|
## ClearEdge Power and PNNL in $2.8M project to test fuel-cell combined heat and power
##### 15 June 2011
ClearEdge Power and the Department of Energy’s Pacific Northwest National Laboratory (PNNL) are teaming in a $2.8-million combined industry and government award to test fuel-cell-based combined heat and power systems. ClearEdge will install its ClearEdge5 system at 10 different businesses in California and Oregon, while PNNL will monitor the systems and measure the energy savings.
The DOE share is around $1.2 million and the industry share (ClearEdge and their partners) is around$1.6 million. The federal portion of funding for the award was provided by DOE’s Office of Energy Efficiency and Renewable Energy – Fuel Cell Technologies Program.
Combined heat and power fuel cell systems can help smaller commercial buildings with high energy demands reap significant savings in energy cost and use. We anticipate that this type of a system could reduce the fuel costs and carbon footprint of a commercial building by approximately 40 percent, compared with conventional electricity and heat use.
—Mike Rinker, the research program manager at PNNL
The ClearEdge5 system is a little larger than a typical home’s refrigerator and is fueled by natural gas from existing, conventional pipelines. A Fuel Processor in the system reforms the natural gas into ultra-clean hydrogen through a catalytic process. ClearEdge uses a PBI-based PEM fuel cell to convert the hydrogen to electricity. This PEM operates at around 160 °C, which makes it a relatively low-temperature fuel cell compared to a Solid Oxide Fuel Cell (SOFC), which runs between 600 °C and 1,000 °C.
Once the hydrogen is processed through a Fuel Cell Stack, it creates direct current (DC) power and heat. The Power Conditioning Unit converts the DC electricity into alternating current (AC), which ties directly to a facility’s main electrical panel. The heat produced by the fuel cell is transferred to the building through the use of a hydronic system or a heat exchanger, supplying a continuous source of heating for domestic hot water as well as space or radiant heating.
Excess electricity produced, but not consumed by the building, is then sold back to a local utility company. While the ClearEdge5 is not currently grid independent, future systems are being designed to operate during a grid outage, giving companies a continuous power advantage.
Each ClearEdge5 unit will have a high-speed Internet data feed, allowing researchers at PNNL continuous access to analyze each installation’s performance. PNNL will independently verify and analyze the engineering, economic and environmental performance and carbon footprint of these systems during the next five years. Then PNNL will provide its analysis in a report to DOE’s Fuel Cell Technologies Program.
Cogeneration where natural gas is being burned is the quickest to implement and a very low cost way of reducing CO2 release. It is far cheaper and simpler to implement and operate than ethanol from corn per unit of CO2 not released.
Buildings and water can be cooled as well as electricity produced from the waste heat produced in cogeneration processes. No commercial steam power plants should be built that use natural gas, but the gas should be used in cogeneration systems in buildings.
Climate Energy and others even sell cogeneration systems for homes. One of the strangest one is the steam OTAG LION.
Capstone turbines makes units for large buildings as does the Carrier division of UTC. These units are cheaper and probably as efficient as the fuel cell units.
Because natural gas pipes, microturbines, computer control, power semiconductors, and long life batteries exist, there is no economic reason to connect a large or some small buildings to the electric grid unless it is to feed power into the grid. The first cost of cogeneration systems is high but lower operational costs repay that extra expense. ..HG..
PEM operates around 160 °C
When an HTPEM operates above water boiling point the balance of system components are reduced making it simpler and cost less. The CHP idea is a good one and now that they operate at this temperature they can use absorption cooling instead of air conditioning that uses electricity.
Henry you and I are on the same plane. I am proposing CHP in large and small industry and residential. The Japanese have gone far to develop this technology - though it's still in its infancy.
Large buildings like the Bank of America tower in NYC produce up to 70% of their own electricity via a 4.6 MW cogen unit. And they cool the air with chiller ice and use excess heat for area and hot water heating.
http://www.solaripedia.com/13/173/1728/bank_of_america_tower_cogeneration_diagram.html
CHP is a stopgap, but a very worthwhile one.
The comments to this entry are closed.
|
The expression: Nz(DMax(“[PONum]”,”tblPO”),0)+1 will check if a PONum already exists. If it doesn’t it returns a 1, if it does it returns the number incremented by 1. If the number exists, but is 0 it will return a 1. In my blog I advise that number should NOT be generated until the user is ready to save the record. And to immediately commit the record after generating the number. Therefore, there should be no issue about giving them a new number if they go back to it.
An InDesign document can only have one chapter, and these chapters are typically combined in an InDesign book. To insert a chapter number, create a text frame where you want the chapter number to appear on either a document or master page. Click on the "Type" menu, then "Text Variables," "Insert Text Variable" and then "Chapter Number." Update the chapter number if necessary to keep your chapter numbers consecutive by clicking on "Numbering & Section Options" in the Layout menu.
2. Yes, The code should be entered using CodeBuilder. Where you enter it depends on how and when you want to trigger the generation of the next number. If you want to use a button, that works. And no, you don’t use 000 in the NZ() function. If you want to DISPLAY at least 3 digits with leading zeros, then you do that in the Format function. Note, though, you will need to change that when you hit 1000 POs.
The new SQL stored procedure lookup rules in Forms 9.1 make doing something like this possible. The example in the online help shows how to use a stored procedure to auto-append an incrementing number from the database to a form when it loads, which might solve some of your problems. However, the number is incremented after the form loads (not when it is submitted), so that might not exactly fit your needs. Here's the link to the correct page of the online help.
There is very simple solution that we use and that is to lay out the sheet say 6 up on a A4 sheet as a master page and in document setup set the number of pages to 1,000 if that is the amount you require. Put a page number on each ticket on the page and although they will all have the same number on each page, we put the the first two letters of the customers business name before each number followed by the letters of the alphabet so it then reads for example BT1A, BT2A, BT3A, BT1B, BT2B, BT2C and so on as each page is printed.
Here is my problem. i have a series of 3 digit numbers that need to be cinverted to a series of 4 digit numbers using this following 722 needs to read as 5622 and in the next collum SV-7822 in the collum's to the right. what type of formula is this and how can i do it? The above is an example, i have a whole range of 3 digit numbers that need the exact. rules applied to all numbers. which is why i need a formula to do it. Help someone please!!!! I'm not completely clear on what your looking for but if 722 was in A1 in B1: = A1 + 4900 in B2: = ="SV-" & A1 + 7100...
Note that we can consider multiple sequences at the same time by using different variables; e.g. {\displaystyle (b_{n})_{n\in \mathbb {N} }} could be a different sequence than {\displaystyle (a_{n})_{n\in \mathbb {N} }} . We can even consider a sequence of sequences: {\displaystyle ((a_{m,n})_{n\in \mathbb {N} })_{m\in \mathbb {N} }} denotes a sequence whose mth term is the sequence {\displaystyle (a_{m,n})_{n\in \mathbb {N} }} .
###### we have printed AP checks using the check number from 0000000001 to 0000000006, but we havent posted those batches. we have only one checbook. Now can we restart the check number from 000001. Then do we need to delete the previous batches for checks printed. What is the best approach in this regard. Thanks in Advance, Arun. In the Post Payables Checks batch window (Transactions > Purchasing > Post Checks), choose the batch in question, then select Reprint Checks from the drop-down list. Enter 000001 for your starting check number. You will also need to go ...
If you’re producing any kind of numbered items in-house that are multiple-up on a sheet where you need to control all the variables to meet your production needs, the autonumbering feature through numbered lists is the way to go! Just step and repeat away & InDesign will do all the work. No need to fool with a seperate “numbers” file or deal with a data merged document. I think it’s by far the best option for basic numbering.
Joshua, I described this problem in my post #5 — the footnote settings are doc-global. I know no present solution to your problem — which is why we still need Adobe to code a counter! And we also need to be able to set up footnotes which are frame-wide, not just column-wide. And, natch, we need headings which are frame-wide, spanning multiple columns — so the ID engineers can’t retire quite yet. :-)
I want to a sequential number to fill in automatically each time the form is filled out. Malissa, A simple way would be to use something like this, you could assign it to a button, an open or before print event. Sheets("Sheet1").Range("A1").Value = _ Sheets("Sheet1").Range("A1").Value + 1 For other ways to do this or if this is going to be used in a temple have a look here http://www.mcgimpsey.com/excel/udfs/sequentialnums.html -- Paul B Always backup your data before trying something new Please post any response to the newsgroups so others...
# Footnotes, after all, are always numbered sequentially and update when you add or remove one. The problem is that each time you add a footnote you get an extra space down at the bottom of the column. The solution? Make a paragraph style for your footnotes that specifies a .1 pt tall size with a 0 (zero) leading, then choose that paragraph style in the Document Footnote Options dialog box.
One of the easiest ways to begin applying numbers is by starting to type a numbered list. Word recognizes that you are creating a list and responds accordingly by converting text that you type into numbered items. The number scheme, delimiter characters that mark the beginning or end of a unit of data and formatting are all based on what you have typed. sequential numbering printing
|
# Understanding performance: graph connected components
I need to analyze directed graphs with 10M edges, 1M vertices, and 300K strongly connected components, so that the largest one contains 400K vertices.
I read some explanations of Leonid Shifrin here, here and here. Although I don't quite understand why his code merge is so blazingly fast, I learned two things: no recursions only iterations, linked lists are cool. Trying to mimic his approach, I prepared two function for finding strongly connected components. The first one is Kosaraju's algorithm:
ClearAll[readEdge, SowDFS, DFS, kosaraju]
(* Sow as a postvisit vertex function *)
SowDFS[q[v_?newQ, rest_]] := SowDFS[newQ[v] = False; adjIn[v] /. q[] -> q[Sow[v], rest]];
SowDFS[q[v_, rest_]] := SowDFS[rest];
(* deep first scan *)
DFS[q[v_?newQ, rest_]] := DFS[Sow[v, tag]; newQ[v] = False; adjOut[v] /. q[] -> rest];
DFS[q[v_, rest_]] := DFS[rest];
(* finding connected components *)
kosaraju[graphList_] := Block[{adjOut, adjIn, order, newQ, q, tag = 1, $IterationLimit = Infinity}, (* construction of adjacency lists *) adjOut[v_] = q[]; adjIn[v_] = q[]; Scan[readEdge, graphList]; SetAttributes[q, HoldAllComplete]; (* the first scan: topological sort of the reverse graph *) newQ[v_] = True; order = Reverse[Reap[Scan[SowDFS[q[#, q[]]] &, DeleteDuplicates@Flatten[graphList]]][[2, 1]]]; (* the second scan: finding components *) Clear[adjIn, newQ]; newQ[v_] = True; Last@Reap[Scan[(tag++; DFS[q[#, q[]]]) &, order], _, #2 &] ] The second one is based on Tarjan's algorithm, as it is described in Wikipedia. I only adapted it in non-recursive way: (* reading edge *) readOut[{a_, b_}] := (adj[a] = q[b, adj[a]]); (* test to start component *) strongQ[v_] := index[v] == link[v]; (* pop from stack *) pop[v_, q[v_, rest_]] := (inQ[Sow[v, v]] = False; rest); pop[v_, q[a_, rest_]] := pop[v, inQ[Sow[a, v]] = False; rest]; (* deep first scan with two stacks *) biDFS[q[v_?newQ, rest_], stack_] := biDFS[ (* previsit function *) newQ[v] = False; index[v] = link[v] = ++idx; {p[v], root} = {root, v}; adj[v] /. q[] -> q[h[v], rest], inQ[v] = True; q[v, stack]]; biDFS[q[h[v_?strongQ], rest_], stack_] := biDFS[rest, pop[v, stack]]; biDFS[q[v_?inQ, rest_], stack_] := biDFS[link[root] = Min[link[root], index[v]]; rest, stack]; biDFS[q[v_, rest_], stack_] := biDFS[rest, stack] (* postvisit vertex function *) h[a_] := (root = p[a]; link[root] = Min[link[root], link[a]]; a); (* start scan *) start[v_?newQ] := Block[{p, root = v, inQ, link, index, idx = 0}, (* p is for parent *)p[a_] := a; inQ[a_] = False; biDFS[q[v, q[]], q[]]] (* finding connected components *) tarjan[graphList_] := Block[{adj, newQ, q,$IterationLimit = Infinity},
(* construction of adjacency out-lists *)
(* finding components *)
newQ[v_] = True;
Last@Reap[Scan[start, DeleteDuplicates@Flatten[graphList]], _, #2 &]
]
I try not to use WM build in functions for graph analysis in order to have a clear comparison.
To test the performance I chose Google+ graphs from Stanford Large Network Dataset Collection. The renamed version of one of them is available here.
SetDirectory[NotebookDirectory[]];
graphList = Import["edges.dat"];
This graph contains 5126172 edges but only 16405 vertices. The structure of SCC is not completely trivial: a single connected component with 11064 vertices, another one — 11 vertices, one more — 5 vertices, three ones — 3 vertices each, ten components with 2 vertices, and 5296 vertices which are not strongly connected.
On the one hand, WM somehow requires a lot of time to form the graph:
result2 = ConnectedComponents[
Graph[DirectedEdge @@@ graphList]]; // AbsoluteTiming
(* {139.523577, Null} *)
and only half of a second to find the strongly connected components.
On the other hand, reading the graph takes 29 seconds in my function tarjan, and 12 seconds to find the components, thus 42 second in total:
result1 = tarjan[graphList]; // AbsoluteTiming
(* {42.207118, Null} *)
Hence, my question is it possible to speed up the tarjan function so that it will be at least 10 times slower than the built-in function ConnectedComponents?
My nb-file is available here.
-
The merge function is fast because it is type-specialized on numerical lists, on which it gets compiled. No time to look at your code in detail right now, but any top-level code with some sort of iteration will likely be much slower than the built-in function. You may want to check out my answer here, for a semi-compiled implementation of connected components which might rival a built-in one, and actually the entire thread to which it belongs, for some extra context / ideas. – Leonid Shifrin Jun 10 '14 at 19:48
If you don't want to restrict yourself to Mathematica you can take a look here and here – Sektor Jun 10 '14 at 20:04
@LeonidShifrin Thanks for the quick response. Sure, I will read the thread, but at a glance it is about connected components of an undirected graph, that is a bit easier. – Grisha Kirilin Jun 10 '14 at 20:22
@Sektor looks interesting, but I have a problem for one time only. I use it to learn something new about Mathematica. – Grisha Kirilin Jun 10 '14 at 20:26
There might be methods of interest at this link. My guess is mostly they will be too slow though (except maybe the built-in ConnectedComponents). – Daniel Lichtblau Jun 11 '14 at 13:47
|
# Control a 2.4 Ghz AR Drone from the computer [closed]
I had a Doyusha Nano Spider R/C mini-copter, it's controlled by a 4ch joystick 2.4 Ghz.
I look for a low cost method to control it from the computer. The software is not a problem, but how can I transform the WIFI or the Bluetooth signal of the computer to an R/C signal compatible with the mini-copter receptor?
Or is there another solution that is low cost?
The simplest way to do this, is to buy a more advanced PPM transmitter that has a trainer port, and use this PCTx device to control it from your PC through USB. They provice a simple library and some sample code to get your started. The control signals go from your
software -> through the PCTx device -> PPM transmitter -> over RF to your copter
Compatible transmitters are listed on that link.
I'm not familiar with your particular brand of mini-copter, but im assuming it uses standard 4 channel RC-PPM control signals. If it doesn't, the above solution will not work.
You can also, if you are so inclined roll out your own PC based PWM transmitter. This would involve writing software to implement the PPM signal, which can get a bit involved. You might even need some sort of an oscilloscope or a signal analyzer to debug issues. Some people have created Arduino based solutions. Examples: 1
Again, the assumption is that your copter uses standard RC-PWM. If it doesn't you'll have to first figure out what protocol it uses and then try to emulate that using software and an RF Tx module.
Since your copter receives 2.4Ghz radio signals, there is no drop-in solition to directly use WiFi or Bluetooth.
• Thanks for the useful information, yes I have a 4 channel RC controller, so I need just a PCTx device to interface it with a computer or an arduino? – Amine Horseman Sep 27 '14 at 1:33
• does your controller have a trainer port? – RaGe Sep 28 '14 at 11:06
• I don't know, how can I verify this? – Amine Horseman Sep 28 '14 at 11:47
• Thanks for the useful information, yes I have a 6 channel RC controller, so I need just a PCTx device to interface it with a computer or an arduino? – user15105 Oct 25 '16 at 22:12
• If you have a new question, please ask it by clicking the Ask Question button. Include a link to this question if it helps provide context. - From Review – Ben Oct 26 '16 at 0:36
In my university lab we hacked the radio controller with an arduino that receives the inputs from the computer and outputs to the controller. The arduino only substitutes the joysticks. So we still use the 2.4GHz controller to control the drone but matlab is sending control signals to the arduino which sends them to the original 2.4GHz controller, which sends them to the drone. It works! We've can control the drones position within 3cm in a 3x3x3[m] area just by using a PID controller and a stereo camera as the sensor. And this was done in a 20€ plastic quadcopter.
• Goncalo, I'm trying to do something pretty similar to what you have done in your lab, but I'm quite new to this subject. So I wanted to know if you used any other flight controller aboard your drone? Because as far as I know, the flight controllers handle the control task by themselves; They just need high level commands (i.e. throttle level, etc.). I can't understand which part of the control algorithm had you handled with the PID (implemented in Matlab) and which part was done on the on-board controller. Could you please shed some light on this matter? – Manzoori Sep 9 '15 at 12:15
|
# Math Help - trigonomic identities
1. ## trigonomic identities
i cant seem to solve the following identity.
(1+sin2x)/cos2x = cos2x/(1-sin2x)
any help would be greatly appreciated
2. Originally Posted by rookster101
i cant seem to solve the following identity.
(1+sin2x)/cos2x = cos2x/(1-sin2x)
any help would be greatly appreciated
start by doing this to the left side ...
$\frac{1+\sin(2x)}{\cos(2x)} \cdot \frac{1-\sin(2x)}{1-\sin(2x)}$
|
Gregory Ditzler
December 04, 2013
# Introduction to LaTeX
#### Gregory Ditzler
December 04, 2013
## Transcript
1. ### An Introduction to Typesetting with L A TEX Sponsored by
IGSA, DIG, and GSA Gregory Ditzler Drexel University PhD Candidate Vice President of DIG Dept. of Electrical & Computer Engineering Gregory Ditzler (Drexel University) L ATEXTutorial 1 / 20
2. ### What is L A TEX? L A TEXand Typesetting According
to Wikipedia, L A TEX is a “document preparation system and document markup language” extends the TEX typesetting system well-suited for the production of long articles, books and theses easy to learn, but there is a bit of a learning curve documents can be prepared with a simple text editor (e.g., Vim, Emacs) Central Idea: focus on the content of the document, not the presentation of the document central difference between typesetting and document processing! typesetting lets the user focus on the logical structure of their document rather than the initial presentation. Is L A TEX only for word-like documents? No. It can be used for presentations and posters too! Word Processor vs. L A TEX Word processors, such as Word and Pages, integrate all document processing operations into a single computer program. The formatting, display, and final output is all presented in the same program. By formatting we mean defining lines, sections, paragraphs, chapters, . . . Typesetting with L A TEX only focused on formatting the document. Content not presentation! input text into the document defining the structure compile the document view output (i.e., DVI, PDF, PS) Gregory Ditzler (Drexel University) L ATEXTutorial 2 / 20
3. ### More L A TEX What type of tools do you
recommend? Editors/GUIs: TeXShop (Mac), TeXworks (Linux/Windows), Texmaker (Mac/Linux/Windows) At the end of the day, all you really need is a simple text editor Bibliography: BibDesk (Mac), JabRef (Linux/Windows) BibDesk and JabRef can act as na¨ ıve reference managers. Other tools, such as Mendelay and Endnote, can export to the bibtex file format. Getting help: http://google.com Advantages / Disadvantages Pros: great for large documents (journals, theses, books), beautiful mathematics, consistent formatting, great figure generation tools Cons: learning curve, not for those work a nit-picky about the absolute appearance of their document, grammar/spell checker (?). Gregory Ditzler (Drexel University) L ATEXTutorial 3 / 20
4. ### My L A TEX Setup Gregory Ditzler (Drexel University) L
ATEXTutorial 4 / 20
5. ### More L A TEX advantages auto title pages auto table
of contents auto table of figures auto table of tables easy macros easy to customize write “sub”-documents easy to keep track of references light weight within document links superior quality cost (ugh, free!) Time Saver: Changing from one document format to another could be as simple as changing a few lines of code rather than the format to the entire document. Gregory Ditzler (Drexel University) L ATEXTutorial 5 / 20
18. ### Citations Using Citations BibTex is the easiest way to handle
citations in your document. There are may reference managers freely available that handle the BibTex format without actually view the BibTex format. (I’ll show you a demo) A citation package must be added to the top of your document. I recommend using cite or natbib. Depending on the reference style would call for one of the packages over the other (i.e., APA, Chicago, IEEE, . . . ). Like other types of references, each citation has a label that is called when you want to make a citation. For example, if I wanted to cite my recent IEEE Transactions on Knowledge & Data Engineering article I would use something like \cite{Ditzler2013TKDE} Just add the following lines before \end{document} \bibliographystyle{ieeetr} \bibliography{myrefs} Gregory Ditzler (Drexel University) L ATEXTutorial 18 / 20
19. ### Working with Large Documents Handling Multiple Files Working with very
large documents is very difficult with programs such as Word. It becomes increasingly difficult as you add many figures into the document (such as a thesis). All chapters, sections, subsections, . . . are located in the same document. Recall that a L A TEX document is simply a text file. Doesn’t get too much more light weight than that! the logical structure of a document may contain several different sub-levels L A TEX allows authors to write their document with multiple files and combine them together. This keeps the user from ending up with 1000+ line text files. \documentclass{article} \begin{document} \title{Our First Document} \author{You and I} \maketitle \input{section1.tex} \input{section2.tex} \input{section3.tex} \end{document} Gregory Ditzler (Drexel University) L ATEXTutorial 19 / 20
20. ### A Few Notes Before the Demo and Example Time New
to L A TEX? Compile often! My best piece advice: practice! reading about it is not as helpful Other useful things \tableofcontents: generate a table of contents \listoffigures: generate a table of figures \def: create command definitions. e.g., \def\xbf{\mathbf{x}} allows use to use \xbf instead of \mathbf{x} to create a bold face x. Gregory Ditzler (Drexel University) L ATEXTutorial 20 / 20
|
# Addition of Matrices – Properties and Examples
Here you will learn how to add matrix and properties of addition of matrices with examples.
Let’s begin –
Let A, B be two matrices, each of order $$m \times n$$. Then their sum A + B is a matrix of order $$m \times n$$ and is obtained by adding the correspoding elements of A and B.
Thus, if A = $$[a_{ij}]_{m\times n}$$ and B = $$[b_{ij}]_{m\times n}$$ are two matrices of the same order, their sum A + B is defined to be the matrix of order $$m\times n$$ such that
$$(A + B)_{ij}$$ = $$a_{ij}$$ + $$b_{ij}$$ for i = 1, 2, ……. , m and j = 1, 2, ……. n
Note : The sum of two matrices is defined by only when they are of the same order.
Example : If A = $$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}$$, B = $$\begin{bmatrix} 6 & 5 & 4 \\ 3 & 2 & 1 \end{bmatrix}$$, then
A + B = $$\begin{bmatrix} 1 + 6 & 2 + 5 & 3 + 4 \\ 4 + 3 & 5 + 2 & 6 + 1 \end{bmatrix}$$ = $$\begin{bmatrix} 7 & 7 & 7 \\ 7 & 7 & 7 \end{bmatrix}$$
If A = $$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}$$, B = $$\begin{bmatrix} -1 & 2 & 1 \\ 3 & 2 & 1 \\ 2 & 5 & -2 \end{bmatrix}$$, then A + B is not defined, because A and B are not of the same order.
(a) Commutativity : If A and B are two $$m\times n$$ matrices, then A + B = B + A. i.e. matrix addition is commutative.
(b) Associativity : If A, B, C are three matrices of the same order, then (A + B) + C = A + (B + C) i.e. matrix addition is associative.
(c) Existence of Identity : The null matrix is the identity element for matrix addition.
(d) Existence of Inverse : for every matrix A = $$[a_{ij}]_{m\times n}$$ there exist a matrix $$[-a_{ij}]_{m\times n}$$, denoted by -A, such that A + (-A) = O = (-A) + A
(e) Cancellation Laws : If A, B, C are matrices of the same order, then
A + B = A + C $$\implies$$ B = C
and, B + A = C + A $$\implies$$ B = C
|
### There are not many Selves, there are no Separate Selves, there is only One Self.
There is no such thing as division. Not just in mathematics. Division is an erroneous mathematical- and societal construct that simply does not exist. That which exists is indivisible. Variegated? Yes. Variation or diversity exists but division does not. Division does not exist. All is but One existing as variegated. In other words; there are not many Selves, there are no Separate Selves, there is only One Self.
~ Wald Wassermann
|
# RMS Value of Non-Sinusoidal Current MCQ [Free PDF] – Objective Question Answer for RMS Value of Non-Sinusoidal Current Quiz
1. If the maximum value of current is 5√2 A, what will be the value of RMS current?
A. 10 A
B. 5 A
C. 15 A
D. 25 A
We know, the value of RMS current = value of max current/√2
Substituting the value of max current we get, RMS current = 5A.
2. If Im is the maximum value of a sinusoidal voltage, what is the instantaneous value?
A. i = Im/2
B. i = Imsinθ
C. i = Imcosθ
D. i = Imsinθ or i = Imcosθ
The instantaneous value of a sinusoidally varying current is
i = Imsinθ or i = Imcosθ
where Im is the maximum value of current.
3. Average value of current over a half cycle is?
A. 0.67Im
B. 0.33Im
C. 6.7Im
D. 3.3Im
Average current = ∫0πidθ/π
= ∫0πImsinθ dθ/π = 2Im/π = 0.67 Im.
4. What is the correct expression for the RMS value of current?
A. Irms = Im/2
B. Irms = Im/√2
C. Irms = Im/4
D. Irms = Im
Irms2 = ∫0πdθ i2/2π = Im2/2
Irms = Im/√2.
5. Average value of current over a full cycle is?
A. 0.67Im
B. 0
C. 6.7Im
D. 3.3Im
The average of sine or cosine over a period is zero so, the average value of current over a full cycle is zero.
6. What is the correct expression for the form factor?
A. Irms × Iav
B. Irms / Iav
C. Irms + Iav
D. Irms – Iav
The correct expression for the form factor is Irms/Iav where Irms is the RMS value of the current and Iav is the average current.
7. For a direct current, the RMS current is ________ the mean current.
A. Greater than
B. Less than
C. Equal to
D. Not related to
For a direct current, the mean current value is the same as that of the RMS current.
8. For a direct current, the RMS voltage is ________ the mean voltage.
A. Greater than
B. Less than
C. Equal to
D. Not related to
For a direct current, the mean voltage value is the same as that of the RMS voltage.
9. What is the value of the form factor for the sinusoidal current?
A. π/2
B. π/4
C. 2π
D. π/√2
For sinusoidal current, Irms = Im/√2
Iav = √2 Im/π
So, form factor = Irms/Iav = π/2.
10. If the maximum value of the current is 5√2 A, what will be the value of the average current?
A. 10/π A
B. 5/π A
C. 15/π A
D. 25/π A
|
# pyqn
pyqn is a Python 3 package for handling physical quantities with names, values, uncertainties and units. It is hosted on github and is available as free software under the GPL-v3 open source licence.
### Units
The units of a physical quantity are represented by a Units object, which has methods for deducing the dimensions of a set of units and for calculating conversion factors between units. The simplest Units, expressed as a string are simply the commonly-used unit symbols such as m (metre), Pa (pascal), Å (Ångstrom), and Hz (hertz). Click here for a full list of recognised units.
These simple units can carry any of the SI prefixes between y (yocto, 10-24) and Y (yotta, 10+24). For example, nm (nanometres), kg (kilograms), GHz (gigahertz), and μs (microseconds). Note that unit strings are represented as unicode strings encoded as UTF-8 throughout the pyqn package. They can be raised to any integer power by placing the exponent to the right of the unit symbol, as in J2 (joules squared) and cm-1 (inverse centimetres). Note that no exponentiation operator such as ^ or ** is used.
Units are combined by indicating multiplication by a period ('.') and division by a solidus ('/'). Note that a maximum of one units division is supported, and parentheses are not implemented (yet). Some examples are: g.mol-1 (grams per mole), kg.m.s-2 (kilogram metres per second per second) and V/m (volts per metre). Different Units objects can be further combined using the usual Python mathematical operators, * and /:
In [1]: from pyqn.units import Units
In [2]: u1 = Units('km')
In [3]: u2 = Units('hr')
In [4]: u3 = u1/u2
In [5]: print(u3)
km.hr-1
Units can be converted from one system to another using the conversion method to return the conversion factor, only if both systems have the same dimensions:
In [6]: u4 = Units('m/s')
In [7]: u3.conversion(u4) # OK: can convert from km/hr to m/s
Out[7]: 0.2777777777777778
In [8]: u3.conversion(u2) # Oops: can't convert from km/hr to m!
...
UnitsError: Failure in units conversion: units km.hr-1[L.T-1] and hr[T] have
different dimensions
The conversion method also accepts a string argument so the valid conversion above is equivalent to:
In [9]: u3.conversion('m/s')
Out[9]: 0.2777777777777778
It is also possible to convert between units without creating a Units object at all, using the convenience method convert:
In [10]: from pyqn.units import convert
In [11]: convert('km/hr', 'm/s')
Out[11]: 0.2777777777777778
In [12]: 299792458 * convert('m/s', 'fur/ftn')
Out[12]: 1802617499785.2542 # speed of light in furlongs per fortnight
### Physical Quantities
The Quantity class, found in the pyqn.quantity module, provides a simple representation of physical quantities with a name, value, units, uncertainty and description.
#### Creating a pyqn Quantity
The complete initialization method for Quantity objects allows one to set the Quantity's plain-text name (name), LaTeX representation (latex), HTML representation (html), value (value, a floating-point number), units (units, either a text string to be parsed, or a Units object), uncertainty (sd, expressed as a 1σ standard deviation), and description or definition (definition). For example,
Mr = Quantity ('M_EtOH',
r'$M_\mathrm{EtOH}$',
'<em>M</em><sub>EtOH</sub>',
44.01,
'g.mol-1',
0.012,
'molar mass of ethanol')
It is not necessary to set all (or any) of these attributes:
k = Quantity(name='k', value=2300., units='N/m',
definition='N2 bond force constant')
#### Parsing pyqn Quantities
pyqn will do its best to parse a plain text string into a Quantity object. Quantity names are assigned in this string with name =, uncertainties given as an integer value in parentheses at the appropriate significant figure after the value, and units as a string separated from the value/sd by whitespace. Scientific notation is supported with the exponent denoted by one of e, E, d or D). For example,
Mr = Quantity.parse('Mr = 44.010(12) g.mol-1')
k = Quantity.parse('k = 2.3e3 N/m')
x = Quantity.parse('400')
#### Outputing pyqn Quantities
In addition to the usual __str__() method, which returns a string representation of the Quantity object with its name and units (if defined), there is also a method, as_str(), which takes boolean arguments, b_name, b_sd and b_units to control which of the Quantity's name, uncertainty and units should be output in the returned string. For example,
>>> g = Quantity.parse('g = 9.818(7) m.s-2')
>>> print(g.as_str()) # by default, output name, sd and units
g = 9.818(7) m.s-2
>>> print(g.as_str(b_sd=False))
g = 9.818 m.s-2
>>> print(g.as_str(b_name=False, b_sd=False))
9.818 m.s-2
#### Quantity Algebra
Quantity objects support simple algebra (addition, subtraction, multiplication and division). Errors (i.e. standard deviation uncertainties) are propagated, but assumed to be uncorrelated. Note that addition and subtraction can only be carried out with Quantity operands with the same units (not just the same dimensions). For example,
>>> DeltaH = Quantity.parse('DeltaH = 47.15(4) kJ.mol-1')
>>> T = Quantity(name='T', units='K', value=298., sd=0.5)
>>> DeltaS = DeltaH / T # DeltaS now has units kJ.K-1.mol-1
>>> DeltaS.convert_units_to('J.K-1.mol-1')
>>> print(DeltaS.as_str())
158.22(30) J.K-1.mol-1
|
# Math Algebra
posted by .
Give an orthogonal vector to the plane:
7x - 5y + 8z = 9
• Math Algebra -
x = 9 - 8z
-----
35y
## Similar Questions
1. ### math
A trigonmetric polynomial of order n is t(x) = c0 + c1 * cos x + c2 * cos 2x + ... + cn * cos nx + d1 * sin x + d2 * sin 2x + ... + dn * sin nx The output vector space of such a function has the vector basis: { 1, cos x, cos 2x, ..., …
2. ### Math - Vectors
Prove that vector i,j and k are mutually orthogonal using the dot product. What is actually meant by mutually orthogonal?
3. ### Linear Algebra, orthogonal
The vector v lies in the subspace of R^3 and is spanned by the set B = {u1, u2}. Making use of the fact that the set B is orthogonal, express v in terms of B where, v = 1 -2 -13 B = 1 1 2 , 1 3 -1 v is a matrix and B is a set of 2 …
4. ### Math
Mark each of the following True or False. ___ a. All vectors in an orthogonal basis have length 1. ___ b. A square matrix is orthogonal if its column vectors are orthogonal. ___ c. If A^T is orthogonal, then A is orthogonal. ___ d. …
5. ### calculus
I have 3 points: P(-3, 1, 2), Q(-1, 2, 3), R(2, 1, 0) and I need to find a nonzero vector orthogonal to the plane through these three points. I seem to recall this having something to do with the cross product, so I mad vectors PQ …
6. ### Precalc
Given the plane 3x+2y+5z=54 and the points P0(6, 8, 4)[on plane] and P1(13, 18, 5) [not on plane] A. Find vector n, a vector normal to the plane B. Find vector v from P0 to P1 C. Find the angle between vector n and vector v D. Find …
7. ### Linear Algebra
Ok this is the last one I promise! It's from a sample exam and I'm practicing for my finals :) Verify if the following 4 points are consecutive vertices of a parallelogram: A(1,-1,1); B(3,0,2);C(2,3,4);D(0,2,3) (b) Find an orthogonal …
8. ### Math
I'm doing a bunch of practice finals and I don't know how to approach this problem. Find a vector a such that a is orthogonal to < 1, 5, 2 > and has length equal to 6. If I want to find a vector that is orthogonal to <1,5,2>, …
9. ### Math - Vectors
Let the points A = (−1, −2, 0), B = (−2, 0, −1), C = (0, −1, −1) be the vertices of a triangle. (a) Write down the vectors u=AB (vector), v=BC(vector) and w=AC(vector) (b) Find a vector n that is …
10. ### Precalculus
Latex: The vector $\begin{pmatrix} k \\ 2 \end{pmatrix}$ is orthogonal to the vector $\begin{pmatrix} 3 \\ 5 \end{pmatrix}$. Find $k$. Regular: The vector <k, 2>, is orthogonal to the vector <3, 5>. Find k. I can't seem …
More Similar Questions
|
# How to prove that determinant can take any real value using only this definition of the determinant?
I was reading some facts about the determinant and refreshed my memory with the fact that the determinant of the $n\times n$ matrix can be defined as $\det(A)=\sum_{\sigma \in S_n} sgn(\sigma) \prod_{i=1}^{n} a_{i, \sigma_i}$.
Now,I would like to know this:
How to prove that for every real number $\alpha$ there exist sequence of matrices $A_1(\alpha),A_2(\alpha),...,A_n(\alpha),...$ ($A_i$ is $i \times i$ matrix) such that we have $\det(A_i(\alpha))=\alpha$ for every $i \in \mathbb N$?
In other words I would like to know how it can be proven from the above stated definition of determinant that every real number is the value of the determinant of matrix of any number rows and columns.
I am aware that this can easily be proven by using other properties of the determinant but would like to know is there an easy way to prove this by using only the above stated definition.
• You can take the diagonal matrix with entries $1, 1, \dots, 1, 1, \alpha$. – David Feb 1 '16 at 18:03
Consider diagonal matrices with entries $\alpha$, $1,\ldots,1$ in the main diagonal. Then the only term in your sum is the one coming from the identity permutation.
|
# Equivalent fractions table
## What is an equivalent fraction chart?
Chart of fractions that are all equal using different denominators and equivalent decimal values. Fraction. Equivalent Fractions. Decimal. 1/2.
## How do you find equivalent fractions?
To find the equivalent fractions for any given fraction, multiply the numerator and the denominator by the same number. For example, to find an equivalent fraction of 3/4, multiply the numerator 3 and the denominator 4 by the same number, say, 2. Thus, 6/8 is an equivalent fraction of 3/4.
## What is an example of a equivalent fraction?
Equivalent fractions are fractions that represent the same value, even though they look different. For example, if you have a cake, cut it into two equal pieces, and eat one of them, you will have eaten half the cake.
## What is 3/5 equivalent to as a fraction?
So, 3/5 = 6/10 = 9/15 = 12/20.
## What is 3/4 equivalent to as a fraction?
Equivalent fractions of 3/4 : 6/8 , 9/12 , 12/16 , 15/ Equivalent fractions of 1/5 : 2/10 , 3/15 , 4/20 , 5/
## What is the equivalent fraction of 2 by 5?
Answer: The fractions equivalent to 2/5 are 4/10, 6/15, 8/20, etc.
## What is 2 4 equal to as a fraction?
Fractions equivalent to 2/4: 4/8, 6/12, 8/16, 10/20 and so on … Fractions equivalent to 3/4: 6/8, 9/12, 12/16, 15/20 and so on …
## What is 1/4 equivalent to as a fraction?
Answer: The fractions equivalent to 1/4 are 2/8, 3/12, 4/16, etc. Equivalent fractions have the same value in their reduced form.
## What is the fraction 4/5 equivalent to?
Hence, 8/10, 12/15 and 16/20, 20/25 are equivalent fractions of 4/5.
## What is 5/10 equivalent to as a fraction?
½What are the equivalent fractions of 5/10? 5/10 is equal to ½ after simplification. Hence, the equivalent fractions of 5/10 are: ½, 2/4, 3/6, 4/8 and so on.
1 2 and 6 12 .
## What is 5/6 equal to as a fraction?
10/12Decimal and Fraction Conversion ChartFractionEquivalent Fractions5/610/1260/721/72/1412/842/74/1424/843/76/1436/8423 more rows
## How do you find equivalent equations?
Combine any like terms on each side of the equation: x-terms with x-terms and constants with constants. Arrange the terms in the same order, usually x-term before constants. If all of the terms in the two expressions are identical, then the two expressions are equivalent.
## How do you find equivalent fractions for kids?
0:245:15Equivalent Fractions | Math for 3rd Grade | Kids Academy – YouTubeYouTubeStart of suggested clipEnd of suggested clipIn so for example an equivalent fraction to 1 4 could be 2 8. to show this in a picture instead ofMoreIn so for example an equivalent fraction to 1 4 could be 2 8. to show this in a picture instead of cutting our fraction into four pieces we’ll cut our fraction into eight pieces.
## What is the equivalent fraction of 2 by 3?
Answer: The fractions equivalent to 2/3 are 4/6, 6/9, 8/12, etc. Equivalent fractions have the same value in the reduced form. Explanation: Equivalent fractions can be written by multiplying or dividing both the numerator and the denominator by the same number.
## How do you find equivalent fractions 5th grade?
14:2615:58Equivalent Fractions – 5th Grade Math – YouTubeYouTubeStart of suggested clipEnd of suggested clipIf I multiply by the top by 4 in the bottom by 4 on the top I’ll get 8 and on the bottom I will getMoreIf I multiply by the top by 4 in the bottom by 4 on the top I’ll get 8 and on the bottom I will get 12. So I look for 8. 12.
## What is equivalent table?
The Equivalent Fraction Table Calculator is an excellent fraction calculator that has several useful features that allow you to compare fractions, put fractions into value order (to answer the question “which fraction is bigger?”) and to produce equivalent fraction values for several related fractions based on the denominator value of one of the fractions. Lets start with the basic use:
## How do I add sort the order of fraction values in the Equivalent Fraction Table?
By default, the Equivalent Fraction Table Calculator will sort the fraction values within the Equivalent Fraction Table Results with the highest value at the top of the table, to change the way the fraction values are ordered:
## Is the first fraction in the equivalent fraction table?
Congratulations! The first fraction will now appear in the Equivalent Fraction Table Calculator. Not only that, but you will also see a simplified version of the fraction in its lowest form and a value in the equivalent fraction column (See the instructions ” How do I change the common fraction denominator of the equivalent fractions in the Equivalent Fraction Table? ” for more on how to use equivalent fraction column).
## How to find equivalent fractions?
To find equivalent fractions, you just need to multiply the numerator and denominator of that reduced fraction ( 13) by the same natural number, ie, multiply by 2, 3, 4, 5, 6
## What is the equivalent of 2 6?
The fraction 2 6 is equal to 1 3 when reduced to lowest terms. To find equivalent fractions, you just need to multiply the numerator and denominator of that reduced fraction ( 1 3) by the same integer number, ie, multiply by 2, 3, 4, 5, 6 … and so on …
## Is 1 3 a fraction?
Important: 1 3 looks like a fraction, but it is actually an improper fraction.
## Can you convert fractions to decimals?
This Equivalent Fractions Table/Chart contains common practical fractions. You can easily convert from fraction to decimal, as well as, from fractions of inches to millimeters.
## What are Equivalent Fractions?
Equivalent fractions are fractions with different numbers representing the same part of a whole. They have different numerators and denominators, but their fractional values are the same.
## How to make a fraction equivalent?
Multiply both the numerator and denominator of a fraction by the same whole number. As long as you multiply both top and bottom of the fraction by the same number, you won’t change the value of the fraction , and you’ll create an equivalent fraction.
## What is half of a fraction?
For example, think about the fraction 1/2. It means half of something. You can also say that 6/12 is half, and that 50/100 is half. They represent the same part of the whole. These equivalent fractions contain different numbers but they mean the same thing: 1/2 = 6/12 = 50/100
|
anonymous 5 years ago how do i find the magnetic susceptibility of a material in relation to it's permittivity and permeability?
1. anonymous
magnetic field of any materials depend on its permeability... permeability of the materials can be determined by $\epsilon$ . k = $\epsilon'$
2. anonymous
Don't worry, found it. FYI: $X _{b} = (1-\mu _{r})^{-1}$ where $\mu _{r}$ is the relative permeability
3. anonymous
right
|
- Current Issue - Ahead of Print Articles - All Issues - Search - Open Access - Information for Authors - Downloads - Guideline - Regulations ㆍPaper Submission ㆍPaper Reviewing ㆍPublication and Distribution - Code of Ethics - For Authors ㆍOnlilne Submission ㆍMy Manuscript - For Reviewers - For Editors
Stability of pexiderized Jensen and Jensen type functional equations on restricted domains Bull. Korean Math. Soc. 2019 Vol. 56, No. 3, 801-813 https://doi.org/10.4134/BKMS.b180607Published online March 13, 2019Printed May 31, 2019 Chang-Kwon Choi Kunsan National University Abstract : In this paper, using the Baire category theorem we investigate the Hyers-Ulam stability problem of pexiderized Jensen functional equation $$2f \left(\frac{x+y}{2} \right) - g(x) - h(y) = 0 \nonumber$$ and pexiderized Jensen type functional equations \begin{align} & f(x+y)+g(x-y)-2h(x)=0, \nonumber \\ & f(x+y)-g(x-y)-2h(y)=0 \nonumber \end{align} on a set of Lebesgue measure zero. As a consequence, we obtain asymptotic behaviors of the functional equations. Keywords : Baire category theorem, first caetgory, second category, Hyers-Ulam stability, pexiderized Jensen functional equation, pexiderized Jensen type functional equation, restricted domain MSC numbers : 39B82 Downloads: Full-text PDF Full-text HTML
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.