Limit superior and limit inferior, measure theory
Let $(E,\mathcal A,\mu)$ be a measure space. Let $(A_n)_{n \geq 1}$ be a sequence of sets in $\mathcal A$. We define $B_n = \bigcap_{k \geq n} A_k$ and $C_n = \bigcup_{k \geq n} A_k$, so that $\lim \sup (A_n) = \bigcap_{n \geq 1} C_n$ and $\lim \inf (A_n) = \bigcup_{n \geq 1} B_n$.
I want to show that :
a) $\mu (\lim \inf (A_n)) \leq \lim \inf (\mu(A_n))$
b) If we suppose $\mu(\bigcup_{n \geq 1} A_n) < \infty$, $\mu (\lim \sup (A_n)) \geq \lim \sup (\mu(A_n))$.
For the question a) I know that $\mu (\lim \inf (A_n)) = \mu (\bigcup B_n) = \sum \mu(B_n) = \sum \mu (\bigcap_{k \geq n} A_k)$... but I don't see how to prove the inequality. I'm a beginner.
Someone could help me ? Thank you in advance...
• The $B_n$-s are not disjoint (they are actually an increasing sequence of sets), therefore the identity $\mu\left(\bigcup_n B_n\right)=\sum_n \mu(B_n)$ never holds (unless $\mu(B_n)$ is $0$ for all $n$). – user228113 Apr 23 '17 at 16:41
• $\mu(\cup_n B_n) \le \sum_n\mu(B_n)$. – oliverjones Apr 23 '17 at 16:48
• For (a) use (i) $B_n\subset B_{n+1}$ for all $n$; and (ii) $B_n\subset A_k$ for all $k\ge n$ and all $n$. – John Dawkins Apr 23 '17 at 17:04
First let's come up with the following result:
Let $B_i=\bigcap_{k \ge i}A_k \in \mathscr A$, and thus $B_i \uparrow B := \bigcup_{i \ge 1} B_i$. Then $\mu (B) = \lim_{n\rightarrow \infty}\mu(B_n)$.
Proof. Let $D_1=B_1$, $D_2=B_2\setminus B_1$, $D_3=B_3\setminus(B_1 \cup B_2)$, ..., $D_i=B_i\setminus(\cup_{j=1}^{i-1}B_j)$. The $D_i$ are pairwise disjoint, $D_i \subset B_i$ for each $i$, and $\cup_{i =1}^{\infty}B_i=\cup_{i =1}^{\infty}D_i$. Hence: $$\mu(B)=\mu(\cup_{i =1}^{\infty}B_i)=\mu(\cup_{i =1}^{\infty}D_i)=\sum_{i =1}^{\infty}\mu(D_i)$$ $$=\lim_{n\rightarrow \infty} \sum_{i=1}^{n}\mu(D_i)=\lim_{n \rightarrow \infty} \mu(\cup_{i=1}^n D_i)=\lim_{n\rightarrow \infty}\mu(\cup_{i =1}^{n}B_i)=\lim_{n\rightarrow \infty}\mu(B_n)$$
Now notice that $$\liminf_{n \rightarrow \infty} (A_n)=\cup_{n \ge 1}B_n=B$$ And $B_n \subset A_k, \forall k \ge n$, thus $\mu(B_n ) \le \mu( A_k), \forall k \ge n$, therefore $$\lim_{n \rightarrow \infty}\mu(B_n) \le \lim_{n \rightarrow \infty}(\inf_{k \ge n}\mu(A_k))=\liminf_{n \rightarrow \infty}\mu(A_n)$$
Thus $$\mu(\liminf_{n \rightarrow \infty} A_n)=\mu(B)=\lim_{n \rightarrow \infty}\mu(B_n) \le \liminf_{n \rightarrow \infty}\mu(A_n)$$
The other one is similar (the proof is not difficult, but it's just so much typing, esp. with MathJax); here is a sketch.
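For b): the sets $C_n=\bigcup_{k \ge n}A_k$ decrease to $\limsup_{n \rightarrow \infty} A_n$, and $\mu(C_1)\le\mu(\bigcup_{n \ge 1}A_n)<\infty$ by assumption, so continuity from above (proved just like the result above, by passing to complements inside $C_1$) gives $$\mu(\limsup_{n \rightarrow \infty}A_n)=\lim_{n \rightarrow \infty}\mu(C_n)$$ Since $A_k \subset C_n$ for all $k \ge n$, we have $\mu(C_n) \ge \sup_{k \ge n}\mu(A_k)$, and letting $n \rightarrow \infty$ yields $$\mu(\limsup_{n \rightarrow \infty}A_n)=\lim_{n \rightarrow \infty}\mu(C_n) \ge \lim_{n \rightarrow \infty}(\sup_{k \ge n}\mu(A_k))=\limsup_{n \rightarrow \infty}\mu(A_n)$$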
• @MélanieDelaCheminée you're welcome – Yujie Zha Apr 24 '17 at 11:16
• Of course, It's done ! :-) – Mélanie De la Cheminée Apr 24 '17 at 17:40
|
[tex-eplain] indexing appendices
Sat Mar 1 18:32:41 CET 2008
On Tue, Feb 26, 2008 at 04:08:57PM +0100, Helmut Jarausch wrote:
> Page numbering for the main part as well as for each appendix starts at
> page 1. I'd like to create an index which shows the plain page numbers
> for the main part (only) and page numbers prefixed by a letter for the
> corresponding appendix, something like
>
> loop 7, A2
> condition A2, D5
>
> Is this possible with Eplain?
Well, as far as Eplain is concerned, the code below seems to work,
with one restriction that appendixes have to start on a new page
(which is quite reasonable to expect in most cases).
Another thing is getting the index out of the .idx file -- MakeIndex
chokes on page numbers prefixed with C and D, because it takes them
for roman numerals. I've heard xindy (http://xindy.sourceforge.net)
is much more flexible, and can probably be convinced to do the right
thing.
HTH,
Oleg
\input eplain
\newif\ifprefixpagenumbers \prefixpagenumberstrue
\def\part#1 #2 \par{%
\vfil\eject
\gdef\partprefix{#1}%
\pageno=1
{\bf #2}\par
\nobreak\smallskip
}
\let\plainfolio\folio
\def\folio{%
\ifprefixpagenumbers
\partprefix\plainfolio
\else
\plainfolio
\fi
}
% Avoid "undefined macro" errors if \partprefix gets used
% before it gets defined.
\let\partprefix\empty
% Do not prefix part letter inside the footer.
\footline={\hss\tenrm\prefixpagenumbersfalse\folio\hss}
\part {} Main Part
Text of Main Part.\sidx{foo}
\part A Appendix~A
Text of Appendix~A.\sidx{bar}
\part B Appendix~B
Text of Appendix~B.\sidx{term}\sidx{bar}\sidx{foo}
\part C Appendix~C
Text of Appendix~C.\sidx{another term}\sidx{bar}
\part D Appendix~D
Text of Appendix~D.\sidx{yet another term}\sidx{foo}
\part {} Index
|
# Re: [tlaplus] What do formulas inside "[]" operate on?
Hello Pascal,
you are completely right: P ~> Q is shorthand for [](P => <>Q), and your semantic intuitions are correct. For more details on the semantics of TLA, you may want to look at [1]. In fact, the equivalence holds not just for state predicates P and Q, but even if P and Q are arbitrary temporal formulas.
Equivalences such as this one are not typically checked using TLC, although you could create a spec in which P and Q are just Boolean variables that can change values non-deterministically and then verify the two implications separately.
Regards,
Stephan
[1] https://members.loria.fr/SMerz/papers/tla+logic2008.html
> On 27 Jul 2018, at 14:47, pascal.s...@xxxxxxxxx wrote:
>
> Hi, newbie here.
>
> I was playing around with "~>" and wondering if it can be defined in terms of "[]" and "<>".
>
> So I came up with this: I think "P ~> Q" is equivalent to "[](P => <>Q)". Unfortunately, I can't compare the two above Temporal Formulas with <=>, as TLC shouts "cannot handle" at me when I try.
>
> When Lamport mentioned that applying a state formula to a behaviour is the same as applying it to the first state of that behaviour, similarly for action formulas, I came up with the intuition that "[]" and "<>" could therefore probably be seen as quantifiers over every possible suffix of a behaviour.
>
> (I'm also assuming that what Lamport calls a behaviour is just a sequence of states. Please correct me if I'm wrong.)
>
> This now leads me to believe that the above equivalence holds, because of the following argument:
>
> Let P and Q be state formulas, and let B be a behaviour, then []P applied to B (as I understand it) means "(\A B_s \in suffixes(B) : P applied to B_s)". And since P applied to a behaviour is the same as P applied to the first state of that behaviour, then "[](P => <>Q)" means that for every state s in B where P is true, Q must hold true in some state t in the suffix of B, starting at s, which is equivalent to the definition of "~>".
>
> Can anyone confirm or refute my assumptions, such as the assumption that everything inside of "[]" operates on a suffix? Thank you very much for your time and sorry for the long post.
>
> Best regards
> Pascal
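For readers who want a sanity check of the equivalence outside TLC: the following Python sketch (my own illustration, not TLC or any TLA+ tooling) enumerates every behavior of (P, Q) Boolean states up to a small length and checks that the direct reading of "~>" agrees with "[](P => <>Q)", with "[]" and "<>" read as quantifiers over suffixes, exactly as in Pascal's argument. Finite truncated traces only approximate TLA's infinite behaviors, so this is evidence, not a proof.

```python
from itertools import product

# A behavior is a finite tuple of states; each state is a (P, Q) pair of booleans.
# A state predicate applied to a behavior looks only at its first state.
P = lambda b: b[0][0]
Q = lambda b: b[0][1]

def always(f, b):      # []f : f holds of every suffix
    return all(f(b[i:]) for i in range(len(b)))

def eventually(f, b):  # <>f : f holds of some suffix
    return any(f(b[i:]) for i in range(len(b)))

def leads_to(b):       # P ~> Q, read directly: whenever P holds, Q holds then or later
    return all(any(b[j][1] for j in range(i, len(b)))
               for i in range(len(b)) if b[i][0])

def box_implies(b):    # [](P => <>Q)
    return always(lambda s: (not P(s)) or eventually(Q, s), b)

states = [(p, q) for p in (False, True) for q in (False, True)]
for n in range(1, 7):
    for b in product(states, repeat=n):
        assert leads_to(b) == box_implies(b)
print("P ~> Q and [](P => <>Q) agree on all behaviors up to length 6")
```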
|
1. ## sequence
What is the next number in the sequence: 1, 2, 3, 4, 5, 8, 7, 16, 9, ...?
(A) 8 (B) 11 (C) 18 (D) 23 (E) 32
2. Originally Posted by sri340
What is the next number in the sequence: 1, 2, 3, 4, 5, 8, 7, 16, 9, ...?
(A) 8 (B) 11 (C) 18 (D) 23 (E) 32
While I'm sure you could justify any answer, the one here is 32.
Why?
Write the sequence like this...
$\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|}
Term & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
Value & 1 & 2 & 3 & 4 & 5 & 8 & 7 & 16 & 9 & ?
\end{array}$
Now look at ONLY the odd terms. What can you say about them?
$\begin{array}{|c|c|c|c|c|c|}
Term & 1 & 3 & 5 & 7 & 9 \\
Value & 1 & 3 & 5 & 7 & 9
\end{array}$
Then look at the even terms.
$\begin{array}{|c|c|c|c|c|c|}
Term & 2 & 4 & 6 & 8 & 10 \\
Value & 2 & 4 & 8 & 16 & ?
\end{array}$
It seems to be increasing... but by how much each time?
3. how would one find an nth term for this?
4. Originally Posted by Mukilab
how would one find an nth term for this?
If $n$ is odd, nth term = $n$.
If $n$ is even, call it $n=2k$.
Then it's just $2^k$.
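A quick check of the formula (a Python one-liner, added only for illustration):

```python
def term(n):
    # odd positions: the value is n itself; even positions n = 2k: the value is 2**k
    return n if n % 2 else 2 ** (n // 2)

print([term(n) for n in range(1, 11)])
# [1, 2, 3, 4, 5, 8, 7, 16, 9, 32]
```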
|
## A Modified Simplex Partition Algorithm to Test Copositivity
A real symmetric matrix $A$ is copositive if $x^\top Ax\geq 0$ for all $x\geq 0$. As $A$ is copositive if and only if it is copositive on the standard simplex, algorithms to determine copositivity, such as those in Sponsel et al. (J Glob Optim 52:537–551, 2012) and Tanaka and Yoshise (Pac J Optim 11:101–120, 2015) …
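As a rough illustration of the definition (this is plain random search on the simplex, not the simplex-partition or branch-and-bound algorithms these papers study): since copositivity only needs to be checked on the standard simplex, one can at least hunt for counterexamples by sampling it. The Horn matrix below is a classical copositive test case.

```python
import numpy as np

def maybe_copositive(A, samples=100_000, seed=0):
    """Heuristic check: search for x >= 0 on the standard simplex with x^T A x < 0.
    Returns False if a certificate of non-copositivity is found, True otherwise
    (True means 'no counterexample found', not a proof of copositivity)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X = rng.dirichlet(np.ones(n), size=samples)      # random points on the simplex
    vals = np.einsum('ij,jk,ik->i', X, A, X)          # x^T A x for each sample
    return bool(vals.min() >= 0)

# The Horn matrix: a classic copositive matrix that is not the sum of a
# positive semidefinite and an entrywise nonnegative matrix.
H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]])
print(maybe_copositive(H))  # expected: True (no counterexample found)
```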
## On monotonicity and search traversal in copositivity detection algorithms
Matrix copositivity has an important theoretical background. Over the last decades, the use of algorithms to check copositivity has made great progress. Methods are based on spatial branch and bound, transformation to Mixed Integer Programming, implicit enumeration of KKT points or face-based search. Our research question focuses on exploiting the mathematical properties of the …
## Testing Copositivity via Mixed-Integer Linear Programming
We describe a simple method to test if a given matrix is copositive by solving a single mixed-integer linear programming (MILP) problem. This methodology requires no special coding to implement and takes advantage of the computational power of modern MILP solvers. Numerical experiments demonstrate that the method is robust and efficient. Citation: Dept. of Business …
## On the algebraic structure of the copositive cone
We decompose the copositive cone $\mathcal{COP}^n$ into a disjoint union of a finite number of open subsets $S_{\cal E}$ of algebraic sets $Z_{\cal E}$. Each set $S_{\cal E}$ consists of interiors of faces of $\mathcal{COP}^n$. On each irreducible component of $Z_{\cal E}$ these faces generically have the same dimension. Each algebraic set $Z_{\cal E}$ is …
|
# Gammaray Bursts (all you ever wanted to know)
1. May 31, 2006
### marcus
A new review article on GRB has come out
http://arxiv.org/abs/astro-ph/0605208
Gamma-Ray Bursts
P. Meszaros
To appear in Rep. Prog. Phys., 74 pages, 11 figures
"Gamma-ray bursts are the most luminous explosions in the Universe, and their origin and mechanism are the focus of intense research and debate. More than three decades after their discovery, and after pioneering breakthroughs from space and ground experiments, their study is entering a new phase with the recently launched Swift satellite. The interplay between these observations and theoretical models of the prompt gamma ray burst and its afterglow is reviewed."
GRBs are a furnace where it may be possible to test quantum gravity at some extremes where its predictions differ from those of ordinary gravity theory
"Gamma-ray bursts are the most luminous explosions in the Universe." That says it. They are worth knowing about and this review article covers both what has been observed so far and what people's ideas are about how GRBs are caused.
The article will likely be out of date soon.
This is the venue, in case anyone is interested in where it will be published:
http://www.iop.org/EJ/journal/RoPP (Reports on Progress in Physics)
Last edited: May 31, 2006
2. May 31, 2006
### marcus
===sample quote===
...These proved that they were at cosmological distances, comparable to those of the most distant galaxies and quasars known in the Universe. Since even at these extreme distances (up to Gigaparsecs, or around 10^28 cm) they outshine galaxies and quasars by a very large factor, albeit briefly, their energy needs must be far greater. Their electromagnetic energy output during tens of seconds is comparable to that of the Sun over a few 10^10 years, the approximate age of the universe, or to that of our entire Milky Way over a few years. The current interpretation of how this prodigious energy release is produced is that a correspondingly large amount of gravitational energy (roughly a solar rest mass) is released in a very short time (seconds or less) in a very small region (tens of kilometers or so) by a cataclysmic stellar event (the collapse of the core of a massive star, or the subsequent mergers of two remnant compact cores). Most of the energy would escape in the first few seconds as thermal neutrinos, while another substantial fraction may be emitted as gravitational waves. This sudden energy liberation would result in a very high temperature fireball expanding at highly relativistic speeds, which undergoes internal dissipation leading to gamma-rays, and it would later develop into a blast wave as it decelerates against the external medium, producing an afterglow which gets progressively weaker. The resulting electromagnetic energy emitted appears to be of the order of a percent or less of the total energy output, but even this photon output (in gamma-rays) is comparable to the total kinetic energy output leading to optical photons by a supernova over weeks. The remarkable thing about this theoretical scenario is that it successfully predicts many of the observed properties of the bursts. This fireball shock scenario and the blast wave model of the ensuing afterglow have been extensively tested against observations,...
===endquote===
BTW this is part of a discussion assuming the burst is isotropic same in all directions, there is another model he discusses where there is less energy required because some mechanism beams the energy and we just happen to be in the way of a beam----so then the estimates are different
Last edited: May 31, 2006
3. Jun 1, 2006
### Chronos
In my mind GRB's must be pop III events. The distance factor alone is very suspicious. I also think GRB's play a significant role in reionization and metalization of the early universe.
4. Jun 1, 2006
### Garth
I agree, except it is the long GRB's that I think are PopIII events. Short GRBs seem to be the mergers of two BH's.
But of course these two could be connected.
Hypothesis: Early in the universe's history Pop III's go hyper-nova and produce long GRB's leaving behind IMBHs. This leaves a population of IMBHs in the present universe, some of which are gravitationally bound to each other. Eventually through orbital decay these occasionally coalesce and produce short GRBs.
A plausible scenario?
Garth
5. Jun 3, 2006
### Chronos
If GRB's are jets [which I think they must be], the difference between long and short GRB's might be a matter of alignment.
6. Jun 3, 2006
### marcus
the picture I'm getting from your words is that the jet "rakes" across us and it can rake quickly (short burst received) or rake more slowly (long burst received). is that what you have in mind?
=================
BTW I have no reason to form a personal opinion about the different mechanisms underlying long and short but I have read quite a bit favoring what Garth said----so i will repeat it.
Namely LONGIES are produced by collapse of a BIG STAR
while on the other hand
SHORTIES are the result of NEUTRONSTAR MERGER
for some reason there seems to be widespread support for this idea, at least at present
===================
ooops, I have misquoted Garth, what he actually said in post #4 was
**Short GRBs seem to be the mergers of two BH's.**
If I remember what I have heard is shorties are surmised to be mergers of neutronstars, not mergers of BHs. But it could be six of one and halfdozen of the other, or I could be misremembering. In any case there is this widespread idea that shorties come from the merger of SMALL THINGS whatever they might be.
Last edited: Jun 3, 2006
7. Jun 3, 2006
### marcus
since we seem to believe or else to have read slightly different things, I will quote this review article (maybe more to focus discussion than as authority---I doubt there is any real authority on this as yet)
"...A GRB emission which is concentrated in a jet, rather than isotropically, alleviates significantly the energy requirements. There is now extensive observational evidence for such collimated emission from GRBs, provided by breaks in the optical/IR light curves of their afterglows [244, 140, 62]. The inferred total amount of radiant and kinetic energy involved in the explosion is in this case comparable to that of supernovae (except that in GRBs the energy is mostly emitted in a jet in gammarays over tens of seconds, whereas in supernovae it is emitted isotropically in the optical over weeks). While the luminous (electromagnetic) energy output of a GRB is thus “only” of the same order of magnitude as that of supernovae, the explosion is much more concentrated, both in time and in direction, so its specific brightness for an observer aligned with the jet is many orders of magnitude more intense, and appears at much higher characteristic photon energies. Including the collimation correction, the GRB electromagnetic emission is energetically quite compatible with an origin in, say, either compact mergers of neutron star-neutron star (NS-NS) or black hole-neutron star (BHNS) binaries [343, 105, 331, 299], or with a core collapse (hypernova or collapsar) model of a massive stellar progenitor [514, 346, 380, 283, 513], which would be related to but much rarer than core-collapse supernovae..."
the reason people classify into two groups is the observed hard gamma DURATION TIMES are in a roughly BIMODAL DISTRIBUTION (i.e. with roughly speaking two peaks).
one peak is somewheres less than 2 seconds and the other is somewheres greater than 2 seconds
"...The gamma-ray durations range from 0.001 s to about 1000 s, with a roughly bimodal distribution of long bursts greater than 2 s and short bursts of less than 2s [237], and substructure sometimes down to milliseconds..."
the paper shows LIGHTCURVES of various events and the SUBSTRUCTURE made of little SPIKES which sometimes look like "microbursts" a sort of ratatat-tat. I must say that I like seeing the MICROSTRUCTURE pictures----to me it is one of the good things about the article.
it is in this millisecond scale microburst phenomenon that one might be able to look for tiny deviations in the speed of light----predicted by some QG theories----and measure fine enough to possibly exclude deviation thereby falsifying those theories.
Last edited: Jun 3, 2006
8. Jun 7, 2006
### Lonewolf
As far as I know, the best candidate to explain short gamma ray bursts are magnetars. These are neutron stars with surface magnetic fields of the order of $$10^{15} G$$, and interior fields up to $$10^{18} G$$. The shortness essentially comes from the short time the magnetar can retain such a pathological field.
9. Jun 7, 2006
### Garth
Agreed, I could well be wrong!
Is not another difference between Long and Short GRBs is that the short ones are harder, i.e. of higher energy gamma rays?
Garth
10. Jun 7, 2006
### Lonewolf
At least with the short GRBs there is no correlation between length of the bursts and energy of the gamma rays. They come in both soft and hard varieties. I'm not sure about the long GRBs. If I was feeling more energetic, I'd dig out some references. I need to get them out of an old, long buried, folder.
11. Jun 7, 2006
### Garth
There is the Wikipedia article Gamma ray burst.
Links to refereed papers would be good.....
Garth
Last edited: Jun 7, 2006
12. Jun 7, 2006
### Lonewolf
Silly me, there are two main types of short GRB. One is of the type that Marcus alluded to. The other is a soft gamma repeater, which is in essence a magnetar. The soft gamma ray repeater is an example of a short burst (I think, I can't remember the typical duration of a pulse) that emits soft gamma-rays/hard x-rays on a regular basis.
13. Jun 8, 2006
### Chronos
No doubt there are different models for long and short GRB emissions:
Quiescent Burst Evidence for Two Distinct GRB Emission Components
http://arxiv.org/abs/astro-ph/0403360
I also agree the spectral signatures are different. The hard facts, as I see them, are that GRB's are remote events. From that perspective, I perceive them as different manifestations of similar events. Since I'm feeling bold at the moment, I think hard GRB's should be strongly polarized compared to short GRB's.
|
## Hole probabilities and overcrowding estimates for products of complex Gaussian matrices [PDF]
Gernot Akemann, Eugene Strahov
We consider eigenvalues of a product of $n$ non-Hermitian, independent random matrices. Each matrix in this product is of size $N\times N$ with independent standard complex Gaussian variables. The eigenvalues of such a product form a determinantal point process on the complex plane (Akemann and Burda, J. Phys. A: Math. Theor. 45 (2012) 465201), which can be understood as a generalization of the finite Ginibre ensemble. As $N\rightarrow\infty$, a generalized infinite Ginibre ensemble arises. We show that the set of absolute values of the points of this determinantal process has the same distribution as $\{R_1^{(n)},R_2^{(n)},\ldots\}$, where the $R_k^{(n)}$ are independent, and $(R_k^{(n)})^2$ is distributed as the product of $n$ independent Gamma variables $\mathrm{Gamma}(k,1)$. This enables us to find the asymptotics for the hole probabilities, i.e. for the probabilities of the events that there are no points of the process in a disc of radius $r$ with its center at $0$, as $r\rightarrow\infty$. In addition, we solve the relevant overcrowding problem: we derive an asymptotic formula for the probability that there are more than $m$ points of the process in a fixed disk of radius $r$ with its center at $0$, as $m\rightarrow\infty$.
View original: http://arxiv.org/abs/1211.1576
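A quick numerical illustration of this characterization (my own sketch, not from the paper): the squared moduli can be simulated directly as products of independent Gamma variables, and a hole probability estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K, trials = 2, 50, 2000   # n matrices in the product; keep the first K radii

def squared_radii():
    # (R_k^(n))^2 is distributed as a product of n independent Gamma(k, 1) variables
    return np.array([np.prod(rng.gamma(k, 1.0, size=n)) for k in range(1, K + 1)])

r = 0.5
hole = np.mean([np.all(squared_radii() > r**2) for _ in range(trials)])
print(f"estimated hole probability for radius {r}: {hole:.3f}")
```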
|
# What is polarizing power of cation ?
A cation's ability to distort the electron cloud of an anion.
The idea behind the polarizing power of a cation on one hand, and the polarizability of an anion on the other, is that the bond formed when two ions are close to each other will depend on
• the ionic sizes of the two ions;
• the magnitude of their charges;
A set of rules called Fajans' rules was put in place to try and predict what the predominant character of a bond will be when the above factors, i.e. ionic size and charge, vary for two ions.
So, depending on those two factors, you'd get
• Small cation + large anion + high charges -> predominantly covalent character;
• Large cation + small anion + low charges -> predominantly ionic character.
So, if a cation is small enough and has a relatively high charge, it will attract the electron cloud of a large anion, pulling some of the anion's electrons towards itself.
Moreover, the positive charge of the cation will repel the positively charged nucleus of the anion. This results in a distortion of the anion's electron cloud.
In this case, the larger the anion, the more easily its electrons will be attracted by the cation's positive charge. That happens because larger anions have their outermost electrons located further away from the nucleus.
This, in addition to the fact that these outermost electrons will be pulled less tightly by their own nucleus because of the screening from core electrons, will allow the cation to attract them more easily.
So, a cation's polarizing power refers to its ability to attract the outermost electrons from a nearby anion.
It depends on its size and charge - smaller cations that have higher positive charges will have better polarizing abilities because the positive charge is distributed on a relatively small area.
But remember, the anion counts, too. Even the "best" polarizing cations will have trouble distorting fluoride's electron cloud, because fluoride is a very, very small anion.
|
# Largest $\sigma$-algebra on which an outer measure is countably additive
If $m$ is an outer measure on a set $X$, a subset $E$ of $X$ is called $m$-measurable iff $$m(A) = m(A \cap E) + m(A \cap E^c)$$ for all subsets $A$ of $X$.
The collection $M$ of all $m$-measurable subsets of $X$ forms a $\sigma$-algebra and $m$ is a complete measure when restricted to $M$.
Is $M$ the largest $\sigma$-algebra on $X$ on which $m$ is a measure (i.e., on which $m$ is countably additive)? If not, what is?
Is $M$ the largest $\sigma$-algebra on $X$ on which $m$ is a complete measure? If not, what is?
I am especially interested in the case when $X$ is $\mathbb{R}$ or $\mathbb{R}^n$ and $m$ is the Lebesgue outer measure. In this case $M$ is the Lebesgue $\sigma$-algebra.
ADDED:
Julián Aguirre (thanks!) has shown in his response below that the answer to the first question is yes when $X$ is $\mathbb{R}^n$ and $m$ is the Lebesgue outer measure. Hence the answer to the second question in this situation is also yes.
## 1 Answer
The answer to the first question (and the one in the title) is yes when $M$ is the $\sigma$-algebra of Lebesgue measurable subsets of $\mathbb{R}^n$. Suppose $N$ is another $\sigma$-algebra such that $M\subsetneq N$. Then there exists $E\in N$ non measurable. Since $E=\cup_{k=1}^\infty (E\cap\{x\in\mathbb{R}^n:|x|\le k\})$, for at least one $k$ the set $E\cap\{x\in\mathbb{R}^n:|x|\le k\}$ is not measurable. Thus, we may assume without loss of generality that $E$ is bounded, and in particular, $m(E)<+\infty$.
Since $E$ is not measurable, there exists $\epsilon>0$ such that if $G$ is an open set and $E\subset G$, then $m(G\setminus E)\ge \epsilon$. (This follows from an equivalent definition of measurable set; cf. Proposition 15 on p.63 of Royden's Real Analysis, 3 ed.)
Now, since $m(E)=\inf\{m(G):G\text{ open, }E\subset G\}$, there exists an open set $O$ such that $E\subset O$ and $m(E)\ge m(O)-\epsilon/2$. Then $O=(O\setminus E)\cup E$ but $$m(O\setminus E)+m(E)\ge\epsilon+ m(O)-\epsilon/2>m(O),$$ and hence $m$ is not additive on $N$.
For the general case it is easy to see that if $N$ is another $\sigma$-algebra such that $M\subsetneq N$, then there exists $A\subset X$ such that $m$ is not additive on the $\sigma$-algebra generated by $N\cup\{A\}$, but I have not been able to show that it is not additive on $N$.
|
Tay R.
# Finding the surface area of a cylinder.
If the surface area of a cylinder with a radius of 3 cm is 54(pie) cm^2 , what is the cylinder's volume?
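Assuming the given $54\pi\ \text{cm}^2$ is the total surface area of a closed cylinder (both ends included), a worked solution:
$$2\pi r^2+2\pi rh=54\pi \;\Rightarrow\; 2\pi(3)^2+2\pi(3)h=54\pi \;\Rightarrow\; 18\pi+6\pi h=54\pi \;\Rightarrow\; h=6\ \text{cm},$$
$$V=\pi r^2 h=\pi(3)^2(6)=54\pi\ \text{cm}^3\approx 169.6\ \text{cm}^3.$$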
|
# Forecasting using only values at lag $l$ and beyond
I'd like to build a forecasting model in which I am allowed to use only values lagged by at least $l$.
That means $$y_{t}$$ may be predicted using only the values $$y_{t-l}$$, $$y_{t-l-1}$$, etc.:
$$y_{t}=b_{t-l}y_{t-l} + b_{t-l-1}y_{t-l-1} + {...} + b_0y_0 + e_t$$
Could you suggest which model should work for such a problem?
• I suggested a transfer function where one could include lags of the pseudo x variable that was generated . This allows you to form the model that you are after where lags BEFORE l are precluded. If so then please upvote and accept or explain to me why my answer does not work for you using ANY software of your choice. Feb 11 '20 at 15:51
what you propose is an ARIMA model where the past of y is used to predict the next y but expressly ignoring lags 1 and 2 (for l=3).
Consider the original series here having 60 values. Now create a supporting causal series (pseudo x) using a 3 period delay. You now have y(t) as a function of y(t-3) with 57 pairs of y and x. If you disable ARIMA structure and enable possible lags of the pseudo x series, you will restrict the model to not employ lags 1 and 2 of the originally observed series.
Even St. Thomas needed proof ... here goes ..
I took the original series and computed the acf ... notice the acf at lag 3 is -.155
I then took the bivariate data set where the x variable was lag 3 of the output series and estimated an OLS regression model, specifying no ARIMA structure, no latent deterministic structure, no test for constancy of parameters, no test for constancy of error variance and no stepdown, AND obtained a regression coefficient of -.161, which is mighty darn close to -.155.
The program simulated via Monte Carlo bootstrapping a family (1000) of forecasts for x AND then simulated 100 values for the causal model y=f(x) and then combined them to obtain prediction limits for y taking into account the uncertainty in the future x.
Q.E.D.
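For what it's worth, the lag restriction is also easy to impose directly in standard software. A minimal sketch with statsmodels (assuming a recent version, whose AutoReg accepts an explicit list of lags; the series here is a synthetic stand-in for real data, and $l=3$):

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
y = rng.normal(size=200).cumsum()                # toy random-walk series

l = 3                                            # lags 1 .. l-1 must not be used
res = AutoReg(y, lags=[l, l + 1, l + 2]).fit()   # model uses only y(t-3), y(t-4), y(t-5)
print(res.params)

forecast = res.predict(start=len(y), end=len(y) + 9)  # 10 steps out of sample
print(forecast)
```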
• Not quite: it is a constrained model in which the immediately preceding $l-1$ values of $y$ are not used for the prediction.
– whuber
Feb 6 '20 at 23:32
• that is still an arima model where the # of auto-regressive coefficients is NOT equal to the # of lags. For example the model y(t)= .2 *y(t-1) +.3*y(t-4)+ .2*y(t-9) is an ARIMA model. Generalized arima modelling is not restricted to (p,d,q)(P,D,Q) like auto.arima Feb 7 '20 at 0:45
• not convinced. we cannot use preceding $l-1$ values for predicting $y_t$ as @whuber mentioned. For example, if $l=3$, to forecast $y_4$, we can only use $y_1$. so the model can be $y_4 = b_1y_1+e_t$ Feb 7 '20 at 6:11
• How does creating a "supporting causal series" work for providing correct estimates of variances, prediction intervals, and so on? I'm not saying it doesn't; I'm only saying you have a burden of demonstrating this approach is correct and by simply asserting it's a solution you haven't met that burden.
– whuber
Feb 7 '20 at 16:53
• This answer seems to be focused on the specific contortions required to get AUTOBOX to fit a specific model which, aside from the fact that it doesn't appear to be the one that was asked about, is completely off topic here. Feb 8 '20 at 1:56
|
# How do you evaluate sec(tan^-1(8)) without a calculator?
$\sqrt{65}$
${\tan}^{-1}(8)$ means the angle whose tangent is 8.
Draw a right triangle and mark in $\theta$ with the side opposite $\theta$ of length 8 and the adjacent side of length 1, so that $\tan \theta = 8$.
The hypotenuse then has length $\sqrt{8^2+1^2}=\sqrt{65}$ (Pythagoras).
The sec of $\theta$ is hypotenuse divided by adjacent, so $\sec\left({\tan}^{-1}(8)\right)=\frac{\sqrt{65}}{1}=\sqrt{65}$.
|
Final Exam Part 3¶
👨🏫 Overview: This is Part 3 of 3 Parts of your Final Exam/Project.
• Part 1. Questions on the Course Material. (40%)
• Part 2. Applying the Course material. (40%)
• Part 3. Discussing your exam and the Course Material. (20%)
You must submit your Jupyter notebooks for Part 1 and Part 2 at least 48 hours prior to your appointment for Part 3. You will be given your grade on Part 1 and Part 2 before the oral exam, so that you know what your status is. For late submission of Part 1 and/or Part 2, I will deduct 2 points per hour.
📖 Rules for "Open Book" Exam: Like all other portions of this exam, this part of the exam is open notes and open library. It is "open internet search" but you (obviously!) can't post questions on an internet discussion board or homework/problem/exam help site. You are not allowed to communicate with your classmates or any other human being (except me) about these questions or your responses, and this includes human beings (singular or plural, known or anonymous) online.
💯 Grading: When we discuss Parts 1 and 2 of the exam, your grade will be "plus", "minus" or "neutral." You can gain, or lose, up to 20 points based on your performance on the oral portion of the exam.
• A "plus" response indicates that you know the answer to the question, and can expound upon it and related topics with mastery.
• A "minus" response indicates that while you may have answered a question correctly, you "got the right answer for the wrong reason" or your understanding of the topic being tested is only superficial. A "minus" response is also indicative of cases where you simply can't answer a question. If for whatever reason you do not schedule an oral exam, that will be taken as a "minus" response for all questions. Don't do this: it will be -40% on your final exam grade.
• A "neutral" response is the default, and is anything except mastery ("plus") or naïvté ("minus"). I anticipate that most responses will be "neutral".
In addition to discussing your answers in Parts 1 and 2, we will discuss the following questions. Each question group has 6 questions in it. I will randomly select one question from each group; we will then discuss that question. Each question will be graded "plus" (5 points) "minus" (0 points) or "neutral" (3 points). You can score up to 20 points on this portion of the exam, and the "default" score would be about 12 points.
📜 Instructions: Be prepared to discuss the following questions/topics. I will randomly select 1 question from each group.
🔁 Describe the following effects/experiments/equations, and explain their importance to the origins of quantum mechanics.¶
1. Black-Body Radiation and the Ultraviolet Catastrophe.
2. The Davisson-Germer experiment and Electron Scattering.
3. Compton Scattering.
4. The Photoelectric Effect
5. The Rydberg Formula
6. The Stern-Gerlach Experiment
🔁 Explain the following topics.¶
1. What are the key postulates of quantum mechanics?
2. What is the Heisenberg Uncertainty Principle? What does it say about the fundamentals of quantum mechanics? What is the commutator between two operators, and how does it relate to the Heisenberg Uncertainty Principle?
3. Explain what the Born-Oppenheimer Approximation is and why it is important.
4. Explain what the united-atom and separated-atom limit are for a diatomic molecule. How do these limits relate to molecular-orbital theory and the linear-combination-of-atomic-orbitals technique.
5. What are the quantum numbers that describe the state of an atom (neglecting relativity)? What do these quantum numbers have to do with term symbols? What do these quantum numbers have to do with simultaneous observables?
6. What is the Stern Gerlach experiment? Can you describe what is happening in the following diagram?
🔁 What are the eigenfunctions and eigenvalues for the following Hamiltonians.¶
You should be able to explain key characteristics of the systems and the general strategy for (possibly approximate) solving the associated time-independent Schrödinger equation.
\begin{align} \hat{H}_1 &= -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} -\frac{\hbar^2}{2m} \frac{d^2}{dy^2} + V_{a_x}(x) + V_{a_y}(y) \\ \hat{H}_2(\lambda) &= -\frac{\hbar^2}{2m} \frac{d^2}{dx_1^2} -\frac{\hbar^2}{2m} \frac{d^2}{dx_2^2} + V_{a}(x_1) + V_{a}(x_2) - \lambda |x_1 - x_2|\\ \hat{H}_3(\lambda) &= -\tfrac{1}{2} \nabla_1^2 -\tfrac{1}{2} \nabla_2^2- \tfrac{Z}{r_1} - \tfrac{Z}{r_2} + \tfrac{\lambda}{|\mathbf{r}_1 - \mathbf{r}_2|}\\ \hat{H}_4 &= -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} -\frac{\hbar^2}{2m} \frac{d^2}{dy^2} -\frac{\hbar^2}{2m} \frac{d^2}{dz^2} + V_{a}\left(\sqrt{x^2 + y^2 +z^2}\right) \\ \hat{H}_5(\mathbf{R}_B) &= -\tfrac{1}{2} \nabla_{\mathbf{r}}^2 - \tfrac{Z_A}{r} - \tfrac{Z_B}{|\mathbf{r} - \mathbf{R}_B|} \\ \hat{H}_6 &= -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} -\frac{\hbar^2}{2m} \frac{d^2}{dy^2} -\frac{\hbar^2}{2m} \frac{d^2}{dz^2} + V_{a_z}(z) + V_{a_r}(\sqrt{x^2+y^2}) \end{align}
where $$V_a(x) = \begin{cases} +\infty & x\leq 0\\ 0 & 0\lt x \lt a\\ +\infty & a \leq x \end{cases}$$
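For example, $\hat{H}_1$ is a separable two-dimensional particle in a box: writing $\psi(x,y)=X(x)Y(y)$ splits the time-independent Schrödinger equation into two one-dimensional infinite-well problems, giving
$$\psi_{n_x,n_y}(x,y)=\frac{2}{\sqrt{a_x a_y}}\sin\left(\frac{n_x\pi x}{a_x}\right)\sin\left(\frac{n_y\pi y}{a_y}\right), \qquad E_{n_x,n_y}=\frac{\pi^2\hbar^2}{2m}\left(\frac{n_x^2}{a_x^2}+\frac{n_y^2}{a_y^2}\right), \qquad n_x,n_y=1,2,\dots$$
The other Hamiltonians follow the same pattern: identify the separable or exactly solvable pieces, then treat the remaining couplings (e.g. the $\lambda$ terms) by perturbation theory or the variational method.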
🤔 Discuss the following in your own words.¶
1. What is a Slater determinant? Why does one use a Slater determinant for a many-fermion system? When is the exact ground-state wavefunction for a many-fermion system a Slater determinant?
2. What is the variational principle? How is it used to approximate the ground-state energy and wavefunction of realistic, not-exactly-solvable, Hamiltonians? Can you give an example of how it is used?
3. What is the Hartree-Fock method? What is the intuition behind it, and when is it used?
4. What is Perturbation Theory? When is perturbation theory most appropriate? Least appropriate?
5. What is the secular equation? How does it relate to the technique of expanding a wavefunction in a basis set?
6. What is electron correlation? How can one approximate the effects of electron correlation?
|
# Inorganic Chemistry/Chemical Bonding/MO Diagram
A molecular orbital diagram or MO diagram for short is a qualitative descriptive tool explaining chemical bonding in molecules in terms of molecular orbital theory in general and the linear combination of atomic orbitals molecular orbital method (LCAO method) in particular [1] [2] [3]. This tool is very well suited for simple diatomic molecules such as dihydrogen, dioxygen and carbon monoxide but becomes more complex when discussing polynuclear molecules such as methane. It explains why some molecules exist and not others, how strong bonds are, and what electronic transitions take place.
## Dihydrogen MO diagram
The smallest molecule, hydrogen gas exists as dihydrogen (H-H) with a single covalent bond between two hydrogen atoms. As each hydrogen atom has a single 1s atomic orbital for its electron, the bond forms by overlap of these two atomic orbitals. In figure 1 the two atomic orbitals are depicted on the left and on the right. The vertical axis always represents the orbital energies. The atomic orbital energy correlates with electronegativity as a more electronegative atom holds an electron more tightly thus lowering its energy. MO treatment is only valid when the atomic orbitals have comparable energy; when they differ greatly the mode of bonding becomes ionic. Each orbital is singly occupied with the up and down arrows representing an electron.
The two AO's can overlap in two ways depending on their phase relationship. The phase of an orbital is a direct consequence of the oscillating, wave-like properties of electrons. In graphical representations, the orbital phase is depicted either by a plus or minus sign (confusing because there is no relationship to electrical charge) or simply by shading. The sign of the phase itself does not have physical meaning except when mixing orbitals to form molecular orbitals.
Then two same-sign orbitals have a constructive overlap forming a molecular orbital with the bulk of electron density located between the two nuclei. This MO is called the bonding orbital and its energy is lower than that of the original atomic orbitals. The orbital is symmetrical with respect to rotation around the molecular axis (no change) and therefore also called a sigma bond (σ-bond).
The two hydrogen atoms can also interact with each other with their 1s orbitals out-of-phase, which leads to destructive cancellation and no electron density between the two nuclei, depicted by the so-called nodal plane (the vertical dashed line). In this anti-bonding MO, with energy much higher than that of the original AO's, the electrons are located in lobes pointing away from the central axis. Like the bonding orbital, this orbital is symmetrical around the molecular axis, but it is differentiated from it by an asterisk: σ*.
The next step in constructing an MO diagram is filling the molecular orbitals with electrons. With the case of dihydrogen at hand two electrons have to be distributed over a bonding orbital and an anti-bonding orbital. Three general rules apply:
• The Aufbau principle states that orbitals are filled starting with the lowest energy
• The Pauli exclusion principle states that the maximum number of electrons occupying an orbital is two having opposite spins
• Hund's rule states that when there are several MO's with equal energy the electrons fill one MO at a time.
Application of these rules for dihydrogen results in having both electrons in the bonding MO. This MO is called the Highest Occupied Molecular Orbital or HOMO which makes the other orbital the Lowest Unoccupied Molecular Orbital or LUMO. The electrons in the bond MO are called bonding electrons and any electrons in the antibonding orbital would be called antibonding electrons. The reduction in energy of these electrons is the driving force for chemical bond formation. For bonding to exist the bond order defined as:
${\displaystyle \ {\mbox{Bond Order}}={\frac {({\mbox{Number of electrons in bonding MOs}})-({\mbox{number of electrons in antibonding MOs}})}{2}}}$
must have a value larger than 0. The bond order for dihydrogen is (2-0)/2 = 1.
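This bookkeeping is simple enough to script; a minimal Python sketch (the electron counts below are the usual MO fillings, supplied only as illustration):

```python
def bond_order(bonding_electrons, antibonding_electrons):
    # Bond order = (bonding - antibonding) / 2, as defined above
    return (bonding_electrons - antibonding_electrons) / 2

print(bond_order(2, 0))  # H2 : 1.0 -> single bond
print(bond_order(2, 2))  # He2: 0.0 -> no stable molecule
print(bond_order(8, 4))  # O2 (valence MOs): 2.0 -> double bond
```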
This MO diagram also helps explain how a bond breaks. When applying energy to dihydrogen, a molecular electronic transition takes place when one electron in the bonding MO is promoted to the antibonding MO. The result is that there is no longer a net gain in energy.
## Dihelium MO diagram
Dihelium (He-He) is a hypothetical molecule and MO theory helps to explain why. The MO diagram for dihelium (2 electrons in each 1s AO) looks very similar to that of dihydrogen but instead of 2 electrons it is now required to place 4 electrons in the newly formed molecular orbitals.
The only way to accomplish this is by occupying the antibonding orbital with two electrons as well which reduces the bond order ((2-2)/2) to zero and cancels the net energy stabilization.
Another molecule that is precluded based on this principle is diberyllium (beryllium with electron configuration 1s²2s²). On the other hand, by removing one electron from dihelium, the stable gas-phase species He₂⁺ ion is formed with bond order 1/2.
## Dilithium MO diagram
Next up in the periodic table is lithium and MO theory correctly predicts that dilithium is a stable molecule with bond order 1. The 1s MO's are completely filled and do not participate in bonding.
Dilithium is a gas-phase molecule with a much lower bond strength than dihydrogen because the 2s electrons are further removed from the nucleus.
## Diboron MO diagram
The MO diagram for diboron (B-B, electron configuration of boron: 1s²2s²2p¹) requires the introduction of an atomic orbital overlap model for p orbitals. The three dumbbell-shaped p-orbitals have equal energy and are oriented mutually perpendicular (or orthogonal). The p-orbitals oriented in the x-direction (px) can overlap end-on forming a bonding (symmetrical) σ orbital and an antibonding σ* molecular orbital. In contrast to the σ1s MOs, the σ2p has some electron density at either side of the nuclei that has a phase (blue) opposite that of the electron density between the nuclei (white). The σ*2p also has some electron density between the nuclei. (Note that in this figure, the phases [colors] of the σ*2p orbital are slightly incorrect: the phases of the right-hand lobes of electron density should be switched so that the colors go blue, white, blue, white.)
The other two p-orbitals py and pz can overlap side-on. The resulting bonding orbital has its electron density in the shape of two sausages above and below the plane of the molecule. The orbital is not symmetrical around the molecular axis and is therefore a pi orbital. The antibonding pi orbital (also asymmetrical) has four lobes pointing away from the nuclei. Both py and pz orbitals form a pair of pi orbitals equal in energy (degenerate) and can be higher or lower than that of the sigma orbital.
In diboron the 1s and 2s electrons do not participate in bonding but the single electrons in the 2p orbitals occupy the 2πpy and the 2πpz MO's resulting in bond order 1. Because the electrons have equal energy (they are degenerate) diboron is a diradical and since the spins are parallel the compound is paramagnetic.
Like diboron, dicarbon (C-C, electron configuration 1s²2s²2p²) is a reactive gas-phase molecule. Two additional electrons are placed in the 2πp MO's increasing the bond order to 2. The bond order for dinitrogen is three because now two electrons are added in the 2σ MO as well.
## Dioxygen MO diagram
MO treatment of dioxygen is different from that of the previous diatomic molecules because the pσ MO is now lower in energy than the 2π orbitals. This is attributed to interaction between the 2s MO and the 2pz MO [4]. Distributing 8 electrons over 6 molecular orbitals leaves the final two electrons as a degenerate pair in the 2pπ* antibonding orbitals resulting in a bond order of two. Just as diboron, this type of dioxygen called triplet oxygen is a paramagnetic diradical. When both HOMO electrons pair up the other oxygen type is called singlet oxygen.
The bond order decreases and the bond length increases in the order O₂⁺ (112.2 pm), O₂ (121 pm), O₂⁻ (128 pm) and O₂²⁻ (149 pm) [4].
In difluorine two additional electrons occupy the 2pπ* with a bond order of 1. In dineon (Ne₂, as with dihelium) the number of bonding electrons equals the number of antibonding electrons and this compound does not exist.
## References
1. Organic Chemistry. Jonathan Clayden, Nick Greeves, Stuart Warren, and Peter Wothers 2001 ISBN 0-19-850346-6
2. Organic Chemistry, Third Edition Marye Anne Fox James K. Whitesell 2003 ISBN 978-0-7637-3586-9
3. Organic Chemistry 3rd Ed. 2001 Paula Yurkanis Bruice ISBN 0-13-017858-6
4. a b Modern Inorganic Chemistry William L. Jolly 1985 ISBN 0-07-032760-2
|
# All Questions
### PBKDF2 for login key, scrypt for encryption key
I need to derive two keys from a single password client side. One for in-browser login, the other for encryption. Is the following secure? ...
### LUKS multiple key slots - what's the intuition?
LUKS volumes have the ability to allow multiple independently usable passwords, as explained here: [https://code.google.com/p/cryptsetup/wiki/FrequentlyAskedQuestions] The intuition behind basic ...
### Crypto hash function vs encryption algorithms
Why is a cryptographic hash function generally faster to execute in software than conventional encryption algorithms such as DES?
### IV's for different AES encryption modes
I wanted to know if my understanding is correct! I have been reading up on AES encryption and modes (specifically CTR and CBC) and it took me some time to understand the concept of the IV. If the IV is ...
### Large file validation on an embedded system through hash and encryption
As a preface, I have to say that I am a noob in this area. Having said that, I will ask the question. I have a situation where I need to validate and protect against tampering a handful of large ...
### Implementation of garbled circuits using RSA
I was just reading these notes on garbled (Yao) circuits and I'm just stuck trying to figure out how an implementation of Table 1 would work using RSA encryption. In RSA the public key is (n,e). So for ...
### Serpent 256bit key wrong round keys
Assume that we have this 256bit key: 15FC0D48 D7F8199C BE399183 4D96F327 10000000 00000000 00000000 On first 0-7 keys we can't apply formula wi=(wi-8 xor wi-5 xor wi-3 xor wi-1 xor phi xor i) <<< ...
### Generation of N bit prime numbers -> what is the actual range?
Short version: When generating a prime number of N bits, should I draw random numbers from the range $[0, 2^n]$, or $[2^{(n-1)}, 2^n]$? Context: I'm trying to implement a toy-version of RSA as a ...
Groth-Sahai framework enables us to commit to QE, MSME, PP equations. Now, is the equation below committable in GS framework? A bilinear map $e: G_1 \times G_2 \rightarrow G_T$. $g_1 \in G_1$ and ...
### VKO GOST R 34.10-2001 (described in RFC4357), Key Agreement Algorithm. Looking for its implementation/detailed description/examples
VKO GOST R 34.10-2001 was described in RFC4357, Page 7, but the description is very poor. Here it is: This algorithm creates a key encryption key (KEK) using 64 bit UKM, the sender's private key, ...
### Help to understand the chameleon hash function?
Recently, I'm reading Non-Interactive Key Exchange, in the section 4.2 (Towards a factoring-based scheme in the standard model, Page 10), the authors use $$t\gets ChamH_{hk}(Z||ID;r);\\ Y\gets ...$$
### How can I map arbitrary group elements to unique integers without using Hash functions?
Let's say, I have a group $G$ of large prime order $p$. A set $S$ consists of $n$ random elements chosen from $G$. Without using a collision resistant hash function $H$, how can I map elements of $G$ ...
### SHA512withRSA - Looking for details about the Signature Algorithm
I am trying to find information about the Signature Algorithm SHA512withRSA and have been unsuccessful so far. In the current state, the signature is too long, so I would like to check the code for ...
### Generating S boxes that satisfy Coppersmith's criteria?
I'd like to generate all possible 6-bit to 4-bit S-Boxes that satisfy the criteria for S-Box design given by Coppersmith, but I have a few doubts: How many such S-Boxes are possible? Is there any ...
### Bit padding instead of PKCS#5 padding
Padding oracle attacks are a huge nuisance when using CBC mode encryption without authentication. Wouldn't all those padding oracle attacks be avoided if we'd just use bit padding instead? Or does ...
### Is multiple encryption using NaCL and TLS better?
In general, the idea of multiple encryption sounds interesting - for creating stronger cyphers (especially against implementation bugs), one could always get the configuration details wrong and mess ...
### Prevent MITM attack while encrypting data by using ElGamal ECC?
I am using ElGamal ECC to encrypt my plain text data. I want to ensure that my data is safe from a Man-In-The-Middle attack. What methodology can I adopt to achieve this goal? How can we prevent a ...
### How can finding a collision help an attacker with tampering messages with HMAC
As stated in the HMAC RFC (RFC 2104): The strongest attack known against HMAC is based on the frequency of collisions for the hash function H. How can a collision benefit an attacker? I would ...
### PBKDF2-SHA256+SHA256 for password storage
I recently came across an interesting paper detailing the use of hardened session cookies. Each cookie includes a preimage of the password hash, and the preimage is hashed once more and compared to ...
### Encryption method with master key and user keys
I am working on a concept for a game that in the end will allow data sharing via plain text, but I would like to give the user the option of encrypting the data so that other players cannot modify it, ...
### Construct block cipher from a smaller one with mixing function
I read about the AEZ encryption scheme as presented at the CAESAR competition. To me it seems like a construction of an arbitrary length block cipher from a smaller one. The key component is the ...
### Cryptanalysis of Nonlinear Table Lookup
I am trying to derive a symmetric key based on a master key, combined with a simple string. Based on my limited knowledge, it seems that something like PBKDF2 would do that for me in a well-defined ...
### Are there any cryptographic flaws in my webhook signing process?
Out of band my services have exchanged secret keys. These secret keys are then used to prevent two things of the webhooks (HTTP calls triggered by Service A to Service B when an event occurs in ...
### run length testing [closed]
Looking for guidance/references with/about calculations: I am designing a widget that creates N-bit binary sequences. My customer's target spec for the sequence's minimum entropy is X bits/bit. I plan ...
### Composing and Inverting Symmetric Encryption Keys
Using a symmetric encryption algorithm $E$ on a message $M$, users usually send $E(M)$ to the recipient and then the recipient will compute $E^{-1}(E(M))=M$ to retrieve the message. For what symmetric ...
I'm trying to analyze a login protocol based on symmetric key cryptography. I'm just starting out, so this may very likely be a very bad idea, but nonetheless I'd like to hear your thoughts on it. ...
|
# Does the residue always exist for an isolated singularity?
Consider a function $f$ holomorphic everywhere in a domain $\Omega$ except at an isolated singularity $z_0$. If the function $f$ were, say, $\frac{\sin(z)}{z^3}$, the Laurent series expansion at $z_0 = 0$ would have no $z^{-1}$ term, so would the residue of this function be $0$, or would it not exist?
If it is $0$, does this mean that the residue always exists for an isolated singularity? Furthermore, since the Taylor series expansion is just the Laurent series with no negative power terms, meaning the coefficient of $z^{-1}$ is zero, can we say that the residue always exists for a point in the domain, regardless of whether it is a singularity or not?
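For example, expanding the series gives
$$\frac{\sin z}{z^3}=\frac{1}{z^3}\left(z-\frac{z^3}{3!}+\frac{z^5}{5!}-\cdots\right)=\frac{1}{z^2}-\frac{1}{6}+\frac{z^2}{120}-\cdots,$$
so there really is no $z^{-1}$ term.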
• The residue is just a (complex) number. It can be of course equal to 0. – user Apr 11 '18 at 9:11
• You can take the residue to be 0 when there is no singularity but this has no theoretical significance. Residue is useful only when you have an isolated singularity. – Kavi Rama Murthy Apr 11 '18 at 9:13
|
# 6.5 Newton’s universal law of gravitation
## Tides
Ocean tides are one very observable result of the Moon’s gravity acting on Earth. [link] is a simplified drawing of the Moon’s position relative to the tides. Because water easily flows on Earth’s surface, a high tide is created on the side of Earth nearest to the Moon, where the Moon’s gravitational pull is strongest. Why is there also a high tide on the opposite side of Earth? The answer is that Earth is pulled toward the Moon more than the water on the far side, because Earth is closer to the Moon. So the water on the side of Earth closest to the Moon is pulled away from Earth, and Earth is pulled away from water on the far side. As Earth rotates, the tidal bulge (an effect of the tidal forces between an orbiting natural satellite and the primary planet that it orbits) keeps its orientation with the Moon. Thus there are two tides per day (the actual tidal period is about 12 hours and 25.2 minutes, because the Moon moves in its orbit each day as well).
The Sun also affects tides, although it has about half the effect of the Moon. However, the largest tides, called spring tides, occur when Earth, the Moon, and the Sun are aligned. The smallest tides, called neap tides, occur when the Sun is at a $90^\circ$ angle to the Earth-Moon alignment.
Tides are not unique to Earth but occur in many astronomical systems. The most extreme tides occur where the gravitational force is the strongest and varies most rapidly, such as near black holes (see [link] ). A few likely candidates for black holes have been observed in our galaxy. These have masses greater than the Sun but have diameters only a few kilometers across. The tidal forces near them are so great that they can actually tear matter from a companion star.
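The two-bulge argument can be made quantitative with Newton's law (a standard estimate, stated here for illustration): the tide-raising acceleration is the difference between the Moon's pull at Earth's surface and at its center, which for an Earth of radius $R$ at distance $d$ from the Moon is
$$a_{\text{tidal}}=\frac{GM_{\text{Moon}}}{(d-R)^2}-\frac{GM_{\text{Moon}}}{d^2}\approx\frac{2GM_{\text{Moon}}R}{d^3}.$$
Because this falls off as $1/d^3$ rather than $1/d^2$, the nearby Moon raises roughly twice the tide of the far more massive Sun, consistent with the factor quoted above.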
|
# Contact +1 surgeries along Legendrian two-component links
## Abstract.
In this paper, we prove that the contact Ozsváth-Szabó invariant of a contact 3-manifold vanishes if it can be obtained from the standard contact 3-sphere by contact $(+1)$-surgery along a Legendrian two-component link $L=L_1\cup L_2$ with the linking number of $L_1$ and $L_2$ being nonzero and $L_1$ satisfying $\nu^+(L_1)=\nu^+(\overline{L_1})=0$. As a corollary, the contact Ozsváth-Szabó invariant of the contact 3-manifold obtained from the tight contact $\#^k(S^1\times S^2)$ by contact $(+\frac{1}{n})$-surgery along a homologically essential Legendrian knot vanishes for any positive integer $n$. We also prove that the contact Ozsváth-Szabó invariant of a contact 3-manifold vanishes if it can be obtained from the standard contact $S^3$ by contact $(+1)$-surgery along a Legendrian Whitehead link. In addition, we give a sufficient condition for the contact 3-manifold obtained from the standard contact 3-sphere by contact $(+1)$-surgery along a Legendrian two-component link being overtwisted.
## 1. Introduction
A contact structure on a smooth oriented 3-manifold $Y$ is a smooth tangent 2-plane field $\xi$ such that any smooth 1-form $\alpha$ locally defining $\xi$ as $\xi=\ker\alpha$ satisfies the condition $\alpha\wedge d\alpha\neq 0$. A contact structure $\xi$ is coorientable if and only if there is a global 1-form $\alpha$ with $\xi=\ker\alpha$. Throughout this paper, we will assume our 3-manifolds are oriented and connected and our contact structures are cooriented. A contact structure $\xi$ on $Y$ is called overtwisted if one can find an embedded disc $D$ in $Y$ such that the tangent plane field of $D$ along its boundary coincides with $\xi$; otherwise, it is called tight. Any closed oriented 3-manifold admits an overtwisted contact structure (cf. [7]). It is much harder to find tight contact structures on a closed oriented 3-manifold. The following question is still open: Which closed oriented 3-manifolds admit tight contact structures?

One way of obtaining new contact manifolds from existing ones is through contact surgery. Suppose $L$ is a Legendrian knot in a contact 3-manifold $(Y,\xi)$, i.e., $L$ is everywhere tangent to the given contact structure $\xi$ on $Y$. Contact surgery is a version of Dehn surgery that is adapted to the contact category. Roughly speaking, we delete a tubular neighborhood of $L$, then reglue it, and obtain a contact structure on the surgered manifold by extending $\xi$ from the complement of the tubular neighborhood of $L$ to a tight contact structure on the reglued solid torus (see [3] for details). In [3], the first author and Geiges proved that every closed contact 3-manifold can be obtained by contact $(\pm 1)$-surgery along a Legendrian link in $(S^3,\xi_{std})$, where $\xi_{std}$ denotes the standard contact structure on $S^3$.
Heegaard Floer theory associates an abelian group $\widehat{HF}(Y,\mathfrak{t})$ to a closed, oriented Spin$^c$ 3-manifold $(Y,\mathfrak{t})$, and a homomorphism

$$F_{W,\mathfrak{s}}\colon \widehat{HF}(Y_1,\mathfrak{t}_1)\to \widehat{HF}(Y_2,\mathfrak{t}_2)$$

to a Spin$^c$ cobordism $(W,\mathfrak{s})$ between two Spin$^c$ 3-manifolds $(Y_1,\mathfrak{t}_1)$ and $(Y_2,\mathfrak{t}_2)$. Write $\widehat{HF}(Y)$ for the direct sum over all Spin$^c$ structures on $Y$ and $F_W$ for the sum over all Spin$^c$ structures on $W$. Throughout this paper, we work with Heegaard Floer homology with coefficients in $\mathbb{Z}/2\mathbb{Z}$. In [23], Ozsváth and Szabó introduced an invariant $c(Y,\xi)\in\widehat{HF}(-Y)$ for any closed contact 3-manifold $(Y,\xi)$. We call it the contact Ozsváth-Szabó invariant, or simply the contact invariant, of $(Y,\xi)$. It is shown that $c(Y,\xi)=0$ if $\xi$ is overtwisted [23], and $c(Y,\xi)\neq 0$ if $\xi$ is strongly symplectically fillable [8]. If the contact manifold $(Y_L,\xi_L)$ is obtained from $(Y,\xi)$ by contact $(+1)$-surgery along a Legendrian knot $L$, then we have

$$(1.1)\qquad F_{-W}(c(Y,\xi))=c(Y_L,\xi_L),$$

where $-W$ stands for the cobordism induced by the surgery with reversed orientation. This functorial property of the contact invariant can be proved by an adaption of [23, Theorem 4.2] (cf. [15, Theorem 2.3]).
It is natural to ask whether the contact invariant of a contact 3-manifold obtained by contact $(+1)$-surgery along a Legendrian link is trivial or not. Here, contact $(+1)$-surgery along a Legendrian link means contact $(+1)$-surgery along each component of the Legendrian link. Most known results concern contact surgeries along Legendrian knots. In [14], Lisca and Stipsicz showed that contact $(+\frac{1}{n})$-surgeries along certain Legendrian knots in $(S^3,\xi_{std})$ yield contact 3-manifolds with nonvanishing contact invariants for any positive integer $n$. In [9], Golla considered contact 3-manifolds obtained from $(S^3,\xi_{std})$ by contact $(+n)$-surgeries along Legendrian knots, where $n$ is any positive integer. He gave a necessary and sufficient condition for the contact invariant of such a contact 3-manifold to be nonvanishing. In [18], Mark and Tosun extended Golla's result to contact $(+r)$-surgeries, where $r$ is rational.

In this paper, we study contact $(+1)$-surgeries along Legendrian two-component links in $(S^3,\xi_{std})$. Our main result is:
###### Theorem 1.1.
Suppose $L=L_1\cup L_2$ is a Legendrian two-component link in the standard contact 3-sphere $(S^3,\xi_{std})$ whose two components have nonzero linking number. Assume $L_1$ satisfies $\nu^+(L_1)=\nu^+(\overline{L_1})=0$, where $\overline{L_1}$ denotes the mirror of $L_1$. Then contact $(+1)$-surgery on $(S^3,\xi_{std})$ along $L$ yields a contact 3-manifold with vanishing contact invariant.

The main tool for proving this theorem is a link surgery formula for the Heegaard Floer homology of integral surgeries on links developed by Manolescu and Ozsváth [17]. Here, $\nu^+$ is a numerical invariant defined by Hom and the third author in [12] based on work of Rasmussen [28]. It is shown in [10, Proposition 3.11] that a knot $K$ satisfies the condition $\nu^+(K)=\nu^+(\overline{K})=0$ if and only if we have a filtered chain homotopy equivalence

$$(1.2)\qquad CFK^\infty(K)\simeq CFK^\infty(U)\oplus A,$$

where $U$ denotes the unknot and $A$ is acyclic, i.e., $H_*(A)=0$. Applying (1.2) enables us to treat $K$ effectively like the unknot in the proof of Theorem 1.1.

Below, we list some interesting families of knots that satisfy the condition

$$\nu^+(K)=\nu^+(\overline{K})=0$$

of Theorem 1.1.
###### Example 1.2.
1. The most basic examples are the slice knots.
We are particularly interested in Legendrian slice knots with Thurston-Bennequin invariant $-1$, as contact $(+1)$-surgeries along these knots result in contact 3-manifolds with nonvanishing contact invariants [9]. Nontrivial knot types of smoothly slice knots with at most 10 crossings that have Legendrian representatives with Thurston-Bennequin invariant $-1$ are given in [2]. Moreover, recall that $tb(K_1\# K_2)=tb(K_1)+tb(K_2)+1$, where $K_1\# K_2$ denotes the Legendrian connected sum of the Legendrian knots $K_1$ and $K_2$ [6]. By performing Legendrian connected sums, we obtain infinitely many Legendrian slice knots which have Thurston-Bennequin invariant equal to $-1$.

In Figures 1, 2 and 3 below, we give several Legendrian two-component links in $(S^3,\xi_{std})$ that include a Legendrian unknot and a Legendrian slice knot, respectively. Note that one obtains a nonvanishing contact invariant after performing contact $(+1)$-surgery along each knot component of the depicted links. On the other hand, Theorem 1.1 implies that contact $(+1)$-surgeries along these links result in contact 3-manifolds with vanishing contact invariants. It will be shown in Examples 1.8 and 1.10 that contact $(+1)$-surgeries along the Legendrian two-component links in Figures 1 and 2 yield overtwisted contact 3-manifolds. It is still unknown whether the contact 3-manifold obtained by contact $(+1)$-surgery along the Legendrian link in Figure 3 is overtwisted or tight.

2. More generally, all rationally slice knots satisfy $\nu^+(K)=\nu^+(\overline{K})=0$ [13, Theorem 1.10].

Recall that a knot $K$ is rationally slice if there exists a properly embedded disk $D$ in a rational homology 4-ball $X$ with $\partial X=S^3$ such that $\partial D=K$. Examples of rationally slice knots include strongly amphicheiral knots and Miyazaki knots, i.e., fibered, amphicheiral knots with irreducible Alexander polynomial [13]. In particular, the figure-eight knot is rationally slice but not slice.
###### Example 1.3.
Let $L=L_1\cup L_2$ be a Legendrian link in $(S^3,\xi_{std})$. Suppose $L_1$ is a Legendrian unknot with $tb(L_1)=-1$, and $L_2$ is a meridional curve of $L_1$. Then, by Theorem 1.1, contact $(+1)$-surgery along $L$ yields a contact structure $\xi$ on $S^3$ with vanishing contact invariant. Hence by the classification of tight contact structures on $S^3$ [4] and $c(S^3,\xi_{std})\neq 0$, the contact 3-manifold $(S^3,\xi)$ is overtwisted.

In the special case of Theorem 1.1 where $L_1$ is a Legendrian unknot with Thurston-Bennequin invariant $-1$, contact $(+1)$-surgery on $(S^3,\xi_{std})$ along $L_1$ yields the unique (up to isotopy) tight contact structure on $S^1\times S^2$. Hence, in this case, we may interpret the theorem as a result about contact $(+1)$-surgery along a Legendrian knot in $S^1\times S^2$. More generally, we have the following corollary.
###### Corollary 1.4.
Suppose $L$ is a Legendrian knot in $\#^k(S^1\times S^2)$, the contact connected sum of $k$ copies of the tight $S^1\times S^2$. If $L$ is not null-homologous, then contact $(+\frac{1}{n})$-surgery on $\#^k(S^1\times S^2)$ along $L$ yields a contact 3-manifold with vanishing contact invariant for any positive integer $n$.

We also consider contact 3-manifolds obtained by contact $(+1)$-surgeries along Legendrian two-component links in $(S^3,\xi_{std})$ whose two components have linking number zero. There are examples of such contact 3-manifolds whose contact invariants do not vanish.
###### Example 1.5.
1. Contact $(+1)$-surgery on $(S^3,\xi_{std})$ along the Legendrian two-component unlink whose two components both have Thurston-Bennequin invariant $-1$ yields the contact 3-manifold $\#^2(S^1\times S^2)$ with nonvanishing contact invariant.

2. [20, Exercise 12.2.8(c)] provides examples of Legendrian two-component links in $(S^3,\xi_{std})$, obtained by removing one of the two Legendrian unknot components in [20, Figure 12.4], along which contact $(+1)$-surgeries yield contact 3-manifolds with nonvanishing contact invariants. One of the resulting contact surgery diagrams can be transformed to Figure 4 by Legendrian Reidemeister moves. One should be careful that there is a wrong crossing in [20, Figure 12.4].

Consider contact 3-manifolds obtained from $(S^3,\xi_{std})$ by contact $(+1)$-surgeries along Legendrian Whitehead links. To the best of our knowledge, the contact invariants and tightness of such manifolds have not been explicitly given in the literature.
###### Proposition 1.6.
Contact $(+1)$-surgery on $(S^3,\xi_{std})$ along a Legendrian Whitehead link yields a contact 3-manifold with vanishing contact invariant.

In light of the above vanishing results in Theorem 1.1 and Proposition 1.6, one may wonder whether the contact manifolds studied there are tight or not. Although we have not yet been fully successful in answering this question, we have found the following sufficient condition for contact $(+1)$-surgeries to yield overtwisted contact 3-manifolds. Note that this theorem does not involve the linking number of the two components of the Legendrian link along which we perform contact $(+1)$-surgery. It is inspired by the work of Baker and Onaran [1, Proposition 4.1.10].
###### Theorem 1.7.
If there exists a front projection of a Legendrian two-component link $L$ in the standard contact 3-sphere that contains one of the configurations exhibited in Figure 5, then contact $(+1)$-surgery on $(S^3,\xi_{std})$ along $L$ yields an overtwisted contact 3-manifold.
###### Example 1.8.
Contact $(+1)$-surgery along the Legendrian link in Figure 1 yields an overtwisted contact 3-manifold. This is because the dashed box in Figure 6 contains the configuration in Figure 5(c).
We can transform the four configurations in Figure 5 to that in Figure 7 through Legendrian Reidemeister moves. So we have the following corollary.
###### Corollary 1.9.
If there exists a front projection of a Legendrian two-component link $L$ in the standard contact 3-sphere that contains one of the configurations exhibited in Figure 7, then contact $(+1)$-surgery on $(S^3,\xi_{std})$ along $L$ yields an overtwisted contact 3-manifold.
###### Example 1.10.
In Figure 8, the two marked arcs are parts of the two link components, respectively. Contact $(+1)$-surgery along the Legendrian link in the left of Figure 8 yields an overtwisted contact 3-manifold. This is because we can transform the Legendrian link in the left of Figure 8 to that in the right of Figure 8, which contains the configuration in Figure 7(a), through Legendrian Reidemeister moves. It follows that contact $(+1)$-surgery along the Legendrian link in Figure 2 yields an overtwisted contact 3-manifold.
The remainder of this paper is organized as follows. In Section 2, we review basic properties of the contact invariant. We also reformulate the statement of Golla concerning the conditions under which contact $(+1)$-surgery along a Legendrian knot yields a contact 3-manifold with nonvanishing contact invariant. In Section 3, we go through the construction of the link surgery formula of Manolescu and Ozsváth in the special case of two-component links. We elaborate on the $E_1$ page of an associated spectral sequence and identify the relevant maps in the differential $\partial_1$ with the well-known vertical and horizontal maps in the knot surgery formula of Ozsváth and Szabó [25]. In Section 4, we analyze the $E_1$ page and give a proof of Theorem 1.1 based on diagram chasing. This idea is partly inspired by the work of Hom and Lidman [11], and also constitutes the most novel part of our paper. Due to some technical issues, the above argument does not apply to the linking number 0 case, so in Section 5, we use a different piece of machinery in Heegaard Floer homology, namely a grading argument, to prove Proposition 1.6. Finally, in Section 6, we prove Theorem 1.7 and Corollary 1.9.
###### Acknowledgements.
The authors would like to thank Ciprian Manolescu, Tye Lidman, Jen Hom, Faramarz Vafaee, Eugene Gorsky and John Etnyre for helpful discussions and suggestions. Part of this work was done while the second author was visiting The Chinese University of Hong Kong, and he would like to thank it for its generous hospitality and support. The first author was partially supported by Grant No. 11371033 of the National Natural Science Foundation of China. The second author was partially supported by Grant No. 11471212 of the National Natural Science Foundation of China. The third author was partially supported by grants from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. 14301215).
## 2. Preliminaries on contact invariants
Let $L=L_1\cup L_2$ be an oriented Legendrian two-component link in $(S^3,\xi_{std})$. Suppose $l$ is the linking number of $L_1$ and $L_2$. The result of contact $(+1)$-surgery along $L$ is denoted by $(S^3_{\Lambda}(L),\xi_L)$, where

$$\Lambda=\begin{pmatrix} tb(L_1)+1 & l \\ l & tb(L_2)+1 \end{pmatrix}$$

is the topological surgery framing matrix.

Let $W$ be the cobordism from $S^3$ to $S^3_{\Lambda}(L)$ induced by the surgery, and $-W$ be $W$ with reversed orientation. One has the contact invariants

$$c(S^3,\xi_{std})\in\widehat{HF}(-S^3)=\widehat{HF}(S^3)$$

and

$$c(S^3_{\Lambda}(L),\xi_L)\in\widehat{HF}(-S^3_{\Lambda}(L))=\widehat{HF}(S^3_{-\Lambda}(\overline{L})),$$

where $\overline{L}$ is the mirror of $L$. We have a map

$$F_{-W}\colon\widehat{HF}(S^3)\to\widehat{HF}(S^3_{-\Lambda}(\overline{L})).$$

From the functoriality (1.1) and the composition law [24, Theorem 3.4], we see that

$$(2.3)\qquad F_{-W}(c(S^3,\xi_{std}))=c(S^3_{\Lambda}(L),\xi_L).$$
In [9], Golla investigated the contact invariant of a contact manifold given by contact surgery along a Legendrian knot in $(S^3,\xi_{std})$. In particular, by [9, Theorem 1.1], the contact 3-manifold obtained by contact $(+1)$-surgery along the Legendrian knot $L_i$ ($i=1,2$) in $(S^3,\xi_{std})$ has nonvanishing contact invariant if and only if $L_i$ satisfies the following three conditions:

$$(2.4)\qquad tb(L_i)=2\tau(L_i)-1,$$

$$(2.5)\qquad rot(L_i)=0,$$

$$(2.6)\qquad \tau(L_i)=\nu(L_i).$$

Hence, if either $L_1$ or $L_2$ does not satisfy one of these three conditions, then it follows readily from the functoriality (1.1) that the contact invariant of the surgered manifold must vanish as well.
###### Remark 2.1.
There exists a two-component link such that the knot type of each component has a Legendrian representative satisfying the above three conditions, but the link type has no Legendrian representative with both components satisfying the three conditions simultaneously [5, Section 5.6].

## 3. The link surgery formula

In this section, we recall the link surgery formula for two-component links developed in [17]. The link surgery formula is a generalization of the knot surgery formula given by Ozsváth and Szabó [25][26]. The idea is to compute the Heegaard Floer homology of a 3-manifold obtained by surgery along a knot in terms of a mapping cone built from the knot Floer complex, and more generally surgery along a link in terms of a hyperbox of link surgery complexes. In addition, the cobordism map is realized as the map induced by an inclusion of complexes in the system of hyperboxes [17, Theorem 14.3].
We go over the construction for two-component links. Suppose $L$ is an oriented link with two components $K_1$ and $K_2$, and the linking number of $K_1$ and $K_2$ is $l$. Suppose the topological surgery framing matrix is

$$\Lambda=\begin{pmatrix} p_1 & l \\ l & p_2 \end{pmatrix}.$$

We will see the computation of $\widehat{HF}(S^3_{\Lambda}(L))$.
###### Definition 3.1.
Let $\mathbb{H}(L)_i$ denote the affine lattice over $\mathbb{Z}$ associated to the component $K_i$. The affine lattice $\mathbb{H}(L)$ over $\mathbb{Z}^2$ is defined by

$$\mathbb{H}(L)=\mathbb{H}(L)_1\oplus\mathbb{H}(L)_2.$$

The elements of $\mathbb{H}(L)$ correspond to Spin$^c$ structures on $S^3$ relative to $L$.

Similarly, let $\mathbb{H}(K_i)$ be defined for a single component. The elements of $\mathbb{H}(K_i)$ correspond to Spin$^c$ structures on $S^3$ relative to $K_i$ for $i=1,2$. Furthermore, let $+K_i$ and $-K_i$ represent the component $K_i$ with the given and opposite orientation, respectively. If $M$ is $\epsilon_1 K_1$ or $\epsilon_2 K_2$, where $\epsilon_j$ is $+$ or $-$ for $j=1,2$, we can define

$$\psi^{M}\colon\mathbb{H}(L)\to\mathbb{H}(L-M),\qquad s=(s_1,s_2)\mapsto s_j-\frac{lk(K_j,M)}{2},$$

where $K_j$ is the other component of $L$. One caveat is that $\psi^{M}(s)$ is independent of the component of $s$ corresponding to $M$.

Now we consider Spin$^c$ structures on $S^3_{\Lambda}(L)$. Let $H(L,\Lambda)$ be the (possibly degenerate) sublattice of $\mathbb{H}(L)$ generated by the two columns of $\Lambda$. We denote the quotient of $\mathbb{H}(L)$ by $H(L,\Lambda)$ by $\mathbb{H}(L)/H(L,\Lambda)$. For any $s\in\mathbb{H}(L)$, there is a standard way of associating an element $[s]$ of this quotient. This gives an identification of the set $\mathbb{H}(L)/H(L,\Lambda)$ with the set of Spin$^c$ structures on $S^3_{\Lambda}(L)$.
Fix $u\in\mathbb{H}(L)/H(L,\Lambda)$. Manolescu and Ozsváth constructed a hyperbox of complexes $(\hat{C},\hat{D},u)$ which is a twisted gluing of the following four squares of chain complexes.

Here, the four corners of each square are generalized Floer complexes that can be determined from a given Heegaard diagram of the link $L$. There are also maps $\Phi$'s between these generalized Floer complexes which count holomorphic polygons in the Heegaard diagram. We refer the reader to [17] for details. In Figure 9, we exhibit a more concrete representation of the hyperbox, for which a square is drawn at each lattice point $(s_1,s_2)$, and the generalized complexes sit at the lower left, the upper left, the lower right and the upper right corner of the square, respectively. Note that while the short edge maps stay at the original lattice point, the long edge maps map to the complexes at the shifted lattice points.
It is often convenient to study the hyperbox by introducing a filtration and considering the associated spectral sequence. Here, we define the filtration level of a generator to be the number of remaining link components of the corresponding generalized Floer complex. Thus, the complex at the lower left corner of each square has filtration level 2; the complex at the lower right or the upper left corner of each square has filtration level 1; and the complex at the upper right corner of each square has filtration level 0. Since the largest difference in the filtration levels is 2, the $k$-th differential in the spectral sequence, $\partial_k$, must vanish if $k>2$. By [17], the associated spectral sequence has

$$E_0=(\hat{C},\partial_0),$$

$$E_1=(H_*(\hat{C},\partial_0),\partial_1),$$

and

$$\widehat{HF}(S^3_{\Lambda}(L),u)=E_\infty=H_*(E_2)=H_*(H_*(H_*(\hat{C},\partial_0),\partial_1),\partial_2).$$
Let us explain the $E_1$ page of the surgery chain complex in greater detail. Figure 10 exhibits a typical example of an $E_1$ page associated to a 2-dimensional hyperbox. Observe that $\partial_0$ is the internal differential of each generalized Floer complex. Hence the homology at the lower left corner of the square at the lattice point $(s_1,s_2)$ turns out to be isomorphic to $\widehat{HF}$ of a large surgery along $L$ in a certain Spin$^c$ structure. Similarly, the homology at the upper left corner and at the lower right corner of each square are isomorphic to $\widehat{HF}$ of large surgeries along $K_1$ and $K_2$ in certain Spin$^c$ structures, respectively; and the homology at the upper right corner of each square is isomorphic to $\widehat{HF}(S^3)$.

Next, we consider the differential $\partial_1$. Note that $\partial_1$ consists of a collection of short edge maps that stay at the original lattice point, and another collection of long edge maps that shift the position. The most relevant maps for our purposes are the ones that map into the homology at the upper right corner of each square. Under the above identification with the Heegaard Floer homology of large integer surgeries, we can identify the short edge map initiated from the upper left corner as

$$\Phi^{+K_1}_{\psi^{+K_2}(s)}\colon\widehat{HF}(S^3_{p}(K_1),s_1-\tfrac{l}{2})\to\widehat{HF}(S^3),$$

the long edge map initiated from the upper left corner as

$$\Phi^{-K_1}_{\psi^{+K_2}(s)}\colon\widehat{HF}(S^3_{p}(K_1),s_1-\tfrac{l}{2})\to\widehat{HF}(S^3),$$

the short edge map initiated from the lower right corner as

$$\Phi^{+K_2}_{\psi^{+K_1}(s)}\colon\widehat{HF}(S^3_{p}(K_2),s_2-\tfrac{l}{2})\to\widehat{HF}(S^3),$$

and the long edge map initiated from the lower right corner as

$$\Phi^{-K_2}_{\psi^{+K_1}(s)}\colon\widehat{HF}(S^3_{p}(K_2),s_2-\tfrac{l}{2})\to\widehat{HF}(S^3),$$

where $p$ is a sufficiently large integer. In fact, these edge maps are induced homomorphisms on homology and are equivalent to the vertical and horizontal maps $v$ and $h$ defined in [25], respectively. The same holds for the maps initiated from the lower left corner.
## 4. Vanishing contact invariants
###### Proof of Theorem 1.1.
It follows from $\nu^+(L_1)=\nu^+(\overline{L_1})=0$ that $\tau(L_1)=0$. We claim that it suffices to consider the case where the Thurston-Bennequin invariant $tb(L_1)=-1$. Otherwise, $tb(L_1)$ must be strictly less than $-1$ by the inequality $tb(L_1)\leq 2\tau(L_1)-1$ ([27, Theorem 1]), thus violating condition (2.4). This then implies the triviality of the contact invariant by our discussion near the end of Section 2.

We first treat the case where $L_1$ is a Legendrian unknot. We try to determine the contact invariant $c(S^3_{\Lambda}(L),\xi_L)$. Note that $c(S^3,\xi_{std})$ is the unique generator of $\widehat{HF}(S^3)$. Hence by (2.3), $c(S^3_{\Lambda}(L),\xi_L)$ is the image of the generator under the cobordism map $F_{-W}$, so its vanishing is equivalent to $F_{-W}$ being a zero map.
We resort to [17, Theorem 14.3] to understand this map, which identifies $F_{-W}$ with the map induced by the inclusion

$$\prod_{s\in\mathbb{H}(\overline{L}),\,[s]=u}\hat{\mathfrak{A}}(\mathcal{H}^{\emptyset},\psi^{\epsilon_1\overline{L}_1\cup\epsilon_2\overline{L}_2}(s))\hookrightarrow(\hat{C},\hat{D},u).$$

In order to prove that $F_{-W}$ vanishes, it suffices to show that for each $s=(s_1,s_2)$, the generator $c_{s_1,s_2}$ of $\widehat{HF}(S^3)$ at the upper right corner of the square at the lattice point $(s_1,s_2)$ is a boundary in the $E_1$ page of the spectral sequence (or equivalently, trivial in the $E_2$ page).

For the subsequent argument, we will still refer to Figure 10 for a schematic picture of the $E_1$ page of the spectral sequence, although we should point out that at present the surgery is performed along the mirror link $\overline{L}$, whose components correspond to $K_1$ and $K_2$ in Figure 10, respectively, and the topological surgery framing matrix is

$$-\Lambda=\begin{pmatrix} -(tb(L_1)+1) & -l \\ -l & 0 \end{pmatrix}.$$
Since $L_2$ is an unknot, the homology group that was identified with the Heegaard Floer homology of a large surgery at the lower right corner of the square at the lattice point $(s_1,s_2)$ is 1-dimensional. We denote the generator of the homology group by $b^2_{s_1,s_2}$. Clearly, we have:

(1) If $s_2>-\frac{l}{2}$, then the short edge map is an isomorphism, and the long edge map is the trivial map. So

$$\partial_1 b^2_{s_1,s_2}=c_{s_1,s_2}.$$

(2) If $s_2<-\frac{l}{2}$, then the short edge map is the trivial map, and the long edge map is an isomorphism. So

$$\partial_1 b^2_{s_1+l,s_2}=c_{s_1,s_2}.$$

(3) If $s_2=-\frac{l}{2}$, then both edge maps are isomorphisms. So

$$\partial_1 b^2_{s_1,-\frac{l}{2}}=c_{s_1,-\frac{l}{2}}+c_{s_1-l,-\frac{l}{2}}.$$

On the other hand, we understand the general properties of the maps well when $s_1$ is sufficiently large. In that case, the homology group at the upper left corner is also 1-dimensional, and the short edge map is an isomorphism while the long edge map is the trivial map. Thus, if we denote the generator of the homology group at the upper left corner of the square at the lattice point $(s_1,-\frac{l}{2})$ by $b^1_{s_1,-\frac{l}{2}}$, then

$$(4.7)\qquad \partial_1 b^1_{s_1,-\frac{l}{2}}=c_{s_1,-\frac{l}{2}},\quad\text{when } s_1\gg 0.$$
Let us put them together. When $s_2\neq-\frac{l}{2}$, we can immediately see from Claims (1) and (2) that $c_{s_1,s_2}$ lies in the image of $\partial_1$. When $s_2=-\frac{l}{2}$, we can use Claim (3) and (4.7) to find an explicit element whose boundary is $c_{s_1,-\frac{l}{2}}$, under the assumption that the linking number $l$ is nonzero. More precisely, one can check that

$$\partial_1\big(b^2_{s_1+l,-\frac{l}{2}}+b^2_{s_1+2l,-\frac{l}{2}}+\cdots+b^2_{s_1+nl,-\frac{l}{2}}+b^1_{s_1+nl,-\frac{l}{2}}\big)=c_{s_1,-\frac{l}{2}}$$

for $n$ large enough and $l>0$; and

$$\partial_1\big(b^2_{s_1,-\frac{l}{2}}+b^2_{s_1-l,-\frac{l}{2}}+\cdots+b^2_{s_1-nl,-\frac{l}{2}}+b^1_{s_1-(n+1)l,-\frac{l}{2}}\big)=c_{s_1,-\frac{l}{2}}$$

for $n$ large enough and $l<0$. In either case, this proves that $c_{s_1,s_2}$ lies in the image of $\partial_1$ for each $s=(s_1,s_2)$, thus implying the theorem for the special case where $L_1$ is a Legendrian unknot.

More generally, since $L_1$ satisfies $\nu^+(L_1)=\nu^+(\overline{L_1})=0$, we apply (1.2) and conclude that $CFK^\infty(L_1)$ is filtered chain homotopy equivalent to $CFK^\infty(U)\oplus A$ for some acyclic complex $A$. Then, the above argument for the unknot case extends nearly verbatim to the general case, except that the relevant homology group may not necessarily be 1-dimensional. Nevertheless, we can use the filtered chain homotopy equivalence to define the generators from the summand $CFK^\infty(U)$. The rest of the proof carries over for the general case. ∎
As a corollary, we show that contact $(+\frac{1}{n})$-surgery on $\#^k(S^1\times S^2)$ along a homologically essential Legendrian knot yields a contact 3-manifold with vanishing contact invariant for any positive integer $n$, as claimed in Corollary 1.4.
###### Proof of Corollary 1.4.
The contact 3-manifold $\#^k(S^1\times S^2)$ can be obtained by contact $(+1)$-surgery on $(S^3,\xi_{std})$ along a Legendrian $k$-component unlink. There exists a Legendrian knot $L'$ in $(S^3,\xi_{std})$ which becomes the Legendrian knot $L$ in $\#^k(S^1\times S^2)$ after the contact $(+1)$-surgery along the unlink. To find such an $L'$, it suffices to perform Legendrian surgery on $\#^k(S^1\times S^2)$ along a Legendrian $k$-component link, each component of which lies in a summand and is disjoint from $L$, so that the result is $(S^3,\xi_{std})$. Then the image of $L$ in $(S^3,\xi_{std})$ is the desired $L'$.

Note that contact $(+\frac{1}{n})$-surgery on $\#^k(S^1\times S^2)$ along $L$ is equivalent to contact $(+1)$-surgery along $n$ Legendrian push-offs of $L$ for any positive integer $n$. Therefore, contact $(+\frac{1}{n})$-surgery on $\#^k(S^1\times S^2)$ along $L$ is equivalent to contact $(+1)$-surgery on $(S^3,\xi_{std})$ along a Legendrian multi-component link, which is the union of the aforementioned Legendrian $k$-component unlink and $n$ Legendrian push-offs of the Legendrian knot $L'$. See Figure 11 for an example.
|
# Solving an ODE with power series
## Summary:
In the power series method, after putting the summations into the DE and simplifying, I found three powers of x. How do I solve it?
## Main Question or Discussion Point
I have an ODE:
(x-1)y'' + (3x-1)y' + y = 0
I need to find the solution about x=0. Since this is an ordinary point, I can use the regular power series solution.
Let y = $\sum_{r=0}^\infty a_r x^r$
after finding the derivatives and putting in the ODE, I have:
$\sum_{r=0}^\infty a_r (r)(r-1) x^{r-1} - \sum_{r=0}^\infty a_r (r)(r-1) x^{r-2} + 3 \sum_{r=0}^\infty a_r (r) x^{r} - \sum_{r=0}^\infty a_r (r) x^{r-1} + \sum_{r=0}^\infty a_r x^{r}=0$
Now I have three powers of x, so if I transform the index I'll still end up with a recurrence relation relating three coefficients.
I don't know how to proceed in that case.
FactChecker
When you put the power series into the ODE, what happened to the " = 0" part? It is necessary to use that to solve for the coefficients.
FactChecker
For each power of x, the total on the left must equal that power of x on the right, which is zero. Work on that and see how far it gets you.
HallsofIvy
Actually you have many powers of x hidden in each sum. Also, though all your sums start at r = 0, in the sum with $x^{r-2}$ the coefficient contains the factors r and r - 1, so its first two terms are 0. It would be better to write it as $\sum_{r= 2}^\infty a_r r(r-1)x^{r- 2}$. In order to get $x^i$, let i = r - 2 so that r = i + 2 and the sum becomes $\sum_{i= 0}^\infty a_{i+2}(i+2)(i+1)x^{i}$. Do the same with each of the other sums so that you have $x^i$ in each sum and can combine coefficients of like powers.
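(Not part of the original thread: if you want to sanity-check whatever recurrence you derive, a small SymPy sketch can solve for the first few coefficients symbolically; the variable names here are my own.)

```python
import sympy as sp

x = sp.symbols('x')
N = 8                                    # number of series coefficients to carry
a = sp.symbols('a0:8')                   # a0 .. a7; a0 and a1 remain free
y = sum(a[r] * x**r for r in range(N))   # truncated power series ansatz

# Substitute the ansatz into (x-1)y'' + (3x-1)y' + y and expand in powers of x
expr = sp.expand((x - 1)*sp.diff(y, x, 2) + (3*x - 1)*sp.diff(y, x) + y)

# Each coefficient of x^i must vanish; solve for a2..a7 in terms of a0, a1
eqs = [sp.Eq(expr.coeff(x, i), 0) for i in range(N - 2)]
print(sp.solve(eqs, a[2:]))
```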
|
## Essential University Physics: Volume 1 (4th Edition)
We know that the wave will have the shape of a sine/cosine function. The speed comes out to $4.7 \ m/s$.
We use the equation for the speed of the wave: $v = \frac{\lambda}{T}=\frac{14m}{3s}\approx 4.7 \ m/s$
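(A one-line check of the arithmetic, my own addition:)

```python
# Wave speed v = wavelength / period
print(14.0 / 3.0)  # ~4.67, i.e. about 4.7 m/s
```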
|
# Algebrogram
Solve the algebrogram (each letter stands for one digit) for the sum of three numbers:
BEK
KEMR
SOMR
________
HERCI
Result
a = 249
b = 9467
c = 5067
d = 14783
#### Solution:
$a = 249$
$b = 9467$
$c = 5067$
$d = a+b+c = 249+9467+5067 = 14783$
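A brute-force Python sketch for this kind of alphametic (my own, not part of the original solution; it assumes the ten letters map bijectively to the digits 0-9 with no leading zeros):

```python
from itertools import permutations

# Solve BEK + KEMR + SOMR = HERCI by exhaustive search
letters = "BEKMRSOHCI"  # 10 distinct letters -> a permutation of the digits 0..9

def value(word, d):
    return int("".join(str(d[ch]) for ch in word))

for perm in permutations(range(10)):
    d = dict(zip(letters, perm))
    if any(d[w[0]] == 0 for w in ("BEK", "KEMR", "SOMR", "HERCI")):
        continue  # disallow leading zeros
    if value("BEK", d) + value("KEMR", d) + value("SOMR", d) == value("HERCI", d):
        print(value("BEK", d), value("KEMR", d), value("SOMR", d), value("HERCI", d))
```

Running it prints 249 9467 5067 14783 among the assignments consistent with the puzzle.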
Leave us a comment about this math problem and its solution (e.g. if anything is still unclear):
Be the first to comment!
#### The following mathematical knowledge is needed to solve this word problem:
Do you have a linear equation or a system of equations and are looking for its solution? Or do you have a quadratic equation?
## Next similar math problems:
1. Three unknowns
Solve the system of linear equations with three unknowns: A + B + C = 14 B - A - C = 4 2A - B + C = 0
2. Equations
Solve following system of equations: 6(x+7)+4(y-5)=12 2(x+y)-3(-2x+4y)=-44
3. Equations - simple
Solve system of linear equations: x-2y=6 3x+2y=4
4. Linsys2
Solve two equations with two unknowns: 400x+120y=147.2 350x+200y=144
5. Three figures - numbers
The sum of three numbers, if each is 10% larger than the previous one, is 662. Determine the figures.
6. Schools
Three schools are attended by 678 pupils. The first school is attended by 21 more students, and the third by 108 fewer students, than the second school. How many students attend each school?
7. Summerjob
Three students participated in a summer job. Altogether they earned 1780. Peter got a third less than John, and Paul got 100 more than Peter. How much did each of them get?
8. Football match 4
In a football match, Italy lost to Germany by 3 goals. In total, 5 goals were scored in the match. Determine the number of goals scored by Italy and by Germany.
9. 13 tickets
A short and long sightseeing tour is possible at the castle. Ticket for a short sightseeing circuit costs CZK 60, for a long touring circuit costs CZK 100. So far, 13 tickets have been sold for 1140 CZK. How much did you pay for tickets for a short tour?
10. Dining room
The dining room has 11 tables (six and eight seats). In total there are 78 seats in the dining room. How many are six-and eight-seat tables?
11. Book
Alena read a book at a speed of 15 pages per day. If she read twice as fast, she would finish the book four days earlier. How many pages does the book have?
12. Football season
Dalibor and Adam together scored 97 goals in the season. Adam scored 9 goals more than Dalibor. How many goals scored each?
13. The dormitory
The dormitory accommodates 150 pupils in 42 rooms, some of which are triple and some are quadruple. Determine how many rooms are triple and how many quadruples.
14. Mushrooms
Eva and Jane collected 114 mushrooms together. Eve found twice as much as Jane. How many mushrooms found each of them?
15. Rabbits 3
Viju has 40 chickens and rabbits. If in all there are 90 legs. How many rabbits are there with Viju?
16. Theatro
A theatrical performance was attended by 480 spectators. There were 40 more women than men in the audience, and the number of children was 60 less than half the number of adult spectators. How many men, women and children attended the performance?
17. Glass
Trader ordered from the manufacturer 200 cut glass. The manufacturer confirmed the order that the glass in boxes sent a kit containing either four or six glasses. Total sent 41 boxes. a) How many boxes will contain only 4 glasses? b) How many boxes will co
|
# Moon
Corresponding Wikipedia article: Moon
Unsolved mysteries that gave rise to the theory of an artificial origin of the Moon:
• The Moon has a perfectly spherical shape and a perfectly circular orbit.
• The ratio of the Sun's linear size to the Moon's is exactly equal to the ratio of the Earth-Sun distance to the Moon-Earth distance. Therefore the lunar disk completely covers the solar disk of exactly the same apparent size during total solar eclipses.
• The mass ratio of the Earth to the Moon is abnormally high in comparison with other planets and their satellites.
• The period of the Moon's revolution around the Earth is exactly equal to its rotation period about its own axis, so the Moon always faces the Earth with the same side.
• The lunar "highlands" significantly prevail over the "lunar maria" on the far side of the Moon.
• The lunar maria are not shaped like ordinary craters, and their origin is unknown. For some reason the maria have fewer meteor craters than the highlands. The depth of these craters is disproportionately small in comparison to their area, as if something prevents meteorites from penetrating deep below the surface.
• The Moon contains abnormally large amounts of titanium below the surface, yet the average density of the body is low[1], as if it were a hollow metallic sphere. When the ascent stage of Apollo 12 was crashed into the lunar surface, the seismometer recorded the surface ringing like a bell (a gong) for about an hour[2].
• The lunar magnetic field is not typical for planets.
Arithmetic curiosities:
• The ratio of the solar linear size to the lunar size is the round number 400.
• The ratio of the Earth's linear size to the lunar size is approximately 3.67, i.e. 367% (almost the number of days in a year).
• The orbital period of the Moon (a lunar "year") is approximately 27.322 days. The product of the durations of the Moon's and Earth's years is 27.322 × 365.24 ≈ 9980 days (almost the round number 10000).
Hypothetical system of length units for the ancient megalithic structures:

| Formula | Value | Name | Description |
|---|---|---|---|
| $\frac{Earth's\;circumference\;along\;a\;meridian}{366\cdot 60}$ | ≈ 1822 m | Megalithic Mile (MM) | Similar to a nautical mile, but the circumference is divided into 366° instead of 360°. |
| $\frac{Solar\;circumference}{400\cdot 100\cdot 60}$ | ≈ 1822 m | Megalithic Mile (MM) | Alternative derivation of the same unit. |
| $\frac{Lunar\;circumference}{100\cdot 60}$ | ≈ 1822 m | Megalithic Mile (MM) | Alternative derivation of the same unit. |
| $\frac{MM}{366\cdot 6}$ | ≈ 82.9 cm | Megalithic Yard | Discovered by Professor A. Thom for the British structures. |
| $\frac{MM}{100\cdot 60}$ | ≈ 30.36 cm | Minoan Foot | Discovered by Professor J. Walter Graham for the temples of Phaistos, Malia and Knossos[3]. |
| $\frac{Earth's\;circumference}{100\cdot 100\cdot 100}\sqrt{2}$ | ≈ 57 m | — | Discovered by archaeologist Hugh Harleston Jr. for Teotihuacan[4]. |
| $\frac{Lunar\;circumference}{100\cdot 100\cdot 100\cdot 3.66}\sqrt{2}$ | ≈ 57 m | — | Alternative derivation of the same unit. |
| $\frac{Earth's\;circumference}{100\cdot 100\cdot 100\cdot 3\cdot 6\cdot 6}\sqrt{2}$ | ≈ 52.5 cm | Royal Cubit | Discovered by J. Greaves for the Egyptian pyramids. |
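The round-number claims in the table are easy to check numerically; a small sketch (assuming a meridional circumference of about 40,008 km) is:

```python
# Check the hypothetical megalithic units against Earth's meridional circumference
EARTH_MERIDIAN_M = 40_008_000  # approx. circumference along a meridian, metres

megalithic_mile = EARTH_MERIDIAN_M / (366 * 60)   # ~1822 m
megalithic_yard = megalithic_mile / (366 * 6)     # ~0.83 m
minoan_foot     = megalithic_mile / (100 * 60)    # ~0.30 m

print(megalithic_mile, megalithic_yard, minoan_foot)
```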
## References
1. J. Walter Graham, *The Palaces of Crete*. Princeton, N.J.: Princeton University Press, 1987.
2. Peter Tompkins, *Mysteries of the Mexican Pyramids*, 1987. ISBN 978-0-0609-1366-3.
|
# source:anuga_work/publications/anuga_2007/anuga_validation.tex@5667
%Anuga validation publication
%
%Geoscience Australia and others 2007-2008

% Use the Elsevier LaTeX document class
%\documentclass{elsart3p} % Two column
%\documentclass{elsart1p} % One column
%\documentclass[draft]{elsart} % Basic
\documentclass{elsart} % Basic

% Useful packages
\usepackage{graphicx} % avoid epsfig or earlier such packages
\usepackage{url} % for URLs and DOIs
\usepackage{amsmath} % many want amsmath extensions
\usepackage{amsfonts}
\usepackage{underscore}
\usepackage{natbib} % Suggested by the Elsevier style
 % Use \citep and \citet instead of \cite


% Local LaTeX commands
%\newcommand{\Python}{\textsc{Python}}
%\newcommand{\VPython}{\textsc{VPython}}
\newcommand{\pypar}{\textsc{mpi}}
\newcommand{\Metis}{\textsc{Metis}}
\newcommand{\mpi}{\textsc{mpi}}

\newcommand{\UU}{\mathbf{U}}
\newcommand{\VV}{\mathbf{V}}
\newcommand{\EE}{\mathbf{E}}
\newcommand{\GG}{\mathbf{G}}
\newcommand{\FF}{\mathbf{F}}
\newcommand{\HH}{\mathbf{H}}
\newcommand{\SSS}{\mathbf{S}}
\newcommand{\nn}{\mathbf{n}}

\newcommand{\code}[1]{\texttt{#1}}


\begin{document}


\begin{frontmatter}
\title{On The Validation of A Hydrodynamic Model}


\author[GA]{D.~S.~Gray}
\author[GA]{O.~M.~Nielsen}
\author[GA]{M.~J.~Sexton}
\author[GA]{L.~Fountain}
\author[GA]{K.~VanPutten}
\author[ANU]{S.~G.~Roberts}
\author[UQ]{T.~Baldock}
\author[UQ]{M.~Barnes}

\address[GA]{Geospatial and Earth Monitoring Division,
 Geoscience Australia, Canberra, Australia}

\address[ANU]{Mathematical Sciences Institute,
Australian National University, Canberra, Australia}

\address[UQ]{University of Queensland, Brisbane, Australia}


% Use the \verb|abstract| environment.
\begin{abstract}
Modelling the effects on the built environment of natural hazards such
as riverine flooding, storm surges and tsunami is critical for
understanding their economic and social impact on our urban
communities. Geoscience Australia and the Australian National
University have developed a hydrodynamic inundation modelling tool
called ANUGA to help simulate the impact of these hazards.
The core of ANUGA is a Python implementation of a finite-volume method
for solving the conservative form of the Shallow Water Wave equation.

In this paper, a number of tests are performed to validate ANUGA. These tests
range from benchmark problems to wave and flume tank examples.
ANUGA is available as Open Source to enable the community to
use, validate and contribute to the software in the future.

%This method allows the study area to be represented by an unstructured
%mesh with variable resolution to suit the particular problem. The
%conserved quantities are water level (stage) and horizontal momentum.
%An important capability of ANUGA is that it can robustly model the
%process of wetting and drying as water enters and leaves an area. This
%means that it is suitable for simulating water flow onto a beach or
%dry land and around structures such as buildings.

\end{abstract}


\begin{keyword}
% keywords here, in the form: keyword \sep keyword
% PACS codes here, in the form: \PACS code \sep code

Hydrodynamic Modelling \sep Model validation \sep
Finite-volumes \sep Shallow water wave equation

\end{keyword}

\date{\today}
\end{frontmatter}



% Begin document in earnest
\section{Introduction}
\label{sec:intro}

Hydrodynamic modelling allows impacts from flooding, storm-surge and
tsunami to be better understood, their impacts to be anticipated and,
with appropriate planning, their effects to be mitigated. A significant
proportion of the Australian population resides in coastal
corridors, so the potential for significant disruption and loss
is real. The extent of
inundation is critically linked to the event, tidal conditions,
bathymetry and topography and it is not feasible to make impact
predictions using heuristics alone.
Geoscience
Australia, in collaboration with the Mathematical Sciences Institute,
Australian National University, is developing a software application
called ANUGA to model the hydrodynamics of floods, storm surges and
tsunami. These hazards are modelled using the conservative shallow
water equations which are described in section~\ref{sec:model}. In
ANUGA these equations are solved using a finite volume method as
described in section~\ref{sec:model}. A more complete discussion of the
method can be found in \citet{Nielsen2005} where the model and solution
technique is validated on a standard tsunami benchmark data set
or in \citet{Roberts2007} where the numerical method and parallelisation
of ANUGA is discussed.
This modelling capability is part of
Geoscience Australia's ongoing research effort to model and
understand the potential impact from natural hazards in order to
reduce their impact on Australian communities \citep{Nielsen2006}.
ANUGA is currently being trialled for flood
modelling \citep{Rigby2008}.

The validity of other hydrodynamic models has been reported
elsewhere, with \citet{Hubbard02} providing an
excellent review of 1D and 2D models and associated validation
tests. They described the evolution of these models from fixed and nested
to adaptive grids and the ability of the solvers to cope with the
moving shoreline. They highlighted the difficulty in verifying the
nonlinear shallow water equations themselves as the only standard
analytical solution is that of \citet{Carrier58}, which is strictly for
non-breaking waves. Further,
whilst there is a 2D analytic solution from \citet{Thacker81}, it appears
that the circular island wave tank example of Briggs et al.\ will become
the standard data set to verify the equations.

This paper will describe the validation outputs in a similar way to
\citet{Hubbard02} to
present an exhaustive validation of the numerical model.
Further to these tests, we will
incorporate a test to verify friction values. The tests reported in
this paper are:
\begin{itemize}
 \item Verification against the 1D analytical solution of Carrier and
 Greenspan (p~\pageref{sec:carrier})
 \item Testing against 1D (flume) data sets to verify wave height and
 velocity (p~\pageref{sec:stage and velocity})
 \item Determining friction values from 1D flume data sets
 (p~\pageref{sec:friction})
 \item Validation against a genuinely 2D analytical
 solution of the model equations (p~\ref{sec:XXX})
 \item Testing against the 2D Okushiri benchmark problem
 (p~\pageref{sec:okushiri})
 \item Testing against the 2D data sets modelling wave run-up around a circular island by Briggs et al.
 (p~\pageref{sec:circular island})
\end{itemize}


Throughout the paper, qualitative comparisons will be drawn against
other models. Moreover, all source code necessary to reproduce the
results reported in this paper is available as part of the ANUGA
distribution in the form of a test suite. It is thus possible for
anyone to readily verify that the implementation meets the
requirements set out by these benchmarks.


%Hubbard and Dodd's model, OTT-2D, has some similarities to ANUGA, and
%whilst the mesh can be refined, it is based on rectangular mesh.

%The ANUGA model and numerical scheme is briefly described in
%section~\ref{sec:model}. A more detailed description of the numerical
%scheme and software implementation can be found in \citet{Nielsen2005} and
%\citet{Roberts2007}.
The six case studies used to validate and verify ANUGA
will be presented in section~\ref{sec:validation}, with the
conclusions outlined in section~\ref{sec:conclusions}.

NOTE: This is just a brain dump at the moment and needs to be incorporated properly
in the text somewhere.

Need some discussion on Boussinesq type models - Boussinesq equations capture the
nonlinearity and dispersive effects to a high degree of accuracy

moving wet-dry boundary algorithms - applicability to coastal engineering

Fuhrman and Madsen 2008 \cite{Fuhrman2008} do validation - they have a Boussinesq type
model, finite
difference (therefore needing a supercomputer), 4th order, four stage RK time stepping
scheme.

their tests are (1) nonlinear run-up of periodic and transient waves on a sloping
beach with excellent comparison to analytic solutions (2) 2d parabolic basin
(3) solitary wave evolution through 2d triangular channel (4) solitary wave evolution on
conical island (we need to compare to their computation time and note they use a
vertical exaggeration for their images)

excellent accuracy mentioned - but what is it - what does it mean?

of interest is that they mention mass conservation and calculate it throughout the simulations

Kim et al \cite{DaiHong2007} use a Riemann solver - talk about improved accuracy by using a 2nd order upwind
scheme. Use finite volume on a structured mesh. Do parabolic basin and circular island. Needed?

Delis et al. 2008 \cite{Delis2008} - finite volume, Godunov-type explicit scheme coupled with Roe's
approximate Riemann solver. It accurately describes breaking waves as bores or hydraulic jumps
and conserves volume across flow discontinuities - is this just a result of finite volume?

They also show mass conservation for most of the simulations

similar range of validation tests that compare well - our job to compare to these as well

\section{Mathematical model, numerical scheme and implementation}
\label{sec:model}

The ANUGA model is based on the shallow water wave equations which are
widely regarded as suitable for modelling 2D flows subject to the
assumptions that horizontal scales (e.g. wave lengths) greatly exceed
the depth, vertical velocities are negligible and the fluid is treated
as inviscid and incompressible. See e.g. the classical texts
\citet{Stoker57} and \citet{Peregrine67} for the background or
\citet{Roberts1999} for more details on the mathematical model
used by ANUGA.

The conservation form of the shallow water wave
equations used in ANUGA is:
$$
\frac{\partial \UU}{\partial t}+\frac{\partial \EE}{\partial
x}+\frac{\partial \GG}{\partial y}=\SSS
$$
where $\UU=\left[ {{\begin{array}{*{20}c}
 h & {uh} & {vh} \\
\end{array} }} \right]^T$ is the vector of conserved quantities; water depth
$h$, $x$-momentum $uh$ and $y$-momentum $vh$. Other quantities
entering the system are bed elevation $z$ and stage (absolute water
level above a reference datum such as Mean Sea Level) $w$,
where the relation $w = z + h$ holds true at all times.
The fluxes in the $x$ and $y$ directions, $\EE$ and $\GG$, are given
by
$$
\EE=\left[ {{\begin{array}{*{20}c}
 {uh} \hfill \\
 {u^2h+gh^2/2} \hfill \\
 {uvh} \hfill \\
\end{array} }} \right]\mbox{ and }\GG=\left[ {{\begin{array}{*{20}c}
 {vh} \hfill \\
 {vuh} \hfill \\
 {v^2h+gh^2/2} \hfill \\
\end{array} }} \right]
$$
and the source term (which includes gravity and friction) is given
by
$$
\SSS=\left[ {{\begin{array}{*{20}c}
 0 \hfill \\
 -{gh(z_{x} + S_{fx} )} \hfill \\
 -{gh(z_{y} + S_{fy} )} \hfill \\
\end{array} }} \right]
$$
where $S_f$ is the bed friction. The friction term is modelled using
Manning's resistance law
$$
S_{fx} =\frac{u\eta ^2\sqrt {u^2+v^2} }{h^{4/3}}\mbox{ and }S_{fy}
=\frac{v\eta ^2\sqrt {u^2+v^2} }{h^{4/3}}
$$
in which $\eta$ is the Manning resistance coefficient.
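For illustration only, the friction law is straightforward to evaluate directly; the
following Python sketch (our own, not the optimised C implementation used by ANUGA)
computes the two friction slope terms:
\begin{verbatim}
import numpy as np

def manning_friction(u, v, h, eta):
    # Friction slope terms S_fx and S_fy from Manning's
    # resistance law; assumes depth h > 0.
    speed = np.sqrt(u**2 + v**2)
    common = eta**2 * speed / h**(4.0/3.0)
    return u * common, v * common
\end{verbatim}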

%%As demonstrated in our papers, \cite{modsim2005,Roberts1999} these
%%equations provide an excellent model of flows associated with
%%inundation such as dam breaks and tsunamis. Question - how do we
%%know it is excellent?

ANUGA uses a finite-volume method as
described in \citet{Roberts2007} where the study area is represented by an
unstructured triangular mesh in which the vector of conserved quantities
$\UU$ is maintained and updated over time. The flexibility afforded by
allowing unstructured meshes rather than fixed resolution grids
is the ability for the user to refine the mesh in areas of interest
while leaving other areas coarse, thereby conserving computational
resources.


The approach used in ANUGA is distinguished from many
other implementations (e.g. \citet{Hubbard02} or \citet{Zhang07}) by the
following features:
\begin{itemize}
 \item The fluxes across each edge are computed using the semi-discrete
 central-upwind scheme for approximating the Riemann problem
 proposed by \citet{KurNP2001}. This scheme deals with different
 flow regimes such as shocks, rarefactions and sub- to super-critical
 flow transitions using one general approach. We have
 found this scheme to be pleasingly simple, robust and efficient.
 \item ANUGA does not employ a shoreline detection algorithm as the
 central-upwind scheme is capable of resolving fluxes arising between
 wet and dry cells. ANUGA does optionally bypass unnecessary
 computations for dry-dry cell boundaries purely to improve performance.
 \item ANUGA employs a second order spatial reconstruction on triangles
 to produce a piece-wise linear representation of the conserved
 quantities. This function is allowed to be discontinuous across the
 edges of the cells, but the slope of this function is limited to avoid
 artificially introduced oscillations. This approach provides good
 approximation of steep gradients in the solution. However,
 where the depths are very small compared to the bed-slope a linear
 combination of second order and first order reconstructions is
 employed to avoid the numerical instability that may arise from very
 small depths.
\end{itemize}

In the computations presented in this paper we use an explicit Euler
time stepping method with variable timestepping subject to the
CFL condition:
$$
 \delta t = \min_k \frac{r_k}{v_k}
$$
where $r_k$ refers to the radius of the inscribed circle of triangle
$k$, $v_k$ refers to the maximal velocity calculated from fluxes
passing in or out of triangle $k$ and $\delta t$ is the resulting
'safe' timestep to be used for the next iteration.


ANUGA utilises a general velocity limiter described in the
manual which guarantees a gradual compression of computed velocities
in the presence of very shallow depths:
\begin{equation}
 \hat{u} = \frac{\mu}{h + h_0/h}, \qquad \hat{v} = \frac{\nu}{h + h_0/h},
\end{equation}
where $h_0$ is a regularisation parameter that controls the minimal
magnitude of the denominator. The default value is $h_0 = 10^{-6}$.
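Both formulas are easy to prototype; the fragment below is a schematic Python sketch
(our own notation, not ANUGA's actual implementation), where \code{mu} and \code{nu}
denote the momenta $\mu$ and $\nu$:
\begin{verbatim}
import numpy as np

def safe_timestep(radii, max_speeds):
    # CFL condition: dt = min over triangles k of r_k / v_k
    return np.min(radii / max_speeds)

def limited_velocity(mu, nu, h, h0=1.0e-6):
    # Gradual compression of velocities for very shallow
    # depths; assumes h > 0.
    denom = h + h0 / h
    return mu / denom, nu / denom
\end{verbatim}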


ANUGA is mostly written in the object-oriented programming
language Python with computationally intensive parts implemented
as highly optimised shared objects written in C.

Python is known for its clarity, elegance, efficiency and
reliability. Complex software can be built in Python without undue
distractions arising from idiosyncrasies of the underlying software
language syntax. In addition, Python's automatic memory management,
dynamic typing, object model and vast number of libraries means that
ANUGA scripts can be produced quickly and can be adapted fairly easily to
changing requirements.



\section{Validation}
\label{sec:validation} Validation is an ongoing process and the purpose of this paper
is to describe a range of tests that validate ANUGA as a hydrodynamic model.
This section will describe the six tests outlined in section~\ref{sec:intro}.
Run times where specified measure the model time only and exclude model setup,
data conversions etc. All examples were timed on a 2GHz 64-bit
Dual-Core AMD Opteron(tm) series 2212 Linux server. %This is a tornado compute node (cat /proc/cpuinfo).


\subsection{1D analytical validation}

Tom Baldock has done something here for that NSW report

\subsection{Stage and Velocity Validation in a Flume}
\label{sec:stage and velocity}
This section will describe tilting flume tank experiments that were
conducted at the Gordon McKay Hydraulics Laboratory at the University of
Queensland to confirm ANUGA's ability to estimate wave height
and velocity. The same flume tank simulations were also used
to explore Manning's friction and this will be described in the next section.

The flume was set up for dam-break experiments, having a
water reservoir at one end. The flume was glass-sided, 3m long, 0.4m
wide, and 0.4m deep, with a PVC bottom. The reservoir in the flume
was 0.75m long. For this experiment the reservoir water was 0.2m
deep. At time zero the reservoir gate was manually opened and the water flowed
into the other side of the flume. The water ran up a flume slope of
0.03 m/m. To accurately model the bed surface a Manning's friction
value of 0.01, representing PVC, was used.

% Neale, L.C. and R.E. Price. Flow characteristics of PVC sewer pipe.
% Journal of the Sanitary Engineering Division, Div. Proc 90SA3, ASCE.
% pp. 109-129. 1964.

An acoustic displacement sensor that produced a voltage that changed
with the water depth was positioned 0.4m from the reservoir gate. The
water velocity was measured with an Acoustic Doppler Velocimeter 0.45m
from the reservoir gate. This sensor only produced reliable results 4
seconds after the reservoir gate opened, due to limitations of the sensor.


% Validation UQ flume
% at X:\anuga_validation\uq_sloped_flume_2008
% run run_dam.py to create sww file and .csv files
% run plot.py to create graphs here automatically
% The Coasts and Ports '2007 paper is in TRIM d2007-17186
\begin{figure}[htbp]
\centerline{\includegraphics[width=4in]{uq-flume-depth}}
\caption{Comparison of wave tank and ANUGA water height at 0.4 m
 from the gate}\label{fig:uq-flume-depth}
\end{figure}

\begin{figure}[htbp]
\centerline{\includegraphics[width=4in]{uq-flume-velocity}}
\caption{Comparison of wave tank and ANUGA water velocity at 0.45 m
 from the gate}\label{fig:uq-flume-velocity}
\end{figure}

Figure~\ref{fig:uq-flume-depth} shows that ANUGA predicts the actual
water depth very well, although there is an initial drop in water depth
within the first second that is not simulated by ANUGA.
Water depth and velocity are coupled as described by the nonlinear
shallow water equations, thus if one of these quantities accurately
estimates the measured values, we would expect the same for the other
quantity. This is demonstrated in Figure~\ref{fig:uq-flume-velocity}
where the water velocity is also predicted accurately. Sediment
transport studies rely on water velocity estimates in the region where
the sensors cannot provide this data. With water velocity being
accurately predicted, studies such as sediment transport can now use
reliable estimates.


\subsection{Okushiri Wavetank Validation}
\label{sec:okushiri}
As part of the Third International Workshop on Long-wave Runup
Models in 2004 (\url{http://www.cee.cornell.edu/longwave}), four
benchmark problems were specified to allow the comparison of
numerical, analytical and physical models with laboratory and field
data. One of these problems describes a wave tank simulation of the
1993 Okushiri Island tsunami off Hokkaido, Japan \cite{MatH2001}. A
significant feature of this tsunami was a maximum run-up of 32~m
observed at the head of the Monai Valley. This run-up was not
uniform along the coast and is thought to have resulted from a
particular topographic effect. Among other features, simulations of
the Hokkaido tsunami should capture this run-up phenomenon.

This dataset has been used to validate tsunami models by
a number of tsunami scientists. Examples include Titov ... lit review
here on who has used this example for verification (Leharne?)

\begin{figure}[htbp]
%\centerline{\includegraphics[width=4in]{okushiri-gauge-5.eps}}
\centerline{\includegraphics[width=4in]{ch5.png}}
\centerline{\includegraphics[width=4in]{ch7.png}}
\centerline{\includegraphics[width=4in]{ch9.png}}
\caption{Comparison of wave tank and ANUGA water stages at gauges
5, 7 and 9.}\label{fig:val}
\end{figure}


\begin{figure}[htbp]
\centerline{\includegraphics[width=4in]{okushiri-model.jpg}}
\caption{Complex reflection patterns and run-up into Monai Valley
simulated by ANUGA and visualised using our netcdf OSG
viewer.}\label{fig:run}
\end{figure}

The wave tank simulation of the Hokkaido tsunami was used as the
first scenario for validating ANUGA. The dataset provided
bathymetry and topography along with initial water depth and the
wave specifications. The dataset also contained water depth time
series from three wave gauges situated offshore from the simulated
inundation area. The ANUGA model comprised $41404$ triangles
and took about $1330$ s to run on the test platform described in
Section~\ref{sec:validation}.

The script to run this example is available in the ANUGA distribution in the subdirectory
\code{anuga_validation/automated_validation_tests/okushiri_tank_validation}.


Figure~\ref{fig:val} compares the observed wave tank and modelled
ANUGA water depth (stage height) at one of the gauges. The plots
show good agreement between the two time series, with ANUGA
closely modelling the initial draw down, the wave shoulder and the
subsequent reflections. The discrepancy between modelled and
simulated data in the first 10 seconds is due to the initial
condition in the physical tank not being uniformly zero. Similarly
good comparisons are evident with data from the other two gauges.
Additionally, ANUGA replicates exceptionally well the 32~m Monai
Valley run-up, and demonstrates its occurrence to be due to the
interaction of the tsunami wave with two juxtaposed valleys above
the coastline. The run-up is depicted in Figure~\ref{fig:run}.

This successful replication of the tsunami wave tank simulation on a
complex 3D beach is a positive first step in validating the ANUGA
modelling capability.

\subsection{Runup of solitary wave on circular island wavetank validation}
\label{sec:circular island}
This section describes the ANUGA results for the experiments
conducted by Briggs et al (1995). Here, a conical island is situated
near the centre of a 30x25m basin, and a directional wavemaker is used
to produce planar solitary waves of specified crest lengths and
heights. A series of gauges were distributed within the experimental
setup. As described by Hubbard and Dodd \cite{Hubbard02}, a number of
researchers have used this benchmark problem to test their numerical
models. {\bf Jane: check whether these results are now available as
they were not in 2002}. Hubbard and Dodd \cite{Hubbard02} note that a
particular 3D model appears to obtain slightly better results than the
2D ones reported, but that 3D models are unlikely to be competitive in
terms of computing power for applications in coastal engineering at
least. Choi et al \cite{Choi07} use a 3D RANS model (based on the
Navier-Stokes equations) for the same problem and find a very good
comparison with laboratory and 2D numerical results. An obvious
advantage of the 3D model is its ability to investigate the velocity
field, and Choi et al also report on the limitation of depth-averaged
2D models for run-up simulations of this type.

Once results are available, need to compare to Hubbard and Dodd and
draw any conclusions from nested rectangular grid vs unstructured
grid. Figure \ref{fig:circular screenshots} shows a sequence of
screenshots depicting the evolution of the solitary wave as it hits
the circular island.

\begin{figure}[htbp]
\centerline{
  \includegraphics[width=5cm]{circular1.png}
  \includegraphics[width=5cm]{circular2.png}}
\centerline{
  \includegraphics[width=5cm]{circular3.png}
  \includegraphics[width=5cm]{circular4.png}}
\centerline{
  \includegraphics[width=5cm]{circular5.png}
  \includegraphics[width=5cm]{circular6.png}}
\centerline{
  \includegraphics[width=5cm]{circular7.png}
  \includegraphics[width=5cm]{circular8.png}}
\centerline{
  \includegraphics[width=5cm]{circular9.png}
  \includegraphics[width=5cm]{circular10.png}}
\caption{Screenshots of the evolution of the solitary wave around the circular island.}
\label{fig:circular screenshots}
\end{figure}


\subsection{Flume tank validation before and after breaking waves}

To determine explicitly whether ANUGA can model waves after breaking,
several experiments were conducted at the Monash University Institute for
Sustainable Water Resources using a wave flume. The experiments were
designed to produce a variety of breaking waves, and were conducted on
a 2.5$^\circ$ and a 1.5$^\circ$ plane beach slope set up
in a glass-sided wave flume 40m long, 1.0m wide and 1.6m deep.
The wave generator can generate waves up to 0.6m in height, with a
period range of 0.3--7.0 seconds.

Four tests with different combinations of wave height and wave period
were used, with each test being repeated once.

A variety of measurements were taken during the experiments. The wave
height at breaking, when present, was measured using a video
recorder. Mid-depth water velocity and wave height were measured on
the approach section. The water height at several points along the
flume was measured using pressure transducers.

Details of the tests performed are given in Table \ref{tab:hinwoodSummary}.

\begin{table}
\caption{Details of the Monash University experiments.} % Can't get right
\begin{center}
  \begin{tabular}{ c p{3cm} p{3cm} p{3cm} }

  \hline
  Test Name & Beach slope nominal, \emph{degrees} & Water depth offshore,
  \emph{mm} & Wave frequency nominal, \emph{Hz} \\ \hline
  T1R3 & 3.5 & 400 & 0.200 \\ \hline
  T1R5 & 3.5 & 400 & 0.200 \\ \hline
  T2R7 & 3.5 & 400 & 0.125 \\ \hline
  T2R8 & 3.5 & 400 & 0.125 \\ \hline
  T3R28 & 1.5 & 336 & 0.200 \\ \hline
  T3R29 & 1.5 & 336 & 0.200 \\ \hline
  T4R31 & 1.5 & 336 & 0.125 \\ \hline
  T4R32 & 1.5 & 336 & 0.125 \\ \hline

  \end{tabular}
  \label{tab:hinwoodSummary}

\end{center}
\end{table}

All of these experiments were simulated using ANUGA. The mid-depth
water velocity and wave height measured on the approach section were
used as boundary conditions for the ANUGA simulations. For both the
experimental and simulation results the zero datum was the still water
line. To quantify the difference between the simulated stage and the
experimental stage, the Root Mean Square Deviation (RMSD)
\cite{Kobayshi2000} was used:
\[
\mathit{RMSD} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (x_i - y_i)^2}
\]

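The RMSD is straightforward to evaluate numerically. The following
Python sketch is illustrative only (it is not part of the ANUGA
distribution) and assumes the simulated and observed stage series
have been sampled at the same instants:

\begin{verbatim}
import numpy as np

def rmsd(simulated, observed):
    # Root Mean Square Deviation between two equal-length stage
    # time series, both referenced to the still water line.
    x = np.asarray(simulated, dtype=float)
    y = np.asarray(observed, dtype=float)
    return np.sqrt(np.mean((x - y) ** 2))
\end{verbatim}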

\label{sec:Hinwood}

\clearpage

\section{Conclusions}
\label{sec:conclusions}
ANUGA is a flexible and robust modelling system
that simulates hydrodynamics by solving the shallow water wave
equation in a triangular mesh. It can model the process of wetting
and drying as water enters and leaves an area and is capable of
capturing hydraulic shocks due to the ability of the finite-volume
method to accommodate discontinuities in the solution.
ANUGA can take as input bathymetric and topographic datasets and
simulate the behaviour of riverine flooding, storm surge,
tsunami or even dam breaks.
Initial validation using wave tank data supports ANUGA's
ability to model complex scenarios. Further validation will be
pursued as additional datasets become available.
The ANUGA source code and validation case studies reported here are available
at \url{http://sourceforge.net/projects/anuga}.

{\bf Note: add material on uptake by the flood modelling community and their validation initiatives.}


%\bibliographystyle{plainnat}
\bibliographystyle{elsart-harv}
\bibliography{anuga-bibliography}

\end{document}
# Boxing Pythagoras
## Yet another failed attempt at showing 0.999…≠1
I’ve discussed before how mathematics can sometimes lead to very counterintuitive results. One of the most common, and famous, of these counterintuitive properties of math is that the number 0.999… (that is, zero point nine, nine, nine, repeating) is equal to 1. This one is so well known that it is fairly often taught even to Elementary and High School students. If you are unfamiliar with this discussion, I highly recommend that you watch this video from Vi Hart, in which she discusses 10 different reasons to accept this concept. Additionally, you may have fun watching this video, in which she lampoons the common objections to the concept.
Despite the fact that it is fairly simple to prove that 0.999…=1, the concept is so counterintuitive that I find people try to struggle against it, even when they know and accept the reasoning behind the equality. One such attempt comes from Presh Talwalkar. In the following video, Mr. Talwalkar attempts to demonstrate that on the Surreal number system, 0.999…≠1.
Unfortunately for Mr. Talwalkar, he is wrong. Even on the Surreals, it is still true that 0.999…=1.
In the video, Mr. Talwalkar acknowledges that it is absolutely true that 0.999…=1 on the Real numbers. However, he then asserts that it is not true that 0.999…=1 on the Surreal numbers. Right away, this should look fairly suspect to anyone familiar with the Surreals. The reason for that is that the Surreals are a superset which contains the Real numbers. Anything which is true for a number which exists within the Real numbers will similarly be true for that number in the Surreals. It is therefore entirely incoherent to claim that 0.999… is a different number in the Surreals than it is in the Reals.
Most of Mr. Talwalkar’s video is fairly accurate, though I would prefer a more rigorous treatment of its subject matter. The point where it goes wrong, however, comes when he attempts to discuss some “weird numbers” which can be constructed on the surreals. He begins by discussing $\{0|1,\frac{1}{2},\frac{1}{4},\frac{1}{8},...\}=\epsilon$, which he erroneously claims to be “1 divided by infinity” and “point zero repeating, with a one at the end.” Neither of these descriptions is even coherent. Infinity is not a number. You cannot divide 1 by infinity any more than you can divide 1 by Blue, or by Sweet, or by Alexander Hamilton. The latter description “point zero repeating, with a one at the end” is quite obviously self-contradictory. If we have “point zero repeating,” then there is no “end” at which to write a “one.” That said, the Surreal number which Mr. Talwalkar defines here, $\epsilon$, is an actual number and can be utilized in mathematics. As defined, $\epsilon$ represents a number which is greater than zero, but smaller than all of the positive Real numbers. That is to say, for any positive Real number $r$, it is true that $0 < \epsilon < r$.
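To spell out that last claim (a quick sketch using nothing beyond the ordering of $\epsilon$’s right options): by construction, $0 < \epsilon < \frac{1}{2^k}$ for every natural $k$, and for any positive Real $r$ there is some $k$ with $\frac{1}{2^k} < r$; hence $\epsilon < r$.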
He continues by defining another "weird number," $\{ 0,\frac{1}{2},\frac{3}{4},\frac{7}{8},...|1\}=1-\epsilon$. Again, this number is correctly defined, and it is an actual number. It is therefore possible to deduce certain properties which are possessed by $1-\epsilon$. For example, since we know that $\epsilon> 0$, we can know with certainty that $1-\epsilon <1$. However, Mr. Talwalkar oversteps the bounds of logic when he baldly asserts that "you can think about it as point nine repeating." He gives absolutely no justification for asserting that $1-\epsilon=0.999...$, and it is actually quite simple to prove that this assertion is, in fact, entirely untrue. I shall do so, now, by Reductio ad Absurdum.
Let’s start by assuming Mr. Talwalkar’s assertion to be true. Then, by exploring the properties of the numbers, I’ll show that this assertion leads to a logical contradiction, and that it therefore cannot be true.
1. $1-\epsilon=0.999...$
2. $\epsilon>0$
3. Given (1) and (2), $1-\epsilon\neq 1$ and therefore $0.999...\neq 1$
4. $\frac{1}{3}=0.333...$
5. $\frac{1}{3}\times 3=1$
6. $0.333... \times 3=0.999...$
7. Given (4) and (6), $\frac {1}{3}\times 3=0.999...$
8. Given (5) and (7), $0.999...=1$
9. Given (3) and (8), $0.999... \neq 0.999...$
This, of course, is nonsensical. A number must always equal itself. Therefore, our premise (1) cannot be true.
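For readers who prefer a direct computation to the reductio, the standard geometric-series evaluation settles the matter; and since the Surreals contain the Reals, the computation carries over unchanged:
$0.999... = \sum_{n=1}^{\infty}\frac{9}{10^n} = \frac{9/10}{1-1/10} = 1$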
It doesn’t matter whether we are talking about the Rational numbers or the Real numbers or the Hyperreal numbers or the Surreal numbers. The simple fact of the matter is that 0.999… is equal to 1. For some reason, many people find this very difficult to accept, but it is absolutely true. Presh Talwalkar’s attempt to show otherwise fails just as surely as all those which came before it. To quote Vi Hart on the issue:
If you’re having Math problems, I feel bad for you, son.
I got 98.999… problems, but 0.999… equals 1.
## 29 thoughts on “Yet another failed attempt at showing 0.999…≠1”
1. It is perhaps unfortunate that infinite decimal expansions of 1/3, 1/7, etcetera are the first exposure to kids of “the infinite”, and we hide the handwaving while happily subtracting 0.9999…. from 9.99999… to get 9, without actually performing the calculation.
Strictly speaking an infinite decimal is an infinite series, and has to be seen as a limit of partial sums of the terms of the corresponding infinite sequence 0.9, 0.09, 0.009,…..
It doesn’t take too long to establish the limit as 1
What is the value of pi – e ?????
2. Ignostic Dave said:
It seems to me that he could have skipped the business with surreal numbers and said, “Look at all those nines, it’s clearly not equal to one.”
• The funny thing is that it is extremely easy to find a way to legitimately make 0.999… not equal to 1. Just change the base of your number system to anything greater than 10.
For example, 0.999… in Hexadecimal is exactly 9/F (that is, 9/15 in Decimal). This is CLEARLY not equal to 1.
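To spell out the arithmetic (the same geometric series, in a general base $b>9$ so that the digit 9 is available): $0.999..._{(b)} = \sum_{n=1}^{\infty}\frac{9}{b^n} = \frac{9}{b-1}$, which equals 1 exactly when $b=10$, and equals $\frac{9}{15}=\frac{3}{5}$ when $b=16$.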
3. In 2015 it was proved that 0.999… does NOT equal 1. I will explain why here, but you may prefer to watch the two videos by ‘Karma Peny’ on YouTube.
(1) If we can prove 0.333… = 1/3 then it follows that 0.999… = 1, so all we need to do is prove 0.333… = 1/3.
(2) We start with 1.000… divided by 3.000… (we start by assuming all our ‘real’ numbers have ‘infinitely many’ digits to the right of the decimal point).
(3) Next is the tricky bit. We need to explain how the division process can complete, because if it does not complete then we have not got a fixed value. This argument assumes numbers have fixed values and are not constantly changing their value.
If we stop the division process after n decimal places have been processed, then to the right of the decimal point we will have n iterations of the digit ‘3’ plus an expression for the remainder part, which is 1/(3 times (10 to the power n)). The n digits plus the remainder expression equals 1/3 exactly.
(4) Now, however large we make n, there will still be this remainder expression which has a positive non-zero value.
(5) We could assert that after ‘infinitely many’ iterations, the remainder part is somehow no longer relevant and we will have a fixed value that equates to 1/3. But equally we could assert that after ‘infinitely many’ iterations there are still ‘infinitely many’ more non-zero terms that need to be processed by the division process. Here the remainder expression still represents a non-zero value (even though we appear to have lost the ability to talk about it accurately because we have replaced ‘n’ by ‘infinitely many’).
(6) Now we need a clear understanding of what ‘infinitely many’ means in order to resolve this issue. We need to show how ‘infinitely many’ can be achieved because the two assertions made above directly contradict one another.
(7) We find we cannot show how the division process can end, and so we cannot resolve the problem of how the reminder expression can be ignored. In short, ‘infinitely many’ cannot overcome the contradiction it creates.
(8) Traditional proofs like the 10x -x proof use ‘bad mathematics’ because they informally take terms ‘from infinity’ in order to produce the result they want to produce. If you do this proof with rigour (as shown in the ‘Karma Peny’ video), then the flaw in the so-called proof is obvious.
(9) The ‘epsilon-delta’ argument says that 0.999… must equal 1 because there is no number between the two. But this argument starts with the assumption that 0.999… can exist in ‘entirety’ and does have a static value. That is to say, its starting assumption is that ‘infinitely many’ is a valid concept. But we have already seen above that this concept results in contradiction.
In short, if we model ‘endlessness’ instead of trying to realise ‘infinity’ then mathematics will work without paradoxes and without contradictions.
• As I mentioned in my reply on the BP Facebook page, this argument does not seem so much to be that 0.333… does not equal 1/3 (and, therefore, 0.999… does not equal 1) so much as it is the claim that 0.333… and 0.999… are not actually numbers. You seem to justify this position by rejecting the concept of a completed infinite set.
As I see no reason to accept that there cannot be a completed infinite set, and since such a position would require the rejection of ALL Real numbers, your argument does not seem very convincing.
The main problem with your position comes in (3) through (5). You are here admitting that $\frac{1}{3}=\sum_{i=1}^{\infty}\frac{3}{10^{i}}$, which is an infinite sum. You then rightly say that any nth partial sum of this series would differ from 1/3 by exactly $\frac{1}{3\cdot 10^{n}}$.
Interestingly, this seems to be a tacit admission that 0.333…=1/3, since we could replace the 1/3 in this expression with 0.333… and it would be exactly as true.
However, you assert that “after ‘infinitely many’ iterations there are still ‘infinitely many’ non-zero terms that need to be processed,” which does not seem to be true. In fact, it seems fairly trivial to see that the cardinality of the terms in our infinite series is Aleph Null, which is the smallest possible cardinality an infinite series can have. Therefore, if it is true that we have a completed, infinite set of all of the terms in the sum, then it cannot be true that we have infinitely many more terms which are not yet accounted for.
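As a quick sanity check of the one point on which we do agree, the finite identity from your step (3) can be verified exactly with Python’s rational arithmetic (the particular values of n below are arbitrary):

from fractions import Fraction

# 0.33...3 (n threes), plus the remainder 1/(3*10^n), equals 1/3 exactly
for n in (1, 5, 20):
    partial = Fraction(int("3" * n), 10 ** n)
    remainder = Fraction(1, 3 * 10 ** n)
    assert partial + remainder == Fraction(1, 3)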
• By the way, you are correct to say that I am claiming that 0.333… and 0.999… are not actually numbers, if a number is considered to have a fixed/static value.
Also,
1/1 + 1/2 + 1/3 + 1/4 + … is not a number
1+1+1+1+… is not a number
1+2+3+4+…. is not a number
1-1+1-1+1-1+… is not a number
But if you accept ‘infinitely many’ combined with the idea of ‘convergence at infinity’ then you will think 0.333… is a number,
and you will think
1/2 + 1/4 + 1/8 + 1/16 + … is a number
All these objects are essentially of the same fundamental type; they are endless series where we have an expression to define the nth term.
You would not think that parallel lines meet at ‘infinity’ because we can prove with simple logic that the two paths will not meet at any distance.
We can also prove using simple logic that 0.999… can not equate to 1 however far we go. But the wordplay trick of ‘convergence’ is to forget simple logic and take it that the paths somehow magically meet at a mystical place called infinity.
• Also, my position does not reject ALL real numbers, just those that are claimed to contain ‘infinitely many’ non-zero terms.
Note that Pi, the square root of 2, and one third can all be represented in entirety on a computer (or in written form) in finite terms.
This finite representation includes a finite numeric part (e.g. diameter = 1) and the definition of an algorithm (e.g. algorithm to calculate the circumference, or the division algorithm).
But when we apply the algorithm, either it needs to stop on its own, producing a finite numeric result, or after n terms we include an expression for the remainder part. This way we maintain equality, we have no ’rounding errors’, and most importantly we maintain mathematical rigour.
What we should not do is invent a mystical place called infinity and pretend everything will magically end and not end at the same time at this strange place.
• You would not think that parallel lines meet at ‘infinity’ because we can prove with simple logic that the two paths will not meet at any distance.
Interesting that you would choose this example, as the truth of the matter is actually precisely the opposite of what you claim. This problem is known as the Parallel Postulate, and it stumped the world’s greatest mathematicians for 2500 years. The simple fact of the matter is that it is not possible to prove with “simple logic” that the two paths will not meet at any distance. We have to take this position axiomatically, in Euclidean geometry.
It was the realization of this exact point which led to the exploration of non-Euclidean geometries in the 19th Century.
We can also prove using simple logic that 0.999… can not equate to 1 however far we go.
The “no matter how far we go” is the problem with this. This presumes a finite exploration of the number. I will absolutely agree that $\sum_{n=1}^{i}\frac{9}{10^{n}}\neq 1$ for any finite Natural number i. This does not imply that the same thing holds for, say, an infinite Hyperreal number i.
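To make that last remark concrete (standard notation from non-standard analysis, not something in the original comment): for an infinite hypernatural $H$, the partial sum $\sum_{n=1}^{H}\frac{9}{10^{n}}$ differs from 1 by $10^{-H}$, a positive infinitesimal; the finite-index inequality transfers, but the Real number denoted by 0.999… is the standard part of such sums, and $\mathrm{st}(1-10^{-H})=1$.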
Yes, if you hold to an a priori rejection of the concept of the infinite, then there is no such thing as an infinite decimal expansion– in which case, the whole question is moot. If 0.999… is not a number, then it makes no more sense to proclaim, “0.999… does not equal 1” than to proclaim, “Blue does not equal 1.” Personally, I see no good reason to reject the concept of the infinite; and the successes of infinite and infinitesimal mathematics more than justify its adoption, in my eyes.
Also, my position does not reject ALL real numbers, just those that are claimed to contain ‘infinitely many’ non-zero terms.
So, you do not reject all Real numbers, just all Real numbers which are not Rational numbers.
Note that Pi, the square root of 2, and one third can all be represented in entirety on a computer (or in written form) in finite terms.
As a computer scientist and mathematician, I’d be incredibly interested in how you propose this is possible– presuming you mean that decimal expansions of these numbers can be represented in their entirety in finite terms.
What we should not do is invent a mystical place called infinity and pretend everything will magically end and not end at the same time at this strange place.
Infinity is not a place, mystical or otherwise. It is a property of numbers.
• I did not impose a priori rejection of the concept of ‘infinitely many’ in my argument. I found that if I assume this concept is true, then it leads to a contradiction.
Your counter argument now seems to be that my reasoning holds for objects that consist of a finite amount of terms, but that my reasoning does not hold for objects with ‘infinitely many’ terms.
There are two problems with this counter argument.
1. I was allowing the possibility of ‘infinitely many’ when I highlighted the contradiction.
2. Here you are imposing your a priori acceptance of the concept of ‘infinitely many’. You are effectively saying that because you already believe ‘infinitely many’ is valid, then any counter argument must therefore be invalid.
Regarding decimal expansions, we can express one third as 0.333… to the n-th 3, plus 1/(3(10 to the power n)). Thus we can express the decimal digits to any required level of accuracy for our user (an engineer, say), and completely without loss of accuracy. There are no rounding errors here. As you are a computer scientist, you are likely to be familiar with the algorithms behind vector graphics. The principle is very similar.
Is your objection to my found contradiction that there are no more terms after ‘infinitely many’ and thus the series is not endless after all?
Can you explain why there cannot be ‘infinitely many’ after ‘infinitely many’? This is a valid scenario according to the example of Hilbert’s Hotel (where the hotel has ‘infinitely many’ guests and can still accommodate ‘infinitely many’ more).
• I did not impose a priori rejection of the concept of ‘infinitely many’ in my argument. I found that if I assume this concept is true, then it leads to a contradiction.
When I pointed out the reasons why there is no contradiction, you replied that if you presume the contradiction to be applicable, then the properties of infinite sets cannot be utilized to show that the contradiction is inapplicable. I took this to be an a priori dismissal of actual infinites.
If, however, you want to assume that the concept of “infinitely many” is valid, then it would seem you should also assume that the properties of infinite sets– including cardinality– are valid. That is, of course, unless you can show some good reason for rejecting those properties.
Here you are imposing your a priori acceptance of the concept of ‘infinitely many’. You are effectively saying that because you already believe ‘infinitely many’ is valid, then any counter argument must therefore be invalid.
That’s not what I’m saying. I’m saying that if we assume the validity of the concept of “infinitely many,” which (as you’ve said) is the scenario which you were adopting for this argument, then the alleged contradiction which you presented is not actually present. There may be other valid counter-arguments. I’m not aware of any, but they may exist. I’m only speaking to the specific argument which you laid out.
Regarding decimal expansions, we can express one third as 0.333… to the n-th 3, plus 1/(3(10 to the power n)). Thus we can express the decimal digits to any required level of accuracy for our user (an engineer, say), and completely without loss of accuracy.
It is not “completely without loss of accuracy,” in the least. There is a completely obvious loss of accuracy. You and I have already agreed that no finite decimal expansion of 0.333… will ever equal 1/3. Therefore, any finite decimal expansion of 0.333… will bear a loss of accuracy as compared to 1/3.
Can you explain why there cannot be ‘infinitely many’ after ‘infinitely many’? This is a valid scenario according to the example of Hilbert’s Hotel (where the hotel has ‘infinitely many’ guests and can still accommodate ‘infinitely many’ more).
Hilbert’s Hotel requires manipulating the arrangement of guests in rooms. If you manipulate the arrangement of decimal digits in the number 0.333… in an analogous manner, then you will no longer have the number 0.333…
Nor is the concept of “after” even coherent when discussing the digits of 0.333… The word “after,” in this context, means “subsequent to the final term of the series.” As there is no final term of the series, there cannot be an “after.”
• I am not admitting that 0.3 recurring equals ‘infinitely many’ terms, which is what you are saying with that statement that includes “from n= 1 to infinity”.
What I did was I assumed ‘infinitely many’ was possible, and then I showed that this leads to a contradiction. Therefore statements such as ‘for n=1 to infinity’ and ‘infinitely many’ are invalid.
If my logic is correct and there is a contradiction, then Stevin’s real numbers and Cantor’s infinite sets are invalid and cannot be used as arguments against the original contradiction (i.e. ‘infinite sets’ and Aleph Null are inadmissible).
This is a fundamental problem and it needs to be addressed using fundamental core logic. .
But I think the meaning of your reply can still be gleaned. Your counter argument appears to be that after ‘infinitely many’ terms there are no more terms. This is the same as the first assertion in point (5) of my original argument.
Therefore my original argument still stands.
• If I propose 1=2, then both these counter arguments are invalid:
(1) Since I already know 1=2, this statement is true
(2) Since I already know 1 does not equal 2, this statement is false
The arguments against the validity of the statement should not start by assuming the validity or not of the statement.
If I point out to you that your argument is only valid if 1=2, then it has case (1) as its basis and is therefore not valid. My pointing this out to you does not mean that I am assuming (2).
You said: “Therefore, any finite decimal expansion of 0.333… will bear a loss of accuracy as compared to 1/3.”
But if you re-read what I said, you will notice that I am only expanding to the desired level of accuracy (i.e. to a finite number of ‘3’ digits) and then I am adding a ‘remainder’ expression that makes the whole sum equate exactly to 1/3. This maintains mathematical rigour.
You said “in an analogous manner, then you will no longer have the number 0.333…”
But whenever we want to find a square root, find a circumference, or perform a division don’t we use analogous processes?
Do you believe that:
(1) The division process when applied to 1 divided by 3 does not operate in an analogous manner. If so, how else can a division process operate?
(2) The division process does operate in an analogous manner, but the answer it reveals already exists somewhere, and it already has infinite length. If so, where does the square root of 2 exist and how can you retrieve the n-th character from this place?
Getting back to the main point…
Your counter argument to the alleged contradiction is “ if it is true that we have a completed, infinite set of all of the terms in the sum, then it cannot be true that we have infinitely many more terms which are not yet accounted for.”.
Here you appear to be saying we have accounted for all the terms. You have not said how this is possible without having a ‘last term’ and importantly you have not explained how the division process can end, which was the whole point of my original argument. This is not a very convincing counter argument.
• Apologies for my mixing up of ‘analogous’ and ‘algorithmic’ in my previous post!
• But if you re-read what I said, you will notice that I am only expanding to the desired level of accuracy (i.e. to a finite number of ‘3’ digits) and then I am adding a ‘remainder’ expression that makes the whole sum equate exactly to 1/3. This maintains mathematical rigour.
What is the purpose of this decimal expansion?
If the purpose is to create an approximation of 1/3 for use in calculations, then the remainder expression is extraneous, because it will not be used in those calculations. As such, there is certainly a loss in accuracy.
If the purpose is simply to create an expression equal to 1/3, then the whole exercise is simply redundant, as we could simply say “1/3” and be done with it.
Apologies for my mixing up of ‘analogous’ and ‘algorithmic’ in my previous post!
No worries. The point I was making is that Hilbert’s Hotel has no bearing on the discussion because we are not manipulating the digits of 0.333… in any way similar to the manipulation of vacancies in the Grand Hotel.
Here you appear to be saying we have accounted for all the terms. You have not said how this is possible without having a ‘last term’ and importantly you have not explained how the division process can end, which was the whole point of my original argument. This is not a very convincing counter argument.
I see no reason why a last term is necessary. All that is necessary for a number to have meaning is for that number to be well-defined. It is quite easy to define what we mean by the symbol, “0.333…” This definition does not require a last term.
Nor is the Division operation the same as the algorithm by which we evaluate Division. I can offer a number of different algorithms for evaluating Division. The fact that we can iterate a particular algorithm indefinitely does not imply anything about the operation which that algorithm is intended to evaluate.
• You said: “If the purpose is to create an approximation of 1/3 for use in calculations, then the remainder expression is extraneous”
Often a result may be required (by an engineer, say) to make a real-world measurement to a desired level of accuracy. Often such results can feed into other calculations. If the value fed into other calculations is not entirely accurate, then the margin of error increases making later results less accurate and potentially outside of the required level of accuracy.
You said: “It is quite easy to define what we mean by the symbol, “0.333…” This definition does not require a last term.”
But it is not at all obvious because I interpret this in a completely different way than you do. I see this as a process that is endless, whereas you see this as a fixed value that has no more terms after ‘infinitely many’.
You said: “The fact that we can iterate a particular algorithm indefinitely does not imply anything about the operation which that algorithm is intended to evaluate.”
I’m not sure I understand what you are getting at. Does this mean that although we try to evaluate pi, or a square root, or a division via processes, in a purer sense these are just transformations of static values into a different form? These values already exist; we are merely using processes to look at the values in different ways. This belief that all numbers/values already exist is known as mathematical platonism with a small ‘p’ (apologies for this sounding very silly if it is not what you mean).
I am still no closer to understanding your counter argument.
• Also…
Symbolic representations of endless iteration and/or endless recursion (often combined with phrases like ‘inductive proof’) are often purported to demonstrate that something is ‘infinite’ and thus contains ‘infinitely many’ elements.
But however many digits are produced by a division algorithm (or any other form of repetition/recursion), it can only ever be a finite number. It does not matter if we think of the algorithm as happening instantaneously; the count still cannot change from being finite to being infinite after some really big number, just like if you start counting the natural numbers you cannot reach ‘infinity’.
Therefore a cyclic algorithm that performs division cannot achieve a result containing ‘infinitely many’ digits. If the ‘division operation’ is something different to the algorithms used to perform division, then the division algorithms are invalid because they are not true to the definition of ‘division’. They should be scrapped and replaced by valid algorithms.
What is the true definition of ‘division’ if it is different to the algorithms? I would like this to be clarified so that I can understand how a result with ‘infinitely many’ digits can be achieved.
It appears we cannot produce ‘infinitely many’ digits and we cannot add ‘infinitely many’ digits and we cannot add up ‘infinitely many’ digits. So the only way we can attempt to ‘prove’ what infinitely many digits add up to is by arguments such as the 10x – x so-called ‘proof’ (which actually proves 0.999… does NOT equal 1 when done with rigour – as in the ‘Karma Peny’ video) and the epsilon-delta argument (which starts out by assuming that 0.999… can exist in ‘entirety’ and does have a static value – it effectively assumes what it sets out to prove).
With no valid mathematical definition for ‘infinitely many’ and with no valid proof, I see no reason to accept the concept.
• There is nothing real about the real numbers. Check out a good book on analysis and see that the words infinite and infinity are only used as a shorthand, and no claim for existence is made.
I can set up a sequence of partial sums of the sequence 0.5, 0.25, 0.125,… or 1/2, 1/4, 1/8, 1/16, … which has the same limit as the 0.9, 0.99, 0.999,…, which is 1.
The infinite, or never ending decimal expansion is a very non mathematical thing. It’s a shame that kids are obliged to deal with it using this description.
It is the limits of sequences that are the real numbers, and the only reason the word ‘number’ is used is that you can show that they obey the same rules of arithmetic as the rational numbers.
• Often a result may be required (by an engineer, say) to make a real-world measurement to a desired level of accuracy. Often such results can feed into other calculations. If the value fed into other calculations is not entirely accurate, then the margin of error increases making later results less accurate and potentially outside of the required level of accuracy
I’m not sure how this is meant to address my point. If you split the number into a non-repeating decimal approximation and an error correction, you are still performing your calculations upon the approximation. Carrying along the error correction doesn’t change this, and each new calculation made requires an amendment to that error correction if it’s going to ensure arbitrary accuracy, down the line. You might as well just keep track of the exact calculation, alone, rather than an approximation plus an exact error calculation.
But it is not at all obvious because I interpret this in a completely different way than you do. I see this as a process that is endless, whereas you see this as a fixed value that has no more terms after ‘infinitely many’.
And therein lies the problem. When mathematicians discuss the number 0.333… they are not referring to an iterative algorithm. They are referring to a number just as static as 1 or 4 or 7,362. If you insist on discussing a number as if it was an algorithm, then you are knocking down a Straw Man. If you want to show that the way mathematicians treat a particular number leads to a paradox or a contradiction or an absurdity, you need to address the actual way that mathematicians treat that number. You cannot substitute a different understanding than they utilize and expect it to be truly meaningful.
I’m not sure I understand what you are getting at. Does this mean although we try to evaluate pi, or a square root, or a division via processes, in a purer sense these are just transformations of static values into a different form.
They are not transformations. They are approximations. An approximation takes a number which does not have a simple form, when using a particular symbology, and substitutes another number which is arbitrarily close to the original, but which can be expressed in a simple form using that symbology.
This belief that all numbers/values already exist is known as mathematical platonism with a small ‘p’ (apologies for this sounding very silly if it is not what you mean).
I am not a mathematical platonist. I am a Nominalist, through and through. When I use the phrase, “a number exists,” I do not mean it in the same way as I mean, “a horse exists.” Rather, I simply mean that the number in question can be well-defined.
What is the true definition of ‘division’ if it is different to the algorithms? I would like this to be clarified so that I can understand how a result with ‘infinitely many’ digits can be achieved.
A Division is the ratio of two numbers. An algorithm can help us to evaluate this ratio, but the algorithm is not, itself, the ratio.
I am still no closer to understanding your counter argument.
Then I will narrow the scope. I see no reason to think that your assertion from (5) is true. Specifically, I see no reason to think that there could be any remainder after an infinite expansion of 0.333…
• First my reply to howardat58…
If a series has endless non-zero terms then it is incorrect to pretend it equals ‘its limit’. An endless series is a perfectly valid object to use in mathematics just by correctly representing its property of endlessness. It is wrong to pretend it is something that it is not.
You can define what the concept of a ‘limit’ means, and you can associate a limit with an endless series. But the only reason anyone ever does this is to then pretend that in some respect the endless series equals the limit, it does not. This is self-delusion.
Furthermore, the concept of a ‘limit’ can be misleading and it leads to inconsistencies.
(1) An example of ‘misleading’: An endless series can have alternating positive and negative terms, where the sum to the n-th term alternates above and below a certain value, but which gets ever closer to the value as the series extends. The n-th sum is not ‘limited’ by the so-called limit in this case.
(2) An example of ‘inconsistency’
Some endless series are said to have a limit and others are not, which is inconsistent. To determine if a series like 1 + 1/2 + 1/3 + 1/4 +… has a limit we have to perform something called a ‘convergence test’, which is not elegant.
It also causes inconsistencies on graphs. For example, the famous Zeta function is defined as being an endless series but when we do a graph for it, we actually plot the ‘limits’ and we pretend this represents the actual value of the function at the plotted x-axis values.
This produces the misconception that the Zeta function returns fixed values for input values greater than 1, but that it “blows up to infinity” for values of 1 and under (hence the need for ‘analytic continuations’).
But if we stop deluding ourselves that we are plotting the actual value of the function and we realise what we are actually plotting, we can plot any value without the need for analytic continuations.
For any given input value, the Zeta function produces an endless series. We can form an expression for the sum to the n-th term for this series. This expression has a fixed part and a variable part. If we plot the fixed part of the expression for the sum to the n-th term, then we can plot a value for any given input, and when the value is greater than 1 this will equal the so-called ‘limit’. We have no inconsistency here.
And my reply to Boxing Pythagoras…
You said: “When mathematicians discuss the number 0.333… they are not referring to an iterative algorithm. They are referring to a number just as static as 1 or 4 or 7,362.”
Exactly, it is the assertion that such a number, with ‘infinitely many’ digits is a valid concept. I assumed it was and found a contradiction. With no valid mathematical definition for ‘infinitely many’ and with no valid proof, I see no reason to accept the concept.
You said: “ I simply mean that the number in question can be well-defined.”
But as I said in my last point, a definition based on repetition/recursion does not define ‘infinitely many’ because however many loops you count, it will always be a finite value. If you start counting the natural numbers you cannot reach ‘infinity’; you can only reach finite values. If your definition is based on inductive repetition then it is not well-defined.
You said “A Division is the ratio of two numbers. An algorithm can help us to evaluate this ratio, but the algorithm is not, itself, the ratio.”
If this is correct then to do a division we just need to show the two numbers and a symbol to indicate ‘ratio’. All the algorithms that try to get a single value are incorrect and false.
• If a series has endless non-zero terms then it is incorrect to pretend it equals ‘its limit’.
Again, I see no reason why such a series wouldn’t equal its limit. The mathematics is consistent and provides reliable results. You haven’t yet demonstrated a reason to think otherwise.
An endless series can have alternating positive and negative terms, where the sum to the n-th term alternates above and below a certain value, but which gets ever closer to the value as the series extends. The n-th sum is not ‘limited’ by the so-called limit in this case.
Can you give an example, and explain why you do not feel that the proposed limit is the actual limit of the series?
(2) An example of ‘inconsistency’
Some endless series are said to have a limit and others are not, which is inconsistent. To determine if a series like 1 + 1/2 + 1/3 + 1/4 +… has a limit we have to perform something called a ‘convergence test’, which is not elegant.
The fact that a concept is defined only for a limited set of parameters does not imply that it is inconsistent. If the concept produced differing results for the same set of parameters, then it would be inconsistent. Can you provide any examples of such?
Nor do I find anything inelegant about the concept of a convergence test. Why do you find this inelegant?
Exactly, it is the assertion that such a number, with ‘infinitely many’ digits is a valid concept. I assumed it was and found a contradiction.
As I’ve stated several times, I do not agree that you have demonstrated any contradiction. You’ve baldly asserted that one exists, but I’ve seen no demonstration of this claim, as yet.
But as I said in my last point, a definition based on repetition/recursion does not define ‘infinitely many’ because however many loops you count, it will always be a finite value.
At no point did I say that the definition is based upon repetition or recursion. I simply said that the number needs to be well-defined.
That said, I disagree with your statement, even as regards infinite iterative definitions. Once again, it is fallacious to pretend that you can draw conclusions about an infinite number of iterations based upon the properties of finite numbers of iterations.
If this is correct then to do a division we just need to show the two numbers and a symbol to indicate ‘ratio’. All the algorithms that try to get a single value are incorrect and false.
Yes, when we want to perform an operation, we can symbolize this. That does not, in any way, imply that algorithms which are utilized in helping to evaluate such operations are “incorrect and false.” Algorithms are tools which we utilize to help perform an operation. They are not the operation, itself.
A hammer and nails are tools which are used to attach things together. A hammer and nails are not the concept of attachment. Similarly, the algorithms which we utilize to evaluate a division are tools. These algorithms are not, themselves, division.
• You said: “I see no reason why such a series wouldn’t equal its limit. The mathematics is consistent ”
Either an endless series is:
(1) Finite. This is clearly false as the series would have a last term and would not be endless.
(2) Infinite. If we assume this to be true and then try to prove that the sum of the series 0.999… equates to 1 then we fail, in fact we end up proving that it cannot equal 1 (As is shown from 06:00 to 08:45 in this video: https://www.youtube.com/watch?v=--HdatJwbQY ).
(3) Endless. This third option is usually not even contemplated, but mathematics is more than capable of modelling a series as an endless process with no fixed value.
Choosing option (2) leads to paradoxes (google ‘infinity paradox’), contradictions (most obvious of which is how can something that is endless somehow end), and lots of counter-intuitive results (such as the Ramanujan sums for diverging series).
Choosing (3) leads to provable results, no paradoxes, and no counter-intuitive results.
You said: “give an example, and explain why you do not feel that the proposed limit is the actual limit of the series”
My claim is that the word ‘limit’ is misleading. A limit is a value beyond which something does not pass, but in the case of an alternating series it passes this ‘limit’ after each subsequent term is included.
If you want an example, just take any endless series consisting of all positive terms and which is said to ‘converge’. Next multiply each term by [(-1) to the power (n-1)].
A big downside of the way the word ‘limit’ is interpreted is that when we try to plot values for the Zeta function we fail for input values of 1 and below. A better term in my opinion would be something like ‘fixed part’, which would be short for the fixed part of the expression for the n-th sum. Then we could plot the ‘fixed part’ of the Zeta function for any input value and we would no longer be limited by the word limit.
You said: “If the concept produced differing results for the same set of parameters, then it would be inconsistent. Can you provide any examples”
My understanding of the word inconsistent is “not staying the same throughout”. This is why I stand by my example. Another example is that if we accept that in decimal form, real numbers have ‘infinitely many’ digits to the right of the decimal point, then our notation system allows three ways to represent numbers that can be written with all trailing zeros/nines (e.g. 1, 0.999… and 1.000…) but it allows just one way to represent other numbers (e.g. 0.333…).
You said “Nor do I find anything inelegant about the concept of a convergence test. Why do you find this inelegant?”
Because I see an endless series as one type of object and I believe it should be treated in the same way in mathematics regardless of if the n-th sum is increasing, decreasing or alternating as n increases.
Performing a convergence test and then asserting that if a series converges it can be said to equate to a real number, but other endless series don’t, does not appear elegant to me. But this is a futile argument because beauty is in the eye of the beholder.
You said “ I do not agree that you have demonstrated any contradiction. ”
Put simply, the contradiction is this:
(a) After any amount of terms, there will still be more positive non-zero terms remaining.
(b) After ‘infinitely many’ terms there will be no more terms
You may assume ‘infinitely many’ is valid and devise your own logic for infinite objects, but unless you can demonstrably prove that there is some overlap with finite and infinite objects you have no basis to assert that (b) is right.
You cannot demonstrate how the division process can end, and you cannot prove mathematically that ‘infinitely many’ digits will make 0.333… equate to 1/3. Therefore you cannot assert that an object described in finite terms, like 1/3, has anything at all to do with your infinite object 0.333…
You said “At no point did I say that the definition is based upon repetition or recursion. I simply said that the number needs to be well-defined”
If you are going to connect things consisting of finite terms (like 1/3 and 1), to things supposedly consisting of ‘infinitely many’ terms, (like how you define 0.333… and 0.999…), then you need to clearly define how finite terms and infinite terms relate to one another.
If your definition leads to a clear understanding of how an infinite object can be constructed, then I will concede it is well-defined. Otherwise it is not and you cannot assert a relationship that you cannot prove exists.
You said “Algorithms are tools which we utilize to help perform an operation. They are not the operation, itself.”
Words are very important in mathematics. Complicated formulas have no intrinsic meaning without precise descriptions of what the symbols mean. At the lowest level, mathematics is all about clarity of understanding using words.
Unfortunately words can have multiple meanings, and this is where ambiguity is allowed to muddy the waters and create a lack of clarity.
One trick is the claim that ‘infinite’ means the same as ‘endless’ and then to claim that this means we must have ‘infinitely many’ of something.
Another trick is to say we can associate the limit of an endless series with the series itself, and in this way we can equate the series to a fixed value.
Why not call a ratio “a ratio” and call the process of division “division”? Either way, you know what I am talking about when I ask “how can the division process end?” To claim that it is not division is to avoid answering the question.
• (2) Infinite. If we assume this to be true and then try to prove that the sum of the series 0.999… equates to 1 then we fail, in fact we end up proving that it cannot equal 1 (As is shown from 06:00 to 08:45 in this video
Again, I do not see that “we end up proving that it cannot equal 1,” even accounting for Mr. Peny’s video.
Choosing option (2) leads to paradoxes (google ‘infinity paradox’), contradictions (most obvious of which is how can something that is endless somehow end), and lots of counter-intuitive results
I completely agree that there are counterintuitive results. However, beyond your bald assertions that they exist, you have yet to demonstrate that there are any legitimate paradoxes or contradictions implicit in the notion of infinity.
My claim is that the word ‘limit’ is misleading. A limit is a value beyond which something does not pass
This is not what the word “limit” means in mathematics. A brief discussion of the mathematical definition for “limit” can be found, here: http://www.wolframalpha.com/input/?i=mathematical+definition+of+limit
If you are basing your objections to mathematical limits based upon the definition you gave, you are committing an equivocation fallacy.
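For concreteness, the definition in question, in standard notation: $\lim_{n\to\infty}a_n = L$ means that for every $\varepsilon>0$ there exists an $N$ such that $|a_n - L| < \varepsilon$ for all $n>N$. Applied to the partial sums of 0.999…, we have $|1 - a_n| = 10^{-n}$, which falls below any given $\varepsilon$ once $n > \log_{10}(1/\varepsilon)$. Nothing in this definition involves a value “beyond which something does not pass.”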
My understanding of the word inconsistent is “not staying the same throughout”.
Again, this is not what mathematicians mean by “inconsistent.” So, again, using such a definition to object to mathematics is to commit an equivocation fallacy.
Put simply, the contradiction is this:
(a) After any amount of terms, there will still be more positive non-zero terms remaining.
(b) After ‘infinitely many’ terms there will be no more terms
The problem is that you have not demonstrated (a) to be true. I will agree that after any finite amount of terms, there will still be more terms remaining. You have not demonstrated that this extends to any infinite amount of terms, as well.
You cannot demonstrate how the division process can end
Once again, division is not a process. The algorithms which we utilize to evaluate division are processes. Division is a mathematical operation. Mathematical operations are descriptions of relationships between quantities.
Words are very important in mathematics.
I completely agree! This is why your equivocation fallacies are particularly egregious.
One trick is the claim that ‘infinite’ means the same as ‘endless’ and then to claim that this means we must have ‘infinitely many’ of something.
Can you cite a single Set Theorist or textbook which has ever claimed “that ‘infinite’ means the same as ‘endless’ and [therefore] this means we must have ‘infinitely many’ of something?” Because I’ve never seen such a claim.
It is true that the concept of “infinity” has an unfortunate bit of ambiguity attached to it. It can refer to an indefinitely repeating iterative process, as you mention. However, it can also refer to a property of numbers. These are two entirely different concepts. When we talk about the quantity of digits in 0.999…, we are referring to a property of numbers; we are not referring to an iterative process. Objecting to the numeric property based upon an understanding of iterative processes is yet another equivocation fallacy.
Why not call a ratio “a ratio” and call the process of division “division”?
Because division is not a process.
Either way, you know what I am talking about when I ask “how can the division process end?” To claim that it is not division is to avoid answering the question.
It’s certainly not avoiding answering the question. It’s pointing out that the question is malformed and irrelevant. The fact that a particular algorithm for evaluating division may iterate indefinitely does not imply anything about the division, itself.
• You cannot prove that objects that can be expressed in a finite number of terms (like 1/3 or 1) are in any way related to the objects you claim to contain ‘infinitely many’ terms (in this case, namely the decimals 0.333… and 0.999…).
The well known proof of 10x – x proves the reverse. You cannot show how an algorithm can get from one to the other, and all you are left with are assertions and assumptions.
• However, it has been very enjoyable discussing this with you, so thank you very much.
As we have got to the point where we are repeating ourselves, I think this has run its course.
Once again many thanks, Bye.
• You cannot prove that objects that can be expressed in a finite number of terms (like 1/3 or 1) are in any way related to the objects you claim to contain ‘infinitely many’ terms (in this case, namely the decimals 0.333… and 0.999…).
Once again, with the exception of only a tiny few, every mathematician on the planet disagrees with you. It’s fairly simple to prove that 0.999…=1.
The well known proof of 10x – x proves the reverse. You cannot show how an algorithm can get from one to the other, and all you are left with are assertions and assumptions.
It most certainly does not “prove the reverse.” And I can quite easily show how an algorithm “can get from one to the other.” Some specific, iterative algorithms can be indefinitely repeated, but that does not imply that they cannot be shown to produce this particular result; and it certainly doesn’t imply that there are no algorithms which can demonstrate this particular result.
However, it has been very enjoyable discussing this with you, so thank you very much.
As we have got to the point where we are repeating ourselves, I think this has run its course.
Once again many thanks, Bye.
I have also enjoyed this discussion, and thank you very much for it.
• “(9) The ‘epsilon-delta’ argument says that 0.999… must equal 1 because there is no number between the two.”
No! The beauty of the epsilon-delta argument is that it is totally finite.
4. I understand the instinctive response that 0.999… shouldn’t equal 1. It’s hard to shake the feeling one gets from a finite expansion of 0.999…999. But it should at least be intellectually convincing when you try working out how big the difference between the infinite series and 1 should be.
5. I give up!
# One end of a horizontal rope is attached to a prong of an electrically driven tuning fork that vibrates at a frequency of 119 Hz. The other end passes over a pulley and supports a mass of 1.50 kg. The linear mass density of the rope is 0.0500 kg/m. What is the speed of a transverse wave on the rope?
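A sketch of the standard solution, assuming $g \approx 9.8\ \mathrm{m/s^2}$ (note the driving frequency is not needed for the wave speed itself): the tension is supplied by the hanging mass, $T = mg = (1.50)(9.8) = 14.7\ \mathrm{N}$, so

$v = \sqrt{T/\mu} = \sqrt{14.7/0.0500} = \sqrt{294} \approx 17.1\ \mathrm{m/s}$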
|
In a world filled with uncertainty, he was glad to have many good friends. He had always assisted them in times of need and was confident that they would reciprocate. However, the events of the last week proved him wrong. Which of the following inference(s) is/are logically valid and can be inferred from the above passage?
1. His friends were always asking him to help them.
2. He felt that when in need of help, his friends would let him down.
3. He was sure that his friends would help him when in need.
4. His friends did not help him last week.
1. $(i)$ and $(ii)$
2. $(iii)$ and $(iv)$
3. $(iii)$ only
4. $(iv)$ only
|
Practice the questions of McGraw Hill Math Grade 8 Answer Key PDF Lesson 14.4 Slope to secure good marks & knowledge in the exams.
Exercises
FIND THE SLOPE
Question 1.
$$\frac{5.6}{10}$$,
Explanation:
Using two of the points on the line, we can find the slope of the line by finding the rise and the run. The vertical change between two points is called the rise and the horizontal change is called the run. The slope equals the rise divided by the run:
Slope = rise ÷ run = (y2 – y1) ÷ (x2 – x1),
We have given points as (-2, -5.1) and (8, 0.5),
So slope = (0.5 – (-5.1))/ (8 -(-2)) = 5.6/10.
Question 2.
–$$\frac{8.1}{14}$$,
Explanation:
Using two of the points on the line, we can find the slope of the line by finding the rise and the run. The vertical change between two points is called the rise and the horizontal change is called the run. The slope equals the rise divided by the run:
Slope = rise ÷ run = (y2 – y1) ÷ (x2 – x1),
We have given points as (-7, 7) and (7, -1.1),
So slope = (-1.1 – 7)/ (7 -(-7)) = -8.1/14.
Question 3.
y = 6x – 2
Slope is 6,
Explanation:
A slope of a line is the change in y coordinate with respect to the change in x coordinate. The equation of the line is written in the slope-intercept form, which is: y = mx + b, where m represents the slope and b represents the y-intercept. Therefore the given line y = 6x – 2 has slope 6.
Question 4.
y – x = 4
Slope is 1,
Explanation:
A slope of a line is the change in y coordinate with respect to the change in x coordinate. The equation of the line is written in the slope-intercept form, which is: y = mx + b, where m represents the slope and b represents the y-intercept. Therefore the given line y – x = 4, y = x + 4, has slope 1.
Question 5.
-3 = $$\frac{2}{3}$$x – y
Slope is $$\frac{2}{3}$$,
Explanation:
A slope of a line is the change in y coordinate with respect to the change in x coordinate. The equation of the line is written in the slope-intercept form, which is: y = mx + b, where m represents the slope and b represents the y-intercept. Therefore the given line -3 = $$\frac{2}{3}$$x – y, y = $$\frac{2}{3}$$x + 3, has slope $$\frac{2}{3}$$.
Question 6.
x + y = -5
Slope is -1,
Explanation:
A slope of a line is the change in y coordinate with respect to the change in x coordinate. The equation of the line is written in the slope-intercept form, which is: y = mx + b, where m represents the slope and b represents the y-intercept. Therefore the given line x + y = -5, y = -x – 5, has slope -1.
|
# Need clarity, kindly explain! A metal coin of mass 5 g and radius 1 cm is fixed to a thin stick AB of negligible mass as shown in the figure. The system is initially at rest. The constant torque that will make the system rotate about AB at 25 rotations per second in 5 s is close to:
A metal coin of mass 5 g and radius of 1 cm is fixed to a thin stick AB of negligible mass as shown in the figure. The system is initially at rest. The constant torque, that will make the system rotate about AB at 25 rotations per second in 5 s, is close to:
• Option 1)
$4.0\times 10^{-6}Nm$
• Option 2)
$1.6\times 10^{-5}Nm$
• Option 3)
$7.9\times 10^{-6}Nm$
• Option 4)
$2.0\times 10^{-5}Nm$
$\alpha =\frac{\Delta \omega }{\Delta t}=\frac{25\times 2\pi }{5}=10\pi\, \, \, rad/s^{2}$
$\tau = I\alpha =\left ( \frac{5}{4}MR^{2} \right )\alpha$, where $\frac{5}{4}MR^{2}$ is the moment of inertia of the coin about the tangent axis AB ($\frac{1}{4}MR^{2}$ about a diameter, plus $MR^{2}$ from the parallel-axis theorem)
$=\frac{5}{4}\times 5\times 10^{-3}\times \left ( 10^{-2} \right )^{2}\times 10\pi$
$\simeq 2\times 10^{-5}Nm$
The computed value corresponds to Option 4) $2.0\times 10^{-5}Nm$.
|
Linear Algebra Home
## Geometry of Solution Sets
Homogeneous linear system: A system of linear equations is homogeneous if its matrix equation is $$A\overrightarrow{x}=\overrightarrow{0}$$. Note that $$\overrightarrow{0}$$ is always a solution called the trivial solution. Any nonzero solution is called a nontrivial solution.
Example.
1. $\begin{eqnarray*} \begin{array}{rcrcrcr} x_1&+&x_2 &-&x_3&=&0\\ &&3x_2&-&2x_3&=&0 \end{array} \end{eqnarray*}$ The corresponding matrix equation $$A\overrightarrow{x}=\overrightarrow{0}$$ has the solution set $\left\lbrace s \left[\begin{array}{r} 1\\ 2\\ 3 \end{array} \right] \; |\; s\in \mathbb R \right\rbrace$ which is also denoted by $$\operatorname{Span}\left\lbrace \left[\begin{array}{r} 1\\ 2\\ 3 \end{array} \right]\right\rbrace$$. This solution set corresponds to the points on the line in the 3-space $$\mathbb R^3$$ passing through the point $$(1,2,3)$$ and the origin $$(0,0,0)$$. Recall that the vector $$\left[\begin{array}{r} 1\\ 2\\ 3 \end{array} \right]$$ is the position vector of the point $$(1,2,3)$$ which is a directed line segment from the origin $$(0,0,0)$$ to the point $$(1,2,3)$$.
2. $x_1-x_2-2x_3=0$ The corresponding matrix equation $$A\overrightarrow{x}=\overrightarrow{0}$$ has the solution set $\left\lbrace s \left[\begin{array}{r}1\\ 1\\ 0 \end{array} \right] +t \left[\begin{array}{r}2\\ 0\\ 1 \end{array} \right] \; |\; s,t\in \mathbb R \right\rbrace =\operatorname{Span}\left\lbrace \left[\begin{array}{r} 1\\ 1\\0 \end{array} \right],\;\left[\begin{array}{r} 2\\0\\1 \end{array} \right] \right\rbrace.$ This solution set corresponds to the points on the plane in the 3-space $$\mathbb R^3$$ passing through the points $$(1,1,0)$$, $$(2,0,1)$$, and the origin $$(0,0,0)$$.
Remark. If $$A\overrightarrow{x}=\overrightarrow{0}$$ has $$k$$ free variables, then its solution set is the span of $$k$$ vectors.
The solution set of $$A\overrightarrow{x}=\overrightarrow{0}$$ is $$\operatorname{Span}\{\overrightarrow{v_1},\ldots,\overrightarrow{v_k}\}$$ for some vectors $$\overrightarrow{v_1},\ldots,\overrightarrow{v_k}$$.
The solution set of $$A\overrightarrow{x}=\overrightarrow{b}$$ is $$\{\overrightarrow{p}+\overrightarrow{v}\:|\: A\overrightarrow{v}=\overrightarrow{0}\}$$ where $$A\overrightarrow{p}=\overrightarrow{b}.$$
So a nonhomogeneous solution is a sum of a particular solution and a homogeneous solution. To justify it, let $$\overrightarrow{y}$$ be a solution of $$A\overrightarrow{x}=\overrightarrow{b}$$, i.e., $$A\overrightarrow{y}=\overrightarrow{b}$$. Then $A(\overrightarrow{y}-\overrightarrow{p})=\overrightarrow{b}-\overrightarrow{b}=\overrightarrow{0}.$ Then $$(\overrightarrow{y}-\overrightarrow{p})=\overrightarrow{v}$$ where $$A\overrightarrow{v}=\overrightarrow{0}$$. Thus $$\overrightarrow{y}=\overrightarrow{p}+\overrightarrow{v}$$.
Geometrically we get the solution set of $$A\overrightarrow{x}=\overrightarrow{b}$$ by shifting the solution set of $$A\overrightarrow{x}=\overrightarrow{0}$$ to the point whose position vector is $$\overrightarrow{p}$$ along the vector $$\overrightarrow{p}$$.
Example. The nonhomogeneous system $$x_1-x_2-2x_3=-2$$ has a particular solution $$\overrightarrow{p}=\left[\begin{array}{r} 1\\ 1\\ 1 \end{array} \right]$$. The corresponding homogeneous system $$x_1-x_2-2x_3=0$$ has the solution set $\left\lbrace s \left[\begin{array}{r}1\\ 1\\ 0 \end{array} \right] +t \left[\begin{array}{r}2\\ 0\\ 1 \end{array} \right] \; |\; s,t\in \mathbb R \right\rbrace.$ Thus the solution set of the nonhomogeneous system $$x_1-x_2-2x_3=-2$$ is $\left\lbrace \left[\begin{array}{r} 1\\ 1\\ 1 \end{array} \right] +s \left[\begin{array}{r}1\\ 1\\ 0 \end{array} \right] +t \left[\begin{array}{r}2\\ 0\\ 1 \end{array} \right] \; |\; s,t\in \mathbb R \right\rbrace.$
The solution set of $$A\overrightarrow{x}=\overrightarrow{b}$$ is a translation of that of $$A\overrightarrow{x}=\overrightarrow{0}$$.
|
# A Quick Markdown Syntax Error Detection for Writing MathJax Equations in Octopress Posts (2)
## Problem
In the previous post in this series, I’ve listed a number of steps and included a group of Vim editor commands for screening out kramdown syntax errors. This is useful for Octopress posts with ordinary contents. However, if one wants to write some math equations using MathJax, then one will encounter great difficulties.1
Even though the kramdown command line utility enables users to instantly convert code across different formats, and a web browser enables them to notice any syntax error, these tools just help them to find the mistakes, but not the solution. In the cited post in the first footnote, I spent hours realising that exactly eight ‘\’s were needed to break the current line in MathJax. After that, I felt that using eight ‘\’s and adding a ‘\’ before a ‘_’ in MathJax equations interrupted my thinking.
## Solution
Reading an old post on removing Linux kernels on Ubuntu, I remembered that the markdown attribute in <div markdown="1"> enabled kramdown to interpret the Markdown code within the <div> block. Therefore, I tried to surround the HTML code for an inline math expression with a <span> tag. Unluckily, unlike <div> blocks, the surrounded code was still interpreted by kramdown.
After that, I read the syntax documentation. It claimed that setting markdown="0" disabled parsing of contents inside the tag except for <span> tags.2 Though the manual says that it’s impossible, I still inserted it into the <span> tag which surrounded the code for the MathJax math expression, because it didn’t take much time to see the result: the inline MathJax expression could be successfully displayed.3
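For instance, a line like the following (a minimal sketch; the \(...\) delimiters assume MathJax's usual inline-math configuration) is left alone by kramdown while MathJax still renders it in the browser:
<p>The determinant is <span markdown="0">\(ad - bc\)</span> in the 2x2 case.</p>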
1. For example, it’s troublesome to start a new line
2. See the first bullet point of the section “HTML Spans” in kramdown’s syntax documentation
3. See A Group of 689 Elements and its source code in Blog 1 at commit af4f216 for a working example.
|
# Portal Frames
Bending Moments for Portal Frames
## Introduction
Portal frame construction is a method of building and designing simple structures, primarily using steel or steel-reinforced precast concrete although they can also be constructed using laminated timber such as glulam. The connections between the columns and the rafters are designed to be moment-resistant, i.e. they can carry bending forces.
A beam is a horizontal structural element that is capable of withstanding load primarily by resisting bending. The bending force induced into the material of the beam as a result of the external loads, own weight, span and external reactions to these loads is called a bending moment.
Because of these very strong and rigid joints some of the bending moment in the rafters is transferred to the columns. This means that the size of the rafters can be reduced or the span can be increased for the same size rafters. This makes portal frames a very efficient construction technique to use for wide span buildings.
Portal frame construction is therefore typically seen in warehouses, barns and other places where large, open spaces are required at low cost and a pitched roof is acceptable.
## Portal Frames.
The drawing shows a Portal Frame in which the ends $A$ and $D$ are fixed vertically and a distributed load is carried on $BC$.
If $M_1$ and $M_2$ are the Bending Moments at $A$ and $B$, then the B.M. diagrams for $AB$ and $BC$ are as shown. As the joints at $B$ and $C$ are rigid, the angle $\phi$ is the same for $AB$ and $BC$.
For an upright $AB$.
The intercept at $B=0$ i.e. using moment-areas about $B$ (See Bending of Beams Part 3)
$$\frac{1}{2}(M_1+M_2)\times 2l\times\frac{4l}{3}-M_2\times 2l\times l=0$$
$$\therefore\quad M_1=\frac{1}{2}M_2\;\;\;\;(1)$$
As $\phi$ is very small
$$\phi=\frac{Z_1}{2l}=\frac{M_2\times 2l\times l-\frac{1}{2}(M_1+M_2)\times 2l\times\frac{2l}{3}}{2EIl}=\frac{M_2\,l}{2EI}\;\;\;\;(2)$$
For the top of the Frame $BC$
$$\phi=\frac{Z_2}{l}=\frac{\left[\frac{2}{3}\left(\frac{w\,l^2}{8}\right)l\right]\times\frac{l}{2}-M_2\times l\times\frac{l}{2}}{EIl}=\frac{\frac{w\,l^3}{24}-\frac{M_2\,l}{2}}{EI}\;\;\;\;(3)$$
Equating Equations (2) and (3), $M_2=\displaystyle\frac{w\,l^2}{24}$
And from Equation (1), $M_1=\displaystyle\frac{w\,l^2}{48}$
The maximum Bending Moment occurs at the middle of $BC$ and
$$\hat{M}=\frac{w\,l^2}{8}-\frac{w\,l^2}{24}=\frac{w\,l^2}{12}$$
|
1. ## Cauchy Criterion.
Prove that the following infinite sum is converges using Cauchy Criterion.
$\displaystyle \Sigma^{\infty}_{n=1} \frac{cos{nx}-cos{(n+1)x}}{n}$
$\displaystyle x-Constant$
2. Just to clarify, you're trying to show that
$\displaystyle \sum_{n=1}^{\infty} \frac{\cos(nx)-\cos((n+1)x)}{n}$
converges, correct? And you're required to use the Cauchy criterion?
What have you done so far?
3. Yes.
I say the next thing:
$\displaystyle \sum^{\infty}_{n=1} \frac{\cos(nx)-\cos((n+1)x)}{n}=2\sin\left(\tfrac{1}{2}x\right)\sum^{\infty}_{n=1}\frac{\sin\left(\frac{2n+1}{2}x\right)}{n}$
So, I need to prove that for every $\displaystyle \epsilon >0$, exist $\displaystyle N(\epsilon)$, for all $\displaystyle n$, $\displaystyle n>N(\epsilon)$, and for all $\displaystyle p$ natural:
$\displaystyle |S_{n+p}-S_n|<\epsilon$
I choose $\displaystyle p=n$.
$\displaystyle |S_{2n}-S_n|=\left|\frac{\sin\left(\frac{2(n+1)+1}{2}x\right)}{n+1}+\cdots+\frac{\sin\left(\frac{4n+1}{2}x\right)}{2n}\right|\le\frac{1}{n+1}+\cdots+\frac{1}{2n}<n\cdot\frac{1}{n+1}<1$
My $\displaystyle \epsilon$ is not containing $\displaystyle n$...
Choose $\displaystyle \epsilon >0$ and write:
$\displaystyle x_n= \cos(nx)$
$\displaystyle a_n = \frac{1}{n}(x_n-x_{n+1})$
We need to show that for all $\displaystyle \epsilon>0$ we can find $\displaystyle N$ such that for all $\displaystyle n>N$ and $\displaystyle p\geq 1$ we have $\displaystyle A_{n,p}:= |a_n+\cdots +a_{n+p}|< \epsilon$
We can write: $\displaystyle A_{n,p}= |\frac{x_n}{n}-\frac{x_{n+1}}{n(n+1)}-\frac{x_{n+2}}{(n+1)(n+2)}-\cdots - \frac{x_{n+p}}{(n+p-1)(n+p)}-\frac{x_{n+p+1}}{n+p}|$
since $\displaystyle -1\leq \cos(nx)\leq 1$ for all $\displaystyle n$ we may assume (with triangle inequality)
$\displaystyle A_{n,p}\leq \frac{1}{n}+\frac{1}{n(n+1)}+\frac{1}{(n+1)(n+2)}+ \cdots + \frac{1}{(n+p-1)(n+p)}+\frac{1}{n+p}$
Now it can't be too hard to show that we can find $\displaystyle N$ such that for all $\displaystyle n>N$ and $\displaystyle p\geq 1$ we have
$\displaystyle \frac{1}{n}+ \frac{1}{n+p}< \frac{\epsilon}{2}$
and
$\displaystyle \frac{1}{n(n+1)}+\frac{1}{(n+1)(n+2)}+\cdots + \frac{1}{(n+p-1)(n+p)} < \frac{\epsilon}{2}$
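To fill in that last step (a short sketch): the middle block telescopes, since $\displaystyle \frac{1}{(k-1)k}=\frac{1}{k-1}-\frac{1}{k}$, so
$\displaystyle \frac{1}{n(n+1)}+\frac{1}{(n+1)(n+2)}+\cdots+\frac{1}{(n+p-1)(n+p)}=\frac{1}{n}-\frac{1}{n+p}<\frac{1}{n}<\frac{\epsilon}{2}$ once $n>\frac{2}{\epsilon}$,
and $\displaystyle \frac{1}{n}+\frac{1}{n+p}<\frac{2}{n}<\frac{\epsilon}{2}$ once $n>\frac{4}{\epsilon}$. So any $N\geq\frac{4}{\epsilon}$ works.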
|
Compressed Series (Part II)
Last week we looked at an alternative series to compute $e$, and this week we will have a look at the computation of $e^x$. The usual series we learn in calculus textbook is given by
$\displaystyle e^x=\sum_{n=0}^\infty \frac{x^n}{n!}$
We can factorize the expression as
$\displaystyle e^x=\sum_{n=0}^\infty \frac{x^n}{n!}=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\cdots$
We can make the factorials and power disappear. Indeed, we can rewrite the above as
$\displaystyle =1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\cdots$
$\displaystyle =1+x+\frac{x^2}{2!}+\frac{x^3}{3!}\left(1+\frac{x}{4}\left(1+\cdots\right)\right)$
$\displaystyle =1+x+\frac{x^2}{2!}\left(1+\frac{x}{3}\left(1+\frac{x}{4}\left(1+\cdots\right)\right)\right)$
$\displaystyle =1+x\left(1+\frac{x}{2}\left(1+\frac{x}{3}\left(1+\frac{x}{4}\left(1+\cdots\right)\right)\right)\right)$
This series already seems much easier to compute. A straightforward C implementation leads to something like
/* float_x is assumed to be defined earlier in this series, e.g. typedef double float_x; */
float_x naive_exp(float_x x, int nb_terms)
{
float_x s=1;  // running sum, starts with the n=0 term
float_x n=1;  // accumulates i!
float_x xx=1; // accumulates x^i
for (int i=1;i<=nb_terms;i++)
{
n*=i;
xx*=x;
s+=xx/n;
}
return s;
}
Let us now look at series compression in this case. It’s a bit more laborious than for the series for $e$. Indeed, if we group terms two by two, we find, somewhere in the series, the successive terms
$\displaystyle \cdots+\left(\frac{x^n}{n!}+\frac{x^{n+1}}{(n+1)!}\right)+\cdots$
which we rewrite as
$\displaystyle \cdots+\left(\frac{x^{2k}}{(2k)!}+\frac{x^{2k+1}}{(2k+1)!}\right)+\cdots$
which we simplify to find
$\displaystyle \cdots+\left(\frac{x^{2k}(2k+1+x)}{(2k+1)!}\right)+\cdots$
which yields, again, a series with a convergence rate roughly double that of the original.
This yields the following code:
float_x compressed_exp(float_x x, int nb_terms)
{
float_x s=0;
float_x k_bang=1; // accumulates (2k+1)!
float_x xx=1;     // accumulates x^(2k)
for (int k=0,t=0;k<=nb_terms;k++,t+=2,k_bang*=(t+1)*t)
{
s+=( xx*(t+1+x) )/ k_bang; // term: x^(2k) (2k+1+x) / (2k+1)!
xx*=(x*x); // x^(2k)
}
return s;
}
Let us now compare the convergence for, say, $n=3$. We get
| it. | Naive | Compressed |
|-----|-------|------------|
| 1 | 1 | 4 |
| 2 | 4 | 13 |
| 3 | 8.5 | 18.4 |
| 4 | 13 | 19.84642857142857 |
| 5 | 16.375 | 20.06339285714286 |
| 6 | 18.4 | 20.08410308441558 |
| 7 | 19.4125 | 20.08546859390609 |
| 8 | 19.84642857142857 | 20.08553443097081 |
| 9 | 20.00915178571428 | 20.08553685145113 |
| 10 | 20.06339285714285 | 20.08553692151767 |
| 11 | 20.07966517857142 | 20.08553692315559 |
| 12 | 20.08410308441558 | 20.08553692318715 |
| 13 | 20.08521256087662 | 20.08553692318766 |
| 14 | 20.08546859390609 | 20.08553692318767 |
| 15 | 20.08552345812669 | 20.08553692318767 |
| 16 | 20.08553443097081 | 20.08553692318767 |
| 17 | 20.08553648837908 | 20.08553692318767 |
| 18 | 20.08553685145113 | 20.08553692318767 |
| 19 | 20.08553691196314 | 20.08553692318767 |
| 20 | 20.08553692151767 | 20.08553692318767 |
| 21 | 20.08553692295084 | 20.08553692318767 |
| 22 | 20.08553692315558 | 20.08553692318767 |
| 23 | 20.08553692318350 | 20.08553692318767 |
| 24 | 20.08553692318715 | 20.08553692318767 |
| 25 | 20.08553692318760 | 20.08553692318767 |
| 26 | 20.08553692318765 | 20.08553692318767 |
| 27 | 20.08553692318766 | 20.08553692318767 |
| 28 | 20.08553692318766 | 20.08553692318767 |
| 29 | 20.08553692318766 | 20.08553692318767 |
| 30 | 20.08553692318766 | 20.08553692318767 |
Again, we see that the compressed series converges more rapidly. It reaches the “exact” value $20.08553692318767$ (as computed by the C stdlib exp function, the reference in this case) at iteration 14, while the naive series undershoots a bit, even after 30 iterations.
*
* *
Of course, if one looks at the series
$\displaystyle \sum_{k=0}^\infty \frac{x^{2k}(2k+1+x)}{(2k+1)!}$,
he could be hard-pressed to figure out where it comes from. If you’re going to use it in code, you should leave an explanatory comment, something that explains how to derive this series and why you’re using it. This also holds if you’re writing a mathematical text. The reader may not be stupid, but he is, I suppose, unlikely to figure out all the tricks you use by himself—and that’s why he’s reading you in the first place.
|
# Is there any way that I can disable FilteringEnabled on my game?
I’m trying to disable FilteringEnabled on my game, but I haven’t found any way to do so yet. Any suggestions?
I tried:
game.Workspace.FilteringEnabled = false
I don't believe they allow this to happen anymore, or so I have been told.
|
# A cube is made up of equal smaller cubes. Two of the sides
A cube is made up of equal smaller cubes. Two of the sides of the larger cube are called A and B. What is the total number of smaller cubes?
(1) When n smaller cubes are painted on A, n is 1/9 of the total number of smaller cubes.
(2) When m smaller cubes are painted on B, m is 1/3 of the total number of smaller cubes
This question does not make any sense. Could you please let us know the source of this question.
SravnaTestPrep wrote:
A cube is made up of equal smaller cubes. Two of the sides of the larger cube are called A and B. What is the total number of smaller cubes?
(1) When n smaller cubes are painted on A , n is 1/9 of the total number of smaller cubes.
(2) When m smaller cubes are painted on B, m is 1/3 of the total number of smaller cubes
Fact 1: When a cube is made of equal smaller cubes, the number of smaller cubes is $$r^3$$ where r is a natural number.
Fact 2: Out of the six sides of the bigger cube, we are considering two sides and have named them A and B resp.
Statement 1: when on side A, n smaller cubes are painted we have that value of n equal to 1/9 of the total number of smaller cubes. So n is divisible by 9. Or the number of smaller cubes is a multiple of 9. Statement is not sufficient to answer the question because for example both 27 smaller cubes and 729 smaller cubes satisfy the conditions that the number of smaller cubes is a multiple of 9 and satisfies fact 1.
Statement 2: In this case the number of smaller cubes is a multiple of 3. The number of smaller cubes cannot be greater than $$3^3$$ based on this statement alone.
Assume first that the number of smaller cubes which is a multiple of 3 and satisfies fact 1 is 27. In this case each side has 9 smaller cubes exposed. If the number of smaller cubes painted, which is m, is 9, we have m = 1/3 of the number of smaller cubes.
Assume next that the number of smaller cubes which is a multiple of 3 and satisfies fact 1 is 216. In this case each side has 36 smaller cubes exposed. So you could paint a maximum of 36 smaller cubes. Thus the maximum value of m is 36/216 = 1/6, which cannot be equal to 1/3.
For higher multiples of 3, the value of m would keep on decreasing and would never satisfy the conditions mentioned.
Thus statement 2 alone is sufficient to answer this question.
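One way to compact this reasoning (a sketch; write $x$ for the number of small cubes along one edge, so the total is $x^3$ and each face exposes $x^2$ of them): since the $m$ painted cubes all lie on a single face,
$m=\frac{x^3}{3}\le x^2 \implies x\le 3,$
and for $m$ to be a whole number $x^3$ must be a multiple of 3, which forces $x=3$, i.e. a total of $3^3=27$ smaller cubes.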
Just to double check, isn't the number of smaller cubes within a larger cube $$r^3$$ not $$r^r$$?
nphilli1 wrote:
Just to double check, isn't the number of smaller cubes within a larger cube $$r^3$$ not $$r^r$$?
You are right. Made the correction.
SravnaTestPrep wrote:
SravnaTestPrep wrote:
A cube is made up of equal smaller cubes. Two of the sides of the larger cube are called A and B. What is the total number of smaller cubes?
(1) When n smaller cubes are painted on A , n is 1/9 of the total number of smaller cubes.
(2) When m smaller cubes are painted on B, m is 1/3 of the total number of smaller cubes
Fact 1: When a cube is made of equal smaller cubes, the number of smaller cubes is $$r^3$$ where r is natural number.
Fact 2: Out of the six sides of the bigger cube, we are considering two sides and have named them A and B resp.
Statement 1: when on side A, n smaller cubes are painted we have that value of n equal to 1/9 of the total number of smaller cubes. So n is divisible by 9. Or the number of smaller cubes is a multiple of 9. Statement is not sufficient to answer the question because for example both 27 smaller cubes and 729 smaller cubes satisfy the conditions that the number of smaller cubes is a multiple of 9 and satisfies fact 1.
Statement 2: In this case the number of smaller cubes is a multiple of 3. The number of smaller cubes cannot be greater than $$3^3$$ based on this statement alone.
Assume first that the number of smaller cubes which is a multiple of 3 and satisfies fact 1 is 27. In this case each side has 9 smaller cubes exposed. If the number of smaller cubes painted, which is m, is 9, we have m = 1/3 of the number of smaller cubes.
Assume next that the number of smaller cubes which is a multiple of 3 and satisfies fact 1 is 216. In this case each side has 36 smaller cubes exposed. So you could paint a maximum of 36 smaller cubes. Thus the maximum value of m is 36/216 = 1/6, which cannot be equal to 1/3.
For higher multiples of 3, the value of m would keep on decreasing and would never satisfy the conditions mentioned.
Thus statement 2 alone is sufficient to answer this question.
Sorry to point this out but something here is leading to confusion.
First of all if n is the total number of smaller cubes then each surface would have n/6 cubes exposed. This is what I derive from the green highlighted text where it says if 216 is total number of cubes then each surface would have 36 exposed which means we are doing 216/6= 36
By the same logic if 27 is the total number of smaller cubes then each side should have 27/6 number of cubes exposed which means 4.5cubes.
so you can paint a maximum of 4.5 cubes.
(So how come you say that for 27 total smaller cubes each side would have 9 cubes exposed? Shouldn't it be 27/6, as you did for 216/6?)
4.5/27= 1/6 , this is what we were getting for 216 cubes too!, so we cannot have 1/3 hence it seems that even 27 total smaller cubes is not a possibility.
Please let me know if I have not understood something.
I am confused.
Quote:
Assume first that the number of smaller cubes which is a multiple of 3 and satisfies fact 1 is 27. In this case each side has 9 smaller cubes exposed. If the number of smaller cubes painted, which is m, is 9, we have m = 1/3 of the number of smaller cubes.
Assume next that the number of smaller cubes which is a multiple of 3 and satisfies fact 1 is 216. In this case each side has 36 smaller cubes exposed. So you could paint a maximum of 36 smaller cubes. Thus the maximum value of m is 36/216 = 1/6, which cannot be equal to 1/3.
It seems like both Statement 1 and 2 can satisfy the relationship.
pretzel wrote:
I am confused.
Quote:
Assume first that the number of smaller cubes which is a multiple of 3 and satisfies fact 1 is 27. In this case each side has 9 smaller cubes exposed. If the number of smaller cubes painted, which is m, is 9, we have m = 1/3 of the number of smaller cubes.
Assume next that the number of smaller cubes which is a multiple of 3 and satisfies fact 1 is 216. In this case each side has 36 smaller cubes exposed. So you could paint a maximum of 36 smaller cubes. Thus the maximum value of m is 36/216 = 1/6, which cannot be equal to 1/3.
It seems like both Statement 1 and 2 can satisfy the relationship.
If I am not mistaken, the question may be flawed, as I have tried to show above. Check this:
Let me know if you see any discrepancies in my explanation.
Thanks
I solved this problem differently.
We are trying to solve for $x^3$, the number of smaller cubes that compose the larger cube.
1) On Surface A we paint n cubes where $n = \frac{1}{9}x^3$. We also know that on any surface there are $x^2$ cubes, so the maximum number of cubes we can paint is $x^2$. This means $n = \frac{1}{9}x^3\le x^2$ => $x\le 9$. Since n is an integer we know that $x^3$ must be a multiple of 9. Therefore x = 3 OR x = 6 OR x = 9. Insufficient.
2) Using the same math as before we get $x\le 3$ with $x^3$ a multiple of 3, to meet the requirement that m is an integer. The only options we have for x are 1, 2, 3. Only x = 3 gives a multiple of 3. Sufficient.
|
## anonymous 4 years ago The sum of two numbers is 112. If three times the smaller number is subtracted from the larger number, the result is 12. Find the two numbers.
1. anonymous
25 smaller 87 larger?
2. myininaya
$x+y=112$ from first sentence ----------------- Assume x is the smallest! $y-3x=12$ from second sentence
3. myininaya
$y=3x+12 \text{ from second equation}$ now plug this into the first equation and solve for x
4. myininaya
$x+(3x+12)=112$ $4x+12=112$ $4x=100$ x=25 then y=3(25)+12=75+12=87 so yes
did he ask three times smaller number ?
|
location: Publications → journals
Search results
Search: MSC category 03E17 ( Cardinal characteristics of the continuum )
Expand all Collapse all Results 1 - 3 of 3
1. CMB 2013 (vol 57 pp. 119)
Mildenberger, Heike; Raghavan, Dilip; Steprans, Juris
Splitting Families and Complete Separability We answer a question from Raghavan and Steprāns by showing that $\mathfrak{s} = {\mathfrak{s}}_{\omega, \omega}$. Then we use this to construct a completely separable maximal almost disjoint family under $\mathfrak{s} \leq \mathfrak{a}$, partially answering a question of Shelah. Keywords: maximal almost disjoint family, cardinal invariants. Categories: 03E05, 03E17, 03E65
2. CMB 2012 (vol 57 pp. 61)
Geschke, Stefan
2-dimensional Convexity Numbers and $P_4$-free Graphs For $S\subseteq\mathbb R^n$ a set $C\subseteq S$ is an $m$-clique if the convex hull of no $m$-element subset of $C$ is contained in $S$. We show that there is essentially just one way to construct a closed set $S\subseteq\mathbb R^2$ without an uncountable $3$-clique that is not the union of countably many convex sets. In particular, all such sets have the same convexity number; that is, they require the same number of convex subsets to cover them. The main result follows from an analysis of the convex structure of closed sets in $\mathbb R^2$ without uncountable 3-cliques in terms of clopen, $P_4$-free graphs on Polish spaces. Keywords: convex cover, convexity number, continuous coloring, perfect graph, cograph. Categories: 52A10, 03E17, 03E75
3. CMB 2009 (vol 52 pp. 303)
Shelah, Saharon
A Comment on "$\mathfrak{p} < \mathfrak{t}$" Dealing with the cardinal invariants ${\mathfrak p}$ and ${\mathfrak t}$ of the continuum, we prove that ${\mathfrak m}={\mathfrak p} = \aleph_2\ \Rightarrow\ {\mathfrak t} =\aleph_2$. In other words, if ${\bf MA}_{\aleph_1}$ (or a weak version of this) holds, then (of course $\aleph_2\le {\mathfrak p}\le {\mathfrak t}$ and) ${\mathfrak p}=\aleph_2\ \Rightarrow\ {\mathfrak p}={\mathfrak t}$. The proof is based on a criterion for ${\mathfrak p}<{\mathfrak t}$. Categories: 03E17, 03E05, 03E50
|
Is there a speed needed to achieve flight?
Is there a minimum speed needed for flight (universally)? For example, if something was going 10 miles per hour, could it achieve flight? And by flight I don’t mean throwing a paper airplane, I’m talking about the item going to that speed, then flying.
Is there a formula for calculating the speed needed for flight?
• You may want to narrow this down a bit. Helicopters, lighter than air ships, and VTOL aircraft can achieve flight with 0 forward speed. If you are strictly asking about heavier than air, fixed wing aircraft the answer is different. – Dave Mar 20 '18 at 21:35
• Definitely a legitimate question even if it appears to be a beginner question but I agree with @Dave here that it could be narrowed down a bit. – dalearn Mar 20 '18 at 21:56
• The pedal-powered Gossamer Albatross crossed the English Channel (22.2miles) in 2.81 hours meaning that it had an average speed of just under 8mph. So to your point about 10mph, absolutely you can achieve flight below 10mph, not terribly practical flight in terms of speed, carrying capacity or range but technically and demonstrably possible. – AndyW Mar 21 '18 at 10:29
• It depend how you define speed, aircraft measure their speed relative to the air since this is handy to know when computing the lift generated and stall speed. However humans tend to be more interested in groundspeed, so the distance the aircraft travels over the earth per unit of time. Airspeed vs Groundspeed in practice: youtu.be/WPyCywd4o5E?t=43s – Brilsmurfffje Mar 21 '18 at 14:53
There may be an ultimate lower bound, about where the ratio of random air motion to forward motion is near 1:1. But that is an extremely slow speed.
I've seen competitive balsa-wood gliders, weighing just a few ounces, achieve flight from a rubber band, going less than 1 mph, or a slow walking speed.
If the craft is light enough, the wings are big enough, and a little air is moving over them, lift is generated.
• Thanks, sorry if it wasn’t detailed enough. I’ll my best to detail it. Say an item was going 10 miles per hour, had wings, etc, basically an airplane (except small enough to fly) could anything with the correct wingspan, etc, fly under its circumstances? – Jackson Mar 20 '18 at 23:12
• m.youtube.com/watch?v=ZqNEoeOKH8s – Greg Taylor Mar 21 '18 at 9:42
• or this one – qq jkztd Mar 21 '18 at 10:36
• This is exactly what I was looking for, thanks! – Jackson Mar 21 '18 at 21:24
• Not even 'a few ounces': the F1D planes (like in the second video in the comments) weigh on the order of 1 gram! – Zeus Mar 22 '18 at 3:54
I suppose the most basic formula for flight is $F=ma$. Whether flying or not, your aircraft feels a force called weight (m is the mass of your aircraft and a is the acceleration felt due to gravity). In order to fly, your aircraft needs to create enough lift to equal or exceed the weight. The formula for how much lift is created is $$L = \frac{1}{2} d v^2 s C_L$$
L = Lift, which must equal the airplane's weight in pounds
d = density of the air. This will change due to altitude. These values can be found in a I.C.A.O. Standard Atmosphere Table.
v = velocity of an aircraft expressed in feet per second
s = the wing area of an aircraft in square feet
CL = Coefficient of lift, which is determined by the type of airfoil and angle of attack.
Check out this video for a more detailed explanation. Your formula is at 4:45
NASA page with more details: https://www.grc.nasa.gov/www/k-12/WindTunnel/Activities/lift_formula.html
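Rearranging that formula gives the minimum (stall) speed directly. As a rough illustration with hypothetical numbers, purely to show the arithmetic: for a 2,000 lb airplane with s = 170 ft², sea-level density d ≈ 0.002377 slug/ft³, and a maximum lift coefficient of about 1.5,
$$v_{\min}=\sqrt{\frac{2L}{d\,s\,C_L}}=\sqrt{\frac{2(2000)}{(0.002377)(170)(1.5)}}\approx 81\ \text{ft/s}\approx 55\ \text{mph},$$
so lowering the wing loading L/s or raising the lift coefficient is what pushes the minimum flight speed down toward walking pace.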
|
# The velocity of the flowing coming out of the balloons?
Last day , when i was working on two interconnected balloons , a question was kicking my brains !!! This is the explanation of the question:
First , suppose a system that composed of two spherical membranes filled with air (two balloons have different initial volumes {means that the pressure inside which balloons are different} and the air pressure is 1 a.t.m) . We connect them with hollow tube and a valve. When we open the valve , one of them shrinks and the other one expands (It depends on their pressure) . So how can we get the velocity of the flowing air between two balloons? (Consider everything but if you have reasons for not considering one of them -for example the ratio of friction in the tube- Don't consider them and just tell me the reason)
Thanks
• Why the general relativity tag? – MBN Jan 22 '15 at 7:40
The air flow rate through a tube is approximately given by the Hagen-Poiseuille equation. If the pressure difference between the two ballons is $\Delta P$ then the HP equation gives the volumetric flow rate $Q$ as:
$$Q = \frac{\pi r^4}{8\mu \ell} \Delta P$$
where $r$ is the radius of the pipe, $\ell$ is the pipe length and $\mu$ is the viscosity of the air. The equation actually only applies to incompressible fluids, but gives a reasonable answer even for compressible fluids like gases a long as the pressure differences aren't too high. In the case of two balloons it should be fine.
• @DavidMichaele: Google for the viscosity of air. Mass flow rate = $\rho Q$, where $\rho$ is the density of the air (about 1.25kg/m$^3$). – John Rennie Jan 22 '15 at 7:50
• @DavidMichaele: $Q = vA$ where $v$ is the (average) velocity in the tube and $A$ is the tube area. – John Rennie Jan 22 '15 at 7:55
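Putting the comments together (a sketch that ignores the slow decay of $\Delta P$ as the balloons equilibrate): with $Q=vA$ and $A=\pi r^2$, the average velocity in the tube is
$$v=\frac{Q}{\pi r^2}=\frac{r^2}{8\mu\ell}\,\Delta P,$$
with $\mu\approx 1.8\times 10^{-5}\ \text{Pa s}$ for air at room temperature.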
|
# Smale, S.
## What is the homotopy type of the group of diffeomorphisms of the 4-sphere? ★★★★
Author(s): Smale
\begin{problem} $Diff(S^4)$ has the homotopy-type of a product space $Diff(S^4) \simeq \mathbb O_5 \times Diff(D^4)$ where $Diff(D^4)$ is the group of diffeomorphisms of the 4-ball which restrict to the identity on the boundary. Determine some (any?) homotopy or homology groups of $Diff(D^4)$. \end{problem}
Keywords: 4-sphere; diffeomorphisms
## $C^r$ Stability Conjecture ★★★★
Author(s): Palis; Smale
\begin{conjecture} Any $C^r$ structurally stable diffeomorphism is hyperbolic. \end{conjecture}
Keywords: diffeomorphisms,; dynamical systems
|
### Home > APCALC > Chapter 7 > Lesson 7.1.5 > Problem7-50
7-50.
Use the first and second derivatives to determine the following locations for $f(x) = xe^x$.
1. Relative minima and maxima
Remember: Finding where $f^\prime(x)= 0$ or $f^\prime(x) =$ DNE will identify CANDIDATES for minima and maxima.
You need to complete the 1st or 2nd Derivative Test to confirm which is which.
2. Intervals over which $f$ is increasing and decreasing
When $f^\prime(x)$ is positive, then $f(x)$ has positive slopes, which means $f(x)$ is increasing.
3. Inflection points
Remember: Finding where $f^{\prime\prime}(x)=0$ or $f^{\prime\prime}(x)=$ DNE will identify CANDIDATES for inflection points.
You need to do further investigation to determine if it is (or is not) an inflection point.
Either test for a sign change of $f^{\prime\prime}(x)$ before and after the candidate point. Or evaluate $f^{\prime\prime\prime}$ at the candidate.
If $f^{\prime\prime\prime}(\text{candidate})≠ 0$ , then it is a point of inflection.
4. Intervals over which $f$ is concave up and concave down
When $f^{\prime\prime}(x)$ is positive, then $f^\prime(x)$ has positive slopes, which means $f(x)$ is concave up.
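For reference, a worked sketch of the derivatives involved (straightforward product rule, worth verifying yourself):
$$f'(x)=(1+x)e^x,\qquad f''(x)=(2+x)e^x,$$
so the only critical point is $x=-1$, a relative minimum since $f''(-1)=e^{-1}>0$; $f$ is decreasing on $(-\infty,-1)$ and increasing on $(-1,\infty)$; the inflection point is at $x=-2$; and $f$ is concave down on $(-\infty,-2)$ and concave up on $(-2,\infty)$.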
|
Question
# The minimum area of triangle formed by the tangent to the ellipse $$\displaystyle \frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}=1$$ and coordinate axes is:
A
$ab$ sq. units
B
$\dfrac{a^2+b^2}{2}$ sq. units
C
$\dfrac{(a+b)^2}{2}$ sq. units
D
$\dfrac{a^2+ab+b^2}{3}$ sq. units
Solution
## The correct option is A: $$ab$$ sq. units
The equation of the tangent at $$(a\cos\theta, b\sin\theta)$$ is $$\displaystyle \frac{x\cos\theta}{a}+\frac{y\sin\theta}{b}=1.$$ It meets the coordinate axes at $$A(0, b\operatorname{cosec}\theta)$$ and $$B(a\sec\theta, 0)$$. Thus the area of the triangle is $$\displaystyle \frac{1}{2}\,ab\sec\theta\operatorname{cosec}\theta=\frac{ab}{2\sin\theta\cos\theta}=\frac{ab}{\sin 2\theta}\geq ab,$$ with equality when $$\sin 2\theta = 1$$, i.e. $$\theta=\pi/4$$.
|
# MathJax issue in hugo
If using brackets in title of article, and MathJax was enabled,
+++
title= "[Math]Finding the shortest distance between two lines in 3D"
date= "2019-09-16T16:46:02+08:00"
mathjax= ["true"]
+++
then the characters in brackets would be zoomed in, as shown in the following pic:
but it works fine without brackets in title:
+++
title= "(Math)Finding the shortest distance between two lines in 3D"
date= "2019-09-16T16:46:02+08:00"
mathjax= ["true"]
+++
So it’s there any way to let brackets and MathJax work fine?
Hi!
I guess we need some information: What theme are you using and how did you implement MathJax? (But I am not an expert here.)
BTW: Just tested a title with brackets using KaTeX, and it works fine. If you are using your own theme you could consider using KaTeX. It seems a lot quicker. Here is a Hugo KaTeX example.
1 Like
KaTex worked, thanks a lot Grob~
|
## Physics (10th Edition)
$k=303N/m$
Before the trigger is pulled, we choose the height $h_0=0$ and the spring is compressed by $y_0=9.1\times10^{-2}m$. When the pellet rises to its maximum height $h_f=6.1m$, the spring is completely released and unstrained, so $y_f=0$ At both points, the pellet has zero speed, so $KE=0$. There is also no rotational kinetic energy. Since total energy is conserved, $$\frac{1}{2}ky_0^2=mgh_f$$ $$k=\frac{2mgh_f}{y_0^2}$$ The pellet's weight $mg=(2.1\times10^{-2}kg)\times(9.8m/s^2)=0.206N$. Therefore, $$k=303N/m$$
|
## Morph Animation
Sometimes we need only a subset of the vertices in a mesh to be animated without a full skeleton, such as a set of mouth shapes or face vertices for facial animation. An easy way to do this is using Morph Target Animation. In this animation we blend vertices instead of bones, which is why it's also called per-vertex animation. The animation is stored as a series of deformed versions of the original mesh vertices. A deformed version is called a morph target, while the original is called the base. We can use a weights array to blend between a base and morph targets:
$vertex = base + \sum_{i=1}^n w_i \,( target_i - base )$
The formula above can be used for vertex position, normal, UV, etc.
By using a jd file format we can do interpolation between pos0 and pos1, normal0 and normal1
The following example from Unity wiki simply use Lerp to blend vertices between 2 meshes:
using UnityEngine;
/// REALLY IMPORTANT NOTE.
/// When using the mesh morpher you should absolutely make sure that you turn
/// off generate normals automatically in the importer, or set the normal angle to 180 degrees.
/// When importing a mesh Unity automatically splits vertices based on normal creases.
/// However the mesh morpher requires that you use the same amount of vertices for each mesh and that
/// those vertices are laid out in the exact same way. Thus it wont work if unity autosplits vertices based on normals.
[RequireComponent(typeof(MeshFilter))]
public class MeshMorpher : MonoBehaviour
{
public Mesh[] m_Meshes;
public bool m_AnimateAutomatically = true;
public float m_OneLoopLength = 1.0F; /// The time it takes for one loop to complete
public WrapMode m_WrapMode = WrapMode.Loop;
private float m_AutomaticTime = 0;
private int m_SrcMesh = -1;
private int m_DstMesh = -1;
private float m_Weight = -1;
private Mesh m_Mesh;
/// Set the current morph in
public void SetComplexMorph(int srcIndex, int dstIndex, float t)
{
if (m_SrcMesh == srcIndex && m_DstMesh == dstIndex && Mathf.Approximately(m_Weight, t))
return;
Vector3[] v0 = m_Meshes[srcIndex].vertices;
Vector3[] v1 = m_Meshes[dstIndex].vertices;
Vector3[] vdst = new Vector3[m_Mesh.vertexCount];
for (int i = 0; i < vdst.Length; i++)
vdst[i] = Vector3.Lerp(v0[i], v1[i], t);
m_Mesh.vertices = vdst;
m_Mesh.RecalculateBounds();
}
/// t is between 0 and m_Meshes.Length - 1.
/// 0 means the first mesh, m_Meshes.Length - 1 means the last mesh.
/// 0.5 means half of the first mesh and half of the second mesh.
public void SetMorph(float t)
{
int floor = (int)t;
floor = Mathf.Clamp(floor, 0, m_Meshes.Length - 2);
float fraction = t - floor;
fraction = Mathf.Clamp(t - floor, 0.0F, 1.0F);
SetComplexMorph(floor, floor + 1, fraction);
}
void Awake()
{
enabled = m_AnimateAutomatically;
MeshFilter filter = GetComponent(typeof(MeshFilter)) as MeshFilter;
// Make sure all meshes are assigned!
for (int i = 0; i < m_Meshes.Length; i++)
{
if (m_Meshes[i] == null)
{
Debug.Log("MeshMorpher mesh " + i + " has not been setup correctly");
m_AnimateAutomatically = false;
return;
}
}
// At least two meshes
if (m_Meshes.Length < 2)
{
Debug.Log("The mesh morpher needs at least 2 source meshes");
m_AnimateAutomatically = false;
return;
}
filter.sharedMesh = m_Meshes[0];
m_Mesh = filter.mesh;
int vertexCount = m_Mesh.vertexCount;
for (int i = 0; i < m_Meshes.Length; i++)
{
if (m_Meshes[i].vertexCount != vertexCount)
{
Debug.Log("Mesh " + i + " doesn't have the same number of vertices as the first mesh");
m_AnimateAutomatically = false;
return;
}
}
}
void Update()
{
if (m_AnimateAutomatically)
{
float deltaTime = Time.deltaTime * (m_Meshes.Length - 1) / m_OneLoopLength;
m_AutomaticTime += deltaTime;
float time;
if (m_WrapMode == WrapMode.Loop)
time = Mathf.Repeat(m_AutomaticTime, m_Meshes.Length - 1);
else if (m_WrapMode == WrapMode.PingPong)
time = Mathf.PingPong(m_AutomaticTime, m_Meshes.Length - 1);
else
time = Mathf.Clamp(m_AutomaticTime, 0, m_Meshes.Length - 1);
SetMorph(time);
}
}
}
|
# Why is it so that the total torque about a given line is equal in the case described below?
If the resultant force on a body is zero, why is the total torque the same about any line perpendicular to the plane of the forces?
The statement that you have made is true in general.
This is because if there is no resultant, the only thing left is a couple.
A couple is two equal, opposite and parallel (non co-linear) forces and has the property that its torque is independent of position about which the torque is evaluated.
In simple terms I can show you this in terms of three coplanar forces $\vec F_1, \vec F_2$ and $\vec F_3$ acting at points $A, B$ and $C$ as in the diagram below.
The resultant of these three forces is zero, $\vec F_1 + \vec F_2 + \vec F_3 = 0$
Just for ease of drawing set up some axes such that force $\vec F_1$ only have an x-component $F_{1 \rm x}$.
Noting that $F_{2\rm x} = F_{1\rm x} + F_{3\rm x}$ and $F_{2\rm y} =F_{3\rm y}$ there are three couples acting on this body with fixed magnitudes $F_{1\rm x}y_{12}, F_{3\rm x}y_{23}$ and $F_{3\rm y}x_{23}$.
With extra hand-waving you can extend this to any number of forces and into three dimensions.
Alternatively . .
Suppose that you have $n$ forces $\vec F_n$ acting on a body and the resultant force is zero then $\sum \limits_n \vec F_n =0$.
The torque about any point is $\sum \limits_n \vec r_n \times \vec F_n$ where $\vec r_n$ are the position vectors of the point of application of the forces.
Now move from the origin by $\vec r$ and the torque about that new point is
$\sum \limits_n (\vec r_n - \vec r) \times \vec F_n = \sum \limits_n \vec r_n \times \vec F_n - \vec r \times \sum \limits_n \vec F_n = \sum \limits_n \vec r_n \times \vec F_n - \vec r \times \vec 0 = \sum \limits_n \vec r_n \times \vec F_n$ as before.
I think there is a misnomer here. A pure torque (if net forces are zero) does not have a location associated with it. It is shared identically throughout the rigid body.
Only a torque (force moment) as a result of a force at a distance changes value with location (due to the distance to the force changing).
Torque transforms from point A to point B with the law $$\vec{\tau}_B = \vec{\tau}_A + \vec{r}_{A/B} \times \vec{F}$$
If the force $\vec{F}$ is zero then the torque is identical on all the points ($\vec{\tau}_B = \vec{\tau}_A$).
PS. The transformation law is identical to angular momentum $$\vec{L}_B = \vec{L}_A + \vec{r}_{A/B} \times \vec{p}$$ and also with velocities $$\vec{v}_B = \vec{v}_A + \vec{r}_{A/B} \times \vec{\omega}$$
|
# How to compute Implied Volatility Calculation?
We all know if you back out of the BS option pricing model you can derive and solve what the options is "implying" as its volatility.
However, what is the formula used to derive Implied Volatility (IV) (can anyone direct me to the equation)?
Is there a closed form equation?
Or is it solved numerically?
-
I found this one via Google: Implied Volatility Formula – chrisaycock Apr 17 '13 at 2:28
yea, saw that one too. Newton method was used here. am I right? But how is IV calculated? Anyone here use a standard procedure? – jessica Apr 17 '13 at 2:30
The Black-Scholes option pricing model provides a closed-form pricing formula $BS(\sigma)$ for a European-exercise option with price $P$. There is no closed-form inverse for it, but because it has a closed-form vega (volatility derivative) $\nu(\sigma)$, and the derivative is nonnegative, we can use the Newton-Raphson formula with confidence.
Essentially, we choose a starting value $\sigma_0$ say from yoonkwon's post. Then, we iterate
$$\sigma_{n+1} = \sigma_n - \frac{BS(\sigma_n)-P}{\nu(\sigma_n)}$$
until we have reached a solution of sufficient accuracy.
This only works for options where the Black-Scholes model has a closed-form solution and a nice vega. When it does not, as for exotic payoffs, American-exercise options and so on, we need a more stable technique that does not depend on vega.
In these harder cases, it is typical to apply a secant method with bisective bounds checking. A favored algorithm is Brent's method since it is commonly available and quite fast.
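As a concrete sketch of the Newton-Raphson loop in C (the helper names and tolerance are illustrative, not from any particular library; it assumes C99's erfc for the normal CDF, and seeds the iteration with the Brenner-Subrahmanyam estimate discussed in another answer below):
#include <math.h>
/* Standard normal CDF via erfc (C99). */
static double norm_cdf(double x) { return 0.5 * erfc(-x / sqrt(2.0)); }
/* Black-Scholes European call price. */
static double bs_call(double S, double K, double T, double r, double sigma)
{
    double d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T));
    double d2 = d1 - sigma * sqrt(T);
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2);
}
/* Vega = dC/dsigma = S * sqrt(T) * phi(d1), with phi the standard normal density. */
static double bs_vega(double S, double K, double T, double r, double sigma)
{
    double d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T));
    return S * sqrt(T) * exp(-0.5 * d1 * d1) / sqrt(2.0 * 3.141592653589793);
}
/* Newton-Raphson: sigma_{n+1} = sigma_n - (BS(sigma_n) - P) / vega(sigma_n). */
double implied_vol(double S, double K, double T, double r, double P)
{
    double sigma = sqrt(2.0 * 3.141592653589793 / T) * P / S; /* Brenner-Subrahmanyam start */
    for (int i = 0; i < 100; i++) {
        double diff = bs_call(S, K, T, r, sigma) - P;
        if (fabs(diff) < 1e-10)
            break;
        sigma -= diff / bs_vega(S, K, T, r, sigma);
    }
    return sigma;
}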
-
Brenner and Subrahmanyam (1988) provided a closed form estimate of IV, you can use it as the initial estimate:
$$\sigma \approx \sqrt{\cfrac{2\pi}{T}} . \cfrac{C}{S}$$
-
If you could embed the link to the article in your answer, it would be great. – SRKX Apr 17 '13 at 9:24
What are the definitions of T,C and S ? I'm guessing T is the Duration of the option-contract, C is the theoretical Call-value and S is the Strike-price, correct ? – Dominique Oct 9 '13 at 12:49
No, S is the current price of the underlying. However the approximation by Brenner and Subrahmanyam works best for at the money options, hence the difference should be small in that case. – jcfrei May 9 '14 at 14:29
It is a very simple procedure and yes, Newton-Raphson is used because it converges sufficiently quickly:
• You need to obviously supply an option pricing model such as BS.
• Plug in an initial guess for implied volatility -> calculate the the option price as a function of your initial iVol guess -> apply NR -> minimize the error term until it is sufficiently small to your liking.
• the following contains a very simple example of how you derive the implied vol from an option price: http://risklearn.com/estimating-implied-volatility-with-the-newton-raphson-method/
• You can also derive implied volatility through a "rational approximation" approach (closed form approach -> faster), which can be used exclusively if you are fine with the approximation error or as a hybrid in combination with a few iterations of NR (better initial guess -> less iterations). Here a reference: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=952727
-
A Matrixwise Matlab implementation which uses Li's rational function approximation, followed by iterations of 3rd order householder method – StudentT Jun 26 '14 at 18:17
To get IV I do the following: 1) Change sig many times and calculate C in the BS formula every time. That can be done with the OIC calculator. All other parameters are kept constant in the BS call price calculations. The sig that corresponds to the C value closest to the call market value is probably right. 2) Without the OIC calculator, for every chosen sig I use the old approach: calculate d1, d2, Nd1, Nd2 and the BS option value. Again, the calculated BS value closest to the market value probably corresponds to the correct IV.
-
|
# Q: Quantum numbers
If l= 3, what can you deduce about n?
Quantum number l denotes the orbital type: if l = 0 it is an s orbital, if l = 1 a p orbital, and in the same way l = 3 denotes an f orbital. Since l can only take the values 0, 1, ..., n − 1, having l = 3 tells you that n must be at least 4.
|
# Light Cone 2022 Online: Physics of Hadrons on the Light Front
Sep 19 – 23, 2022
ONLINE (ZOOM)
GMT timezone
## Signature of gluon orbital angular momentum
Sep 21, 2022, 1:00 PM
30m
ONLINE (ZOOM)
### Speaker
Shohini Bhattacharya (BNL)
### Description
By considering double spin asymmetry (DSA) in exclusive dijet production in ep collisions, we demonstrate for the first time that the cos($\phi$) angular correlation between the scattered electron and proton is a direct probe of the gluon orbital angular momentum and its interplay with the gluon helicity. We also make an estimate of the DSA for typical kinematics of the future Electron Ion Collider.
### Presentation materials
LC2022_Shohini.pdf Shohini Bhattacharya.mp4
|
# Derivation of the negative hypergeometric distribution
Suppose we've given an urn which contains $R$ red and $W$ white balls. These balls are drawn randomly from the urn and are not placed back. Let
• $X:=$ number of attempts, before we've drawn at least $r\le R$ red balls
• $Y:=$ number of red balls after $n-1$ attempts
I want to calculate $\text{Pr}(X=n)$.
Suppose we've already drawn $n-1$ balls from the urn and received $r-1$ red balls. The probability of this happening is given by $$\text{Pr}(Y=r-1)=h(r-1|R+W,R,n-1):=\frac{\binom{R}{r-1}\binom{W}{n-r}}{\binom{R+W}{n-1}}$$ where $h$ denotes the hypergeometric distribution. The probability that another red ball is now drawn is given by $$\text{Pr}(X=n)=\text{Pr}(Y=r-1)\;h(1|R+W-(n-1),R-(r-1),1)=\frac{\binom{R}{r-1}\binom{W}{n-r}}{\binom{R+W}{n-1}}\cdot\frac{R-(r-1)}{R+W-(n-1)}$$ While I think that's correct, I would really like to get a more compact form of that. After some research on the internet, I found that I should be able to obtain $$\text{Pr}(X=n)=\ldots =\frac{\binom{r-1}{n-1}\binom{R+W-(r-1)}{R-(n-1)}}{\binom{R+W}{R}}$$ But I don't see how to fill in the $\ldots$
• I'm unsure whether the final expression is correct or not. However, I'm sure we can rewrite the result in a more compact way. – 0xbadf00d Jun 8 '14 at 20:47
Your final expression may not be quite correct: for example I am dubious about ${r-1 \choose n-1}$ when $n \gt r$, but you can approach it from what you have done with something like this, which you should check and simplify. You also seem to have $K$ and $r$ representing the same thing.
$$\dfrac{\displaystyle{R \choose r-1}{W \choose n-r} }{\displaystyle{R+W \choose n-1} } \; \dfrac{R-(r-1)}{R+W-(n-1)}$$
$$=\dfrac{R!\; W!\; (n-1)! \;(R+W-(n-1))!\; (R-(r-1))}{(R+W)!\; (r-1)!\; (R-(r-1))! \; (n-r)!\; (W-(n-r))! \;(R+W-(n-1))}$$
$$=\dfrac{R!\; W!}{ (R+W)!}\; \dfrac{ (n-1)!}{(r-1)!\; (n-r)!} \;\dfrac{(R+W-(n-1))!}{(R-(r-1))! \; (W-(n-r))!}\dfrac{(R-(r-1))}{\; \;(R+W-(n-1))}$$
$$=\dfrac{\displaystyle {n-1 \choose r-1}\; {R+W-n \choose R-r}}{\displaystyle {R+W \choose R}}$$
though I have not checked this.
An alternative approach would be to find the probability of $r$ reds in the first $n$, i.e. $\dfrac{\displaystyle{n \choose r} {R+W-n \choose R-r}}{ \displaystyle{R+W \choose R}}$, and multiply this by the conditional probability that the last one of the $n$ is red, which I believe is $\dfrac{r}{n}$. I think this gives the same answer more quickly.
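As a sanity check on the compact form (an editorial addition; the parameter values below are arbitrary), this short Monte Carlo simulation estimates $\text{Pr}(X=n)$, the draw on which the $r$-th red ball appears, and prints it next to the closed-form value:

```python
# Monte Carlo check of Pr(X = n) against the closed form derived above.
import random
from math import comb

R, W, r = 5, 7, 3  # red balls, white balls, target count of red draws

def pmf(n):
    # Pr(X = n) = C(n-1, r-1) * C(R+W-n, R-r) / C(R+W, R)
    return comb(n - 1, r - 1) * comb(R + W - n, R - r) / comb(R + W, R)

trials = 200_000
counts = {}
for _ in range(trials):
    urn = ["R"] * R + ["W"] * W
    random.shuffle(urn)
    reds = 0
    for i, ball in enumerate(urn, start=1):
        reds += ball == "R"
        if reds == r:  # the r-th red ball appeared on draw i
            counts[i] = counts.get(i, 0) + 1
            break

for n in sorted(counts):
    print(n, round(counts[n] / trials, 4), round(pmf(n), 4))
```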
• You're right. $K$ should be $r$, sorry for that. – 0xbadf00d Jun 8 '14 at 20:49
|
## Elementary Algebra
$x=-14$
Using the properties of equality and combining like terms, the value of the variable that satisfies the given equation, $-x+6=-2x-8 ,$ is \begin{array}{l}\require{cancel} -x+2x=-8-6 \\\\ x=-14 .\end{array}
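As a quick verification (added for completeness): substituting $x=-14$ into the original equation gives $-(-14)+6=20$ on the left-hand side and $-2(-14)-8=20$ on the right-hand side, so both sides agree.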
|
# Dynamical systems
1. Oct 20, 2008
### johnson123
(1) Show that any $C^1$ vector field on $S^2$ (the torus) possesses at least one singularity.
(2) Show that any isolated periodic orbit $T$ of a $C^1$ planar vector field $X$ is a limit cycle.
Any help/suggestions are appreciated.
2. Aug 21, 2009
### Reb
(1) There are C^1 vector fields on the torus without singularities. You must be omitting something.
(2) Since the orbit is periodic it is a cycle, and since it is isolated it must be a limit cycle.
3. Aug 21, 2009
### Phrak
S2 usually denotes a 2-sphere rather than a torus.
4. Aug 21, 2009
### Reb
You're right... Johnson must have taken the liberty of denoting the Cartesian product $$S^2 := S\times S$$ for the torus, which is OK set-theoretically, but goes against standard notation.
Last edited: Aug 21, 2009
|
Theory:
For heat to move through a solid substance, there must be a temperature difference between two points. Likewise, the movement of electric charges in a conductor also requires a difference in electric potential.
The electric potential at a point is defined as the amount of work done in moving a unit positive charge from infinity to that point against the electric force.
The potential of electric charges can be determined by the concentration of charges at a particular point. The number of electric charges at a point can be increased by doing some amount of work. The work done in moving a unit positive charge to a specific point is known as electric potential. It is measured in terms of volts ($$V$$).
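As a small worked example (numbers added here for illustration, not from the source): since electric potential is the work done per unit charge, $V = \frac{W}{q}$. If $10$ joules of work are needed to move a charge of $2$ coulombs to a point, the potential at that point is $V = \frac{10}{2} = 5$ volts.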
In other words, electric potential determines the direction of current flow. Charges will always flow from a higher electric potential to a lower electric potential point in a conductor.
Conventional current and electron current:
A positively charged proton is at a higher potential as compared to a negatively charged electron. Hence, the protons move from a higher potential to a lower potential, whereas the electrons move from the lower potential to a higher potential.
[Figure: The flow of protons and electrons]
The movement of the positive charges or protons constitutes the conventional current whereas, the movement of the negative charges or electrons constitutes the electron current.
[Figure: The direction of conventional current]
The flow of negatively charged electrons is always opposite to the direction of protons. Hence, the direction of conventional current is always opposite to the direction of electron current.
The conventional current moves from the positive terminal to the negative terminal of the battery. At the same time, the electron current (negative charge) moves from the negative terminal to the positive terminal of the battery.
[Figure: The direction of electron current]
Because the direction of conventional current is defined opposite to the electron flow, the electric current is taken to move from the positive terminal to the negative terminal of the circuit.
|
# Is dry needling painful?
## Dry Needling Pin Point Relief of Muscle Pain
The trigger point is a direct and palpable source of pain, often containing multiple contracted knots in the muscle that may feel like tight bands. These “muscle knots” can cause pain, limit motion and affect performance. If left untreated, they can worsen over time. Trigger point dry needling is a treatment used by physical therapists to eliminate these trigger points. It involves inserting a sterile thin filament/acupuncture needle into the tight or sore muscle. It is called “dry needling” because, unlike a steroid injection, no substance is put into the body. When the needle penetrates the knots in a muscle, it elicits a twitch response, indicating a release or deactivation of the painful trigger point. At the cellular level, the muscle’s physiology changes to better absorb calcium, improve circulation, encourage tissue remodeling and promote healing. This process can be compared to re-booting the hard drive on a computer.
## Dry Needling, Combined With Other Physical Therapy Treatments, Can Help the Following Conditions:
• Acute and chronic tendonitis/tendinosis
• Athletic overuse injuries
• Baseball throwing related tightness/discomfort
• Carpal tunnel syndrome
• Chronic pain conditions
• Ehlers Danlos Syndrome
• Frozen shoulder
• Fibromyalgia
• Groin and hamstring strains
• Hip pain and knee pain
• IT band syndrome
• Muscle spasms
• Neck and lower back pain
• Post-traumatic injuries, motor vehicle accidents, and work related injuries
• Repetitive strain injuries
• Sciatic pain
• Shoulder pain
• Tennis/golfer’s elbow
• TMJ
• Many other musculoskeletal conditions
## How Long Does it Take for Dry Needling to Work?
In many cases, improved mobility is immediate and decreased pain is felt within 24 hours. Typically, it may take a few treatment sessions (once a week for 2-3 weeks) for a lasting positive effect.
## What are the Advantages of Dry Needling?
• Access – The advantage over other techniques is that we can treat parts of the muscle and deeper layers of muscles which our hands and fingers cannot reach, and it works faster than massage at relaxing the muscles.
• No Drugs – There are no drugs used in dry needling, so we can treat many trigger points during each treatment.
• Immediate Relief – Deactivation of the trigger points can bring immediate relief of symptoms, and then we can immediately stretch and train the muscles to work in their new pain free range of motion. Thus, results are achieved with dry needling which cannot be obtained with any other treatment.
## This Sounds Like Acupuncture
Both services do use the same needles, but that’s where the comparisons end. Our basic understanding of acupuncture is that it is more of an Eastern medicine practice in origin emphasizing energy or flow in relation to the body’s meridians. Conversely, dry needling is more of a Western medicine practice where we target specific problem areas within the muscles themselves.
## Does Dry Needling Hurt?
We use very thin filament needles. The initial feeling of the needle entering through the skin is very minimal; much less than a vaccination or having blood drawn. Once the needle reaches the muscle, the twitch sensation feels more like a deep cramp and doesn’t last long (15-30 seconds). Muscle soreness after a treatment session may last 12-24 hours (commonly called being “needle sore”), but the long-term results are worth it!
## Will Dry Needling Help Me?
Individuals who see good results with massage, but are disappointed when the discomfort returns, will find dry needling a better way to get longer-lasting and deeper relief. Dry Needling allows us to treat almost any muscle in the body, and treat the muscle at depths impossible with other types of bodywork. Many trigger points are just too deep in the tissue for massage, even deep tissue work, to treat effectively. Dry needling is a great way to get more out of your physical therapy by allowing us to eliminate the deep knots and restrictions that have, up until now, been unreachable.
## How Many Needles Will I Need?
We like to start slowly during the first session to give you a feel for the technique. The first session will focus on a few muscles that are key to your problems. These key areas can give you excellent relief with less soreness. Subsequent treatments will target more specific areas to fine-tune the effect. Sessions are usually spaced 5-7 days apart and you should expect to feel a marked difference after only 2 or 3 sessions.
## How Will I Feel After Dry Needling?
You will know positive change has occurred right after the session, as you should have decreased pain and increased mobility and because you will be sore in the way that you would feel after a heavy work out. The muscle will feel fatigued, and the soreness can last from a few hours to 1 or 2 days, but should not interfere with your everyday activities. We encourage you to be active during this time to keep the soreness to a minimum. You can continue your normal activities and gym routine. After a day or so, you’ll experience a new and lasting feeling of less pain and tightness. The injury and pain you thought was there to stay will actually start to diminish. If you’ve been having long standing, chronic muscle pain we’d welcome the opportunity to explain this treatment option in more detail and answer any questions you may have.
## Fed up of Being Injured? Will Dry Needling Work for You?
There is nothing a runner hates more than being told not to run.
Unless you find a medical professional you can trust, it is hard to know whether it actually would be best to rest, or whether there is another treatment that is worth a try.
It seems every year there is a new type of treatment that emerges claiming to help injured runners get back sooner, but one that seems to be gaining a lot of momentum in the running world is dry needling.
Today we are going to look at what dry needling is (and how it is different from acupuncture), what research has found about how effective dry needling is as a treatment, whether it is worth giving it a try for your running injury, and, of course, the question we all want answered: just how bad does it hurt?
Curing a running injury by sticking needles into your skin might sound crazy, but it’s an increasingly popular treatment for many of the common ailments that distance runners suffer from.
The technique is called dry needling.
It’s quite similar to acupuncture in that it involves inserting several needles into your skin near the point of injury.
The proposed mechanism is a little murky, but many runners nevertheless swear by it.
## What is Dry Needling and is it Safe?
Dry needling is an injury treatment performed by physical therapists and chiropractors who have been certified to treat myofascial injuries.
It arose partially out of scientific studies into acupuncture and injections of medication like cortisone, but has developed into its own area of treatment, and continues to grow in popularity.
Your therapist will use individually packaged, sterile needles for one site only, and very little, if any, bleeding will occur, since the needles are only around 1/4 mm thick.
The practice is very safe, especially as the therapist must undergo intense training to become certified in the practice of dry needling, and will already have vast knowledge of the anatomy of the body to know where to insert the needles.
### How does trigger point dry needling work?
The therapist inserts needles through the skin into areas of muscle pain, known as trigger points, explaining the name trigger point dry needling, although the treatment is also called intramuscular manual therapy.
The name dry needling is used because there is no medication involved, and the solid filament needles are the same as the ones used in acupuncture, rather than the hypodermic syringes that traditional injections use.
The number of needles inserted will vary depending on the size of the area in pain, and how often you are able to get treatment. Most treatments will involve 5-20 needles, which are removed just a few minutes after insertion.
### What is the difference between acupuncture and dry needling?
This is one of the most common questions about the technique, and one that is still causing some friction between the two worlds.
Dry needling is part of a western medicine treatment technique that is supported by research, and used to target relief in specific areas. Acupuncture is based on traditional Chinese medicine, which treats the body as a whole, by creating balance within the body.
Therapists use dry needling to relieve pain by using the needles to release trigger points of specific muscles.
### How much does dry needling hurt?
Dry needling can be painful, and the location of the injury affects the amount of pain experienced, but it usually manifests in two ways:
As the needle is inserted through the skin into the muscle, there may be a slight contraction or twitch within the muscle, that creates pain.
Although twitches in the muscles can elicit an initial (but brief!) painful response, they are considered a good sign that the desired trigger point has been hit.
After the treatment itself, there may be some soreness in the area for up to 48 hours afterwards, but this is not considered a cause for concern, and should be expected for most patients.
If the pain is initially worse:
Don’t panic! Give the muscle 48 hours to calm down after treatment, and if it still feels bad, dry needling probably isn’t for you.
## How Effective is Dry Needling?
We’ll have to look to the scientific research to find out.
### Where did dry needling originate?
One way medical researchers test out treatments is to devise sham treatments to function as a control.
In traditional acupuncture, needles must be applied to specific locations on the body to achieve the intended effect.
To test acupuncture, researchers intentionally inserted needles into the “wrong” location to test whether the theory behind acupuncture holds any water.
Likewise, a “dry” injection (using a needle but injecting no medication) is sometimes used as a placebo treatment in studies on medical injections.
According to a review article by T. Michael Cummings and Adrian R. White, researchers first noticed that the needle itself seemed to have some therapeutic effect in the late 1970s [1].
In one study published in 2002, for example, true acupuncture (with proper needle placement) was compared with sham acupuncture (wrong needle placement) in a group of patients with muscle pain.
Both treatment groups experiencing muscle pain noticed substantial improvement following treatment, but there was no difference between true and sham acupuncture! [2]
After this and other similarly surprising discoveries, researchers began to consider dry needling as a treatment in and of itself more closely.
By 2005, enough evidence had accumulated to allow for a review study.
A group of researchers led by Andrea Furlan at the Institute for Work & Health in Canada looked over the scientific literature on dry needling and acupuncture for low back pain in English-language journals as well as Chinese and Japanese databases (acupuncture being an understandably popular research topic in Asian medical research) [3].
Though Furlan et al. did not find the bulk of the evidence strong enough for a ringing endorsement of dry needling or acupuncture, the authors did have a different conclusion about back problems.
There is some evidence of an effect for both acupuncture and dry needling, and the authors suggested that dry needling functions best as an adjunctive treatment, not as a standalone solution.
### Does dry needling help running injuries?
The scientific literature is more sparse when it comes to dry needling for athletic injuries, but there is some emerging evidence that it can be useful.
A 2010 report by Nichola Osborne and Ian Gatt described four elite female volleyball players with shoulder injuries who were successfully treated with dry needling treatments over the course of a month which included frequent competitions and tough training [4].
Osborne and Gatt hypothesized that the needling treatments deactivated trigger points in the muscle, allowing the players to regain the ability to practice and compete while undergoing the treatment.
Likewise, a 2007 study by Steven James and colleagues published in the British Journal of Sports Medicine described how 44 patients were successfully treated for patellar tendonitis via dry needling and injection of the patients’ own blood into the tendon [5].
In this case, the researchers used ultrasound imaging to insert a “dry” needle repeatedly into the degenerated area of the tendon.
This dry needling was followed by an injection of blood into the needled area.
## Why Does Dry Needling Work?
Here’s the deal:
While all of these studies show promise, they highlight one major problem with dry needling:
We don’t know how it actually works.
Some scientists hypothesize that it manipulates a mechanism called “gate control”—by focusing your nervous system’s response on the acute pain caused by the needle puncture, the chronic pain from your injury goes away.
Others propose that endorphins, mood-stimulating chemicals released in response to a painful stimulus like a needle wound, are responsible for alleviating pain.
Still another theory is that the needle deactivates muscular trigger points when inserted into the right location, relieving tension and referred pain.
Finally, the approach used by James et al. tries to take advantage of the direct trauma of the puncture, using the acute, local damage from the needle to kick-start the body’s healing process (aided in this case by a direct blood injection).
Clearly, much more research is needed on this front.
It’s likely that all of these responses play some part in how dry needling works.
The proper way to use dry needling may well depend on the injury it’s being used to treat.
### What injuries is dry needling most effective for treating?
A chronic, degenerative injury in tissue with poor healing capacity, like the Achilles tendon or the plantar fascia, might require the kind of repeated, direct trauma that was used in James et al.’s paper, possibly combined with external methods to boost healing, like autologous blood injections.
On the other hand, long-standing pain with nervous or muscular roots might benefit more from trigger-point targeted needling, or even just the rush of endorphins and redirection of pain that occurs in response to a needle wound.
### Is dry needling available everywhere?
Dry needling is available worldwide, but a few states in the US do not currently allow physical therapists and medical professionals other than licensed acupuncturists to use dry needling. This is because of an ongoing battle between physical therapists and acupuncturists.
This includes:
California, Utah, New York, Idaho, Hawaii and Florida.
In the remaining 44 states of the US, the argument has been settled, but if you live in any of the above states, you may need to seek alternative treatment.
Just keep this in mind:
There is no Current Procedural Terminology (CPT) code for dry needling itself, and many insurance carriers will not cover the treatment, so you have to decide if it is worth paying out of pocket for.
## What Does This Mean for the Injured Runner?
Here’s the bottom line:
Dry needling (or acupuncture) is best used as a tool to kick-start or speed up healing, not as a primary stand-alone treatment for running injuries.
You still need to get to the root of why you got hurt in the first place and how you can prevent that from happening again.
Further, it is best to seek dry needling from a medical professional you can trust with plenty of experience with the technique.
Even in the best case scenario, you can expect some significant soreness in the needled area, so you want to be sure the pain is worth it.
## What is Dry Needling?
Dry Needling is a treatment technique whereby a sterile, single-use, fine filament needle (acupuncture needle) is inserted into the muscle to assist with decreasing pain and improving function through the release of myofascial trigger points (knots in the muscle).
### What is the Difference Between Dry Needling and Acupuncture?
Dry Needling is a not the same as acupuncture, although there are similarities between the two techniques. The main difference between Dry Needling and acupuncture is the theory behind why the techniques work. Dry Needling is primarily focused on the reduction of pain and restoration of function through the release of myofascial trigger points in muscle. In comparison, acupuncture focuses on the treatment of medical conditions by restoring the flow of energy (Qi) through key points in the body (meridians) to restore balance.
### What is a Myofascial Trigger Point?
A myofascial trigger point, also known as a knot in the muscle, is a group of muscle fibres which have shortened when activated but have not been able to lengthen back to a relaxed state after use. A myofascial trigger point is characterised by the development of a sensitive nodule in the muscle (Simons, Travell & Simons, 1999). This occurs as the muscle fibres become so tight that they compress the capillaries and nerves that supply them (McPartland, 2004; Simons, et al., 1999). As a result, the muscle is unable to move normally, obtain a fresh blood supply containing oxygen and nutrients, or flush out additional acidic chemicals (McPartland, 2004; Simons, et al., 1999). In addition to this nodule, the remainder of the muscle also tightens to compensate (Simons, et al., 1999; Simons, 2002). The presence of a myofascial trigger point in a muscle can lead to discomfort with touch, movement and stretching; to decreased movement at a joint; and even a temporary loss of coordination (Simons, et al., 1999).
### What Causes a Myofascial Trigger Point?
A myofascial trigger point develops as part of the body’s protective response following:
• injury – the muscle will tighten in an attempt to reduce the severity of an injury;
• unexpected movements e.g. descending a step that is lower than originally anticipated;
• quick movements e.g. looking over your shoulder while driving;
• change in regular activity or muscle loading e.g. an increase in the number or intensity of training sessions for sport;
• sustained postures e.g. prolonged sitting for work or study;
• nerve impingement – the muscle will tighten to protect the nerve;
• stress;
• illness (bacterial or viral);
• nutritional deficiencies, or;
• metabolic and endocrine conditions.
(Simons, et al., 1999)
### How does Dry Needling Work?
Dry Needling assists with decreasing local muscular pain and improving function through the restoration of a muscle’s ability to lengthen and shorten normally by releasing myofascial trigger points.
When a fine filament needle is inserted into the center of a myofascial trigger point, blood pools around the needle triggering the contracted muscle fibers to relax by providing those fibers with fresh oxygen and nutrients, as well as by flushing away any additional acidic chemicals. This, in turn, leads to the decompression of the local blood and nerve supply.
### When is it Appropriate to Use Dry Needling as a Form of Treatment?
Dry Needling can be used in treatment:
• to help release myofascial trigger points (muscle knots);
• to assist with pain management, and;
• to restore movement at a joint if inhibited by myofascial trigger points.
### What Will You Feel During Dry Needling Treatment?
During a Dry Needling treatment, you may feel a slight sting as the needle is inserted and removed. However, this discomfort should last no longer than a second before settling.
A brief muscle twitch can also be experienced during a Dry Needling treatment. This may occur during treatment when the needle is inserted into a myofascial trigger point.
### Where Does Dry Needling Fit Within Your Rehabilitation Program?
Dry Needling is one of many techniques that can be utilised by your physiotherapist to assist with your rehabilitation. Dry Needling is often used in combination with other techniques including massage, manual therapy, and exercise prescription.
### What are the Side Effects of Dry Needling?
Every form of treatment can carry associated risk. Your physiotherapist can explain the risks and can determine whether Dry Needling is suitable for you based on your injury and your general health.
When Dry Needling is performed, single-use, sterile needles are always used and disposed of immediately after use into a certified sharps container.
### Is Dry Needling Safe?
Everybody is different and can respond differently to various treatment techniques, including Dry Needling. In addition to the benefits that Dry Needling can provide, there are a number of side effects that may occur, including spotting or bruising, fainting, nausea, residual discomfort or even altered energy levels. However, these symptoms should last no longer than 24 to 48 hours after treatment.
### Can You Exercise After Dry Needling?
It is recommended to avoid strenuous or high impact activities immediately after Dry Needling, to allow the body time to recover, and to maximise the benefits of the treatment.
At PhysioWorks, most of our physiotherapists are qualified and skilled in Dry Needling and would be happy to discuss your treatment options.
## Research Studies Show Dry Needling Has No Long-Term Benefits
An increasing number of physical therapists (PTs), in the United States and throughout the world, are incorporating the dry needling (DN) technique to treat mainly musculoskeletal pain (MSP). So much so, the Federation of State Boards of Physical Therapy (FSBPT), in their most recent DN resource paper (1), dated July 2013 (pp2), state that “The volume of activity in the states from 2010-2013 regarding DN … has necessitated annual updates of the FSBPT original resource paper published in March 2010.”
Let me start by saying, researching this topic was kinda frustrating!! I like to understand a topic thoroughly before I even begin to think about writing about it. It wasn’t so easy, in this case. From clarifying the difference, if it exists, between the techniques of DN and acupuncture, to reviewing and comparing the available literature on DN with the varying methodologies (and conflicting commentaries among colleagues and professionals). It was even tricky clarifying in which U.S. states the DN technique is currently not considered within the practicing scope of PTs and, therefore, illegal.
I will take you through what I have managed to find out, though the ride may be a little “knotty”. The purpose of this article is to examine the current research on DN and whether it’s effective or to be avoided.
## What Is Dry Needling
The DN definition, according to the FSBPT (1) (pp4), is “a technique using the insertion of a solid filament needle, without medication, into or through the skin to treat various impairments including, but not limited to: scarring, myofascial pain, motor recruitment and muscle firing problems. Goals for treatment vary from pain relief, increased extensibility of scar tissue to the improvement of neuromuscular firing patterns.”
According to the FSBPT (1) (pp3), DN is known synonymously as intramuscular manual therapy (IMT), trigger point dry needling (TDN), or intramuscular needling (IN). The American Physical Therapy Association (APTA) had initially recommended using the term ‘IMT’. Since late 2011, though, it advocates using the term ‘DN’.
Interestingly, many groups still debate the proper term, and exact definition, to describe the technique.
Keeping up? 🙂
The ‘dry’ part refers to the fact that nothing is injected with the needle, as opposed to its opposite of “wet needling” which injects medications such as steroids, etc.
Mostly, it is used to treat MSP through the targeting of ‘myofascial trigger points’ (MTrPs). You’ll find this term used a lot in the DN literature. MTrPs are described as localized, hyper-sensitive, hyper-irritable spots in a taut band of muscle. They can be active MTrPs, when they produce spontaneous pain, or latent MTrPs that do not produce spontaneous pain and are only painful when touched.
DN can be divided into deep and superficial DN. Deep DN has allegedly been shown to inactivate MTrPs by eliciting local twitch responses, a.k.a. muscle contractions. This process is said to activate endogenous opioids, thereby relaxing the muscle.
MTrPs are often claimed to be a key cause of MSP. Problematic, however, is the finding (2) from a recent systematic review that “physical examination cannot currently be recommended as a reliable test for the diagnosis of trigger points” and that the reliability of MTrP diagnosis needs further investigation by high-quality studies!
## DN versus Acupuncture
Is there a difference, apart from the fact that one uses hand gloves during treatment? (that would be the PTs during DN, for a heads up).
Now this topic, I have discovered, seems to be quite a sensitive area. Understandably so – one has been around for centuries while the other is relatively new. Both seemingly perform a similar needling technique?! Will one potentially lose some of their clientele.. and mystique.. to the other?
In Team Acupuncture Corner. According to Fan et al (3), DN is being promoted “by simply rebranding (a) acupuncture as DN and (b) acupuncture points as trigger points.” Strike! DN touted as an “over-simplified” version of acupuncture except for “emphasis on biomedical language”, using English biomedical terms to replace their equivalent Chinese medical terms. Strike 2! “Trigger points belong to .. traditional Chinese acupuncture, and they are not a new discovery”. Strike 3! “For patients’ safety, DN practitioners should meet standards required for licensed acupuncturists and physicians.” .. and DN is out!
In Team DN Corner. The FSBPT paper (1) (pp4) states that the method by which acupuncture works is completely different, “based on a theory of energetic physiology”, focusing on the energy meridians or flows, and the unblocking of these pathways. Further, unlike acupuncture, the paper clarifies that PTs do not use DN to treat conditions such as fertility, smoking cessation, allergies, depression or other non-neuro-musculoskeletal conditions. They also identify an important distinction in that acupuncture is an entire discipline and profession, whereas DN is merely one technique which should be available to any professional with the appropriate background and training. Strike, strike, strike!
### Who wins this battle?
Taking the middle ground, the FSBPT (1) (pp6) finally state that, “the accepted premise must be that overlap occurs among professions. The question (for state licensing) … should only be whether or not DN is within the scope of practice of PT, not determining whether it is part of acupuncture.”
Fair enough, I guess.
## Is Dry Needling Legal Everywhere?
This brings us to the important topic of the legality of DN in the U.S.
Key organisations, such as the A.P.T.A., and other health professions claim that DN is within their “scope of practice”. However, each U.S. state has its own rules, regulations and guidelines concerning whether to permit the practice of DN. It is the responsibility of each therapist to be aware of these, and to practice within the grounds of their state, and professional, license.
The FSBPT paper (1) (pp9) specifies in which U.S. states DN is legally permitted, within the scope of practice for PTs, as at July 2013.
I have managed to source a more current map. At the time of writing, according to this information, the states that have made a ruling against DN being within the scope of practice for PTs (and therefore being illegal) are: California, Florida, Hawaii, New Jersey, New York, Pennsylvania, South Dakota, and Washington. According to my reading, this is “mainly due to verbiage in the practice act against puncturing the skin.” Notably, since the 2013 FSBPT paper, Utah has legalized the DN procedure in their state.
However, there are other categories indicating the practice is not necessarily yet legal in the state. They include the categories of ‘unclear or conflicting standards’ (Idaho, Michigan, Minnesota, Oregon, Texas) and ‘unknown’ (Connecticut, Massachusetts, Missouri, Oklahoma).
## How Much Research Is There on Dry Needling?
Not a great deal. A PubMed search of the terms, ‘dry needling’, ‘trigger point dry needling’, or ‘intramuscular needling’, limited to the English language, humans and spanning from the year 2000 to present, yields about 150 results. Of these, 12 are meta-analyses and 29 are systematic reviews.
The relatively low number, and overall quality, of studies reporting on DN performed by PTs, at this time at least, coupled with the high variability found in the results of the meta-analyses, made it challenging to piece this article together.
One 2015 systematic review (4) of “high-quality”, randomised, controlled trials (RCTs), concluded a “broad applicability of TDN treatment for multiple muscle groups”. Yet, it was swiftly challenged a couple of months later in a piece (5) entitled ‘TDN: the data do not support broad applicability or robust effect.’ stating “This strongly worded conclusion overstates the findings of the actual data and misleads casual readers into believing that the research supporting TDN is quite robust. We contend that it is not.” It then goes on to list seven potential limitations of the individual data, from the so-called high-quality RCTs, included in the said review.
Get the idea?!!
Compare that, say, to the amount of research on a better-known form of physiotherapy such as ‘hydrotherapy’: I retrieved over 300 results, double the amount. Of these, 54 are systematic reviews.
## What Has The Research Concluded?
### Does Dry Needling Help Facial Wrinkles?
Even though DN may involve (6) the insertion of needles into subcutaneous fascia, I was not able to find any credible research here. Not very biologically plausible, either, especially in the longer term. Skin wrinkling, as explained by the Mayo Clinic, is largely the result of a permanent breakdown of the skin’s connective tissue: collagen and elastin fibers.
Bottom Line: There is no evidence that DN improves facial wrinkles.
### Does Dry Needling Effect Scar Tissue?
Here, I could not find any research supporting DN in the treatment of scar tissue, even though it is known (6) to be used for such cases. One article (7) did suggest caution in applying DN in surgical patients, after the surgical scar of a recent patient showed signs of inflammation 2 weeks following DN treatment. Yikes.
Bottom Line: There is no evidence that DN improves scar tissue.
### Does Dry Needling Help Shoulder Pain or Frozen Shoulder?
There are a dozen or so research papers, of which one is a fairly recent (2015) meta-analysis (8) and systematic review. It reviews the topic of DN of the MTrPs associated with both neck and shoulder pain. Guess what? They found no long-term benefits, but stated that it “can be recommended … in the short and medium term”. However, wet needling was more effective than DN in the medium term.
The most recent research paper (9) (Jan 2017) is an RCT of 120 patients with non-specific shoulder pain. It found no benefit offered by DN over other, personalized, evidence-based physical therapy treatment.
Bottom Line: There is weak evidence that DN may offer short- to medium-term benefits for shoulder pain.
### Does Dry Needling Help Back Pain or Sciatica?
Again, there are almost a dozen research papers investigating DN in this context. I found no meta-analysis papers and one systematic review (10), from 2005. Although it identified most of the studies were of low-quality, and highlighting the need for high-quality trials in this area, they concluded that “The data suggest that … DN may be useful adjuncts to other therapies for chronic low back pain (LBP)”.
The results of a more recent RCT (11), of a whole 12 patients, “suggest that MTrP DN was effective for improving pain, disability … and widespread pressure sensitivity in patients with mechanical LBP at short-term”.
Convincing??!
Bottom Line: There is weak evidence that DN may assist with LBP in the short-term.
### Does Dry Needling Help Plantar Heel Pain (Plantar Fasciitis)?
There is minimal, weak evidence supporting DN of MTrPs in the treatment of this condition. According to Cotchett et al (12), “in patients with plantar heel pain this technique is thought to improve muscle activation patterns, increase joint range of motion and alleviate pain.”
The same team conducted the first (and, at this time, only) RCT (13) of 84 patients with plantar heel pain, in a University health-sciences clinic setting. They concluded that DN “provided statistically significant reductions in plantar heel pain”, when compared to sham DN. This study was published in the peer-reviewed journal Physical Therapy, the official journal of the APTA. It currently has a low impact factor of 2.8.
A response (5) identified a number of issues with the findings of this particular RCT, making “the results’ clinical relevance questionable” and stating their conclusion to be “dubious at best”.

Bottom Line: Evidence of benefits derived from DN in the treatment of plantar heel pain is weak, minimal and questionable.
### Does Dry Needling Induce Labor?
I was not able to find any evidence that DN assists with childbirth. As there is currently little high-quality evidence in support of DN, it is probably best avoided in this situation, at this point.
Bottom Line: No current evidence supporting DN to induce childbirth.
### Does Dry Needling Help Arthritis?
I was not able to find any credible evidence that DN assists with the pain associated with arthritis. It is not likely to be biologically plausible for DN to (a) minimize the joint inflammation associated with rheumatoid arthritis, nor (b) rebuild any of the cartilage ‘wear’n’tear’ contributing to the arthritic pain associated with osteoarthritis.
Bottom Line: No current evidence supporting DN in the treatment of arthritis.
### Does Dry Needling Help Tennis Elbow?
Credible evidence that DN assists with the pain associated with tennis elbow, yet again, eludes me here.
Bottom Line: No current evidence supporting DN in the treatment of tennis elbow.
## Precautions & Side Effects To Dry Needling
There are certain precautions to be considered with the use of DN:
1. Patients need to be able to give consent to the procedure.
2. Local skin lesions must be avoided.
3. Local or systemic infections are generally considered to be contraindicated.
4. Local lymphedema (note: there is no evidence that DN would cause or contribute to increased lymphedema, ie, postmastectomy, and as such is not a contraindication).
5. Severe hyperalgesia or allodynia may interfere with the application of DN, but should not be considered an absolute contraindication.
6. Some patients may be allergic to certain metals in the needle, such as nickel or chromium. This situation can easily be remedied by using silver or gold plated needles.
7. Patients with an abnormal bleeding tendency, ie, patients on anticoagulant therapy or with thrombocytopenia, must be needled with caution. DN of deep muscles may need to be avoided to prevent excessive bleeding.
8. Patients with a compromised immune system may be more susceptible to local or systemic infections from DN, even though there is no documented increased risk of infection with DN.
9. DN during the first trimester of pregnancy, during which miscarriage is fairly common, must be approached with caution, even though there is no evidence that DN has any potential abortifacient effects.
10. DN should not be used in the presence of vascular disease, including varicose veins.
11. Caution is warranted with DN following surgical procedures where the joint capsule has been opened. Although septic arthritis is a concern, DN can still be performed as long as the needle is not directed toward the joint or implant.
In terms of side-effects:
Brady et al (14), investigated the safety of the DN procedure performed by a sample of 39 physiotherapists. They found that while mild adverse-effects were commonly reported, no significant adverse-effects occurred.
The importance of a suitably qualified therapist is, of course, most important.
## Conclusion
Following a good few weeks of research, I would think that, at best, DN as a therapeutic treatment is somewhat dodgy! Wouldn’t you by now? There is currently no evidence (15) of any long-term benefit derived from DN, for a start. Whatever short-term benefit is suggested from DN generally comes from low- to medium-quality evidence concluding that DN is better than “no treatment or sham needling.”
Now, in all honesty, would you be willing to go under the knife… or, should I say, the needle… for that?
## Dry Needling FAQs – Southern Rehab & Sports Medicine
Dry needling has become increasingly popular over the years as a form of pain management. See if this technique is right for you.
#### Does dry needling hurt?
Dry needling uses very thin needles that are typically not painful. Because the needles do not contain medication, they can be about 8x thinner than those used for vaccines at your doctor's office. While some areas may be more tender than others, dry needling typically does not cause more pain than your current symptoms.
#### Does dry needling make you tired?
Drowsiness, tiredness, or dizziness occurs after treatment in a small number of patients (1-3%). If this affects you, you are advised not to drive until you feel you are at your baseline.
#### How effective is dry needling?
Dry needling from a clinical perspective has proven to be an extremely useful tool, most notably in patients who have long-term or chronic muscle tightness that does not resolve with independent stretching or strengthening programs. Dry needling research supports its effectiveness in regards to relieving pain, muscle spasms, and muscle tightness by decreasing trigger points in the muscle.
#### How is dry needling different from acupuncture?
The purpose of acupuncture is to alter the flow of energy (“Qi”) along traditional Chinese meridians for the treatment of pain and dysfunction. Dry needling has an anatomy-specific focus as needles will be inserted directly into the tight muscle rather than up and down the path of its energy.
#### How long do dry needling benefits last?
Length of relief will vary from person to person. With initial treatments, results typically last several days. With each additional treatment, the goal is to increase the window of relief, meaning longer-lasting relief with each session.
#### How long does dry needling take to work?
Although you may have some soreness after a dry needling session, you will often notice some improvement in your symptoms within 24-48 hours. Again, the intensity of this improvement will ideally increase with each additional session. As you adjust to the treatment, post-treatment soreness will tend to decrease and your results will often be more noticeable directly after your session.
#### How many dry needling sessions do I need?
In acute pain situations, only one session may be needed. For more chronic pain situations, it may take several treatments to notice a change. Because dry needling can have a cumulative effect, if you do not notice results after the first session, we typically recommend 2-3 treatments before deciding to pursue other options. We tend to start your dry needling plan at 1x/week, with the goal of increasing the length between sessions as we go along. A therapist will discuss your individualized plan of care in regards to your dry needling plan at your first session.
#### How long is a dry needling session?
Dry needling alone is typically a 30-minute session.
#### What do you wear to a dry needling session?
Wearing loose-fitting clothes tends to be the best option when you are expecting to undergo dry needling. This will allow the therapist to access the muscles while allowing you to stay comfortable, as you are typically lying on a treatment table for 15-20 minutes. Shorts are easiest when attempting to access the muscles of the leg, and loose-fitting pants can help your therapist access the muscles of the low back and hips.
#### Is dry needling dangerous?
Dry needling is very safe with serious side effects occurring in less than 0.01% of treatments. Your therapist has undergone additional training after their doctorate degree to be certified in dry needling and is an expert in regards to anatomy and minimizing the risk associated with this intervention.
#### What does dry needling cost?
The cost will be based on your insurance plan, which our office will be happy to discuss after verifying your insurance. You will be provided with a handout prior to your session that will outline your expected costs. Dry needling sessions can also be purchased separately at a rate of $65/session as a self-pay plan.
#### What is dry needling good for?
Dry needling can be used for a wide variety of conditions to treat pain and dysfunction of the musculoskeletal system. This includes but is not limited to: neck pain, back pain, shoulder impingement, tendinitis, tennis elbow, carpal tunnel, headaches, knee pain, arthritis, shin splints, and plantar fasciitis. Your therapist will be able to guide you in your first session as to whether or not dry needling will be beneficial for your symptoms.
#### Who benefits from dry needling?
Anyone experiencing pain can benefit from dry needling from athletes attempting to return to their sport, to those who have suffered from motor vehicle accidents. If you are unsure if dry needling will benefit you, feel free to call our office or reach out on our contact page for more information.
For further information, or to schedule your appointment, please fill out the form below.
## Dry Needling: The Most Painful Thing I’ve Ever Loved
By AshleyJane Kneeland, Special to Everyday Health
Imagine you have pockets of a highly pressurized, toxic gas caught in your shoulders. Now imagine these painful pockets keep growing in size, every hour of the day, inflaming your muscles and pinching your nerves. And, as a result you’re miserable — grasping to keep your sanity.
Then imagine you find a medical professional willing to puncture your skin and release all that painful pressure — after which some soreness remains, although it’s nothing compared to the blessed relief you’re now feeling.
That procedure, dry needling, is what works best for me and the painful spasms that course through my shoulders. The needles deflate my muscle spasms, which feels like air rushing out of an overfilled balloon. It is, without a doubt, the most painful thing I’ve ever loved.
Prior to my relationship with dry needling, I had an on-again-off-again fling with trigger point injections. These injections provided six to eight weeks of relief. But the liquid injected contains a steroid, so the injections weren’t a viable long-term plan. Because what I’m dealing with is a long-term problem, my shoulders and I moved on to dry needling, which involves no injections, just a bit of brutal poking.
## Three Illnesses, All Causing Pain
A person must be pretty desperate to continually pursue such sharp methods of treatment, right? My despair stems from three illnesses that overlap each other in a Venn diagram kind of way. I made this diagram (left) to show how my lupus, fibromyalgia, and postural orthostatic tachycardia syndrome (POTS) symptoms overlap.
As a kid, I lived with chronic head and body aches. Assuming I was just wimpy and couldn’t handle the daily pressures of life, I carried ibuprofen around in my backpack and prayed for a morning math class, before the pain made it hard to concentrate. I played sports (not very well), competed in mock trials (deciding my true calling in life must be something involving dress suits and heels), and served as a U.S. Senate page (my crowning achievement, which I compulsively must mention when discussing the earlier years of my life). Chances are by now my fiancé and his daughter assume this part of my life is some kind of “Big Fish” story, but I swear, I really was Head Page. Twice. Which is one more than the number of friends I had.
Throughout all this I continued my reckless reliance on ibuprofen, which most definitely explains why doctors later found three bleeding holes in my stomach. Then, during my senior year of college, extreme fatigue kicked in and I was no longer able to physically get myself to classes. In the 10 years since then, I’ve also experienced mouth and vulvar ulcers, joint stiffness, migraines, severe GI cramping, tachycardia, and an infuriating intolerance to regular amounts of daily activity.
## Why I Tried Dry Needling
What brought me to dry needling were muscle spasms in my neck and shoulders. Creams, patches, muscle relaxers, opiates and heated pool therapy sometimes help, at least temporarily; but new spasms are always appearing, seemingly triggered by everything and nothing at the same time. Of all of the treatments I’ve tried, dry needling has been the most effective.
The procedure goes a little something like this: After I lie down on a massage table, my physical therapy doctor inserts a thin-filament needle directly into the muscle that is currently tight or spasming. Then she jiggles the needle up and down until my muscle responds with a twitch. The purpose of this twitch is to disrupt the “neurological feedback loop” that keeps the muscle in a contracted state of pain. It’s almost like the spasm is treated with another spasm. However, this intentional spasm results in a release of pressure.
(Dry needling uses needles similar in size to the needles used for acupuncture treatments but, unlike acupuncture, dry needling is not a traditional Chinese medicine technique. Instead of inserting needles into the “energetic pathways” defined by traditional Chinese medicine, dry needling practitioners insert them directly into the muscles and nerve pathways causing the pain.)
In addition to muscle spasms like mine, dry needling has been used to treat conditions including headaches, lower back pain, sciatic pain, temporomandibular joint dysfunction (TMJ), and tendonitis. Dry needling hurts, but for me the hurt is worth it. Naturally, the amount of pain involved in the procedure varies for different people and their trigger points. Because the knots in my shoulders are so severe, I find dry needling extremely painful. I walk out of the office feeling like my nerve endings have been cut and exposed to air. A few hours later, that sensation passes, and my shoulders are noticeably more relaxed. Over time — two appointments a week for six weeks — most of my spasms, and their resulting headaches, fade away.
## 6 Things I’ve Learned About Dry Needling
Being desperate for pain relief, I’ve tried many things over the years. I recommend dry needling because, despite the discomfort, it produces long-lasting results. If you have severe pain and want a pleasant experience, get a massage. If you want results, commit to dry needling. Here are six things I’ve learned:
1. Schedule medications wisely. If you take Tylenol (or something stronger) at regular intervals, schedule a dosage for right before your appointment. I find the less I’m clenching my muscles, the more effective those helpful “twitches” are.
2. Keep it loose. After your appointment, resist the urge to curl into a ball like an overwhelmed hedgehog. The more you move around, the looser you will be, and the quicker the pain will dissipate.
3. Find a physical therapist who is good at the procedure. After shopping around, I was able to find a physical therapy practice that accepts my insurance and bills dry needling a specific way so that my insurance covers it in full. Don’t give up just because one practice tells you it isn’t covered. (Not all physical therapists can practice dry needling because PT license requirements vary from state to state, and the technique is not yet fully accepted. MDs, DOs, and acupuncturists can practice dry needling, but many are not trained.)
4. Plan your outfit accordingly. My particular impairment favors tube tops layered under a zip-up or button-down shirt. My outfit choice allows for easy access to my shoulders, and makes it easier to get dressed after the appointment.
5. Follow your doctor or therapist’s orders. Be diligent with the daily stretches your physical therapist assigns. These exercises can make the effects of dry needling last longer. Be gentle when exercising. Stretching aggressively can make things worse.
6. Make dry needling work for you, not against you. It’s okay to say, “I only want four needles today.” If you overdo it, your body will burn out, the pain will be overwhelming, and the process won’t be effective.
AshleyJane Kneeland, 32, lives in New Hampshire with her fiancé and his daughter. She works part-time as a bookkeeper and substitute teacher. She is the author of an Amazon Kindle book entitled, Living Incurably Despite Chronic Illness. You can follow her on Twitter and Instagram.
If the thought of lying on a table and being poked by tiny needles makes you feel uneasy, you’re not alone. But a growing number of people – from athletes to people with injuries or chronic pain – swear by its ability to provide sweet relief for intense muscle pain and mobility issues.
Dry needling trigger point therapy has been used for decades, but it’s become an increasingly popular drug-free way to treat musculoskeletal pain.
It’s almost always used as part of a larger pain management plan that could include exercise, stretching, massage and other techniques, says clinical rehabilitation manager Adam Kimberly, PT, DPT, OCS. But it can play an important role in muscle recovery and pain relief.
How does dry needling work? It uses thin, dry needles — “dry” in the sense that they don’t inject anything into the body — that are inserted through the skin into the muscle tissue.
“Our main focus is muscle and connective tissue and trying to restore mobility,” he says.
It is performed by some physical therapists, acupuncturists, chiropractors or medical doctors who receive training in the technique.
### Triggering relief
When they’re overused or strained, muscles can develop knotted areas called myofascial trigger points that are irritable and cause pain.
“An overused muscle undergoes an energy crisis where, because of prolonged or inappropriate contraction, the muscle fibers are no longer getting adequate blood supply,” he says. “If it’s not getting that normal blood supply, it’s not getting the oxygen and nutrients that will allow the muscle to go back to its normal resting state.”
The tissue near the trigger point becomes more acidic, and the nerves are sensitized, which makes the area sore or painful.
Stimulating a trigger point with a needle helps draw normal blood supply back to flush out the area and release the tension, Kimberly says. The prick sensation can also fire off nerve fibers that stimulate the brain to release endorphins – the body’s own “homemade pain medication.”
To locate a patient’s trigger points, a therapist palpates the area with his or her hands. A trigger point map that notes common places in the body where trigger points emerge can be helpful, but every patient is a little different. “That’s where the clinician skill comes in – to palpate the area and locate the trigger point,” Kimberly says.
Once a trigger point is located, the therapist inserts a needle through the skin directly into it. He or she might move the needle around a bit to try to elicit what’s called a local twitch response, which is a quick spasm of the muscle. This twitch can actually be a good sign that the muscle is responding.
Some patients feel improvement in their pain and mobility almost immediately after a dry needling session, Kimberly says. For others, it takes more than one session.
Regardless, it’s important to continue to keep the affected muscles loose by continuing to move them within their new range of motion after treatment, he adds. There can be some soreness for 24 to 48 hours afterward.
### Is it right for you?
Your provider can advise you on whether dry needling could be a helpful addition to your treatment plan for muscle recovery, mobility issues, or acute or chronic pain.
“Needling is just a component of the therapy process,” Kimberly says. “It’s not everything, and it’s not the be-all, end-all for everyone.”
Temporomandibular joint (TMJ) pain is a challenging diagnosis to manage. TMJ pain and dysfunction can be caused by an increase in tone to local mastication (chewing) muscles including the pterygoids and masseter as well as a dysfunction within the joint capsule and articular disc. People with TMJ dysfunction may have complaints of pain, clicking/popping, dull ache, headaches, or earaches.
The first steps to alleviating the pain may include reduction in excessive mastication (reducing hard foods or “chewy” foods from diet), wearing a night guard, and/or physical therapy. Physical therapy is a great resource for those who have not seen a reduction in pain with changes in daily routines.
As physical therapists, we are trained in proper techniques to mobilize immobile joints, stretch inflexible tissue, and strengthen weakened muscles. This applies to all joints including the TMJ.
During the evaluation, your physical therapist will assess both your TMJ and your cervical spine (neck). The cervical spine often plays a major role in the function of your TMJ: the positioning of the upper cervical spine can affect the position of your jaw when you are sitting and thus can affect your chewing. We will assess your cervical joints and TMJ mobility, range of motion in your neck and jaw, muscle strength and tenderness to palpation. As with any injury, we will treat the impairments that we find and focus our treatments around your goals. This may include joint mobilization of the upper neck and/or TMJ, strengthening exercises for the cervical stabilizers, stretching of the muscles of the upper neck and those that attach near the TMJ, and education. Your physical therapist may also include dry needling if it is prescribed by your dentist or physician.
Dry needling includes the insertion of fine needles into myofascial structures to help reduce hypertonicity, pain, and inflammation. For TMJ pain, your physical therapist may insert needles directly into the muscles of mastication, which include the pterygoids and masseter, as well as into the joint capsule. She/he may also insert them into the suboccipital muscles (upper neck). Depending on the level of severity, your physical therapist may hook the needles to an electrical stimulation unit to break down the tissue more. The side effects may include local soreness and redness. Dry needling is a safe and effective way to treat TMJ dysfunctions.
If you or someone you love is experiencing TMJ pain with little relief, call us today to schedule your appointment. We can treat most insurances for up to 30 days without a prescription. However, if you need dry needling, (which we recommend!) you will need a prescription from your dentist, family practitioner, or other referral source.
## Discussion
Temporomandibular pain of myofascial origin is a condition often referred for evaluation to outpatient clinics of Oral and Maxillofacial Surgery Departments. In more than 90% of cases no underlying cause is found, which is why these pains are diagnosed as nonspecific or uncomplicated; their treatment is still under study (2,6,11).
Vazquez-Delgado et al. have reviewed the pathophysiologic, clinical and therapeutic aspects of the myofascial pain syndrome (12,13). The usual treatment of temporomandibular myofascial pain in our working environment is a combination of pharmacological and splint therapy, which produces temporary relief. However, pharmacological treatments soon reach the limit of therapeutic efficacy and they are also associated with side effects (gastrointestinal disorders, drug interactions, and adverse reactions), so the current trend is the search for alternative treatments (14-16).
Clinical guidelines and protocols about temporomandibular disorders recommend the management of myofascial pain from a multidisciplinary approach (3,4). However, within the available treatments for muscle pain, there is a paucity of studies about the effectiveness of TP needling in masticatory muscles, which is why we set the objective of studying the DDN of myofascial TPs in the external pterygoid muscle. The mechanism of inactivation of a TP by needling is unknown, although we consider the presence of a local twitch response during DDN important, because it has a proven relationship with the desired therapeutic effect (6,17). Apparently, the mechanical disruption of tissue caused by needle insertion constitutes the specific therapy, but the mechanism of action by which the TP is disabled is unknown. According to Travell, it would be the mechanical disruption of the self-sustaining mechanism of the TP, due to the membrane depolarization of the nerve fibers provoked by intracellular potassium release and interruption of the central feedback mechanism (1,7). According to Hong, the most reasonable explanation seems to be a neurological mechanism (18), because pain relief after DDN often occurs within a few seconds, as we also observed in our study. Probably, local mechanical disruption or reflex central interruption are the most likely mechanisms for breaking the vicious circle of the myofascial TP phenomenon (11,14,16).
Since the external pterygoid muscle is difficult to access with stretching techniques and manual handling, needling of its TPs may be necessary. The critical importance of this muscle as an origin of temporomandibular myofascial pain makes it worthwhile to develop the skills needed for invasive treatment by TP needling. The external approach (extraoral DDN) allows the needling of the central TPs in the muscle bellies of the two divisions of the muscle and of the insertional TPs at the posterior myotendinous junctions of both divisions. The correct location of the muscle mass is essential for the technique we use, so interference from the nearby bony structures (zygomatic arch, coronoid process, condyle and sigmoid notch of the mandible) must be eliminated. For this reason there is no need to carry out the electromyographic control described in the study by Koole et al. (10), a complicated and uncomfortable technique that we do not think is necessary when treating outpatients with DDN, in whom the TP can be detected by careful palpation and pressure sensitivity. Taut bands and TPs in the masseter muscle are other potential sources of interference, since they can make it difficult to recognize pressure sensitivity over the external pterygoid muscle, which lies in a deeper layer. Masseter taut bands are more superficial and oriented almost perpendicularly to the fibers of the external pterygoid muscle, making them easily distinguishable. Hypersensitive TPs in the masseter muscle must be inactivated before treating TPs in the external pterygoid muscle by DDN, as we did in 4 cases in our study.
In our patients we did not use an intramuscular puncture technique to inject local anesthetics, corticosteroids or botulinum toxin, because DDN is just as effective for myofascial pain caused by TPs in the pterygoid muscle, and the myotoxic effects of infiltration with such drugs are thereby avoided. Myotoxicity is strongly related to the concentration of anesthetic injected, especially for long-lasting anesthetics. The use of epinephrine in concentrations of 1:100,000 or more can increase the muscle damage caused by local anesthetics. The myotoxicity of botulinum toxin type A is irreversible: it binds to the presynaptic cholinergic nerve terminals and interrupts any neurogenic contraction of muscle fibers mediated by the affected motor plates. This chemical denervation maintains the paralysis of the muscle for a period of 3 to 6 months, until new axons sprout from a motor nerve and form new synaptic contacts that restore normal neuromuscular junction function in the affected muscle fibers (17).
For the treatment of symptoms associated with TPs, DDN is as effective as the infiltration of anesthetics, provided it elicits a local twitch response, which occurs when the needle is inserted into the active loci of the TP. The reverse is also true: if there is no local twitch response, DDN and the infiltration of anesthetics are equally ineffective. Another reason to prefer DDN over local anesthetics is that dry needling allows all TPs in an area to be located, since the pain reaction to palpation is preserved.
The results of our study show that the greatest reduction in the magnitude of pain was achieved in patients who started from a more unfavorable clinical situation with intense pain, so it was expected that pain improvement would be more evident in these patients. We observed that patients who had significant pain before starting treatment (values of 8 to 10 on the visual analog scale) commonly had a reduction of 6 points, while in those who started with mild pain (values below 6) the expected reduction was 4 points or less. These findings are consistent with the results published by other authors who performed intramuscular infiltrations after muscle needling (18,19). Fernández-Carnero et al. conducted a study of DDN in masseter muscle TPs in a group of 12 women, obtaining a good therapeutic outcome measured as an increase in the pressure pain threshold with an algometer (15). Bubnov examined ultrasound-guided DDN, which increases the accuracy of TP needling through visual verification, and obtained pain reduction in 93.3% of a group of 91 patients with myofascial pain in different locations (20).
In conclusion, the results of our study suggest that patients with painful temporomandibular disease due to external pterygoid muscle involvement who were treated selectively with dry needling showed a significant improvement in pain and, as a consequence, an improvement in functional limitation that persisted up to 6 months after finishing the treatment. Pain reduction was greater the higher the baseline pain intensity.
## Migraine / TMJ Treatment
### Elke’s Story
“When I first started therapy, I had achy pain and weakness in my back and shoulders, from being rear-ended in a car accident. Normal daily tasks such as driving and carrying groceries were uncomfortable and even painful. I didn’t feel like myself at all. The treatment I had at Fast Track included deep tissue work, muscle strengthening exercises, traction, laser and dry needling. In particular, the dry needling really helped release pent-up tension in my muscles from the accident. The most amazing difference I noticed was when I had dry needling on my jaw area for my TMJ issues that had resurfaced after the accident. The headaches that I had started having again stopped, and I almost felt as if I had a new jaw, as the tension was released and it didn’t feel tight and sore all of the time anymore. The muscle strengthening exercises were very helpful, too, and the traction felt like it helped realign my back and loosen it up. All of the therapy techniques worked really well together to help make my back and neck feel normal again. I mostly worked with Brian, who is extremely knowledgeable and thorough. He always took the time to do whatever was needed during my therapy sessions. Brian also has a very friendly and congenial manner, as did all of the therapists I worked with on the occasions that Brian wasn’t available. The entire staff, including the receptionists, were very pleasant and helpful. I would definitely recommend Fast Track to anyone needing physical therapy again. Thank you, Fast Track, for your excellent work!”
## Effectiveness of dry needling for improving pain and disability in adults with tension-type, cervicogenic, or migraine headaches: protocol for a systematic review
This systematic review will be performed in accordance with the PRISMA statement and the principles outlined in the Cochrane Handbook for Systematic Reviews of Interventions. This protocol has been prepared with regard to the PRISMA-P 2015 guidelines and was registered on PROSPERO (International Prospective Register of Systematic Reviews, http://www.crd.york.ac.uk/PROSPERO/; #CRD42019124125) on 4 March 2019. Ethical approval and patient consent will not be required since this is a systematic review of previously published studies and no new data collection will be undertaken.
### Search strategy and study selection
A comprehensive electronic database search will be performed from inception to June 30, 2019 on the following databases: Medline (NLM) via PubMed, Scopus, Embase®, PEDro, Web of Science, Ovid, AMED via EBSCO, CENTRAL via The Cochrane Library, and Google Scholar. Electronic search strategies are constructed from the combined keywords tension-type headache, cervicogenic headache, migraine, and dry needling, to identify human studies in the literature that investigated the effectiveness of dry needling in adult patients (≥ 18 years) with tension-type headache, cervicogenic headache, or migraine. A combination of MeSH (Medline) terms, Emtree (Embase®) terms, and free text words joined with ‘OR’ and ‘AND’ Boolean operators will be used. Free text words will be selected from the indexed keywords of the most relevant original studies and reviews in Scopus. To retrieve all possible variations of a specific root word, wildcards and truncations will also be applied. The search strategy is customized according to the database being searched. In addition, if additional keywords of relevance are detected during electronic searches, we will modify and re-formulate the search strategies to incorporate these terms. Three authors (M.R.P., M.A.M.B., and M.B.) will develop the search syntax, and after piloting and finalizing it, the search of the electronic databases will be conducted by one author (M.R.P.). Moreover, we will consult a biomedical librarian to review our search strategy using the PRESS 2015 guideline evidence-based checklist in order to minimize errors in our search strategies. Details of the PubMed/Medline (NLM) database search syntax are presented in Additional file 1. PubMed’s ‘My NCBI’ (National Center for Biotechnology Information) email alert service will be employed for identification of newly published systematic reviews using a basic search strategy.
Citation tracking and scanning of the reference lists of the selected studies and relevant systematic reviews will be used to find eligible studies. A manual keyword search via the internet will also be conducted. Additionally, the tables of contents of the journals Cephalalgia and the Journal of Bodywork & Movement Therapies will be reviewed. The key journals are identified through searches in Web of Science and Scopus. To minimize publication bias, grey literature will be identified by searching for conference proceedings (via ProQuest, Scopus, and the Web of Science Conference Proceedings Citation Index database), unpublished masters and doctoral theses (via ProQuest and OpenGrey, the System for Information on Grey Literature in Europe), and unpublished trials (via the US National Institutes of Health Ongoing Trials Register, the WHO International Clinical Trials Registry Platform, and the International Standard Randomized Controlled Trials Number registry). Abstracts from the annual meeting of the American Headache Society and the European Headache Federation congress in the last 5 years and abstracts from the congress of the International Headache Society in the last 4 years will also be searched. In addition, experts with clinical and research experience on the role of dry needling for headaches will be consulted. Finally, one author (M.R.P.) will complete the search process by manual searching in Google. We will not review content from file sources that are from mainstream publishers (e.g., BMJ, Sage, Wiley, ScienceDirect, Springer, and Taylor & Francis), as we expect these to be captured in our broader search strategy.
If the full text of a relevant article is not accessible, the corresponding author(s) will be contacted. In addition, when unpublished works are retrieved in our search, an email will be sent to the corresponding author(s) to determine whether the work has since been published. If no response is received from the corresponding author(s) after three emails, the study will be excluded.
### Eligibility criteria
All publications identified by the searches will be imported into the EndNote reference management software (version X9.1; Clarivate Analytics Inc., Philadelphia, PA, USA), and duplicates will be removed automatically and double-checked manually. The titles and abstracts of each citation will be screened independently by three reviewers (M.R.P., M.A.M.B., and M.B.) according to a checklist developed for this purpose (Table 1), with the following criteria:
1. Study design should be clinical trials with concurrent comparison group(s) or comparative observational studies;
2. Study participants should have at least one of the three types of headache (tension-type headache, cervicogenic headache, and/or migraine);
3. Study participants should be ≥ 18 years of age;
4. The studies should have at least one of the primary outcomes (i.e., pain and disability) of this review; and,
5. Dry needling should be the main intervention in the study.
Table 1. PICOS criteria for the study
If a study meets all of the criteria, then the full text of the study will be assessed for eligibility. In addition, a full-text review will be undertaken if the title and abstract do not provide adequate information. The selection process will be conducted strictly according to the inclusion and exclusion criteria by three independent reviewers simultaneously (M.R.P., M.A.M.B., and M.B.) (Table 1). The three reviewers are physical therapists with experience in performing systematic reviews. Disagreements will be resolved by discussion and, if necessary, consultation with a fourth reviewer (A.A.K.). The eligibility criteria are based on the PICOS acronym (Table 1) and will be piloted prior to conducting the review process. The entire process of study selection is summarized in the PRISMA flow diagram (Fig. 1).
Fig. 1. Flow diagram of the study selection process.
### Data collection and analysis
#### Risk of bias
The risk of bias of each clinical trial will be evaluated independently by three reviewers (M.R.P., M.A.M.B., and M.B.) using the Cochrane Back and Neck Review Group 13-item criteria. The guideline examines six specific domains of bias, and the scoring options for each item in each of the domains are “Yes,” “No,” and “Unclear” if there is insufficient information to make an accurate judgment. We will categorize studies as “low risk” (at least six of the 13 criteria are met) or “high risk” (fewer than six criteria are met). In addition, the risk of bias of each comparative observational study will be judged independently by the same reviewers (M.R.P., M.A.M.B., and M.B.) on the basis of the NOS. The NOS is recommended by the Cochrane Non-Randomized Studies Methods Working Group to assess the quality of observational studies. The scale is based on the following three subscales: Selection (4 items), Comparability (1 item), and Outcome or Exposure (3 items). A total score of 3 or less will be considered high, 4–6 moderate, and ≥ 7 low risk of bias. Unacceptable bias will be defined as a zero score in any of the NOS subscales. The level of inter-rater agreement will be assessed using the weighted Cohen’s kappa coefficient, with a method developed for comparing the level of agreement with categorical data, along with the respective 95% confidence intervals (κ 0–0.20 = poor agreement; 0.21–0.40 = fair agreement; 0.41–0.60 = moderate agreement; 0.61–0.80 = good agreement; and 0.81–1 = very good agreement). Disagreements will be resolved by discussion and, where required, with input from a fourth reviewer (A.A.K.).
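As an illustration of the agreement statistic described above, here is a minimal Python sketch, assuming ordinal risk-of-bias categories and linear kappa weights; the ratings themselves are hypothetical, not data from this review.

```python
# Minimal sketch: inter-rater agreement on ordinal risk-of-bias categories
# using a weighted Cohen's kappa, bucketed per the bands quoted above.
from sklearn.metrics import cohen_kappa_score

# Ordinal codes: 0 = low, 1 = moderate, 2 = high risk of bias (hypothetical)
rater_1 = [0, 1, 2, 1, 0, 2, 1, 0]
rater_2 = [0, 1, 1, 1, 0, 2, 2, 0]

# Linear weighting penalizes disagreements by their ordinal distance
kappa = cohen_kappa_score(rater_1, rater_2, weights="linear")

bands = [(0.20, "poor"), (0.40, "fair"), (0.60, "moderate"),
         (0.80, "good"), (1.00, "very good")]
label = next(name for upper, name in bands if kappa <= upper)
print(f"weighted kappa = {kappa:.2f} ({label} agreement)")
```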
The graphical presentation of the risk of bias assessment will be generated with Review Manager software (RevMan V.5.3.5) or Stata V.14 (Stata Corp., College Station, TX, USA).
#### Data extraction
Data extraction and abstraction from each eligible study will be performed independently by three reviewers (M.R.P., M.A.M.B., and M.B.), using a Microsoft Excel spreadsheet (Microsoft, Redmond, Washington, USA) which will be designed according to the Cochrane meta-analysis guidelines and will be adjusted to the needs of this review. The data-extraction form will be pilot-tested before its use. Pilot testing will be performed on two published studies which are not included in the present systematic review but are relatively similar to the eligible studies. During pilot-testing, we will assess the characteristic of the variables (e.g., categorical or continuous) and whether all pre-defined variables in the data-extraction form are useful for the systematic review and meta-analysis. Moreover, we will check if it is possible to include additional variables in the data-extraction form in order to perform further post-hoc sensitivity analyses. The following data will be extracted from all the eligible studies:
1. Study characteristics: first author’s name, journal’s name, publication year, country of study performance, study year, study design, single versus multicenter, size of the sample, and duration of follow-up.
2. Participants’ characteristics: ethnicity, age, gender, body mass, stature, BMI, and type of headache.
3. Intervention and comparator details: sample size for each treatment group, muscle names, features of the dry needling treatment (such as type of dry needling, needle size, needling technique, and whether the technique elicited a local twitch response), features of control interventions (sham/placebo methods or standard treatment details), duration of treatment sessions, frequency of treatment sessions per week or month, withdrawals, dropouts, and any other relevant detail.
4. Outcome measures: pain intensity, scales and questionnaires used to assess pain, total score of functional disability, disability questionnaires, cervical spine ROM, instruments used to measure cervical spine ROM, questionnaires used to measure health-related quality of life, and instruments used to assess TrP tenderness. Primary and secondary outcomes will be documented at both baseline and endpoint.
Following the completion of this process, one author (M.R.P.) will double-check the extracted data to avoid any omissions or inaccuracies.
#### Dealing with missing data
If there are missing data or insufficient details in relation to the characteristics of the studies included in the meta-analysis, we will try to contact the study authors for further information. However, if the authors do not respond to queries, we will apply the following strategies to address missing data:
1. If ITT analyses were conducted in the eligible studies, we will use the ITT data instead of the missing data as the first option.
2. For continuous missing outcome data, we will try to re-calculate mean difference, standard deviation, or effect size values when the test statistics, medians, p-values, standard errors, or confidence intervals are reported in the selected studies, using the Campbell Collaboration effect size calculator (http://www.campbellcollaboration.org/escalc/html/EffectSizeCalculator-SMD-main.php); see the sketch after this list.
3. If the required data are presented only in graphs of the included studies, we will extract the data using WebPlotDigitizer V.4.2 (https://automeris.io/WebPlotDigitizer/index.html).
4. If none of the above strategies can be implemented, we will try to estimate mean difference and standard deviation values from the most similar study.
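To make step 2 concrete, here is a minimal sketch assuming a study reports only an independent-samples t statistic and group sizes; the formulas are the standard conversions the Campbell calculator implements, and the numbers are hypothetical.

```python
# Minimal sketch: recovering a standardized mean difference (SMD) and its
# standard error from a reported independent-samples t statistic.
import math

def smd_from_t(t: float, n_treat: int, n_ctrl: int):
    """Cohen's d and its standard error from a t statistic."""
    d = t * math.sqrt(1 / n_treat + 1 / n_ctrl)
    se = math.sqrt((n_treat + n_ctrl) / (n_treat * n_ctrl)
                   + d ** 2 / (2 * (n_treat + n_ctrl)))
    return d, se

d, se = smd_from_t(t=2.45, n_treat=30, n_ctrl=28)  # hypothetical values
print(f"d = {d:.3f}, SE = {se:.3f}")
```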
#### Assessment of heterogeneity
Statistical heterogeneity among the included studies will be assessed using the I² statistic and Q test (χ²) as recommended by the Cochrane Handbook for Systematic Reviews of Interventions. The I² statistic will be interpreted using the following guide: 0–40% = no important heterogeneity; 30–60% = moderate heterogeneity; 50–90% = substantial heterogeneity; 75–100% = considerable heterogeneity. Heterogeneity will be considered before conducting pooled analysis. When I² values are higher than 50% and there is overlap between the confidence intervals of the included studies and the summary estimate on the forest plot, the results of all eligible studies will be combined. The potential sources of heterogeneity will be explored by sensitivity and subgroup analyses/meta-regression.
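As a minimal sketch of these statistics, the following Python snippet computes Cochran's Q and I² from hypothetical per-study effect sizes and variances, following the standard Cochrane Handbook definitions.

```python
# Minimal sketch: Cochran's Q and the I^2 heterogeneity statistic.
import numpy as np

effects = np.array([0.52, 0.31, 0.75, 0.44, 0.60])    # hypothetical effects
variances = np.array([0.04, 0.06, 0.05, 0.03, 0.08])  # squared standard errors

weights = 1.0 / variances                      # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
q = np.sum(weights * (effects - pooled) ** 2)  # Cochran's Q
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100       # % of variation beyond chance
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.1f}%")
```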
#### Assessment of publication bias
Publication bias will be explored by constructing a funnel plot and performing Begg and Mazumdar’s rank correlation test and Egger’s linear regression test. A p-value < 0.05 for these tests indicates statistically significant publication bias; however, the threshold will be set at 0.10 if the number of included studies is < 10. Moreover, Duval and Tweedie’s ‘trim and fill’ method will be conducted to explore the potential influence of publication bias. Publication bias will not be assessed with a funnel plot when < 10 studies are available per primary outcome of interest, since the plot then yields unreliable results. Publication bias will be assessed using Stata V.14 (Stata Corp., College Station, TX, USA).
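For readers unfamiliar with Egger's test, here is a minimal sketch of the underlying regression, assuming hypothetical effect sizes and standard errors: the standardized effect is regressed on precision, and a non-zero intercept suggests funnel-plot asymmetry.

```python
# Minimal sketch: Egger's linear regression test for funnel-plot asymmetry.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.52, 0.31, 0.75, 0.44, 0.60,
                    0.20, 0.85, 0.38, 0.55, 0.47])   # hypothetical
ses = np.array([0.20, 0.25, 0.22, 0.17, 0.28,
                0.30, 0.24, 0.19, 0.21, 0.26])       # hypothetical

std_effect = effects / ses   # standardized effects (response)
precision = 1.0 / ses        # predictor

model = sm.OLS(std_effect, sm.add_constant(precision)).fit()
intercept, intercept_p = model.params[0], model.pvalues[0]
# Per the protocol, alpha = 0.05 (0.10 when fewer than 10 studies are pooled)
print(f"Egger intercept = {intercept:.3f}, p = {intercept_p:.3f}")
```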
### Data synthesis
#### Statistical analysis
Pooled effects of continuous variables will be expressed as Morris’s delta (Morris’s dppc), if the same primary outcomes are used in the eligible studies. Morris described a pre-post control effect size as “the mean pre-post change in the treatment group minus the mean pre-post change in the control group, divided by the pooled baseline standard deviation of both the treatment and control groups”:
$${d}_{ppc}={c}_p\left[\frac{\left({M}_{post,T}-{M}_{pre,T}\right)-\left({M}_{post,C}-{M}_{pre,C}\right)}{{SD}_{pre}}\right]$$
The pooled pretest standard deviation is calculated as:
$${SD}_{pre}=\sqrt{\frac{\left({n}_T-1\right){SD^2}_{pre,T}+\left({n}_C-1\right){SD^2}_{pre,C}}{n_T+{n}_C-2}}$$ T: treatment; C: control
The small sample size bias-correction is calculated as:
$${c}_p=1-\frac{3}{4\left({n}_T+{n}_C-2\right)-1}$$
Effect size (Morris’s dppc) will be calculated using the Campbell Collaboration effect size calculator (http://www.campbellcollaboration.org/escalc/html/EffectSizeCalculator-SMD-main.php) and the Psychometrica online tool (https://www.psychometrica.de/effect_size.html#cohc). If continuous outcome measures differ between studies, we will also express pooled effects with Morris’s dppc, but we will first convert the different outcome measures to a 0 to 100 scale. For the measurement of effect sizes, three levels are defined: small (dppc < 0.40), medium (0.40 ≤ dppc ≤ 0.70), and large (dppc > 0.70). Although there are no available data on minimal clinically important differences (MCIDs) for pain and disability in adult patients with headache, a clinically important effect for the primary outcomes is assumed when the magnitude of the effect size is at least medium. Meta-analysis will be done separately on studies with a clinical trial design and on studies with a comparative observational design. Additionally, meta-analyses will be conducted separately on tension-type headache, cervicogenic headache, and migraine within each study design. In the presence of a sufficient number of studies, we will also conduct a priori subgroup analyses based on the overall risk of bias score (high, moderate, and low risk of bias). All data from the meta-analyses will be reported in forest plots with 95% confidence intervals. The random-effects model with the DerSimonian–Laird (D+L) method will be used to pool the data from individual studies. Stata V.11 and V.14 (Stata Corp., College Station, TX, USA) will be used for meta-analysis. Wherever applicable, the number needed to treat (NNT) will be presented to help the reader better understand how the results can be applied to the individual patient. The Campbell Collaboration effect size calculator and the Psychometrica online tool will be used to calculate the NNT.
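A minimal Python sketch of the three formulas above, assuming hypothetical group summaries; the function name and the example pain scores are illustrative, not the protocol's own tooling.

```python
# Minimal sketch: Morris's d_ppc from pre/post group summaries,
# implementing the pooled pretest SD, c_p correction, and d_ppc formulas.
import math

def morris_dppc(m_pre_t, m_post_t, sd_pre_t, n_t,
                m_pre_c, m_post_c, sd_pre_c, n_c):
    """Pre-post-control effect size (Morris's d_ppc)."""
    # Pooled pretest standard deviation (formula above)
    sd_pre = math.sqrt(((n_t - 1) * sd_pre_t ** 2 + (n_c - 1) * sd_pre_c ** 2)
                       / (n_t + n_c - 2))
    # Small-sample bias correction c_p (formula above)
    c_p = 1 - 3 / (4 * (n_t + n_c - 2) - 1)
    return c_p * ((m_post_t - m_pre_t) - (m_post_c - m_pre_c)) / sd_pre

# Hypothetical 0-10 pain scores; a negative d_ppc means a larger
# pain reduction in the treatment group than in the control group.
d = morris_dppc(m_pre_t=6.8, m_post_t=3.1, sd_pre_t=1.9, n_t=32,
                m_pre_c=6.5, m_post_c=5.6, sd_pre_c=2.1, n_c=30)
magnitude = abs(d)
size = "small" if magnitude < 0.40 else "medium" if magnitude <= 0.70 else "large"
print(f"d_ppc = {d:.2f} ({size} effect)")
```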
In addition, where a quantitative synthesis is not deemed suitable due to a low number of studies, a qualitative synthesis of the results will be undertaken. We will conduct a meta-analysis when ≥ 2 studies are available, since two is the minimum number of studies required for meta-analysis. If meta-analysis is not possible, we will summarize study results as either statistically significant (p-value < 0.05) or nonsignificant and calculate the effect of the intervention on the outcomes of this study.
#### Unit of analysis issues
The unit of analysis will be based on aggregated outcome data as individual patient data is not available for any study.
#### Analysis problems
If sufficient homogeneous studies are available for statistical pooling, a meta-analysis will be performed for the following time points: short-term (< 3 months after the baseline measurements were taken), intermediate-term (at least 3 months but < 12 months after baseline), and long-term (12 months or more after baseline) follow-up. If multiple time points fall within the same category, the one closest to the end of treatment, to 6 months, and to 12 months will be used.
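A minimal sketch of this categorization rule, with hypothetical follow-up times measured in months from baseline:

```python
# Minimal sketch: bucketing follow-up times per the rule above.
def follow_up_category(months: float) -> str:
    if months < 3:
        return "short-term"
    if months < 12:
        return "intermediate-term"
    return "long-term"

for m in (1, 3, 6, 12, 24):  # hypothetical follow-up times
    print(m, "->", follow_up_category(m))
```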
#### Sensitivity analysis
Sensitivity analysis using the leave-one-out method will be performed to determine the effect of each individual study on the pooled results. Furthermore, sensitivity analyses will be conducted using only high-quality studies in the meta-analyses to explore the robustness of the conclusions. All sensitivity analyses will be performed using Stata V.14 (Stata Corp., College Station, TX, USA).
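As a minimal sketch of the pooling and leave-one-out procedure, assuming hypothetical effects and variances, the snippet below re-computes a DerSimonian-Laird random-effects estimate after dropping each study in turn.

```python
# Minimal sketch: DerSimonian-Laird random-effects pooling with a
# leave-one-out sensitivity loop.
import numpy as np

def dl_pooled(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate."""
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q
    df = len(effects) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (variances + tau2)            # random-effects weights
    return np.sum(w_star * effects) / np.sum(w_star)

effects = np.array([0.52, 0.31, 0.75, 0.44, 0.60])    # hypothetical
variances = np.array([0.04, 0.06, 0.05, 0.03, 0.08])  # hypothetical

print(f"all studies: {dl_pooled(effects, variances):.3f}")
for i in range(len(effects)):                    # leave-one-out
    keep = np.arange(len(effects)) != i
    print(f"without study {i + 1}: "
          f"{dl_pooled(effects[keep], variances[keep]):.3f}")
```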
#### Summary of evidence
The overall quality of the evidence and the strength of the recommendations for the primary outcomes will be assessed using GRADE. The ‘Summary of findings’ tables will be generated with the GRADE working group online tool (GRADEpro GDT, www.gradepro.org). The downgrading process is based on five domains: study limitations (e.g., risk of bias), inconsistency (e.g., heterogeneity between study results), indirectness of evidence (including other patient populations or use of surrogate outcomes), imprecision (e.g., small sample size), and reporting bias (e.g., publication bias). The quality of evidence is classified as follows: (i) high quality—further research is unlikely to change confidence in the estimate of effect; the Cochrane criteria and NOS identify no risks of bias and all domains in the GRADE classification are fulfilled; (ii) moderate quality—further research is likely to have an important impact on the confidence in the estimate of effect; one of the domains in the GRADE classification is not fulfilled; (iii) low quality—further research is likely to have an important impact on the confidence and may change the estimate; two of the domains in the GRADE classification are not fulfilled; and (iv) very low quality—we are uncertain about the estimate; three of the domains in the GRADE classification are not fulfilled.
## Muscle Pain? 6 Things to Know About Dry Needling
Physical therapists have a variety of techniques they use to treat pain and conditions that inhibit a patient’s movement. One of those techniques, dry needling, utilizes a solid filament needle inserted into the muscle.
How does this work? Here are six things to know:
1. Dry needling is not acupuncture
Based in Eastern medicine, acupuncture focuses on the flow of Qi, or energy, along meridians for the treatment of diseases. Dry needling is a Western approach to treating pain and dysfunction in musculoskeletal conditions and serves as a reset button, breaking the pain cycle by resolving trigger points.
2. Dry needling can be used to treat a variety of conditions
Any patient who has pain and/or movement dysfunction due to a musculoskeletal condition can use dry needling to reduce pain. This includes any muscle where a trigger point is located, as well as chronic pain, lumbar pain, neck pain, shoulder pain, headaches/migraines, whiplash and plantar fasciitis. Once a therapist deems a patient appropriate for dry needling, a prescription is obtained and the patient signs a consent form.
3. Each patient receives personalized treatment
Treatment is personalized for each patient and can include a warm-up, dry needling, stretching and/or activation of the muscle to promote normal length (extensibility) and restore normal contraction and control of the muscle.
4. The needles are targeted at trigger points
The patient starts by being placed in a safe position that exposes the desired area. The skin is wiped with alcohol to clean the surface layer. The therapist palpates the patient for tenderness and/or palpable trigger points, which are taut hyper-contracted nodules/bands within muscle tissue.
5. You may not even feel it
The patient may feel the needle enter the skin, but sometimes it is not felt at all, depending on the patient and the location of the needle. Needles are inserted, manipulated, and either removed or left in place for a period of time. The needle elicits a local twitch response followed by relaxation of the muscle. There may be a cramping or aching sensation or slight discomfort that lasts a few seconds. Electrical stimulation can be applied to the needles to bring even more blood flow to the tissues and relax the muscle tissue.
6. Pain relief may come after just one session
Some patients report 50% pain relief after just one session while others find relief after several sessions.
What’s Next?
Ready to find a personalized solution to your pain? Find an OrthoCarolina location near you.
This article was originally written on August 28, 2018, and updated on January 29, 2020.
|
2018. We employ the Mining Intelligence database to constitute an individual data panel of gold mines located in Sub-Saharan countries. Our empirical findings also suggest an inverse relationship between the tax rate change of the tax instruments and the declared profit of the firms. This link indicates that firms decide on how much profit to declare depending on the tax levels. [less ▲]Detailed reference viewed: 48 (8 UL) Birefringence-modulated total internal reflection in liquid crystal shellsPopov, Nikolay ; Lagerwall, Jan in Frontiers in Soft Matter (2022), 2(Summer), 991375The combination of anisotropic boundary conditions and topological constraints acting on a spherical shell of nematic liquid crystal confined between aqueous phases gives rise to peculiar but well-defined ... [more ▼]The combination of anisotropic boundary conditions and topological constraints acting on a spherical shell of nematic liquid crystal confined between aqueous phases gives rise to peculiar but well-defined configurations of the director field, and thus of the optic axis that defines the impact of the nematic birefringence. While the resulting optics of nematic shells has been extensively investigated in transmission, studies of the reflection behavior are scarce. Here we show that nematic shells exhibit specific light guiding paths mediated by birefringence-modulated total internal reflection (TIR) within the shell. With stabilizers promoting tangential boundary conditions, shells show immobile antipodal spots revealing the locations of maximum effective refractive index, but their intensity is modulated by the polarization of the illuminating light. With normal-aligning stabilizers, shells instead show bright arcs separated by dark spots, and these follow the rotation of the polarization of the illuminating light. Reflection polarizing microscopy thus offers a valuable complement to the more common characterization in transmission, adding data that can be helpful for accurately mapping out director fields in shells of any liquid crystal phase. Moreover, the TIR-mediated light guiding paths may offer interesting handles to localize photopolymerization of reactive liquid crystal shells or to dynamically modulate the response of light-triggered liquid crystal elastomer shell actuators. [less ▲]Detailed reference viewed: 53 (15 UL) Number transcoding in bilinguals—A transversal developmental studyLachelin, Remy ; Van Rinsveld, Amandine; Poncin, Alexandre et alin PLoS ONE (2022), 17(8), 0273391Number transcoding is the cognitive task of converting between different numerical codes (i.e. visual “42”, verbal “forty-two”). Visual symbolic to verbal transcoding and vice versa strongly relies on ... [more ▼]Number transcoding is the cognitive task of converting between different numerical codes (i.e. visual “42”, verbal “forty-two”). Visual symbolic to verbal transcoding and vice versa strongly relies on language proficiency. We evaluated transcoding of German-French bilinguals from Luxembourg in 5th, 8th, 11th graders and adults. In the Luxembourgish educational system, children acquire mathematics in German (LM1) until the 7th grade, and then the language of learning mathematic switches to French (LM2). French 70s 80s 90s are less transparent than 30s 40s 50s numbers, since they have a base-20 structure, which is not the case in German. Transcoding was evaluated with a reading aloud and a verbal-visual number matching task. 
Results of both tasks show a cognitive cost for transcoding numbers having a base-20 structure (i.e. 70s, 80s and 90s), such that response times were slower in all age groups. Furthermore, considering only base-10 numbers (i.e. 30s 40s 50s), it appeared that transcoding in LM2 (French) also entailed a cost. While participants across age groups tended to read numbers slower in LM2, this effect was limited to the youngest age group in the matching task. In addition, participants made more errors when reading LM2 numbers. In conclusion, we observed an age-independent language effect with numbers having a base-20 structure in French, reflecting their reduced transparency with respect to the decimal system. Moreover, we find an effect of language of math acquisition such that transcoding is less well mastered in LM2. This effect tended to persist until adulthood in the reading aloud task, while in the matching task performance both languages become similar in older adolescents and young adults. This study supports the link between numbers and language, especially highlighting the impact of language on reading numbers aloud from childhood to adulthood. [less ▲]Detailed reference viewed: 33 (0 UL) Conserved patterns across ion channels correlate with variant pathogenicity and clinical phenotypesBrünger, Tobias; Pérez-Palma, Eduardo; Montanucci, Ludovica et alin Brain: a Journal of Neurology (2022)Clinically identified genetic variants in ion channels can be benign or cause disease by increasing or decreasing the protein function. Consequently, therapeutic decision-making is challenging without ... [more ▼]Clinically identified genetic variants in ion channels can be benign or cause disease by increasing or decreasing the protein function. Consequently, therapeutic decision-making is challenging without molecular testing of each variant. Our biophysical knowledge of ion channel structures and function is just emerging, and it is currently not well understood which amino acid residues cause disease when mutated.We sought to systematically identify biological properties associated with variant pathogenicity across all major voltage and ligand-gated ion channel families. We collected and curated 3,049 pathogenic variants from hundreds of neurodevelopmental and other disorders and 12,546 population variants for 30 ion channel or channel subunits for which a high-quality protein structure was available. Using a wide range of bioinformatics approaches, we computed 163 structural features and tested them for pathogenic variant enrichment. We developed a novel 3D spatial distance scoring approach that enables comparisons of pathogenic and population variant distribution across protein structures.We discovered and independently replicated that several pore residue properties and proximity to the pore axis were most significantly enriched for pathogenic variants compared to population variants. Using our 3D scoring approach, we showed that the strongest pathogenic variant enrichment was observed for pore-lining residues and alpha-helix residues within 5Å distance from the pore axis center and not involved in gating. Within the subset of residues located at the pore, the hydrophobicity of the pore was the feature most strongly associated with variant pathogenicity. 
We also found an association between the identified properties and both clinical phenotypes and functional in vitro assays for voltage-gated sodium channels (SCN1A, SCN2A, SCN8A) and N-methyl-D-aspartate (NMDA) receptor (GRIN1, GRIN2A, GRIN2B) encoding genes. In an independent expert-curated dataset of 1,422 neurodevelopmental disorder pathogenic patient variants and 679 electrophysiological experiments, we show that pore axis distance is associated with seizure age of onset and cognitive performance as well as differential gain vs. loss-of-channel function.In summary, we identified biological properties associated with ion-channel malfunction and show that these are correlated with in vitro functional read-outs and clinical phenotypes in patients with neurodevelopmental disorders. Our results suggest that clinical decision support algorithms that predict variant pathogenicity and function are feasible in the future. [less ▲]Detailed reference viewed: 25 (3 UL) Ringing the bell for quality P.E.: What are the realities of remote physical education?Kovacs, Viktoria; Csanyi, Tamas; Blagus, Rok et alin European Journal of Public Health (2022), 32(Issue Supplement_1), 3843Background To date, few data on the quality and quantity of online physical education (P.E.) during the COVID-19 pandemic have been published. We assessed activity in online classes and reported allocated ... [more ▼]Background To date, few data on the quality and quantity of online physical education (P.E.) during the COVID-19 pandemic have been published. We assessed activity in online classes and reported allocated curriculum time for P.E. in a multi-national sample of European children (6–18 years). Methods Data from two online surveys were analysed. A total of 8395 children were included in the first round (May–June 2020) and 24 302 in the second round (January–February 2021). Results Activity levels during P.E. classes were low in spring 2020, particularly among the youngest children and in certain countries. 27.9% of students did not do any online P.E. and 15.7% were hardly ever very active. Only 18.4% were always very active and 14.9% reported being very active quite often. In winter 2020, we observed a large variability in the allocated curriculum time for P.E. In many countries, this was lower than the compulsory requirements. Only 65.7% of respondents had the same number of P.E. lessons than before pandemic, while 23.8% had less P.E., and 6.8% claimed to have no P.E. lessons. Rates for no P.E. were especially high among secondary school students, and in large cities and megapolises. Conclusions During the COVID-19 pandemic, European children were provided much less P.E. in quantity and quality than before the pandemic. Countermeasures are needed to ensure that these changes do not become permanent. Particular attention is needed in large cities and megapolises. The critical role of P.E. for students’ health and development must be strengthened in the school system. [less ▲]Detailed reference viewed: 17 (0 UL) Enhanced Communications on Satellite-Based IoT Systems to Support Maritime Transportation ServicesMonzon Baeza, Victor ; Ortiz Gomez, Flor de Guadalupe ; Herrero Garcia, Samuel et alin Sensors (2022), 22(17), Maritime transport has become important due to its ability to internationally unite all continents. In turn, during the last two years, we have observed that the increase of consumer goods has resulted in ... 
[more ▼]Maritime transport has become important due to its ability to internationally unite all continents. In turn, during the last two years, we have observed that the increase of consumer goods has resulted in global shipping deadlocks. In addition, the future goes through the role of ports and efficiency in maritime transport to decarbonize its impact on the environment. In order to improve the economy and people’s lives, in this work, we propose to enhance services offered in maritime logistics. To do this, a communications system is designed on the deck of ships to transmit data through a constellation of satellites using interconnected smart devices based on IoT. Among the services, we highlight the monitoring and tracking of refrigerated containers, the transmission of geolocation data from Global Positioning System (GPS), and security through the Automatic Identification System (AIS). This information will be used for a fleet of ships to make better decisions and help guarantee the status of the cargo and maritime safety on the routes. The system design, network dimensioning, and a communications protocol for decision-making will be presented. [less ▲]Detailed reference viewed: 31 (4 UL) Evaluation of SORL1 in Lewy Body Dementia Identifies No Significant AssociationsRay, Anindidta; Reho, Paolo; Shah, Zalak et alin Movement Disorders (2022)Lewy body dementia (LBD) is a clinically heterogeneous neurodegenerative disorder characterized by parkinsonism, visual hallucinations, fluctuating mental status, and rapid eye movement sleep behavior ... [more ▼]Lewy body dementia (LBD) is a clinically heterogeneous neurodegenerative disorder characterized by parkinsonism, visual hallucinations, fluctuating mental status, and rapid eye movement sleep behavior disorder. LBD lies along a spectrum between Parkinson's disease and Alzheimer's disease, and recent evidence suggests that the genetic architectures of these age-related syndromes are intersecting. In summary, we did not find a significant enrichment of rare, damaging SORL1 mutations in our well-powered LBD cohort. Our data set is, to our knowledge, the largest genome-sequence cohort in this understudied disease. Although it is possible that an association was missed due to allelic heterogeneity, our findings indicate that caution should be exercised when interpreting SORL1 mutations in LBD, as the current evidence does not conclusively support an association with disease risk. [less ▲]Detailed reference viewed: 35 (2 UL) Alpha synuclein determines ferroptosis sensitivity in dopaminergic neurons via modulation of ether-phospholipid membrane composition.Mahoney-Sanchez, Laura; Bouchaoui, Hind; Boussaad, Ibrahim et alin Cell reports (2022), 40(8), 111231There is a continued unmet need for treatments that can slow Parkinson's disease progression due to the lack of understanding behind the molecular mechanisms underlying neurodegeneration. Since its ... [more ▼]There is a continued unmet need for treatments that can slow Parkinson's disease progression due to the lack of understanding behind the molecular mechanisms underlying neurodegeneration. Since its discovery, ferroptosis has been implicated in several diseases and represents a therapeutic target in Parkinson's disease. Here, we use two highly relevant human dopaminergic neuronal models to show that endogenous levels of α-synuclein can determine the sensitivity of dopaminergic neurons to ferroptosis. 
We show that reducing α-synuclein expression in dopaminergic neurons leads to ferroptosis evasion, while elevated α-synuclein expression in patients' small-molecule-derived neuronal precursor cells with SNCA triplication causes an increased vulnerability to lipid peroxidation and ferroptosis. Lipid profiling reveals that ferroptosis resistance is due to a reduction in ether-linked phospholipids, required for ferroptosis, in neurons depleted of α-synuclein (α-syn). These results provide a molecular mechanism linking α-syn levels to the sensitivity of dopaminergic neurons to ferroptosis, suggesting potential therapeutic relevance. [less ▲]Detailed reference viewed: 19 (0 UL) Coevaporation Stabilizes Tin-Based Perovskites in a Single Sn-Oxidation StateSingh, Ajay ; Hieulle, Jeremy ; Ferreira Machado, Joana Andreia et alin Nano Letters (2022)Chemically processed methylammonium tin-triiodide (CH3NH3SnI3) films include Sn in different oxidation states, leading to poor stability and low power conversion efficiency of the resulting solar cells ... [more ▼]Chemically processed methylammonium tin-triiodide (CH3NH3SnI3) films include Sn in different oxidation states, leading to poor stability and low power conversion efficiency of the resulting solar cells (PSCs). The development of absorbers with Sn [2+] only has been identified as one of the critical steps to develop all Sn-based devices. Here, we report on coevaporation of CH3NH3I and SnI2 to obtain absorbers with Sn being only in the preferred oxidation state [+2] as confirmed by X-ray photoelectron spectroscopy. The Sn [4+]-free absorbers exhibit smooth highly crystalline surfaces and photoluminescence measurements corroborating their excellent optoelectronic properties. The films show very good stability under heat and light. Photoluminescence quantum yields up to 4 × 10^-3 translate in a quasi Fermi-level splittings exceeding 850 meV under one sun equivalent conditions showing high promise in developing lead-free, high efficiency, and stable PSCs. [less ▲]Detailed reference viewed: 48 (6 UL) Anger can make fake news viral onlineChuai, Yuwei ; Zhao, Jichangin Frontiers in Physics (2022), 10Fake news that manipulates political elections, strikes financial systems, and even incites riots is more viral than real news online, resulting in unstable societies and buffeted democracy. While factor ... [more ▼]Fake news that manipulates political elections, strikes financial systems, and even incites riots is more viral than real news online, resulting in unstable societies and buffeted democracy. While factor that drives the viral spread of fake news is rarely explored. In this study, it is unexpectedly found that the easier contagion of fake news online is positively associated with the greater anger it carries. The same results in Twitter and Weibo indicate that this correlation is independent of the platform. Moreover, mutations in emotions like increasing anger will progressively speed up the information spread. Increasing the occupation of anger by 0.1 and reducing that of joy by 0.1 are associated with the generation of nearly six more retweets in the Weibo dataset. Offline questionnaires reveal that anger leads to more incentivized audiences in terms of anxiety management and information sharing and accordingly makes fake news more contagious than real news online. Cures such as tagging anger in social media could be implemented to slow or prevent the contagion of fake news at the source. 
[less ▲]Detailed reference viewed: 18 (4 UL) Evolution of Non-Terrestrial Networks From 5G to 6G: A SurveyAzari, M. Mahdi; Solanki, Sourabh ; Chatzinotas, Symeon et alin IEEE Communications Surveys & Tutorials (2022), 24(4), 2633-2672Non-terrestrial networks (NTNs) traditionally have certain limited applications. However, the recent technological advancements and manufacturing cost reduction opened up myriad applications of NTNs for ... [more ▼]Non-terrestrial networks (NTNs) traditionally have certain limited applications. However, the recent technological advancements and manufacturing cost reduction opened up myriad applications of NTNs for 5G and beyond networks, especially when integrated into terrestrial networks (TNs). This article comprehensively surveys the evolution of NTNs highlighting their relevance to 5G networks and essentially, how it will play a pivotal role in the development of 6G ecosystem. We discuss important features of NTNs integration into TNs and the synergies by delving into the new range of services and use cases, various architectures, technological enablers, and higher layer aspects pertinent to NTNs integration. Moreover, we review the corresponding challenges arising from the technical peculiarities and the new approaches being adopted to develop efficient integrated ground-air-space (GAS) networks. Our survey further includes the major progress and outcomes from academic research as well as industrial efforts representing the main industrial trends, field trials, and prototyping towards the 6G networks. [less ▲]Detailed reference viewed: 33 (6 UL) Roadmap on Machine learning in electronic structureH J Kulik; T Hammerschmidt; Tkatchenko, Alexandre in IOP Conference Series: Materials Science and Engineering (2022)Detailed reference viewed: 47 (0 UL) Matching Traffic Demand in GEO Multibeam Satellites: The Joint Use of Dynamic Beamforming and Precoding Under Practical ConstraintsChaker, Haythem ; Chougrani, Houcine ; Alves Martins, Wallace et alin IEEE Transactions on Broadcasting (2022)To adjust for the non-uniform spatiotemporal nature of traffic patterns, next-generation high throughput satellite (HTS) systems can benefit from recent technological advancements in the space-segment in ... [more ▼]To adjust for the non-uniform spatiotemporal nature of traffic patterns, next-generation high throughput satellite (HTS) systems can benefit from recent technological advancements in the space-segment in order to dynamically design traffic-adaptive beam layout plans (ABLPs). In this work, we propose a framework for dynamic beamforming (DBF) optimization and adaptation in dynamic environments. Given realistic traffic patterns and a limited power budget, we propose a feasible DBF operation for a geostationary multibeam HTS network. The goal is to minimize the mismatch between the traffic demand and the offered capacity under practical constraints. These constraints are dictated by the traffic-aware design requirements, the on-board antenna system limitations, and the signaling considerations in the K-band. Noting that the ABLP is agnostic about the inherent inter-beam interference (IBI), we construct an interference simulation environment using irregularly shaped beams for a large-scale multibeam HTS system. To cope with IBI, the combination of on-board DBF and on-ground precoding is considered. For precoded and non-precoded HTS configurations, the proposed design shows better traffic-matching capabilities in comparison to a regular beam layout plan. 
Lastly, we provide trade-off analyses between system-level key performance indicators for different realistic non-uniform traffic patterns.

Gradient descent dynamics and the jamming transition in infinite dimensions
Manacorda, Alessandro; Zamponi, Francesco, in Journal of Physics. A, Mathematical and Theoretical (2022)
Gradient descent dynamics in complex energy landscapes, i.e. featuring multiple minima, finds application in many different problems, from soft matter to machine learning. Here, we analyze one of the simplest examples, namely that of soft repulsive particles in the limit of infinite spatial dimension d. The gradient descent dynamics then displays a jamming transition: at low density, it reaches zero-energy states in which particles' overlaps are fully eliminated, while at high density the energy remains finite and overlaps persist. At the transition, the dynamics becomes critical. In the d → ∞ limit, a set of self-consistent dynamical equations can be derived via mean field theory. We analyze these equations and we present some partial progress towards their solution. We also study the random Lorentz gas in a range of d = 2...22, and obtain a robust estimate for the jamming transition in d → ∞. The jamming transition is analogous to the capacity transition in supervised learning, and in the appendix we discuss this analogy in the case of a simple one-layer fully-connected perceptron.
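The jamming scenario described in this abstract is easy to reproduce at small scale. The following is a minimal finite-d illustration, not the authors' code: harmonic soft-sphere repulsion with plain gradient descent under periodic boundaries, with all parameters (N, density, step size) chosen arbitrarily.

import numpy as np

rng = np.random.default_rng(0)
N, d, sigma, dt = 64, 3, 1.0, 0.05
L = (N / 1.2) ** (1 / d)           # box size for an arbitrary number density
x = rng.uniform(0, L, (N, d))      # random initial configuration

def energy_and_grad(x):
    # Harmonic overlap energy: V = sum over pairs of (1 - r/sigma)^2 / 2 for r < sigma
    diff = x[:, None, :] - x[None, :, :]
    diff -= L * np.round(diff / L)           # minimum-image periodic boundaries
    r = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(r, np.inf)              # ignore self-interaction
    overlap = np.clip(1 - r / sigma, 0, None)
    V = 0.25 * overlap**2                    # factor 1/4: each pair counted twice
    g = -((overlap / (sigma * r))[:, :, None] * diff).sum(axis=1)  # dV/dx_i
    return V.sum(), g

for step in range(2000):                     # plain gradient descent
    V, g = energy_and_grad(x)
    x = (x - dt * g) % L
print("final energy:", energy_and_grad(x)[0])  # ~0 below jamming; finite above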
Construction of a digital fetus library for radiation dosimetry
Qu; Xie, Tianwu; L Giger et al., in Medical Physics (2022)
Purpose: Accurate estimations of fetal absorbed dose and radiation risks are crucial for radiation protection and important for radiological imaging research owing to the high radiosensitivity of the fetus. Computational anthropomorphic models have been widely used in patient-specific radiation dosimetry calculations. In this work, we aim to build the first digital fetal library for more reliable and accurate radiation dosimetry studies. Acquisition and validation methods: Computed tomography (CT) images of abdominal and pelvic regions of 46 pregnant females were segmented by experienced medical physicists. The segmented tissues/organs include the body contour, skeleton, uterus, liver, kidney, intestine, stomach, lung, bladder, gall bladder, spleen, and pancreas for the maternal body, and the placenta, amniotic fluid, fetal body, fetal brain, and fetal skeleton. Nonuniform rational B-spline (NURBS) surfaces of each identified region were constructed manually using 3D modeling software. The Hounsfield unit values of each identified organ were gathered from CT images of pregnant patients and converted to tissue density. Organ volumes were further adjusted according to reference measurements for the developing fetus recommended by the World Health Organization (WHO) and the International Commission on Radiological Protection. A series of anatomical parameters, including femur length, humerus length, biparietal diameter, abdominal circumference (FAC), and head circumference, were measured and compared with WHO recommendations. Data format and usage notes: The first fetal patient-specific model library was developed with the anatomical characteristics of each model derived from the corresponding patient, whose gestational age varies between 8 and 35 weeks. Voxelized models are represented in the form of MCNP matrix input files representing the three-dimensional model of the fetus. The size distributions of each model are also provided in text files. All data are stored on Zenodo and are publicly accessible at the following link: https://zenodo.org/record/6471884. Potential applications: The constructed fetal models and maternal anatomical characteristics are consistent with the corresponding patients. The resulting computational fetus could be used in radiation dosimetry studies to improve the reliability of fetal dosimetry and radiation risk assessment. The advantages of NURBS surfaces in terms of adapting fetal postures and positions enable us to adequately assess their impact on radiation dosimetry calculations.
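The Hounsfield-unit-to-density conversion mentioned here is typically a piecewise-linear calibration. The abstract does not give the curve used, so the following generic sketch uses made-up calibration points; real curves come from scanner-specific phantom measurements.

import numpy as np

# Illustrative HU-to-density calibration points (g/cm^3): air, water, dense bone.
hu_points      = np.array([-1000.0, 0.0, 3000.0])
density_points = np.array([0.00121, 1.0, 2.9])

def hu_to_density(hu):
    """Piecewise-linear interpolation from Hounsfield units to mass density."""
    return np.interp(hu, hu_points, density_points)

print(hu_to_density(np.array([-1000, -50, 40, 1200])))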
Quantum machine learning corrects classical forcefields: Stretching DNA base pairs in explicit solvent
Berryman, Josh; Taghavi, Amirhossein; Mazur, Florian et al., in Journal of Chemical Physics (2022), 157(6)
In order to improve the accuracy of molecular dynamics simulations, classical forcefields are supplemented with a kernel-based machine learning method trained on quantum-mechanical fragment energies. As an example application, a potential-energy surface is generalized for a small DNA duplex, taking into account explicit solvation and long-range electron exchange–correlation effects. A long-standing problem in molecular science is that experimental studies of the structural and thermodynamic behavior of DNA under tension are not well confirmed by simulation; study of the potential energy vs extension taking into account a novel correction shows that leading classical DNA models have excessive stiffness with respect to stretching. This discrepancy is found to be common across multiple forcefields. The quantum correction is in qualitative agreement with the experimental thermodynamics for larger DNA double helices, providing a candidate explanation for the general and long-standing discrepancy between single-molecule stretching experiments and classical calculations of DNA stretching. The new dataset of quantum calculations should facilitate multiple types of nucleic acid simulation, and the associated Kernel Modified Molecular Dynamics method (KMMD) is applicable to biomolecular simulations in general. KMMD is made available as part of the AMBER22 simulation software.

Optimal Priority Assignment for Real-Time Systems: A Coevolution-Based Approach
Lee, Jaekwon; Shin, Seung Yeob; Nejati, Shiva et al., in Empirical Software Engineering (2022), 27
In real-time systems, priorities assigned to real-time tasks determine the order of task executions, by relying on an underlying task scheduling policy. Assigning optimal priority values to tasks is critical to allow the tasks to complete their executions while maximizing safety margins from their specified deadlines. This enables real-time systems to tolerate unexpected overheads in task executions and still meet their deadlines. In practice, priority assignments result from an interactive process between the development and testing teams. In this article, we propose an automated method that aims to identify the best possible priority assignments in real-time systems, accounting for multiple objectives regarding safety margins and engineering constraints. Our approach is based on a multi-objective, competitive coevolutionary algorithm mimicking the interactive priority assignment process between the development and testing teams. We evaluate our approach by applying it to six industrial systems from different domains and several synthetic systems. The results indicate that our approach significantly outperforms both our baselines, i.e., random search and sequential search, and solutions defined by practitioners. Our approach scales to complex industrial systems as an offline analysis method that attempts to find near-optimal solutions within acceptable time, i.e., less than 16 hours.

Rényi Entropy in Statistical Mechanics
Fuentes, Jesús; Goncalves, Jorge, in Entropy (2022), 24(8), 1080
Rényi entropy was originally introduced in the field of information theory as a parametric relaxation of Shannon (in physics, Boltzmann–Gibbs) entropy. This has also fuelled different attempts to generalise statistical mechanics, although mostly skipping the physical arguments behind this entropy and instead tending to introduce it artificially. However, as we will show, modifications to the theory of statistical mechanics are needless to see how Rényi entropy automatically arises as the average rate of change of free energy over an ensemble at different temperatures. Moreover, this notion is extended by considering distributions for isospectral, non-isothermal processes, resulting in relative versions of free energy, in which the Kullback–Leibler divergence or the relative version of Rényi entropy appear within the structure of the corrections to free energy. These generalisations of free energy recover the ordinary thermodynamic potential whenever isothermal processes are considered.
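For reference, the parametric family in question is the standard Rényi entropy (a textbook definition, not a result of this paper):

$$S_q(p) = \frac{1}{1-q} \ln \sum_i p_i^{\,q}, \qquad q > 0,\ q \neq 1,$$

which recovers the Shannon (Boltzmann–Gibbs) entropy $-\sum_i p_i \ln p_i$ in the limit $q \to 1$.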
From predicting to learning dissipation from pair correlations of active liquids
Rassolov, Gregory; Tociu, Laura; Fodor, Etienne et al., in Journal of Chemical Physics (2022)
Active systems, which are driven out of equilibrium by local non-conservative forces, can adopt unique behaviors and configurations. An important challenge in the design of novel materials which utilize such properties is to precisely connect the static structure of active systems to the dissipation of energy induced by the local driving. Here, we use tools from liquid-state theories and machine learning to take on this challenge. We first analytically demonstrate for an isotropic active matter system that dissipation and pair correlations are closely related when driving forces behave like an active temperature. We then extend a nonequilibrium mean-field framework for predicting these pair correlations, which unlike most existing approaches is applicable even for strongly interacting particles and far from equilibrium, to predicting dissipation in these systems. Based on this theory, we reveal a robust analytic relation between dissipation and structure, which holds even as the system approaches a nonequilibrium phase transition. Finally, we construct a neural network that maps static configurations of particles to their dissipation rate without any prior knowledge of the underlying dynamics. Our results open novel perspectives on the interplay between dissipation and organization out of equilibrium.

Current cross correlations in a quantum Hall collider at filling factor two
Idrisov, Edvin; Levkivskyi, Ivan; Sukhorukov, Eugene et al., in Physical Review. B, Condensed Matter (2022)

Probabilistic Deep Learning for Real-Time Large Deformation Simulations
Deshpande, Saurabh; Lengiewicz, Jakub; Bordas, Stéphane, in Computer Methods in Applied Mechanics and Engineering (2022), 398(0045-7825), 115307
For many novel applications, such as patient-specific computer-aided surgery, conventional solution techniques of the underlying nonlinear problems are usually computationally too expensive and lack information about how certain we can be about their predictions. In the present work, we propose a highly efficient deep-learning surrogate framework that is able to accurately predict the response of bodies undergoing large deformations in real time. The surrogate model has a convolutional neural network architecture, called U-Net, which is trained with force–displacement data obtained with the finite element method. We propose deterministic and probabilistic versions of the framework. The probabilistic framework utilizes the Variational Bayes Inference approach and is able to capture all the uncertainties present in the data as well as in the deep-learning model. Based on several benchmark examples, we show the predictive capabilities of the framework and discuss its possible limitations.
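The probabilistic variant described above returns a distribution rather than a point prediction. As a generic illustration (a scalar stand-in, not the paper's U-Net or its variational posterior), averaging repeated sampled forward passes yields a predictive mean and an uncertainty band:

import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a variational surrogate: each posterior "weight sample" gives a
# slightly different force -> displacement map. A real model would instead draw
# network weights from the learned variational posterior.
weight_samples = 2.0 + 0.1 * rng.normal(size=64)

def predict(force):
    return weight_samples * force       # one prediction per weight draw

samples = predict(3.0)
mean, std = samples.mean(), samples.std()
print(f"displacement ~ {mean:.2f} +/- {std:.2f}")  # mean and epistemic spread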
Outage Constrained Robust Beamforming Optimization for Multiuser IRS-Assisted Anti-Jamming Communications With Incomplete Information
Sun, Yifu; An, Kang; Luo, Junshan et al., in IEEE Internet of Things Journal (2022), 9(15), 13298-13314
Malicious jamming attacks have been regarded as a serious threat to Internet of Things (IoT) networks, which can significantly degrade the quality of service (QoS) of users. This paper utilizes an intelligent reflecting surface (IRS) to enhance anti-jamming performance due to its capability in reconfiguring the wireless propagation environment via dynamically adjusting each IRS reflecting element. To enhance the communication performance against jamming attacks, a robust beamforming optimization problem is formulated in a multiuser IRS-assisted anti-jamming communications scenario under imperfect or unknown jammer's channel state information (CSI). In addition, we further consider the fact that the jammer's transmit beamforming cannot be known at the BS. Specifically, with no knowledge of the jammer's transmit beamforming, total transmit power minimization problems are formulated subject to the outage probability requirements of legitimate users with the jammer's statistical CSI, and the signal-to-interference-plus-noise ratio (SINR) requirements of legitimate users without the jammer's CSI, respectively. By applying the decomposition-based large deviation inequality (DBLDI), Bernstein-type inequality (BTI), Cauchy–Schwarz inequality, and a penalty non-smooth optimization method, we efficiently solve the initially intractable and non-convex problems. Numerical simulations demonstrate that the proposed anti-jamming approaches achieve superior anti-jamming performance and lower power consumption compared to the non-IRS scheme, and reveal the impact of key parameters on the achievable system performance.
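The outage constraints are handled analytically in the paper (via DBLDI and BTI); a brute-force sanity check of any such design is a Monte Carlo estimate of the outage probability under the jammer's statistical CSI. The sketch below uses a made-up scalar link rather than the paper's IRS system model:

import numpy as np

rng = np.random.default_rng(2)

# Made-up scalar link: SINR under a random Rayleigh-faded jamming channel; in
# the paper this would involve IRS phase shifts and the transmit beamformers.
signal_power, noise, jam_power, sinr_min = 10.0, 1.0, 4.0, 2.0

g = (rng.normal(size=100_000) ** 2 + rng.normal(size=100_000) ** 2) / 2
sinr = signal_power / (noise + jam_power * g)      # g ~ Exp(1) fading gain
outage = np.mean(sinr < sinr_min)
print(f"estimated outage probability: {outage:.3f}")  # compare to a target, e.g. 0.05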
Zones and zoning: Linking the geographies of freeports with ArtTech and financial market making
Dörry, Sabine; Hesse, Markus, in Geoforum (2022)
Freeports and special economic zones (SEZs) are established policy tools to attract foreign investment at specific locations, based on the de-coupling of sovereignty and territory. As a result, they emerged not only in developmental contexts, but also in tax havens and financial centres. Recently, freeports and SEZs have shifted from responding to global competition for spaces best suited to attract tangible manufacturing to responding to competition for spaces with the best conditions to enable value extraction and wealth shielding. We develop the argument on the emerging industry of ArtTech and new ‘fine art freeports’ that thrive on two core social practices: fracturing property rights to enhance financial liquidity and trading activity in highly exclusive fine-art markets, and offshoring – or zoning – to exploit freeport-facilitated relations for market making and rent-seeking. Besides such practices to make and game markets, freeports supply important physical infrastructure for fine-art technical and custody services that precondition any form of value creation. As such, freeports are important spaces for policy experimentation. Contrary to the conventional belief about free zones in general and freeports in particular, however, their economic impact remains limited. We explain this by conceptualising freeports as ‘zones’ defined or designed by specific processes of ‘zoning’ that link their multiple geographies. We conclude that freeports are no sites of exception but spaces that help legitimise novel institutional and economic arrangements emergent in the economy at large.

Federated Learning Meets Contract Theory: Economic-Efficiency Framework for Electric Vehicle Networks
Saputra, Yuris M.; Nguyen, Diep N.; Dinh, Thai Hoang et al., in IEEE Transactions on Mobile Computing (2022), 21(8), 2803-2817
In this paper, we propose a novel energy-efficient framework for an electric vehicle (EV) network using a contract-theoretic economic model to maximize the profits of charging stations (CSs) and improve the social welfare of the network. Specifically, we first introduce CS-based and CS-clustering-based decentralized federated energy learning (DFEL) approaches which enable the CSs to train their own energy transactions locally to predict energy demands. In this way, each CS can exchange its learned model with other CSs to improve prediction accuracy without revealing actual datasets and reduce communication overhead among the CSs. Based on the energy demand prediction, we then design a multi-principal one-agent (MPOA) contract-based method. In particular, we formulate the CSs' utility maximization as a non-collaborative energy contract problem in which each CS maximizes its utility under common constraints from the smart grid provider (SGP) and other CSs' contracts. Then, we prove the existence of an equilibrium contract solution for all the CSs and develop an iterative algorithm at the SGP to find the equilibrium. Through simulation results using the dataset of CSs' transactions in Dundee city, the United Kingdom, between 2017 and 2018, we demonstrate that our proposed method can improve energy demand prediction accuracy by up to 24.63% and lessen communication overhead by 96.3% compared with other machine learning algorithms. Furthermore, our proposed method can outperform non-contract-based economic models by 35% and 36% in terms of the CSs' utilities and social welfare of the network, respectively.
Backscatter Sensors Communication for 6G Low-Powered NOMA-Enabled IoT Networks Under Imperfect SIC
Ahmed, Manzoor; Khan, Wali Ullah; Ihsan, Asim et al., in IEEE Systems Journal (2022)
The combination of non-orthogonal multiple access (NOMA) using the power domain with backscatter communication (BC) is expected to connect large-scale Internet of things (IoT) devices in the future sixth-generation era. This article introduces BC in a multicell IoT network, where a source in each cell transmits a superimposed signal to its associated IoT devices using NOMA. The backscatter sensor tag (BST) also transmits data to IoT devices by reflecting and modulating the superimposed signal of the source. A new optimization framework is provided that simultaneously optimizes the total power of each source, the power allocation coefficient of IoT devices, and the reflection coefficient (RC) of the BST under imperfect successive interference cancellation decoding. This work aims to maximize the total energy efficiency (EE) of the IoT network subject to the quality of service of each IoT device. The problem is first transformed using the Dinkelbach method and then decoupled into two subproblems. The Karush–Kuhn–Tucker conditions and dual Lagrangian method are employed to obtain efficient solutions. In addition, we also calculate the EE of the conventional NOMA network without BC as a benchmark framework. Simulation results unveil the advantage of our considered NOMA BC network over the conventional NOMA network in terms of system total EE.
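The Dinkelbach step mentioned above converts the fractional energy-efficiency objective R(p)/P(p) into a sequence of subtractive problems R(p) - lam*P(p) with an updated parameter lam. Below is a minimal sketch on a toy scalar problem; the rate and power models are illustrative, not the paper's formulation:

import numpy as np
from scipy.optimize import minimize_scalar

# Toy EE problem: maximize R(p)/P(p) over transmit power p in [0, 10],
# with R(p) = log2(1 + g*p) and P(p) = p + circuit power (invented values).
g, p_circuit = 4.0, 0.5
R = lambda p: np.log2(1 + g * p)
P = lambda p: p + p_circuit

lam = 0.0                            # Dinkelbach parameter: current EE guess
for _ in range(20):
    # Inner problem: maximize R(p) - lam * P(p) over the power budget.
    res = minimize_scalar(lambda p: -(R(p) - lam * P(p)),
                          bounds=(0, 10), method="bounded")
    p_opt = res.x
    if abs(R(p_opt) - lam * P(p_opt)) < 1e-9:   # converged: F(lam) ~ 0
        break
    lam = R(p_opt) / P(p_opt)        # update the EE estimate

print(f"optimal EE ~ {lam:.4f} at p = {p_opt:.3f}")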
Rate Splitting Multiple Access for Next Generation Cognitive Radio Enabled LEO Satellite Networks
Khan, Wali Ullah; Ali, Zain; Lagunas, Eva et al., in Bulletin. Cornell University Libraries (2022)
Low Earth Orbit (LEO) satellite communication (SatCom) has drawn particular attention recently due to its high data rate services and low round-trip latency. It has lower launching and manufacturing costs than Medium Earth Orbit (MEO) and Geostationary Earth Orbit (GEO) satellites. Moreover, LEO SatCom has the potential to provide global coverage with a high-speed data rate and low transmission latency. However, spectrum scarcity might be one of the challenges in the growth of LEO satellites, imposing severe restrictions on developing ground-space integrated networks. To address this issue, cognitive radio and rate splitting multiple access (RSMA) are the two emerging technologies for high spectral efficiency and massive connectivity. This paper proposes a cognitive radio enabled LEO SatCom using the RSMA radio access technique with the coexistence of a GEO SatCom network. In particular, this work aims to maximize the sum rate of LEO SatCom by simultaneously optimizing the power budget over different beams, the RSMA power allocation for users over each beam, and the subcarrier user assignment while restricting the interference temperature to the GEO SatCom. The problem of sum rate maximization is formulated as non-convex, where the global optimal solution is challenging to obtain. Thus, an efficient solution can be obtained in three steps: first, we employ a successive convex approximation technique to reduce the complexity and make the problem more tractable. Second, for any given resource block user assignment, we adopt Karush–Kuhn–Tucker (KKT) conditions to calculate the transmit power over different beams and the RSMA power allocation of users over each beam. Third, using the allocated power, we design an efficient algorithm based on the greedy approach for resource block user assignment. For comparison, we propose two suboptimal schemes with fixed power allocation over different beams and random resource block user assignment as benchmarks. Numerical results provided in this work are obtained based on Monte Carlo simulations, which demonstrate the benefits of the proposed optimization scheme compared to the benchmark schemes.

Histories of the past and histories of the future: Pandemics and historians of education
Grosvenor, Ian; Priem, Karin, in Paedagogica Historica (2022)
The COVID-19 outbreak at the beginning of the 2020s not only marked a dramatic moment in world health, but also the start of manifold and entangled global crises that seem to define a watershed moment with severe effects on education. Pandemics, we know, are recurrent events. Faced with COVID-19, some historians have looked to previous pandemics to understand the nature of the disease and its trajectory, and how previous generations have dealt with similar health crises. This special issue intends not to reinforce narratives of the past but rather to question them. The histories that have been written for this special issue, Histories of the Past and Histories of the Future: Pandemics and Historians of Education, offer insights that refer to past and future research agendas. They look at the mediation and circulation of knowledge during past pandemics, trace unheard voices and emotions of pandemics, analyse national policies and emerging discourses, and underline the entangled histories of education and pandemics. Collectively, the articles brought together in this issue forcibly suggest that the most fruitful and rewarding way forward to studying past pandemics lies in thinking ecologically. By assessing the myriad consequences of living in "pandemic times," of confronting exposure, transmission, transmutation, disruption, and loss, and looking to community and collective futures, we believe we cannot study pandemics and their impact on education and children's lives without widening the aperture of our research. Adopting an ecological approach will help us to not only actively engage with histories of the present and contemporary collecting, but also offer the possibility of new understandings and new insights into the dynamics and consequences of past pandemics.

Financial development and income inequality: A meta-analysis
Chletsos, Michael; Sintos, Andreas, in Journal of Economic Surveys (2022)
The voluminous empirical research on the effect of financial development on income inequality has yielded mixed results. In this paper, we collect 2127 estimates reported in 116 published studies that investigate the effect of financial development on income inequality. Although our initial tests for publication bias (which do not account for moderator variables) show that the current literature does not suffer from publication selectivity, once we control for a set of moderator variables, we find evidence of mild publication bias in favor of positive estimates (i.e., the current literature favors the publication of studies that find that financial development increases income inequality). In addition, our results suggest that the overall effect of financial development on income inequality is on average zero, but that its sign and magnitude depend systematically on various study characteristics. The characteristics of data and estimation methods, whether endogeneity is taken into account, the different measures of financial development, and the inclusion of financial openness, inflation and income variables in the regressions matter significantly for the effect of financial development on inequality.
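Publication-bias tests of this kind are commonly implemented as a funnel-asymmetry / precision-effect (FAT-PET) meta-regression of reported estimates on their standard errors. The sketch below runs on synthetic data and is not the authors' specification:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Synthetic meta-analysis sample: true effect 0; selective reporting makes
# imprecise studies report larger positive estimates (the 0.8*se term).
se = rng.uniform(0.05, 0.5, size=200)            # study standard errors
estimate = 0.0 + 0.8 * se + rng.normal(0, se)    # reported effect sizes

X = sm.add_constant(se)
fat_pet = sm.OLS(estimate, X).fit()
print(fat_pet.params)   # intercept ~ bias-corrected effect (PET), slope ~ FAT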
The effects of IMF conditional programs on the unemployment rate
Chletsos, Michael; Sintos, Andreas, in European Journal of Political Economy (2022)
The fundamental mission of the International Monetary Fund (IMF) is to ensure global financial stability and to assist countries in economic turmoil. Although there is a consensus that IMF-supported programs can have a direct effect on the labor market of recipient countries, it remains unclear how the IMF participation decision and the conditionalities attached to IMF loans affect the unemployment rate of borrowing countries. Using a world sample of countries from 1980 to 2014, we investigate how lending conditional programs of the IMF affect the unemployment rate. Our analyses account for the selection bias related to, first, the IMF participation decision and, second, the conditions included within the program. We show that IMF program participation significantly increases the unemployment rate of recipient countries. Once we control for the number of conditions, however, we find that only IMF conditions have a detrimental and highly significant effect on the unemployment rate. There is evidence that the adverse short-run effect of IMF conditions holds robust in the long run. Disaggregating IMF conditionality by issue area, we find adverse effects on the unemployment rate for four policy areas: labor market deregulation, reforms requiring privatization of state-owned enterprises, external sector reforms stipulating trade and capital account liberalization, and fiscal policy reforms that restrain government expenditure. Our initial results are found to be robust across alternative empirical specifications.

Le droit à la sauce piquante n°27 - Aout 2022
Hiez, David; Laurent, Rémi, in Le droit à la sauce piquante (2022), 27

Informalität im regionalen Wachstumsprozess. Einblick in eine „Black Box“ der Planungspraxis am Beispiel Luxemburgs
Schmitz, Nicolas; Hesse, Markus; Becker, Tom, in Raumforschung und Raumordnung (2022)
The article examines the steering of settlement growth in the Grand Duchy of Luxembourg from the perspective of informality. Luxembourg is under high demographic and economic growth pressure in all parts of the country (the capital, the formerly industrial south, the rural north), which places great demands on the planning system. At the same time, there is no formal regional planning, at most approaches to inter-municipal cooperation that are predominantly voluntary in nature. Drawing on empirical case studies in two high-growth municipalities (Junglinster, Schuttrange), the article sketches planning decisions within the institutional triangle of municipality, state, and private actors. Informality serves here not only to compensate for the lack of planning control, but also to cope with the country's complex body of law.
The vital interests of landowners also come into play informally: since part of the country's welfare effects is realized through land, speculative interests are inherent and carry a high potential for blockage. Against this background, the article formulates initial considerations for a regional growth management that could close the gap between state-level spatial planning and municipal self-interest.

Robust Congestion Control for Demand-Based Optimization in Precoded Multi-Beam High Throughput Satellite Communications
Bui, Van-Phuc; Chien, Trinh-Van; Lagunas, Eva et al., in IEEE Transactions on Communications (2022)

SSPCATCHER: Learning to catch security patches
Sawadogo, Delwende Arthur; Bissyande, Tegawendé François D Assise; Moha, Naouel et al., in Empirical Software Engineering (2022), 27

Los pagos por servicios ambientales en la Ciudad de México: un enfoque de coherencia de políticas públicas
Cetina Arenas, Lucero; Koff, Harlan; Maganda, Carmen et al., in Región y Sociedad (2022), 34
Objective: to analyze two programs of environmental services payment in the Mexico City Soil Conservation Lands using the policy coherence for development theoretical-methodological framework. Methodology: qualitative analysis based on policy coherence typologies. Results: the programs' beneficiaries economically depend on the associated subsidies; there are restrictions on the development of economic activities; the financial mechanisms are inefficient; and there is a lack of a sustainability vision that considers the socioeconomic dimensions. Limitations: participation of authorities and community members in this type of study is limited, and there is a lack of information on the relationships between institutions and work programs. Value: the analysis of peri-urban public policies through the policy coherence for development framework. Conclusions: applying the policy coherence for development framework helped identify imbalances and synergies among the policy dimensions analyzed. This methodology can promote the mainstreaming of sustainability norms.

Massive MIMO Hybrid Precoding for LEO Satellite Communications With Twin-Resolution Phase Shifters and Nonlinear Power Amplifiers
You, Li; Qiang, Xiaoyu; Li, Ke-Xin et al., in IEEE Transactions on Communications (2022), 70(8), 5543-5557
The massive multiple-input multiple-output (MIMO) transmission technology has recently attracted much attention in non-geostationary, e.g., low earth orbit (LEO), satellite communication (SATCOM) systems, since it can significantly improve the energy efficiency (EE) and spectral efficiency. In this work, we develop a hybrid analog/digital precoding technique in the massive MIMO LEO SATCOM downlink, which reduces the onboard hardware complexity and power consumption. In the proposed scheme, the analog precoder is implemented via a more practical twin-resolution phase shifting (TRPS) network to make a meticulous tradeoff between power consumption and array gain. In addition, we consider and study the impact of the distortion effect of the nonlinear power amplifiers (NPAs) in the system design. By jointly considering all the above factors, we propose an efficient algorithmic approach for the TRPS-based hybrid precoding problem with NPAs. Numerical results show the EE gains obtained when the nonlinear distortion is considered and the performance superiority of the proposed TRPS-based hybrid precoding scheme over the baselines.
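A twin-resolution network mixes a few fine phase shifters with many coarse ones. As a generic illustration of the resulting quantization trade-off (the 16/48 split and bit widths below are invented, not the paper's allocation rule):

import numpy as np

rng = np.random.default_rng(4)

def quantize_phase(phi, bits):
    """Round each phase to the nearest point of a 2**bits uniform grid."""
    step = 2 * np.pi / 2**bits
    return np.round(phi / step) * step

phi_ideal = rng.uniform(0, 2 * np.pi, 64)        # ideal analog phases
fine = quantize_phase(phi_ideal[:16], bits=4)    # a few high-resolution shifters
coarse = quantize_phase(phi_ideal[16:], bits=1)  # many cheap 1-bit shifters
phi_twin = np.concatenate([fine, coarse])

# Array-gain loss relative to ideal phasing toward a common direction.
gain = np.abs(np.mean(np.exp(1j * (phi_twin - phi_ideal))))
print(f"normalized array gain after quantization: {gain:.3f}")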
Testing market regulations in experimental asset markets – The case of margin purchases
Neugebauer, Tibor; Füllbrunn, Sascha, in Journal of Economic Behavior and Organization (2022), 200

Investigations on the lateral-torsional buckling of an innovative U-shaped steel beam in construction stage of composite beam
Turetta, Maxime; Odenbreit, Christoph; Khelil, Abdelouahab et al., in Structures (2022), 44
Structural elements of buildings have to meet a multitude of requirements. Besides the static load-bearing capacity, common requirements are structural integrity and efficiency during the construction stage and sufficient fire resistance. Within the French CIFRE research project COMINO, an innovative type of composite beam was developed for building beams with a span of 6-12 m, which need a fire resistance of up to 2 hours with no additional supports in the construction stage. The developed solution is composed of a steel U-section acting as formwork in the construction stage for a reinforced concrete part that provides the fire resistance. In the exploitation stage, the steel and the reinforced concrete act together as a composite beam. In the construction stage, when the concrete is not hardened and thus the stabilizing effect is not present, the steel beam is subjected to lateral-torsional buckling. In order to investigate the structural behaviour of the newly developed steel section in the construction stage, a single full-scale test was carried out at the Laboratory of Structural Engineering of the University of Luxembourg. This article focuses on the stability of the steel beam made of thin-walled steel parts without considering any stabilizing effect. The test results are then compared to the results of numerical investigations and to the analytical solutions of EN 1993.

Connected Vehicle Platforms for Dynamic Insurance
Colot, Christian; Robinet, François; Nichil, Geoffrey et al., in Proceedings of the 6th International Conference on Intelligent Traffic and Transportation (2022)
Another end of the world is possible: Nicholas Georgescu-Roegen, from bioeconomy to architecture
Reyes Nájera, César; Popova, Simona Bozhidarova, in ARQ (2022), (111), 52-59
Can technology ignore the physical limits of growth, or can it only provide us with more and better products, but also with more and better waste? This is a text that reflects on the entropic nature of construction materials and their economic process, based on the work of the Romanian economist Nicholas Georgescu-Roegen. The discussion helps us to critically reflect on whether architecture can, given its material quality, extend the life or death of the world, or whether it is simply an activity that contributes to us having "a short but extravagant existence".

Cemeteries and crematoria, forgotten public space in multicultural Europe. An agenda for inclusion and citizenship
Maddrell, Avril; Beebeejaun, Yasminah; Wingren, Carola et al., in Area (2022)
In western Europe, municipal or otherwise state-commissioned cemeteries and crematoria are public spaces and services, open to all. Cemeteries and crematoria grounds are neglected in geographical, planning and policy debates about the character, design, management, use and accessibility of public spaces, and likewise in debates about the social inclusion of migrants and minorities. This may reflect a tendency to situate cemeteries socially and geographically in the peripheries of contemporary European society, but they are, nonetheless, sites of vital public health infrastructure, as well as being highly significant symbolic, religious-spiritual, secular-sacred, and emotionally-laden places. Examining cemeteries-crematoria against a criterion of inclusive public space provides new insights into i) the nature of public space and its governance; ii) rights and barriers to shared public spaces and associated infrastructure in everyday multicultural contexts; iii) national-local negotiations of majority-minority social relations and cultural practices in and through public spaces; and iv) the need to place municipal cemeteries-crematoria centre stage in scholarship and policy on public space which is culturally inclusive and serves all citizens.

Energy Efficient Transmission Design for NOMA Backscatter-Aided UAV Networks with Imperfect CSI
AlJubayrin, Saad; Al-Wesabi, Fahd N.; Alsolai, Hadeel et al., in Drones (2022)
The recent combination of ambient backscatter communication (ABC) with non-orthogonal multiple access (NOMA) has shown great potential for connecting large-scale Internet of things (IoT) devices in future unmanned aerial vehicle (UAV) networks. The basic idea of ABC is to provide battery-free transmission by harvesting the energy of existing RF signals of WiFi, TV towers, and cellular base stations/UAVs. ABC uses smart sensor tags to modulate and reflect data among wireless devices. On the other side, NOMA makes possible the communication of more than one IoT device on the same frequency. In this work, we provide an energy-efficient transmission design for an ABC-aided UAV network using NOMA. This work aims to optimize the power consumption of the UAV system while ensuring the minimum data rate of the IoT devices.
Specifically, the transmit power of UAVs and the reflection coefficient of the ABC system are simultaneously optimized under the assumption of imperfect channel state information (CSI). Due to co-channel interference among UAVs, imperfect CSI, and NOMA interference, the joint optimization problem is formulated as non-convex, which involves high complexity and makes it hard to obtain the optimal solution. Thus, it is first transformed and then solved by a sub-gradient method with low complexity. In addition, a conventional NOMA UAV framework is also studied for comparison without involving ABC. Numerical results demonstrate the benefits of using ABC in a NOMA UAV network compared to the conventional UAV framework.

L-invariants of Artin motives
Dimitrov, Mladen; Maksoud, Alexandre, in Annales mathématiques du Québec (2022)
We compute Benois L-invariants of weight 1 cuspforms and of their adjoint representations and show how this extends Gross' p-adic regulator to Artin motives which are not critical in the sense of Deligne. Benois' construction depends on the choice of a regular submodule which is well understood when the representation is p-regular, as it then amounts to the choice of a "motivic" p-refinement. The situation is dramatically different in the p-irregular case, where the regular submodules are parametrized by a flag variety and thus depend on continuous parameters. We are nevertheless able to show in some examples how Hida theory and the geometry of the eigencurve can be used to detect a finite number of choices of arithmetic and "mixed-motivic" significance.

Inbound Carrier Plan Optimization for Adaptive VSAT Networks
Lacoste, Clément; Alves Martins, Wallace; Chatzinotas, Symeon et al., in IEEE Transactions on Aerospace and Electronic Systems (2022)
The past decades witnessed the application of adaptive modulation and coding (ACM) in satellite links. However, ACM technologies come at the cost of higher complexity when designing the network's carrier plan and user terminals. Accounting for those issues is even more important when the satellite link uses frequencies in Ka band and above, where the attenuation caused by tropospheric phenomena is a major concern. In this paper, we propose a solution for the inbound, i.e. return link, carrier plan sizing of very small aperture terminal (VSAT) networks. As tropospheric attenuation is a key factor, we present a mathematical problem formulation based on spatially correlated attenuation time series generators. Our proposed sizing scheme is formulated as a mixed integer linear programming (MILP) optimization problem. The numerical results for a test scenario in Europe show a 10 to 50% bandwidth improvement over traditional sizing methods for outage probabilities lower than 1%.
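As a generic illustration of MILP-based carrier-plan sizing (a toy model with invented numbers, far simpler than the paper's formulation), one can choose integer counts of carrier types that cover an aggregate demand at minimum spectrum cost, e.g. with the PuLP solver:

from pulp import LpProblem, LpVariable, LpMinimize, lpSum, PULP_CBC_CMD

# Toy inbound plan: pick integer counts of three carrier types so that the
# provisioned capacity covers the aggregate demand (all units are arbitrary).
bandwidth = {"narrow": 0.5, "medium": 1.0, "wide": 2.0}   # MHz per carrier
capacity  = {"narrow": 0.8, "medium": 1.8, "wide": 4.0}   # Mbps per carrier
demand = 25.0                                             # Mbps to serve

prob = LpProblem("carrier_plan", LpMinimize)
n = {k: LpVariable(f"n_{k}", lowBound=0, cat="Integer") for k in bandwidth}
prob += lpSum(bandwidth[k] * n[k] for k in bandwidth)     # minimize spectrum
prob += lpSum(capacity[k] * n[k] for k in capacity) >= demand
prob.solve(PULP_CBC_CMD(msg=False))
print({k: int(v.value()) for k, v in n.items()})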
The Use of Copyrighted Technical Standards in the Operationalisation of European Union Law: The Status Quo Position of the General Court in Public.Resources.Org (T-185/19)
Gerardy, Marie, in European Journal of Risk Regulation (2022)

Targeting a light-weight and multi-channel approach for distributed stream processing
Venugopal, Vinu Ellampallil; Theobald, Martin; Tassetti, Damien et al., in Journal of Parallel and Distributed Computing (2022), 167

Exploiting constructive interference in symbol level hybrid beamforming for dual-function radar-communication system
Wang, Bowen; Wu, Linlong; Cheng, Ziyang et al., in IEEE Wireless Communications Letters (2022), 11(10), 2071-2075
In this letter, we study the Hybrid Beamforming (HBF) design for a Dual-Function Radar-Communication (DFRC) system, which serves Multiple Users (MUs) and detects a target in the presence of signal-dependent clutters, simultaneously. Unlike conventional beamforming strategies, we propose a novel one on the symbol level, which exploits Constructive Interference (CI) to achieve a trade-off between radar and communication using one platform. To implement this novel strategy, we jointly design the DFRC transmit HBF and radar receive beamforming by maximizing the radar Signal to Interference plus Noise Ratio (SINR) while ensuring the Quality of Service (QoS) of the downlink communication. To tackle the formulated non-convex problem, we propose an iterative algorithm, which combines the Majorization-Minimization (MM) and Alternating Direction Method of Multipliers (ADMM) judiciously. The numerical experiments indicate that our algorithm yields the CI properly for robust communications and achieves better performance than the conventional HBF benchmarks in both communication bit error rate and radar SINR.

Gaussian approximation for sums of region-stabilizing scores
Bhattacharjee, Chinmoy; Molchanov, Ilya, in Electronic Journal of Probability (2022), 27
We consider the Gaussian approximation for functionals of a Poisson process that are expressible as sums of region-stabilizing (determined by the points of the process within some specified regions) score functions and provide a bound on the rate of convergence in the Wasserstein and the Kolmogorov distances. While such results have previously been shown in Lachièze-Rey, Schulte and Yukich (2019), we extend the applicability by relaxing some conditions assumed there and provide further insight into the results. This is achieved by working with stabilization regions that may differ from balls of random radii commonly used in the literature concerning stabilizing functionals. We also allow for non-diffuse intensity measures and unbounded scores, which are useful in some applications. As our main application, we consider the Gaussian approximation of the number of minimal points in a homogeneous Poisson process in $[0,1]^d$ with $d \geq 2$, and provide a presumably optimal rate of convergence.
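To make the main application concrete: a point of a sample is minimal if no other point is smaller or equal in every coordinate. A quick Monte Carlo estimate of the mean number of minimal points (illustrative only, unrelated to the paper's convergence-rate analysis):

import numpy as np

rng = np.random.default_rng(5)

def count_minimal(points):
    """A point is minimal if no other point dominates it coordinatewise."""
    dominated = (points[None, :, :] <= points[:, None, :]).all(-1)
    np.fill_diagonal(dominated, False)     # ignore self-comparison
    return int((~dominated.any(axis=1)).sum())

lam, d, reps = 200, 2, 500
counts = []
for _ in range(reps):
    n = rng.poisson(lam)                   # Poisson number of uniform points
    counts.append(count_minimal(rng.uniform(size=(n, d))))
print("mean number of minimal points:", np.mean(counts))  # ~ log(lam) for d = 2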
PCfun: a hybrid computational framework for systematic characterization of protein complex function
Sharma, Varun; Fossati, Andrea; Ciuffa, Rodolfo et al., in Briefings in Bioinformatics (2022), 23(4), 239

The Failure of “Yugoslavia’s Last Chance”: Ante Marković and his Reformists in the 1990 Elections
Glaurdic, Josip; Filipovic, Vladimir; Lesschaeve, Christophe, in Nationalities Papers (2022)
The last Prime Minister of Yugoslavia, Ante Marković, was considered by many within the country and in the international community to be Yugoslavia's last chance for a peaceful transition toward democracy and capitalism. In spite of his popularity, the Reformist party he created failed decisively in the first democratic elections of 1990. We expose the reasons for this failure by analyzing electoral, economic, and sociodemographic data on the level of more than two hundred Yugoslav municipalities where the Reformists put forward their candidates. Our analysis shows that the party's failure had little to do with the voters' exposure to the effects of the free market reforms undertaken by Marković's federal government during this period. Instead, the Reformists' results were largely determined by the communities' ethnic makeup and interethnic balance. The Reformists suffered at the hands of a strong negative campaign by the Serbian regime of Slobodan Milošević, and they were squeezed out by the ethnically based parties that benefited from voters behaving strategically in the electoral marketplace dominated by questions of nationalism. The analysis presented here offers important lessons for our understanding of Yugoslavia's breakup, post-communist transitions in general, and electoral politics in societies on the verge of ethnic conflict.

A Hybrid Modelling Approach For Aerial Manipulators
Kremer, Paul; Sanchez Lopez, Jose Luis; Voos, Holger, in Journal of Intelligent and Robotic Systems (2022)
Aerial manipulators (AM) exhibit particularly challenging, non-linear dynamics; the UAV and its manipulator form a tightly coupled dynamic system, mutually impacting each other. The mathematical model describing these dynamics forms the core of many solutions in non-linear control and deep reinforcement learning. Traditionally, the formulation of the dynamics involves Euler angle parametrization in the Lagrangian framework or quaternion parametrization in the Newton-Euler framework. The former has the disadvantage of giving birth to singularities and the latter of being algorithmically complex. This work presents a hybrid solution, combining the benefits of both, namely a quaternion approach leveraging the Lagrangian framework, connecting the singularity-free parameterization with the algorithmic simplicity of the Lagrangian approach. We do so by offering detailed insights into the kinematic modeling process and the formulation of the dynamics of a general aerial manipulator. The obtained dynamics model is validated experimentally against a real-time physics engine. A practical application of the obtained dynamics model is shown in the context of a computed torque feedback controller (feedback linearization), where we analyze its real-time capability with increasingly complex AM models.
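The singularity-free parameterization referred to above rests on the standard quaternion kinematic relation (textbook material, not a contribution of the paper):

$$\dot{q} = \tfrac{1}{2}\, q \otimes \begin{pmatrix} 0 \\ \boldsymbol{\omega} \end{pmatrix},$$

where $q$ is the unit attitude quaternion, $\boldsymbol{\omega}$ the body angular velocity and $\otimes$ the quaternion product; unlike Euler-angle rate mappings, this relation is defined for every attitude.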
One step ahead: mapping the Italian and German cybersecurity laws against the proposal for a NIS2 directive
Schmitz, Sandra; Chiara, Pier Giorgio, in International Cybersecurity Law Review (2022)
With the COVID-19 pandemic accelerating the digital transformation of the Single Market, the European Commission also speeded up the review of the first piece of European Union (EU)-wide cybersecurity legislation, the NIS Directive. Originally foreseen for May 2021, the Commission presented the review as early as December 2020, together with a Proposal for a NIS2 Directive. Almost in parallel, some Member States strengthened (or adopted) national laws beyond the scope of the NIS Directive to respond adequately to the fast-paced digital threat landscape. Against this backdrop, the article investigates the national interventions in the field of cybersecurity recently adopted by Italy and Germany. In order to identify similarities and divergences of the Italian and German national frameworks with the European Commission's Proposal for a NIS2 Directive, the analysis focuses on selected aspects extrapolated from the Commission Proposal, namely: i) the enlarged scope; ii) detailed cybersecurity risk-management measures; iii) more stringent supervisory measures; and iv) stricter enforcement requirements, including harmonised sanctions across the EU. The article concludes that the national cybersecurity legal frameworks under scrutiny already match the core of the proposed changes envisaged by the NIS2 Proposal.

Boosting Quantum Battery-Based IoT Gadgets via RF-Enabled Energy Harvesting
Gautam, Sumit; Solanki, Sourabh; Sharma, Shree Krishna et al., in Sensors (2022), 22(14), 1-19
The search for a highly portable and efficient supply of energy to run small-scale wireless gadgets has captivated the human race for the past few years. As a part of this quest, the idea of realizing a Quantum battery (QB) seems promising. Like any other practically tractable system, the design of QBs also involves several critical challenges. The main problem in this context is to ensure a lossless environment pertaining to the closed-system design of the QB, which is extremely difficult to realize in practice. Herein, we model and optimize various aspects of a Radio-Frequency (RF) Energy Harvesting (EH)-assisted, QB-enabled Internet-of-Things (IoT) system. Several RF-EH modules (in the form of micro- or nano-meter-sized integrated circuits (ICs)) are placed in parallel at the IoT receiver device, and the overall correspondingly harvested energy helps the involved Quantum sources achieve the so-called quasi-stable state.
Concretely, the Quantum sources absorb the energy of photons that are emitted by a photon-emitting device controlled by a micro-controller, which also manages the overall harvested energy from the RF-EH ICs. To investigate the considered framework, we first minimize the total transmit power under constraints on the overall harvested energy and the number of RF-EH ICs at the QB-enabled wireless IoT device. Next, we optimize the number of RF-EH ICs, subject to constraints on total transmit power and overall harvested energy. Correspondingly, we obtain suitable analytical solutions to the above-mentioned problems, respectively, and also cross-validate them using a non-linear program solver. The effectiveness of the proposed technique is reported in the form of numerical results, both theoretical and simulation-based, obtained over a range of operating system parameters.

A Gaussian damage function combined with sliced finite-element meshing for damage detection
Schommer, Sebastian; Dakhili, Khatereh; Nguyen, Viet Ha et al., in Journal of Civil Structural Health Monitoring (2022)
Bridges are among the most important components of transportation systems. Timely damage detection of these structures not only ensures reliability but also prevents catastrophic failures. This paper addresses the damage assessment of bridges based on model updating techniques. Artificial damage was introduced to a beam that was part of a real prestressed concrete bridge. The magnitude of the damage was increased stepwise, and static loading experiments were conducted in each step. A linear Finite-Element (FE) model with solid elements that were clustered into slices was utilised. A Gaussian bell-shaped curve was used as a damage function to describe the crack location using only three parameters. The experiments focused on sagging under dead load. Damage identification was performed in two steps using a coarse and a refined model. Initially, the FE model with a coarse mesh was updated to approximately localise the damage. Then, the FE model was refined in the vicinity of the approximately localised damage, and damage identification was accurately achieved. The results show that after the second step, the maximum error of damage localisation is less than 0.5%. This approach could later be used to detect small damages that are not visible.
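A three-parameter Gaussian damage profile of the kind described can be written as (generic notation; the paper's exact parametrization may differ):

$$D(x) = \alpha \exp\!\left(-\frac{(x - x_0)^2}{2\sigma^2}\right),$$

with $\alpha$ the damage magnitude (e.g. a local stiffness reduction), $x_0$ the crack location along the beam axis and $\sigma$ the spatial extent of the damaged zone.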
Thus, the aim of the present study was to investigate the prevalence of workplace COVID-19 countermeasures in organizations in Luxembourg. A person-centered approach was applied in order to explore how employees' psychological well-being and health (i.e., general psychological well-being, vigor, work satisfaction, work-related burnout, somatic complaints, fear of COVID-19 infection) are impacted by organizational countermeasures and whether there are certain employee groups that are less protected by these. Results of a latent class analysis revealed four different classes (Low level of countermeasures, Medium level of countermeasures, High level of countermeasures, High level of countermeasures low distance). Employees working in a healthcare setting were more likely than employees working in a non-healthcare setting to be members of the High level of countermeasures low distance class. Class membership was meaningfully associated with all well-being outcomes. Members of the High level of countermeasures class showed the highest level of well-being, whereas members of the Low level of countermeasures class and the High level of countermeasures low distance class showed the lowest level of well-being. Policy makers and organizations are recommended to increase the level of COVID-19 countermeasures as an adjunctive strategy to prevent and mitigate adverse mental health and well-being outcomes during the COVID-19 pandemic.

Studying the Parkinson's disease metabolome and exposome in biological samples through different analytical and cheminformatics approaches: a pilot study
Talavera Andujar, Begona; Aurich, Dagny; Aho, Velma et al., in Analytical and Bioanalytical Chemistry (2022)
Parkinson's disease (PD) is the second most prevalent neurodegenerative disease, with an increasing incidence in recent years due to the ageing population. Genetic mutations alone only explain <10% of PD cases, while environmental factors, including small molecules, may play a significant role in PD. In the present work, 22 plasma (11 PD, 11 control) and 19 feces samples (10 PD, 9 control) were analyzed by non-target high resolution mass spectrometry (NT-HRMS) coupled to two liquid chromatography (LC) methods (reversed phase (RP) and hydrophilic interaction liquid chromatography (HILIC)). A cheminformatics workflow was optimized using open software (MS-DIAL and patRoon) and open databases (all public MSP-formatted spectral libraries for MS-DIAL, PubChemLite for Exposomics, and the LITMINEDNEURO list for patRoon). Furthermore, five disease-specific databases and three suspect lists (on PD and related disorders) were developed, using PubChem functionality to identify relevant unknown chemicals. The results showed that non-target screening with the larger databases generally provided better results compared with smaller suspect lists. However, two suspect screening approaches with patRoon were also good options to study specific chemicals in PD. The combination of chromatographic methods (RP and HILIC) as well as two ionization modes (positive and negative) enhanced the coverage of chemicals in the biological samples.
While most metabolomics studies in PD have focused on blood and cerebrospinal fluid, we found a higher number of relevant features in feces, such as alanine betaine or nicotinamide, which can be directly metabolized by gut microbiota. This highlights the potential role of gut dysbiosis in PD development.

Determination of Slip Factor between CNC-Cut Serrated Surfaces of S355J2 Grade Steel Plates
Yolacan, Firat; Schäfer, Markus, in Buildings (2022), 12(995)
Structural joint configurations realized with serrated steel surfaces have started to be used in the construction field to assemble the primary and the secondary structural members of civil engineering structures. The main advantages of these joint configurations rely on their flexibility to accommodate construction tolerances and their slip-resistant load-bearing mechanism against dynamic loading conditions. Therefore, it is important to reliably establish the characteristic value of the friction coefficient, or in other words the slip factor, between the serrated steel surfaces in order to design reliable slip-resistant connections. In this study, the characteristic slip factor between CNC-cut serrated surfaces prepared from S355J2 grade steel plates is determined to investigate the impact of the CNC-cutting procedure on the slip-resistant load-bearing behaviour of steel-to-steel interfaces. Five experimental tests were performed according to EN1090-2, Annex G. The results are presented as load-slip curves, the variation of the bolt pre-tension load level, and nominal and actual slip factors for the tested configuration of the CNC-cut serrated steel-to-steel interface.

SLNR-based Secure Energy Efficient Beamforming in Multibeam Satellite Systems
Lin, Zhi; An, Kang; Niu, Hehao et al., in IEEE Transactions on Aerospace and Electronic Systems (2022)
Motivated by the fact that both security and energy efficiency are fundamental requirements and design targets of future satellite communications, this letter investigates secure energy efficient beamforming in multibeam satellite systems, where the satellite user in each beam is surrounded by an eavesdropper attempting to intercept the confidential information. To simultaneously improve transmission security and reduce power consumption, our design objective is to maximize the system secrecy energy efficiency (SEE) under the constraint of a total transmit power budget. Different from existing schemes with high complexity, we propose an alternating optimization scheme to address the SEE problem by decomposing the original nonconvex problem into subproblems. Specifically, we first utilize the signal-to-leakage-plus-noise ratio (SLNR) metric to obtain closed-form normalized beamforming weight vectors, while the successive convex approximation (SCA) method is used to efficiently solve the power allocation subproblem.
Then, an iterative algorithm is proposed to obtain the suboptimal solutions. Finally, simulation results are provided to verify the superiority of the proposed scheme compared to the benchmark schemes.

The Interaction between HLA-DRB1 and Smoking in Parkinson's Disease Revisited
Domenighetti, Cloé; Douillard, Venceslas; Sugier, Pierre-Emmanuel et al., in Movement Disorders (2022)
Background: Two studies that examined the interaction between HLA-DRB1 and smoking in Parkinson's disease (PD) yielded findings in opposite directions. Objective: To perform a large-scale independent replication of the HLA-DRB1 × smoking interaction. Methods: We genotyped 182 single nucleotide polymorphisms (SNPs) associated with smoking initiation in 12,424 cases and 9,480 controls to perform a Mendelian randomization (MR) analysis in strata defined by HLA-DRB1. Results: At the amino acid level, a valine at position 11 (V11) in HLA-DRB1 displayed the strongest association with PD. MR showed an inverse association between genetically predicted smoking initiation and PD only in the absence of V11 (odds ratio 0.74, 95% confidence interval 0.59–0.93, P_interaction = 0.028). In silico predictions of the influence of V11 and smoking-induced modifications of α-synuclein on binding affinity showed findings consistent with this interaction pattern. Conclusions: Despite being one of the most robust findings in PD research, the mechanisms underlying the inverse association between smoking and PD remain unknown. Our findings may help better understand this association. © 2022 The Authors. Movement Disorders published by Wiley Periodicals LLC on behalf of the International Parkinson and Movement Disorder Society.

GRN Mutations Are Associated with Lewy Body Dementia
Reho, Paolo; Koga, Shunsuke; Shah, Zalak et al., in Movement Disorders (2022)
Background: Loss-of-function mutations in GRN are a cause of familial frontotemporal dementia, and common variants within the gene have been associated with an increased risk of developing Alzheimer's disease and Parkinson's disease. Although TDP-43-positive inclusions are characteristic of GRN-related neurodegeneration, Lewy body copathology has also been observed in many GRN mutation carriers. Objective: The objective of this study was to assess a Lewy body dementia (LBD) case–control cohort for pathogenic variants in GRN and to test whether there is an enrichment of damaging mutations among patients with LBD. Methods: We analyzed whole-genome sequencing data generated for 2,591 European-ancestry LBD cases and 4,032 neurologically healthy control subjects to identify disease-causing mutations in GRN. Results: We identified six heterozygous exonic GRN mutations in seven study participants (cases: n = 6; control subjects: n = 1). Each variant was predicted to be pathogenic or likely pathogenic. We found significant enrichment of GRN loss-of-function mutations in patients with LBD compared with control subjects (Optimized Sequence Kernel Association Test P = 0.0162).
Immunohistochemistry in three definite LBD cases demonstrated Lewy body pathology and TDP-43-positive neuronal inclusions. Conclusions: Our findings suggest that deleterious GRN mutations are a rare cause of familial LBD. © 2022 International Parkinson and Movement Disorder Society. This article has been contributed to by U.S. Government employees and their work is in the public domain in the USA.

Rezension zu Jonas Springer, Die Bundeswehr und die Belgischen Streitkräfte in Deutschland [Review of Jonas Springer, The Bundeswehr and the Belgian Armed Forces in Germany]
Brüll, Christoph, in Wissenschaftlicher Literaturanzeiger (2022), 61(2)

University students' friendship networks: Ambivalence and the role of attachment and personality
Schwind, Lena; Albert, Isabelle, in Trends in Psychology (2022), online first
Given the importance of friendships throughout the life span and the possible experience of ambivalence within these relationships, the present study aims at examining the role that attachment and personality dimensions may play in this experience. University students (N = 87) completed an online survey, including the Big Five Inventory-10 (BFI-10), the Attachment Style Questionnaire (ASQ), as well as a two-item scale and an emotion checklist as two measures of ambivalence towards their friends. The correlation analysis revealed significant correlations between the ambivalence measures and secure attachment, fearful attachment, neuroticism, and agreeableness. A subsequent regression analysis demonstrated that fearful attachment, neuroticism, agreeableness, and gender can explain a considerable amount of variation in the degree of ambivalence. The results indicate that both certain attachment dimensions and certain personality dimensions predict the experience of ambivalence, although their importance may vary depending on the object of ambivalence.

Energy Efficiency Optimization for Backscatter Enhanced NOMA Cooperative V2X Communications under Imperfect CSI
Khan, Wali Ullah; Jamshed, Muhammad Ali; Lagunas, Eva et al., in IEEE Transactions on Intelligent Transportation Systems (2022)
Automotive-Industry 5.0 will use beyond fifth-generation (B5G) technologies to provide robust, computationally intelligent, and energy-efficient data sharing among various onboard sensors, vehicles, and other devices. Recently, ambient backscatter communications (AmBC) have gained significant interest in the research community for providing battery-free communications. AmBC can modulate useful data and reflect it towards nearby devices using the energy and frequency of existing RF signals. However, obtaining channel state information (CSI) for AmBC systems would be very challenging due to the absence of pilot sequences and the limited power. As one of the latest members of multiple access technology, non-orthogonal multiple access (NOMA) has emerged as a promising solution for connecting large-scale devices over the same spectral resources in B5G wireless networks.
Under imperfect CSI, this paper provides a new optimization framework for energy-efficient transmission in AmBC-enhanced NOMA cooperative vehicle-to-everything (V2X) networks. We minimize the total transmit power of the V2X network by simultaneously optimizing the power allocation at the BS and the reflection coefficient at the backscatter sensors while guaranteeing the individual quality of service. The total power minimization problem is formulated as a non-convex optimization coupled over multiple variables, making it complex and challenging. Therefore, we first decouple the original problem into two sub-problems and convert the nonlinear rate constraints into linear constraints. Then, we adopt the iterative sub-gradient method to obtain an efficient solution. For comparison, we also present a conventional NOMA cooperative V2X network without AmBC. Simulation results show the benefits of our proposed AmBC-enhanced NOMA cooperative V2X network in terms of total achievable energy efficiency.

A Secure Data Sharing Scheme in Community Segmented Vehicular Social Networks for 6G
Khowaja, Sunder Ali; Khuwaja, Parus; Dev, Kapal et al., in IEEE Transactions on Industrial Informatics (2022)
The use of aerial base stations, AI cloud, and satellite storage can help manage location, traffic, and specific application-based services for vehicular social networks. However, sharing of such data makes the vehicular network vulnerable to data and privacy leakage. In this regard, this article proposes an efficient and secure data sharing scheme using community segmentation and a blockchain-based framework for vehicular social networks. The proposed work considers similarity matrices that employ the dynamics of structural similarity, the modularity matrix, and data compatibility. These similarity matrices are then passed through stacked autoencoders that are trained to extract encoded embeddings. A density-based clustering approach is then employed to find the community segments from the information distances between the encoded embeddings. A blockchain network based on the Hyperledger Fabric platform is also adopted to ensure data sharing security. Extensive experiments have been carried out to evaluate the proposed data-sharing framework in terms of the sum of squared error, sharing degree, time cost, computational complexity, throughput, and CPU utilization, proving its efficacy and applicability. The results show that the CSB framework achieves a higher sharing degree, lower computational complexity, and higher throughput.

Generalising from conventional pipelines using deep learning in high-throughput screening workflows
Garcia Santa Cruz, Beatriz; Sölter, Jan; Gomez Giro, Gemma et al., in Scientific Reports (2022)
The study of complex diseases relies on large amounts of data to build models toward precision medicine. Such data acquisition is feasible in the context of high-throughput screening, in which the quality of the results relies on the accuracy of the image analysis.
Although state-of-the-art solutions for image segmentation employ deep learning approaches, the high cost of manually generating ground truth labels for model training hampers day-to-day application in experimental laboratories. Alternatively, traditional computer vision-based solutions do not need expensive labels for their implementation. Our work combines both approaches by training a deep learning network using weak training labels automatically generated with conventional computer vision methods. Our network surpasses the conventional segmentation quality by generalising beyond noisy labels, providing a 25% increase of mean intersection over union, and simultaneously reducing the development and inference times. Our solution was embedded into an easy-to-use graphical user interface that allows researchers to assess the predictions and correct potential inaccuracies with minimal human input. To demonstrate the feasibility of training a deep learning solution on a large dataset of noisy labels automatically generated by a conventional pipeline, we compared our solution against the common approach of training a model from a small dataset manually curated by several experts. Our work suggests that humans perform better in context interpretation, such as error assessment, while computers outperform in pixel-by-pixel fine segmentation. Such pipelines are illustrated with a case study on image segmentation for autophagy events. This work aims for better translation of new technologies to real-world settings in microscopy-image analysis.

La Comédie-Française face au Coronavirus [The Comédie-Française confronting the Coronavirus]
Deregnoncourt, Marine, in La Scène mondiale en période de confinement (2022)
Since Friday 13 March 2020, the entire world has been living at the slowed pace of the health crisis caused by the Coronavirus. The paragon of French culture, namely the Comédie-Française, is no exception to the rule. Undeterred, Éric Ruf, General Administrator since 2014, the troupe of the Comédie-Française, and the theatre's various trades employed by the institution opted for digital media and set up a web TV channel, initially entitled "La Comédie continue !" (from Monday 30 March to Sunday 24 May 2020), then "La Comédie continue encore !" (from Monday 25 May to Tuesday 14 July 2020), before becoming "La Comédie reprend !", with, since Monday 28 September 2020, a daily and weekly programme called "Quelle Comédie" devoted to current events at the Comédie-Française. Since Tuesday 10 November 2020, internet users have been able to enjoy "Comédie d'automne". Over the course of the lockdowns, the House of Molière managed to reinvent itself without ever wearying its audience, offering different programmes as the weeks went by and, paradoxical as it may seem, building a loyal following of thousands of internet users.

Mean-field theory for the structure of strongly interacting active liquids
Tociu, Laura; Rassolov, Gregory; Fodor, Etienne et al., in Journal of Chemical Physics (2022)
Active systems, which are driven out of equilibrium by local non-conservative forces, exhibit unique behaviors and structures with potential utility for the design of novel materials. An important and difficult challenge along the path toward this goal is to precisely predict how the structure of active systems is modified as their driving forces push them out of equilibrium. Here, we use tools from liquid-state theories to approach this challenge for a classic minimal active matter model. First, we construct a nonequilibrium mean-field framework that can predict the structure of systems of weakly interacting particles. Second, motivated by equilibrium solvation theories, we modify this theory to extend it with surprisingly high accuracy to systems of strongly interacting particles, distinguishing it from most existing similarly tractable approaches. Our results provide insight into spatial organization in strongly interacting out-of-equilibrium systems.

Parental Assortative Mating and the Intergenerational Transmission of Human Capital
Bingley, Paul; Cappellari, Lorenzo; Tatsiramos, Konstantinos, in Labour Economics (2022), 77
We study the contribution of parental educational assortative mating to the intergenerational transmission of educational attainment. We develop an empirical model for educational correlations within the family in which parental educational sorting can translate into intergenerational transmission jointly by both parents, or transmission can originate from each parent independently. Estimating the model using educational attainment from Danish population-based administrative data for over 400,000 families, we find that on average 75 percent of the intergenerational correlation in education is driven by the joint contribution of the parents. We also document a 38 percent decline of assortative mating in education for parents born between the early 1920s and the early 1950s. While the raw correlations also show decreases in father- and mother-specific intergenerational transmissions of educational attainment, our model shows that once we decompose all factors of intergenerational mobility, the share of intergenerational transmission accounted for by parent-specific factors increased from 7 to 27 percent; an increase compensated by a corresponding fall in joint intergenerational transmission from both parents, leaving total intergenerational persistence unchanged. The mechanisms of intergenerational transmission have changed, with an increased importance of one-to-one parent-child relationships.

Self-regulation of phenotypic noise synchronizes emergent organization and active transport in confluent microbial environments
Dhar, Jayabrata; Thai, Le Phuong Anh; Ghoshal, Arkajyoti et al., in Nature Physics (2022)
The variation associated with different observable characteristics—phenotypes—at the cellular scale underpins homeostasis and the fitness of living systems.
However, if and how these noisy phenotypic traits shape properties at the population level remains poorly understood. Here we report that phenotypic noise self-regulates with growth and coordinates collective structural organization, the kinetics of topological defects, and the emergence of active transport around confluent colonies. We do this by cataloguing key phenotypic traits in bacteria growing under diverse conditions. Our results reveal a statistically precise critical time for the transition from a monolayer biofilm to a multilayer biofilm, despite the strong noise in the cell geometry and the colony area at the onset of the transition. This reveals a mitigation mechanism between the noise in the cell geometry and the growth rate that dictates the narrow critical time window. By uncovering how rectification of phenotypic noise homogenizes correlated collective properties across colonies, our work points at an emergent strategy that confluent systems employ to tune active transport, buffering inherent heterogeneities associated with natural cellular environment settings.

LSTM-Based Distributed Conditional Generative Adversarial Network for Data-Driven 5G-Enabled Maritime UAV Communications
Rasheed, Iftikhar; Asif, Muhammad; Ihsan, Asim et al., in IEEE Transactions on Intelligent Transportation Systems (2022)
5G-enabled maritime unmanned aerial vehicle (UAV) communication is one of the important applications of the 5G wireless network, requiring minimum latency and higher reliability to support mission-critical applications. Lossless, reliable communication with a high data rate is therefore the key requirement in modern wireless communication systems, and all of these factors depend heavily upon channel conditions. In this work, a channel model is proposed for the air-to-surface link exploiting millimeter wave (mmWave) for 5G-enabled maritime UAV communication. Firstly, we present the formulated channel estimation method, which directly aims to adopt the channel state information (CSI) of mmWave from the channel model inculcated by the UAV operating within the Long Short-Term Memory (LSTM)-Distributed Conditional Generative Adversarial Network (DCGAN), i.e. (LSTM-DCGAN), for each beamforming direction. Secondly, to enhance the applications of the proposed trained channel model for the spatial domain, we have designed an LSTM-DCGAN-based UAV network, where each UAV learns mmWave CSI for all the distributions. Lastly, we have categorized the most favorable LSTM-DCGAN training method and derived certain conditions for our UAV network to increase the channel model learning rate. Simulation results show that the proposed LSTM-DCGAN-based network is robust to the error generated through local training. A detailed comparison has been made with the other available state-of-the-art CGAN network architectures, i.e. stand-alone CGAN (without CSI sharing), simple CGAN (with CSI sharing), multi-discriminator CGAN, federated learning CGAN, and DCGAN. Simulation results show that the proposed LSTM-DCGAN structure demonstrates higher accuracy during the learning process and attains a higher data rate for downlink transmission compared to the previous state of the art.
Linear system identifiability from single-cell data
Aalto, Atte; Lamoline, François; Goncalves, Jorge, in Systems and Control Letters (2022), 165

Some Prevalent Sets in Multifractal Analysis: How Smooth is Almost Every Function in $T_p^\alpha(x)$
Loosveldt, Laurent; Nicolay, Samuel, in Journal of Fourier Analysis and Applications (2022), 28(4)
We present prevalent results concerning generalized versions of the $T_p^\alpha$ spaces, initially introduced by Calderón and Zygmund. We notably show that the logarithmic correction appearing in the quasi-characterization of such spaces is mandatory for almost every function; it is in particular true for the Hölder spaces, for which the existence of the correction was shown to be necessary for a specific function. We also show that almost every function from $T_p^\alpha(x_0)$ has $\alpha$ as generalized Hölder exponent at $x_0$.

Analysis of Coke Particle Gasification in the Raceway of a Blast Furnace
Peters, Bernhard; Besseron, Xavier; Estupinan Donoso, Alvaro Antonio, in IFAC-PapersOnLine (2022), 55(20), 277-282

Fading-ratio-based selection for massive MIMO systems under line-of-sight propagation
Chaves, Rafael da Silva; Cetin, Ediz; Lima, Markus V. S. et al., in Wireless Networks (2022)
Massive multiple-input multiple-output (MIMO) enables increased throughput by using spatial multiplexing. However, the throughput may severely degrade when the number of users served by a single base station increases, especially under line-of-sight (LoS) propagation. Selecting users is a possible solution to deal with this problem. In the literature, user selection algorithms can be divided into two classes: small-scale fading aware (SSFA) and large-scale fading aware (LSFA) algorithms. The LSFA algorithms are good solutions for massive MIMO systems under non-LoS propagation, since small-scale fading does not affect the system performance under this type of propagation. For the LoS case, small-scale fading has a great impact on the system performance, requiring the use of SSFA algorithms. However, disregarding the large-scale fading is equivalent to assuming that all users are equidistant from the base station and experience the same level of shadowing, which is not a reasonable approximation in practical applications. To address this shortcoming, a new user selection algorithm called fading-ratio-based selection (FRBS) is proposed. FRBS considers both fading types in order to drop those users that induce the highest interference on the remaining ones. Simulation results considering LoS channels show that using FRBS yields near-optimum downlink throughput, similar to that of the state-of-the-art algorithm but with much lower computational complexity. Moreover, the use of FRBS with the zero-forcing precoder resulted in a 26.28% improvement in the maximum throughput compared with SSFA algorithms, and a 35.39% improvement compared with LSFA algorithms.
Determination of the angle of repose of hard metal granules
Just, Marvin; Medina Peschiutta, Alexander; Hippe, Frankie et al., in Powder Technology (2022), 407
The angle of repose is a quantity that delivers direct information about the flowability of granular material. It is therefore desirable to have a reliable experimental method for its determination. Based on the well-established funnel method with continuous mass flow, an extension is introduced which allows increasing the precision and reproducibility of angle of repose measurements. A modified experimental setup is presented which exploits asymmetries in the alignment of the mechanical setup to gain more precision in the determination of the angle of repose. First results for a set of metal powders are presented.

Sustainable Corporate Governance in the EU: Reasonable Global Ambitions?
Conac, Pierre-Henri, in Revue Européenne du Droit (2022), 3(4), 111

La gouvernance durable des entreprises selon l'UE : un modèle européen avec des ambitions mondiales réalistes ? [Sustainable corporate governance according to the EU: a European model with realistic global ambitions?]
Conac, Pierre-Henri, in La Revue Européenne du Droit (2022), 3(4), 129

Confirmation quasi-intégrale par la cour d'appel de Paris de la décision de la Commission des sanctions de l'Autorité des marchés financiers du 17 avril 2020 contre Elliott Advisors UK Limited et Elliott Capital Advisors L.P. [Near-complete confirmation by the Paris Court of Appeal of the decision of 17 April 2020 of the Enforcement Committee of the Autorité des marchés financiers against Elliott Advisors UK Limited and Elliott Capital Advisors L.P.]
Conac, Pierre-Henri, in Revue des Sociétés (2022), (7-8), 447

Arbeitskreis Historische Kartographie - Tagungsbericht [Working Group on Historical Cartography - conference report]
Uhrmacher, Martin, in KN – Journal of Cartography and Geographic Information (2022), 2(2022), 35-40

Cross-Border Enforcement of Consumer Law: Looking to the Future
Aade, Laura; Riefa, Christine; Coll, Elizabeth et al., in United Nations Conference on Trade and Development (2022)

Between prudence and modernity: the transposition into French law of Directive (EU) 2019/1023 on restructuring and insolvency
Mastrullo, Thomas, in European Insolvency and Restructuring Journal (2022), 4(2022)

Entre modernité et prudence : la transposition en droit français de la directive (UE) 2019/1023 du 20 juin 2019 sur la restructuration et l'insolvabilité [Between modernity and prudence: the transposition into French law of Directive (EU) 2019/1023 of 20 June 2019 on restructuring and insolvency]
Mastrullo, Thomas, in Revue des Sociétés (2022), 7-8

Politik im Museum und die Politik der Museen [Politics in the museum and the politics of museums]
Pauly, Michel, in Forum für Politik, Gesellschaft und Kultur in Luxemburg (2022), (426), 62-65

Is There a Need for a Directive on Pillar Two? A Few Normative Comments
Haslehner, Werner, in Intertax, International Tax Review (2022), 50(6/7), 527-530
Poland's request to link the entry into force of the Pillar 2 Directive to an international agreement on Pillar 1 raises fundamental questions about the European constitutional structure.
Beyond the mere legality of such a link, this contribution seeks to respond to some normative concerns related to the creation of such secondary legislation.

Commercialisation en France d'un FIA non autorisé : un manquement à l'obligation d'agir dans l'intérêt des clients [Marketing of an unauthorised AIF in France: a breach of the duty to act in the clients' interest]
Riassetto, Isabelle, in Banque et Droit (2022)

Further resource multiplication at more advanced ages? Interactions between education, parental socioeconomic status, and age in their impacts upon health
Settels, Jason, in Sociology Compass (2022)
While scholarship has shown that socioeconomic status creates fine-grained gradients in health, there is debate regarding whether having higher amounts of one socioeconomic resource amplifies (resource multiplication) or reduces (resource substitution) the health benefits of one's other socioeconomic resources. A further question is whether these processes are accentuated or diminished at more advanced ages. Using the 2016 and 2018 waves of the United States General Social Survey (N = 2995) and logistic regression analyses, this study reveals processes of resource multiplication between respondents' education and both parental education and parental occupational prestige in their effects upon self-rated health. Furthermore, these processes are accentuated at more advanced ages. Additionally, these interactive effects remain significant after controlling for respondent-level total family income and occupational prestige, suggesting mechanisms beyond actualized socioeconomic circumstances. These findings raise concerns regarding less educated older persons coming from less advantaged backgrounds. Accordingly, policies and programs should help equalize social circumstances early in the life course, to produce more salubrious trajectories with advancing age.

Conseil en investissement pour des parts de FIA non autorisés à la commercialisation en France, note sous AMF déc. 25 mai 2022 [Investment advice on units of AIFs not authorised for marketing in France, note on the AMF decision of 25 May 2022]
Riassetto, Isabelle, in Bulletin Joly Bourse (2022)

Better Executive Functions Are Associated With More Efficient Cognitive Pain Modulation in Older Adults: An fMRI Study
Rischer, Katharina Miriam; Anton, Fernand; Gonzalez-Roldan, Ana Maria et al., in Frontiers in Pain Research (2022), 14

Göttliche Protokolle, Bitcoin-Jünger und schattenhafte Herrscher: Über die religiösen Anwandlungen und ideologischen Verstrickungen der Blockchain-Technologie [Divine protocols, Bitcoin disciples, and shadowy rulers: on the religious impulses and ideological entanglements of blockchain technology]
Becker, Katrin, in Jahrbuch für Technikphilosophie (2022), 8
Starting with an overview of the functioning and applications of blockchain, the paper sheds light on the core promise of this technology, namely to overcome the need for institutionally legitimized intermediaries and to provide the subject with new possibilities for self-determined management of its own and communal life. With reference to the philosopher Pierre Legendre, the paper first analyses the worldview underlying this promise in terms of cultural theory.
Then the focus is directed towards the ideological entanglements as well as the religious elements that one encounters in the discursive environment of the technology. In view of this, it will be critically questioned to what extent one can really speak of decentralization and the abolition of middlemen. The paper thus aims to show why blockchain technology and its fields of application require critical discussion.

Normal and Pathological NRF2 Signalling in the Central Nervous System
Heurtaux, Tony; Bouvier, David S.; Benani, Alexandre et al., in Antioxidants (2022), 11(8), 1426
The nuclear factor erythroid 2-related factor 2 (NRF2) was originally described as a master regulator of the antioxidant cellular response, but in the time since, numerous important biological functions linked to cell survival, cellular detoxification, metabolism, autophagy, proteostasis, inflammation, immunity, and differentiation have been attributed to this pleiotropic transcription factor that regulates hundreds of genes. After 40 years of in-depth research and key discoveries, NRF2 is now at the center of a vast regulatory network, revealing NRF2 signalling as increasingly complex. It is widely recognized that reactive oxygen species (ROS) play a key role in human physiological and pathological processes such as ageing, obesity, diabetes, cancer, and neurodegenerative diseases. The high oxygen consumption, associated with high levels of free iron and oxidizable unsaturated lipids, makes the brain particularly vulnerable to oxidative stress. Good stability of NRF2 activity is thus crucial to maintain the redox balance and therefore brain homeostasis. In this review, we have gathered recent data about the contribution of the NRF2 pathway in the healthy brain as well as during metabolic diseases, cancer, ageing, and ageing-related neurodegenerative diseases. We also discuss promising therapeutic strategies and the need for a better understanding of cell-type-specific functions of NRF2 in these different fields.

Préserve-moi ! Des journaux intimes à ceux de confinement dans les archives du Web [Preserve me! From personal diaries to lockdown diaries in the web archives]
Schafer, Valerie, in Le Temps des Médias (2022)
Though often ephemeral, personal pages, blogs, and today's personal and intimate writing on social media platforms, up to the lockdown diaries born during the COVID crisis, are nevertheless partially preserved in the web archives. By exploring their preservation, notably at the Bibliothèque nationale de France, and the limits and challenges that these born-digital sources pose, the aim is to grasp what is at stake in preserving these personal, intimate, literary, vernacular, and multimedia contents, as well as the research possibilities they offer for the history of the digital and its cultures.
Bearing the cost of politics: Consumer prices and welfare in Russia
Hinz, Julian; Monastyrenko, Evgenii, in Journal of International Economics (2022), 137
In August 2014, the Russian Federation implemented an embargo on select food and agricultural imports from Western countries in response to previously imposed economic sanctions. In this paper we quantify the effect of this embargo on consumer prices and welfare in Russia. We provide evidence for the direct effect on monthly consumer prices with a difference-in-differences approach. The embargo caused prices of embargoed goods to rise by 7.7%–14.9% in the short run and by on average 2.6%–8.1% until at least 2016. The results further suggest the shock was transmitted to non-embargoed sectors through domestic input-output linkages. We then construct a general equilibrium Ricardian model of trade with input-output linkages and goods that are tradable, non-tradable, or embargoed. The model-based counterfactual analysis predicts that the overall price index in Russia increased by 0.33% and that welfare declined by 1.84% due to the embargo.
|
# Non-unique factorization domains with prime factorizations with differing numbers of primes
As is well-known, $\mathbb{Z}[\sqrt{-5}]$ is not a UFD because $6$ has more than one prime factorization in this ring: $6=2\cdot 3$ and $6=(1+\sqrt{-5})(1-\sqrt{-5})$. But both of these prime factorizations have the same number (two) of prime factors... Am I correct that in $\mathbb{Z}[\sqrt{-29}]$, $30=2\cdot 3\cdot 5$ and $30=(1-\sqrt{-29})(1+\sqrt{-29})$ are prime factorizations of $30$ that have different numbers of factors?
Also, would $\mathbb{Z}[\sqrt{-2309}]$ give as distinct prime factorizations $2310=2\cdot 3 \cdot 5\cdot 7\cdot 11=(1+\sqrt{-2309})(1-\sqrt{-2309})$? ($2309$ is a prime number.)
What about $\mathbb{Z}[\sqrt{-30029}]$; would that give $30030=2\cdot 3\cdot 5\cdot 7\cdot 11\cdot 13=(1+\sqrt{-30029})(1-\sqrt{-30029})$ as distinct prime factorizations? ($30029$ is prime.) Does this show that the number of primes in distinct prime factorizations can be different? I'm worried that the norms will cause some of my "primes" to be nonprimes. Thanks.
• You should look into improving the format of this question and its readability. meta.math.stackexchange.com/questions/5020/… – Vincent Jul 30 '14 at 1:56
• One of the things you have to check is that the factorizations are into irreducibles and are different by more than a unit. Take $6=3\cdot 2=(-1)(1+\sqrt{-2})(1-\sqrt{-2})\cdot\sqrt{-2}^2$ these are two factorizations, but only the one on the right is into irreducibles. – Adam Hughes Jul 30 '14 at 2:08
• It is my understanding, you will abundantly let me know if this understanding is the slightest bit wrong, that $\mathbb{Z}[\sqrt{-2}]$ is UFD and thus the irreducibles are primes after all. $\mathbb{Z}[\sqrt{-29}]$, on the other hand, has class number 6, so the irreducibles are not primes, and from Bill's answer it follows that there can be multiple factorizations of differing lengths. I would still warn you to not be led astray by incomplete factorizations and/or multiplication by units, especially in real rings. – Robert Soupe Jul 30 '14 at 4:01
It is a classical result of Carlitz (1960) that a number ring has an element with different length factorizations into irreducibles iff it has class number $> 2.\,$ For a proof, and a survey of related results see Half-factorial domains, a survey by Scott T. Chapman and Jim Coykendall.
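To address the worry about norms concretely, one can check irreducibility computationally using the norm $N(a+b\sqrt{-29}) = a^2+29b^2$: since the norm is multiplicative and the only units are $\pm 1$, an element is irreducible whenever its norm admits no factorization into two attained norms both greater than $1$. The sketch below (the helper name is mine, not standard) confirms this for each factor in the question.

```python
def attained_norms(bound, m=29):
    """All values a^2 + m*b^2 <= bound, i.e. norms attained in Z[sqrt(-m)]."""
    vals = set()
    a = 0
    while a * a <= bound:
        b = 0
        while a * a + m * b * b <= bound:
            vals.add(a * a + m * b * b)
            b += 1
        a += 1
    return vals

norms = attained_norms(1000)
# N(2) = 4, N(3) = 9, N(5) = 25, N(1 +/- sqrt(-29)) = 1 + 29 = 30
for nval, label in [(4, "2"), (9, "3"), (25, "5"), (30, "1±√(-29)")]:
    # nontrivial divisors of the norm that are attained as norms
    splits = [k for k in range(2, nval) if nval % k == 0
              and k in norms and nval // k in norms]
    print(label, "->", splits if splits else "irreducible (no norm splitting)")
```

Every splitting fails, so $30 = 2\cdot 3\cdot 5 = (1+\sqrt{-29})(1-\sqrt{-29})$ really are factorizations into irreducibles of lengths $3$ and $2$, consistent with Carlitz's criterion and with the class number $6$ of $\mathbb{Z}[\sqrt{-29}]$ mentioned in the comments.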
|
# Remarks on Lecture 3
Today we saw the connection between mixing times of random walks and the second eigenvalue of the normalized Laplacian. Recall that $G=(V,E)$ is an $n$-vertex, weighted graph with edge weights $w(x,y)$ and weighted degrees $d(x)=\sum_y w(x,y)$.
First, we introduced the lazy walk matrix,
$$W = \frac12\left(I + AD^{-1}\right),$$
which is related to the normalized Laplacian $\mathcal L = I - D^{-1/2} A D^{-1/2}$ via,
$$W = D^{1/2}\left(I - \tfrac12 \mathcal L\right) D^{-1/2}.$$
In particular, if we arrange the eigenvalues of $\mathcal L$ as $0 = \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$, then the eigenvalues of $W$ are
$$\eta_i = 1 - \frac{\lambda_i}{2},$$
where $\lambda_i$ is the $i$th smallest eigenvalue of $\mathcal L$. Furthermore, the eigenfunctions of $W$ are given by $D^{1/2}\varphi$, where $\varphi$ is an eigenfunction of $\mathcal L$.
The eigenfunctions of $W$ are orthogonal with respect to the inner product
$$\langle f,g\rangle = \sum_{x\in V} \frac{f(x)\,g(x)}{d(x)}.$$
We will refer to this Hilbert space as $\ell^2(1/d)$. Recall that $\|f\| = \sqrt{\langle f,f\rangle}$.
If we define the stationary distribution by
$$\pi(x) = \frac{d(x)}{\sum_{y\in V} d(y)},$$
then $W\pi = \pi$.
Let $f_1,\ldots,f_n$ be an $\ell^2(1/d)$-orthonormal system of eigenvectors for $W$, with $W f_i = \eta_i f_i$ and $f_1 = \pi/\|\pi\|$. Now fix an initial distribution $p$ and write,
$$p = \sum_{i=1}^n \alpha_i f_i, \qquad \alpha_i = \langle p, f_i\rangle.$$
Then we have,
$$W^t p = \sum_{i=1}^n \alpha_i\,\eta_i^t\,f_i.$$
In particular,
$$W^t p - \pi = \sum_{i=2}^n \alpha_i\,\eta_i^t\,f_i.$$
Let $\pi_{\min} = \min_{x\in V} \pi(x)$ be the minimum stationary measure. A particularly strong notion of mixing time is given by the following definition.
$$\tau_\infty(\varepsilon) = \sup_p\,\min\left\{ t : |W^t p(x) - \pi(x)| \leq \varepsilon\,\pi(x)\ \text{for all } x \in V \right\},$$
where the supremum is taken over all initial distributions $p$. By time $\tau_\infty(\varepsilon)$, the distribution is pointwise very close to the stationary distribution $\pi$.
Let $p$ be an initial distribution and $x \in V$. Using our preceding calculation, together with $\sum_i f_i(x)^2 = d(x)$ and $\sum_i \alpha_i^2 = \|p\|^2 \leq 1/d_{\min}$, we have
$$|W^t p(x) - \pi(x)| \;\leq\; \eta_2^t \sqrt{\sum_{i\geq 2} \alpha_i^2}\,\sqrt{\sum_{i\geq 2} f_i(x)^2} \;\leq\; \eta_2^t\,\sqrt{\frac{d(x)}{d_{\min}}} \;\leq\; \frac{\eta_2^t}{\pi_{\min}}.$$
Observe that $\eta_2^t = (1-\lambda_2/2)^t \leq e^{-t\lambda_2/2}$, so if we now choose $t \geq \frac{2}{\lambda_2}\log\frac{1}{\varepsilon\,\pi_{\min}^2}$ appropriately, then
$$|W^t p(x) - \pi(x)| \leq \varepsilon\,\pi_{\min} \leq \varepsilon\,\pi(x),$$
yielding our main result:
$$\tau_\infty(\varepsilon) \leq O\!\left(\frac{1}{\lambda_2}\log\frac{1}{\varepsilon\,\pi_{\min}}\right).$$
One can also prove that $\tau_\infty(\varepsilon) \gtrsim 1/\lambda_2$ (see the exercises). In general, we will be concerned with weaker notions of mixing. We'll discuss those when they arise.
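To make the bound concrete, here is a small numerical sanity check written under the conventions above (lazy walk $W = \frac12(I+AD^{-1})$ acting on distributions); the cycle graph and the variable names are illustrative choices, not part of the lecture.

```python
import numpy as np

# Unit-weight cycle on n vertices (an illustrative graph).
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

d = A.sum(axis=0)                      # weighted degrees d(x)
W = 0.5 * (np.eye(n) + A / d)          # lazy walk: column y divided by d(y)
pi = d / d.sum()                       # stationary distribution

# Normalized Laplacian and its second-smallest eigenvalue lambda_2.
Dih = np.diag(d ** -0.5)
L = np.eye(n) - Dih @ A @ Dih
lam2 = np.sort(np.linalg.eigvalsh(L))[1]

# Empirical tau_infty(eps) starting from a point mass at vertex 0.
eps = 0.01
p = np.zeros(n); p[0] = 1.0
t = 0
while np.max(np.abs(p - pi) / pi) > eps:
    p = W @ p
    t += 1

bound = (2 / lam2) * np.log(1 / (eps * pi.min() ** 2))
print(f"lambda_2 = {lam2:.4f}, empirical tau = {t}, bound ~ {bound:.1f}")
```

On the cycle, the empirical mixing time should come out below the bound, which is only tight up to constants.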
### Exercises
Prove that $\tau_\infty(\varepsilon) \geq \Omega(1/\lambda_2)$ for any graph $G$.
|
# The Function $f:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}$ defined by $f(r,\theta)=(r\cos\theta,r\sin\theta)$
Let $f:\mathbb{R}^2\rightarrow \mathbb{R}^2$ be the function defined by $f(r,\theta)=(r\cos\theta,r\sin\theta).$ Then for which of the open subsets $U$ of $\mathbb{R}^2$ given below does $f$ restricted to $U$ admit an inverse?
1. $U=\mathbb{R}^2$
2. $U=\{(x,y) \in\mathbb{R}^2 : x>0,\ y>0\}$
3. $U=\{(x,y) \in\mathbb{R}^2 : x^2+y^2<1\}$
4. $U=\{(x,y) \in\mathbb{R}^2 : x<-1,\ y<-1\}$
It is clear that 1 is not true, as $\sin$ and $\cos$ are periodic functions. Similarly, 3 is not true. What about the 2nd and 4th? Please help. Thanks in advance.
To admit an inverse, $f$ restricted to $U$ must be bijective, in particular one-to-one. 1 and 2 are not true since $f(1,2) = f(1,2+2\pi)$. 4 is not true since $f(-2,-2\pi) = f(-2,-4\pi)$. 3 is not true since $f(0,1/2) = f(0,1/4)$.
• f(1, 2) = f(1, 1+2$\pi$), how come? – user268307 Dec 23 '15 at 4:42
• there should be $f(1,2+2\pi)$ – neelkanth Dec 23 '15 at 4:43
In options 2 and 4, the determinant of the Jacobian equals $r$, which is nonzero everywhere. Thus the inverse function theorem guarantees that, for every point $p$ in $U$, there exists a neighborhood of $p$ on which the function is invertible. This does not mean the function is invertible over the whole domain $U$: in this case $f$ is not even injective, since it is periodic: $f(x,y)=f(x,y+2\pi)$. (And injectivity is necessary for an inverse.)
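For intuition, the counterexamples above are easy to confirm numerically (purely illustrative; floating-point output agrees up to rounding):

```python
from math import cos, sin, pi

f = lambda r, t: (r * cos(t), r * sin(t))

print(f(1, 2), f(1, 2 + 2 * pi))       # options 1 and 2: same image
print(f(-2, -2 * pi), f(-2, -4 * pi))  # option 4: same image
print(f(0, 1 / 2), f(0, 1 / 4))        # option 3: the whole line r = 0 maps to the origin
```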
|
Report
### A Solenoidal Finite Element Approach for Prediction of Radar Cross Sections
Abstract:
This report considers the solution of problems that involve the scattering of plane electromagnetic waves by perfectly conducting obstacles. Such problems are governed by the Maxwell equations. An interesting facet of the solution of Faraday's law and Ampere's law, which on their own form a complete equation set for the determination of the field intensity components, is that there are the additional conservation statements of Coulomb's law and Gauss's law, which appear to be in excess of req...
Files:
• (pdf, 2.3mb)
### Authors
Austin N. F. Mack
Publication date:
2004-04-05
URN:
uuid:dc889431-ec82-419d-87fa-f798c03330ae
Local pid:
oai:eprints.maths.ox.ac.uk:1184
|
# Homework Help: Perfectly inelastic collision - energies
1. Apr 2, 2012
### kapitan90
1. The problem statement, all variables and given/known data
An atom of mass M is initially at rest in its ground state. A moving (nonrelativistic) electron of mass $$m_e$$ collides with the atom. The atom+electron system can exist in an 'excited state' in which the electron is absorbed into the atom. The excited state has an extra 'internal' energy E relative to the atom's ground state.
2. Relevant equations
Show that the minimum kinetic energy $$K_{initial}$$ that the electron must have in order to excite the atom is given by:
$$K_{initial} = \frac{(M+m_e)E}{M}$$ and derive a formula for the associated minimum speed $$v_{0min}$$.
From conservation of momentum $$v_{final} = \frac{m_e v_0}{m_e +M}$$ and so $$KE_{final}=\frac{1}{2} \frac{(m_e v_0)^2}{m_e +M}$$
which can be written $$KE_{final}=\frac{K_{initial}}{M+m_e}$$
4. The attempt at a solution
$$\text{minimum } KE_{initial} = KE_{final}+ E = \frac{KE_{initial}}{M+m_e} + E$$
so $$KE_{initial}(1-1/(M+m_e))= E$$
$$KE_{initial} = \frac{E}{1-1/(M+m_e)}$$
which is different from the correct answer.
What am I doing wrong?
2. Apr 2, 2012
### Staff: Mentor
Check that last equation.
3. Apr 2, 2012
### kapitan90
I got the mistake, the answer should be:
$$KE_{final}=\frac{1}{2}(m_e+M)\left(\frac{m_e v_0}{m_e +M}\right)^2$$
which can be written $$KE_{final}=\frac{K_{initial}m_e}{M+m_e}$$
Hence
$$\text{minimum } KE_{initial} = KE_{final}+ E = \frac{K_{initial}m_e}{M+m_e} + E$$
so $$KE_{initial}(1-{m_e}/(M+m_e))= E$$
$$KE_{initial} = \frac{E(M+m_e)}{M}$$
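The problem also asks for the minimum speed, which the thread leaves implicit. A short symbolic check of both results (a sketch; the variable names are mine, not from the original problem sheet):

```python
import sympy as sp

m, M, E, v0 = sp.symbols('m_e M E v_0', positive=True)

K_init = sp.Rational(1, 2) * m * v0**2
v_final = m * v0 / (m + M)                    # momentum conservation
K_final = sp.Rational(1, 2) * (m + M) * v_final**2

# Threshold: kinetic energy not kept by the joint motion goes into E.
roots = sp.solve(sp.Eq(K_init - K_final, E), v0)
v0_min = [s for s in roots if s.is_positive][0]

print(sp.simplify(v0_min))                    # sqrt(2*E*(M + m_e)/(M*m_e))
print(sp.simplify(K_init.subs(v0, v0_min)))   # E*(M + m_e)/M
```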
|
Sep 23
## Details on the CERN faster-than-light neutrino anomaly
Posted by: JackSarfatti
thanks for the technical paper
On Sep 22, 2011, at 9:26 PM, Amara D. Angelica wrote:
Measurement of the neutrino velocity with the OPERA detector in the CNGS beam
OPERA
(Submitted on 22 Sep 2011)
The OPERA neutrino experiment at the underground Gran Sasso Laboratory has measured the velocity of neutrinos from the CERN CNGS beam over a baseline of about 730 km with much higher accuracy than previous studies conducted with accelerator neutrinos. The measurement is based on high-statistics data taken by OPERA in the years 2009, 2010 and 2011. Dedicated upgrades of the CNGS timing system and of the OPERA detector, as well as a high precision geodesy campaign for the measurement of the neutrino baseline, allowed reaching comparable systematic and statistical accuracies. An early arrival time of CNGS muon neutrinos with respect to the one computed assuming the speed of light in vacuum of $(60.7 \pm 6.9\ \text{(stat.)} \pm 7.4\ \text{(sys.)})$ ns was measured. This anomaly corresponds to a relative difference of the muon neutrino velocity with respect to the speed of light $(v-c)/c = (2.48 \pm 0.28\ \text{(stat.)} \pm 0.30\ \text{(sys.)}) \times 10^{-5}$.
http://arxiv.org/abs/1109.4897
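As a sanity check on the quoted numbers, the relative velocity difference follows directly from the baseline and the early-arrival time (rounded values, so this only approximately reproduces the paper's $2.48 \times 10^{-5}$):

```python
c = 299_792_458.0      # speed of light, m/s
L = 730e3              # m, approximate CNGS baseline from the abstract
dt = 60.7e-9           # s, measured early arrival

t_light = L / c                 # light travel time, ~2.435 ms
print(dt / (t_light - dt))      # (v - c)/c ≈ 2.5e-5
```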
No. This is a very complicated experimental paper. I can't judge it. It needs a team of specialists in the field to go over it carefully, like the FAA analysing a plane crash.
Carlos Castro has a theory.
On Sep 22, 2011, at 7:05 PM, Carlos Castro wrote:
Dear Colleagues:
We explore the many novel physical consequences of Born's reciprocal Relativity theory in flat phase-space, and generalize the theory to the curved phase-space scenario. We provide six specific novel physical results resulting from Born's reciprocal Relativity which are not present in Special Relativity. These are: momentum-dependent time delay in the emission and detection of photons; an energy-dependent notion of locality; superluminal behavior; relative rotation of photon trajectories due to the aberration of light; invariance of areas-cells in phase-space; and modified dispersion relations. We finalize by constructing a Born reciprocal general relativity theory in curved phase-spaces, which requires the introduction of a complex Hermitian metric, torsion and nonmetricity.
Superluminal behavior was one of the consequences found in the article
"The Many Novel Physical Consequences of Born's Reciprocal Relativity in Phase-Spaces" Int. J. Mod. Phys. A vol. 26, no. 21 (2011) 3653-3678;
http://www.vixra.org/abs/1104.0064
Also in the article with Matej Pavsic
Progress in Phys. vol 1, (April 2005) 31-64.
Best wishes
Carlos
Carlos needs to try to calculate the numbers in his theory that would describe the actual data. That would test his theory. This is not an easy task.
|
# Math Help - Proofs
1. ## Proofs
1.
Prove: For all sets A and B, if A is a subset of B, then the complement of B is a subset of the complement of A.
2.
Prove: For all sets A, B, and C, $A \setminus (B \cup C) = (A\setminus B) \cap (A\setminus C)$
2. #1 is really only about logic. Recall that $\left( {P \to Q} \right) \equiv \left( {\neg Q \to \neg P} \right)$.
Thus “If x is in A then x is in B.” is equivalent to “If x is not in B then x is not in A.”
#2
$$\begin{aligned}
A\setminus \left( {B \cup C} \right) &= A \cap \left( {B \cup C} \right)^c \\
&= A \cap \left( {B^c \cap C^c } \right) \\
&= \left( {A \cap B^c } \right) \cap \left( {A \cap C^c } \right) \\
&= \left( {A\setminus B} \right) \cap \left( {A \setminus C} \right)
\end{aligned}$$
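For intuition (not a proof), the identity in #2 can be spot-checked on random finite sets; Python's set operations mirror the algebra above, with `-` for set difference, `|` for union, and `&` for intersection:

```python
import random

for _ in range(1000):
    A, B, C = (set(random.sample(range(20), random.randint(0, 20)))
               for _ in range(3))
    assert A - (B | C) == (A - B) & (A - C)
print("A \\ (B ∪ C) = (A \\ B) ∩ (A \\ C) held on 1000 random triples")
```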
|
# Localization measures in the time-scale setting
Journal of Pseudo-Differential Operators and Applications, Volume 8 (3) – Mar 1, 2017
22 pages
Publisher
Springer International Publishing
Subject
Mathematics; Analysis; Operator Theory; Partial Differential Equations; Functional Analysis; Applications of Mathematics; Algebra
ISSN
1662-9981
eISSN
1662-999X
D.O.I.
10.1007/s11868-017-0195-y
### Abstract
Wavelet (or continuous wavelet) transform is superior to the Fourier transform and the windowed (or short-time Fourier) transform because of its ability to measure the time–frequency variations in a signal at different time–frequency resolutions. However, the uncertainty principles in Fourier analysis set a limit to the maximal time–frequency resolution. We present some forms of uncertainty principles for functions that are $\varepsilon$-concentrated in a given region within the time–frequency plane, involving particularly localization operators. Moreover, we show how the eigenfunctions of such localization operators are maximally time–frequency-concentrated in the region of interest, and we will use this to approximate such $\varepsilon$-concentrated functions.
### Journal
Journal of Pseudo-Differential Operators and Applications, Springer Journals
Published: Mar 1, 2017
# Integrate $$5x^{4} + e^{-x}$$ with respect to $$x$$
### Question
Integrate $$5x^{4} + e^{-x}$$ with respect to $$x$$
### Options
A) $$-e^{-x}+x^5+k$$
B) $$e^{-x}+x^5+K$$
C) $$-e^{-x}-x^{-5}+K$$
D) $$-e^{-x}+x^4+K$$
### Explanation:
$$\int (5x^4+e^{-x})\mathit{dx}$$
$$= \frac{5x^{4+1}}{4+1}+\left(-e^{-x}\right)+c$$
$$= \frac{5x^5}{5}-e^{-x}+c$$
$$= x^5-e^{-x}+c$$
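As a quick symbolic check of the antiderivative (a minimal sketch, assuming Python with sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
antiderivative = sp.integrate(5*x**4 + sp.exp(-x), x)
print(antiderivative)  # x**5 - exp(-x); adding the arbitrary constant K matches option A
```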
|
# Flavor Oscillations With Sterile Neutrinos and In Dense Neutrino Environments
Open Access
Author:
Hollander, David Francis
Physics
Degree:
Doctor of Philosophy
Document Type:
Dissertation
Date of Defense:
February 09, 2015
Committee Members:
Many experiments have provided evidence for neutrino flavor oscillations, and consequently that neutrinos are in fact massive, which is not predicted by the Standard Model. Many experiments have been built to constrain the parameters which determine flavor oscillations, and for only three flavors of neutrinos the mixing parameters are well known, aside from the CP violating phase for two mass hierarchies. Most experimental data can be well explained by mixing between three flavors of neutrinos; however, oscillation anomalies from several experiments, most notably from LSND (Liquid Scintillator Neutrino Detector), have suggested that there may be additional flavors of neutrinos beyond those in the Standard Model. One of the focuses of this dissertation is the possibility of adding new flavors of right-handed neutrinos to the Standard Model to account for oscillation anomalies, and exploring the consequences of sterile neutrinos for other experiments. Sensitivities to a particular model of sterile neutrinos at the future Long-Baseline Neutrino Experiment will be determined, in which CP effects introduced by the sterile neutrinos play an important role. It will be demonstrated how, by combining data from the Long-Baseline Neutrino Experiment along with data from Daya Bay and T2K, it is possible to provide evidence for or rule out this model of sterile neutrinos. A chi-squared analysis is used to determine the significance of measuring the effects of sterile neutrinos in IceCube; it will be shown that it may be possible to extract evidence for sterile neutrinos from high energy atmospheric neutrinos in IceCube. Furthermore it will be demonstrated how measuring neutrino flavor ratios from astrophysical sources in IceCube can help to distinguish between the three flavor scenario and a beyond the Standard Model (BSM) scenario involving sterile neutrinos. Measuring astrophysical as well as atmospheric neutrinos can evince the existence of sterile neutrinos. Despite the fact that the mixing parameters for the three Standard Model neutrino flavors are well known, some implications of neutrino interactions for flavor oscillations are not well understood. Neutrinos can interact with one another in a similar way to how neutrinos interact with normal matter. Neutrino-neutrino forward scattering can lead to a flavor swap for the propagating neutrino, or the propagating neutrino can retain its original flavor. These interactions contribute an effective potential to the Hamiltonian describing the flavor evolution which depends on a background neutrino density. In normal matter the neutrino density is very low, which allows for neutrino-neutrino interactions to be ignored; however, these interactions can dominate over vacuum and normal matter interactions in very dense environments such as core-collapse supernovae and early universe scenarios. Neutrino-neutrino interactions give rise to terms quadratic in neutrino densities in the equations of motion, and can give rise to what is called collective oscillations resulting from interference with vacuum and normal matter effects. The non-linearity has made the problem of solving for collective oscillations analytically intractable without simplifying assumptions, and has made this a problem relegated to supercomputer simulations. This dissertation is concerned with analytic methods for solving the equations of motion for core-collapse neutrino propagation.
It will be shown here that, by keeping only $\nu\nu$-interactions at initial distances outward from the supernova core, it is possible to solve the equations of motion by factorizing vacuum oscillations and the effects of $\nu\nu$-interactions. Furthermore, it will be shown how using this factorization scheme it is possible to predict where flavor oscillations become unstable. This is an important development because it can allow one to predict the neutrino flux in Earth experiments from core-collapse supernovae, while at the same time gaining an understanding of the underlying physics involved in complicated processes such as collective oscillations and the rapid growth of oscillations at medium range distances. Using the factorization ansatz together with a measured supernova spectrum it is possible in principle to determine the thermal spectra inside of the supernova.
|
# How to calculate x,y position of 3D points?
I have points in 3D system like this
$$p1=(2,3,4)$$ $$p2=(3,5,5)$$
Here I would like to draw points $p1$ and $p2$ in a $2D$ view.
Projection type = orthographic. Coordinate system = Cartesian.
x-axis: min = 2, max = 9; y-axis: min = 2, max = 12; z-axis: min = 1, max = 10.
Basically I would like to draw the $3D$ points in a $2D$ view (using the Cartesian coordinate system).
1) How can I convert the $3D$ points $(p1, p2)$ to $2D$ points? What is the formula for this?
I can't upload images yet, as I need at least 10 reputation as per the forum rules.
Any ideas?
Thanks
If you need 10 reputation, I will give one vote. – user29999 Sep 11 '12 at 14:56
An answer would be of greater help than a vote! – flex Sep 11 '12 at 15:01
You need to specify which plane you want the points projected onto. – copper.hat Sep 11 '12 at 15:17
To do an orthographic projection you need to specify a plane onto which the original points are projected. The plane is usually described by a 3D vector. A projection onto the x-y plane would use $(0,0,1)$, another 'perspective-like' projection would be $(1,1,1)$. – copper.hat Sep 11 '12 at 16:26
When it is rotatable, it is difficult to identify which plane is in the eye view (visible). I am speaking from a programming perspective. – flex Sep 11 '12 at 16:59
You also need to specify the orientation of the plane you are projecting onto. The easiest examples are planes perpendicular to one of the axes. So if you project onto a plane perpendicular to $z$, your get $p1=(2,3), p2=(3,5)$
What if rectangle is rotatable? – flex Sep 11 '12 at 15:01
Then you must rotate the plane using standard rotation matrices, and project onto the resulting plane. – Arkamis Sep 11 '12 at 15:04
If I understand correctly, orthographic is parallel projection.
Pick a normalized direction $h \in \mathbb{R}^3$ (ie, $\|h\| = 1$). Then you will be projecting onto the plane $\{x \in \mathbb{R}^3 | h^T x = 0\}$. An example would be $h = (0,0,1)$ which would be a plan view.
Then the projection $P: \mathbb{R}^3 \to \mathbb{R}^3$ is given by $P(x) = (I - h h^T) x = x - \langle h, x \rangle h$.
For example, if $h = (0,0,1)$, then $P((x,y,z)) = (x,y,0)$. (This is essentially the same as Ross' example.)
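Here is a small numeric sketch of this projection formula (assuming Python with numpy; the points come from the question, and $h = (0,0,1)$ is the x-y-plane example):

```python
import numpy as np

def project(x, h):
    """Orthographic projection of x onto the plane through the origin with normal h."""
    h = h / np.linalg.norm(h)   # normalize h so P = I - h h^T is a projection
    return x - np.dot(h, x) * h

p1 = np.array([2.0, 3.0, 4.0])
p2 = np.array([3.0, 5.0, 5.0])
h = np.array([0.0, 0.0, 1.0])   # project onto the x-y plane

print(project(p1, h))  # [2. 3. 0.] -> 2D point (2, 3)
print(project(p2, h))  # [3. 5. 0.] -> 2D point (3, 5)
```

Dropping the (zero) coordinate along $h$ gives the 2D point.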
Help me to solve the equation by using above points. – flex Sep 11 '12 at 15:10
@flex: You haven't given enough information to uniquely solve the problem. – copper.hat Sep 11 '12 at 15:23
Could you please tell me, what other information is missing? – flex Sep 11 '12 at 16:20
@flex: the missing information is the plane you want to project onto. It could be the $xy$ plane, as in my example. It could be the $yz$ plane. It could be the plane perpendicular to $(1,1,1)$ or any other. Think of looking at your box from various angles. We need to specify the viewpoint. – Ross Millikan Sep 12 '12 at 13:09
OK, let's assume the XY plane. – flex Sep 12 '12 at 13:24
|
## Calculus 10th Edition
$\lim\limits_{x\to4}[[x-1]]$ does not exist.
$\lim\limits_{x\to4^-}[[x-1]]=[[4^--1]]=[[3^-]]=2.$ $\lim\limits_{x\to4^+}[[x-1]]=[[4^+-1]]=[[3^+]]=3.$ Since $\lim\limits_{x\to4^-}[[x-1]]\ne\lim\limits_{x\to4^+}[[x-1]]$, the two-sided limit $\lim\limits_{x\to4}[[x-1]]$ does not exist.
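A quick numeric check of the two one-sided limits (a Python sketch; `eps` stands in for an arbitrarily small positive number):

```python
import math

eps = 1e-9
print(math.floor((4 - eps) - 1))  # 2: the left-hand limit of [[x - 1]] at x = 4
print(math.floor((4 + eps) - 1))  # 3: the right-hand limit
```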
|
Cross-posted from Cold Takes
I've been working on a new series of posts about the most important century.
• The original series focused on why and how this could be the most important century for humanity. But it had relatively little to say about what we can do today to improve the odds of things going well.
• The new series will get much more specific about the kinds of events that might lie ahead of us, and what actions today look most likely to be helpful.
• A key focus of the new series will be the threat of misaligned AI: AI systems disempowering humans entirely, leading to a future that has little to do with anything humans value. (Like in the Terminator movies, minus the time travel and the part where humans win.)
Many people have trouble taking this "misaligned AI" possibility seriously. They might see the broad point that AI could be dangerous, but they instinctively imagine that the danger comes from ways humans might misuse it. They find the idea of AI itself going to war with humans to be comical and wild. I'm going to try to make this idea feel more serious and real.
As a first step, this post will emphasize an unoriginal but extremely important point: the kind of AI I've discussed could defeat all of humanity combined, if (for whatever reason) it were pointed toward that goal. By "defeat," I don't mean "subtly manipulate us" or "make us less informed" or something like that - I mean a literal "defeat" in the sense that we could all be killed, enslaved or forcibly contained.
I'm not talking (yet) about whether, or why, AIs might attack human civilization. That's for future posts. For now, I just want to linger on the point that if such an attack happened, it could succeed against the combined forces of the entire world.
• I think that if you believe this, you should already be worried about misaligned AI,1 before any analysis of how or why an AI might form its own goals.
• We generally don't have a lot of things that could end human civilization if they "tried" sitting around. If we're going to create one, I think we should be asking not "Why would this be dangerous?" but "Why wouldn't it be?"
By contrast, if you don't believe that AI could defeat all of humanity combined, I expect that we're going to be miscommunicating in pretty much any conversation about AI. The kind of AI I worry about is the kind powerful enough that total civilizational defeat is a real possibility. The reason I currently spend so much time planning around speculative future technologies (instead of working on evidence-backed, cost-effective ways of helping low-income people today - which I did for much of my career, and still think is one of the best things to work on) is because I think the stakes are just that high.
Below:
• I'll sketch the basic argument for why I think AI could defeat all of human civilization.
• Others have written about the possibility that "superintelligent" AI could manipulate humans and create overpowering advanced technologies; I'll briefly recap that case.
• I'll then cover a different possibility, which is that even "merely human-level" AI could still defeat us all - by quickly coming to rival human civilization in terms of total population and resources.
• At a high level, I think we should be worried if a huge (competitive with world population) and rapidly growing set of highly skilled humans on another planet was trying to take down civilization just by using the Internet. So we should be worried about a large set of disembodied AIs as well.
• I'll briefly address a few objections/common questions:
• How can AIs be dangerous without bodies?
• If lots of different companies and governments have access to AI, won't this create a "balance of power" so that no one actor is able to bring down civilization?
• Won't we see warning signs of AI takeover and be able to nip it in the bud?
• Isn't it fine or maybe good if AIs defeat us? They have rights too.
• I'll close with some thoughts on just how unprecedented it would be to have something on our planet capable of overpowering us all.
## How AI systems could defeat all of us
There's been a lot of debate over whether AI systems might form their own "motivations" that lead them to seek the disempowerment of humanity. I'll be talking about this in future pieces, but for now I want to put it aside and imagine how things would go if this happened.
So, for what follows, let's proceed from the premise: "For some weird reason, humans consistently design AI systems (with human-like research and planning abilities) that coordinate with each other to try and overthrow humanity." Then what? What follows will necessarily feel wacky to people who find this hard to imagine, but I think it's worth playing along, because I think "we'd be in trouble if this happened" is a very important point.
### The "standard" argument: superintelligence and advanced technology
Other treatments of this question have focused on AI systems' potential to become vastly more intelligent than humans, to the point where they have what Nick Bostrom calls "cognitive superpowers."2 Bostrom imagines an AI system that can do things like:
• Do its own research on how to build a better AI system, which culminates in something that has incredible other abilities.
• Hack into human-built software across the world.
• Manipulate human psychology.
• Quickly generate vast wealth under the control of itself or any human allies.
• Come up with better plans than humans could imagine, and ensure that it doesn't try any takeover attempt that humans might be able to detect and stop.
• Develop advanced weaponry that can be built quickly and cheaply, yet is powerful enough to overpower human militaries.
(Wait But Why reasons similarly.3)
I think many readers will already be convinced by arguments like these, and if so you might skip down to the next major section.
But I want to be clear that I don't think the danger relies on the idea of "cognitive superpowers" or "superintelligence" - both of which refer to capabilities vastly beyond those of humans. I think we still have a problem even if we assume that AIs will basically have similar capabilities to humans, and not be fundamentally or drastically more intelligent or capable. I'll cover that next.
### How AIs could defeat humans without "superintelligence"
If we assume that AIs will basically have similar capabilities to humans, I think we still need to worry that they could come to out-number and out-resource humans, and could thus have the advantage if they coordinated against us.
Here's a simplified example (some of the simplifications are in this footnote4) based on Ajeya Cotra's "biological anchors" report:
• I assume that transformative AI is developed on the soonish side (around 2036 - assuming later would only make the below numbers larger), and that it initially comes in the form of a single AI system that is able to do more-or-less the same intellectual tasks as a human. That is, it doesn't have a human body, but it can do anything a human working remotely from a computer could do.
• I'm using the report's framework in which it's much more expensive to train (develop) this system than to run it (for example, think about how much Microsoft spent to develop Windows, vs. how much it costs for me to run it on my computer).
• The report provides a way of estimating both how much it would cost to train this AI system, and how much it would cost to run it. Using these estimates (details in footnote)5 implies that once the first human-level AI system is created, whoever created it could use the same computing power it took to create it in order to run several hundred million copies for about a year each.6 (A back-of-the-envelope version of this arithmetic appears in the sketch after this list.)
• This would be over 1000x the total number of Intel or Google employees,7 over 100x the total number of active and reserve personnel in the US armed forces, and something like 5-10% the size of the world's total working-age population.8
• And that's just a starting point.
• This is just using the same amount of resources that went into training the AI in the first place. Since these AI systems can do human-level economic work, they can probably be used to make more money and buy or rent more hardware,9 which could quickly lead to a "population" of billions or more.
• In addition to making more money that can be used to run more AIs, the AIs can conduct massive amounts of research on how to use computing power more efficiently, which could mean still greater numbers of AIs run using the same hardware. This in turn could lead to a feedback loop and explosive growth in the number of AIs.
• Each of these AIs might have skills comparable to those of unusually highly paid humans, including scientists, software engineers and quantitative traders. It's hard to say how quickly a set of AIs like this could develop new technologies or make money trading markets, but it seems quite possible for them to amass huge amounts of resources quickly. A huge population of AIs, each able to earn a lot compared to the average human, could end up with a "virtual economy" at least as big as the human one.
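Here is that back-of-the-envelope arithmetic (a minimal Python sketch; the FLOP figures are the post's illustrative assumptions from the footnotes, not measurements):

```python
# Check the "several hundred million copies for a year" claim, using the
# footnoted assumptions: ~10^30 FLOP to train, ~10^14 FLOP/s to run one copy.
train_flop = 1e30
run_flop_per_second = 1e14
seconds_per_year = 3.15e7

model_seconds = train_flop / run_flop_per_second  # 1e16 seconds of model runtime
model_years = model_seconds / seconds_per_year
print(f"{model_years:.1e} model-years")           # ~3.2e8: hundreds of millions of copies
```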
To me, this is most of what we need to know: if there's something with human-like skills, seeking to disempower humanity, with a population in the same ballpark as (or larger than) that of all humans, we've got a civilization-level problem.
A potential counterpoint is that these AIs would merely be "virtual": if they started causing trouble, humans could ultimately unplug/deactivate the servers they're running on. I do think this fact would make life harder for AIs seeking to disempower humans, but I don't think it ultimately should be cause for much comfort. I think a large population of AIs would likely be able to find some way to achieve security from human shutdown, and go from there to amassing enough resources to overpower human civilization (especially if AIs across the world, including most of the ones humans were trying to use for help, were coordinating).
I spell out what this might look like in an appendix. In brief:
• By default, I expect the economic gains from using AI to mean that humans create huge numbers of AIs, integrated all throughout the economy, potentially including direct interaction with (and even control of) large numbers of robots and weapons.
• (If not, I think the situation is in many ways even more dangerous, since a single AI could make many copies of itself and have little competition for things like server space, as discussed in the appendix.)
• AIs would have multiple ways of obtaining property and servers safe from shutdown.
• For example, they might recruit human allies (through manipulation, deception, blackmail/threats, genuine promises along the lines of "We're probably going to end up in charge somehow, and we'll treat you better when we do") to rent property and servers and otherwise help them out.
• Or they might create fakery so that they're able to operate freely on a company's servers while all outward signs seem to show that they're successfully helping the company with its goals.
• A relatively modest amount of property safe from shutdown could be sufficient for housing a huge population of AI systems that are recruiting further human allies, making money (via e.g. quantitative finance), researching and developing advanced weaponry (e.g., bioweapons), setting up manufacturing robots to construct military equipment, thoroughly infiltrating computer systems worldwide to the point where they can disable or control most others' equipment, etc.
• Through these and other methods, a large enough population of AIs could develop enough military technology and equipment to overpower civilization - especially if AIs across the world (including the ones humans were trying to use) were coordinating with each other.
## Some quick responses to objections
This has been a brief sketch of how AIs could come to outnumber and out-resource humans. There are lots of details I haven't addressed.
Here are some of the most common objections I hear to the idea that AI could defeat all of us; if I get much demand I can elaborate on some or all of them more in the future.
How can AIs be dangerous without bodies? This is discussed a fair amount in the appendix. In brief:
• AIs could recruit human allies, tele-operate robots and other military equipment, make money via research and quantitative trading, etc.
• At a high level, I think we should be worried if a huge (competitive with world population) and rapidly growing set of highly skilled humans on another planet was trying to take down civilization just by using the Internet. So we should be worried about a large set of disembodied AIs as well.
If lots of different companies and governments have access to AI, won't this create a "balance of power" so that nobody is able to bring down civilization?
• This is a reasonable objection to many horror stories about AI and other possible advances in military technology, but if AIs collectively have different goals from humans and are willing to coordinate with each other11 against us, I think we're in trouble, and this "balance of power" idea doesn't seem to help.
• What matters is the total number and resources of AIs vs. humans.
Won't we see warning signs of AI takeover and be able to nip it in the bud? I would guess we would see some warning signs, but does that mean we could nip it in the bud? Think about human civil wars and revolutions: there are some warning signs, but also, people go from "not fighting" to "fighting" pretty quickly as they see an opportunity to coordinate with each other and be successful.
Isn't it fine or maybe good if AIs defeat us? They have rights too.
• Maybe AIs should have rights; if so, it would be nice if we could reach some "compromise" way of coexisting that respects those rights.
• But if they're able to defeat us entirely, that isn't what I'd plan on getting - instead I'd expect (by default) a world run entirely according to whatever goals AIs happen to have.
• These goals might have essentially nothing to do with anything humans value, and could be actively counter to it (e.g., placing zero value on beauty and making zero attempts to prevent or avoid suffering).
## Risks like this don't come along every day
I don't think there are a lot of things that have a serious chance of bringing down human civilization for good.
As argued in The Precipice, most natural disasters (including e.g. asteroid strikes) don't seem to be huge threats, if only because civilization has been around for thousands of years so far - implying that natural civilization-threatening events are rare.
Human civilization is pretty powerful and seems pretty robust, and accordingly, what's really scary to me is the idea of something with the same basic capabilities as humans (making plans, developing its own technology) that can outnumber and out-resource us. There aren't a lot of candidates for that.12
AI is one such candidate, and I think that even before we engage heavily in arguments about whether AIs might seek to defeat humans, we should feel very nervous about the possibility that they could.
What about things like "AI might lead to mass unemployment and unrest" or "AI might exacerbate misinformation and propaganda" or "AI might exacerbate a wide range of other social ills and injustices"13? I think these are real concerns - but to be honest, if they were the biggest concerns, I'd probably still be focused on helping people in low-income countries today rather than trying to prepare for future technologies.
• Predicting the future is generally hard, and it's easy to pour effort into preparing for challenges that never come (or come in a very different form from what was imagined).
• I believe civilization is pretty robust - we've had huge changes and challenges over the last century-plus (full-scale world wars, many dramatic changes in how we communicate with each other, dramatic changes in lifestyles and values) without seeming to have come very close to a collapse.
• So if I'm engaging in speculative worries about a potential future technology, I want to focus on the really, really big ones - the ones that could matter for billions of years. If there's a real possibility that AI systems will have values different from ours, and cooperate to try to defeat us, that's such a worry.
Special thanks to Carl Shulman for discussion on this post.
## Appendix: how AIs could avoid shutdown
This appendix goes into detail about how AIs coordinating against humans could amass resources of their own without humans being able to shut down all "misbehaving" AIs.
It's necessarily speculative, and should be taken in the spirit of giving examples of how this might work - for me, the high-level concern is that a huge, coordinating population of AIs with similar capabilities to humans would be a threat to human civilization, and that we shouldn't count on any particular way of stopping it such as shutting down servers.
I'll discuss two different general types of scenarios: (a) Humans create a huge population of AIs; (b) Humans move slowly and don't create many AIs.
### How this could work if humans create a huge population of AIs
I think a reasonable default expectation is that humans do most of the work of making AI systems incredibly numerous and powerful (because doing so is profitable), which leads to a vulnerable situation. Something roughly along the lines of:
• The company that first develops transformative AI quickly starts running large numbers of copies (hundreds of millions or more), which are used to (a) do research on how to improve computational efficiency and run more copies still; (b) develop valuable intellectual property (trading strategies, new technologies) and make money.
• Over time, AI systems are rolled out widely throughout society. Their numbers grow further, and their role in the economy grows: they are used in (and therefore have direct interaction with) high-level decision-making at companies, perhaps operating large numbers of cars and/or robots, perhaps operating military drones and aircraft, etc. (This seems like a default to me over time, but it isn't strictly necessary for the situation to be risky, as I'll go through below.)
• In this scenario, the AI systems are malicious (as we've assumed), but this doesn't mean they're constantly causing trouble. Instead, they're mostly waiting for an opportunity to team up and decisively overpower humanity. In the meantime, they're mostly behaving themselves, and this is leading to their numbers and power growing.
• There are scattered incidents of AI systems' trying to cause trouble,14 but this doesn't cause the whole world to stop using AI or anything.
• A reasonable analogy might be to a typical civil war or revolution: the revolting population mostly avoids isolated, doomed attacks on its government, until it sees an opportunity to band together and have a real shot at victory.
(Paul Christiano's What Failure Looks Like examines this general flavor of scenario in a bit more detail.)
In this scenario, the AIs face a challenge: if it becomes obvious to everyone that they are trying to defeat humanity, humans could attack or shut down a few concentrated areas where most of the servers are, and hence drastically reduce AIs' numbers. So the AIs need a way of getting one or more "AI headquarters": property they control where they can safely operate servers and factories, do research, make plans and construct robots/drones/other military equipment.
Their goal is ultimately to have enough AIs, robots, etc. to be able to defeat the rest of humanity combined. This might mean constructing overwhelming amounts of military equipment, or thoroughly infiltrating computer systems worldwide to the point where they can disable or control most others' equipment, or researching and deploying extremely powerful weapons (e.g., bioweapons), or a combination.
Here are some ways they could get to that point:
• They could recruit human allies through many different methods - manipulation, deception, blackmail and other threats, genuine promises along the lines of "We're probably going to end up in charge somehow, and we'll treat you better when we do."
• Human allies could be given valuable intellectual property (developed by AIs), given instructions for making lots of money, and asked to rent their own servers and acquire their own property where an "AI headquarters" can be set up. Since the "AI headquarters" would officially be human property, it could be very hard for authorities to detect and respond to the danger.
• Via threats, AIs might be able to get key humans to cooperate with them - such as political leaders, or the CEOs of companies running lots of AIs. This would open up further strategies.
• As assumed above, particular companies are running huge numbers of AIs. The AIs being run by these companies might find security holes in the companies' servers (this isn't the topic of this piece, but my general impression is that security holes are widespread and that reasonably competent people can find many of them)15, and thereby might find opportunities to create durable "fakery" about what they're up to.
• E.g., they might set things up so that as far as humans can tell, it looks like all of the AI systems are hard at work creating profit-making opportunities for the company, when in fact they're essentially using the server farm as their headquarters - and/or trying to establish a headquarters somewhere else (by recruiting human allies, sending money to outside bank accounts, using that money to acquire property and servers, etc.)
• If AIs are in wide enough use, they might already be operating lots of drones and other military equipment, in which case it could be pretty straightforward to be able to defend some piece of territory - or to strike a deal with some government to enlist its help in doing so.
• AIs could mix-and-match the above methods and others: for example, creating "fakery" long enough to recruit some key human allies, then attempting to threaten and control humans in key positions of power to the point where they control solid amounts of military resources, then using this to establish a "headquarters."
In any of these cases, once one or more "AI headquarters" are established:
• It can be expanded as the AIs headquartered there make money, purchase more property and computing power, build more factories, etc.16
• It can develop enough destructive potential and human allies that humans would be hesitant to attack it (as many relatively small countries have today). It would be clear that AIs at the headquarters are seeking autonomy, accumulating resources and taking some unethical actions, but the full scale and nature of the threat could easily remain ambiguous.
• Even if the threat is clear, AIs could potentially maintain a huge population in a relatively small space (potentially underground). They could also split up across many different headquarters (including some where they are simply using humans' hardware while hiding their presence).
• Either way, this large-but-hidden population could be recruiting human allies, developing advanced weaponry, looking for ways to take control of humans' weapons (potentially possible if they greatly outnumber the humans working on cybersecurity, and if they can coordinate with the AIs "helping" those humans), etc.
• They could also be working with (unwitting, or allied) humans to manufacture large numbers of robots with back-doors allowing the AIs to take control.
• Once the headquarters has built up enough resources to go on offense, AIs around the world could coordinate with the headquarters. Humans could shut down AIs that they notice doing this, but they might have a very tough time getting value out of their servers and AI-controlled robots; this could make it easy for the AIs at the "AI headquarters" to out-resource humans.
### What if humans move slowly and don't create many AIs?
The above scenario has humans creating large numbers of AIs, such that the AIs just need to find a way to coordinate and acquire a safe "headquarters" in order to defeat us.
What if humans moved more slowly, intentionally restricting human-level AI to a tiny portion of the available computing resources? Could a small number of AIs pose a risk to humanity?
In this world, we would have what Carl Shulman refers to as "dry tinder everywhere, waiting for sparks." Anyone who can buy or rent a large amount of computing power can create a large number of AIs, which can produce a large amount of money and research, leading to still more AIs.
So a single AI could hack into enough servers17 to make a few copies of itself; recruit a few human allies; and start making money, acquiring more server space, etc. until its human allies are running a huge number of AIs. This could all be done in difficult-to-detect ways (it might e.g. just look like a set of humans renting a bunch of servers to run quantitative finance strategies).
So in this world, I think our concern should be any AI that is able to find enough security holes to attain that kind of freedom. Given the current state of cybersecurity, that seems like a big concern.
## Footnotes
1. Assuming you accept other points made in the most important century series, e.g. that AI that can do most of what humans do to advance science and technology could be developed this century.
2. See Superintelligence chapter 6.
3. See the "Nanotechnology blue box," in particular.
4. Some simplifications:
• The report estimates the amount of computing power it would take to train (create) a transformative AI system, and the amount of computing power it would take to run one. This is a bounding exercise and isn't supposed to be literally predicting that transformative AI will arrive in the form of a single AI system trained in a single massive run, but here I am interpreting the report that way for concreteness and simplicity.
• As explained in the next footnote, I use the report's figures for transformative AI arriving on the soon side (around 2036). Using its central estimates instead would strengthen my point, but we'd then be talking about a longer time from now; I find it helpful to imagine how things could go in a world where AI comes relatively soon.
5. I assume that transformative AI ends up costing about 10^14 FLOP/s to run (this is about 1/10 the Bio Anchors central estimate, and well within its error bars) and about 10^30 FLOP to train (this is about 10x the Bio Anchors central estimate for how much will be available in 2036, and corresponds to about the 30th-percentile estimate for how much will be needed based on the "short horizon" anchor). That implies that the 10^30 FLOP needed to train a transformative model could run 10^16 seconds' worth of transformative AI models, or about 300 million years' worth. This figure would be higher if we use Bio Anchors's central assumptions, rather than assumptions consistent with transformative AI being developed on the soon side.
6. They might also run fewer copies of scaled-up models or more copies of scaled-down ones, but the idea is that the total productivity of all the copies should be at least as high as that of several hundred million copies of a human-ish model.
8. Working-age population: about 65% * 7.9 billion =~ 5 billion.
9. Humans could rent hardware using money they made from running AIs, or - if AI systems were operating on their own - they could potentially rent hardware themselves via human allies or just via impersonating a customer (you generally don't need to physically show up in order to e.g. rent server time from Amazon Web Services).
10. (I had a speculative, illustrative possibility here but decided it wasn't in good enough shape even for a footnote. I might add it later.)
11. I don't go into detail about how AIs might coordinate with each other, but it seems like there are many options, such as by opening their own email accounts and emailing each other.
12. Alien invasions seem unlikely if only because we have no evidence of one in millions of years.
13. Here's a recent comment exchange I was in on this topic.
14. E.g., individual AI systems may occasionally get caught trying to steal, lie or exploit security vulnerabilities, due to various unusual conditions including bugs and errors.
15. E.g., see this list of high-stakes security breaches and a list of quotes about cybersecurity, both courtesy of Luke Muehlhauser. For some additional not-exactly-rigorous evidence that at least shows that "cybersecurity is in really bad shape" is seen as relatively uncontroversial by at least one cartoonist, see: https://xkcd.com/2030/
16. Purchases and contracts could be carried out by human allies, or just by AI systems themselves with humans willing to make deals with them (e.g., an AI system could digitally sign an agreement and wire funds from a bank account, or via cryptocurrency).
17. See above note about my general assumption that today's cybersecurity has a lot of holes in it.
Won't we see warning signs of AI takeover and be able to nip it in the bud? I would guess we would see some warning signs, but does that mean we could nip it in the bud? Think about human civil wars and revolutions: there are some warning signs, but also, people go from "not fighting" to "fighting" pretty quickly as they see an opportunity to coordinate with each other and be successful.
After seeing warning signs of an incoming AI takeover, I would expect people to go from "not fighting" to "fighting" the AI takeover. As it would be bad for virtually everyone (outside of the apocalyptic residual), there would be an incentive to coordinate against the AIs. This seemingly contrasts to civil wars and revolutions, in which there are many competing interests.
That being said, I do not think the above is a strong objection, because humanity does not have a flawless track record of coordinating to solve global problems.
[Footnote 11] I don't go into detail about how AIs might coordinate with each other, but it seems like there are many options, such as by opening their own email accounts and emailing each other.
I can see "how AIs might coordinate with each other". How about the questions:
• Why should we expect AIs to coordinate with each other? Because they would derive from the same system, and therefore have the same goal?
• On the one hand, intelligent agents seem to coordinate to achieve their goals.
• However, assuming powerful AIs are extreme optimisers, even a very small difference between the goals of two powerful systems might lead to a breakdown in cooperation.
• If multiple powerful AIs, with competing goals, emerge at roughly the same time, can humans take advantage?
• Regardless of the answer, a war between powerful AIs still seems a pretty bad outcome.
• A simple analogy: human wars might eventually benefit some animals, but it does not change much the fact that most power to shape the world belongs to humans.
• However, fierce competition between potentially powerful AIs at an early stage might give humans time to react (before e.g. one of the AIs dominates the others, and starts thinking about how to conquer the wider world).
These questions are outside the scope of this post, which is about what would happen if AIs were pointed at defeating humanity.
I don't think there's a clear answer to whether AIs would have a lot of their goals in common, or find it easier to coordinate with each other than with humans, but the probability of each seems at least reasonably high if they are all developed using highly similar processes (making them all likely more similar to each other in many ways than to humans).
once the first human-level AI system is created, whoever created it could use the same computing power it took to create it in order to run several hundred million copies for about a year each.
How does computing power work here? Is it:
1. We use a supercomputer to train the AI, then the supercomputer is just sitting there, so we can use it to run models. Or:
2. We're renting a server to do the training, and then have to rent more servers to run the models.
In (2), we might use up our whole budget on the training, and then not be able to afford to run any models.
Sorry for chiming in so late! The basic idea here is that if you have 2x the resources it would take to train a transformative model, then you have enough to run a huge number of them.
It's true that the first transformative model might eat all the resources its developer has at the time. But it seems likely that (a) given that they've raised $X to train it as a reasonably speculative project, once it turns out to be transformative there will probably be at least a further$X available to pay for running copies; (b) not too long after, as compute continues to get more efficient, someone will have the 2x the resources needed to train the model.
Basically, is the computing power for training a fixed cost or a variable cost? If it's a fixed cost, then there's no further cost to using the same computing power to run models.
I haven't read the OP (I haven't read a full forum post in weeks and I don't like reading; it's better to, like, close your eyes and try coming up with the entire thing from scratch and see if it matches, using high-information tags to compare with, generated with a meta model) but I think this is a reference to the usual training/inference cost differences.
For example, you can run GPT-3 Davinci in a few seconds at trivial cost. But the training cost was millions of dollars and took a long time.
There are further considerations. For example, finding the architecture (stacking more things in Torch, fiddling with parameters, figuring out how to implement the Key Insight, etc.) for the first breakthrough model is probably also expensive and hard.
Let $x$ be the computing power used to train the model. Is the idea that "if you could afford $x$ to train the model, then you can also afford $x$ for running models"?
Because that doesn't seem obvious. What if you used 99% of your budget on training? Then you'd only be able to afford $x/99$ for running models.
Or is this just an example to show that training costs >> running costs?
aogara:
Yes, that's how I understood it as well. If you spend the same amount on inference as you did on training, then you get a hell of a lot of inference.
I would expect he'd also argue that, because companies are willing to spend tons of money on training, we should also expect them to be willing to spend lots on inference.
Do we know the expected cost for training an AGI? Is that within a single company's budget?
aogara:
Nearly impossible to answer. This report by OpenPhil gives it a hell of an effort, but could still be wrong by orders of magnitude. Most fundamentally, the amount of compute necessary for AGI might not be related to the amount of compute used by the human brain, because we don’t know how similar our algorithmic efficiency is compared to the brain’s.
https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/
Yes, the last sentence is exactly correct.
So like the terms of art here are “training” versus “inference”. I don’t have a reference or guide (because the relative size is not something that most people think about versus the absolute size of each individually) but if you google them and scroll through some papers or posts I think you will see some clear examples.
Just LARPing here. I don’t really know anything about AI or machine learning.
I guess in some deeper sense you are right and (my simulated version of) what Holden has written is imprecise.
We don't really see many "continuously" updating models where training continues live with use. So the mundane pattern we see today of inference, where we trivially run the instructions from the model (often on specific silicon made for inference), being much cheaper than training, may not apply, for some reason, to the pattern that the out-of-control AI uses.
It’s not impossible that if the system needs to be self improving, it has to provision a large fraction of its training cost, or something, continually.
It's not really clear what the "shape" of this "relative cost curve" would be, or whether this would be a short period of time, and it doesn't make it any less dangerous.
|
# How to add text over a page's background .PNG image in Writer?
I have a multiple-page document, each page needing a different full-page .PNG image as background. Above this I need to enter text (on a higher z-order layer). Unfortunately I am not an advanced LibreOffice user. I have version: 6.0.0.3 (x64).
1) While I can add the .PNG images and "Send to Back" I cannot enter text above the .PNG. Suggestions?
2) Or, could different watermarks be confined to specific pages in the same Writer document to accomplish the same thing? If so how?
3) Is there a better approach?
If I understand correctly your requirement, you need both an image and some text as watermark in your pages.
1) Since the Background tab of the page style dialog allows only one graphic to be inserted, you must edit your .png graphic in some external program (e.g. The Gimp under Linux) to include your text watermark in it.
2) The watermark, as a background, is a static component of the page style. When parts of the document receive different watermarks, these parts must have their own page styles. The easiest way to do this is to tune Default Style page style for "geometry" (margins, header, footer), then to create derived page styles where you add the specific backgrounds.
To switch from one page style to another, add a page break before to the first paragraph of the new page and set the desired page style.
Note: defining new pages styles is done through the Format>Styles & Formatting or F11 side pane, clicking on the fourth icon ( Page Styles ) in the tool bar. You are encouraged to read the chapter(s) on style in the user guide if you haven't done so.
3) Open question
If this answer helped you, please accept it by clicking the check mark ✔ to the left and, karma permitting, upvote it. If this resolves your problem, close the question, that will help other people with the same question.
Thank you very much. Unfortunately I did not properly articulate that the need is for a "form" (hence the static background .PNGs) that, later on, permits various (dynamic) text entries. More simply, can text be subsequently entered over static graphic images? I apologize for not being more clear originally. Thanks again!
(2018-02-07 21:32:14 +0200)
Consider "background" as a synonym for "stationery", you print on it. The real question is: how to position accurately text elements above the image when LO typesets text according to text flow rules?
If your form is associated with a database, consider Base and its report (form) creator module. If you need a form for later manual entry with a pen, look at Draw. As a last resort, have a chance with Calc.
(2018-02-08 09:20:24 +0200)
I appreciate your kind help and patience. My LibreOffice "problem" is that currently I print the multi-page form in one pass and then re-run the result through the printer a second time to include the data. My solution will be to create a template where the various static "stationery" form images simply reside as background. Onto this template I'll import the data and print.
I have some time again today to devote to what's not yet working. Thanks again!
(2018-02-09 23:34:10 +0200)
Thanks to all for pointing me to Page Styles, that was key. My results:
While I can accomplish (upper) layer text entry over a (lower) static image using a custom Page Style's "Area" image, this only worked when the file was saved in .ODT format. Yes, MS Word recognizes the .ODT format. Word, however, failed to render the page's .PNG background layer (a requirement). Unfortunately a .PDF export is also not an option here for other reasons.
Sadly, with no ability to create either an .ODT or .DOCX format file that is widely usable OUTSIDE of LibreOffice, I'm forced to, ugggh, look to MS Word for this task's needs.
Hello @UserRed,
To set a background image to your page in Writer, you could perform the following steps:
1. Select the menu Format : Page ...;
2. In the dialog that pops up, select the tab Area;
3. Click on the button Bitmap;
4. Near the bottom of the dialog, click on the button Add / Import;
5. In the dialog that appears, browse to your background image to add, and click OK; ( A thumbnail of the added image should now be visible in the Bitmaps listbox );
6. Back in the Page Area dialog, under Options, set image Style, Size, and Position;
7. Click OK.
Now you can write text on top of your page background image :)
With Regards, lib
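As an aside to the steps above, the same page-style background can in principle be set programmatically. Below is a minimal sketch as a LibreOffice Python macro; it assumes the LO 6.0-era page-style properties (`BackGraphicURL` was deprecated in later releases), and the image path and style name are placeholders, not values from this thread:

```python
# Minimal sketch, run as a LibreOffice Python macro (Tools > Macros).
# Assumes the LO 6.0-era API: BackGraphicURL/BackGraphicLocation were
# still available then (deprecated in later releases).
from com.sun.star.style.GraphicLocation import AREA

def set_page_background(*args):
    doc = XSCRIPTCONTEXT.getDocument()                    # current Writer document
    page_styles = doc.StyleFamilies.getByName("PageStyles")
    style = page_styles.getByName("Default Style")        # or a custom per-page style
    style.BackGraphicURL = "file:///home/user/page1.png"  # hypothetical image path
    style.BackGraphicLocation = AREA                      # stretch over the whole page area
```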
Thank you for your great suggestion. This document's length, however, is four pages. I'm sorry that my original question did not state that each page requires either 1) a different static .PNG image or 2) no image at all. Over this bottom layer, later on, text will be dynamically entered.
Thus, the question becomes: can different pages in the same document have different background images (or no image)? Is this possible in LibreOffice Writer? If so, might you suggest how? Thanks.
(2018-02-07 22:37:40 +0200)
That's possible by creating a different "Page Style" for each page that has a different background image.
Just insert a pagebreak via the menu "Insert : Page Break", and in the dialog that pops up, set the different page style for that page.
Then you can add a background image for that page, following the steps provided in the answer above.
(2018-02-07 23:02:36 +0200)
Thank you so much. I've created my necessary "Page" styles. At this point the document's output is not yet successful, my fault I am sure, but I am simply out of time for today.
BTW my LibreOffice version is 6.0.0.3 and am running it under Windows 10 OS Build 16299.214
I'll continue studying the LibreOffice Writer Guide (5.4) and will resume again tomorrow when I am fresher. Thanks again!
(2018-02-08 03:05:17 +0200)
Augggh!!! I can create my several "Page Styles" (as per the LibreOffice Writer Guide (5.4)) and they initially display as intended on screen. Unfortunately when the file is re-opened my custom "Page Styles" are completely gone, no matter in which format I save them. This failure invariably occurs.
What could I be missing? I am using the new version 6.0.0.3, not 5.4. Has something changed? This simply should not be this difficult. I'm open to ANY guidance or ideas. Thank you all.
(2018-02-11 05:28:18 +0200)
|
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)
$[-4500, 10000]$
The volunteers received 10,000 USD, making the value $10000$. They spent 4500 USD, represented by $-4500$. Hence the interval $[-4500, 10000]$.
|
# ACSL STRING
## My thought
Nothing… The problem is pretty easy; however, it troubled me for quite a long time. Just follow the instructions in the problem and simulate the whole process in code.
## Points to note
• The conversion between String and int
• How to handle carry (see the sketch after this list)
• Pay attention to the additional sign
• Sometimes we can use char to simplify the process
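The post never restates the problem, but the points above are exactly what manual big-number addition over digit strings requires; here is a minimal Python sketch of that interpretation (the problem statement itself is an assumption here):

```python
def add_digit_strings(a: str, b: str) -> str:
    """Add two non-negative integers given as digit strings, handling carry by hand."""
    i, j = len(a) - 1, len(b) - 1
    carry, digits = 0, []
    while i >= 0 or j >= 0 or carry:
        total = carry
        if i >= 0:
            total += int(a[i])          # String -> int, one digit (char) at a time
            i -= 1
        if j >= 0:
            total += int(b[j])
            j -= 1
        digits.append(str(total % 10))  # int -> String for the output digit
        carry = total // 10             # carry propagates to the next column
    return "".join(reversed(digits))

print(add_digit_strings("987", "46"))   # 1033
```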
|
# Period of nonlinear spring-mass system
by jinteni
Tags: nonlinear, period, springmass
Given $$m\frac{d^2x}{dt^2}-qx^3=0$$ you simply can't integrate both sides; you would have to form the auxiliary eq'n and use the general solution for complex roots.
rock.freak667's point is that you integrated $m\,d^2x/dt^2$ with respect to t and $-qx^3$ with respect to x! That's what you can't do! Since the independent variable, t, does not appear explicitly, you could use "quadrature". Let v= dx/dt. Then $d^2x/dt^2= dv/dt= (dv/dx)(dx/dt)= v\,dv/dx$. The equation reduces to $mv\,dv/dx- qx^3= 0$. Now you can integrate with respect to x: $(1/2)mv^2- (q/4)x^4=$ Constant. (If you look closely at that, you will see "conservation of energy".) Since v= dx/dt, taking the constant of integration to be zero, that is the same as $$\frac{dx}{dt}= \sqrt{\frac{q}{2m}}\,x^2$$ That should be easy to integrate. However, I'm not sure that that will answer your question about the period- in general non-linear spring motion is NOT periodic!
x = displacement, a = amplitude, q = spring constant, m = mass, p = period.
The energy stored by the spring for a given displacement is given by the integral of the force over the displacement: $E_{spring}=\int_{0}^{x} q x^3\, dx$, which evaluates to $E_{spring}=q x^4/4$.
When the mass is at its maximum displacement and is not moving, all the energy in the system is stored in the spring. Thus the total energy in the system is the energy held by the spring when the displacement equals the amplitude: $E_{system}=q a^4/4$.
Applying conservation of energy, any energy lost by the spring must go to the mass as kinetic energy. The kinetic energy of the mass is $E_{kinetic}=\frac{1}{2}mv^2$, and it can also be expressed as $E_{kinetic}=E_{system}-E_{spring}=\frac{q}{4}(a^4-x^4)$.
Setting these equal, $\frac{q}{4}(a^4-x^4)=\frac{1}{2}mv^2$, and solving for velocity gives $v=\sqrt{\frac{q}{2m}(a^4-x^4)}$. The mass could of course be traveling in either direction.
Recall that $v=\frac{dx}{dt}$, thus $\frac{1}{v}=\frac{dt}{dx}$; in other words, the reciprocal of velocity is the time passed per unit of distance. Applying this to the expression for the velocity of the mass gives $\frac{dt}{dx}=\sqrt{\frac{2m}{q(a^4-x^4)}}$.
Integrating both sides dx gives the time taken to travel between the start and end points of that integration; integrating between $0$ and $a$ provides the time for a quarter period: $p/4=\int_{0}^{a}\sqrt{\frac{2m}{q(a^4-x^4)}}\,dx=\sqrt{\frac{2m}{q}}\int_{0}^{a}\frac{dx}{\sqrt{a^4-x^4}}$.
Substituting $u=x/a$ (so $dx=a\,du$ and $a^4-x^4=a^4(1-u^4)$) justifies pulling the amplitude out: $p/4=\sqrt{\frac{2m}{q}}\cdot\frac{1}{a}\int_{0}^{1}\frac{du}{\sqrt{1-u^4}}$.
I have no idea how to approach that definite integral analytically, but Wolfram can (definite integral ≈ 1.31103). Working from there, $p/4=\sqrt{\frac{2m}{q}}\cdot\frac{1}{a}\cdot 1.31103$, so $p=\frac{7.41630}{a}\sqrt{\frac{m}{q}}$.
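A numeric check of the definite integral and the resulting constant (a minimal sketch, assuming Python with scipy):

```python
import numpy as np
from scipy.integrate import quad

# Integrand has an integrable singularity at x = 1; quad handles the endpoint.
integral, _ = quad(lambda x: 1.0 / np.sqrt(1.0 - x**4), 0.0, 1.0)
print(integral)                        # ~1.31103
print(4.0 * np.sqrt(2.0) * integral)   # ~7.41630, the constant in p = (7.41630/a) sqrt(m/q)
```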
|
Speaker: Ray-Kuang Lee (National Tsing-Hua University)
Title: Reflecting on an alternative (parity-time-symmetric) quantum theory, and its analog in optics
Time: 2018-03-20 (Tue) 11:00-12:30
Place: Seminar Room 617, Institute of Mathematics (NTU Campus)
Abstract: By the no-signaling principle, we showed that parity-time (PT)-symmetric quantum theory, as an extension of quantum theory to non-Hermitian Hamiltonians, is either a trivial extension or likely false as a fundamental theory [1]. In addition to the implementation of PT-symmetric optical systems by carefully and actively controlling the gain and loss, we show that a 2 × 2 PT-symmetric Hamiltonian has a unitarily equivalent representation without complex optical potentials in the resulting optical coupler. Through the Naimark dilation in operator algebra, passive PT-symmetric couplers can thus be implemented with a refractive index of real values and asymmetric coupling coefficients [2]. Moreover, with a phase-space representation on the vicinity of an exceptional point, we show that a PT-symmetric phase transition from an unbroken PT-symmetry phase to a broken one is a second-order phase transition [3].
References:
[1] Yi-Chan Lee, Min-Hsiu Hsieh, Steven T. Flammia, and Ray-Kuang Lee, Phys. Rev. Lett. 112, 130404 (2014).
[2] Yi-Chan Lee, Jibing Liu, You-Lin Chuang, Min-Hsiu Hsieh, and Ray-Kuang Lee, Phys. Rev. A 92, 053815 (2015).
[3] Ludmila Praxmeyer, Popo Yang, and Ray-Kuang Lee, Phys. Rev. A 93, 042122 (2016).
|
Matematicheskie Zametki
Mat. Zametki, 1977, Volume 22, Issue 1, Pages 45–49 (Mi mz8023)
Conditions for uniqueness of a projector with unit norm
V. P. Odinets
Abstract: Suppose that in a normed linear space $B$ there exists a projector with unit norm onto a subspace $D$. A sufficient condition for this projector to be unique is the existence of a set $M\subset D^*$ which is total on $D$, each functional in which attains its norm on the unit sphere in $D$ and has a unique extension onto $B$ with preservation of norm. As corollaries to this fact, we obtain a series of sufficient conditions for uniqueness (some of which were previously known) as well as a necessary and sufficient condition for uniqueness.
Full text: PDF file (464 kB)
English version:
Mathematical Notes, 1977, 22:1, 515–517
Bibliographic databases:
UDC: 513.88
Citation: V. P. Odinets, “Conditions for uniqueness of a projector with unit norm”, Mat. Zametki, 22:1 (1977), 45–49; Math. Notes, 22:1 (1977), 515–517
Citation in format AMSBIB
\Bibitem{Odi77} \by V.~P.~Odinets \paper Conditions for uniqueness of a~projector with unit norm \jour Mat. Zametki \yr 1977 \vol 22 \issue 1 \pages 45--49 \mathnet{http://mi.mathnet.ru/mz8023} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=454598} \zmath{https://zbmath.org/?q=an:0357.46029} \transl \jour Math. Notes \yr 1977 \vol 22 \issue 1 \pages 515--517 \crossref{https://doi.org/10.1007/BF01147691}
|
Discussion about “motivation of concept” kind of questions
Very often in mathematics and related sciences the motivation of a concept is helpful or even fundamental for its understanding. Obviously, this motivation has two origins: a historical one, and a technical one. For this reason, a question asking about the origin or motivation of a concept could fit, more or less, into either hsm.se or math.se, physics.se, etc. What should we do about this?
This question came to mind because of this question: Why is the Dirac delta "function" often presented as the limit of a Gaussian with a fraction in the exponent? which in my opinion doesn't belong in hsm, but in math.
I'd like to discuss what to do about the general case, and also what should be done about the particular example.
In my opinion, these fit in hsm. There are two main reasons for my answer.
• Often, the answer is not only maths or physics related, but lies in the field of the original concept's author, which may not be the field of interest of the person asking the question. A perfect example is the Dirac question you cite. If it were to belong to some place other than hsm, it should be in physics, not maths. For this reason, in this particular example we should stay here.
• The technical motivation is not a motivation any more after the concept becomes helpful in more than one field, which most often is the case. The original technical motivation therefore becomes the historical technical motivation, which brings us back to hsm.
As a more general answer, I would like to state that discussing maths and/or science history does mean, at times, discussing maths or physics. We should not be put off by answers that require some technical knowledge in these fields; such questions totally belong here IMO.
• I'm sorry it took me so long to reply, but here we go: $(1)$ Just because a person doesn't have the background to understand the technical motivation doesn't mean the question becomes historical. If the user wants to understand what's behind it, (s)he'll have to study said field. $(2)$ I think pretty much the same as in $(1)$. I believe in very few cases the motivation has a subjective/historical reason; most often it's the understanding of the field you're working with that dictates the definitions. – hjhjhj57 May 4 '15 at 23:19
• @Javier can I have an example? Because in your question, understanding of mathematics would not help. – VicAche May 5 '15 at 8:56
• What's needed is some understanding of dimensional analysis, elementary mathematics, and maybe some physical or mathematical intuition. – hjhjhj57 May 5 '15 at 9:06
• Dimensional analysis is, at least here in France, taught only to would-be physicists, and at university level. What's important IMO in this question his really the setting in which the Dirac was introduced, which is not the only possible "technical motivation" for such a function. – VicAche May 5 '15 at 9:51
• Ok, then it should be migrated to physics. As you can see the accepted answer isn't even a historical one. – hjhjhj57 May 6 '15 at 2:20
• If you don't know where it should be migrated, it's because it shouldn't be migrated - the question is about maths, the answer about physics. The answer is historical: the invention was first introduced in the field of physics, hence it carries its past in its most common written form. – VicAche May 6 '15 at 11:33
• I completely disagree, that's not even the point. Look at the accepted answer, it doesn't have anything to do with history. The user tried to give it some historical context by stating that Dirac was a physicist, nothing else. Edit: I hadn't realized you're the poster of the accepted answer, haha. How can you say your answer is historical? – hjhjhj57 May 6 '15 at 21:17
• Makes it even funnier :). My answer is historical IMO, as you need to know about the context in which he was working to answer, and not only the mathematical/physical reason why this was (strangely) retained as the canonical form. I really believe you should post an answer of your own to this question. – VicAche May 6 '15 at 22:15
|
# Re: [S] Quick notes on using ESS
A.J. Rossini ([email protected])
12 Oct 1998 10:17:57 -0700
(forwarded to the list, since this might be of general interest)
>
> 1. Thanks.
You are welcome!
> 2. I'm using NT now so maybe there's no help for me, but I've
> recently taken up including S transcript segments as comments in
> latex documents. I was curious if there is any way of dealing with
> that sensibly, even in ESS on unix.
I use Noweb, which is a language-independent literate programming
tool. That, plus a noweb-mode for XEmacs, allows me to edit and
submit code, while documenting and indexing variable and function
definitions. The whole package isn't as global or robust as possible
(i.e. Noweb allows code chunks within chunks, but I've not finished up
the code to deparse appropriately).
For example:
\LaTeX code to discuss how to obtain estimates for
$\hat\beta$, which can be obtained by
<<Estimating Equation for \hat\beta.S>>=
<<find starting values for \beta.S>>
results <- ee(beta)
<<summarize \beta from estimating equation.S>>
@ %def results beta
which provides the following estimates (more latex describing
the output).
and, using C-c C-n on the "results" line, submits the code to the
desired process (but currently that must be specified, sigh...).
The latex is edited using AUC-TeX and the code is edited using ESS
(S-mode), and one just needs to have SPlus running in a separate
buffer.
The editing and document production works (except for process
communication, sigh...) on NT and W95, since one can get both XEmacs
and Noweb running under NT.
For XEmacs, one needs to look at the very recent betas (for 21.0...),
http://www.xemacs.org/
For Noweb, see:
http://www.cs.virginia.edu/~nr/noweb/
For Process communication under NT, it'll be about a year before we
integrate it (unless someone buys me a fast NT desktop with enough
capacity for developing, or someone else does most of the work). Some
work (extremely alpha) has been done by others.
For ESS mode chunk recovery (i.e. being able to dump chunks into a
running process, properly replaced with the actual S code), that's
coming soon...
best,
-tony
--
A.J. Rossini
Rsrch Asst. Professor Senior Biostatistician
Biostatistics, University of Washington Epimetrics Corporation
Center for AIDS Research
[email protected] [email protected]
http://www.biostat.washington.edu/~rossini/ http://www.epimetrics.com/
-----------------------------------------------------------------------
This message was distributed by [email protected]. To unsubscribe
send e-mail to [email protected] with the BODY of the
message: unsubscribe s-news
|
The complexity of finding independent sets in bounded degree (hyper)graphs of low chromatic number
Jan 19, 2011
ABSTRACT: We prove almost tight hardness results under randomized reductions for finding independent sets in bounded degree graphs and hypergraphs that admit a good coloring. Our specific results include the following (where $\Delta$, a constant, is a bound on the degree, and $n$ is the number of vertices):
-- NP-hardness of finding an independent set of size larger than $O\left(n \left( \frac{\log \Delta}{\Delta}\right)^{\frac{1}{r-1}}\right)$ in 2-colorable $r$-uniform hypergraphs for each fixed $r \ge 4$. A simple algorithm is known to find independent sets of size $\Omega\left(\frac{n}{\Delta^{1/(r-1)}}\right)$ in any $r$-uniform hypergraph of maximum degree $\Delta$. Under a combinatorial conjecture on hypergraphs, the $(\log\Delta)^{1/(r-1)}$ factor in our result is necessary.
-- Conditional hardness of finding an independent set with more than $O\left(\frac{n}{\Delta^{1-c/(k-1)}}\right)$ vertices in a $k$-colorable (with $k\ge 7$) graph for some absolute constant $c \le 4$, under Khot's $2$-to-$1$ Conjecture. This suggests the near-optimality of Karger, Motwani and Sudan's graph coloring algorithm which finds an independent set of size $\Omega\left(\frac{n}{\Delta^{1-2/k}\sqrt{\log \Delta}}\right)$ in k-colorable graphs.
-- Conditional hardness of finding independent sets of size $n \Delta^{-1/8+o_\Delta(1)}$ in almost $2$-colorable $3$-uniform hypergraphs, under Khot's Unique Games Conjecture. This suggests the optimality of the known algorithms to find an independent set of size $\tilde{\Omega}(n \Delta^{-1/8})$ in $2$-colorable $3$-uniform hypergraphs.
-- Conditional hardness of finding an independent set of size more than $O\big(n \Delta^{-\frac{1}{r-1}} \log^{\frac{1}{r-1}}\Delta \big)$ in $r$-uniform hypergraphs that contain an independent set of size $n \big(1-O(\log r/r)\big)$ assuming the Unique Games Conjecture.
|
# Define electric field at a point.
The electric field of a charge is the region around the charge in which its influence can be felt. The electric field at a point is the electrostatic force experienced per unit positive test charge placed at that point: $\vec{E} = \vec{F}/q_0$.
|
1. A fireman 50.0 m away from a burning building directs a stream of water from a ground-level fire hose at an angle of 30.0° above the horizontal. If the speed of the stream as it leaves the hose is 40.0 m/s, at what height will the stream of water strike the building?
2. A projectile is launched with an initial speed of 60.0 m/s at an angle of 30.0° above the horizontal. The projectile lands on a hillside 4.00 s later. Neglect air friction.
(a) What is the projectile’s velocity at the highest point of its trajectory?
(b) What is the straight-line distance from where the projectile was launched to where it hits its target?
3. A soccer player kicks a rock horizontally off a 40.0-m-high cliff into a pool of water. If the player hears the sound of the splash 3.00 s later, what was the initial speed given to the rock? Assume the speed of sound in air to be 343 m/s.
(1) Using the equation of trajectory for the projectile,
$$y = x\tan\theta - \frac{gx^2}{2u^2\cos^2\theta}$$
Here x = 50 m, u = 40 m/s and θ = 30°, so the height at which the water jet strikes the building is (taking g = 10 m/s²)
$$y = 50\tan 30^\circ - \frac{10\cdot 50^2}{2\cdot 40^2\cos^2 30^\circ} = 28.87 - \frac{25000}{2400} \approx 18.45\ \text{m}$$
(2) Here θ = 30°, u = 60 m/s, and the given flight time of the projectile is t = 4 s.
(a) The time of flight over level ground would be $T = \frac{2u\sin\theta}{g} = 6$ s, so the projectile reaches the highest point at 3 s; hence it lands after crossing the highest point. At the highest point only the horizontal component of the velocity is nonzero:
$$u_x = u\cos 30^\circ = 60\cdot 0.866 = 51.96\ \text{m/s}$$
(b) The straight-line distance is simply the displacement between the launch point and the landing point, which can be calculated as $r = \sqrt{x^2 + y^2}$, where x and y are the horizontal and vertical displacements of the projectile:
$$x = u\cos\theta\, t = 51.96\cdot 4 = 207.84\ \text{m}$$
$$y = u\sin\theta\, t - \frac{gt^2}{2} = 30\cdot 4 - \frac{10\cdot 16}{2} = 120 - 80 = 40\ \text{m}$$
Thus $r = \sqrt{207.84^2 + 40^2} \approx 211.7$ m.
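The arithmetic above can be verified with a short script; a sketch in Python (g = 10 m/s², as used in the solution):

import numpy as np

g = 10.0

# Problem 1: height at which the water strikes the building
u, theta, x = 40.0, np.radians(30.0), 50.0
y = x * np.tan(theta) - g * x**2 / (2 * u**2 * np.cos(theta)**2)
print(y)  # ~18.45 m

# Problem 2(b): straight-line distance after t = 4 s
u, theta, t = 60.0, np.radians(30.0), 4.0
x = u * np.cos(theta) * t                    # ~207.8 m
y = u * np.sin(theta) * t - 0.5 * g * t**2   # 40 m
print(np.hypot(x, y))                        # ~211.7 m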
|
The Power of Shared Randomness in Uncertain Communication
In a recent work (Ghazi et al., SODA 2016), the authors with Komargodski and Kothari initiated the study of communication with contextual uncertainty, a setup aiming to understand how efficient communication is possible when the communicating parties imperfectly share a huge context. In this setting, Alice is given a function f and an input string x, and Bob is given a function g and an input string y. The pair (x,y) comes from a known distribution mu and f and g are guaranteed to be close under this distribution. Alice and Bob wish to compute g(x,y) with high probability. The lack of agreement between Alice and Bob on the function that is being computed captures the uncertainty in the context. The previous work showed that any problem with one-way communication complexity k in the standard model (i.e., without uncertainty, in other words, under the promise that f=g) has public-coin communication at most O(k(1+I)) bits in the uncertain case, where I is the mutual information between x and y. Moreover, a lower bound of Omega(sqrt{I}) bits on the public-coin uncertain communication was also shown. However, an important question that was left open is related to the power that public randomness brings to …
NSF-PAR ID:
10078500
Journal Name:
44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)
ISSN:
1868-8969
3. Abstract: The production of $\pi^{\pm}$, $\mathrm{K}^{\pm}$, $\mathrm{K}^{0}_{S}$, $\mathrm{K}^{*}(892)^{0}$, $\mathrm{p}$, $\phi(1020)$, $\Lambda$, $\Xi^{-}$, $\Omega^{-}$, and their antiparticles was measured in inelastic proton–proton (pp) collisions at a center-of-mass energy of $\sqrt{s} = 13$ TeV at midrapidity ($|y|<0.5$) as a function of transverse momentum ($p_{\mathrm{T}}$) using the ALICE detector at the CERN LHC. Furthermore, the single-particle $p_{\mathrm{T}}$ distributions of $\mathrm{K}^{0}_{S}$, $\Lambda$, and $\overline{\Lambda}$ in inelastic pp collisions at $\sqrt{s} = 7$ TeV are reported here for the first time. The $p_{\mathrm{T}}$ distributions are studied at midrapidity within the transverse momentum range $0 \le p_{\mathrm{T}} \le 20$ GeV/$c$, depending on the particle species. The $p_{\mathrm{T}}$ spectra, integrated yields, and particle yield ratios are discussed as a function of collision energy and compared with measurements at lower $\sqrt{s}$.
|
# Power automate
Occasional Contributor
# Power automate
Hello, I have two tables, each with a single column 'Name'. How can I check whether a name in one table exists in the other table using Power Automate?
|
## Check of moments for pseudo random number generators using CLT
I’m trying to get to grips with my lecture notes but cannot understand the idea behind it. I’d really appreciate some help: “Generate a sequence of size $n$ from stated distribution. Consider an independent sample $X_1,X_2,X_3,..,X_n$ from pdf $f_x(x)$, with mean $\mu$ and variance $\sigma^2$. Then the CLT says that… (def. continued – I don’t … Read more
## Not able to understand KL decomposition
The bias-variance decomposition usually applies to regression data. We would like to obtain a similar decomposition for classification, when the prediction is given as a probability distribution over C classes. Let $P=[P_1,\ldots,P_C]$ be the ground truth class distribution associated to a particular input pattern. Assume the random estimator of class probabilities $\bar{P}=[\bar{P}_1,\ldots,\bar{P}_C]$ for the same input … Read more
## Help: What is the theory underpinning unit-treatment additivity for ANOVA
I have read extensively about the assumptions for ANOVA, and I am experiencing confusion with the concept of unit-treatment additivity. I would be deeply appreciative if anyone could please explain the theory in laymans terms. Many thanks in advance if this is possible. Here is a short excerpt from my text book The equal variance … Read more
## Is the covariance between the product of two variables and one of the variables zero?
For two centered (zero expectation) random variables $X$ and $Z$ I am interested in the covariance of the product $XZ$ and either $X$ or $Z$. $\mathrm{Cov}(X,XZ)=E(X(XZ-E(XZ)))=E(X^2Z)$. I think the last quantity can be non-zero. However, for any choice of $X$ and $Z$ I have tried, in simulations, I find the quantity to be approx. zero, … Read more
## Pattern Recognition Time Series via FFT
I came across this interesting article where the author used FFT to discover some patterns in a time series. I am new to this kind of analysis and have maybe some basic questions about it. How do you compute the Frequency when getting the FFT? I used the fft function in MATLAB with some data … Read more
## Finding UMPT for uniform distribution with varying support
$\textbf{Problem}$ Let $X_1,\dots,X_n$ be a random sample from $f(x;\theta) = 1 / \theta$, where $0 < x < \theta$. We want to test $H_0: \theta \leq \theta_0$ versus $H_1: \theta > \theta_0$. Obtain the uniformly most powerful test with size $\alpha$. You must describe how to calculate $\alpha$ to get a full credit. Here I … Read more
## Adjust decay rate dynamically
Say I have a stream of values $\langle s_1, s_2,\ldots\rangle$ coming in and a function $$E_{s_1:s_n}(t) = E_{s_1:s_{n-1}}(t-1) + \alpha\cdot (s_t-E_{s_1:s_{n-1}}(t-1))$$ that computes their exponential moving average as the values flow in. I would like the alpha, i.e. the decay rate, to adjust dynamically as a function of the last $h$ values we have seen. … Read more
## Large deviations results for cosine of two samples from Normal?
I’m looking for large-deviations style results for cosine of two independent samples drawn from N(0,Σ) . IE, q=⟨X,Y⟩‖ More specifically, are there any interesting bounds on the probability of this value being large in terms of properties of \mathcal \Sigma ? Intuitively it seems this value should be small when \Sigma has small condition number. … Read more
## Prove $X_t =$ $\sum_{j=-\infty}^\infty \psi_j * Y_{t-j} = \psi (B) * Y_t$ is stationary?
I was going through my time series text and I found a curious identity that the book just says is true without actually going in depth to prove it. I am unable to figure out how this works, so I would appreciate pointers. The theorem states: “Let {$Y_t$} be a stationary process w/ mean 0 … Read more
## Connection between canonical correlation and distribution of roots of characteristic equation
I’m trying to make sense of the following sentence from introduction “Multiple discoveries: Distribution of roots of determinantal equations” http://statweb.stanford.edu/~ckirby/ted/papers/2007_MultipleDiscoveries.pdf “The distribution of the squares of the canonical correlations when the population canonical correlations are zero is the same as the distribution of the characteristic roots of one sample covariance matrix in the metric of … Read more
|
# OpenGL Doing a 2.5D game using OpenGL?
Hi all, I am doing a 2D game using OpenGL (aside: I know there are other libraries suitable for 2D, but this is the job requirement) and instead of using orthographic projection, I would like to keep it 2.5D so as to make it easier to manipulate depth and layering. However, I am not sure how to render the texture (or sprite) at the size it is intended to be.
Say, for example, my default camera position is (0,0,-6) and it is looking dead forward at point (0,0,0). How do I make a 64x64 bitmap appear at the same size when I have mapped it to a quad? What size should I set the texture to be, and how big should the quad be? Thanks in advance!
##### Share on other sites
The easiest solution I see here is to simply stick to perspective projection, and use billboarding for your sprites. This way, everything will scale correctly when you move the camera around. It's just so much less math and calculations. The size (distance) of your sprites will scale correctly and everything. This is at least how I would have done it.
Quote:
Original post by Kada2k6The easiest solution I see here is to simply stick to perspective projection, and use billboarding for your sprites. This way, everything will scale correctly when you move the camera around. It's just so much less math and calculations. The size (distance) of your sprites will scale correctly and everything. This is atleast how I would have done it.
True, it's billboarding that I am looking at. The issue is: the image in Photoshop (for example) is 64 pixels by 64 pixels. How do I get a quad on the screen that is also 64 pixels by 64 pixels in perspective projection?
##### Share on other sites
Ensuring an exact size on screen with perspective projection isn't easy. It depends on the position of the camera, the fov used and the size of the viewport.
Note that OpenGL screen coordinates are in the range [-1, 1] as opposed to viewport coordinates, which are pixels. So you have to take the pixel-to-screen relation, as well as the projection matrix settings and the distance, and calculate the size of the quad in world space.
BUT, why do you need to have an exact size in the first place? If it is really necessary you could switch to orthogonal projection just for rendering that quad, but be careful with respect to depth values!
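To make the pixel-to-world relation concrete, here is a small sketch for a symmetric perspective frustum (the FOV and viewport values are made-up examples, not from the thread):

import math

def quad_size_for_pixels(pixels, dist, fovy_deg, viewport_h):
    # World-space height visible at distance `dist` for a vertical FOV of `fovy_deg`.
    visible_h = 2.0 * dist * math.tan(math.radians(fovy_deg) / 2.0)
    # World units per screen pixel, times the desired pixel size.
    return pixels * visible_h / viewport_h

# Camera at (0,0,-6) looking at the origin, 45-degree vertical FOV, 768-px-high viewport:
print(quad_size_for_pixels(64, 6.0, 45.0, 768))  # quad side length in world units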
##### Share on other sites
Quote:
Original post by Lord_EvilEnsuring an exact size on screen with perspective projection isn't easy. It depends on the position of the camera, the fov used and the size of the viewport.Note that OpenGL screen coordinates are in the range [-1, 1] as opposed to viewport coordinates which are pixels. So you have to get the pixel-to-screen relation and with as well as the projection matrix settings and the distance calculate the size of the quad in world space.BUT, why do you need to have an exact size in the first place? If it is really necessary you could switch to orthogonal projection just for rendering that quad, but be careful with respect to depth values!
I could switch to orthogonal projection; however, the boss suggests that I use perspective so I could do effects like zooming in, zooming out and rotation of the entire 2D scene (if it is a top-down game) easily.
Is it possible to do zooming/rotation of the view when using an orthogonal projection?
##### Share on other sites
Check out the Nehe tutorial about mouse selection with openGL.
It's done using Modelview marix, projection matrix, vievport and glUnproject function.
Basically the first step is finding the projection of the point that was clicked (in screen coords) in the 3D scene: it's a 3D vector. If the vector crosses a triangle, then you know that the triangle was clicked.
use the same concept to find the vectors corresponding to (0,0) and (1,1) on your screen. The vectors you find are the screen bounds in the 3d scene.
Then just move/scale your world so that the vectors you found cross the extrema of the 3D plan you want to display.
Hope that was clear...
Hope it works!
Good work
LLL
##### Share on other sites
I'm developing a 2.5D game in OpenGL using orthogonal projection. I think it is the best decision I've made since I started my project. I don't see why you shouldn't be able to rotate and zoom the view in this mode? Just use glOrtho to zoom and glRotate to rotate the view, I do it all the time. If you want you can take a look at my project in my journal here at GameDev.
##### Share on other sites
Quote:
Original post by Extrakun
Quote:
Original post by Lord_EvilEnsuring an exact size on screen with perspective projection isn't easy. It depends on the position of the camera, the fov used and the size of the viewport.Note that OpenGL screen coordinates are in the range [-1, 1] as opposed to viewport coordinates which are pixels. So you have to get the pixel-to-screen relation and with as well as the projection matrix settings and the distance calculate the size of the quad in world space.BUT, why do you need to have an exact size in the first place? If it is really necessary you could switch to orthogonal projection just for rendering that quad, but be careful with respect to depth values!
I could switch to orthogonal projection; However, the boss suggests that I use perspective so I could do effects like zooming in, zooming out and rotational of the entire 2D scene (if it is a top-down game) easily.
Is it possible to do zooming/rotational of the view if using an orthogonal projection?
Then perhaps your 'boss' doesn't quite know WTF he's talking about, because if he's suggesting that you scale by moving the camera closer to the objects or mucking about with the frustum, rather than by using a scaling matrix, he hasn't got a clue.
Just use orthogonal projection, because not only is it possible to do what he suggests in ortho (or perspective) projection, but its more correct to do it using scaling and transform matrices than what he's suggesting.
##### Share on other sites
Quote:
Original post by Ravynebut its more correct to do it using scaling and transform matrices than what he's suggesting.
On which plane of existence? Applying scaling just complicates things by f**ing up a neatly orthonormal matrix that's easy to use (and not so easy to use anymore when the applied scale affects all translations done afterwards).
Also, when did stuff ever suddenly grow because you were using your camera zoom? It doesn't; you just pick a smaller window and make it larger. Which is EXACTLY what happens when you reduce the FoV or reduce the width/height of ortho projection. You also don't need to worry about your 100x bigger stuff suddenly wildly growing beyond your clip planes or "swallowing" the camera. In fact, did you ever use scaling to zoom in a real 3D scene? Are you really trying to tell me you didn't notice the "far end" of your world disappearing more and more?
So how is it more "correct" to NOT use the method that's closest to the real world equivalent AND has no unpleasant side effects? Using scaling to simulate zoom instead of the actual growing of single objects is about the last thing I'd want to do without a gun pointed at my head. In fact, I'd even advise to never ever use scaling unless you absolutely have to.
The reason I agree that this boss has no clue? It sounds like he wants to "zoom" by moving the camera closer. Especially with perspective projection that's a dumb idea, because you would either stop pretty soon or constantly adjust the near plane (not to mention intersection with objects to worry about). Even in 2D with ortho projection you might move beyond your stuff when you actually just want to "zoom" in more.
Zooming is a trivial matter of:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
int width = vportWidth / (2.0f * zoomFactor);
int height = vportHeight / (2.0f * zoomFactor);
glOrtho(-width, width, -height, height, -1, 1);
If you want to zoom at a specific point (for example the location of the mouse pointer), you need to adjust the translation (in which case you don't have to center the view around the origin.. that's just for convenience).
Examples of this and a semi-3D version (a basically 2D "game board") including source can be found here (DSA Route and SW D6 tool.. search code for "zoom", SW D6 uses perspective and moving.. just zoom in close to see why that concept sucks)
##### Share on other sites
Thanks all for your suggestions and comments. I'll look at the source and implement an orthographic PoC for my boss to look at and discuss the issues with him.
|
Publication
Title
Bulk magnetic order in a two-dimensional $Ni^{1+}/Ni^{2+}$ ($d^{9}/d^{8}$) nickelate, isoelectronic with superconducting cuprates
Author
Abstract
The Ni$^{1+}$/Ni$^{2+}$ states of nickelates have the identical ($3d^{9}/3d^{8}$) electronic configuration as Cu$^{2+}$/Cu$^{3+}$ in the high temperature superconducting cuprates, and are expected to show interesting properties. An intriguing question is whether mimicking the electronic and structural features of cuprates would also result in superconductivity in nickelates. Here we report experimental evidence for a bulklike magnetic transition in La$_{4}$Ni$_{3}$O$_{8}$ at 105 K. Density functional theory calculations relate the transition to a spin density wave nesting instability of the Fermi surface.
Language
English
Source (journal)
Physical review letters. - New York, N.Y.
Publication
New York, N.Y. : 2010
ISSN
0031-9007
Volume/pages
104:20(2010), 4 p.
Article Reference
206403
ISI
000277945900033
Medium
E-only publicatie
|
# Integrating Fresnel Integrals with Cauchy Theorem?
In regards to the above proof, I'm a little confused as to how the last conclusion was made --
How is the fact that
$$\int_{-\infty}^{\infty}e^{-x^2}dx = \sqrt{\pi}$$
used to conclude that
$$\int_0^{\infty}\left(\cos{x^2} + i\sin{x^2}\right)dx = \int_0^{\infty}e^{ix^2}dx = \frac{\sqrt{2\pi}}{4} + i\frac{\sqrt{2\pi}}{4}\;?$$
• They're using the $\sqrt{\pi}$ equality to finish the computation of the integral on the line above. – Josh Keneda Sep 15 '14 at 1:22
• $\int_{\infty}^{\infty}$ should be $\int_{\color{red}{-}\infty}^{\infty}$ – mike Sep 15 '14 at 1:23
• So $ie^{i\pi / 4}\int_R^0 e^{-u^2}(-1/R)du = (-1/R)ie^{i\pi / 4} \int_R^0 \sqrt{\pi} du = \pi i e^{i\pi / 4}$, if I've done that correctly. How are the $\frac{\sqrt{2\pi}}{4}$ terms derived? – Ryan Yu Sep 15 '14 at 1:47
• Be careful: they're not saying that $e^{-u^2} = \sqrt{\pi}$. They're saying that the integral of $e^{-u^2}$ over the real line is $\sqrt{\pi}$. I've posted a full answer below. – Josh Keneda Sep 15 '14 at 3:20
• possible duplicate of Some way to integrate $\sin(x^2)$? – leo Jul 23 '15 at 16:11
There's a typo in their parametrization of $\gamma_3$. They have $dz = (-iRe^{i \frac{\pi}{4}}) dt$ instead of $dz = (-Re^{i \frac{\pi}{4}}) dt$. With this correction, their line after showing that the integral over $\gamma_2$ drops out should read: $$\lim_{R\rightarrow \infty} \int_0^R e^{iz^2} dz = \lim_{R\rightarrow \infty} R e^{i \pi/4} \int_0^1 e^{i(Re^{i \pi/4}(1-t))^2} dt. \quad\quad\quad (*)$$
Now, if we do their $u$-substitution, the right hand side becomes \begin{align*}\lim_{R \rightarrow \infty} R e^{i\pi/4} \int_R^0 e^{-u^2} \frac{-1}{R} du &= \lim_{R\rightarrow \infty} R e^{i\pi/4} \int_0^R e^{-u^2} \frac{1}{R} du\\ &= \lim_{R \rightarrow \infty} e^{i \pi/4} \int_0^R e^{-u^2} du\\ &= e^{i\pi/4} \int_0^\infty e^{-u^2} du\\&=e^{i\pi/4}\frac{\sqrt{\pi}}{2},\end{align*}
where the last equality follows from the fact that $e^{-u^2}$ is even and so $$\int_0^\infty e^{-u^2} du = \frac{1}{2}\int_{-\infty}^\infty e^{-u^2} du = \frac{1}{2} \sqrt{\pi}.$$
Combining this result with $(*)$, we have $$\int_0^\infty e^{iz^2} dz = e^{i\pi/4}\frac{\sqrt{\pi}}{2} = \frac{\sqrt{2\pi}}{4}+ i \frac{\sqrt{2\pi}}{4}.$$
• Not sure if you'll see this, but a brief followup -- how does this proof show that $\int_0^{\infty}\sin{x^2}dx = \frac{\sqrt{2\pi}}{4}$? Doesn't it only show that $\int_0^{\infty}\cos{x^2}dx = \frac{\sqrt{2\pi}}{4}$? (i.e. $\cos{x^2}$ is exactly the real part of $e^{iz^2}$?) – Ryan Yu Sep 16 '14 at 7:12
• Similarly, $\sin{x^2}$ is exactly the imaginary part of $e^{iz^2}$ on the real line, so taking imaginary parts gives us the $\sin{x^2}$ integral. – Josh Keneda Sep 16 '14 at 7:21
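These values can be cross-checked numerically; a sketch using SciPy's normalized Fresnel integrals (the rescaling $x=\sqrt{\pi/2}\,t$ is assumed):

import numpy as np
from scipy.special import fresnel

S, C = fresnel(1e6)            # S(z), C(z) both tend to 1/2 as z -> infinity
scale = np.sqrt(np.pi / 2.0)   # from the substitution x = sqrt(pi/2) * t
print(scale * C, scale * S)    # both ~0.6266571
print(np.sqrt(2.0 * np.pi) / 4.0)  # 0.6266571..., i.e. sqrt(2*pi)/4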
|
anonymous one year ago What is the equation of the line, in point-slope form, that passes through the points (-3, -1) and (-6, 8)? y + 8 = -3(x - 6) y - 8 = -3(x + 6) y + 8 = 3(x - 6) y - 8 = 3(x + 6)
1. TheSmartOne
The slope of two points: $$\sf (x_1,y_1)$$ and $$\sf (x_2,y_2)$$ is: $$\sf\Large Slope=\frac{y_2-y_1}{x_2-x_1}$$
2. TheSmartOne
And what is the point-slope form?
3. anonymous
idk
4. TheSmartOne
Ok, then google "point-slope form" and tell me what it is.
5. anonymous
Point slope form is also the quickest method for finding the equation of a line given two points
6. TheSmartOne
what equation does it show you that you get for point-slope form?
7. anonymous
8. TheSmartOne
Not an excuse... Read the first 2 paragraphs: http://www.purplemath.com/modules/strtlneq2.htm
9. TheSmartOne
Point slope form is: $$\sf y - y_1 = m(x - x_1)$$
10. anonymous
ok?
11. TheSmartOne
calculate the slope first...
12. anonymous
idk how
13. TheSmartOne
14. anonymous
i got 2.3 for the slope
15. TheSmartOne
How? Where is the work?
16. anonymous
17. TheSmartOne
*facepalm* You used a calculator, yet you plugged in the wrong point...
18. anonymous
my name explains me man
19. TheSmartOne
this will give you the correct slope... http://www.calculator.net/slope-calculator.html?type=1&x11=-3&y11=-1&x12=-6&y12=8&x=92&y=4
20. anonymous
now what
21. TheSmartOne
You have the slope, so now you can plug it into: $$\sf y - y_1 = m(x - x_1)$$ where m = slope and $$\sf (x_1,y_1)$$ is a point. What is the slope again?
22. anonymous
-3
23. anonymous
can u just tell me answer
24. TheSmartOne
$$\sf (-3, -1)$$ and $$\sf (-6,8)$$ are both in the form of $$\sf (x_1,y_1)$$ substitute any point into the equation.
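For what it's worth, the arithmetic can be checked with a few lines of Python (a sketch; the helper name is made up):

def point_slope(p1, p2):
    # Slope formula from above, plus the point used in point-slope form.
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)
    return m, (x1, y1)

m, (x1, y1) = point_slope((-6, 8), (-3, -1))
print(m, (x1, y1))  # -3.0 and (-6, 8), giving y - 8 = -3(x + 6)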
|
Let me start with a question for which I know the answer. Consider a symmetric integral $n\times n$ matrix $A=(a_{ij})$ such that $a_{ii}=2$, and for $i\ne j$ one has $a_{ij}=0$ or $-1$. One can encode such a matrix by a graph $G$: it has $n$ vertices, and two vertices $i,j$ are connected by an edge iff $a_{ij}=-1$.
Question 1: Which graphs correspond to positive definite $A$?
Answer 1: (Classical, well-known, and easy) $G$ is a disjoint union of $A_n$, $D_n$, $E_n$. (http://en.wikipedia.org/wiki/Root_system)
Now let me put a little twist to this. Let us also allow $a_{ij}=+1$ for $i\ne j$, and encode this by putting a broken edge between $i$ and $j$ (or an edge of different color, if you prefer).
Real question: Which of these graphs correspond to positive definite $A$?
Let me add some partial considerations which do not quite go far enough. (Some of these were in Gjergji's helpful answer.)
(1) Consider the set $R$ of shortest vectors in $\mathbb Z^n$; they have square 2. Reflections in elements $r\in R$ send $R$ to itself, and $R$ spans $\mathbb Z^n$ since it contains the standard basis vectors $e_i$. By the standard result about root lattices, $\mathbb Z^n$ is then a direct sum of the $A_n$, $D_n$, $E_n$ root lattices, and one can restrict to the case of a single direct summand.
Hence, the question equivalent to the following: what are the graphs corresponding to all possible bases of $\mathbb R^n$ in which the basis vectors are roots?
The case of $R=A_n$ is relatively easy. The roots are of the form $f_a-f_b$, with $a,b \in (1,\dots,n+1)$. Every collection $e_i$ corresponds to an oriented spanning tree on the set $(1,\dots,n+1)$. The 2-colored graph is computed from that tree. I don't see a clean description of the graphs obtained via this procedure, but it is something.
For $D_n$, similarly, a basis is described by an auxiliary connected graph $S$ on $n$ vertices with $n$ edges whose ends are labeled $+$ or $-$. The graph $G$ is computed from $S$.
And for $E_6,E_7,E_8$ there are of course only finitely many cases, but for me the emphasis is on MANY, very many.
So has anybody done this? Is there a table in some paper or book which contains all the 2-colored graphs obtained this way, or -- better still -- a clean characterization of such graphs?
(2) There is a notion of "weakly positive quadratic forms" used in the cluster algebra theory (see for example the first pages of LNM 1099 (Tame Algebras and Integral Quadratic Forms) by Ringel). And there is some kind of classification theory for them. Maybe I am mistaken, but this seems to be quite different: a quadratic form $q$ is "weakly positive" if $q\ge 0$ on the first quadrant $\mathbb Z_{\ge0}^n$. So there is no direct relation to my question, it seems.
|
Question
# Consider two rectangular surfaces perpendicular to each other with a common edge which is $1.6 \mathrm{~m}$ long. The horizontal surface is $0.8 \mathrm{~m}$ wide and the vertical surface is $1.2 \mathrm{~m}$ high. The horizontal surface has an emissivity of $0.75$ and is maintained at $400 \mathrm{~K}$. The vertical surface is black and is maintained at $550 \mathrm{~K}$. The back sides of the surfaces are insulated. The surrounding surfaces are at $290 \mathrm{~K}$, and can be considered to have an emissivity of $0.85$. Find the net rate of radiation heat transfers between the two surfaces, and between the horizontal surface and the surroundings.
Solution
Step 1
$\textbf{Given:}$
• Two perpendicular rectangles of length: $W = 1.6$ m;
• The horizontal surface $A_1$ width: $L_1 = 0.8$ m;
• The vertical surface $A_2$ height: $L_2=1.2$ m;
• The horizontal surface emissivity: $\varepsilon_1=0.75$;
• The horizontal surface temperature: $T_1 = 400$ K;
• The vertical surface emissivity: $\varepsilon_2 = 1$;
• The vertical surface temperature: $T_2=550$ K;
• The emissivity of the surrounding surfaces: $\varepsilon_3 = 0.85$;
• The temperature of the surrounding surfaces: $T_3=290$ K;
$\textbf{Required:}$
a) The net rate of radiation heat transfer between the two surfaces: $\dot Q_{12}$;
b) The net rate of radiation heat transfer between the horizontal surface and the surroundings: $\dot Q_{13}$;
|
# zbMATH — the first resource for mathematics
Some open problems and conjectures on submanifolds of finite type. (English) Zbl 0749.53037
Submanifolds of finite type were introduced by the present author more than ten years ago [Bull. Inst. Math., Acad. Sin. 7, 301-311 (1979; Zbl 0442.53050); 11, 309-328 (1983; Zbl 0498.53039)]. Since then this subject has been strongly developed, with remarkable results obtained by the author himself or in joint works. Many of the basic results were presented in his book [Total mean curvature and submanifolds of finite type (World Scientific 1984; Zbl 0537.53049)]. As one can see from the title, the present paper presents many problems and conjectures about this theory. For many of those problems some partial results are known, but they are still open and seem to be quite difficult to solve. We can quote some of them: 1) The only compact finite type surfaces in $E^3$ are the spheres. 2) Classify finite type hypersurfaces of a hypersphere in $E^{n+2}$. 3) Minimal surfaces, standard 2-spheres and products of plane circles are the only finite type surfaces in $S^3$ (imbedded standardly in $E^4$). 4) Is every $n$-dimensional non-null 2-type submanifold of $E^{n+2}$ with constant mean curvature spherical? 5) The only biharmonic submanifolds in Euclidean spaces are the minimal ones.
In my opinion, this field of submanifolds of finite type is very interesting and offers nice perspectives.
##### MSC:
53C40 Global submanifolds
53-02 Research exposition (monographs, survey articles) pertaining to differential geometry
53B25 Local submanifolds
|
# Reversing shortest paths among unit disks
Twas the night before Christmas, and throughout M.O.
Not a question was posted, not even by Joe.
Well, let me remedy that. :-)
Let the plane contain a number of pairwise disjoint open unit-radius disk obstacles. Because they are open, their bounding circles can touch tangentially. Shortest paths between two points $a$ and $b$ in the complement of the disks generally look like this:
Notice that the (red) path from $a$ to $b$ is monotonic with respect to the line containing $a$ and $b$—it meets every line orthogonal to $ab$ in a single point—and in that sense never "reverses direction." However, it is possible for a shortest path to reverse direction, e.g.:
Here the red path, which reverses direction slightly near $b$, is a bit shorter than the blue path: $$\pi + 2 + \epsilon < \frac{5}{3}\pi - \epsilon$$ $$5.14 + \epsilon < 5.24 - \epsilon$$ for sufficiently small $\epsilon$, e.g., $\epsilon = 5^\circ$.
However, I am hoping this has a positive answer:
Q1. If the disks form an infinite hexagonal penny packing, are all shortest paths (between pairs of points in the complement of the packing) monotonic with respect to the line through the endpoints?
More generally:
Q2. If the disk centers form a regular lattice of $\mathbb{R}^2$, are all shortest paths monotonic?
Here I am not requiring that the disk boundaries be tangent, but rather just that the disks form a regular array. I am wondering if it is irregularity that allows nonmonotonic geodesics.
The context is that, perhaps, in disk arrangements where the shortest paths are monotonic, the $O(n^2)$ algorithms for finding a shortest path amidst $n$ disks can be improved to subquadratic.
• Maybe I'm missing something, but isn't your second example already a counterexample to Q1? – Rodrigo A. Pérez Dec 24 '13 at 15:37
• @RodrigoA.Pérez, I think that Q1 refers to the packing of the entire plane, not just a little piece of it. – Gerry Myerson Dec 24 '13 at 16:34
• @RodrigoA.Pérez: Gerry is correct. Apologies for the lack of clarity. Repaired now. – Joseph O'Rourke Dec 24 '13 at 17:56
• Consider a proof by induction. Given a point a, show that all points in a ball centered at a with a radius of r+epsilon have such a monotonic path given all point in an r-ball have such paths. – The Masked Avenger Dec 24 '13 at 19:43
• Yes, and that field of disks lacks sufficient symmetry (and also has a counterexample) for an induction proof to work. I suggest a proof by induction as one way to capitalize on the regularity of the space. To make it more real, here is a suggestion: Look at the circles intersecting the line between a and b; there is a shortest path involving only those circles and some of their neighbors. – The Masked Avenger Dec 25 '13 at 16:54
|
Search results
Search: All articles in the CJM digital archive with keyword Kac-Moody algebra
Results 1 - 2 of 2
1. CJM Online first
Kamgarpour, Masoud
On the notion of conductor in the local geometric Langlands correspondence Under the local Langlands correspondence, the conductor of an irreducible representation of $\operatorname{Gl}_n(F)$ is greater than the Swan conductor of the corresponding Galois representation. In this paper, we establish the geometric analogue of this statement by showing that the conductor of a categorical representation of the loop group is greater than the irregularity of the corresponding meromorphic connection. Keywords: local geometric Langlands, connections, cyclic vectors, opers, conductors, Segal-Sugawara operators, Chervov-Molev operators, critical level, smooth representations, affine Kac-Moody algebra, categorical representations. Categories: 17B67, 17B69, 22E50, 20G25
2. CJM 2000 (vol 52 pp. 503)
Gannon, Terry
The Level 2 and 3 Modular Invariants for the Orthogonal Algebras The '1-loop partition function' of a rational conformal field theory is a sesquilinear combination of characters, invariant under a natural action of $\SL_2(\bbZ)$, and obeying an integrality condition. Classifying these is a clearly defined mathematical problem, and at least for the affine Kac-Moody algebras tends to have interesting solutions. This paper finds for each affine algebra $B_r^{(1)}$ and $D_r^{(1)}$ all of these at level $k\le 3$. Previously, only those at level 1 were classified. An extraordinary number of exceptionals appear at level 2---the $B_r^{(1)}$, $D_r^{(1)}$ level 2 classification is easily the most anomalous one known and this uniqueness is the primary motivation for this paper. The only level 3 exceptionals occur for $B_2^{(1)} \cong C_2^{(1)}$ and $D_7^{(1)}$. The $B_{2,3}$ and $D_{7,3}$ exceptionals are cousins of the ${\cal E}_6$-exceptional and $\E_8$-exceptional, respectively, in the A-D-E classification for $A_1^{(1)}$, while the level 2 exceptionals are related to the lattice invariants of affine $u(1)$. Keywords: Kac-Moody algebra, conformal field theory, modular invariants. Categories: 17B67, 81T40
|
# Calculating information loss in a signal after smoothing?
I have a signal. I applied Gaussian smoothing to it, and then for baseline reduction I applied a Tophat filter to the smoothed version.
This is the original signal:
This is the final signal:
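For reference, the preprocessing described above might look like this in Python (a sketch; the sigma, structuring-element size, and input file are assumed values, not taken from the post):

import numpy as np
from scipy.ndimage import gaussian_filter1d, white_tophat

signal = np.loadtxt("signal.txt")              # hypothetical input
smoothed = gaussian_filter1d(signal, sigma=5)  # Gaussian smoothing
final = white_tophat(smoothed, size=101)       # top-hat baseline reduction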
I read that KL divergence helps in finding the information loss between two signals, but a condition was that the elements of the two distributions must sum to 1, i.e., if there are two distributions $P$ and $Q$, then $P_i + Q_i = 1$.
So is there any other way by which I can do that ?
• The only conditions on P and Q in the KL divergence are that Q must dominate P in the probability simplex over the space the distributions are defined on. There is no requirement that the elements sum to a constant (aside from the sum over P being 1 and the sum over Q being 1). Oct 14 '13 at 15:12
## 2 Answers
Information loss and KL Divergence are functions that describe the behavior of random variables, in other words a set of random signals from some distribution. When you only have one signal, like you do in this case, calculating the information loss doesn't make sense. You can calculate mean squared error like the other comment suggests.
• Indeed. Good you pointed that the KL is "Distance" between distributions and not between signals.
– Royi
Jun 30 '18 at 14:13
For measuring the difference between a signal and an estimation/modification of that signal I usually just take the Mean Square Error or Root Mean Square Error. It is simple to calculate and provides a single metric to describe how far your signal deviates (or technically, variates) from the original.
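A minimal sketch of those two metrics in plain NumPy:

import numpy as np

def mse(a, b):
    # Mean squared error between two equal-length signals.
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.mean((a - b) ** 2)

def rmse(a, b):
    return np.sqrt(mse(a, b))

# e.g. rmse(original, final) summarizes the deviation in one number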
|
# Taking stalk of a product of sheaves
Let $(\mathscr{F}_\alpha)_\alpha$ be a family of sheaves on $X$, and $\prod_\alpha\mathscr{F}_\alpha$ the product sheaf. If $x\in X$, is it true that $$\left(\prod_\alpha\mathscr{F}_\alpha\right)_x\simeq\prod_\alpha(\mathscr{F}_\alpha)_x \ ?$$ I think $(\oplus_\alpha\mathscr{F}_\alpha)_x\simeq\oplus_\alpha(\mathscr{F}_\alpha)_x$ may be true, but not the product sheaf.
There is always a canonical map $(\prod_{\alpha} F_{\alpha})_x \to \prod_{\alpha} (F_{\alpha})_x$. But it doesn't have to be injective, even for very nice spaces $X$ and sheaves $F_{\alpha}$. Take $X=\mathbb{R}$ and $F_{\alpha}$ the sheaf of continuous function for $\alpha \in \mathbb{N}$, and $x=0$. Let $f_{\alpha} : \mathbb{R} \to \mathbb{R}$ be a continuous function which vanishes on $]-1/(\alpha+1),+1/(\alpha+1)[$, but does not vanish at $1/{\alpha}$. Then $(f_{\alpha})_{\alpha}$ represents an element in the kernel of the canonical map, which is not trivial.
|
# Site design for upvote/downvote submissions
I don't really have any specific questions for this thread. I have been working on an independent project recently and have been learning everything from the web. Just yesterday I became aware of templating and the idea of separating business and presentational logic.
I would be very thankful for any comments regarding possible re-factorings, security vulnerabilities, improved design philosophies, etc. The purpose of this thread is mostly a sanity check and to be sure my current design and methodology is sound.
index.php
<?php
require('db.php');
$resultsPerPage = 15;$submissionCount = mysql_num_rows(mysql_query("SELECT id FROM $submissionsTableName"));$pageCount = ceil($submissionCount /$resultsPerPage);
if (isset($_GET['sort'])) if ($_GET['sort'] == 'hot' || $_GET['sort'] == 'new' ||$_GET['sort'] == 'top')
$sort =$_GET['sort'];
else
header('Location: 404.php');
else
$sort = 'hot'; switch ($sort) {
case 'hot':
$sortAlgorithm = 'submissions.id * (submissions.upvote - submissions.downvote)'; break; case 'new':$sortAlgorithm = 'submissions.id';
break;
case 'top':
$sortAlgorithm = '(submissions.upvote - submissions.downvote)'; break; } if (isset($_GET['page']))
if ($_GET['page'] <=$pageCount && $_GET['page'] >= 1)$page = $_GET['page']; else header('Location: 404.php'); else$page = 1;
$startRow = ($page - 1) * $resultsPerPage; if (isset($_GET['search'])) {
$searchArgs = explode(' ',$_GET['search']);
$argCount = count($searchArgs);
$searchSQL = '\'' .$searchArgs[0] . '\'';
for ($i = 1;$i < $argCount;$i++) {
$searchSQL .= ', \'' .$searchArgs[$i] . '\''; }$submissionQuery = mysql_query("SELECT submissions.* FROM submissions, tags, tagmap WHERE tagmap.tagID = tags.id AND (tags.text IN ($searchSQL)) AND submissions.id = tagmap.submissionID ORDER BY$sortAlgorithm DESC LIMIT $startRow,$resultsPerPage");
} else {
$submissionQuery = mysql_query("SELECT id, category, title, author, date, upvote, downvote FROM$submissionsTableName ORDER BY $sortAlgorithm DESC LIMIT$startRow, $resultsPerPage"); }$outcomeCount = mysql_num_rows($submissionQuery);$submissions = array();
while ($row = mysql_fetch_assoc($submissionQuery)) {
$upvote = "upvote";$downvote = "downvote";
$rowIP =$_SERVER['REMOTE_ADDR'];
$userIPRow = mysql_fetch_assoc(mysql_query("SELECT type FROM$votingIPsTableName WHERE submissionID = $row[id] AND commentID = 0 AND IPAddress = '$rowIP'"));
if ($userIPRow['type'] == 'upvote active')$upvote = 'upvote active';
else if ($userIPRow['type'] == 'downvote active')$downvote = 'downvote active';
$votes =$row['upvote'] - $row['downvote'];$tagsQuery = mysql_query("SELECT tags.text FROM tags INNER JOIN tagmap ON tags.id = tagmap.tagID WHERE tagmap.submissionID = $row[id]");$tags = array();
while ($tag = mysql_fetch_assoc($tagsQuery)) {
$tags[] =$tag['text'];
}
$commentsQuery = mysql_query("SELECT id FROM$commentsTableName WHERE submissionID = $row[id]");$commentCount = mysql_num_rows($commentsQuery);$submissions[] = array('submission' => $row, 'upvote' =>$upvote, 'votes' => $votes, 'downvote' =>$downvote, 'tags' => $tags, 'commentCount' =>$commentCount);
}
// Divider
$index_view = new Template('index_view.php', array( 'header' => new Template('header.php'), 'menu' => new Template('menu.php'), 'submissions' => new Template('submissions.php', array('submissions' =>$submissions)),
'pagination' => new Template('pagination.php', array('page' => $page, 'pageCount' =>$pageCount, 'resultsPerPage' => $resultsPerPage, 'outcomeCount' =>$outcomeCount, 'submissionCount' => $submissionCount, 'sort' =>$sort)),
'footer' => new Template('footer.php')
));
$index_view->render(); ?> submissions.php <div id="submissions"> <?php foreach ($this->submissions as $row): ?> <div class="submission" id="submission<?php echo$row['submission']['id']; ?>">
<div class="voteblock">
<a class="<?php echo $row['upvote']; ?>" title="Upvote"></a> <div class="votes"><?php echo$row['votes']; ?></div>
<a class="<?php echo $row['downvote']; ?>" title="Downvote"></a> </div> <div class="submissionblock"> <h3><a href="s/<?php echo$row['submission']['id']; ?>.php"><?php echo $row['submission']['title']; ?></a> (<?php echo$row['submission']['category']; ?>) - <?php echo $row['commentCount']; ?> comments</h3> <div class="tags"> <?php foreach ($row['tags'] as $tag): echo$tag . ' ';
endforeach;
?>
</div>
<span class="date"><?php echo $row['submission']['date']; ?></span> by <span class="author"><?php echo$row['submission']['author']; ?></span>
</div>
</div>
<?php endforeach; ?>
</div>
index_view.php
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Website</title>
<meta http-equiv="content-type" content="text/html; charset=UTF-8" />
<link href="styles.css" rel="stylesheet" type="text/css"/>
<link href="favicon.png" rel="shortcut icon" />
</head>
<body>
<?php
$this->header->render();
$this->menu->render();
$this->submissions->render();
$this->pagination->render();
$this->footer->render();
?>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.5.1/jquery.min.js"></script>
<script type="text/javascript" src="vote.js"></script>
<script type="text/javascript" src="prettydate.js"></script>
</body>
</html>

## 2 Answers

Well, it's good that you thought of it. And you are definitely moving in the right direction. Your code seems to be fine, but still not very readable and not very structured. Try to organize your code into classes or smaller functions. E.g. this:

if (isset($_GET['sort']))
    if ($_GET['sort'] == 'hot' || $_GET['sort'] == 'new' || $_GET['sort'] == 'top')
        $sort = $_GET['sort'];
    else
        header('Location: 404.php');
else
    $sort = 'hot';
switch ($sort) {
    case 'hot':
        $sortAlgorithm = 'submissions.id * (submissions.upvote - submissions.downvote)';
        break;
    case 'new':
        $sortAlgorithm = 'submissions.id';
        break;
    case 'top':
        $sortAlgorithm = '(submissions.upvote - submissions.downvote)';
        break;
}
This should be in a function get_sort_algorithm(), or even in a class Request or a subclass of it. This might seem useless right now, but having code in a function or class makes debugging easier and makes this code re-usable in other parts of the project.
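For illustration, such a function could look like the following sketch (the whitelist map and the 404 handling are my own suggestion, not code from the question):

function get_sort_algorithm() {
    $sort = isset($_GET['sort']) ? $_GET['sort'] : 'hot';
    // Whitelist of allowed sort keys, mapped to their ORDER BY expressions.
    $algorithms = array(
        'hot' => 'submissions.id * (submissions.upvote - submissions.downvote)',
        'new' => 'submissions.id',
        'top' => '(submissions.upvote - submissions.downvote)',
    );
    if (!array_key_exists($sort, $algorithms)) {
        // Unknown sort key: respond with a real 404 (see the note below).
        header("HTTP/1.0 404 Not Found");
        exit;
    }
    return $algorithms[$sort];
}

The map doubles as validation: any key not present in $algorithms is rejected, so the whitelist and the algorithm selection can never drift apart.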
Have you heard of or tried any MVC frameworks, or read about the MVC design pattern? This pattern describes splitting code into a Model (= data & database access), a View (= HTML templates) and a Controller (code that manipulates data and calls views for output).
Some other notes:
header('Location: 404.php');
This probably won't give you what you want. This would be a 301 redirect, not a 404 error. You'd better use header("HTTP/1.0 404 Not Found"); and configure your webserver to map error 404 to 404.php. In Apache this can be done with ErrorDocument 404 /404.php
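Put together, the not-found branch could then look like this (a sketch; whether you include 404.php directly or let Apache serve it via ErrorDocument is a matter of taste):

// Send a real 404 status instead of redirecting.
header("HTTP/1.0 404 Not Found");
include '404.php'; // render the error page in place
exit;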
if (isset($_GET['search'])) {
    $searchArgs = explode(' ', $_GET['search']);
    $argCount = count($searchArgs);
    $searchSQL = '\'' . $searchArgs[0] . '\'';
    for ($i = 1; $i < $argCount; $i++) {
        $searchSQL .= ', \'' . $searchArgs[$i] . '\'';
}
SQL injection here. Consider what will happen if someone makes a request with search='; DROP TABLE users. ALWAYS escape what you are getting from outside of your app.
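For the snippet above, a minimal fix in the same legacy mysql_* style might look like this sketch (parameter binding, discussed in the next answer, is the more robust long-term fix):

$searchArgs = explode(' ', $_GET['search']);
$escaped = array();
foreach ($searchArgs as $arg) {
    // Escape each term before it is placed into the SQL string.
    $escaped[] = "'" . mysql_real_escape_string($arg) . "'";
}
$searchSQL = implode(', ', $escaped);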
• Frameworks are a very good way to learn MVC, because good ones will force you to use it. You have to invest a little more time reading before you can hack away, but it is well worth it. – amccormack Apr 4 '11 at 1:23
## Use SQL binding
In addition to the answer above, look into SQL binding. After going through the learning cycle myself (unfettered user input into SQL, SQL escaping, and finally SQL binding), neither of the first two should be used for anything that touches user input and then goes to any database that you care about. At all.
In other words, the sooner you start using binding with your SQL, the better off you'll be. PDO ( http://php.net/manual/en/book.pdo.php ) is something you'll want to get somewhat familiar with, but PDO's API is rather verbose, so what I have done is implemented a wrapper that allows me to work like this:
$datum = query_item('select id from users where user_id = :id', array(':id'=>5));
$row_array = query_row('select * from users where user_id = :id', array(':id'=>5));
$iterable_resultset = query('select * from users where user_id = :id', array(':id'=>5));
In other words, a simplified query wrapper function to make all your sql easily bindable.
The functions are based in part on the code here:
https://github.com/tchalvak/ninjawars/blob/master/deploy/lib/data/lib_db.php
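For readers who want a starting point, here is a minimal sketch of what such a wrapper might look like on top of PDO (the connection details, the static handle and the single query_row variant are illustrative assumptions, not the linked implementation):

function query_row($sql, $bindings = array()) {
    static $pdo = null;
    if ($pdo === null) {
        // Connection parameters are placeholders for this sketch.
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    }
    $statement = $pdo->prepare($sql);
    $statement->execute($bindings); // values are bound, never interpolated
    return $statement->fetch(PDO::FETCH_ASSOC);
}

// Usage, mirroring the examples above:
$row_array = query_row('select * from users where user_id = :id', array(':id' => 5));

The static handle just keeps the example self-contained; in a real project you would inject or pool the PDO connection instead.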
Anyway, the point is, look into sql binding, you'll be glad you did every time you hear about -other- people's sql-injection problems.
## Use templating principles
One thing I notice with your code (submissions.php and index_view.php, for example) is that you're using naked PHP in your HTML. That tends to get complicated when you end up using three or four languages/principles (PHP, HTML, JavaScript, CSS) in the same page. Let me tell you, the code that I hate, the code that makes me cringe when I see I have to debug it, is JavaScript code with PHP intermixed on an HTML page working with CSS.

What you want to work on is separation of concerns. When it comes to PHP, understanding and harnessing the benefits of that, at least when you're new to PHP, will be greatly helped by using a template engine for a while, at least until you understand PHP enough to decide when and whether you want to skip the templating agent.
1/4 of the benefit of a templating engine will be simplified syntax, and 3/4 of the benefit will be separation of concerns. Using an MVC pattern, which @rvs mentioned, gives a similar benefit, but getting to know a template engine library will be helpful if you end up doing cleanup on someone else's code and can't fully rewrite an existing system (story of my life as a php developer).
So I suggest getting to know Smarty ( http://www.smarty.net/ ). In the beginning it should be easy to just include Smarty as a library in your projects and use its templates.
## In lieu of a templating engine
Let's suppose you don't want to get into the complexity of template engines and template-engine syntax for the moment; you should still work towards separation of concerns. The simple way to do this in PHP is to avoid any PHP except echoes in your HTML. Do all of your logic -outside- of your HTML, and just pass variables to echo, or loop over and then echo, into wherever the HTML needs them. It'll make your PHP code and your HTML code much easier to debug and much easier to redesign.
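As a tiny sketch of that discipline (variable names assumed from the code above):

<?php
// Logic first, no HTML here:
$title = htmlspecialchars($row['submission']['title']);
$commentLabel = ($commentCount == 1) ? 'comment' : 'comments';
?>
<!-- Presentation only: nothing but echoes below. -->
<h3><?php echo $title; ?> - <?php echo $commentCount; ?> <?php echo $commentLabel; ?></h3>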
SQL binding and separation of concerns: those are the biggest things I've learned that make my life easier when developing PHP.
|
Role of HSP70 in response to (thermo)radiotherapy: analysis of gene expression in canine osteosarcoma cells by RNA-seq
Introduction
Radiotherapy resistance is one of the major obstacles in clinical cancer treatment of various tumor types, including osteosarcomas. Intrinsic resistance is caused by high levels of tumor hypoxia and the presence of cancer stem cells, which are highly radio-resistant and are responsible for tumor relapse after treatment1. Several strategies have been developed to increase the efficacy of radiotherapy, including co-treatment with chemotherapeutics and pre-treatment with hyperthermia2,3. Hyperthermia is a second-line treatment modality, used mostly in refractory tumors (breast cancer, cervix carcinoma, head and neck cancer) and in combination with chemo- and radiotherapy4. In veterinary oncology, too, hyperthermia has been used to treat different types of cancer (including (osteo)sarcomas) in companion animals, in combination with radiotherapy5,6.
Among the main proteins induced in response to hyperthermia are the heat shock proteins (HSPs)7. Several HSPs with different molecular weights and differential responses to heat have been identified so far. Among them, HSP70 has also been shown to be overexpressed in many types of human tumors7,8. The role of HSP70 and other HSPs is to assist protein folding processes and act as molecular chaperones9. Many regulatory proteins, including transcription factors, kinases and receptors, are known to be controlled by HSP709,10. Therefore, HSP70 plays an important role in maintaining cellular homeostasis and can indirectly influence gene expression (for example, by regulating protein folding of transcription factors). In response to cellular stress, the heat shock transcription factor (HSF) is activated and increases the expression of HSP7011. Interestingly, HSP70 interacts with HSF to negatively regulate this induction. Moreover, HSP70 has also been shown to play an important role in the posttranscriptional regulation of selected genes12. Thus, apart from protecting cellular proteins during stress, HSP70 might also directly or indirectly regulate gene expression.
We have previously shown that the Abrams canine osteosarcoma cell line is radiosensitized by hyperthermia pre-treatment13. Moreover, Abrams cells have low basal but strongly heat-inducible levels of HSP70. Therefore, we were interested in the gene expression analysis of Abrams cells in response to radiotherapy, hyperthermia and the combination of both, in HSP70-proficient and HSP70 knockdown Abrams cells. We used RNA sequencing technology and quantitative RT-PCR to identify down- and up-regulated factors, clonogenic cell survival and proliferation assays to measure response to treatment, and an apoptosis/necrosis assay to investigate cell death after treatment.
Methods
Cell culture
Abrams cells were obtained from Prof. Robert Rebhun (University of California, Davis, California, USA) and cultured as described before13. Cells were routinely screened for Mycoplasma contamination.
Hyperthermia treatment
Cells were treated in a humidified incubator with 5% CO2. The total treatment time with hyperthermia was 90 min, including a 30-min heating phase (37–42 °C) and 60 min at 42 °C (plateau). Directly after hyperthermia (the time interval between hyperthermia and irradiation was approximately 10 min), cells were transferred to the radiation facility and irradiated. Prior to this study, the heating profile of the incubator used for hyperthermia treatments was characterized thoroughly by measurements of the temperature of the cell culture dish (standard 10-cm dish used for clonogenic assay)13. Based on data from three runs, thermal doses were calculated according to the cumulative equivalent minutes at 43 °C iso-effect model (CEM43)14. The mean thermal dose (CEM43) was found to be 10.9 ± 0.8 min. A script written in Python was used to implement the following calculation:
$$\mathrm{CEM}_{43} = \sum_{i=1}^{n} t_{i} \cdot R^{(43 - T_{i})} \quad \text{with } R = \begin{cases} 0.25 & \text{if } T_{i} < 43\,^{\circ}\text{C} \\ 0.5 & \text{otherwise} \end{cases}$$
where $t_i$ denotes the duration of time interval $i$, and $T_i$ denotes the average temperature in °C during said time interval. As indicated in Nytko et al., the measurement was acquired with a Bowman probe (SPEAG/IT'IS, Zurich, Switzerland), and acquisitions were taken once per second13.
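As a worked illustration of the iso-effect conversion (a numerical example added here for clarity, not a measurement from the study): one minute at a constant 42 °C lies below the breakpoint, so it contributes $1 \cdot 0.25^{(43-42)} = 0.25$ equivalent minutes at 43 °C, whereas one minute at 44 °C would contribute $1 \cdot 0.5^{(43-44)} = 2$ equivalent minutes.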
6-MV photon radiation at a dose rate of 600 MU/min (approx. 1 Gy/min) was used, delivered by a linear accelerator (Clinac iX, Varian Medical Systems, Palo Alto, USA). Appropriate dose build-up was ensured by penetrating layers of plexiglas and the set-up was verified dosimetrically by a medical physicist.
siRNA transfection and treatment
Cells were seeded in a 6-well plate at a density of 50,000 cells/well the day before transfection (adapted from Nytko et al.)13. Cells were transfected with 25 pmol of custom-made siRNA against canine HSP70 (siRNA sequences in Supplementary Information 1; Thermo Fisher Scientific) and with Silencer Select Negative Control No. 1 siRNA (Cat. No. 4390843, Thermo Fisher Scientific) using Lipofectamine RNAiMAX Transfection Reagent (Thermo Fisher Scientific). 24 h after transfection, the medium was exchanged and cells were treated with hyperthermia, irradiation or the combination of both. RNA and protein lysate samples were collected 24 h after treatment with irradiation (48 h after transfection, Fig. 1). Three independent experiments were performed, resulting in a total of 24 samples (8 samples per experiment).
RNA isolation and quantitative RT-PCR
RNA was isolated using the RNeasy Mini Kit according to the manufacturer's protocol (Qiagen). Reverse transcription (RT) was performed using the iScript cDNA Synthesis Kit (BioRad) according to the manufacturer's protocol (adapted from Ettlin et al.)15. KAPA PROBE FAST qPCR Kit Master Mix (2×) Universal reagents (Kapa Biosystems) with 2 μL cDNA per reaction were used for quantitative polymerase chain reaction (qPCR). Samples were run in duplicate using the CFX384 Touch Real-Time PCR detection system (BioRad, Hercules, CA, USA). The primers were customized TaqMan gene expression assays (Thermo Fisher Scientific; Supplementary Information 1).
RNA sequencing and analysis
The RNA sequencing was performed at the Functional Genomics Center Zürich.
Library preparation
The method was adapted from Zatta et al.16. Briefly, RNA quality was tested with a Qubit (1.0) Fluorometer (Life Technologies, California, USA) and a Bioanalyzer 2100 (Agilent, Waldbronn, Germany). Samples with a 260 nm/280 nm ratio between 1.8 and 2.1 and a 28S/18S ratio between 1.5 and 2 were used for subsequent steps with the TruSeq RNA Sample Prep Kit v2 (Illumina, Inc, California, USA). Total RNA samples (600 ng) were poly A enriched and then reverse-transcribed into double-stranded cDNA. The cDNA samples were fragmented, end-repaired and polyadenylated before ligation of TruSeq adapters containing the index for multiplexing. Fragments containing TruSeq adapters on both ends were selectively enriched with PCR. The Qubit (1.0) Fluorometer and the Caliper GX LabChip GX (Caliper Life Sciences, Inc., USA) were used to validate the quality and quantity of the enriched libraries. The product is a smear with an average fragment size of approximately 260 bp. The libraries were normalized to 10 nM in Tris–Cl 10 mM, pH 8.5 with 0.1% Tween 20.
Cluster generation and sequencing
The method was adapted from Zatta et al.16. The TruSeq SR Cluster Kit HS4000 (Illumina, Inc, California, USA) was used for cluster generation with 2 nM of pooled normalized libraries on the cBOT. Sequencing was performed on the Illumina HiSeq 4000, single-end 125 bp, using the TruSeq SBS Kit HS4000 (Illumina, Inc, California, USA).
Data analysis and statistics
Reads were quality-checked with FastQC. Reads at least 20 bases long, with a tail phred quality score greater than 15 and an overall average phred quality score greater than 20, were aligned to the reference genome and transcriptome (FASTA and GTF files, respectively, downloaded from Ensembl, genome build Canis familiaris 3.1) using STAR v.2.5.4 with default settings for single-end reads17.
Distribution of the reads across genomic isoform expression was quantified using the R package GenomicRanges from Bioconductor version 3.1018. Differentially expressed genes were identified using the R package edgeR from Bioconductor version 3.1019.
Sample clustering is shown using principal component analysis (PCA) in R version 3.6.0. For the PCA plot, we transformed the counts previously calculated using the regularized logarithm transformation (rlog function) from the DESeq2 (1.24.0) package. The heatmaps were generated using the R function heatmap.2. Venn diagrams were created using BioVenn (http://www.biovenn.nl)20. Functional classification and statistical enrichment tests were performed using PANTHER software (http://pantherdb.org)21.
Statistics
Statistical analysis was performed using GraphPad Prism8 (GraphPad Software, Inc., San Diego, California, USA). One-column t test with Bonferroni correction was used to compare the treatment-group to the control group (set as 1), unpaired t-test was used to compare two treatment groups to each other. P values below 0.05 were considered statistically significant and denoted with a star (*), two stars (**) were used for p values below 0.01 and three stars (***) were used for p values falling below 0.001.
Immunoblotting, clonogenic, proliferation and apoptosis/necrosis assay are described in Supplementary Information 1.
Results
Effect of HSP70 downregulation on gene expression in Abrams cells
First, we checked the level of HSP70 protein downregulation in Abrams cells. Initially, two different siRNA sequences targeting HSP70 were tested, which resulted in the same knockdown efficiency (Supplementary Fig. 1). Therefore, for the RNA sequencing experiment we proceeded only with siRNA No. 1. The protein levels of HSP70 were strongly influenced by the hyperthermia and combined treatment in negative control siRNA transfected cells, but the protein was absent in control (non-treated), hyperthermia- and combination-treated HSP70 knockdown cells (Fig. 2A). Additionally, we measured HSP70 mRNA, which was significantly downregulated in knockdown cells (Fig. 2B). The comparison of gene expression between Abrams cells transfected with negative control siRNA and siRNA targeting HSP70 revealed 474 differentially expressed genes (p < 0.01 and log ratio > 0.5; the 500 most significant genes in Supplementary Table 1). Functional classification according to biological processes revealed that cellular processes, followed by metabolic processes, were the two functional categories most populated by the genes identified in the differential analysis (Supplementary Fig. 2A). The enrichment test revealed that cellular component organization or biogenesis was the most significantly enriched biological process (Supplementary Fig. 2B). The top 19 differentially expressed genes and HSP70 in experimental triplicates are shown in a heatmap (Fig. 3A). The most significantly downregulated gene in the knockdown cells is glycoprotein nmb (GPNMB), followed by lymphatic vessel endothelial hyaluronan receptor 1 (LYVE1) and matrix metallopeptidase 1 (MMP1). Interestingly, among the top 20 most significantly altered genes, only two are significantly induced in the knockdown cells, namely RAS protein activator like 2 (RASAL2) and very low-density lipoprotein receptor (VLDLR). As expected, HSP70 was significantly downregulated in the knockdown cells (log2 ratio −2.86; p = 0.000744). We confirmed the down- and up-regulation of selected genes by quantitative real-time polymerase chain reaction (qRT-PCR) in an independent experiment (Fig. 3B).
Principal component analysis was performed to reveal the distances between individual samples in control and knockdown cells exposed to all treatment types (Fig. 4). It shows that negative control siRNA transfected cells and knockdown cells separate from each other, as do the hyperthermia (HT) and thermoradiotherapy (HTRT) treatments from non-treated cells. Radiotherapy-treated cells (RT), however, do not separate from the respective non-treated cells in either control or knockdown cells.
In the negative control siRNA transfected cells, radiotherapy alone resulted in only 25 differentially expressed genes (p < 0.01 and log ratio > 0.5; the 500 most significant genes in Supplementary Table 2). In the HSP70 knockdown cells, irradiation alone resulted in even fewer, only 12 differentially expressed genes (p < 0.01 and log ratio > 0.5; Supplementary Table 3). In both comparisons, control (non-treated) cells were compared to irradiated cells. In both sets of comparisons, the false discovery rate (FDR) was close to 1; therefore we did not proceed with the analysis of these genes, and further comparisons will focus mainly on hyperthermia- and thermoradiotherapy-induced genes.
Hyperthermia
Hyperthermia alone resulted in 237 differentially expressed genes in negative control siRNA transfected cells and 131 in the HSP70 knockdown cells (p < 0.01 and log ratio > 0.5; the 500 most significant genes in Supplementary Tables 4 and 5), when control (non-treated) cells were compared to hyperthermia-treated cells.
Interestingly, the combined treatment, thermoradiotherapy, resulted in 638 differentially expressed genes in negative control siRNA transfected cells and 349 in HSP70 knockdown cells, respectively (p < 0.01 and log ratio > 0.5; the 500 most significant genes in Supplementary Tables 6 and 7), when control (non-treated) cells were compared to thermoradiotherapy-treated cells.
In general, in the HSP70 knockdown cells, fewer genes were significantly differentially expressed (for the given log ratio and significance level) than in the negative control siRNA transfected cells in response to the three treatment modalities (radiation, hyperthermia, thermoradiotherapy), with thermoradiotherapy resulting in the highest number of differentially expressed genes in both groups (negative control siRNA and HSP70 knockdown cells). However, when we directly compared the radiotherapy-treated cells transfected with negative control siRNA to irradiation-treated HSP70 knockdown cells, 945 differentially expressed genes were identified (p < 0.01 and log ratio > 0.5; the 500 most significant genes in Supplementary Table 8). Comparison of hyperthermia-treated negative control siRNA cells to hyperthermia-treated HSP70 knockdown cells revealed 1,367 differentially expressed genes (p < 0.01 and log ratio > 0.5; the 500 most significant genes in Supplementary Table 9). Finally, the comparison between thermoradiotherapy-treated negative control siRNA cells and thermoradiotherapy-treated HSP70 knockdown cells resulted in 3,083 differentially expressed genes (p < 0.01 and log ratio > 0.5; the 500 most significant genes in Supplementary Table 10). Many of these genes are the result of the knockdown itself. The combined treatment had the biggest impact on gene expression in Abrams cells in comparison to the single treatments, also when we directly compared the HSP70-proficient and -knockdown cells.
Identification of HSP70-dependent genes regulated by the (thermo)radiotherapy
In order to identify genes differentially expressed in response to thermoradiotherapy and dependent on the presence of HSP70, Venn diagrams were created (Fig. 5). In the first comparison, the differentially expressed genes in negative control siRNA transfected cells and HSP70 knockdown cells were compared (top 200 most significant genes in hyperthermia versus non-treated and thermoradiotherapy versus non-treated, annotated genes). In negative control siRNA cells, 75 genes were common between hyperthermia- and thermoradiotherapy-treated cells (Fig. 5A). In the knockdown cells, 107 genes were common between the two treatment groups (Fig. 5B). As a next step, we compared the genes induced by hyperthermia only in negative control siRNA cells versus knockdown cells in order to identify genes dependent on the presence of HSP70. This comparison resulted in 24 genes differentially expressed in knockdown cells only (Fig. 5C and Supplementary Table 11). Genes involved in cellular processes were the most represented among these 24 genes (11 genes). In thermoradiotherapy-treated cells, 66 genes were differentially expressed in knockdown cells only (Fig. 5D and Supplementary Table 11). Also in this group, genes involved in cellular processes (38) were the most represented, followed by metabolic processes (13) and biological regulation (13). In the radiotherapy-alone treated group (RT), only a few genes were significantly induced in negative control siRNA and HSP70 knockdown cells, 25 and 12, respectively (Supplementary Tables 2 and 3). Among them (annotated genes), 2 were common between both groups, 16 were expressed in negative control siRNA cells only and 9 in knockdown cells only (Supplementary Fig. 4).
Effect of HSP70 knockdown on cell proliferation and clonogenicity
Proliferation of both negative control siRNA and HSP70 knockdown cells was inhibited by treatment with thermoradiotherapy (Fig. 6A); the level of inhibition was significantly lower in negative control siRNA cells in comparison to the siHSP70 cells. When clonogenic cell survival was measured after treatment, the clonogenicity of both negative control siRNA and knockdown cells was significantly reduced by the thermoradiotherapy treatment in comparison to the respective controls (42 °C versus 37 °C at 3 Gy and 6 Gy, Fig. 6B). Interestingly, there was a significant difference between negative control siRNA cells and knockdown cells treated with 6 Gy at 37 °C, but not between cells pre-treated with hyperthermia (42 °C) and treated with radiotherapy afterwards. Moreover, we analyzed the levels of apoptosis and necrosis in negative control siRNA and HSP70 knockdown Abrams cells 96 h after the end of treatment. We observed a small, although not significant, increase in the induction of apoptosis and necrosis in response to radiotherapy and thermoradiotherapy in both negative control siRNA and HSP70 knockdown cells (Fig. 6C,D).
Discussion
Moreover, other human and canine osteosarcoma cell lines could be used to investigate whether the effects of thermoradiotherapy and HSP70 knockdown we observed are specific for this type of cancer. Interestingly, although radiotherapy alone did not induce strong gene expression changes, the combined thermoradiotherapy induced stronger changes than hyperthermia alone, suggesting that pre-treatment with hyperthermia acts on the cellular level to sensitize cells to radiotherapy. Indeed, it has previously been shown that hyperthermia can affect DNA repair pathways in cancer cells32. Functionally, we observed significant changes in proliferation but not in clonogenicity between HSP70-proficient and -deficient cells in response to thermoradiotherapy. Membrane-bound HSP70 has previously been shown to play an important role in the mechanism of radiosensitization33. Since we used cells with low total (not only membrane-bound) HSP70 in our study, this could explain why there were no differences in clonogenic cell survival in response to thermoradiotherapy between HSP70-proficient and -deficient cells. Moreover, other studies on the role of HSP70 in radiation sensitivity indicate that these effects are mediated by the interaction of surface HSP70 with immune cells and the tumor microenvironment, an aspect which we did not investigate in our study34,35.
In summary, knockdown of HSP70 induces gene expression changes in response to hyperthermia and thermoradiotherapy in a canine osteosarcoma cell line. Further studies on the role of HSP70-dependent genes in the mechanism of thermoradiosensitization could pave the way to novel, combinatorial treatment options.
Data availability
The datasets generated during and/or analysed during the current study are available in the ArrayExpress repository, https://www.ebi.ac.uk/arrayexpress/experiments/E-MTAB-8652/.
References
1. Tamari, K. et al. Identification of chemoradiation-resistant osteosarcoma stem cells using an imaging system for proteasome activity. Int. J. Oncol. 45(6), 2349–2354 (2014).
2. Wang, H. et al. Cancer radiosensitizers. Trends Pharmacol. Sci. 39(1), 24–48 (2018).
3. Kampinga, H. H. Cell biological effects of hyperthermia alone or combined with radiation or drugs: a short introduction to newcomers in the field. Int. J. Hypertherm. 22(3), 191–196 (2006).
4. Peeken, J. C., Vaupel, P. & Combs, S. E. Integrating hyperthermia into modern radiation oncology: what evidence is necessary?. Front. Oncol. 7, 132 (2017).
5. Dressel, S. et al. Novel hyperthermia applicator system allows adaptive treatment planning: preliminary clinical results in tumour-bearing animals. Vet. Comp. Oncol. 16(2), 202–213 (2018).
6. Gillette, E. L. Hyperthermia effects in animals with spontaneous tumors. Natl. Cancer Inst. Monogr. 61, 361–364 (1982).
7. Creagh, E. M., Sheehan, D. & Cotter, T. G. Heat shock proteins–modulators of apoptosis in tumour cells. Leukemia 14(7), 1161–1173 (2000).
8. Murphy, M. E. The HSP70 family and cancer. Carcinogenesis 34(6), 1181–1188 (2013).
9. Mayer, M. P. & Bukau, B. Hsp70 chaperones: cellular functions and molecular mechanism. CMLS 62(6), 670–684 (2005).
10. Rosenzweig, R. et al. The Hsp70 chaperone network. Nat. Rev. Mol. Cell Biol. 20(11), 665–680 (2019).
11. Abravaya, K. et al. The human heat shock protein hsp70 interacts with HSF, the transcription factor that regulates heat shock gene expression. Genes Dev. 6(7), 1153–1164 (1992).
12. Kishor, A. et al. Hsp70 is a novel posttranscriptional regulator of gene expression that binds and stabilizes selected mRNAs containing AU-rich elements. Mol. Cell. Biol. 33(1), 71–84 (2013).
13. Nytko, K. J. et al. Cell line-specific efficacy of thermoradiotherapy in human and canine cancer cells in vitro. PLoS ONE 14(5), e0216744 (2019).
14. Sapareto, S. A. & Dewey, W. C. Thermal dose determination in cancer therapy. Int. J. Radiat. Oncol. Biol. Phys. 10(6), 787–800 (1984).
15. Ettlin, J. et al. Analysis of gene expression signatures in cancer-associated stroma from canine mammary tumours reveals molecular homology to human breast carcinomas. Int. J. Mol. Sci. 18(5), 1101 (2017).
16. Zatta, S. et al. Transcriptome analysis reveals differences in mechanisms regulating cessation of luteal function in pregnant and non-pregnant dogs. BMC Genomics 18(1), 757 (2017).
17. Dobin, A. et al. STAR: ultrafast universal RNA-seq aligner. Bioinformatics (Oxford, England) 29(1), 15–21 (2013).
18. Lawrence, M. et al. Software for computing and annotating genomic ranges. PLoS Comput. Biol. 9(8), e1003118 (2013).
19. Robinson, M. D., McCarthy, D. J. & Smyth, G. K. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics (Oxford, England) 26(1), 139–140 (2010).
20. Hulsen, T., de Vlieg, J. & Alkema, W. BioVenn - a web application for the comparison and visualization of biological lists using area-proportional Venn diagrams. BMC Genomics 9, 488 (2008).
21. Mi, H. et al. PANTHER version 14: more genomes, a new PANTHER GO-slim and improvements in enrichment analysis tools. Nucleic Acids Res. 47(D1), D419–D426 (2019).
22. Overgaard, J. Fractionated radiation and hyperthermia: experimental and clinical studies. Cancer 48(5), 1116–1123 (1981).
23. Gillette, E. L. Large animal studies of hyperthermia and irradiation. Cancer Res. 39(6 Pt 2), 2242–2244 (1979).
24. Elming, P. B. et al. Hyperthermia: the optimal treatment to overcome radiation resistant hypoxia. Cancers 11(1), 1 (2019).
25. Doan, N. B. et al. Identification of radiation responsive genes and transcriptome profiling via complete RNA sequencing in a stable radioresistant U87 glioblastoma model. Oncotarget 9(34), 23532–23542 (2018).
26. Young, A. et al. RNA-seq profiling of a radiation resistant and radiation sensitive prostate cancer cell line highlights opposing regulation of DNA repair and targets for radiosensitization. BMC Cancer 14, 808 (2014).
27. Wahba, A., Lehman, S. L. & Tofilon, P. J. Radiation-induced translational control of gene expression. Translation (Austin, Tex) 5(1), e1265703 (2017).
28. Sun, L. et al. Transcriptome response to heat stress in a chicken hepatocellular carcinoma cell line. Cell Stress Chaperones 20(6), 939–950 (2015).
29. Cha, J. et al. Electro-hyperthermia inhibits glioma tumorigenicity through the induction of E2F1-mediated apoptosis. Int. J. Hypertherm. 31(7), 784–792 (2015).
30. Court, K. A. et al. HSP70 inhibition synergistically enhances the effects of magnetic fluid hyperthermia in ovarian cancer. Mol. Cancer Ther. 16(5), 966–976 (2017).
31. Mahat, D. B. et al. Mammalian heat shock response and mechanisms underlying its genome-wide transcriptional regulation. Mol. Cell 62(1), 63–78 (2016).
32. Oei, A. L. et al. Effects of hyperthermia on DNA repair pathways: one treatment to inhibit them all. Radiat. Oncol. (London, England) 10, 165 (2015).
33. Murakami, N. et al. Role of membrane Hsp70 in radiation sensitivity of tumor cells. Radiat. Oncol. 10, 149 (2015).
34. Rothammer, A. et al. Increased heat shock protein 70 (Hsp70) serum levels and low NK cell counts after radiotherapy: potential markers for predicting breast cancer recurrence?. Radiat. Oncol. (London, England) 14(1), 78 (2019).
35. Multhoff, G. et al. The role of heat shock protein 70 (Hsp70) in radiation-induced immunomodulation. Cancer Lett. 368(2), 179–184 (2015).
Acknowledgements
Financial support was received from the CABMM Start-up grant (to Katarzyna J. Nytko). The authors would like to thank Maria Domenica Moccia for the technical help with RNA sequencing study and Emmanuel Karouzakis for the help with data analysis. The laboratory work was (partly) performed using the logistics of the Center for Clinical Studies at the Vetsuisse Faculty of the University of Zurich.
Author information
Authors
Contributions
K.J.N designed and performed the experiments, analyzed the data, and wrote the manuscript. P.T.H. performed the experiments and edited the manuscript. G.R. analyzed the data and edited the manuscript. M.S.W. performed the experiment and analyzed the data. C.R.B supervised the project and edited the manuscript.
Corresponding author
Correspondence to Katarzyna J. Nytko.
Ethics declarations
Competing interests
The authors declare no competing interests.
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Reprints and Permissions
Nytko, K.J., Thumser-Henner, P., Russo, G. et al. Role of HSP70 in response to (thermo)radiotherapy: analysis of gene expression in canine osteosarcoma cells by RNA-seq. Sci Rep 10, 12779 (2020). https://doi.org/10.1038/s41598-020-69619-2
|
The demand function for a Christmas music CD is given by where $x$ (measured in units of a hundred) is the quantity demanded per week and $p$ is the unit price in dollars.
(a) Evaluate the elasticity at 10. $E(10)=$
(b) Should the unit price be lowered slightly from 10 in order to increase revenue?
(c) When is the demand unitary? $p=$ dollars
(d) Find the maximum revenue. Maximum revenue = hundreds of dollars
|
# Area of Union of n circles
I am trying to calculate the area of the union of $n$ circles in a plane when it is known that all circles have equal radii and the centers of all $n$ circles are known. I was trying to follow the set-theory approach (inclusion-exclusion principle), where we know the formula for the union of $n$ sets. I was using an operator Ar() which gives the area, i.e. Ar(A) gives me the area of A. I first tried to find out which circle intersects which other circle(s) with the help of $D<2R$ ($D$ = distance between the centers of the two circles), then I tried to calculate the area of intersection between them pairwise and hence find the area of the union. But I am getting stuck for $n>4$. Can anyone provide a solution to this (a solution by the set-theory approach is necessary)? Thanks in advance
For the inclusion-exclusion approach, you need to be able to calculate for each set $S$ of circles the area $A_S$ of their intersection. Consider a set of circles, all of radius $1$, whose intersection is nonempty. The intersection will be a convex region bounded by $k$ arcs (where $k$ might be less than the number of circles); ignoring trivial cases, I'll suppose $k\ge 2$. Let $P_i = (x_i, y_i), i=0 \ldots k$, be the endpoints of the arcs, taken counterclockwise, with (for convenience) $P_0 = P_k$. Note that the area of the "cap" cut off from a circle of radius $1$ by a chord of length $L$ is $f(L) = \arcsin(L/2) - L \sqrt{4 - L^2}/4$, while the area of the polygon with vertices $P_i$ is $\sum_{i=1}^k (x_{i-1} - x_i)(y_{i-1}+y_i)/2$. So the total area of the intersection is $$A_S = \sum_{i=1}^k \left( f\left(\sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2}\right) + \frac{(x_{i-1} - x_i)(y_{i-1}+y_i)}2 \right)$$
Thank you for your answer. However I couldn't get the last part where you have mentioned the formula for $A_S$. Can you please elaborate on that bit? Thanks in advance. Wouldn't the area of intersection be the area of the polygon $+$ $\sum_{i=1}^k(\text{area of cap})_{i}$? – Saptarshi Jan 6 '12 at 9:40
Yes, that's what I wrote: the area of the polygon is the sum of $(x_{i-1}-x_i)(y_{i-1}+y_i)/2$, and the sum of the areas of the caps is the sum of $f(\sqrt{(x_i-x_{i-1})^2+(y_i-y_{i-1})^2})$ – Robert Israel Jan 6 '12 at 17:28
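For completeness, here is a short derivation of $f$ (standard circle geometry, added for clarity; it is not part of the original exchange). Let $\theta$ be the half-angle subtended by a chord of length $L$ in a unit circle, so $L = 2\sin\theta$. The area of the cap cut off by the chord is $\theta - \sin\theta\cos\theta$. Substituting $\theta = \arcsin(L/2)$ and $\sin\theta\cos\theta = \frac{L}{2}\cdot\frac{\sqrt{4-L^2}}{2} = \frac{L\sqrt{4-L^2}}{4}$ recovers $f(L) = \arcsin(L/2) - L\sqrt{4-L^2}/4$, the expression used in the answer.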
|
# Tag Info
3
Sounds like you haven't installed build support for Linux binaries in Unity. The way to do this in Unity can be found here: Open the Hub. Select Installs. Find the Editor you want to add the components to. Click the three dots to the right of the version label, then select Add Modules. (NOTE: If you didn’t install the Editor via the Hub, you will not see ...
2
Let's assume that although it takes time to change our momentum, we're allowed to change our acceleration vector instantaneously. (If we have to model gradually changing acceleration over time, computing the optimal intercept trajectory gets much harder, so I'd argue it's not worth it - the trajectory we'll get this way still has $C_1$ continuity, so it ...
2
Awake() and Start() are called only once per object for reasons unrelated to Singletons. The single-instance part of the singleton is fine, and it's possible to make it thread safe and control how many things it's keeping in memory because it never dies. It's the global accessibility that a lot of design-pattern people dislike. Having a singleton in your code ...
2
When you have a data-structure which fits your design workflow but is bad for runtime, then you always have the option to convert it into a more runtime-appropriate data structure at game startup. Your current architecture seems to allow you to quickly see which rank unlocks what content. But you might also require the reverse: On which rank is a specific ...
2
Run this from an Editor script to copy your AudioSource to the prefab: UnityEditorInternal.ComponentUtility.CopyComponent(source.GetComponent<AudioSource>()); UnityEditorInternal.ComponentUtility.PasteComponentAsNew(source.ebullet); This does not work at runtime, but you can update your prefabs in the Editor and then they'll be good to go at runtime....
1
Well, I got a few tips for you to try to get it working: I'm not really sure what the exact issue is. I think it might actually be a bug in Unity. Anyway, it deals with the SpriteRenderer and the order in which it is applied to the gameobject (relative to the Particle System). Try to remove the SpriteRenderer, and then add it (or just create a whole ...
1
Click on the image asset in your "Project" tab and check its import settings in the "Inspector" tab. For pixel art you usually want the following settings: Texture Type: Sprite (2D and UI) Filter Mode: Point Compression: None
1
Click on your texture; the texture import settings appear in the Inspector window. Look for Filter Mode: bilinear, trilinear, point. Try changing it to Point (no filter) and that should eliminate any distortion. And what exactly do you mean by distortion? It seemed fine for me when I imported with point filter mode. Anyway, let me know if it worked.
1
An elegant solution popped into my head: The core of it is this code that should run each frame: Vector3 desiredVector = (destination - transform.position).normalized; Vector3 currentVector = body.velocity.normalized; accelerationVector = desiredVector - currentVector; This code picks an acceleration vector that will counteract momentum that is moving us ...
1
This type of thing is often implemented using some form of the Singleton pattern. Here's one version, fairly similar to what you're doing now, just without the redundant double-search for the object, then the component, when all you really wanted was the component: public class Player: MonoBehaviour { static Player _instance; public static Player ...
1
For minimum loading times, you can have both the world map and the battleground in the same scene as two different gameObjects. One option would be to have both of them in the same space as two different gameObjects and just deactivate the one you are not currently playing on. You can also keep both active and use rendering layers to tell the camera which ...
1
The main advantage of Addressables is that they make it very easy for your game to acquire a certain asset by name at runtime. This used to be pretty annoying with asset bundles. First you had to know in which asset bundles that asset was hiding in. Then you had to find out if this asset bundles was already loaded, and when it wasn't you had to load it first....
1
Some special prefabs or gameobjects need a certain parent object. For example, any Unity UI object needs to be parented directly or indirectly with a canvas object. NGUI is the same, and any UI object needs to be parented with an object with a UIRoot component on the same layer. In newer versions of NGUI it's handled, but in older versions Unity doesn't know what is the ...
1
After DMGregory's comment, I'm rethinking my answer. You are already on the right way, actually. Let's say some other script assigns target (the player) to your NavMeshAgent (enemy). It can be done through a public variable or through a method. Once your target is assigned, you want to check what distance is left to it and whether it is smaller than some threshold (...
1
It looks like you want to do something like this, using PlayerPrefs to store the state between runs of the game. (This is insecure, but so is using the local system time - if a player wants to cheat, they can just change their local clock anyway, so we might as well keep things simple) using System.Collections; using UnityEngine; using UnityEngine.UI; using ...
1
Your if-else block for constraining the position of the stick looks like this:

if (currentPoint.y >= 2.7f) { currentPoint.y = 2.7f; }
if (currentPoint.x >= 2.5f) { currentPoint.x = 2.5f; }
if (currentPoint.y <= -2.7f) { currentPoint.y = -2.7f; }
if (currentPoint.x <= -2.5f) { currentPoint.x = -2.5f; }

This is going to lock the stick in a square region, not ...
1
You don't need to reset the mouse axes. The mouse axes (e.g. Input.GetAxis("Mouse X")) reflect mouse movement over the last frame. If you do not move the mouse, the value returned from this axis will be 0. You need to reset your fields horizontalRotation and verticalRotation in your OnTriggerEnter().
1
Following @DMGregory's excellent feedback, I wrote the following class to test: public class MaterialTextureTest : MonoBehaviour { public Material material = null; public List<string> textureNames = new List<string>(); public List<Texture2D> textures = new List<Texture2D>(); // Start is called before the first frame ...
|
# Are these two scenarios equivalent ? (random walks on chessboard)
The random walks start at $(x,y)=(0,0)$ on an infinite chessboard which covers the whole upper plane. Let's say $(0,0)$ is white.
Random walk 1: At every step, I always go up one square, and either one square to the left or to the right with probability $1/2$ for each. (so the displacement occurs only on white squares)
After many runs of random walk 1, each stopped at step $t$, the final values of $x$ (the end positions of the paths) are stored and their standard deviation $\sigma_1$ is computed.
Random walk 2: On each square of the chessboard, a random number following $N(0,1)$ is placed. An algorithm computes, for every connected path of $t$ steps on white squares, the sum of the Gaussian numbers encountered, and chooses the path whose sum is minimal. The end values of $x$ are stored and their standard deviation $\sigma_2$ is computed, for many random walks of length $t$ (the same number of samples as for random walk $1$).
Should $\sigma_1$ and $\sigma_2$ differ ?
Context: I am asking this question because I was tasked to design an algorithm which picks the path minimizing the sum of Gaussian numbers, and wanted to check my results with a simpler problem which I think is equivalent. I programmed the two algorithms, and get different values for the $\sigma_i$, although I can't see how a significant difference can be justified.
Actually, for the random walk $1$, I didn't need to even code anything, I could solve it using combinatorics too. A random path is basically picked uniformly. All paths have the same length, but there are more paths leading to an end position close to the center. Computing the probabilities I can infer $\sigma_1$ easily.
In random walk $2$, the setting seems to be the same... there can only be one path minimizing the sum, and each path has the same length, so every path has the same probability to be chosen. Using the combinatorics argument, $\sigma_2$ should be the same as $\sigma_1$
So, are my maths wrong or is my algorithm faulty ?
• The title has already been corrected, but for future reference, for the plural of scenario refer to english.stackexchange.com/questions/11775/… – David K Nov 11 '16 at 14:31
• In RW2 do you restrict the paths to simple paths or are the paths allowed to cycle (between squares with negative gaussians). Also, are the steps restricted to squares that share a corner? – A.G. Nov 12 '16 at 3:18
• @A.G Good point, no, they don't cycle. At every step, the walk always goes forward one square and then either right or left. The shape of the paths is like RW1, but the probability laws aren't the same – Evariste Nov 12 '16 at 10:43
## 2 Answers
Answering self:
$\sigma_2>\sigma_1$
The two problems are not equivalent because in random walk $1$, a random path is chosen uniformly among the possible ones, but not in random walk $2$
Indeed, in random walk $2$, despite all being the same length, the paths cross and share Gaussian numbers, especially near the center. So one single number will affect several paths at the same time (either favorably or unfavorably), and cause the walk to either avoid or be attracted to the center, widening the variations of the ending positions.
This is a partial answer. In RW1 the end point will be $(X,t)$ where $X=\sum_1^t Y_k$ and the $Y_k$'s are independent random variables with $P(Y_k=-1)=P(Y_k=1)=1/2$. Each $Y_k$ has mean $0$ and variance $1$, and by independence the variances add, so $\operatorname{Var}(X)=t$ and therefore $$\sigma_1=\sqrt{t}.$$
• Yes, thanks for pointing that out. I was using the combinatorics argument mainly to try and find $\sigma_2$ since I can't find an easy way to compute it because of correlations, but it seems like it isn't valid and I get numerically $\sigma_2 \approx 2/3$ – Evariste Nov 12 '16 at 11:28
|