https://math.stackexchange.com/questions/869515/functional-equation-relating-to-normal-numbers
# Functional equation relating to normal numbers My coauthor and I have run into the following problem in a research project involving normal numbers. We suspect that the following question may be resolved using standard techniques in analysis. We take the empty product to be $1$. ### Question 1. Let $m\ge 2$ and let $c_1,c_2,\cdots,c_m$ be integers $\geq 2$. Suppose that $f:[0,1] \to [0,1]$ is continuous and non-decreasing, $f(0)=0$, $f(1)=1$, and satisfies $$\tag{1} \frac{1}{m} \sum_{r = 0}^{m-1} \sum_{j = 0}^{c_1 \cdots c_{r} - 1} \left(f\left(\frac{j+x}{c_1 \cdots c_{r}}\right) - f\left(\frac{j}{c_1 \cdots c_{r}}\right)\right)=x$$ for all $x \in [0,1]$. Note that $f(x) = x$ satisfies these conditions. Is it true that this is the only function that satisfies them? Since $f$ is non-decreasing, its derivative exists almost everywhere. Differentiating the previous equation, we have $$\tag{2}\frac{1}{m} \sum_{r=0}^{m-1} \sum_{j=0}^{c_1 \cdots c_r -1} \frac{1}{c_1 \cdots c_r} f'\left(\frac{j+x}{c_1 \cdots c_r}\right) = 1$$ for a.e. $x \in [0,1]$. Does this imply that $f'(x)=1$ almost everywhere? If so, then we can prove that the answer to our question is yes. 2. In case the answer to Question 1 is no, consider the following conditions. Let $m \geq 2$ and let $(c_1, c_2, \cdots)$ be a sequence of integers $\geq 2$ with $c_{i} = c_{i+m}$ for all $i$. Suppose that $f: [0,1] \to [0,1]$ is continuous, non-decreasing, $f(0)=0$, $f(1)=1$, and satisfies $$\tag{3} \frac{1}{mt} \sum_{r = 0}^{mt-1} \sum_{j = 0}^{c_1 \cdots c_{r} - 1} \left(f\left(\frac{j+x}{c_1 \cdots c_{r}}\right) - f\left(\frac{j}{c_1 \cdots c_{r}}\right)\right)=x$$ for all $x \in [0,1]$ and all positive integers $t$. Is it true that $f(x) =x$ for all $x \in [0,1]$? Note that a similar equation holds for the derivative as well. ### What we've tried Note that the left side of Equation 2 is an average of $m$ Perron-Frobenius operators (acting on $L^1([0,1])$) applied to $f'$. We have $f' \in L^1([0,1])$ by applying the Lebesgue decomposition theorem to the distributional derivative of $f$. We do not have much experience with the study of Perron-Frobenius operators, so we are not sure whether this observation is helpful. The left sides of both Equations 1 and 3 can be regarded as averages of variations of $f$ over specific types of partitions. Taking the Fourier transform of both sides of Equation 1 gives (writing $e_n(x) = e^{2\pi i n x}$ and $a_n = \int_0^1 f(x) e_n(x)\, dx$) $$\tag{4} \frac{1}{m} \sum_{r=0}^{m-1} c_1 \cdots c_r a_{c_1 \cdots c_r n} = \frac{i}{2\pi n} \hbox{ for } n\neq 0$$ and $$\tag{5} \frac{1}{m} \sum_{r=0}^{m-1} c_1 \cdots c_r \left( a_0 - \sum_{j=0}^{c_1 \cdots c_r -1} \frac{1}{c_1 \cdots c_r} f\left(\frac{j}{c_1 \cdots c_r}\right) \right) = \frac{1}{2}.$$ • Since this is part of a research project, I might suggest MathOverflow. However, people might have an answer on this site. – Clarinetist Jul 17 '14 at 1:51 • We wanted to try here first, but if we don't get any answers, we'll post to MathOverflow. – Dylan Airey Jul 17 '14 at 1:54
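As a quick numerical sanity check (a MATLAB sketch; the parameters $m=2$ and $(c_1,c_2)=(2,3)$ are arbitrary illustrative choices, not values from the question), one can confirm that $f(x)=x$ satisfies Equation 1 to machine precision. This checks only the known solution, of course, not uniqueness:

```
% Verify Equation (1) numerically for the candidate f(x) = x.
m = 2; c = [2 3];                 % illustrative parameters, m >= 2, c_i >= 2
f = @(x) x;                       % candidate solution
x = linspace(0, 1, 101);
lhs = zeros(size(x));
for r = 0:m-1
    C = prod(c(1:r));             % c_1*...*c_r; the empty product (r = 0) is 1
    for j = 0:C-1
        lhs = lhs + (f((j + x)/C) - f(j/C))/m;
    end
end
max(abs(lhs - x))                 % ~1e-16: Equation (1) holds for f(x) = x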
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.980374813079834, "perplexity": 97.60586062566136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998250.13/warc/CC-MAIN-20190616142725-20190616164725-00318.warc.gz"}
https://physics.aps.org/synopsis-for/10.1103/PhysRevLett.110.041301
# Synopsis: Waiting for Dark Matter to Light Up The H.E.S.S. Collaboration has put bounds on the flux of very-high-energy gamma rays that could be produced by annihilations of dark matter particles. Clear detection of dark matter would fill some of the more puzzling holes in astrophysics and cosmology. In recent years, researchers have searched for signals originating from the annihilation or decay of dark matter particles in space using high-energy messenger particles like gamma rays, electrons, and neutrinos. In a paper in Physical Review Letters, the H.E.S.S. Collaboration reports on its analysis of very-high-energy gamma-ray events, which it detected with an array of ground-based gamma-ray telescopes in Namibia. As a result, the team has placed upper bounds on gamma-ray flux levels in a largely unexplored energy band between 500 giga-electron-volts and 25 tera-electron-volts, which limits the annihilation cross section to values that complement recent lower-energy measurements. The researchers looked at data taken at times when the H.E.S.S. instrument was pointed, among other targets, at the central region of the Milky Way, a place expected to have large concentrations of dark matter. In particular, they looked for evidence of a gamma-ray emission line feature that might indicate self-annihilation; however, they observed no signal amid the unavoidable background from high-energy cosmic rays. Nonetheless, the H.E.S.S. team can now set upper limits on the gamma-ray flux from self-annihilation at high energies and compare these with limits derived at lower energies by the Fermi-LAT satellite. In a sense, H.E.S.S. picks up where Fermi-LAT leaves off, covering the highest energies inaccessible to the satellite. The search for an actual detection of dark matter continues both on the ground and in orbit. – David Voss
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8805555105209351, "perplexity": 2030.1720947002802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121869.65/warc/CC-MAIN-20170423031201-00134-ip-10-145-167-34.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3574530/if-x412x-5-has-roots-x-1-x-2-x-3-x-4-find-polynomial-with-roots-x-1x-2-x
# If $x^4+12x-5$ has roots $x_1,x_2,x_3,x_4$ find polynomial with roots $x_1+x_2,x_1+x_3,x_1+x_4,x_2+x_3,x_2+x_4,x_3+x_4$ I have the polynomial $x^4+12x-5$ with the roots $x_1,x_2,x_3,x_4$ and I want to find the polynomial whose roots are $x_1+x_2,x_1+x_3,x_1+x_4,x_2+x_3,x_2+x_4,x_3+x_4$. I found the roots $x_1=-1+\sqrt{2}$, $x_2=-1-\sqrt{2}$, $x_3=1-2i$, $x_4=1+2i$, and after long computations the polynomial is $x^6+20x^2-144$. Is there a cleverer way to find it? Let $s_1,p_1$ be the sum and product of any two of the roots, and $s_2,p_2$ the sum and product of the other two. From Vieta's formulas: $$\begin{cases} s_1+s_2=0 \\ s_1s_2+p_1+p_2=0\\ p_1s_2+p_2s_1=-12\\ p_1p_2=-5 \end{cases}$$ Substitute $s_2=-s_1$: $$\begin{cases} p_1+p_2=s_1^2\\ p_1-p_2=\frac{12}{s_1}\\ p_1p_2=-5 \end{cases}$$ or $$\begin{cases} p_1=\frac{1}{2}\left(s_1^2+\frac{12}{s_1}\right)\\ p_2 =\frac{1}{2}\left(s_1^2-\frac{12}{s_1}\right)\\ p_1p_2=-5 \end{cases}$$ Replace in the last equation: $$\frac{1}{4}\left(s_1^2+\frac{12}{s_1}\right)\left(s_1^2-\frac{12}{s_1}\right)=-5,$$ or equivalently $s_1^6+20s_1^2-144=0$. Since $s_1$ can be the sum of any two roots, each of $x_1+x_2,x_1+x_3,x_1+x_4,x_2+x_3,x_2+x_4,x_3+x_4$ is a root of $X^6+20X^2-144$, and there are no other roots. Of course, any nonzero multiple $a(X^6+20X^2-144)$, $a\in\mathbb{R}\setminus\{0\}$, satisfies the requirements as well. By Vieta's formulas, we have $x_1 + x_2 + x_3 + x_4 = 0$, $x_1x_2 + x_1x_3 + x_1x_4 + x_2x_3 + x_2x_4 + x_3x_4 = 0$, $x_1x_2x_3 + x_1x_2x_4 + x_1x_3x_4 + x_2x_3x_4 = -12$, and $x_1x_2x_3x_4 = -5$. We can now calculate \begin{align*}(x_1+x_2)+(x_1+x_3)+(x_1+x_4)+(x_2+x_3)+(x_2+x_4)+(x_3+x_4) &= 3(x_1+x_2+x_3+x_4) \\ &= 0\end{align*} Similarly, \begin{align*}&(x_1+x_2)(x_1+x_3) + (x_1+x_2)(x_1+x_4) + \dotsb + (x_2+x_4)(x_3+x_4) \\ &= 3\left(x_1^2+x_2^2+x_3^2+x_4^2\right)+8\left(x_1x_2+x_1x_3+x_1x_4+x_2x_3+x_2x_4+x_3x_4\right) \\ &= 3\left[(x_1+x_2+x_3+x_4)^2-2(x_1x_2+x_1x_3+x_1x_4+x_2x_3+x_2x_4+x_3x_4)\right] + 8(0) \\ &= 3(0^2-2(0))+8(0) \\ &= 0\end{align*} and so on. Once we have computed all the symmetric polynomials, we can then use Vieta's formulas again to form an equation with the desired roots. I do not claim that the solution I propose is simpler, but it has the big advantage of being "computer oriented" and therefore able to handle any polynomial degree. Let us first give a name to the initial polynomial: $$P(x)=x^4+12x-5$$ We are going to use the resultant of two monic polynomials $P$ and $Q$, which is defined as the product of all the differences between their roots, $$\operatorname{Res}(P,Q)=\prod_{i,j}(\alpha_i-\beta_j),$$ where the $\alpha_i$ are the roots of $P$ and the $\beta_j$ are the roots of $Q$. $\operatorname{Res}(P,Q)$ is zero if and only if $P$ and $Q$ have a common root. Resultants are mainly of interest in problems like this one, where parameters are present. Here, we introduce a parameter $s$ by taking the resultant of the initial polynomial $P$ and the new polynomial $Q_s(x):=P(s-x)$. $\operatorname{Res}(P,Q_s)$ is then a polynomial in the variable $s$ which vanishes if and only if there is a pair $i,j$ such that $\alpha_i=s-\beta_j$, i.e. $s=\alpha_i+\beta_j$, which is what we desire.
An explicit form of $Q_s$ is: $$Q_s(x)=x^4 + \underbrace{(-4s)}_{A}x^3 + \underbrace{(6s^2)}_{B}x^2 + \underbrace{(- 4s^3 - 12)}_{C}x + \underbrace{(s^4 + 12s - 5)}_{D}\tag{1}$$ Let us now form the resultant matrix of $P$ and $Q_s$ (obtained by repeating 4 times the coefficients of the first, then of the second polynomial, with a shift at each new row, as indicated in the reference given above): $$R=\left(\begin{array}{cccccccc} 1& 0& 0& 12& -5& 0& 0& 0\\ 0& 1& 0& 0& 12& -5& 0& 0\\ 0& 0& 1& 0& 0& 12& -5& 0\\ 0& 0& 0& 1& 0& 0& 12& -5\\ 1& A& B& C& D& 0& 0& 0\\ 0& 1& A& B& C& D& 0& 0\\ 0& 0& 1& A& B& C& D& 0\\ 0& 0& 0& 1& A& B& C& D \end{array}\right)$$ Let us expand and factorize $\det(R)$ (all operations done with a computer algebra system): $$\det(R)=(s^2 + 4s - 4)(s^2 - 4s + 20)(\underbrace{(s - 2)(s + 2)(s^4 + 4s^2 + 36)}_{\color{red}{s^6+20s^2-144}})^2$$ The first two factors have to be discarded because they correspond to the spurious roots $x_k+x_k$. What remains is the content of the squared factor, which is the polynomial we are looking for... Here is the corresponding MATLAB program:

```
function main
syms s x                         % symbolic variables
P = [1, 0, 0, 12, -5];           % it's all we have to give; the rest is computed...
lp = length(P); pol = 0;
for k = 1:lp
    pol = pol + P(k)*x^(lp-k);   % rebuild P(x) from its coefficient list
end
Qs = coeffs(collect(expand(subs(pol, x, s-x)), x), x);
Qs = fliplr(Qs);                 % list reversal ("flip left-right")
R = Resu(P, Qs)
factor(det(R))
end

function R = Resu(P, Q)          % resultant (Sylvester) matrix
p = length(P) - 1;               % degree of P
q = length(Q) - 1;               % degree of Q
R = sym(zeros(p+q));
for k = 1:q
    R(k, k:k+p) = P;             % progressive shifting
end
for k = 1:p
    R(k+q, k:k+q) = Q;
end
R = R';
end
```

Remarks:

1) The presence of a square around the solution isn't in fact surprising: we have the same phenomenon with the discriminant of a polynomial, $$\operatorname{Disc}(P)=\operatorname{Res}(P,P')=\prod_{i < j}(\alpha_i-\alpha_j)^2.$$

2) A similar issue can be found here.

3) In (1), if necessary, the coefficients $A,B,C$ and $D$ can be regarded as coming from a Taylor expansion: $D=P(s)$, $C=-P'(s)$, $B=\tfrac12 P''(s)$, $A=-\tfrac16 P'''(s)$.

4) Another category of problems, polynomial transformations, for example finding the polynomial whose roots are the $\alpha_k+1/\alpha_k$ where the $\alpha_k$ are the roots of a given polynomial $P$, can also be solved using resultants; see here.

• @Atticus: Thanks – Jean Marie Mar 9 '20 at 9:51 • It looks good (+1) – LHF Mar 9 '20 at 10:50

Let $x_1+x_2=u$, $x_1x_2=v$; then the relations between roots and coefficients (Vieta's formulas) of $x^4+12x-5=0$ give: $$x_1+x_2+x_3+x_4=0 \implies x_3+x_4=-u ~~~(1)$$ $$x_1x_2+x_3x_4+(x_1+x_2)(x_3+x_4)=0 ~~~~(2)$$ $$x_1x_2(x_3+x_4)+x_3x_4(x_1+x_2)=-12~~~~(3)$$ $$x_1x_2x_3x_4=-5~~~~(4)$$ Using (4) in (2), we get $$v-5/v-u^2=0~~~~(5)$$ Introducing $u$ and using (4) in Eq. (3), we get $$v(-u)-5u/v=-12 ~~~~(6)$$ From (5) and (6) we need to eliminate $v$; the eliminant is a sixth-degree polynomial in $u$: $$(v+5/v)^2-(v-5/v)^2=20 \implies (12/u)^2-u^4=20 \implies u^6+20u^2-144=0 ~~~(7)$$ Hence, by symmetry, the $u$-polynomial (7) has the six roots $x_1+x_2,x_2+x_3,\dots$. If you eliminate $u$ from (5) and (6), you will get a $v$-polynomial whose six roots are $x_1x_2, x_2x_3, \dots$
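As a complement to the symbolic program above, a purely numerical cross-check (a sketch in plain MATLAB, no toolboxes assumed) confirms in floating point that the six pairwise sums of the roots of $P$ annihilate $s^6+20s^2-144$:

```
% Numerical cross-check: the six pairwise sums of the roots of
% P(x) = x^4 + 12x - 5 should be roots of s^6 + 20 s^2 - 144.
r = roots([1 0 0 12 -5]);         % the four roots x_1,...,x_4
[I, J] = find(triu(true(4), 1));  % the 6 index pairs (i,j) with i < j
s = r(I) + r(J);                  % pairwise sums x_i + x_j
max(abs(s.^6 + 20*s.^2 - 144))    % ~1e-12, i.e. zero up to round-off
```

This only verifies the result numerically; the resultant computation above is the actual derivation.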
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 74, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951193928718567, "perplexity": 287.5558029554208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066613.21/warc/CC-MAIN-20210412053559-20210412083559-00200.warc.gz"}
http://kleine.mat.uniroma3.it/mp_arc-bin/mpa?yn=10-122
10-122 Aleksey Kostenko and Gerald Teschl On the Singular Weyl-Titchmarsh Function of Perturbed Spherical Schroedinger Operators (97K, LaTeX2e) Aug 9, 10 Abstract. We investigate the singular Weyl-Titchmarsh $m$-function of perturbed spherical Schr\"odinger operators (also known as Bessel operators) under the assumption that the perturbation $q(x)$ satisfies $x q(x) \in L^1(0,1)$. We show existence and detailed properties of a fundamental system of solutions which are entire with respect to the energy parameter. Based on this we show that the singular $m$-function belongs to the generalized Nevanlinna class, and we connect our results with the theory of super singular perturbations. Files: 10-122.src( 10-122.comments , 10-122.keywords , SingMB.tex )
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9826763868331909, "perplexity": 2921.6644331261687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159561.37/warc/CC-MAIN-20180923153915-20180923174315-00178.warc.gz"}
https://electricalacademia.com/electrical-comparisons/difference-between-ac-generator-and-dc-generator/
# Difference between AC and DC Generator

An AC generator produces an output voltage that varies in both amplitude and time, whereas a DC generator produces a constant output voltage that does not change in amplitude over time. The electrical energy we use comes in two fundamental forms, alternating and direct: the electric power in homes consists of alternating currents and voltages, while in an automobile the currents and voltages are constant. Both types have their own particular uses, yet the technique for producing both is the same: electromagnetic induction. Machines that produce electric power are called generators, and AC and DC generators differ in the method they use to pass the generated current to the external circuit. The key differences between AC and DC generators are discussed here on the basis of practical factors such as output power, current induction, brushes and maintenance, voltage level, commutation, voltage distribution, and types. The following table presents the key differences between an AC generator and a DC generator.

## Difference between AC Generator and DC Generator

| Characteristic | AC Generator | DC Generator |
| --- | --- | --- |
| Output power | Generates AC electric power | Produces DC electric power |
| Rings | Has slip rings | Uses a split-ring commutator |
| Current induction | Output current can be induced either in the rotor or in the stator | Output current is induced in the rotor only |
| Brushes | Slip rings have smooth, uninterrupted surfaces, which allow the brushes to remain in continuous contact; the brushes do not wear as quickly as in a DC generator, and there is almost no possibility of a short circuit | Commutators and brushes wear out very quickly, and a short circuit is possible in the commutator because of sparking |
| Maintenance | Requires very little maintenance and is thus more reliable than a DC generator | Requires frequent maintenance |
| Voltage level | Used to generate very high voltages | Used to generate comparatively lower voltages |
| Commutators | Does not have commutators | Has commutators to counter the effect of changing polarities |
| Voltage distribution | AC voltage can be distributed quite easily using transformers | DC voltage is comparatively difficult to distribute efficiently |
| Types | Rotating armature, rotating field; single-phase, three-phase | Permanent magnet, separately excited, self-excited |
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8342797160148621, "perplexity": 2062.5124566256177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088731.42/warc/CC-MAIN-20210416065116-20210416095116-00153.warc.gz"}
https://open.library.ubc.ca/cIRcle/collections/ubctheses/24/items/1.0355266
# Open Collections

## UBC Theses and Dissertations

### Enabling large-scale seismic data acquisition, processing and waveform-inversion via rank-minimization

Kumar, Rajiv, 2017

#### Full Text

Enabling large-scale seismic data acquisition, processing and waveform-inversion via rank-minimization

by Rajiv Kumar

B.Sc., Hindu College, Delhi University, 2006. M.Sc., Indian Institute of Technology, Bombay, 2008.

A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Faculty of Graduate and Postdoctoral Studies (Geophysics). The University of British Columbia (Vancouver), August 2017. © Rajiv Kumar, 2017.

Abstract

In this thesis, I adapt ideas from the field of compressed sensing to mitigate the computational and memory bottlenecks of seismic processing workflows such as missing-trace interpolation, source separation and wave-equation based inversion for large-scale 3- and 5-D seismic data. For interpolation and source separation using rank-minimization, I propose three main ingredients, namely a rank-revealing transform domain, a subsampling scheme that increases the rank in the transform domain, and a practical large-scale data-consistent rank-minimization framework, which avoids the need for expensive computation of singular value decompositions. We also devise a wave-equation based factorization approach that removes computational bottlenecks and provides access to the kinematics and amplitudes of full-subsurface offset extended images via actions of full extended image volumes on probing vectors, which I use to perform amplitude-versus-angle analyses and automatic wave-equation migration velocity analyses in complex geological environments. After a brief overview of matrix completion techniques in Chapter 1, we propose a singular value decomposition (SVD)-free factorization based rank-minimization approach for large-scale matrix completion problems. I then extend this framework to large-scale seismic data interpolation problems, where I show that the standard approach of partitioning the seismic data into windows (which relies on the fact that events tend to become linear in such windows) is not required when exploiting the low-rank structure of seismic data. Carefully selected synthetic and realistic seismic data examples validate the efficacy of the interpolation framework. Next, I extend the SVD-free rank-minimization approach to remove the seismic cross-talk in simultaneous source acquisition. Experimental results verify that source separation using the SVD-free rank-minimization approach is comparable to sparsity-promotion based techniques; however, separation via rank-minimization is significantly faster and more memory efficient. We further introduce a matrix-vector formulation to form full-subsurface extended image volumes, which removes the storage and computational bottleneck found in conventional methods. I demonstrate that the proposed matrix-vector formulation can be used to form different image gathers with which amplitude-versus-angle and wave-equation migration velocity analyses are performed, without requiring prior information on the geologic dips. Finally, I conclude the thesis by outlining potential future research directions and extensions of the thesis work.

Lay Summary

In this thesis, I have developed fast computational techniques for large-scale seismic applications using an SVD-free factorization based rank-minimization approach for missing-trace interpolation and source separation. The proposed framework is built upon the existing knowledge of compressed sensing as a successful signal recovery paradigm and outlines the necessary components of a low-rank domain, a rank-increasing sampling scheme, and an SVD-free rank-minimizing optimization scheme for successful missing-trace interpolation. I also propose a matrix-vector formulation that avoids the explicit storage of full-subsurface offset extended image volumes and removes the expensive loop over shots found in conventional extended imaging.
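To make the SVD-free factorization idea above concrete, here is a minimal illustrative MATLAB sketch: a toy alternating least-squares matrix completion, not the SPG-LR solver developed in this thesis, that recovers a rank-$k$ matrix from a random subset of its entries using only its factors $L$ and $R$ (no SVDs):

```
% Toy sketch of SVD-free matrix completion via alternating least squares.
% Illustrates the factorization idea only; the thesis uses SPG-LR instead.
rng(1);
n = 100; k = 5;
D = randn(n, k)*randn(k, n);          % synthetic rank-k "data" matrix
M = rand(n) < 0.5;                    % mask: 50% of entries observed
L = randn(n, k); R = randn(n, k);     % factors, X = L*R'
for it = 1:30
    for i = 1:n                       % update each row of L ...
        c = M(i, :);
        L(i, :) = D(i, c)/R(c, :)';   % least squares on observed entries
    end
    for j = 1:n                       % ... then each row of R
        c = M(:, j);
        R(j, :) = (L(c, :)\D(c, j))';
    end
end
norm(L*R' - D, 'fro')/norm(D, 'fro')  % relative recovery error; small on success
```

With enough observed entries per row and column, this simple scheme drives the misfit to near zero; the thesis replaces it with the SPG-LR framework, which controls the data-misfit level and scales to seismic data volumes.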
Preface

All of my thesis work presented herein was carried out under the supervision of Dr. Felix J. Herrmann, in the Seismic Laboratory for Imaging and Modeling at the University of British Columbia.

Chapter 1 was prepared by me. Figures related to seismic data survey designs are taken from the publicly available seismic literature, with citations added at the appropriate locations in the chapter.

A version of Chapter 2 was published in A. Aravkin, R. Kumar, H. Mansour, B. Recht, and F. J. Herrmann, "Fast methods for denoising matrix completion formulations, with applications to robust seismic data interpolation", SIAM Journal on Scientific Computing, 36(5):S237-S266, 2014. F. Herrmann developed the idea of SVD-free matrix completion techniques for seismic data. B. Recht was involved in the initial theoretical discussion on matrix completion techniques. I collaborated with A. Aravkin and H. Mansour on the theoretical development of the SVD-free matrix completion framework along with its robust and weighted extensions. A. Aravkin coded the SPG-LR framework. I coded all the experiments and prepared all the figures, along with the parameters related to the experiments in this chapter. The manuscript was mainly written by A. Aravkin and H. Mansour, and I contributed to the numerical and experimental section. F. Herrmann and B. Recht were involved in the final manuscript editing.

A version of Chapter 3 was published in R. Kumar, C. D. Silva, O. Akalin, A. Y. Aravkin, H. Mansour, B. Recht, and F. J. Herrmann, "Efficient matrix completion for seismic data reconstruction", Geophysics, 80(05): V97-V114, 2015. All the authors were involved in the initial layout of this work and manuscript edits. I, along with C. D. Silva, was responsible for the manuscript and examples. I performed all the matrix completion examples using SPG-LR, along with the real data case study and the comparison to sparsity-promotion based techniques. O. Akalin performed the Jellyfish comparison example, and C. D. Silva performed the ADMM and LmaFit comparison examples.

A version of Chapter 4 was published in R. Kumar, H. Wason, and F. J. Herrmann, "Source separation for simultaneous towed-streamer marine acquisition: a compressed sensing approach", Geophysics, 80(06): WD73-WD88, 2015. H. Wason identified the problem of source separation techniques in the seismic literature. I, along with H. Wason, developed the theoretical framework to solve the source separation problem using compressed sensing. I coded the underlying solver using rank-minimization based techniques to solve the source separation problem. F. Herrmann proposed to incorporate the HSS techniques in the rank-minimization framework. I performed all the experiments using the rank-minimization based techniques, and H. Wason performed all the experiments using the sparsity-promotion based techniques. I, along with H. Wason, wrote the manuscript, and F. Herrmann was involved in the final manuscript editing. H. Wason and I made an equal contribution (approximately 50%) in developing this research idea for large-scale seismic data applications. This chapter appears as Chapter 7 in H. Wason's dissertation. H. Wason has also granted permission for this chapter to appear in my dissertation.

A version of Chapter 5 was published in R. Kumar, S. Sharan, H. Wason, and F. J. Herrmann, "Time-jittered marine acquisition: a rank-minimization approach for 5D source separation", SEG Technical Program Expanded Abstracts, 2016, p. 119-123. This was my original contribution, wherein I proposed the mathematical formulation, coded it, and performed the verification examples. S. Sharan helped in the simulation of continuous time-domain data. I wrote the abstract, and all the coauthors were involved in editing it.

A version of Chapter 6 was published in T. van Leeuwen, R. Kumar, F. J. Herrmann, "Enabling affordable omnidirectional subsurface extended image volumes via probing", Geophysical Prospecting, 1365-2478, 2016. T. van Leeuwen and F. Herrmann proposed the idea of probing techniques for extended images. T. van Leeuwen provided the initial framework to compute the extended image volumes. I coded the AVA, 3D extended image volume, and wave-equation migration velocity analysis examples. I, along with T. van Leeuwen, wrote the initial manuscript, and F. Herrmann provided substantial editing of the manuscript during the review process.

MATLAB and its Parallel Computing Toolbox have been used to prepare all the examples in this thesis. The $\ell_1$ solver is a public toolbox by Ewout van den Berg and Michael P. Friedlander, whereas the SPG-LR solver was coded by Aleksandr Y. Aravkin. The Curvelet toolbox is by Emmanuel Candes, Laurent Demanet, David Donoho and Lexing Ying.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
1.1 Problem statement
1.2 Objectives
1.3 Contributions
1.4 Outline
2 Fast methods for denoising matrix completion formulations, with applications to robust seismic data interpolation
2.1 Summary
2.2 Introduction
2.3 Regularization formulations
2.4 Factorization approach to rank optimization
2.5 Local minima correspondence between factorized and convex formulations
2.6 LR-BPDN algorithm
2.6.1 Initialization
2.6.2 Increasing k on the fly
2.6.3 Computational efficiency
2.7 Robust formulations
2.8 Reweighting
2.8.1 Projection onto the weighted Frobenius norm ball
2.8.2 Traversing the Pareto curve
2.9 Numerical experiments
2.9.1 Collaborative filtering
2.9.2 Seismic missing-trace interpolation
2.10 Conclusions
2.11 Appendix
3 Efficient matrix completion for seismic data reconstruction
3.1 Summary
3.2 Introduction
3.2.1 Contributions
3.2.2 Notation
3.3 Structured signal recovery
3.4 Low-rank promoting data organization
3.4.1 2D seismic data
3.4.2 3D seismic data
3.5 Large scale data reconstruction
3.5.1 Large scale matrix completion
3.5.2 Large scale tensor completion
3.6 Experiments
3.6.1 2D seismic data
3.6.2 3D seismic data
3.7 Discussion
3.8 Conclusion
4 Source separation for simultaneous towed-streamer marine acquisition—a compressed sensing approach
4.1 Summary
4.2 Introduction
4.2.1 Motivation
4.2.2 Contributions
4.3 Theory
4.3.1 Rank-revealing "transform domain"
4.3.2 Hierarchical semi-separable matrix representation (HSS)
4.3.3 Large-scale seismic data: SPG-LR framework
4.4 Experiments
4.4.1 Comparison with NMO-based median filtering
4.4.2 Remark
4.5 Discussion
4.6 Conclusions
5 Large-scale time-jittered simultaneous marine acquisition: rank-minimization approach
5.1 Summary
5.2 Introduction
5.3 Methodology
5.4 Experiments & results
5.5 Conclusions
6 Enabling affordable omnidirectional subsurface extended image volumes via probing
6.1 Summary
6.2 Introduction
6.2.1 Notation
6.3 Anatomy & physics
6.4 Computational aspects
6.5 Case study 1: computing gathers
6.5.1 Numerical results in 2-D
6.5.2 Numerical results in 3-D
6.6 Case study 2: dip-angle gathers
6.6.1 Numerical results
6.7 Case study 3: wave-equation migration-velocity analysis (WEMVA)
6.7.1 Numerical results
6.8 Discussion
6.9 Conclusions
7 Conclusions
7.1 Main contributions
7.1.1 SVD-free factorization based matrix completion
7.1.2 Enabling computation of omnidirectional subsurface extended image volumes
7.2 Follow-up work
7.3 Current limitations
7.4 Future extensions
7.4.1 Extracting on-the-fly information
7.4.2 Compressing full-subsurface offset extended image volumes
7.4.3 Comparison of MVA and FWI
Bibliography

List of Tables

Table 2.1 Summary of the computational time (in seconds) for LR-BPDN, measuring the effect of random versus smart ([Jain et al., 2013, Algorithm 1]) initialization of L and R for factor rank k and relative error level η for (BPDNη). Comparison performed on the 1M MovieLens dataset. The type of initialization had almost no effect on the quality of the final reconstruction.

Table 2.2 Summary of the recovery results on the MovieLens (1M) dataset for factor rank k and relative error level η for (BPDNη). SNR in dB (higher is better) listed in the left table, and RMSE (lower is better) in the right table. The last row in each table gives recovery results for the non-regularized data-fitting factorized formulation solved with Riemannian optimization (ROPT). Quality degrades with k due to overfitting for the non-regularized formulation, and improves with k when regularization is used.

Table 2.3 Summary of the computational timing (in seconds) on the MovieLens (1M) dataset for factor rank k and relative error level η for (BPDNη). The last row gives computational timing for the non-regularized data-fitting factorized formulation solved with Riemannian optimization.

Table 2.4 Nuclear norms of the solutions X = LR^T for the results in Table 2.2, corresponding to τ values in (LASSOτ). These values are found automatically via root finding, but are difficult to guess ahead of time.

Table 2.5 Classic SPGL1 (using Lanczos-based truncated SVD) versus LR factorization on the MovieLens (10M) dataset (10000 × 20000 matrix); results for a fixed iteration budget (100 iterations) for 50% subsampling of MovieLens data. SNR, RMSE and computational time are shown for k = 5, 10, 20.

Table 2.6 LR method on the Netflix (100M) dataset (17770 × 480189 matrix); results for 50% subsampling of Netflix data. SNR, computational time and RMSE are shown for factor rank k and relative error level η for (BPDNη).

Table 2.7 TFOCS versus classic SPGL1 (using direct SVDs) versus LR factorization. The synthetic low-rank example shows results for completing a rank-10, 100 × 100 matrix with 50% missing entries; SNR, computational time and iterations are shown for η = 0.1, 0.01, 0.005, 0.0001, with the rank of the factors taken to be 10. The seismic example shows results for matrix completion of a low-frequency slice at 10 Hz, extracted from the Gulf of Suez data set, with 50% missing entries; SNR, computational time and iterations are shown for η = 0.2, 0.1, 0.09, 0.08, with the rank of the factors taken to be 28.

Table 3.1 Curvelet versus matrix completion (MC). Real data results for completing a frequency slice of size 401 × 401 with 50% and 75% missing sources. Left: 10 Hz (low frequency); right: 60 Hz (high frequency). SNR, computational time, and number of iterations are shown for varying levels of η = 0.08, 0.1.

Table 3.2 Single reflector data results. The recovery quality (in dB) and the computational time (in minutes) are reported for each method. The quality suffers significantly as the window size decreases due to the smaller redundancy of the input data, as discussed previously.

Table 3.3 3D seismic data results. The recovery quality (in dB) and the computational time (in minutes) are reported for each method.

Table 4.1 Comparison of computational time (in hours), memory usage (in GB) and average SNR (in dB) using sparsity-promoting and rank-minimization based techniques for the Marmousi model.

Table 4.2 Comparison of computational time (in hours), memory usage (in GB) and average SNR (in dB) using sparsity-promoting and rank-minimization based techniques for the Gulf of Suez dataset.

Table 4.3 Comparison of computational time (in hours), memory usage (in GB) and average SNR (in dB) using sparsity-promoting and rank-minimization based techniques for the BP model.

Table 6.1 Correspondence between continuous and discrete representations of the image volume. Here, ω represents frequency, x represents subsurface positions, and (i, j) represents the subsurface grid points. The colon (:) notation extracts a vector from e at the grid point i, j for all subsurface offsets.

Table 6.2 Computational complexity of the two schemes in terms of the number of sources Ns, receivers Nr, sample points Nx and desired number of subsurface offsets in each direction Nh{x,y,z}.

Table 6.3 Comparison of the computational time (in sec) and memory (in megabytes) for computing a CIP gather on a central part of the Marmousi model. We can see the significant difference in time and memory using the probing techniques compared to the conventional method, and we expect this difference to be greatly exacerbated for realistically sized models.

List of Figures

Figure 1.1 Schematic representation of (a) marine and (b) land seismic data acquisition. Source: [Enjolras, January 24 2017; RigZone, January 24 2017].

Figure 1.2 Illustration of various marine acquisition geometries, namely towed-streamer (1), ocean-bottom geometry (2), buried seafloor array (3), and vertical seismic profile (4). All seismic surveys involve a source (S), which is typically an airgun for marine surveys, and receivers (black dots) that are mainly hydrophones and/or 3-component geophones. Source: Caldwell and Walker [January 24 2017].

Figure 1.3 This table summarizes the different types of marine seismic surveys. Source: Caldwell and Walker [January 24 2017].

Figure 1.4 Illustration of the basic difference between 2D and 3D survey geometry. The area covered by the two surveys is exactly identical, as suggested by the dashed contour lines. Source: Caldwell and Walker [January 24 2017].

Figure 1.5 Various types of seismic displays: (a) wiggle trace, (b) variable area, (c) variable area wiggle trace, and (d) variable density. Copyright: Conoco Inc.

Figure 1.6 Common-receiver gather. (a) Fully sampled and (b) 50% subsampled. The final goal is to recover the fully sampled data from the subsampled data with minimal loss of coherent energy. (c, e) Reconstruction results from two different types of interpolation and (d, f) corresponding residual plots. We can see that the interpolation results in (e, f) are better than (c, d) because the energy loss is small, especially at the cusp of the common-receiver gather.

Figure 1.7 Simultaneous long-offset acquisition, where an extra source vessel is deployed, sailing one spread-length ahead of the main seismic vessel (see Chapter 4 for more details). We record overlapping shot records in the field and separate them into non-overlapping shots using source-separation based techniques.

Figure 1.8 Over/under acquisition is an instance of low variability in source firing times, i.e., two sources firing within 1 (or 2) seconds (see Chapter 4 for more details).

Figure 1.9 Time-jittered marine continuous acquisition, where a single source vessel sails across an ocean-bottom array firing two airgun arrays at jittered source locations and time instances with receivers recording continuously.

Figure 2.1 Gaussian (black dashed line), Laplace (red dash-dotted line), and Student's t (blue solid line); densities (left plot), negative log-likelihoods (center plot), and influence functions (right plot). The Student's t-density has heavy tails, a non-convex log-likelihood, and a re-descending influence function.

Figure 2.2 Frequency slices of a seismic line from the Gulf of Suez with 354 shots, 354 receivers. Full data for (a) low frequency at 12 Hz and (b) high frequency at 60 Hz in the s-r domain. 50% subsampled data for (c) low frequency at 12 Hz and (d) high frequency at 60 Hz in the s-r domain. Full data for (e) low frequency at 12 Hz and (f) high frequency at 60 Hz in the m-h domain. 50% subsampled data for (g) low frequency at 12 Hz and (h) high frequency at 60 Hz in the m-h domain.

Figure 2.3 Singular value decay of fully sampled (a) low-frequency slice at 12 Hz and (c) high-frequency slice at 60 Hz in the (s-r) and (m-h) domains. Singular value decay of 50% subsampled (b) low-frequency slice at 12 Hz and (d) high-frequency data at 60 Hz in the (s-r) and (m-h) domains. Notice that for both high and low frequencies, the decay of the singular values is faster in the fully sampled (m-h) domain than in the fully sampled (s-r) domain, and that subsampling does not significantly change the decay of the singular values in the (s-r) domain, while it destroys the fast decay of the singular values in the (m-h) domain.

Figure 2.4 Recovery results for 50% subsampled 2D frequency slices using the nuclear norm formulation. (a) Interpolation and (b) residual of low-frequency slice at 12 Hz with SNR = 19.1 dB. (c) Interpolation and (d) residual of high-frequency slice at 60 Hz with SNR = 15.2 dB.

Figure 2.5 Missing-trace interpolation of a seismic line from the Gulf of Suez. (a) Ground truth. (b) 50% subsampled common shot gather. (c) Recovery result with an SNR of 18.5 dB. (d) Residual.

Figure 2.6 Matricization of a 4D monochromatic frequency slice. Top: (source x, source y) matricization. Bottom: (source x, receiver x) matricization. Left: fully sampled data; right: subsampled data.

Figure 2.7 Singular value decay for different matricizations of a 4D monochromatic frequency slice. Left: fully sampled data; right: subsampled data.

Figure 2.8 Missing-trace interpolation of a frequency slice at 12.3 Hz extracted from a 5D data set, 75% missing data. (a, b, c) Original, recovery and residual of a common shot gather with an SNR of 11.4 dB at the location where the shot is recorded. (d, e, f) Interpolation of common shot gathers at a location where no reference shot is present.

Figure 2.9 Missing-trace interpolation of a frequency slice at 12.3 Hz extracted from a 5D data set, 50% missing data. (a, b, c) Original, recovery and residual of a common shot gather with an SNR of 16.6 dB at the location where the shot is recorded. (d, e, f) Interpolation of common shot gathers at a location where no reference shot is present.

Figure 2.10 Comparison of regularized and non-regularized formulations. SNR of (a) low-frequency slice at 12 Hz and (b) high-frequency slice at 60 Hz over a range of factor ranks. Without regularization, recovery quality decays with factor rank due to over-fitting; the regularized formulation improves with higher factor rank.

Figure 2.11 Comparison of interpolation and denoising results for the Student's t and least-squares misfit functions. (a) 50% subsampled common receiver gather with another 10% of the shots replaced by large errors. (b) Recovery result using the least-squares misfit function. (c, d) Recovery and residual results using the Student's t misfit function with an SNR of 17.2 dB.

Figure 2.12 Residual error for recovery of the 11 Hz slice (a) without weighting and (b) with weighting using the true support. SNR in this case is improved by 1.5 dB.

Figure 2.13 Residual of low-frequency slice at 11 Hz (a) without weighting and (c) with support from the 10.75 Hz frequency slice; SNR is improved by 0.6 dB. Residual of low-frequency slice at 16 Hz (b) without weighting and (d) with support from the 15.75 Hz frequency slice; SNR is improved by 1 dB. Weighting using the learned support is able to improve on the unweighted interpolation results.

Figure 2.14 Recovery results for the practical scenario of the weighted factorized formulation over a frequency range of 9-17 Hz. The weighted formulation outperforms the non-weighted one for higher frequencies. For some frequency slices, the performance of the non-weighted algorithm is better, because the weighted algorithm can be negatively affected when the subspaces are less correlated.

Figure 3.1 Singular value decay in the source-receiver and midpoint-offset domains. Left: fully sampled frequency slices; right: 50% missing shots. Top: low-frequency slice; bottom: high-frequency slice. Missing-source subsampling increases the singular values in the (midpoint-offset) domain instead of decreasing them in the (src-rec) domain.

Figure 3.2 A frequency slice from the seismic dataset from the Nelson field. Left: fully sampled data; right: 50% subsampled data. Top: source-receiver domain; bottom: midpoint-offset domain.

Figure 3.3 (xrec, yrec) matricization. Top: full data volume; bottom: 50% missing sources. Left: fully sampled data; right: zoom plot.

Figure 3.4 (ysrc, yrec) matricization. Top: fully sampled data; bottom: 50% missing sources. Left: full data volume; right: zoom plot. In this domain, the sampling artifacts are much closer to the idealized 'pointwise' random sampling of matrix completion.

Figure 3.5 Singular value decay (normalized) of the (left) (xrec, yrec) matricization and (right) (ysrc, yrec) matricization for full data and 50% missing sources.

Figure 3.6 Missing-trace interpolation. Top: fully sampled data and 75% subsampled common receiver gather. Bottom: recovery and residual results with an SNR of 9.4 dB.

Figure 3.7 Qualitative performance of 2D seismic data interpolation for the 5-85 Hz frequency band for 50% and 75% subsampled data.

Figure 3.8 Recovery results using matrix-completion techniques. Left: interpolation in the source-receiver domain, low-frequency SNR 3.1 dB; right: difference between true and interpolated slices. Since the sampling artifacts in the source-receiver domain do not increase the singular values, matrix completion in this domain is unsuccessful. This example highlights the necessity of having the appropriate principles of low-rank recovery in place before a seismic signal can be interpolated effectively.

Figure 3.9 Gulf of Mexico data set. Top: fully sampled monochromatic slice at 7 Hz. Bottom left: fully sampled data (zoomed in on the square block); bottom right: 80% subsampled sources. For visualization purposes, the subsequent figures only show the interpolated result in the square block.

Figure 3.10 Reconstruction errors for frequency slices at 7 Hz (left) and 20 Hz (right) with 80% subsampled sources. Rank-minimization based recovery with SNRs of 14.2 dB and 11.0 dB, respectively.

Figure 3.11 Frequency-wavenumber spectrum of the common receiver gather. Top left: fully sampled data; top right: periodically subsampled data with 80% missing sources; bottom left: uniformly random subsampled data with 80% missing sources; bottom right: reconstruction of uniformly random subsampled data using rank-minimization based techniques. While periodic subsampling creates aliasing, uniformly random subsampling turns the aliases into incoherent noise across the spectrum.

Figure 3.12 Gulf of Mexico data set, common receiver gather. Left: uniformly random subsampled data with 80% missing sources; middle: reconstruction results using rank-minimization based techniques (SNR = 7.8 dB); right: residual.

Figure 3.13 Missing-trace interpolation (80% subsampling) in the case of geological structures with a fault. Left: 80% subsampled data; middle: after interpolation (SNR = 23 dB); right: difference.

Figure 3.14 ADMM data fit plus recovery quality (SNR) for single reflector data, common receiver gather. Middle row: recovered slices; bottom row: residuals corresponding to each method in the middle row. Tensor-based windowing appears to visibly degrade the results, even with overlap.

Figure 3.15 BG 5D seismic data, 12.3 Hz, 75% missing sources. Middle row: interpolation results; bottom row: residuals.

Figure 3.16 BG 5D seismic data, 4.68 Hz. Comparison of interpolation results with and without windowing using Jellyfish for 75% missing sources. Top row: interpolation results for differing window sizes; bottom row: residuals.

Figure 4.1 Monochromatic frequency slice at 5 Hz in the source-receiver (s-r) and midpoint-offset (m-h) domains for blended data (a, c) with periodic firing times and (b, d) with uniformly random firing times for both sources.

Figure 4.2 Decay of singular values for a frequency slice at (a) 5 Hz and (b) 40 Hz of blended data. Source-receiver domain: blue—periodic, red—random delays. Midpoint-offset domain: green—periodic, cyan—random delays. Corresponding decay of the normalized curvelet coefficients for a frequency slice at (c) 5 Hz and (d) 40 Hz of blended data, in the source-channel domain.

Figure 4.3 Monochromatic frequency slice at 40 Hz in the s-r and m-h domains for blended data (a, c) with periodic firing times and (b, d) with uniformly random firing times for both sources.

Figure 4.4 HSS partitioning of a high-frequency slice at 40 Hz in the s-r domain: (a) first level, (b) second level, for randomized blended acquisition.

Figure 4.5 (a, b, c) First-level sub-block matrices (from Figure 4.4a).

Figure 4.6 Decay of singular values of the HSS sub-blocks in the s-r domain: red—Figure 4.5a, black—Figure 4.5b, blue—Figure 4.5c.

Figure 4.7 Original shot gather of (a) source 1, (b) source 2, and (c) the corresponding blended shot gather for simultaneous over/under acquisition simulated on the Marmousi model. (d, e) Corresponding common-channel gathers for each source and (f) the blended common-channel gather.

Figure 4.8 Original shot gather of (a) source 1, (b) source 2, and (c) the corresponding blended shot gather for simultaneous over/under acquisition from the Gulf of Suez dataset. (d, e) Corresponding common-channel gathers for each source and (f) the blended common-channel gather.

Figure 4.9 Original shot gather of (a) source 1, (b) source 2, and (c) the corresponding blended shot gather for simultaneous long-offset acquisition simulated on the BP salt model. (d, e) Corresponding common-channel gathers for each source and (f) the blended common-channel gather.

Figure 4.10 Deblended shot gathers and difference plots (from the Marmousi model) of source 1 and source 2: (a, c) deblending using HSS-based rank-minimization and (b, d) the corresponding difference plots; (e, g) deblending using curvelet-based sparsity-promotion and (f, h) the corresponding difference plots.

Figure 4.11 Deblended common-channel gathers and difference plots (from the Marmousi model) of source 1 and source 2: (a, c) deblending using HSS-based rank-minimization and (b, d) the corresponding difference plots; (e, g) deblending using curvelet-based sparsity-promotion and (f, h) the corresponding difference plots.

Figure 4.12 Deblended shot gathers and difference plots (from the Gulf of Suez dataset) of source 1 and source 2: (a, c) deblending using HSS-based rank-minimization and (b, d) the corresponding difference plots; (e, g) deblending using curvelet-based sparsity-promotion and (f, h) the corresponding difference plots.

Figure 4.13 Deblended common-channel gathers and difference plots (from the Gulf of Suez dataset) of source 1 and source 2: (a, c) deblending using HSS-based rank-minimization and (b, d) the corresponding difference plots; (e, g) deblending using curvelet-based sparsity-promotion and (f, h) the corresponding difference plots.

Figure 4.14 Deblended shot gathers and difference plots (from the BP salt model) of source 1 and source 2: (a, c) deblending using HSS-based rank-minimization and (b, d) the corresponding difference plots; (e, g) deblending using curvelet-based sparsity-promotion and (f, h) the corresponding difference plots.

Figure 4.15 Deblended common-channel gathers and difference plots (from the BP salt model) of source 1 and source 2: (a, c) deblending using HSS-based rank-minimization and (b, d) the corresponding difference plots; (e, g) deblending using curvelet-based sparsity-promotion and (f, h) the corresponding difference plots.

Figure 4.16 Signal-to-noise ratio (dB) over the frequency spectrum for the deblended data from the Marmousi model. Red, blue curves—deblending without HSS; cyan, black curves—deblending using second-level HSS partitioning. Solid lines—separated source 1; + marker—separated source 2.

Figure 4.17 Blended common-midpoint gathers of (a) source 1 and (e) source 2 for the Marmousi model. Deblending using (b, f) NMO-based median filtering, (c, g) rank-minimization and (d, h) sparsity-promotion.

Figure 4.18 Blended common-midpoint gathers of (a) source 1 and (e) source 2 for the Gulf of Suez dataset. Deblending using (b, f) NMO-based median filtering, (c, g) rank-minimization and (d, h) sparsity-promotion.

Figure 4.19 Blended common-midpoint gathers of (a) source 1 and (e) source 2 for the BP salt model. Deblending using (b, f) NMO-based median filtering, (c, g) rank-minimization and (d, h) sparsity-promotion.

Figure 5.1 Aerial view of the 3D time-jittered marine acquisition. Here, we consider one source vessel with two airgun arrays firing at jittered times and locations. Starting from point a, the source vessel follows the acquisition path shown by black lines and ends at point b. The receivers are placed at the ocean bottom (red dashed lines).

Figure 5.2 Schematic representation of the sampling-transformation operator A during the forward operation. The adjoint of the operator A follows accordingly. (a, b, c) represent a monochromatic data slice from the conventional data volume and (d) represents a time slice from the continuous data volume.

Figure 5.3 Monochromatic slice at 10.0 Hz. Fully sampled data volume and simultaneous data volume matricized as (a, c) i = (nsx, nsy), and (b, d) i = (nrx, nsx). (e) Decay of singular values. Notice that the fully sampled data organized as i = (nsx, nsy) has slow decay of the singular values (solid red curve) compared to the i = (nrx, nsx) organization (solid blue curve). However, the sampling-restriction operator slows the decay of the singular values in the i = (nrx, nsx) organization (dotted blue curve) compared to the i = (nsx, nsy) organization (dotted red curve), which is a favorable scenario for the rank-minimization formulation.

Figure 5.4 Source separation recovery. A shot gather from (a) the conventional data; (b) a 30-second section from the continuous time-domain simultaneous data; (c) data recovered by applying the adjoint of the sampling operator M; (d) data recovered via the proposed formulation (SNR = 20.8 dB); (e) difference between (a) and (d), where amplitudes are magnified by a factor of 8 to illustrate a very small loss in coherent energy.

Figure 6.1 Different slices through the 4-dimensional image volume e(z, z′, x, x′) around z = zk and x = xk. (a) Conventional image e(z, z, x, x); (b) image gather for horizontal and vertical offset e(z, z′, xk, x′); (c) image gather for horizontal offset e(z, z, xk, x′); and (d) image gather for a single scattering point e(zk, z′, xk, x′). (e-g) show how these slices are organized in the matrix representation of e.

Figure 6.2 Migrated images for a wrong (a) and the correct (b) background velocity are shown with 3 locations at which we extract CIPs for a wrong (c) and the correct (d) velocity. The CIPs contain many events that do not necessarily focus. However, these events are located along the line normal to the reflectors. Therefore, it seems feasible to generate multiple CIPs simultaneously as long as they are well separated laterally. A possible application of this is the extraction of CIGs at various lateral positions. CIGs at x = 1500 m and x = 2500 m for a wrong (e) and the correct (f) velocity indeed show little evidence of crosstalk, allowing us to compute several CIGs at the cost of a single CIG.

Figure 6.3 (a) Compass 3D synthetic velocity model provided to us by the BG Group. (b) A CIP gather at (x, y, z) = (1250, 1250, 390) m. The proposed method (Algorithm 2) is 1500 times faster than the classical method (Algorithm 1) at generating the CIP gather.

Figure 6.4 Cross-sections of the Compass 3D velocity model (Figure 6.3 (a)) along the (a) x and (b) y directions.

Figure 6.5 Slices extracted along the horizontal (a, b) and vertical (c) offset directions from the CIP gather shown in Figure 6.3 (b).

Figure 6.6 Schematic depiction of the scattering point and related positioning of the reflector.

Figure 6.7 (a) Horizontal one-layer velocity model and (b) constant density model.
CIP lo-cation is x = 1250 m and z = 400 m. (c) Modulus of angle-dependent reflectivitycoefficients at CIP. The black lines are included to indicate the effective apertureat depth. The red lines are the theoretical reflectivity coefficients and the bluelines are the wave-equation based reflectivity coefficients. . . . . . . . . . . . . . 135Figure 6.8 Angle dependent reflectivity coefficients in case of horizontal four-layer (a) veloc-ity and (b) density model at x = 1250 m. Modulus of angle-dependent reflectivitycoefficients at (c) z = 200 m, (d) z = 600 m, (e) z = 1000 m, (f) z = 1400 m. . . 136Figure 6.9 Estimation of local geological dip. (a,b) Two-layer model. (c) CIP gather atx = 2250 m and z = 960 m overlaid on dipping model. (d) Stack-power versusdip-angle. We can see that the maximum stack-power corresponds to the dipvalue of 10.8◦, which is close to the true dip value of 11◦. . . . . . . . . . . . . . 137xxFigure 6.10 Modulus of angle-dependent reflectivity coefficients in two-layer model at z = 300and 960 m and x = 2250 m. (a) Reflectivity coefficients at z = 300 m andx = 2250 m. Reflectivity coefficients at z = 900 m (b) with no dip θ = 0◦ and(c) with the dip obtained via the method described above (θ = 10.8◦). . . . . . . 138Figure 6.11 Comparison of working with CIGs versus CIPs. (a) True velocity model. Theyellow line indicates the location along which we computed the CIGs and thegreen dot is the location where we extracted the CIPs. (b,c) CIGs extractedalong vertical and horizontal offsets directions in case of vertical reflector. (d)CIPs extracted along vertical (z = 1.2 km, x = 1 km) reflector. (e,f) CIGsextracted along vertical and horizontal offsets directions in case of horizontalreflector. (g) CIPs extracted along horizontal (z = 1.5 km, x = 4.48 km) reflector.139Figure 6.12 Randomized trace estimation. (a,b) True and initial velocity model. Objectivefunctions for WEMVA based on the Frobenius norm, as a function of velocity per-turbation using the complete matrix (blue line) and error bars of approximatedobjective function evaluated via 5 different random probing with (c) K=10 and(d) K = 80 for the Marmousi model. . . . . . . . . . . . . . . . . . . . . . . . . . 140Figure 6.13 WEMVA on Marmousi model with probing technique for a good starting model.(a,b) True and initial velocity models. Inverted model using (c) K = 10 and (b)K = 100 respectively. We can clearly see that even 10 probing vectors are goodenough to start revealing the structural information. . . . . . . . . . . . . . . . . 141Figure 6.14 WEMVA on Marmousi model with probing technique for a poor starting veloc-ity model and 8-25 Hz frequency band. (a,b) True and initial velocity models.Inverted model using (c) K = 100. (d) Inverted velocity model overlaid with acontour plot of the true model perturbation. We can see that we captures theshallow complexity of the model reasonably well when working with a realisticseismic acquisition and inversion scenario. . . . . . . . . . . . . . . . . . . . . . . 142Figure 7.1 To understand the inherent redundancy of seismic data, we analyze the decayof singular values of windowed versus non-windowed cases. We see that fullysampled seismic data volumes have fastest decay of singular values, whereas,smaller window sizes result in the slower decay rate of the singular values. . . . . 
Figure 7.2 To visualize the low-rank nature of image volumes, I form a full-subsurface offset extended image volume using a subsection of the Marmousi model and analyze the decay of singular values. (a) Complex subsection of the Marmousi model with highly dipping reflectors and strong lateral variations in the velocity, and (b) the corresponding full-subsurface offset extended image volume at 5 Hz. (c) To demonstrate the low-rank nature of image volumes, I plot the decay of singular values, where I observed that we only require the first 10 singular vectors to get a reconstruction error of 10^{-4}.

Acknowledgements

First and foremost, I want to express my sincere gratitude to my advisor, Professor Dr. Felix J. Herrmann. I am deeply indebted to him for his continuous support of my research, for believing in me, and for his patience, motivation, and immense knowledge, all of which have made my PhD experience productive. He has taught me, both consciously and unconsciously, the best practices in conducting scientific research.

I would also like to thank my committee members, Professor Dr. Eldad Haber, Professor Dr. Chen Greif, and Professor Dr. Ozgur Yilmaz, for serving on my supervisory committee and for generously offering their time, support, and invaluable advice.

I would like to express my heartiest thanks to Dr. Aleksandr Aravkin, Dr. Tristan van Leeuwen, Dr. Hassan Mansour and Dr. Rongrong Wang for their mentorship and friendship, for sharing their expertise and experience, and for providing valuable feedback throughout my research. I would especially like to thank our late post-doc Dr. Ernie Esser (1980-2015) for his friendship, guidance and late-evening conversations on research ideas. I miss cycling with him around Vancouver.

I would like to show my special appreciation to Henryk Modzelewski and Miranda Joyce for their support, friendship and generous time during my stay in the SLIM lab, and to express my special thanks to them for being great souls, always ready to help with a smile. I would also like to thank Manjit Dosanjh and Ian Hanlon for their support and help during my first year in the SLIM group.

I am forever thankful to my colleagues at the SLIM lab for their friendship and support, and for creating a cordial working environment. Many thanks to Haneet Wason, Xiang Li, Ning Tu, Shashin Sharan, Felix Oghenekohwo, Curt Da Silva, Oscar Lopez, Zhilong Fang, Art Petrenko, Luz Angélica Caudillo Mata, Tim Lin, Brendan Smithyman, Bas Peters, Ali Alfaraj and the other members of the SLIM group.

My grateful appreciation goes to Dr. James Rickett for giving me the opportunity to do an internship with Schlumberger. I would also like to thank Dr. Can Evren Yarman and Dr. Ivan Vasconcelos for their mentorship at Schlumberger, and for creating an enjoyable working environment.

Many thanks to Dr. Eric Verschuur for providing the Gulf of Suez dataset, which I used in Chapters 2 and 4; to PGS for providing the North Sea dataset; and to Chevron for providing the Gulf of Mexico dataset that I used in Chapter 3. I would also like to thank the BG Group for providing the synthetic 3D Compass velocity model and the 5D seismic data that I used in Chapters 2, 3 and 5. Many thanks to the authors of IWave, SPGℓ1, SPG-LR, Madagascar, the Marmousi velocity model and the BP salt model, which I used throughout the thesis.
I would also like to acknowledge the collaboration of the SENAI CIMATEC Supercomputing Center for Industrial Innovation, Bahia, Brazil, and the support of the BG Group and the International Inversion Initiative Project.

Finally, I acknowledge the people who mean a lot to me: my family, my brother and sisters, for their continuous and unparalleled love, help and support. My heartfelt regard goes to my father-in-law and mother-in-law for their love and moral support. I owe thanks to a very special person, my wife Monika, for making countless sacrifices to help me get to this point, and to my son Rutva for abiding my ignorance and continually providing the requisite breaks from research with his innocent smiles. Monika's unconditional love and support helped me get through this period in the most positive way, and I dedicate this milestone to her.

This work was in part financially supported by the NSERC Collaborative Research and Development Grant DNOISE II (375142-08). This research was carried out as part of the SINBAD project with support from the following organizations: BG Group, BGP, CGG, Chevron, ConocoPhillips, DownUnder GeoSolutions, Hess, Petrobras, PGS, Sub-Salt Solutions, Schlumberger, and Woodside.

Chapter 1

Introduction

Exploration geophysics is an applied branch of geophysics whose aim is to predict the physical properties of the subsurface of the earth, along with its anomalies, using data acquired over the surface of the earth. These data range over seismic, gravitational, magnetic, electrical and electromagnetic measurements. Seismic methods are widely used in all the major oil and gas exploration activities around the world because of their superior ability to resolve small-scale anomalies compared to other existing geophysical methods. Although the acquisition principles are identical for land and marine environments, the operational details, such as the geometry and type of receiver systems, the density of measurements made over a given area, and the type of sensors used, differ between the two environments [Caldwell and Walker, January 24 2017]. All seismic surveys involve the application of a seismic energy source at discrete surface locations, such as a vibroseis truck or shot-hole dynamite on land, or air guns at sea (Figure 1.1). In this thesis, I focus my investigation on issues related to marine seismic data acquisition, processing and inversion. Figure 1.2 illustrates the different receiver geometries used in marine seismic surveying, while Figure 1.3 provides a list of the different types of surveys.

As outlined in [Caldwell and Walker, January 24 2017], in towed-streamer acquisition, a cable containing the hydrophones is towed, or streamed, behind a moving vessel; the length of the streamer can vary between 3 and 12 kilometres, depending upon the depth of the geological target of interest. In ocean-bottom surveys, the recording system contains a hydrophone and possibly 3-component geophones at each recording location. In vertical seismic profiling, 3-component geophones are placed along vertical and/or horizontal wells. Seismic surveys are acquired either along single lines or over an area, termed 2D and 3D seismic acquisition, respectively. In 2D acquisition, the sources and receivers are placed along a single sail line below the sea surface, with the underlying assumption that the reflections are generated in the same 2D vertical plane lying below the sail line.
The processing of 2D seismic data generates a 2D image of the subsurface with detailed geological features, i.e., a map of the locations beneath the sail line where the acoustic properties of the earth change; hence the name 2D [Caldwell and Walker, January 24 2017].

Figure 1.1: Schematic representation of (a) marine and (b) land seismic data acquisition. Source: [Enjolras, January 24 2017, RigZone, January 24 2017].

Figure 1.4 shows a standard 2D survey, where the 2D lines are placed on a grid. While 2D surveys are economical, the gaps between the receiver lines are in kilometres, which makes interpretation of the subsurface problematic if there are strong lateral variations in the earth in the cross-line direction. In that case, the 2D assumption fails and we are not able to produce a correct image of the subsurface. This limitation is overcome using 3D acquisition (Figure 1.4), where seismic surveys are acquired over an area, resulting in a 3D image of the subsurface; hence the term 3D seismic. Although 3D surveys capture the geological features in detail, they are very expensive because they involve greater investment than 2D surveying in terms of logistics, turnaround acquisition time and sophisticated equipment. Apart from 2D and 3D, seismic surveys are often acquired repeatedly over a producing hydrocarbon field, known as 4D (time-lapse) surveys, where the interval between the surveys can be on the order of months or years. The objective of 4D surveys is to estimate reservoir changes as a result of production and/or injection of fluid in the reservoir, by comparing different datasets acquired over a period of time [Caldwell and Walker, January 24 2017].

Figure 1.2: Illustration of various marine acquisition geometries, namely towed-streamer (1), an ocean-bottom geometry (2), a buried seafloor array (3), and a vertical seismic profile (4). All seismic surveys involve a source (S), which is typically an airgun for a marine survey, and receivers (black dots) that are mainly hydrophones and/or 3-component geophones. Source: Caldwell and Walker [January 24 2017].

Figure 1.3: This table summarizes the different types of marine seismic surveys. Source: Caldwell and Walker [January 24 2017].

Figure 1.4: Here, we illustrate the basic difference between the 2D and 3D survey geometry. The area covered by the two surveys is exactly identical, as suggested by the dashed contour lines. Source: Caldwell and Walker [January 24 2017].

Seismic data acquisition results in millions of recorded traces, with reflection events generated at interfaces between rock layers in the subsurface having different rock properties. Each trace displays the data associated with its common depth point as a continuous function of pressure oscillating on either side of a zero-amplitude line (Figure 1.5a). The amplitude of the wiggle reflects how large the change in rock properties is between two layers. By Society of Exploration Geophysicists (SEG) convention [AAPG, January 24 2017], a reflection event is displayed as a positive peak (positive polarity) if it is generated by an increase in acoustic impedance, such as from a slow-velocity shale to a high-velocity dolomite. If a reflection event is generated by a decrease in acoustic impedance, then it is displayed as a trough and the polarity is negative. This convention is called normal polarity.

Figure 1.5: Various types of seismic displays: (a) wiggle trace, (b) variable area, (c) variable area wiggle trace, and (d) variable density. Copyright: Conoco Inc.
In seismic processing, this reflection energy is mapped to an image and/or attributes of the underlying geological structures, which are used to infer the physical rock properties.

In realistic seismic data acquisition, each subsurface point is sampled multiple times to increase the fold, where fold is a measure of the redundancy of common-midpoint seismic data, equal to the number of offset receivers that record a given data point in a given bin. Fold improves the signal-to-noise ratio and reduces random environmental noise in the data during the stacking (summation) process used to produce the image of the subsurface. Various factors, such as the depth and thickness of the zone of interest, surface conditions and topography, play an important role in the design of the seismic acquisition layout. Source and receiver grid points are controlled by the bin size, which determines how densely the subsurface is sampled. Bin sizes are smaller in the target area of interest, where the aim is to get a higher-resolution image of the subsurface. Apart from the bin size, the frequency spectrum of the data controls the vertical resolution, since higher frequencies have shorter wavelengths and provide a more detailed image of the subsurface.
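To make the fold bookkeeping concrete, the sketch below computes the textbook rule-of-thumb nominal fold for a 2D common-midpoint geometry. The formula and the function name are our own illustrative assumptions, not a routine from this thesis, and it is a back-of-the-envelope estimate rather than a survey-design tool.

```python
def cmp_fold(n_channels, receiver_spacing, shot_spacing):
    """Nominal 2D common-midpoint fold (rule of thumb).

    n_channels       -- live receiver channels per shot
    receiver_spacing -- receiver interval in metres
    shot_spacing     -- shot interval in metres
    """
    # Midpoints are spaced at half the receiver interval, hence the factor 2.
    return n_channels * receiver_spacing / (2.0 * shot_spacing)

# Example: 240 channels at 25 m intervals, shooting every 50 m -> fold of 60.
print(cmp_fold(240, 25.0, 50.0))  # 60.0
```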
1.1 Problem statement

Realistically, conventional oil and gas fields are increasingly difficult to explore and produce, calling for more complex wave-equation based inversion (WEI) algorithms that require dense, long-offset sampling and wide-azimuthal coverage. Due to budgetary and/or physical constraints, seismic data acquisition involves coarser sampling (subsampling) along either sources or receivers, i.e., seismic data along spatial sampling grids are typically sampled below Nyquist because of cost and certain physical constraints. However, some seismic data processing and imaging techniques, such as surface-related multiple estimation and amplitude-versus-offset (AVO) and amplitude-versus-azimuth (AVAz) analyses, require densely sampled seismic data to avoid acquisition-related artifacts in the inverted model of the subsurface. To mitigate these artifacts, we rely on seismic data interpolation methods that produce densely, periodically sampled data, preferably at or above Nyquist. Figures 1.6(a, b) show a fully sampled and a 50% subsampled common-receiver gather extracted from a 2D seismic data acquisition where sources and receivers are placed below the sea surface. The objective of the different types of interpolation algorithms is to estimate the missing traces from an undersampled dataset with minimal loss of coherent energy, as shown in Figures 1.6(c, d) and (e, f).

Practitioners have also proposed acquiring simultaneous source surveys to reduce costs by reducing acquisition time and environmental impact (shorter disturbance of an area). Simultaneous acquisition also mitigates sampling-related issues and improves the quality of seismic data: single and/or multiple source vessels fire sources at near-simultaneous or slightly random times, resulting in overlapping shot records (also known as blending). In general, there are two different types of marine source surveys, namely static and dynamic surveys. During static surveys, sources are towed behind the source vessels and receivers are fixed at the ocean floor, whereas during dynamic surveys, both sources and receivers are towed behind the source vessels. The current paradigm for simultaneous towed-streamer marine acquisition (dynamic geometry) incorporates low variability in source firing times (i.e., delays of at most 1 or 2 seconds), since both the sources and receivers are moving. Figures 1.7 and 1.8 show two instances of dynamic geometry, namely simultaneous long offset [Long et al., 2013] and over/under [Hill et al., 2006, Moldoveanu et al., 2007, Lansley et al., 2007, Long, 2009, Hegna and Parkes, 2012, Torben Hoy, 2013], where we can see the low degree of randomness in the overlapping shots in simultaneous data, which presents a challenging case for source separation according to compressed sensing. For static geometry, [Wason and Herrmann, 2013b, Li et al., 2013] proposed an alternative sampling strategy for simultaneous acquisition (time-jittered marine), leveraging ideas from compressed sensing (CS), where a single source vessel sails across an ocean-bottom array continuously firing two airgun arrays at jittered source locations and time instances, with receivers recording continuously (Figure 1.9). This results in a continuous, time-jittered, simultaneous recording of seismic data (Figure 1.9). Simultaneous acquisitions are economically viable, since the overall acquisition becomes compressed: we record overlapping shot records. While these cost savings are very welcome, especially in the current downturn, subsequent seismic data processing and imaging workflows expect recordings where the data are collected for sequential shots. Our task is therefore to turn continuous recordings with overlapping shots into sequential recordings with the least amount of artifacts and the least loss of energy at late arrival times. A practical (simultaneous) source separation technique is thus required, which aims to recover unblended (non-overlapping) data—as acquired during conventional acquisition—from simultaneous data. However, all the interpolation and source separation workflows for 3D seismic data acquisition result in exponential growth in data volumes, because the recovered seismic data contain traces on the order of millions, and they place prohibitive demands on computational resources. Given these data volumes and resources, one of the key challenges is to extract meaningful information from these huge datasets (which may in the near future grow as large as petabytes) in computationally efficient ways, i.e., to reduce the turnaround time of each of the processing and inversion steps, apart from storing the massive volumes of interpolated and separated seismic data on disks.
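Since jittered firing is central to the acquisition designs discussed above, the following minimal numpy sketch illustrates the idea: shots are placed on a nominal periodic schedule and then perturbed by bounded uniform random shifts, so that the blending overlap becomes incoherent rather than periodic. All names and parameter choices here are illustrative assumptions, not the acquisition code used in later chapters.

```python
import numpy as np

def jittered_firing_times(n_shots, nominal_interval, jitter_fraction=0.5, seed=0):
    """Nominal periodic firing schedule plus bounded random jitter.

    Each shot fires within +/- jitter_fraction * nominal_interval of its
    periodic slot, mimicking time-jittered simultaneous acquisition.
    """
    rng = np.random.default_rng(seed)
    grid = np.arange(n_shots) * nominal_interval
    jitter = rng.uniform(-jitter_fraction, jitter_fraction, n_shots) * nominal_interval
    return grid + jitter

print(jittered_firing_times(5, nominal_interval=10.0))
```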
1.2 Objectives

The primary focus of this thesis is to propose fast computational techniques that are practical and robust for large-scale seismic data processing, namely missing-trace interpolation, source separation, and wave-equation based migration velocity analysis. The main objectives of this work are:

• to identify three basic principles of compressed sensing [Donoho, 2006b] for recovering seismic data volumes using rank-minimization based techniques, namely a low-rank transform domain, a rank-increasing sampling scheme, and a practical SVD-free rank-minimizing optimization scheme for large-scale seismic data processing;

• to decrease the cost of large-scale simultaneous source acquisition for towed streamers and ocean-bottom surveys by leveraging ideas from compressed sensing via randomization in source locations and time instances, followed by (SVD-free) computationally efficient source separation;

• to derive a computationally feasible two-way wave-equation based factorization principle that gives us access to the kinematics and amplitudes of full-subsurface offset extended image volumes without carrying out explicit cross-correlations between source and receiver wavefields for each shot.

Figure 1.6: Common-receiver gather. (a) Fully sampled and (b) 50% subsampled. The final goal is to recover the fully sampled data from the subsampled data with minimal loss of coherent energy. (c, e) Reconstruction results from two different types of interpolation and (d, f) corresponding residual plots. We can see that the interpolation results in (e, f) are better than (c, d) because the energy loss is small, especially at the cusp of the common-receiver gather.

Figure 1.7: Simultaneous long-offset acquisition, where an extra source vessel is deployed, sailing one spread-length ahead of the main seismic vessel (see Chapter 4 for more details). We record overlapping shot records in the field and separate them into non-overlapping shots using source separation based techniques.

1.3 Contributions

To our knowledge, this work represents the first instance where seismic data processing, namely missing-trace interpolation and source separation, is performed on full 5D seismic data volumes, avoiding window-based operations. In this work, I show the benefits of a large-scale SVD-free framework in terms of computational time and memory requirements. We also propose a novel matrix-vector formulation to extract information from full-subsurface offset extended image volumes, which overcomes the prohibitive computational and storage costs of forming such volumes, as they cannot be formed using conventional extended imaging workflows. I show that the proposed matrix-vector formulation offers new perspectives on the design and implementation of workflows that exploit information embedded in various types of subsurface extended images. I further demonstrate the benefits of this matrix-vector formulation to obtain local information, tied to individual subsurface points, that can serve as quality control for velocity analyses
We further address the computationalchallenges of using the matrix-based techniques for seismic data reconstruction, where we proposeto use either a (SVD-free) factorization based rank-minimization framework with the Pareto curveapproach or the factorization-based parallel matrix completion framework dubbed Jellyfish. Wealso examine the popular approach of windowing a large data volume into smaller data volumesto be processed in parallel, and empirically demonstrate how such a process does not respect theinherent redundancy present in the data, degrading reconstruction quality as a result. Finally,I demonstrate these observations on carefully selected real 2D seismic surveys and synthetic 3Dseismic surveys simulated using a complex velocity model provided by the BG Group. We also show8(a)Figure 1.8: Over/Under acquisition is an instance of low-variability in source firing times,i.e, two sources are firing within 1 (or 2) seconds (see Chapter 4 for more details).the computational advantages of matrix completion techniques over sparsity-promoting techniquesand tensor-based missing-trace reconstruction techniques.In Chapter 4, I extend the low-rank optimization based approaches to perform source separationin simultaneous towed-streamer marine acquisition, where I modified the matrix completion formu-lation to separate multiple sources acquired simultaneously. We address the challenge of source sep-aration for simultaneous towed-streamer acquisitions via two compressed sensing based approaches,namely sparsity-promotion and rank-minimization. We exploit the sparse structure of seismic datain the curvelet domain and low-rank structure of seismic data in the midpoint-offset domain. Ifurther incorporate the Hierarchical Semi-Separable matrix representation in rank-minimizationframework to exploit the low-rank structure of seismic data at higher frequencies. Finally, we illus-trate the performance of both the sparsity-promotion and rank-minimization based techniques bysimulating two simultaneous towed-streamer acquisition scenarios: over/under and simultaneouslong offset. A field data example from the Gulf of Suez for the over/under acquisition scenario isalso included. I further compare these two techniques with the NMO-based median filtering typeapproach.In Chapter 5, I use rank-minimization based techniques to present a computationally tractablealgorithm to separate simultaneous time-jittered continuous recording for a 3D ocean-bottom cablesurvey. First, I formulate a factorization based rank-minimization formulation that works on the9(a)Figure 1.9: Time-jittered marine continuous acquisition, where a single source vessel sailsacross an ocean-bottom array firing two airgun arrays at jittered source locations andtime instances with receivers recording continuouslytemporal-frequency domain using all monochromatic data matrices together. Then, I show theefficacy of proposed framework on a synthetic 3D seismic survey simulated on a complex geologicalvelocity model provided by the BG Group.In Chapter 6, we exploit the redundancy in extended image volumes, by first arranging theextended image volume as a matrix, followed by probing this matrix in a way that avoids explicitstorage and removes the customary and expensive loop over shots found in conventional extendedimaging. 
As a result, we end up with a matrix-vector formulation where I form different imagegathers and use them to perform amplitude-versus-angle and wave-equation migration velocityanalyses, without requiring prior information on the geologic dips. Next, I show how this factor-ization can be used to extract information on local geological dips and radiation patterns for AVApurposes, and how full-subsurface offset extended image volumes can be used to carry out auto-matic WEMVA. Each application is illustrated by carefully selected stylized numerical exampleson 2- and 3-D velocity models.10In Chapter 7, I conclude the work presented in this thesis and propose future research directions.11Chapter 2Fast methods for denoising matrixcompletion formulations, withapplications to robust seismic datainterpolation.2.1 SummaryRecent SVD-free matrix factorization formulations have enabled rank minimization for systemswith millions of rows and columns, paving the way for matrix completion in extremely large-scaleapplications, such as seismic data interpolation.In this paper, we consider matrix completion formulations designed to hit a target data-fittingerror level provided by the user, and propose an algorithm called LR-BPDN that is able to exploitfactorized formulations to solve the corresponding optimization problem. Since practitioners typi-cally have strong prior knowledge about target error level, this innovation makes it easy to applythe algorithm in practice, leaving only the factor rank to be determined.Within the established framework, we propose two extensions that are highly relevant to solvingpractical challenges of data interpolation. First, we propose a weighted extension that allows knownsubspace information to improve the results of matrix completion formulations. We show how thisweighting can be used in the context of frequency continuation, an essential aspect to seismicdata interpolation. Second, we propose matrix completion formulations that are robust to largemeasurement errors in the available data.We illustrate the advantages of LR-BPDN on collaborative filtering problem using the Movie-A version of this chapter has been published in SIAM Journal on Scientific Computing, 2014, vol 36, pagesS237-S266.12Lens 1M, 10M, and Netflix 100M datasets. Then, we use the new method, along with its robustand subspace re-weighted extensions, to obtain high-quality reconstructions for large scale seismicinterpolation problems with real data, even in the presence of data contamination.2.2 IntroductionSparsity- and rank-regularization have had significant impact in many areas over the last severaldecades. Sparsity in certain transform domains has been exploited to solve underdetermined linearsystems with applications to compressed sensing Donoho [2006a], Cande`s and Tao [2006], nat-ural image denoising/inpainting Starck et al. [2005], Mairal et al. [2008], Mansour et al. [2010],and seismic image processing Herrmann and Hennenfent [2008a], Neelamani et al. [2010], Her-rmann et al. [2012a], Mansour et al. [2012b]. Analogously, low-rank structure has been used toefficiently solve matrix completion problems, such as the Netflix Prize problem, along with manyother applications, including control, system identification, signal processing, and combinatorialoptimization Fazel [2002], Recht et al. [2010b], Cande`s et al. 
[2011], and seismic data interpolation and denoising Oropeza and Sacchi [2011].

Regularization formulations for both types of problems introduce a regularization functional of the decision variable, either by adding an explicit penalty to the data-fitting term,

$$\min_x \; \rho(\mathcal{A}(x) - b) + \lambda \|x\|, \qquad (QP_\lambda)$$

or by imposing constraints,

$$\min_x \; \rho(\mathcal{A}(x) - b) \quad \text{s.t.} \quad \|x\| \leq \tau. \qquad (LASSO_\tau)$$

In these formulations, x may be either a matrix or a vector, ‖·‖ may be a sparsity or low-rank promoting penalty such as the ℓ1 norm ‖·‖₁ or the matrix nuclear norm ‖·‖∗, A may be any linear operator that predicts the observed data vector b of size p × 1, and ρ(·) is typically taken to be the 2-norm.

These approaches require the user to provide regularization parameters whose values are typically not known ahead of time, and may otherwise require fitting or cross-validation procedures. The alternate formulation

$$\min_x \; \|x\| \quad \text{s.t.} \quad \rho(\mathcal{A}(x) - b) \leq \eta \qquad (BPDN_\eta)$$

has been successfully used for the sparse regularization of large-scale systems Berg and Friedlander [2008], and proposed for nuclear norm regularization Berg and Friedlander [2011]. This formulation requires the user to provide an acceptable error bound in the data-fitting domain, and is preferable for many applications, especially when practitioners know (or are able to estimate) an approximate data error level. We refer to (BPDNη), (QPλ) and (LASSOτ) as regularization formulations, since all three limit the space of feasible solutions by considering the nuclear norm of the decision variable.

A practical implementation of (BPDNη) for large-scale matrix completion problems is difficult because of the large size of the systems of interest, which makes SVD-based approaches intractable. For example, seismic inverse problems work with 4D data volumes, and matricization of such data creates structures whose size is a bottleneck for standard low-rank interpolation approaches. Fortunately, a growing literature on factorization-based rank optimization has enabled (QPλ) and (LASSOτ) matrix completion formulations for extremely large-scale systems that avoid costly SVD computations Rennie and Srebro [2005b], Lee et al. [2010b], Recht and Ré [2011]. These formulations are non-convex, and therefore do not have the same convergence guarantees as convex formulations for low-rank recovery. In addition, they require an a priori rank specification, adding a rank constraint to the original problem. Nonetheless, factorized formulations can be shown to avoid spurious local minima, so that if a local minimum is found, it corresponds to the global minimum of the convex formulation, provided the chosen factor rank was high enough. In addition, computational methods for factorized formulations are more efficient, mainly because they completely avoid SVD (or partial SVD) computations. In this paper, we extend the framework of Berg and Friedlander [2011] to incorporate matrix factorization ideas, enabling the (BPDNη) formulation for rank regularization of large-scale problems, such as seismic data interpolation.

While the formulations in Berg and Friedlander [2008, 2011] choose ρ in (BPDNη) to be the quadratic penalty, recent extensions Aravkin et al. [2013a] allow more general penalties to be used. In particular, robust convex (see e.g. Huber [1981]) and nonconvex penalties (see e.g. Lange et al. [1989], Aravkin et al.
[2012]) can be used to measure misfit error in the (BPDNη) formulation. We incorporate these extensions into our framework, allowing matrix completion formulations that are robust to data contamination.

Finally, subspace information can be used to inform the matrix completion problem, analogously to how partial support information can be used to improve the sparse recovery problem Friedlander et al. [2011]. This idea is especially important for seismic interpolation, where frequency continuation is used. We show that subspace information can be incorporated into the proposed framework using reweighting, and that the resulting approach can improve recovery SNR in a frequency continuation setting. Specifically, subspace information obtained at lower frequencies can be incorporated into reweighted formulations for recovering data at higher frequencies.

To summarize, we design factorization-based formulations and algorithms for matrix completion that

1. achieve a specified target misfit level provided by the user (i.e., solve (BPDNη));
2. achieve recovery in spite of severe data contamination, using robust cost functions ρ in (BPDNη);
3. incorporate subspace information into the inversion using re-weighting.

The paper proceeds as follows. In section 2.3, we briefly discuss and compare the formulations (QPλ), (LASSOτ), and (BPDNη). We also review the SPGℓ1 algorithm Berg and Friedlander [2008] for solving (BPDNη), along with recent extensions of the (BPDNη) formulation developed in Aravkin et al. [2013a]. In section 2.4, we formulate the convex relaxation of the rank optimization problem, and review SVD-free factorization methods. In section 2.5, we extend analysis from Burer and Monteiro [2003] to characterize the relationship between local minima of rank-optimization problems and their factorized counterparts in a general setting that captures all formulations of interest here. In section 2.6, we propose an algorithm that combines matrix factorization with the approach developed by Berg and Friedlander [2008, 2011], Aravkin et al. [2013a]. We develop the robust extensions in section 2.7, and the reweighting extensions in section 2.8. Numerical results for both the Netflix Prize problem and for seismic trace interpolation of real data are presented in section 2.9.

2.3 Regularization formulations

Each of the three formulations (QPλ), (LASSOτ), and (BPDNη) controls the tradeoff between data fitting and a regularization functional using a regularization parameter. However, there are important differences between them.

From an optimization perspective, most algorithms solve (QPλ) or (LASSOτ), together with a continuation strategy to modify τ or λ; see e.g. Figueiredo et al. [2007], Berg and Friedlander [2008]. There are also a variety of methods to determine optimal values of these parameters; see e.g. Giryes et al. [2011] and the references therein. However, from a modeling perspective, (BPDNη) has a significant advantage, since the η parameter can be directly interpreted as a noise floor, or a threshold beyond which noise is commensurate with the data. In many applications, such as seismic data interpolation, scientists have good prior knowledge of the noise floor. In the absence of such knowledge, one still wants an algorithm that returns a reasonable solution given a fixed computational budget, and some formulations for solving (BPDNη) satisfy this requirement.

van den Berg and Friedlander Berg and Friedlander [2008] proposed the SPGℓ1 algorithm for optimizing (BPDNη), which captures the features discussed above.
Their approach solves (BPDNη) using a series of inexact solutions to (LASSOτ). The bridge between these problems is provided by the value function v : ℝ → ℝ,

$$v(\tau) = \min_x \; \rho(\mathcal{A}(x) - b) \quad \text{s.t.} \quad \|x\| \leq \tau, \qquad (2.1)$$

where the particular choice ρ(·) = ‖·‖₂ was made in Berg and Friedlander [2008, 2011]. The graph of v(τ) is often called the Pareto curve. The (BPDNη) problem can be solved by finding the root of v(τ) = η using Newton's method:

$$\tau_{k+1} = \tau_k - \frac{v(\tau_k) - \eta}{v'(\tau_k)}, \qquad (2.2)$$

where the quantities v(τ) and v′(τ) can be approximated by solving (LASSOτ) problems. In the context of sparsity optimization, (BPDNη) and (LASSOτ) are known to be equivalent for certain values of the parameters τ and η. Recently, these results were extended to a much broader class of formulations (see [Aravkin et al., 2013a, Theorem 2.1]). Indeed, convexity of ρ is not required for this theorem to hold; instead, activity of the constraint at the solution plays a key role. The main hypothesis requires that the constraint is active at any solution x, i.e., ρ(b − A(x)) = σ and ‖x‖ = τ.

For any ρ, v(τ) is non-increasing, since a larger τ allows a bigger feasible set. For any convex ρ in (2.1), v(τ) is convex by inf-projection [Rockafellar and Wets, 1998, Proposition 2.22]. When ρ is also differentiable, it follows from [Aravkin et al., 2013a, Theorem 5.2] that v(τ) is differentiable, with derivative given in closed form by

$$v'(\tau) = -\|\mathcal{A}^* \nabla\rho(b - \mathcal{A}\bar{x})\|_d, \qquad (2.3)$$

where A∗ is the adjoint of the operator A, ‖·‖_d is the dual norm of ‖·‖, and x̄ solves (LASSOτ). For example, when the norm ‖·‖ in (2.1) is the 1-norm, the dual norm is the infinity norm, and (2.3) evaluates to the maximum absolute entry of the gradient. In the matrix case, ‖·‖ is typically taken to be the nuclear norm; then ‖·‖_d is the spectral norm, so (2.3) evaluates to the maximum singular value of A∗∇ρ(r).

To design effective optimization methods, one has to be able to evaluate v(τ) and to compute the dual norm ‖·‖_d. Evaluating v(τ) requires solving a sequence of optimization problems (2.1) for the sequence of τ given by (2.2). A key idea that makes the approach of Berg and Friedlander [2008] very useful in practice is solving the LASSO problems inexactly, with increasing precision as the overarching Newton's method proceeds. The net computation is therefore much smaller than what would be required if one solved each LASSO problem to a pre-specified tolerance. For large-scale systems, the method of choice is typically a first-order method, such as spectral projected gradient, where after taking a step along the negative gradient of the mismatch function ρ(A(x) − b), the iterate is projected onto the norm ball ‖·‖ ≤ τ. Fast projection is therefore a necessary requirement for a tractable implementation, since it is used in every iteration of every subproblem.

With the inexact strategy, the convergence rate of the Newton iteration (2.2) may depend on the conditioning of the linear operator A [Berg and Friedlander, 2008, Theorem 3.1]. For well-conditioned problems, in practice one often needs only a few (6-10) (LASSOτ) solves to find the solution of (BPDNη) for a given η. As the optimization proceeds, (LASSOτ) problems for larger τ warm-start from the solution corresponding to the previous τ.
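As a concrete toy illustration of the mechanics just described, the sketch below runs the Newton iteration (2.2) on the Pareto curve for the sparse case, with ρ = ‖·‖₂ and the ℓ1-ball LASSO subproblem solved by a plain projected gradient method. This is a simplified stand-in for SPGℓ1 (no spectral step lengths, no inexactness schedule); all function names are our own.

```python
import numpy as np

def project_l1(x, tau):
    """Euclidean projection onto the l1 ball {x : ||x||_1 <= tau}."""
    if tau <= 0:
        return np.zeros_like(x)
    if np.abs(x).sum() <= tau:
        return x
    u = np.sort(np.abs(x))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, x.size + 1) > css - tau)[0][-1]
    theta = (css[k] - tau) / (k + 1.0)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lasso(A, b, tau, x0, iters=200):
    """Inexact LASSO_tau solve by plain projected gradient."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = project_l1(x0, tau)
    for _ in range(iters):
        x = project_l1(x - step * A.T @ (A @ x - b), tau)
    return x

def bpdn(A, b, eta, newton_iters=10):
    """Root finding v(tau) = eta on the Pareto curve via iteration (2.2)."""
    tau, x = 0.0, np.zeros(A.shape[1])
    for _ in range(newton_iters):
        x = lasso(A, b, tau, x)                     # approximate v(tau)
        r = b - A @ x
        v = np.linalg.norm(r)
        if v <= eta:
            break
        dv = -np.linalg.norm(A.T @ r, np.inf) / v   # (2.3) with rho = ||.||_2
        tau -= (v - eta) / dv                       # Newton update (2.2)
    return x

# Tiny demo: the residual of the returned solution is driven toward eta.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:4] = 1.0
b = A @ x_true
x = bpdn(A, b, eta=1e-3 * np.linalg.norm(b))
print(np.linalg.norm(A @ x - b))
```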
The dual norm in this case is ‖σ‖∞, which is relatively easy to find for verylarge systems.Unfortunately, solving the optimization problem in (2.1) is much more difficult. For the largesystem case, this requires repeatedly projecting onto the set ‖X‖∗ ≤ τ , which which means repeatedSVD or partial SVD computations. This is not feasible for large systems.Factorization-based approaches allow matrix completion for extremely large-scale systems byavoiding costly SVD computations Rennie and Srebro [2005b], Lee et al. [2010a], Recht and Re´[2011]. The main idea is to parametrize the matrix X as a product,X = LRT , (2.4)and to optimize over the factors L,R. If X ∈ Rn×m, then L ∈ Rn×k, and R ∈ Rm×k. Thedecision variable therefore has dimension k(n + m), rather than nm; giving tremendous savingswhen k  m,n. The asymptotic computational complexity of factorization approaches is the sameas that of partial SVDs, as both methods are dominated by an O(nmk) cost; the former having toform X = LRT , and the latter computing partial SVDs, at every iteration. However, in practice theformer operation is much simpler than the latter, and factorization methods outperform methodsbased on partial SVDs. In addition, factorization methods keep an explicit bound on the rank ofall iterates, which might otherwise oscillate, increasing the computational burden.Using representation (2.4), the projection problem in (LASSOτ ) is trivial. For the nuclear norm,we have Rennie and Srebro [2005b]‖X‖∗ = infX=LRT12∥∥∥∥∥[LR]∥∥∥∥∥2F, (2.5)and therefore for any partiular L,R, we have‖X‖∗ = ‖LRT ‖∗ ≤ 12∥∥∥∥∥[LR]∥∥∥∥∥2F. (2.6)The nuclear norm is not the only formulation that can be factorized. Lee et al. [2010b] haverecently introduced the max norm, which is closely related to the nuclear norm and has beensuccessfully used for matrix completion.172.5 Local minima correspondence between factorized and convexformulationsAll of the algorithms we propose for matrix completion are based on the factorization approachdescribed above. Even though the change of variables X = LRT makes the problem nonconvex,it turns out that for a surprisingly general class of problems, this change of variables does notintroduce any extraneous local minima, and in particular any local minimum of the factorized(non-convex) problem corresponds to a local (and hence global) minimum of the correspondingun-factorized convex problem. This result appeared in [Burer and Monteiro, 2003, Proposition2.3] in the context of semidefinite programming (SDP); however, it holds in general, as the authorspoint out [Burer and Monteiro, 2003, p. 431].Here, we state the result for a broad class of problems, which is general enough to capture allof our formulations of interest. In particular, the continuity of the objective function is the mainhypothesis required for this correspondence. It is worthwhile to emphasize this, since in Section 2.7,we consider smooth non-convex robust misfit penalties for matrix completion, which give impressiveresults (see figure 2.11).For completeness, we provide a proof in the appendix.Theorem 1 (General Factorization Theorem) Consider an optimization problem of the formminZ0f(Z)s.t. gi(Z) ≤ 0 i = 1, . . . , nhj(Z) = 0 j = 1, . . . ,mrank(Z) ≤ r,(2.7)where Z ∈ Rn×n is positive semidefinite, and f, gi, hi are continuous. Using the change of variableZ = SST , take S ∈ Rn×r, and consider the problemminSf(SST )s.t. gi(SST ) ≤ 0 i = 1, . . . , nhj(SST ) = 0 j = 1, . . . ,m(2.8)Let Z¯ = S¯S¯T , where Z¯ is feasible for (2.7). 
2.5 Local minima correspondence between factorized and convex formulations

All of the algorithms we propose for matrix completion are based on the factorization approach described above. Even though the change of variables X = LR^T makes the problem nonconvex, it turns out that for a surprisingly general class of problems this change of variables does not introduce any extraneous local minima; in particular, any local minimum of the factorized (non-convex) problem corresponds to a local (and hence global) minimum of the corresponding un-factorized convex problem. This result appeared in [Burer and Monteiro, 2003, Proposition 2.3] in the context of semidefinite programming (SDP); however, it holds in general, as the authors point out [Burer and Monteiro, 2003, p. 431].

Here, we state the result for a broad class of problems, general enough to capture all of our formulations of interest. In particular, continuity of the objective function is the main hypothesis required for this correspondence. It is worthwhile to emphasize this, since in Section 2.7 we consider smooth non-convex robust misfit penalties for matrix completion, which give impressive results (see Figure 2.11). For completeness, we provide a proof in the appendix.

Theorem 1 (General Factorization Theorem) Consider an optimization problem of the form

$$\begin{aligned} \min_{Z \succeq 0} \;\; & f(Z) \\ \text{s.t.} \;\; & g_i(Z) \leq 0, \quad i = 1, \ldots, n \\ & h_j(Z) = 0, \quad j = 1, \ldots, m \\ & \operatorname{rank}(Z) \leq r, \end{aligned} \qquad (2.7)$$

where Z ∈ ℝ^{n×n} is positive semidefinite, and f, g_i, h_j are continuous. Using the change of variable Z = SS^T with S ∈ ℝ^{n×r}, consider the problem

$$\begin{aligned} \min_{S} \;\; & f(SS^T) \\ \text{s.t.} \;\; & g_i(SS^T) \leq 0, \quad i = 1, \ldots, n \\ & h_j(SS^T) = 0, \quad j = 1, \ldots, m. \end{aligned} \qquad (2.8)$$

Let Z̄ = S̄S̄^T, where Z̄ is feasible for (2.7). Then Z̄ is a local minimum of (2.7) if and only if S̄ is a local minimum of (2.8).

At first glance, Theorem 1 seems too restrictive to apply to a recovery problem for a generic X, since it is formulated in terms of a PSD variable Z. However, we show that all of the formulations of interest can be expressed this way, due to the SDP characterization of the nuclear norm. It was shown in [Recht et al., 2010b, Sec. 2] that the nuclear norm admits a semidefinite programming (SDP) formulation. Given a matrix X ∈ ℝ^{n×m}, we can characterize the nuclear norm ‖X‖∗ in terms of an auxiliary positive semidefinite matrix Z ∈ ℝ^{(n+m)×(n+m)}:

$$\|X\|_* = \min_{Z \succeq 0} \; \frac{1}{2}\operatorname{Tr}(Z) \quad \text{subject to} \quad Z_{1,2} = Z_{2,1}^T = X, \qquad (2.9)$$

where Z_{1,2} is the upper-right n × m block of Z, and Z_{2,1} is the lower-left m × n block. More precisely, the matrix Z is a symmetric positive semidefinite matrix having the structure

$$Z = \begin{bmatrix} L \\ R \end{bmatrix} \begin{bmatrix} L^T & R^T \end{bmatrix} = \begin{bmatrix} LL^T & X \\ X^T & RR^T \end{bmatrix}, \qquad (2.10)$$

where L and R have the same rank as X, and Tr(Z) = ‖L‖²_F + ‖R‖²_F.

Using the characterization (2.9)-(2.10), we can show that a broad class of formulations of interest in this paper are in fact problems in the class characterized by Theorem 1.

Corollary 1 (General Matrix Lasso) Any optimization problem of the form

$$\begin{aligned} \min_{X} \;\; & f(X) \\ \text{s.t.} \;\; & \|X\|_* \leq \tau \\ & \operatorname{rank}(X) \leq r, \end{aligned} \qquad (2.11)$$

where f is continuous, has an equivalent problem in the class of problems (2.7) characterized by Theorem 1.

Proof 1 Using (2.9), write (2.11) as

$$\begin{aligned} \min_{Z \succeq 0} \;\; & f(\mathcal{R}(Z)) \\ \text{s.t.} \;\; & \operatorname{Tr}(Z) \leq \tau \\ & \operatorname{rank}(Z) \leq r, \end{aligned} \qquad (2.12)$$

where R(Z) extracts the upper-right n × m block of Z. It is clear that if rank(Z) ≤ r, then rank(X) ≤ r, so every solution feasible for the problem in Z is feasible for the problem in X by (2.9). On the other hand, we can use the SVD of any matrix X of rank r to write X = LR^T, with rank(L) = rank(R) = r, and then the matrix Z in (2.10) has rank r, contains X in its upper-right-hand corner, and has as its trace the nuclear norm of X. In particular, if X = UΣV^T, we can use L = U√Σ and R = V√Σ to get this representation. Therefore, every feasible point for the X problem has a corresponding Z.

2.6 LR-BPDN algorithm

The factorized formulations in the previous section have been used to design several algorithms for large-scale matrix completion and rank minimization Lee et al. [2010b], Recht and Ré [2011]. However, all of these formulations take the form (QPλ) or (LASSOτ). The (LASSOτ) formulation enjoys a natural relaxation interpretation, see e.g. Herrmann et al. [2012b]; on the other hand, a lot of work has focused on methods for λ-selection in (QPλ) formulations, see e.g. Giryes et al. [2011]. Both formulations require some identification procedure for the parameters λ and τ.

Instead, we propose to use the factorized formulations to solve the (BPDNη) problem by traversing the Pareto curve of the nuclear norm minimization problem. In particular, we integrate the factorization procedure into the SPGℓ1 framework, which allows us to find the minimum-rank solution by solving a sequence of factorized (LASSOτ) subproblems (2.14). The cost of solving the factorized (LASSOτ) subproblems is relatively cheap, and the resulting algorithm takes advantage of the inexact subproblem strategy in Berg and Friedlander [2008]. For the classic nuclear norm minimization problem, we define

$$v(\tau) = \min_X \; \|\mathcal{A}(X) - b\|_2^2 \quad \text{s.t.} \quad \|X\|_* \leq \tau, \qquad (2.13)$$

and find v(τ) = η using the iteration (2.2). However, rather than parameterizing our problem with X, which requires an SVD for each projection, we use the factorized formulation, exploiting Theorem 1 and Corollary 1. Specifically, when evaluating the value function v(τ), we solve the corresponding factorized formulation

$$\min_{L,R} \; \|\mathcal{A}(LR^T) - b\|_2^2 \quad \text{s.t.} \quad \frac{1}{2} \left\| \begin{bmatrix} L \\ R \end{bmatrix} \right\|_F^2 \leq \tau, \qquad (2.14)$$

using decision variables L, R with a fixed number k of columns each.
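For reference, this is what the trivial projection step for (2.14) looks like: because the constraint is a Frobenius-norm ball in the stacked variable [L; R], one global rescaling of both factors replaces the SVD-based projection onto the nuclear-norm ball. A minimal sketch with our own naming.

```python
import numpy as np

def project_factors(L, R, tau):
    """Project (L, R) onto {0.5 * (||L||_F^2 + ||R||_F^2) <= tau}."""
    nrm = 0.5 * (np.linalg.norm(L, 'fro')**2 + np.linalg.norm(R, 'fro')**2)
    if nrm <= tau:
        return L, R
    s = np.sqrt(tau / nrm)   # after scaling, the constraint holds with equality
    return s * L, s * R
```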
By Theorem 1 and Corollary 1, any local solution of this problem corresponds to a local solution of the true LASSO problem, subject to a rank constraint rank(X) ≤ k. We use the residual A(LR^T) − b reconstructed from (2.14) to evaluate both v(τ) and its derivative v′(τ). When the rank of L, R is large enough, a local minimum of (2.14) corresponds to a local minimum of (2.13), and for any convex ρ, every local minimum of (LASSOτ) is also a global minimum. When the rank of the factors L and R is smaller than the rank of the optimal LASSO solution, the algorithm looks for local minima of the rank-constrained LASSO problem. Unfortunately, we cannot guarantee that the solutions we find are local minima of (2.14), rather than simply stationary points. Nonetheless, this approach works quickly and reliably in practice, as we show in our experiments.

Problem (2.14) is optimized using the spectral projected gradient algorithm. The gradient is easy to compute, and the projection requires rescaling all entries of L, R by a single value, which is fast, simple, and parallelizable.

To evaluate v′(τ), we use the formula (2.3) for the Newton step corresponding to the original (convex) problem in X; this requires computing the spectral norm (largest singular value) of

$$\mathcal{A}^*(b - \mathcal{A}(\bar{L}\bar{R}^T)),$$

where A∗ is the adjoint of the linear operator A, while L̄ and R̄ are the solutions of (2.14). The largest singular value of the above matrix can be computed relatively quickly using the power method. Again, at every update requiring v(τ) and v′(τ), we assume that our solution X̄ = L̄R̄^T is close to a local minimum of the true LASSO problem, but we do not have theoretical guarantees of this fact.

2.6.1 Initialization

The factorized LASSO problem (2.14) has a stationary point at L = 0, R = 0. This means in particular that we cannot initialize from this point. Instead, we recommend initializing from a small random starting point. Another possibility is to jump-start the algorithm, for example using the initialization technique of [Jain et al., 2013, Algorithm 1]. One can compute the partial SVD of the adjoint of the linear operator A applied to the observed data,

$$USV^T = \mathcal{A}^*b,$$

and initialize L and R as

$$L = U\sqrt{S}, \qquad R = V\sqrt{S}.$$

This initialization procedure can sometimes result in faster convergence than random initialization. Compared to random initialization, this method has the potential to reduce the runtime of the algorithm by 30-40% for smaller values of η; see Table 2.1. The key feature of any initialization procedure is to ensure that the starting value

$$\tau_0 = \frac{1}{2} \left\| \begin{bmatrix} L_0 \\ R_0 \end{bmatrix} \right\|_F^2$$

is less than the solution of the root-finding problem v(τ) = η for (BPDNη).
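A sketch of the "smart" initialization above, under the assumption that the measurements come from an entrywise sampling mask, so that A∗b is simply the zero-filled data matrix; scipy's svds supplies the k leading singular triplets. All names are our own, not the code of [Jain et al., 2013].

```python
import numpy as np
from scipy.sparse.linalg import svds

def smart_init(adjoint_b, k):
    """Spectral initialization of the factors (Jain et al. style sketch).

    adjoint_b -- A*(b): observed entries in place, zeros elsewhere
    k         -- factor rank (must satisfy k < min(adjoint_b.shape))
    """
    U, s, Vt = svds(adjoint_b, k=k)     # k leading singular triplets
    L = U * np.sqrt(s)                  # L0 = U sqrt(S)
    R = Vt.T * np.sqrt(s)               # R0 = V sqrt(S)
    tau0 = 0.5 * (np.linalg.norm(L, 'fro')**2 + np.linalg.norm(R, 'fro')**2)
    return L, R, tau0                   # tau0 should start below the BPDN root
```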
2.6.2 Increasing k on the fly

In factorized formulations, the user must specify a factor rank. From a computational perspective, it is better that the rank stay small; however, if it is too small, it may be impossible to solve (BPDNη) to a specified error level η. For some classes of problems where the true rank is known ahead of time (see e.g. Candès et al. [2013]), one is guaranteed that a solution will exist for a given rank. However, if necessary, the factor rank can be adjusted on the fly within our framework.

Table 2.1: Summary of the computational time (in seconds) for LR-BPDN, measuring the effect of random versus smart ([Jain et al., 2013, Algorithm 1]) initialization of L and R for factor rank k and relative error level η for (BPDNη). Comparison performed on the 1M MovieLens dataset. The type of initialization had almost no effect on the quality of the final reconstruction.

Random initialization
         k=10     k=20     k=30     k=50
η=0.5    3.54     5.46     4.04     8.31
η=0.3   11.90     6.14     8.42    20.84
η=0.2   86.53   107.88   148.12   166.92

Smart initialization
         k=10     k=20     k=30     k=50
η=0.5    5.01     5.75     6.84     8.24
η=0.3   11.15    18.88    12.38    21.02
η=0.2   58.78    84.29    95.07   114.21

Specifically, columns can be added to L and R on the fly, since

$$\begin{bmatrix} L & l \end{bmatrix}\begin{bmatrix} R & r \end{bmatrix}^T = LR^T + lr^T.$$

Moreover, the proposed framework for solving (BPDNη) is fully compatible with this strategy, since the underlying root finding is blind to the factorization representation. Changing k only affects iteration (2.2) through v(τ) and v′(τ).

2.6.3 Computational efficiency

One way of assessing the cost of LR-BPDN is to compare the computational cost per iteration of the factorization-constrained LASSO subproblems (2.14) with that of the nuclear norm constrained LASSO subproblems (2.13). We first consider the cost of computing the gradient direction. A gradient direction for the factor L in the factorized algorithm is given by

$$g_L = \mathcal{A}^*\left(\mathcal{A}(LR^T) - b\right)R,$$

with g_R taking a similar form. Compare this to a gradient direction for X,

$$g_X = \mathcal{A}^*\left(\mathcal{A}(X) - b\right).$$

First, we consider the cost incurred in working with the residual and decision variables. While both methods must compute the action of A∗ on a vector, the factorized formulation must modify the factors L, R (at a cost of O(k(n + m))) and re-form the matrix X = LRᵀ (at a cost of at most O(knm)) for every iteration and line search evaluation. Since A is a sampling matrix for the applications of interest, it is sufficient to form only the entries of X that are sampled by A, thus reducing the cost to O(kp), where p is the dimension of the measurement vector b. The sparser the sampling operator A, the greater the savings. Standard approaches update an explicit decision variable X, at a cost of O(nm), for every iteration and line search evaluation. If the fraction sampled is smaller than the chosen rank k, the factorized approach is actually cheaper than the standard method. It is also important to note that standard approaches have a memory footprint of O(mn), simply to store the decision variable. In contrast, the memory used by factorized approaches is dominated by the size of the observed data.

We now consider the difference in cost of the projection. The main benefit of the factorized formulation is that projection is done using the Frobenius norm formulation (2.14), and so the cost is O(k(n + m)) for every projection. In contrast, state-of-the-art implementations that compute full or partial SVDs in order to accomplish the projection (see e.g. Jain et al. [2010], Becker et al. [2011]) are dominated by the cost of this calculation, which is (in the case of a partial k-SVD) O(nmk), assuming without loss of generality that k ≤ min(m, n).

While the complexity of both standard and factorized iterations is dominated by the term O(mnk), in practice forming X = LRᵀ from two factors with k columns each is still cheaper than computing a k-partial SVD of X. This essentially explains why factorized methods are faster. While it is possible to obtain further speedups for standard methods using inexact SVD computations, the best reported improvement is a factor of two or three Lin and Wei [2010]. To test our approach against a similar approach that uses Lanczos to compute partial SVDs, we modified the projection used by the SPGL1 code to use this acceleration.
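To make the preceding cost comparison concrete, the following sketch (our illustrative Python, assuming A samples entries indexed by arrays I, J) evaluates the residual and the gradients g_L, g_R while forming only the p sampled entries of LRᵀ, together with the O(k(n + m)) projection by a single rescaling; constant factors in the gradient are dropped, as above.

```python
import numpy as np

def residual_and_gradients(L, R, b, I, J):
    """Residual and gradients for (2.14) at O(kp) cost, p = len(b)."""
    r = np.einsum('ik,ik->i', L[I], R[J]) - b   # (L R^T)[I, J] - b, sampled only
    gL = np.zeros_like(L)
    gR = np.zeros_like(R)
    np.add.at(gL, I, r[:, None] * R[J])         # gL = A*(r) R
    np.add.at(gR, J, r[:, None] * L[I])         # gR = A*(r)^T L
    return r, gL, gR

def project(L, R, tau):
    """Projection onto 0.5 * ||[L; R]||_F^2 <= tau: one global rescaling."""
    half_norm2 = 0.5 * (np.sum(L**2) + np.sum(R**2))
    c = 1.0 if half_norm2 <= tau else np.sqrt(tau / half_norm2)
    return c * L, c * R
```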
We compare against this accelerated code, as well as against TFOCS Becker et al. [2011], in section 2.9 (see Table 2.7).

Finally, both standard and factorized versions of the algorithm require computing the maximum singular value in order to compute v′(τ). The analysis in section 2.5 shows that if the chosen rank of the factors L and R is larger than or equal to the rank of the global minimizers of the nuclear norm LASSO subproblems, then any local minimizer of the factorized LASSO subproblem corresponds to a global minimizer of the convex nuclear norm LASSO formulation. Consequently, both formulations will have similar Pareto curve updates, since the derivatives are necessarily equal at any global minimum whenever ρ is strictly convex.¹

¹ It is shown in Aravkin et al. [2013a] that for any differentiable convex ρ, the dual problem for the residual r = b − Ax has a unique solution. Therefore, any global minimum for (LASSOτ) guarantees a unique residual when ρ is strictly convex, and the claim follows, since the derivative only depends on the residual.

2.7 Robust formulations

Robust statistics Huber [1981], Maronna et al. [2006] play a crucial role in many real-world applications, allowing good solutions to be obtained in spite of data contamination. In the linear and nonlinear regression setting, the least-squares problem

$$\min_X \ \|F(X) - b\|_2^2$$

corresponds to the maximum likelihood estimate of X for the statistical model

$$b = F(X) + \epsilon, \tag{2.15}$$

where ε is a vector of i.i.d. Gaussian variables. Robust statistical approaches relax the Gaussian assumption, allowing other (heavier-tailed) distributions to be used. Maximum likelihood estimates of X under these assumptions are more robust to data contamination. Heavy-tailed distributions, in particular the Student's t, yield formulations that are more robust to outliers than convex formulations Lange et al. [1989], Aravkin et al. [2012]. This corresponds to the notion of a re-descending influence function Maronna et al. [2006], which is simply the derivative of the negative log likelihood. The relationship between densities, penalties, and influence functions is shown in Figure 2.1. Assuming that ε has the Student's t density leads to the maximum likelihood estimation problem

$$\min_X \ \rho(F(X) - b) := \sum_i \log\left(\nu + (F(X)_i - b_i)^2\right), \tag{2.16}$$

where ν is the Student's t degrees of freedom.

A general version of (BPDNη) was proposed in Aravkin et al. [2013a], allowing different penalty functionals ρ. The root-finding procedure of Berg and Friedlander [2008] was extended in Aravkin et al. [2013a] to this more general context, and used for root finding for both convex and nonconvex ρ (e.g. as in (2.16)).

The (BPDNη) formulations for a general ρ do not arise directly from a maximum likelihood estimator of (2.15), because the penalty appears in the constraint. However, we can think of penalties ρ as agents who, given an error budget η, distribute it between elements of the residual. The strategy that each agent ρ will use to accomplish this task can be deduced from the tail features evident in Figure 2.1. Specifically, the cost of a large residual is prohibitively expensive for the least-squares penalty, since its cost is commensurate with that of a very large number of small residuals. For example, (10α)² = 100α², so a residual of size 10α is worth as much as 100 residuals of size α to the least-squares penalty. Therefore, a least-squares penalty will never assign a single residual a relatively large value, since this would quickly use up the entire error budget.
In contrast, |10α| = 10|α|, so a residual of size 10α is worth only 10 residuals of size α when the 1-norm penalty is used. This penalty is likely to grant a few relatively large errors to certain residuals, if this results in a better fit. For the penalty in (2.16), it is easy to see that the cost of a residual of size 10α can be worth fewer than 10 residuals of size α; the specific computations depend on ν and the actual size of α. A nonconvex penalty ρ, e.g. the one in (2.16), allows large residuals, as long as the majority of the remaining residuals are fit well.

From the discussion in the previous paragraph, it is clear that robust penalties are useful as constraints in (BPDNη), and can cleverly distribute the allotted error budget η, using it for outliers while fitting the good data.

Figure 2.1: Gaussian (black dashed line), Laplace (red dash-dotted line), and Student's t (blue solid line); densities (left plot), negative log likelihoods (center plot), and influence functions (right plot). The Student's t density has heavy tails, a non-convex log likelihood, and a re-descending influence function.

The LR-BPDN framework proposed in this paper captures the robust extension, allowing robust data interpolation in situations when some of the available data is heavily contaminated. To develop this extension, we follow Aravkin et al. [2013a] to define the generalized value function

$$v_\rho(\tau) = \min_X \ \rho(\mathcal{A}(X) - b) \quad \text{s.t.} \quad \|X\|_* \le \tau, \tag{2.17}$$

and find vρ(τ) = η using the iteration (2.2). As discussed in section 2.3, for any convex smooth penalty ρ,

$$v_\rho'(\tau) = -\|\mathcal{A}^* \nabla\rho(\bar{r})\|_2, \tag{2.18}$$

where ‖·‖₂ is the spectral norm, and r̄ = A(X̄) − b for the optimal solution X̄ that achieves vρ(τ). For smooth nonconvex ρ, e.g. (2.16), we still use (2.18) in iteration (2.2).

As with standard least squares, we use the factorized formulation to avoid SVDs. Note that Theorem 1 and Corollary 1 hold for any choice of penalty ρ. When evaluating the value function vρ(τ), we actually solve

$$\min_{L,R} \ \rho(\mathcal{A}(LR^T) - b) \quad \text{s.t.} \quad \frac{1}{2}\left\|\begin{bmatrix} L \\ R \end{bmatrix}\right\|_F^2 \le \tau. \tag{2.19}$$

For any smooth penalty ρ, including (2.16), a stationary point of this problem can be found using the projected gradient method.
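For concreteness, the Student's t penalty (2.16) and the gradient needed in (2.18)-(2.19) are one-liners. The sketch below is our illustration, assuming NumPy; it is not the authors' implementation.

```python
import numpy as np

def student_t_penalty(r, nu):
    """rho(r) = sum_i log(nu + r_i^2), the misfit in (2.16)."""
    return np.sum(np.log(nu + r**2))

def student_t_gradient(r, nu):
    """The influence function: re-descending, it tends to 0 as |r| grows,
    so outlying residuals stop pulling on the fit."""
    return 2.0 * r / (nu + r**2)
```

In the factorized robust solver, the Pareto slope (2.18) is then evaluated as the spectral norm of A∗ applied to this gradient at the computed residual.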
2.8 Reweighting

Every rank-k solution X̄ of (BPDNη) lives in a lower-dimensional subspace of R^{n×m} spanned by the n × k row and m × k column basis vectors corresponding to the nonzero singular values of X̄. In certain situations, it is possible to estimate the row and column subspaces of the matrix X either from prior subspace information or by solving an initial (BPDNη) problem.

In the vector case, it was shown that prior information on the support (nonzero entries) can be incorporated into the ℓ1-recovery algorithm by solving a weighted-ℓ1 minimization problem. In this case, the weights are applied such that solutions with large nonzero entries on the support estimate have a lower cost (weighted ℓ1 norm) than solutions with large nonzeros outside of the support estimate Friedlander et al. [2011].

In the matrix case, the support estimate is replaced by estimates of the row and column subspace bases U₀ ∈ R^{n×k} and V₀ ∈ R^{m×k} of the largest k singular values of X. Let the matrices Ũ ∈ R^{n×k} and Ṽ ∈ R^{m×k} be estimates of U₀ and V₀, respectively.

The weighted nuclear norm minimization problem can be formulated as follows:

$$\min_X \ \|QXW\|_* \quad \text{s.t.} \quad \rho(\mathcal{A}(X) - b) \le \eta, \tag{wBPDN$_\eta$}$$

where Q = ωŨŨᵀ + Ũ⊥Ũ⊥ᵀ, W = ωṼṼᵀ + Ṽ⊥Ṽ⊥ᵀ, and ω is some constant between zero and one. Here, we use the notation Ũ⊥ ∈ R^{n×(n−k)} to refer to the orthogonal complement of Ũ in R^{n×n}, and similarly for Ṽ⊥ in R^{m×m}. The matrices Q and W are weighted projection matrices onto the subspaces spanned by Ũ and Ṽ and their orthogonal complements. Therefore, minimizing ‖QXW‖∗ penalizes solutions that live in the orthogonal complement spaces more when ω < 1.

Note that the matrices Q and W are invertible, and hence the reweighted LASSO problem still fits into the class of problems characterized by Theorem 1. Specifically, we can write any objective f(X) subject to a reweighted nuclear norm constraint as

$$\min_{Z \succeq 0} \ f(Q^{-1}\mathcal{R}(Z)W^{-1}) \quad \text{s.t.} \quad \operatorname{Tr}(Z) \le \tau, \tag{2.20}$$

where, as in Corollary 1, R(Z) extracts the upper n × m block of Z (see (2.10)). A factorization similar to (2.14) can then be formulated for the (wBPDNη) problem in order to optimize over the lower-dimensional factors L ∈ R^{n×k} and R ∈ R^{m×k}. In particular, we can solve a sequence of (LASSOτ) problems

$$\min_{L,R} \ \|\mathcal{A}(LR^T) - b\|_2^2 \quad \text{s.t.} \quad \frac{1}{2}\left\|\begin{bmatrix} QL \\ WR \end{bmatrix}\right\|_F^2 \le \tau, \tag{2.21}$$

where Q and W are as defined above. Problem (2.21) can also be solved using the spectral projected gradient algorithm. However, unlike in the non-weighted formulation, the projection in this case is nontrivial. Fortunately, the structure of the problem allows us to find an efficient formulation for the projection operator.

2.8.1 Projection onto the weighted Frobenius norm ball

The projection of a point (L, R) onto the weighted Frobenius norm ball ½(‖QL‖²_F + ‖WR‖²_F) ≤ τ is achieved by finding the point (L̃, R̃) that solves

$$\min_{\hat{L},\hat{R}} \ \frac{1}{2}\left\|\begin{bmatrix} \hat{L} - L \\ \hat{R} - R \end{bmatrix}\right\|_F^2 \quad \text{s.t.} \quad \frac{1}{2}\left\|\begin{bmatrix} Q\hat{L} \\ W\hat{R} \end{bmatrix}\right\|_F^2 \le \tau.$$

The solution to the above problem is given by

$$\tilde{L} = \left((\mu\omega^2 + 1)^{-1}\tilde{U}\tilde{U}^T + (\mu + 1)^{-1}\tilde{U}^\perp\tilde{U}^{\perp T}\right)L,$$
$$\tilde{R} = \left((\mu\omega^2 + 1)^{-1}\tilde{V}\tilde{V}^T + (\mu + 1)^{-1}\tilde{V}^\perp\tilde{V}^{\perp T}\right)R,$$

where µ is the Lagrange multiplier that solves f(µ) = τ when the constraint is active, with f(µ) given by

$$f(\mu) = \frac{1}{2}\operatorname{Tr}\left[\left(\frac{\omega^2}{(\mu\omega^2+1)^2}\tilde{U}\tilde{U}^T + \frac{1}{(\mu+1)^2}\tilde{U}^\perp\tilde{U}^{\perp T}\right)LL^T + \left(\frac{\omega^2}{(\mu\omega^2+1)^2}\tilde{V}\tilde{V}^T + \frac{1}{(\mu+1)^2}\tilde{V}^\perp\tilde{V}^{\perp T}\right)RR^T\right]. \tag{2.22}$$

The optimal µ that solves equation (2.22) can be found using the Newton iteration

$$\mu^{(t)} = \mu^{(t-1)} - \frac{f(\mu^{(t-1)}) - \tau}{\nabla f(\mu^{(t-1)})},$$

where ∇f(µ) is given by

$$\nabla f(\mu) = \frac{1}{2}\operatorname{Tr}\left[\left(\frac{-2\omega^4}{(\mu\omega^2+1)^3}\tilde{U}\tilde{U}^T + \frac{-2}{(\mu+1)^3}\tilde{U}^\perp\tilde{U}^{\perp T}\right)LL^T + \left(\frac{-2\omega^4}{(\mu\omega^2+1)^3}\tilde{V}\tilde{V}^T + \frac{-2}{(\mu+1)^3}\tilde{V}^\perp\tilde{V}^{\perp T}\right)RR^T\right].$$
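Since Q and W act only by scaling the components inside and outside the estimated subspaces, f(µ) in (2.22) reduces to a scalar function of the energies ‖ŨᵀL‖²_F + ‖ṼᵀR‖²_F and their complements, and the Newton iteration becomes a few lines. The following sketch is our reading of section 2.8.1 (NumPy assumed; names hypothetical).

```python
import numpy as np

def weighted_projection_mu(L, R, Ut, Vt, omega, tau, iters=20):
    """Newton iteration for the multiplier mu in the projection of 2.8.1.
    Ut, Vt: orthonormal bases of the row/column subspace estimates."""
    a = np.sum((Ut.T @ L)**2) + np.sum((Vt.T @ R)**2)  # energy in the subspaces
    c = np.sum(L**2) + np.sum(R**2) - a                # energy in the complements

    def f(mu):   # the weighted norm (2.22) of the projected point
        return 0.5 * (omega**2 * a / (mu * omega**2 + 1)**2 + c / (mu + 1)**2)

    def df(mu):  # its derivative in mu
        return -(omega**4 * a / (mu * omega**2 + 1)**3 + c / (mu + 1)**3)

    mu = 0.0
    if f(mu) <= tau:      # point already feasible: no shrinkage needed
        return mu
    for _ in range(iters):
        mu -= (f(mu) - tau) / df(mu)
    return mu
```

Because f is convex and decreasing in µ ≥ 0, the Newton iterates increase monotonically to the root, so a small fixed iteration budget suffices in practice.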
2.8.2 Traversing the Pareto curve

The design of an effective optimization method that solves (wBPDNη) requires 1) evaluating problem (2.21), and 2) computing the dual of the weighted nuclear norm ‖QXW‖∗.

We first define a gauge function κ(x) as a convex, nonnegative, positively homogeneous function such that κ(0) = 0. This class of functions includes norms, and therefore includes the formulations described in (wBPDNη) and (2.21). Recall from section 2.3 that taking a Newton step along the Pareto curve of (wBPDNη) requires the computation of the derivative of v(τ) as in (2.3). Therefore, we also define the polar (or dual) of κ as

$$\kappa^o(x) = \sup_w \ \{ w^T x \mid \kappa(w) \le 1 \}. \tag{2.23}$$

Note that if κ is a norm, the polar reduces to the dual norm.

To compute the dual of the weighted nuclear norm, we follow Theorem 5.1 of Berg and Friedlander [2011], which gives the polar (or dual) representation of a weighted gauge function κ(Φx) as κᵒ(Φ⁻¹x), where Φ is an invertible linear operator. The weighted nuclear norm ‖QXW‖∗ is in fact a gauge function with invertible linear weighting matrices Q and W. Therefore, the dual norm is given by

$$\left(\|Q(\cdot)W\|_*\right)^d(Z) := \|Q^{-1}ZW^{-1}\|_\infty.$$

2.9 Numerical experiments

We test the performance of LR-BPDN on two example applications. In section 2.9.1, we consider the Netflix Prize problem, which is often solved using rank minimization Funk [2006], Gross [2011], Recht and Ré [2011]. Using the MovieLens 1M, 10M, and Netflix 100M datasets, we compare and discuss the advantages of different formulations, compare our solver against the state-of-the-art convex (BPDNη) solver SPGℓ1, and report timing results. We show that the proposed algorithm is orders of magnitude faster than the best convex (BPDNη) solver.

In section 2.9.2, we apply the proposed methods and extensions to seismic trace interpolation, a key application in exploration geophysics Sacchi et al. [1998], where rank regularization approaches have recently been used successfully Oropeza and Sacchi [2011]. In section 2.9.2, we include an additional comparison of LR-BPDN with classic SPGℓ1 as well as with TFOCS Becker et al. [2011] for small matrix completion and seismic data interpolation problems. Then, using real data collected from the Gulf of Suez, we show results for robust completion in section 2.9.2, and present results for the weighted extension in section 2.9.2.

2.9.1 Collaborative filtering

We tested the performance of our algorithm on completing missing entries in the MovieLens (1M), (10M), and Netflix (100M) datasets, which contain anonymous ratings of movies made by users. The ratings are on an integer scale from 1 to 5. The ratings matrix is not complete, and the goal is to infer the values in the unseen test set. In order to test our algorithm, we further subsampled the available ratings by randomly removing 50% of the known entries. We then solved the (BPDNη) formulation to complete the matrix, and compared the predicted (P) and actual (A) removed entries in order to assess algorithm performance. We report the signal-to-noise ratio (SNR) and root mean square error (RMSE),

$$\text{SNR} = 20\log\left(\frac{\|A\|_F}{\|P - A\|_F}\right), \qquad \text{RMSE} = \frac{\|P - A\|_F}{\sqrt{\|A\|_0}},$$

for different values of η in the (BPDNη) formulation.
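These two quality measures can be computed directly from the held-out entries. A minimal sketch follows (our illustration, NumPy assumed), with the RMSE normalized by the square root of the number of held-out entries, consistent with the definition above.

```python
import numpy as np

def snr_db(P, A):
    """SNR in dB between predicted (P) and actual (A) held-out entries."""
    return 20.0 * np.log10(np.linalg.norm(A) / np.linalg.norm(P - A))

def rmse(P, A):
    """Root mean square error over the held-out entries."""
    return np.linalg.norm(P - A) / np.sqrt(A.size)
```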
Since our algorithm requires pre-defining the rank of the factors L and R, we perform the recovery with ranks k ∈ {10, 20, 30, 50}. Table 2.2 shows the reconstruction SNR for each of the ranks k and for relative errors η ∈ {0.5, 0.3, 0.2} (the data mismatch is reduced to a fraction η of the initial error). The last row of Table 2.2 shows the recovery for an unconstrained low-rank formulation, using the work and software of Vandereycken [2013]. This serves as an interesting baseline, since the rank k of the Riemannian manifold in the unconstrained formulation functions as a regularizer. It is clear that for small k, we get good results without additional functional regularization; however, as k increases, the quality of the rank-k solution decays without further constraints. In contrast, we get better results as the rank increases, because we consider a larger model space, but solve the BPDN formulation each time. This observation demonstrates the importance of the nuclear norm regularization, especially when the underlying rank of the problem is unknown.

Table 2.3 shows the timing (in seconds) used by all methods to obtain solutions. Several conclusions can be readily drawn. First, for error-level constrained problems, a tighter error bound requires a higher computational investment by our algorithm, which is consistent with the original behavior of SPGℓ1 Berg and Friedlander [2008]. Second, the unconstrained problem is easier to solve (using the Riemannian manifold approach of Vandereycken [2013]) than a constrained problem of the same rank; however, it is interesting to note that as the rank of the representation increases, the unconstrained Riemannian approach becomes more expensive than the constrained problem for the levels η considered, most likely due to the second-order methods used by the particular implementation of Vandereycken [2013].

Table 2.4 shows the value of ‖X‖∗ of the reconstructed signal corresponding to the settings in Table 2.2. While the interpretation of the η values is straightforward (they are fractions of the initial data error), it is much more difficult to predict ahead of time which value of τ one may want to use when solving (LASSOτ). This illustrates the modeling advantage of the (BPDNη) formulation: it requires only the simple parameter η, which is an estimate of the (relative) noise floor. Once η is provided, the algorithm (not the user) will instantiate (LASSOτ) formulations, and find the right value of τ that satisfies v(τ) = η. When no estimate of η is available, our algorithm can still be applied to the problem, with η = 0 and a fixed computational budget (see Table 2.5).

Table 2.5 shows a comparison between classic SPGℓ1, accelerated with a Lanczos-based truncated SVD projector, and the new solver on the MovieLens (10M) dataset, for a fixed budget of 100 iterations. Where the classic solver takes over six hours, the proposed method finishes in less than a minute. For a problem of this size, explicit manipulation of X as a full matrix of size 10K by 20K is computationally prohibitive. Table 2.6 gives timing and reconstruction quality results for the Netflix (100M) dataset, where the full matrix is 18K by 500K when fully constructed.

Table 2.2: Summary of the recovery results on the MovieLens (1M) data set for factor rank k and relative error level η for (BPDNη). SNR in dB (higher is better) is listed in the left table, and RMSE (lower is better) in the right table. The last row in each table gives recovery results for the non-regularized data-fitting factorized formulation solved with Riemannian optimization (ROPT). Quality degrades with k due to overfitting for the non-regularized formulation, and improves with k when regularization is used.

SNR (dB)
         k=10    k=20    k=30    k=50
η=0.5    5.93    5.93    5.93    5.93
η=0.3   10.27   10.27   10.26   10.27
η=0.2   12.50   12.54   12.56   12.56
ROPT    11.16    8.38    6.01    2.6

RMSE
         k=10    k=20    k=30    k=50
η=0.5    1.89    1.89    1.89    1.89
η=0.3    1.14    1.14    1.15    1.14
η=0.2    0.88    0.88    0.88    0.88
ROPT     1.03    1.42    1.87    2.77

Table 2.3: Summary of the computational timing (in seconds) on the MovieLens (1M) data set for factor rank k and relative error level η for (BPDNη). The last row gives computational timing for the non-regularized data-fitting factorized formulation solved with Riemannian optimization.

         k=10    k=20    k=30    k=50
η=0.5     5.0     5.7     6.8     8.2
η=0.3    11.1    18.8    12.3    21.0
η=0.2    58.7    84.2    95.0   114.2
ROPT     14.9    43.5    98.4   327.3

2.9.2 Seismic missing-trace interpolation

In exploration seismology, large-scale data sets (approaching the order of petabytes for the latest land and wide-azimuth marine acquisitions) must be acquired and processed in order to determine the structure of the subsurface. In many situations, only a subset of the complete data is acquired due to physical and/or budgetary constraints. Recent insights from the field of compressed sensing allow for deliberate subsampling of seismic wavefields in order to improve reconstruction quality and reduce acquisition costs Herrmann and Hennenfent [2008a].
The acquired subset of the complete data is often chosen by randomly subsampling a dense, regular, periodic source or receiver grid. Interpolation algorithms are then used to reconstruct the dense regular grid in order to perform additional processing on the data, such as removal of artifacts, improvement of spatial resolution, and key analysis, such as imaging.

In this section, we apply the new rank-minimization approach, along with the weighted and robust extensions, to the trace-interpolation problem for two different seismic acquisition examples. We first describe the structure of the datasets, and then present the transform we use to cast the interpolation as a rank-minimization problem.

The first example is a real data example from the Gulf of Suez. Seismic data are organized into seismic lines, where Nr receivers and Ns sources are collocated in a straight line. Sources are deployed sequentially, and receivers collect each shot record² for a period of Nt time samples. The Gulf of Suez data contains Ns = 354 sources, Nr = 354 receivers, and Nt = 1024 with a sampling interval of 0.004 s, leading to a shot duration of 4 s and a maximum temporal frequency of 125 Hz. Most of the energy of the seismic line is preserved when we restrict the spectrum to the 12-60 Hz frequency band. Figs. 2.2(a) and (b) illustrate the 12 Hz and 60 Hz frequency slices in the source-receiver domain, respectively. Columns in these frequency slices represent the monochromatic response of the earth to a fixed source as a function of the receiver coordinate. In order to simulate missing traces, we apply a subsampling mask that randomly removes 50% of the sources, resulting in the subsampled frequency slices illustrated in Figs. 2.2(c) and (d).

² Data collection is performed for several sources, taken with increasing or decreasing distance between sources and receivers.

Table 2.4: Nuclear norms of the solutions X = LRᵀ for the results in Table 2.2, corresponding to the τ values in (LASSOτ). These values are found automatically via root finding, but are difficult to guess ahead of time.

         k=5       k=10      k=30      k=50
η=0.5    5.19e3    5.2e3     5.2e3     5.2e3
η=0.3    9.75e3    9.73e3    9.76e3    9.74e3
η=0.2    1.96e4    1.96e4    1.93e4    1.93e4

Table 2.5: Classic SPGL1 (using Lanczos-based truncated SVD) versus LR factorization on the MovieLens (10M) data set (10000 × 20000 matrix); results for a fixed iteration budget (100 iterations) and 50% subsampling of the MovieLens data. SNR, RMSE, and computational time are shown for k = 5, 10, 20.

MovieLens (10M)
                     k=5      k=10      k=20
SPGℓ1  SNR (dB)     11.32     11.37     11.37
       RMSE          1.02      1.01      1.01
       time (sec)   22680     93744    121392
LR     SNR (dB)     11.87     11.77     11.72
       RMSE          0.95      0.94      0.94
       time (sec)    54.3      48.2      47.5

Table 2.6: LR method on the Netflix (100M) data set (17770 × 480189 matrix); results for 50% subsampling of the Netflix data. SNR, computational time, and RMSE are shown for factor rank k and relative error level η for (BPDNη).

Netflix (100M)
                     k=2      k=4      k=6
η=0.5  SNR (dB)      7.37     7.03     7.0
       RMSE          1.60     1.67     1.68
       time (sec)   236.5    333.0    335.0
η=0.4  SNR (dB)      8.02     7.96     7.93
       RMSE          1.49     1.50     1.50
       time (sec)   315.2    388.6    425.0
η=0.3  SNR (dB)     10.36    10.32    10.35
       RMSE          1.14     1.14     1.14
       time (sec)  1093.2    853.7    699.7

State-of-the-art trace-interpolation schemes transform the data into sparsifying domains, for example using the Fourier Sacchi et al. [1998] and curvelet Herrmann and Hennenfent [2008a] transforms. The underlying sparse structure of the data is then exploited to recover the missing traces.
The approach proposed in this paper allows us to instead exploit the low-rank matrix structure of seismic data, and to design formulations that can achieve trace interpolation using matrix-completion strategies.

The main challenge in applying rank minimization to seismic trace interpolation is to find a transform domain that satisfies the following two properties:

1. Fully sampled seismic lines have low-rank structure (quickly decaying singular values);
2. Subsampled seismic lines have high rank (slowly decaying singular values).

When these two properties hold, rank-penalization formulations allow the recovery of missing traces. To achieve these aims, we use the transformation from the source-receiver (s-r) domain to the midpoint-offset (m-h) domain. The conversion from the (s-r) domain to the (m-h) domain is a coordinate transformation, with the midpoint defined by m = ½(s + r) and the half-offset defined by h = ½(s − r).³ This transformation is illustrated by transforming the 12 Hz and 60 Hz source-receiver domain frequency slices in Figs. 2.2(a) and (b) to the midpoint-offset domain frequency slices in Figs. 2.2(e) and (f). The corresponding subsampled frequency slices in the midpoint-offset domain are shown in Figs. 2.2(g) and (h).

³ In mathematical terms, the transformation from the (s-r) domain to the (m-h) domain represents a tight frame.

Figure 2.2: Frequency slices of a seismic line from the Gulf of Suez with 354 shots and 354 receivers. Full data for (a) low frequency at 12 Hz and (b) high frequency at 60 Hz in the s-r domain. 50% subsampled data for (c) low frequency at 12 Hz and (d) high frequency at 60 Hz in the s-r domain. Full data for (e) low frequency at 12 Hz and (f) high frequency at 60 Hz in the m-h domain. 50% subsampled data for (g) low frequency at 12 Hz and (h) high frequency at 60 Hz in the m-h domain.

To show that the midpoint-offset transformation achieves aims 1 and 2 above, we plot the decay of the singular values of both the 12 Hz and 60 Hz frequency slices in the source-receiver domain and in the midpoint-offset domain in Figs. 2.3(a) and (c). Notice that the singular values of both frequency slices decay faster in the midpoint-offset domain, and that the singular value decay is slower for subsampled data, Figs. 2.3(b) and (d).

Let X denote the data matrix in the midpoint-offset domain and let R be the subsampling operator that maps Figs. 2.2(e) and (f) to Figs. 2.2(g) and (h). Denote by S the transformation operator from the source-receiver domain to the midpoint-offset domain. The resulting measurement operator in the midpoint-offset domain is then given by A = RSᴴ.

We formulate and solve the matrix completion problem (BPDNη) to recover a seismic line from the Gulf of Suez in the (m-h) domain. We first performed the interpolation for the frequency slices at 12 Hz and 60 Hz to get a good approximation of the lower and upper limits of the rank value. Then, we work with all the monochromatic frequency slices and adjust the rank within these limits while going from low- to high-frequency slices. We use 300 iterations of LR for all frequency slices.
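For readers wishing to reproduce the (s-r) to (m-h) conversion, a minimal sketch follows (our illustration, NumPy assumed): on an integer n × n source-receiver grid, 2m = s + r and 2h = s − r are themselves integers, so the slice scatters into a (2n − 1) × (2n − 1) midpoint-offset panel.

```python
import numpy as np

def sr_to_mh(F):
    """Scatter an n x n source-receiver frequency slice into midpoint-offset."""
    n = F.shape[0]
    MH = np.zeros((2 * n - 1, 2 * n - 1), dtype=F.dtype)
    s, r = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    MH[s + r, s - r + (n - 1)] = F   # rows: 2m = s + r; cols: 2h = s - r, shifted
    return MH
```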
Figures 2.4(a) and (b) show the recovery and error plots for the low frequency slice at 12 Hz, respectively. Figures 2.4(c) and (d) show the recovery and error plots for the high frequency slice at 60 Hz, respectively. Figure 2.5 shows a common-shot gather section after missing-trace interpolation from the Gulf of Suez data set. We can clearly see that we are able to recapture most of the missing traces in the data (Figure 2.5c), as is also evident from the residual plot (Figure 2.5d).

Figure 2.3: Singular value decay of the fully sampled (a) low frequency slice at 12 Hz and (c) high frequency slice at 60 Hz in the (s-r) and (m-h) domains. Singular value decay of the 50% subsampled (b) low frequency slice at 12 Hz and (d) high frequency data at 60 Hz in the (s-r) and (m-h) domains. Notice that for both high and low frequencies, the decay of the singular values is faster in the fully sampled (m-h) domain than in the fully sampled (s-r) domain, and that subsampling does not significantly change the decay of the singular values in the (s-r) domain, while it destroys the fast decay of the singular values in the (m-h) domain.

In the second acquisition example, we apply the proposed formulation to a 5D synthetic seismic data set (two source dimensions, two receiver dimensions, one temporal dimension) provided by the BG Group. We extract a frequency slice at 12.3 Hz to perform the missing-trace interpolation, where the size of the to-be-recovered matrix is 400 × 400 receivers spaced by 25 m and 68 × 68 sources spaced by 150 m. Due to the low spatial frequency content of the data at 12.3 Hz, we further subsample the data in the receiver coordinates by a factor of two to speed up the computation. We apply subsampling masks that randomly remove 75% and 50% of the shots. In the 4D case, we have two choices of matricization Silva and Herrmann [2013c], Demanet [2006a], as shown in Figures 2.6(a,b): we can either place the (Receiver x, Receiver y) dimensions in the rows and the (Source x, Source y) dimensions in the columns, or the (Receiver y, Source y) dimensions in the rows and the (Receiver x, Source x) dimensions in the columns. We observed faster singular value decay for the second of these strategies, as shown in Figures 2.7(a,b). We therefore selected the transform domain to be the permutation of source and receiver coordinates, where the matricization of each 4D monochromatic frequency slice is done using the (Source x, Receiver x) and (Source y, Receiver y) coordinates. We use rank 200 for the interpolation, and run the solver for a maximum of 1000 iterations. The results after interpolation are shown in Figures 2.8 and 2.9 for 75% and 50% missing data, respectively. We can see that when 75% of the data is missing, we start losing coherent energy (Figure 2.8c). With 50% missing data, we capture most of the coherent energy (Figure 2.9c). We also obtain higher SNR values for the recovery in the case of 50% compared to 75% missing data.
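The two matricizations compared above amount to a reshape and a transpose-then-reshape of the 4D slice. A sketch follows (our illustration, NumPy assumed, with axes ordered (src x, src y, rec x, rec y)).

```python
import numpy as np

def matricize_src_rec(F):
    """(src x, src y) along the rows, (rec x, rec y) along the columns."""
    nsx, nsy, nrx, nry = F.shape
    return F.reshape(nsx * nsy, nrx * nry)

def matricize_permuted(F):
    """(src x, rec x) along the rows, (src y, rec y) along the columns."""
    nsx, nsy, nrx, nry = F.shape
    return F.transpose(0, 2, 1, 3).reshape(nsx * nrx, nsy * nry)
```

Comparing np.linalg.svd(..., compute_uv=False) of the two unfoldings of a fully sampled slice reproduces the faster singular value decay seen in Figure 2.7 for the permuted grouping.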
To illustrate the importance of the nuclear norm regularization, we solved the interpolation problem using a simple least-squares formulation on the same seismic data set from the Gulf of Suez. The least-squares problem was solved using the L, R factorization structure, thereby implicitly enforcing a rank on the recovered estimate (i.e., formulation (2.14) was optimized without the τ-constraint). The problem was then solved with the factors L and R having rank k ∈ {5, 10, 20, 30, 40, 50, 80, 100}. The reconstruction SNRs comparing the recovery for the regularized and non-regularized formulations are shown in Fig. 2.10. The figure shows that the performance of the non-regularized approach decays with rank, due to overfitting. The regularized approach, in contrast, obtains better recovery as the factor rank increases.

Comparison with the classical nuclear-norm formulation

To illustrate the advantage of the proposed matrix-factorization formulation (which we refer to as LR below) over the classical nuclear-norm formulation, we compare the reconstruction error and computation time with existing techniques. The most natural baseline is the SPGℓ1 algorithm Berg and Friedlander [2011] applied to the classic nuclear norm (BPDNη) formulation, where the decision variable is X, the penalty function ρ is the 2-norm, and the projection is done using the SVD. This example tests the classic (BPDNη) formulation against the LR extension proposed in this paper. The second comparison is with TFOCS Becker et al. [2011], which is a library of first-order methods for a variety of problems, with explicit examples written by the authors for the (BPDNη) formulation. The TFOCS approach to (BPDNη) relies on a proximal function for the nuclear norm, which, similar to projection, requires computing SVDs or partial SVDs.

The comparisons are done using three different data sets. In the first example, we interpolated the missing traces of a monochromatic slice (of size 354 × 354) extracted from the Gulf of Suez data set. We subsampled the frequency slice by randomly removing 50% of the shots and performed the missing-trace interpolation in the midpoint-offset (m-h) domain. We compare the SNR, computation time, and iterations for a fixed set of η values. The rank of the factors was set to 28. The seismic example in Table 2.7 shows the results. Both the classic SPGℓ1 algorithm and LR are faster than TFOCS. In quality of recovery, both SPGℓ1 and LR have better SNR than TFOCS. In this case LR is faster than SPGℓ1 by a factor of 15 (see Table 2.7).

In the second example, we generated a rank-10 matrix of size 100 × 100. We subsampled the matrix by randomly removing 50% of the data entries. The synthetic low-rank example in Table 2.7 shows the comparison of SNR and computational time. The rank of the factors was set to the true rank of the original data matrix for this experiment. The LR formulation proposed in this paper is faster than classic SPGℓ1, and both are faster than TFOCS. As the error threshold tightens, TFOCS requires a large number of iterations to converge. For a small problem size, LR and the classic SPGℓ1 perform comparably. When operating on decision variables with the correct rank, LR gave uniformly better SNR results than classic SPGℓ1 and TFOCS, and the improvement was significant for lower error thresholds.⁴ In reality, we do not know the rank value in advance. To make a fair comparison, we used the MovieLens (10M) dataset, where we subsampled the available ratings by randomly removing 50% of the known entries. In this example, we fixed the number of iterations to 100 and compared the SNR and computational time (Table 2.5) for multiple ranks, k = 5, 10, 20.

⁴ We tested this hypothesis by re-running the experiment with a higher factor rank. For example, selecting the factor rank to be 40 gives SNRs of 16.5, 36.7, 42.4, and 75.3 for the corresponding η values in the synthetic low-rank experiment in Table 2.7.
It is evident that we get better SNR with LR, and the computational speed of LR is significantly faster than that of the classic SPGℓ1.

Table 2.7: TFOCS versus classic SPGℓ1 (using direct SVDs) versus LR factorization. The synthetic low-rank example shows results for completing a rank-10, 100 × 100 matrix with 50% missing entries; SNR, computational time, and iterations are shown for η = 0.1, 0.01, 0.005, 0.0001, with the rank of the factors taken to be 10. The seismic example shows results for matrix completion of a low-frequency slice at 10 Hz, extracted from the Gulf of Suez data set, with 50% missing entries; SNR, computational time, and iterations are shown for η = 0.2, 0.1, 0.09, 0.08, with the rank of the factors taken to be 28.

Synthetic low rank
η                  0.1      0.01     0.005    0.0001
TFOCS  SNR (dB)    17.2     36.3     56.2     76.2
       time (s)    24       179      963      2499
       iteration   1151     8751     46701    121901
SPGℓ1  SNR (dB)    14.5     36.4     39.2     76.2
       time (s)    4.9      17.0     17.2     61.1
       iteration   12       46       47       152
LR     SNR (dB)    16.5     36.7     42.7     76.2
       time (s)    0.6      0.5      0.58     0.9
       iteration   27       64       73       119

Seismic
η                  0.2      0.1      0.09     0.08
TFOCS  SNR (dB)    13.05    17.4     17.9     18.5
       time (s)    593      3232     4295     6140
       iteration   1201     3395     3901     4451
SPGℓ1  SNR (dB)    12.8     17.0     17.4     17.9
       time (s)    30.4     42.8     32.9     58.8
       iteration   37       52       40       73
LR     SNR (dB)    13.1     17.1     17.4     18.0
       time (s)    1.6      2.9      3.2      4.0
       iteration   38       80       87       113

Simultaneous missing-trace interpolation and denoising

To illustrate the utility of robust cost functions, we consider a situation where the observed data are heavily contaminated. The goal here is to simultaneously denoise and interpolate the data. We work with the same seismic line from the Gulf of Suez. To obtain the observed data, we apply a subsampling mask that randomly removes 50% of the shots, and to simulate contamination, we replace another 10% of the shots with large random errors, whose amplitudes are three times the maximum amplitude present in the data. In reality, we know the subsampling mask, but we do not know the behaviour and amplitude of the noise. In this example, we formulate and solve the robust matrix completion problem (BPDNη), where the cost ρ is taken to be the penalty (2.16); see section 2.7 for the explanation and motivation. As in the previous examples, the recovery is done in the (m-h) domain. We implement the formulation in the frequency domain, where we work with monochromatic frequency slices, and adjust the rank and the ν parameter while going from low to high frequency slices. Figure 2.11 compares the recovery results with and without a robust penalty function. The error budget plays a significant role in this example, and we standardized the problems by setting the relative error to be 20% of the initial error, so that the formulations are comparable.

We can clearly see that the standard least-squares formulation is unable to recover the true solution. The intuitive reason is that the least-squares penalty is simply unable to budget large errors to what should be the outlying residuals. The Student's t penalty, in contrast, achieves a good recovery in this extreme situation, with an SNR of 17.9 dB. In this example, we used 300 iterations of SPGℓ1 for all frequency slices.
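The contaminated observations in this experiment can be simulated along the following lines (our illustrative sketch, NumPy assumed; D holds a shot panel with shots along the columns).

```python
import numpy as np

def contaminate(D, rng, p_missing=0.5, p_noise=0.1):
    """Zero out p_missing of the shots; corrupt a further p_noise of them
    with random errors of amplitude 3x the maximum amplitude in the data."""
    nt, ns = D.shape
    shots = rng.permutation(ns)
    n_miss, n_bad = int(p_missing * ns), int(p_noise * ns)
    B = D.copy()
    B[:, shots[:n_miss]] = 0.0                  # missing shots
    bad = shots[n_miss:n_miss + n_bad]
    B[:, bad] = 3.0 * np.abs(D).max() * rng.standard_normal((nt, n_bad))
    return B, shots[:n_miss], bad
```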
Re-weighting

Re-weighting for seismic trace interpolation was recently used in Mansour et al. [2012a] to improve the interpolation of subsampled seismic traces in the context of sparsity promotion in the curvelet domain. The weighted ℓ1 formulation takes advantage of curvelet support overlap across adjacent frequency slices.

Analogously, in the matrix setting, we use the weighted rank-minimization formulation (wBPDNη) to take advantage of correlated row and column subspaces for adjacent frequency slices. We first demonstrate the effectiveness of solving the (wBPDNη) problem when we have accurate subspace information. For this purpose, we compute the row and column subspace bases of the fully sampled low-frequency (11 Hz) seismic slice and pass this information to (wBPDNη) using the matrices Q and W. Figures 2.12(a) and (b) show the residual of the frequency slice with and without weighting. The reconstruction using the (wBPDNη) problem achieves a 1.5 dB improvement in SNR over the non-weighted (BPDNη) formulation.

Next, we apply the (wBPDNη) formulation in a practical setting where we do not know the subspace bases ahead of time, but learn them as we proceed from low to high frequencies. We use the row and column subspace vectors recovered using (BPDNη) for the 10.75 Hz and 15.75 Hz frequency slices as subspace estimates for the adjacent higher frequency slices at 11 Hz and 16 Hz. Using the (wBPDNη) formulation in this way yields SNR improvements of 0.6 dB and 1 dB, respectively, over (BPDNη) alone. Figures 2.13(a) and (b) show the residual for the next higher frequency without using the support, and Figures 2.13(c) and (d) show the residual for the next higher frequency with support from the previous frequency. Figure 2.14 shows the recovery SNR versus frequency for the weighted and non-weighted cases for a range of frequencies from 9 Hz to 17 Hz.

Figure 2.4: Recovery results for 50% subsampled 2D frequency slices using the nuclear norm formulation. (a) Interpolation and (b) residual of the low frequency slice at 12 Hz with SNR = 19.1 dB. (c) Interpolation and (d) residual of the high frequency slice at 60 Hz with SNR = 15.2 dB.

Figure 2.5: Missing-trace interpolation of a seismic line from the Gulf of Suez. (a) Ground truth. (b) 50% subsampled common shot gather. (c) Recovery result with an SNR of 18.5 dB. (d) Residual.
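The weights used throughout this section are built directly from the subspace estimates. A sketch of the construction Q = ωŨŨᵀ + (I − ŨŨᵀ) follows (our illustration, NumPy assumed; Ũ has orthonormal columns, and W is built identically from Ṽ).

```python
import numpy as np

def weight_matrix(Ut, omega):
    """Q = omega * U U^T + (I - U U^T) for an orthonormal-column U."""
    P = Ut @ Ut.T                       # projector onto the subspace estimate
    return omega * P + np.eye(Ut.shape[0]) - P
```

With ω < 1, ‖QXW‖∗ is cheaper for matrices aligned with the estimated row and column subspaces, which is exactly the prior that the reweighted formulation encodes.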
2.10 Conclusions

We have presented a new method for matrix completion. Our method combines the Pareto curve approach for optimizing (BPDNη) formulations with SVD-free matrix factorization methods. We demonstrated the modeling advantages of the (BPDNη) formulation on the Netflix Prize problem, and obtained high-quality reconstruction results for the seismic trace interpolation problem. Comparisons with state-of-the-art methods for the (BPDNη) formulation showed that the factorized formulation is faster than both the TFOCS and classic SPGℓ1 formulations that rely on the SVD. The presented factorized approach also has a small memory imprint and does not rely on SVDs, which makes this method applicable to truly large-scale problems.

We also proposed two extensions. First, using robust penalties ρ in (BPDNη), we showed that simultaneous interpolation and denoising can be achieved in the extreme data contamination case, where 10% of the data was replaced by large outliers. Second, we proposed a weighted extension (wBPDNη), and used it to incorporate subspace information learned on the fly to improve interpolation in adjacent frequencies.

Figure 2.6: Matricization of a 4D monochromatic frequency slice. Top: (Source x, Source y) matricization. Bottom: (Source x, Receiver x) matricization. Left: fully sampled data; right: subsampled data.

Figure 2.7: Singular value decay for the different matricizations of the 4D monochromatic frequency slice. Left: fully sampled data; right: subsampled data.

Figure 2.8: Missing-trace interpolation of a frequency slice at 12.3 Hz extracted from the 5D data set, 75% missing data. (a,b,c) Original, recovery, and residual of a common shot gather, with an SNR of 11.4 dB, at a location where a shot is recorded. (d,e,f) Interpolation of common shot gathers at a location where no reference shot is present.

2.11 Appendix

Proof of Theorem 1 Recall [Burer and Monteiro, 2003, Lemma 2.1]: if SSᵀ = KKᵀ, then S = KQ for some orthogonal matrix Q ∈ R^{r×r}. Next, note that the objective and constraints of (2.8) are given in terms of SSᵀ, and for any orthogonal Q ∈ R^{r×r}, we have SQQᵀSᵀ = SSᵀ, so S̄ is a local minimum of (2.8) if and only if S̄Q is a local minimum for all orthogonal Q ∈ R^{r×r}.

If Z̄ is a local minimum of (2.7), then any factor S̄ with Z̄ = S̄S̄ᵀ is a local minimum of (2.8). Otherwise, we could find a better solution S̃ in the neighborhood of S̄, and then Z̃ := S̃S̃ᵀ would be a feasible solution for (2.7) in the neighborhood of Z̄ (by continuity of the map S → SSᵀ).

We prove the other direction by contrapositive. If Z̄ is not a local minimum of (2.7), then we can find a sequence of feasible solutions Zk with f(Zk) < f(Z̄) and Zk → Z̄.
For each k, write Zk = SkSkᵀ. Since the Zk are all feasible for (2.7), the Sk are feasible for (2.8). By assumption {Zk} is bounded, and so is {Sk}; we can therefore find a subsequence Sj → S̃ with S̃S̃ᵀ = Z̄ and f(SjSjᵀ) < f(S̃S̃ᵀ). In particular, we have Z̄ = S̄S̄ᵀ = S̃S̃ᵀ, and S̃ is not a local minimum of (2.8); therefore (by the previous results) S̄ cannot be either.

Figure 2.9: Missing-trace interpolation of a frequency slice at 12.3 Hz extracted from the 5D data set, 50% missing data. (a,b,c) Original, recovery, and residual of a common shot gather, with an SNR of 16.6 dB, at a location where a shot is recorded. (d,e,f) Interpolation of common shot gathers at a location where no reference shot is present.

Figure 2.10: Comparison of the regularized and non-regularized formulations. SNR of (a) the low frequency slice at 12 Hz and (b) the high frequency slice at 60 Hz over a range of factor ranks. Without regularization, recovery quality decays with factor rank due to overfitting; the regularized formulation improves with higher factor rank.

Figure 2.11: Comparison of interpolation and denoising results for the Student's t and least-squares misfit functions. (a) 50% subsampled common receiver gather with another 10% of the shots replaced by large errors. (b) Recovery result using the least-squares misfit function. (c,d) Recovery and residual results using the Student's t misfit function, with an SNR of 17.2 dB.

Figure 2.12: Residual error for recovery of the 11 Hz slice (a) without weighting and (b) with weighting using the true support. SNR in this case is improved by 1.5 dB.

Figure 2.13: Residual of the frequency slice at 11 Hz (a) without weighting and (c) with support from the 10.75 Hz frequency slice; SNR is improved by 0.6 dB. Residual of the frequency slice at 16 Hz (b) without weighting and (d) with support from the 15.75 Hz frequency slice; SNR is improved by 1 dB. Weighting using the learned support is able to improve on the unweighted interpolation results.
Figure 2.14: Recovery results for the practical scenario with the weighted factorized formulation over a frequency range of 9-17 Hz. The weighted formulation outperforms the non-weighted one for higher frequencies. For some frequency slices, the performance of the non-weighted algorithm is better, because the weighted algorithm can be negatively affected when the subspaces are less correlated.

Chapter 3

Efficient matrix completion for seismic data reconstruction

A version of this chapter has been published in Geophysics, 2015, vol. 80, pages V97-V114.

3.1 Summary

Despite recent developments in improved acquisition, seismic data often remain undersampled along source and receiver coordinates, resulting in incomplete data for key applications such as migration and multiple prediction. We interpret the missing-trace interpolation problem in the context of matrix completion and outline three practical principles for using low-rank optimization techniques to recover seismic data. Specifically, we strive for recovery scenarios wherein the original signal is low rank and the subsampling scheme increases the singular values of the matrix. We employ an optimization program that restores this low-rank structure to recover the full volume. Omitting one or more of these principles can lead to poor interpolation results, as we show experimentally. In light of this theory, we compensate for the high-rank behaviour of data in the source-receiver domain by employing the midpoint-offset transformation for 2D data and a source-receiver permutation for 3D data to reduce the overall singular values. Simultaneously, in order to work with computationally feasible algorithms for large-scale data, we use a factorization-based approach to matrix completion, which significantly speeds up the computations compared to repeated singular value decompositions without reducing the recovery quality. In the context of our theory and experiments, we also show that windowing the data too aggressively can have adverse effects on the recovery quality. To overcome this problem, we carry out our interpolations for each frequency independently while working with the entire frequency slice. The result is a computationally efficient, theoretically motivated framework for interpolating missing-trace data. Our tests on realistic two- and three-dimensional seismic data sets show that our method compares favorably, both in terms of computational speed and recovery quality, to existing curvelet-based and tensor-based techniques.

3.2 Introduction

Coarsely sampled seismic data create substantial problems for seismic applications such as migration and inversion [Canning and Gardner, 1998, Sacchi and Liu, 2005]. In order to mitigate acquisition-related artifacts, we rely on interpolation algorithms to reproduce the missing traces accurately. The aim of these interpolation algorithms is to reduce acquisition costs and to provide densely sampled seismic data to improve the resolution of seismic images and mitigate subsampling-related artifacts such as aliasing. A variety of methodologies, each based on various mathematical techniques, have been proposed to interpolate seismic data. Some of the methods require transforming the data into different domains, such as the Radon [Bardan, 1987, Kabir and Verschuur, 1995], Fourier [Duijndam et al., 1999, Sacchi et al., 1998, Curry, 2009, Trad, 2009], and curvelet domains [Herrmann and Hennenfent, 2008a, Sacchi et al., 2009, Wang et al., 2010]. The CS approach exploits the resulting sparsity of the signal, i.e. the small number of nonzeros [Donoho, 2006a], in these domains.
In the CS framework, the goal for effective recovery is to first find a representation in which the signal of interest is sparse, or well approximated by a sparse signal, and where the mask encoding the missing traces makes the signal much less sparse. Hennenfent and Herrmann [2006b] and Herrmann and Hennenfent [2008a] successfully applied the ideas of CS to the reconstruction of missing seismic traces in the curvelet domain.

More recently, rank-minimization-based techniques have been applied to interpolating seismic data [Trickett et al., 2010, Oropeza and Sacchi, 2011, Kreimer and Sacchi, 2012c,a, Yang et al., 2013]. Rank minimization extends the theoretical and computational ideas of CS to the matrix case (see Recht et al. [2010a] and the references within). The key idea is to exploit the low-rank structure of seismic data when organized as a matrix, i.e. a small number of nonzero singular values or quickly decaying singular values. Oropeza and Sacchi [2011] identified that a seismic temporal frequency slice organized into a block Hankel matrix is, under ideal conditions, a matrix of rank k, where k is the number of different plane waves in the window of analysis. These authors showed that additive noise and missing samples increase the rank of the block Hankel matrix, and they presented an iterative algorithm, resembling seismic data reconstruction by projection onto convex sets, that uses a low-rank approximation of the Hankel matrix via the randomized singular value decomposition [Liberty et al., 2007, Halko et al., 2011b, Mahoney, 2011] to interpolate seismic temporal frequency slices. While this technique may be effective for interpolating data with a limited number of distinct dips, the approach, first, requires embedding the data into an even larger space, where each dimension of size n is mapped to a matrix of size n × n, so a frequency slice with 4 dimensions becomes a Hankel tensor with 8 dimensions. Second, the process involves partitioning the input data into smaller subsets that can be processed independently. As we know, the theory of matrix completion is predicated upon the notion of an m × n matrix being relatively low rank in order to ensure successful recovery. That is, the ratio of the rank of the matrix to the ambient dimension, min(m, n), should be small for rank-minimizing techniques to be successful in recovering the matrix from appropriately subsampled data. With the practice of windowing, we inherently increase the relative rank by decreasing the ambient dimension. Although mathematically desirable, owing to the seismic signal being stationary in sufficiently small windows, the act of windowing, from a matrix-rank point of view, can lead to lower-quality results, as we will see later in the experiments. Choosing window sizes a priori is also a difficult task, as it is not altogether obvious how to ensure that the resulting sub-volume is approximately a plane wave. Previously proposed methods for automatic window size selection include Sinha et al. [2005] and Wang et al. [2011], in the context of time-frequency analysis.
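The rank property underlying the Hankel approach is easy to check numerically: a trace that is a superposition of k plane waves (complex exponentials) yields a Hankel matrix of rank exactly k. A minimal sketch follows (our illustration, NumPy and SciPy assumed).

```python
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(0)
k, n = 3, 64
freqs = rng.uniform(0.05, 0.45, k)                # k distinct plane waves
x = sum(np.exp(2j * np.pi * f * np.arange(n)) for f in freqs)

H = hankel(x[: n // 2], x[n // 2 - 1:])           # H[i, j] = x[i + j]
print(np.linalg.matrix_rank(H))                   # 3, the number of plane waves
```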
Other than the Hankel transformation, Yang et al. [2013] used a texture-patch-based transformation of the data, initially proposed by Schaeffer and Osher [2013], to exploit the low-rank structure of seismic data. They showed that seismic data can be expressed as a combination of a few textures, due to the continuity of seismic data. They divided the signal matrix into small r × r submatrices, which they then vectorized into the columns of a matrix with r² rows using the same ordering, and approximated the resulting matrix using low-rank techniques. Although experimentally promising, this organization has no theoretically motivated underpinning, and its performance is difficult to predict as a function of the submatrix size. The authors proposed two algorithms to solve this matrix completion problem, namely the accelerated proximal gradient method (APG) and low-rank matrix fitting (LMaFit). APG does not scale well to large-scale seismic data because it involves repeated singular value decompositions, which are very expensive. LMaFit, on the other hand, parametrizes the matrix in terms of two low-rank factors and uses nonlinear successive over-relaxation to reconstruct the seismic data, but without penalizing the nuclear norm of the matrix. As shown in Aravkin et al. [2014a], without a nuclear norm penalty, choosing an incorrect rank parameter k can lead to overfitting of the data and degrade the interpolated result. Moreover, Mishra et al. [2013] demonstrates the poor performance of LMaFit, both in terms of speed and solution quality, compared to more modern matrix completion techniques that penalize the nuclear norm.

Another popular approach to seismic data interpolation is to exploit the multi-dimensional nature of seismic data and parametrize it as a low-rank tensor. Many of the ideas from low-rank matrices carry over to the multidimensional case, although there is no unique extension of the SVD to tensors. It is beyond the scope of this paper to examine all of the various tensor formats, but we refer to a few tensor-based seismic interpolation methods here. Kreimer and Sacchi [2012a] stipulate that the seismic data volume of interest is well captured by a rank-k Tucker tensor and subsequently propose a projection-onto-non-convex-sets algorithm for interpolating missing traces. Silva and Herrmann [2013a] develop an algorithm for interpolating Hierarchical Tucker tensors, which are similar to Tucker tensors but have much smaller dimensionality. Trickett et al. [2013] propose to take a structured outer product of the data volume, using a tensor ordering similar to Hankel matrices, and to perform tensor completion in the CP-Parafac tensor format. The method of Kreimer et al. [2013] considers a nuclear norm-penalizing approach in each matricization of the tensor, that is to say, the reshaping of the tensor into a matrix along each of its dimensions.

These previous CS-based approaches, using sparsity or rank minimization, incur computational difficulties when applied to large-scale seismic data volumes. Methods that involve redundant transforms, such as curvelets, or that add additional dimensions, such as taking outer products of tensors, are not computationally tractable for large data volumes with four or more dimensions. Moreover, a number of previous rank-minimization approaches are based on heuristic techniques and are not necessarily adequately grounded in theoretical considerations. Algorithmic components such as parameter selection can significantly affect the computed solution, and "hand-tuning" parameters, in addition to incurring unnecessary computational overhead, may lead to suboptimal results [Owen and Perry, 2009, Kanagal and Sindhwani, 2010].

3.2.1 Contributions

Our contributions in this work are three-fold.
First, we outline a practical framework for recovering seismic data volumes using matrix and tensor completion techniques built upon the theoretical ideas from CS. In particular, understanding this framework allows us to determine a priori when the recovery of signals sampled at sub-Nyquist rates will succeed or fail, and it provides the principles upon which we can design practical experiments to ensure successful recovery. The ideas themselves have been established for some time in the literature, albeit implicitly by means of the somewhat technical conditions of CS and matrix completion. We explicitly describe these ideas on a high level in a qualitative manner in the hopes of broadening the accessibility of these techniques to a wider audience. These principles are all equally necessary in order for CS-based approaches of signal recovery to succeed, and we provide examples of how recovery can fail if one or more of these principles are omitted.

Second, we address the computational challenges of using these matrix-based techniques for seismic-data reconstruction, since traditional rank-minimization algorithms rely on computing the singular value decomposition (SVD), which is prohibitively expensive for large matrices. To overcome this issue, we propose to use either a fast optimization approach that combines the (SVD-free) matrix factorization approach recently developed by Lee et al. [2010a] with the Pareto curve approach proposed by Berg and Friedlander [2008], or the factorization-based parallel matrix completion framework dubbed Jellyfish [Recht and Ré, 2013]. We demonstrate the superior computational performance of both of these approaches compared to the tensor-based interpolation of Kreimer et al. [2013] as well as to traditional curvelet-based approaches on realistic 2D and 3D seismic data sets.

Third, we examine the popular approach of windowing a large data volume into smaller data volumes to be processed in parallel and empirically demonstrate how such a process does not respect the inherent redundancy present in the data, degrading reconstruction quality as a result.

3.2.2 Notation

In this paper, we use lower case boldface letters to represent vectors (i.e., one-dimensional quantities), e.g., b, f, x, y, . . . . We denote matrices and tensors using upper case boldface letters, e.g., X, Y, Z, . . . , and operators that act on vectors, matrices, or tensors are denoted using calligraphic upper case letters, e.g., $\mathcal{A}$. 2D seismic volumes have one source and one receiver dimension, denoted $x_{\text{src}}$ and $x_{\text{rec}}$, respectively, and time, denoted t. 3D seismic volumes have two source dimensions, denoted $x_{\text{src}}, y_{\text{src}}$, two receiver dimensions, denoted $x_{\text{rec}}, y_{\text{rec}}$, and time t. We also denote midpoint and offset coordinates as $x_{\text{midpt}}, x_{\text{offset}}$ for the x-dimensions, and similarly for the y-dimensions.

The Frobenius norm of an m × n matrix X, denoted $\|X\|_F$, is simply the usual $\ell_2$ norm of X when considered as a vector, i.e., $\|X\|_F = \sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n} X_{ij}^2}$. We write the SVD of X as $X = USV^H$, where U and V are orthogonal and $S = \mathrm{diag}(s_1, s_2, \dots, s_r)$ is a diagonal matrix of singular values, $s_1 \ge s_2 \ge \dots \ge s_r \ge 0$. The matrix X has rank k when $s_k > 0$ and $s_{k+1} = s_{k+2} = \dots = s_r = 0$. The nuclear norm of X is defined as $\|X\|_* = \sum_{i=1}^{r} s_i$.

We will use the matricization operation freely in the text below, which reshapes a tensor into a matrix along specific dimensions. Specifically, if X is a temporal frequency slice with dimensions $x_{\text{src}}, y_{\text{src}}, x_{\text{rec}}, y_{\text{rec}}$ indexed by i = 1, . . . , 4, the matrix $X_{(i)}$ is formed by vectorizing the ith dimension along the rows and the remaining dimensions along the columns. Matricization can be performed not only along singleton dimensions, but also with groups of dimensions. For example, $X_{(i)}$ with $i = \{x_{\text{src}}, y_{\text{src}}\}$ places the x and y source dimensions along the rows and the remaining dimensions along the columns.
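As a quick illustration of this notation, the following NumPy sketch (our own example, with arbitrary sizes) computes the Frobenius and nuclear norms and forms matricizations of a small 4D array along single and grouped dimensions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 60))

# Frobenius norm: the l2 norm of X viewed as a vector.
fro = np.linalg.norm(X, "fro")

# Singular values s1 >= s2 >= ... >= 0; the nuclear norm is their sum.
s = np.linalg.svd(X, compute_uv=False)
nuc = s.sum()
assert np.isclose(nuc, np.linalg.norm(X, "nuc"))

# Matricization of a 4D tensor F(xsrc, ysrc, xrec, yrec): the chosen
# group of dimensions goes along the rows, the rest along the columns.
F = rng.standard_normal((10, 10, 12, 12))  # (xsrc, ysrc, xrec, yrec)

def matricize(F, row_dims):
    """Reshape tensor F into a matrix with `row_dims` along the rows."""
    col_dims = [d for d in range(F.ndim) if d not in row_dims]
    perm = list(row_dims) + col_dims
    rows = int(np.prod([F.shape[d] for d in row_dims]))
    return np.transpose(F, perm).reshape(rows, -1)

F_src = matricize(F, row_dims=(0, 1))  # (xsrc, ysrc) unfolding, 100 x 144
F_mix = matricize(F, row_dims=(0, 2))  # (xsrc, xrec) unfolding, 120 x 120
```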
3.3 Structured signal recovery

In this setting, we are interested in completing a matrix X when we only view a subset of its entries. For instance, in the 2D seismic data case, X is typically a frequency slice, and missing shots correspond to missing columns from this matrix. Matrix completion arises as a natural extension of compressive sensing ideas to recovering two-dimensional signals. Here we consider three core components of matrix completion.

1. Signal structure - low rank

Compressed sensing deals with recovering vectors x that are sparse, i.e., that have few nonzeros. For a matrix X, a direct analogue of sparsity in a signal x is sparsity in the singular values of X. We are interested in the case where the singular values of X decay quickly, so that X is well approximated by a rank-k matrix. The set of all rank-k matrices has low dimensionality compared to the ambient space of m × n matrices, which allows us to recover a low-rank signal from sufficiently incoherent measurements. When our matrix X has slowly decaying singular values, i.e., is high rank, we consider transformations that promote quickly decaying singular values, which will allow us to recover our matrix in another domain.

Since we are sampling points from our underlying matrix X, we also want to make sure that X is not too "spiky" and is sufficiently "spread out". If our matrix of interest were, for instance, the matrix of all zeros with one nonzero entry, we could not hope to recover this matrix without sampling the single nonzero entry. In the seismic case, given that our signals of interest are composed of oscillatory waveforms, they are rarely, if ever, concentrated in a single region of, say, (source, receiver) space.

2. Structure-destroying sampling operator

Since our matrix has an unknown but small rank k, we will look for the matrix X of smallest rank that fits our sampled data, i.e., $\mathcal{A}(X) = B$ for a subsampling operator $\mathcal{A}$. As such, we need to employ subsampling schemes that increase the rank or slow the decay of the singular values of the matrix. That is to say, we want to consider sampling schemes that are incoherent with respect to the left and right singular vectors. Given a subsampling operator $\mathcal{A}$, the worst possible subsampling scheme for the purposes of recovery would be removing columns (equivalently, rows) from the matrix, i.e., $\mathcal{A}(X) = XI_k$, where $I_k$ is a subset of the columns of the identity matrix. Removing columns from the matrix can never allow for successful reconstruction because this operation lowers the rank, and therefore the original matrix X is no longer the matrix of smallest rank that matches the data (for instance, the data itself would be a candidate solution). Unfortunately, for, say, a 2D seismic frequency slice X with sources placed along the columns and receivers along the rows, data is often acquired with missing sources, which translates to missing columns of X. Similarly, periodic subsampling can be written as $\mathcal{A}(X) = I_k^T X I_{k'}$, where $I_k, I_{k'}$ are subsets of the columns of the identity matrix.
A similar consideration shows that this operator lowers the rank, and thus rank-minimizing interpolation will not succeed in this sampling regime either.

The problematic aspect of the aforementioned sampling schemes is that they are separable with respect to the matrix. That is, if $X = USV^H$ is the singular value decomposition of X, the previously mentioned schemes yield a subsampling operator of the form $\mathcal{A}(X) = CXD^H = (CU)S(DV)^H$ for some matrices C, D. In the compressed sensing context, this type of sampling is coherent with respect to the left and right singular vectors, which is an unfavourable recovery scenario.

The incoherent sampling considered in the matrix completion literature is that of uniform random sampling, wherein the individual entries of X are sampled from the matrix with equal probability [Candès and Recht, 2009, Recht, 2011]. This particular sampling scheme, although theoretically convenient to analyze, is impractical to implement in the seismic context, as it corresponds to removing individual (source, receiver) pairs from the data. Instead, we will consider non-separable transformations, i.e., transforming data from the source-receiver domain to the midpoint-offset domain, under which the missing-sources operator is incoherent. The resulting transformations will simultaneously increase the decay of the singular values of our original signal, thereby lowering its rank, and slow the decay of the singular values of the subsampled signal, thereby creating a favourable recovery scenario.

3. Structure-promoting optimization program

Since we assume that our target signal X is low rank and that subsampling increases the rank, the natural approach to interpolation is to find the matrix of lowest possible rank that agrees with our observations. That is, we solve the following problem for $\mathcal{A}$, our measurement operator, and B, our subsampled data, up to a given tolerance σ:

$$\min_{X} \ \|X\|_* \quad \text{subject to} \quad \|\mathcal{A}(X) - B\|_F \le \sigma. \tag{3.1}$$

Similar to using the $\ell_1$ norm in the sparse recovery case, minimizing the nuclear norm promotes low-rank structure in the final solution. Here we refer to this problem as Basis Pursuit Denoising (BPDN$_\sigma$).

In summary, these three principles are all necessary for the recovery of subsampled signals using matrix completion techniques. Omitting any one of these three principles will, in general, cause such methods to fail, as we will see in the next section. Although this framework is outlined for matrix completion, a straightforward extension of this approach also applies to the tensor completion case.
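The interplay of these principles can be checked numerically. In the toy NumPy sketch below (our own illustration, with arbitrary sizes), zeroing out whole columns of a rank-5 matrix leaves the observed matrix low rank, so the data itself is a feasible low-rank solution; zeroing entries uniformly at random instead spreads energy across the singular value spectrum, which is the favourable regime for Problem (3.1).

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 100, 5
X = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))  # rank-5 matrix

def top_svals(A, num=8):
    return np.round(np.linalg.svd(A, compute_uv=False)[:num], 2)

# Separable sampling: zero out half of the columns ("missing sources").
# The zero-filled observations are still (at most) rank 5, so rank
# minimization has no incentive to fill in the gaps.
col_mask = rng.random(n) > 0.5
print(top_svals(X * col_mask[None, :]))   # only 5 nonzero singular values

# Entrywise random sampling: the zero-filled observations are no longer
# low rank -- the singular values spread out, the favourable scenario.
entry_mask = rng.random((n, n)) > 0.5
print(top_svals(X * entry_mask))          # slowly decaying spectrum
print(top_svals(X))                       # reference: the full matrix
```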
3.4 Low-rank promoting data organization

Before we can apply matrix completion techniques to interpolate F, our unvectorized frequency slice of fully sampled data, we must deal with the following issues. First, in the original (src, rec) domain, the missing-sources operator $\mathcal{A}$ removes columns from F, which sets singular values at the end of the spectrum to zero, thereby decreasing the rank. Second, F itself also has high rank, owing to the presence of strong diagonal entries (zero-offset energy) and subsequent off-diagonal oscillations. Our previous theory indicates that naively applying matrix completion techniques in this domain will yield poor results. Simply put, we are missing two of the prerequisite signal recovery principles in the (src, rec) domain, which we can see by plotting the decay of singular values in Figure 3.1. In light of our previous discussion, we will examine different transformations under which the missing-sources operator increases the singular values of our data matrix and hence promotes recovery in an alternative domain.

3.4.1 2D seismic data

In this case, we use the midpoint-offset transformation, which defines new coordinates for the matrix as

$$x_{\text{midpt}} = \tfrac{1}{2}(x_{\text{src}} + x_{\text{rec}}), \qquad x_{\text{offset}} = \tfrac{1}{2}(x_{\text{src}} - x_{\text{rec}}).$$

This coordinate transformation rotates the matrix F by 45 degrees and is a tight frame operator with a nullspace, as depicted in Figure 3.2. If we denote this operator by $\mathcal{M}$, then $\mathcal{M}^*\mathcal{M} = I$, so transforming from (src, rec) to (midpt, offset) and back to (src, rec) returns the original signal, but $\mathcal{M}\mathcal{M}^* \neq I$, so transforming from (midpt, offset) to (src, rec) and back again does not return the original signal. By using this transformation, we move the strong diagonal energy to a single column in the new domain, which mitigates the slow singular value decay in the original domain. Likewise, the restriction operator $\mathcal{A}$ now removes super-/sub-diagonals from F rather than columns, as demonstrated in Figure 3.2, which results in an overall increase in the singular values, as seen in Figure 3.1, placing the interpolation problem in a favourable recovery scenario as per the previous section. Our new optimization variable is $\tilde{X} = \mathcal{M}(X)$, the data volume in the midpoint-offset domain, and our optimization problem is therefore

$$\min_{\tilde{X}} \ \|\tilde{X}\|_* \quad \text{subject to} \quad \|\mathcal{A}\mathcal{M}^*(\tilde{X}) - B\|_F \le \sigma.$$

Figure 3.1: Singular value decay in the source-receiver and midpoint-offset domain. Left: fully sampled frequency slices. Right: 50% missing shots. Top: low frequency slice. Bottom: high frequency slice. Missing-source subsampling increases the singular values in the (midpoint-offset) domain instead of decreasing them in the (src-rec) domain.

Figure 3.2: A frequency slice from the seismic dataset from the Nelson field. Left: fully sampled data. Right: 50% subsampled data. Top: source-receiver domain. Bottom: midpoint-offset domain.
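One concrete realization of the midpoint-offset map is a pure index remapping onto the 45-degree rotated grid. The sketch below is our own illustration of the tight-frame property stated above (the embedding is injective, so the adjoint is a left inverse); grid sizes and function names are ours, not from the original implementation.

```python
import numpy as np

def to_midpoint_offset(X):
    """Map an n x n (src, rec) frequency slice onto the rotated
    (midpoint, offset) grid: entry (i, j) lands at row i + j (twice the
    midpoint index) and column i - j + n - 1 (shifted offset index)."""
    n = X.shape[0]
    Y = np.zeros((2 * n - 1, 2 * n - 1), dtype=X.dtype)
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    Y[i + j, i - j + n - 1] = X
    return Y

def to_src_rec(Y):
    """Adjoint (and left inverse) of the map above: gather the populated
    diagonal cells back onto the (src, rec) grid."""
    n = (Y.shape[0] + 1) // 2
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return Y[i + j, i - j + n - 1]

X = np.arange(16.0).reshape(4, 4)
assert np.allclose(to_src_rec(to_midpoint_offset(X)), X)  # M*M = I
# M M* != I: most cells of the rotated grid are never touched, so the
# operator has a nullspace, as stated in the text.
```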
3.4.2 3D seismic data

Unlike in the matrix case, there is no unique generalization of the SVD to tensors, and as a result, there is no unique notion of rank for tensors. Instead, we can consider the rank of different matricizations of F. Rather than restricting ourselves to matricizations $F_{(i)}$ with $i = x_{\text{src}}, y_{\text{src}}, x_{\text{rec}}, y_{\text{rec}}$, we consider the case where $i = \{x_{\text{src}}, y_{\text{src}}\}, \{x_{\text{src}}, x_{\text{rec}}\}, \{x_{\text{src}}, y_{\text{rec}}\}, \{y_{\text{src}}, x_{\text{rec}}\}$. Owing to the reciprocity relationship between sources and receivers in F, we only need to consider two different matricizations of F, which are depicted in Figures 3.3 and 3.4. As we see in Figure 3.5, the $i = (x_{\text{rec}}, y_{\text{rec}})$ organization, that is, placing both receiver coordinates along the rows, results in a matrix that has high rank, and the missing-sources operator removes columns from the matrix, decreasing the rank as mentioned previously. On the other hand, the $i = (y_{\text{src}}, y_{\text{rec}})$ matricization yields fast decay of the singular values for the original signal and a subsampling operator that causes the singular values to increase. This scenario is much closer to the idealized matrix completion sampling, which would correspond to the nonphysical process of randomly removing $(x_{\text{src}}, y_{\text{src}}, x_{\text{rec}}, y_{\text{rec}})$ points from F. We note that this data organization has been considered in the context of solution operators of the wave equation in Demanet [2006b], which applies to our case, as our data volume F is the restriction of a Green's function to the acquisition surface.

Figure 3.3: (xrec, yrec) matricization. Top: full data volume. Bottom: 50% missing sources. Left: fully sampled data. Right: zoom plot.

Figure 3.4: (ysrc, yrec) matricization. Top: fully sampled data. Bottom: 50% missing sources. Left: full data volume. Right: zoom plot. In this domain, the sampling artifacts are much closer to the idealized 'pointwise' random sampling of matrix completion.

Figure 3.5: Singular value decay (normalized) of the (xrec, yrec) matricization (left) and the (ysrc, yrec) matricization (right) for full data and 50% missing sources.

3.5 Large scale data reconstruction

In this section, we explore the modifications necessary to extend matrix completion to 3D seismic data and compare this approach to an existing tensor-based interpolation technique. Matrix-completion techniques, after some modification, easily scale to interpolate large, multidimensional seismic volumes.

3.5.1 Large scale matrix completion

For the matrix completion approach, the limiting component for large-scale data is the nuclear norm projection. As mentioned in Aravkin et al. [2014a], the projection onto the set $\|X\|_* \le \tau$ requires the computation of the SVD of X. Computing the SVD of an n × n matrix has computational complexity $O(n^3)$, which is prohibitively expensive when X has tens of thousands or even millions of rows and columns. On the assumption that X is approximately low rank at a given iteration, other authors, such as Stoll [2012], compute a partial SVD using a Krylov approach, which is still cost-prohibitive for large matrices.

We can avoid the expensive computation of SVDs via a well-known factorization of the nuclear norm. Specifically, we have the following characterization of the nuclear norm, due to Srebro [2004]:

$$\|X\|_* = \min_{L,R} \ \tfrac{1}{2}\left(\|L\|_F^2 + \|R\|_F^2\right) \quad \text{subject to} \quad X = LR^T.$$

This allows us to write $X = LR^T$ for some placeholder variables L and R of a prescribed rank k. Therefore, instead of projecting onto $\|X\|_* \le \tau$, we can instead project onto the factor ball $\tfrac{1}{2}(\|L\|_F^2 + \|R\|_F^2) \le \tau$. This factor-ball projection only involves computing $\|L\|_F^2$ and $\|R\|_F^2$ and scaling the factors by a constant, which is substantially cheaper than computing the SVD of X.

Equipped with this factorization approach, we can still use the basic idea of SPG$\ell_1$ to flip the objective and the constraints. The resulting subproblems for solving BPDN$_\sigma$ can be solved much more efficiently in this factorized form, while still maintaining the quality of the solution. The resulting algorithm is dubbed SPG-LR by Aravkin et al. [2014a]. This reformulation allows us to apply these matrix completion techniques to large-scale seismic data interpolation.
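The factor-ball projection can be written in a few lines; the sketch below is our own paraphrase of the idea, not the SPG-LR code, and shows why it is so much cheaper than an SVD-based projection.

```python
import numpy as np

def project_factor_ball(L, R, tau):
    """Project (L, R) onto {(L, R) : (||L||_F^2 + ||R||_F^2) / 2 <= tau}.
    Only two Frobenius norms and one scaling are needed -- no SVD."""
    s = 0.5 * (np.linalg.norm(L, "fro") ** 2 + np.linalg.norm(R, "fro") ** 2)
    if s <= tau:
        return L, R
    c = np.sqrt(tau / s)   # uniform shrinkage puts the pair on the boundary
    return c * L, c * R

rng = np.random.default_rng(3)
L = rng.standard_normal((1000, 20))
R = rng.standard_normal((1200, 20))
L, R = project_factor_ball(L, R, tau=5.0)
# The induced signal estimate is X = L @ R.T; it is never formed via an SVD.
```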
This factorization turns the convex subproblems for solving BPDN$_\sigma$, posed in terms of X, into a nonconvex problem in terms of the variables L, R, so there is a possibility for local minima or non-critical stationary points to arise when using this approach. As it turns out, as long as the prescribed rank k is larger than the rank of the optimal X, any local minimum encountered in the factorized problem is actually a global minimum [Burer and Monteiro, 2005, Aravkin et al., 2014a]. The possibility of non-critical stationary points is harder to discount and remains an open problem. There is preliminary analysis indicating that initializing L and R so that $LR^T$ is sufficiently close to the true X will ensure that this optimization program converges to the true solution [Sun and Luo, 2014]. In practice, we initialize L and R randomly with appropriately scaled Gaussian random entries, which does not noticeably change the recovery results across various random realizations.

An alternative approach to solving the factorized BPDN$_\sigma$ is to relax the data constraint of Equation (3.1) into the objective, resulting in the QP$_\lambda$ formulation

$$\min_{L,R} \ \tfrac{1}{2}\|\mathcal{A}(LR^H) - B\|_F^2 + \lambda\left(\|L\|_F^2 + \|R\|_F^2\right). \tag{3.2}$$

The authors in Recht and Ré [2013] exploit the resulting independence of various subblocks of the L and R factors to create a partitioning scheme that updates components of these factors in parallel, yielding a parallel matrix completion framework dubbed Jellyfish. With this Jellyfish approach, each QP$_\lambda$ problem for fixed λ and fixed internal rank k can be solved very efficiently, and cross-validation techniques can choose the optimal λ and rank parameters.
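For intuition, a bare-bones version of the factorized QP$_\lambda$ objective (3.2) and its gradients, with entrywise sampling as the measurement operator, can be minimized by plain gradient descent. This toy NumPy sketch is illustrative only; Jellyfish itself uses parallel incremental updates over subblocks of L and R, not the full-gradient loop shown here.

```python
import numpy as np

def qp_lambda(L, R, B, mask, lam):
    """Objective and gradients of
    0.5 * ||mask * (L R^T) - B||_F^2 + lam * (||L||_F^2 + ||R||_F^2)."""
    W = mask * (L @ R.T) - B           # residual on observed entries only
    f = 0.5 * np.sum(W ** 2) + lam * (np.sum(L ** 2) + np.sum(R ** 2))
    return f, W @ R + 2.0 * lam * L, W.T @ L + 2.0 * lam * R

rng = np.random.default_rng(4)
m, n, k = 200, 200, 10
X = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
mask = (rng.random((m, n)) > 0.5).astype(float)
B = mask * X                            # 50% of the entries observed
L = 0.1 * rng.standard_normal((m, k))   # small random initialization
R = 0.1 * rng.standard_normal((n, k))
step, lam = 1e-3, 1e-3
for _ in range(2000):                   # plain gradient descent
    _, gL, gR = qp_lambda(L, R, B, mask, lam)
    L -= step * gL
    R -= step * gR
print("relative data misfit:",
      np.linalg.norm(mask * (L @ R.T) - B) / np.linalg.norm(B))
```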
3.5.2 Large scale tensor completion

Following the approach of Kreimer et al. [2013], which applies the method developed in Gandy et al. [2011] to seismic data, we can also exploit the tensor structure of a frequency slice F for interpolating missing traces. We now stipulate that each matricization $F_{(i)}$ for i = 1, . . . , 4 has low rank and proceed in a way analogous to the matrix completion case by solving the following problem:

$$\min_{F} \ \sum_{i=1}^{4} \|F_{(i)}\|_* \quad \text{subject to} \quad \|\mathcal{A}(F) - B\|_2 \le \sigma,$$

i.e., we look for the tensor F that simultaneously has the lowest rank in each matricization $F_{(i)}$ and fits the subsampled data B. In the case of Kreimer et al. [2013], this interpolation is performed in the $(x_{\text{midpt}}, y_{\text{midpt}}, x_{\text{offset}}, y_{\text{offset}})$ domain on each frequency slice, which we also employ in our later experiments.

To solve this problem, the authors in Kreimer et al. [2013] use the Douglas-Rachford variable splitting technique, which creates 4 additional copies of the variable F, denoted $X_i$, with each copy corresponding to a matricization $F_{(i)}$. This is an inherent feature of this approach to solving convex optimization problems with coupled objectives/constraints and thus cannot be avoided or optimized away. The authors then use an augmented Lagrangian approach to solve the decoupled problem

$$\min_{X_1,X_2,X_3,X_4,\,F} \ \sum_{i=1}^{4} \|X_i\|_* + \lambda\|\mathcal{A}(F) - B\|_2^2 \quad \text{subject to} \quad X_i = F_{(i)} \ \text{for } i = 1, \dots, 4. \tag{3.3}$$

The resulting problem is convex and thus has a unique solution. We refer to this method as the alternating direction method of multipliers (ADMM) tensor method. This variable splitting technique can be difficult to implement for realistic problems, as the tensor F often cannot be stored fully in working memory. Given the large number of elements of F, creating at minimum four extraneous copies of F can quickly overload the storage and memory of even a large computing cluster. Moreover, there are theoretical and numerical results stating that this problem formulation is in fact no better than imposing the nuclear norm penalty on a single matricization of F, at least in the case of Gaussian measurements [Oymak et al., 2012, Signoretto et al., 2011]. We shall see a similar phenomenon in our subsequent experiments.

Penalizing the nuclear norm in this fashion, as in all methods that use an explicit nuclear norm penalty, scales very poorly as the problem size grows. When our data F has four or more dimensions, the cost of computing the SVD of one of its matricizations easily dominates the overall computational costs of the method. Applying this operation four times per iteration in the above problem, as is required due to the variable splitting, prevents this technique from performing efficiently for large, realistic problems.
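The per-matricization subproblem that makes splitting schemes like (3.3) expensive is the proximal operator of the nuclear norm, i.e., singular value thresholding, which costs one full SVD per call. The sketch below shows the generic (textbook) operator, not the code of Kreimer et al. [2013].

```python
import numpy as np

def svt(X, mu):
    """Singular value thresholding: the prox of mu * ||.||_*. Inside a
    splitting method this is applied once per matricization per
    iteration, hence one full SVD each time -- the dominant cost."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - mu, 0.0)) @ Vh

rng = np.random.default_rng(5)
X = rng.standard_normal((300, 12)) @ rng.standard_normal((12, 300))
Y = svt(X + 0.1 * rng.standard_normal((300, 300)), mu=5.0)
print("rank after thresholding:", np.linalg.matrix_rank(Y))
```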
3.6 Experiments

We perform seismic data interpolation on five different data sets. In the 2D case, the first data set, a shallow-water marine scenario, is from the Nelson field, provided to us by PGS. The Nelson data set contains 401 × 401 sources and receivers with a temporal sampling interval of 0.004 s. The second, synthetic data set is from the Gulf of Mexico (GOM) and is provided to us by Chevron. It contains 3201 sources and 801 receivers with a spatial interval of 25 m. The third data set is simulated on a synthetic velocity model (see Berkhout and Verschuur [2006]) using IWave [Symes et al., 2011a]. An anticline salt structure overlies the target, i.e., a fault structure. A seismic line is modelled using a fixed-spread configuration where sources and receivers are placed at an interval of 15 m. This results in a data set of 361 × 361 sources and receivers.

Our 3D examples consist of two different data sets. The first data set is generated on a synthetic single-layer model. This data set has 50 sources and 50 receivers, and we use a frequency slice at 4 Hz. This simple data set allows us to compare the running time of the various algorithms under consideration. The Compass data set is provided to us by the BG Group and is generated from an unknown but geologically complex and realistic model. We selected a few 4D monochromatic frequency slices from this data set at 4.68, 7.34, and 12.3 Hz. Each monochromatic frequency slice has 401 × 401 receivers spaced by 25 m and 68 × 68 sources spaced by 150 m. In all the experiments, we initialize L and R using random numbers.

3.6.1 2D seismic data

In this section, we compare matrix-completion based techniques to existing curvelet-based interpolation for 2D seismic data. For details on the curvelet-based reconstruction techniques, we refer to Herrmann and Hennenfent [2008a] and Mansour et al. [2013]. For concreteness, we concern ourselves with the missing-sources scenario, although the missing-receivers scenario is analogous. In all the experiments, we set the data misfit parameter σ equal to $\eta\|B\|_F$, where $\eta \in (0, 1)$ is the fraction of the input data energy to fit.

Nelson data set

Here, we remove 50% and 75% of the sources, respectively. For the sake of comparing curvelet-based and rank-minimization based reconstruction methods on identical data, we first interpolate a single 2D frequency slice at 10 Hz. When working with frequency slices using curvelets, Mansour et al. [2013] showed that the best recovery is achieved in the midpoint-offset domain, owing to the increased curvelet sparsity. Therefore, in order to draw a fair comparison with the matrix-based methods, we perform both curvelet-based and matrix-completion based reconstruction in the midpoint-offset domain. We summarize the results of interpolating a single 2D frequency slice in Table 3.1. Compared to the costs associated with applying the forward and adjoint curvelet transforms, SPG-LR is much more efficient, and as such, this approach significantly outperforms the $\ell_1$-based curvelet interpolation. Both methods perform similarly in terms of reconstruction quality for low frequency slices, since these slices are well represented both as a sparse superposition of curvelets and as a low-rank matrix. High frequency data slices, on the other hand, are empirically high rank, which can be shown explicitly for a homogeneous medium as a result of Lemma 2.7 in Engquist and Ying [2007], and we expect matrix completion to perform less well in this case, as high frequencies contain oscillations away from the zero offset. On the other hand, these oscillations can be well approximated by low-rank values in localized domains. To perform the reconstruction of seismic data in the high frequency regime, Kumar et al. [2013a] proposed to represent the matrix in the Hierarchical semi-separable (HSS) format, wherein data is first windowed into off-diagonal and diagonal blocks and the diagonal blocks are recursively partitioned. The interpolation is then performed on each subset separately. In the interest of brevity, we omit this approach here. Additionally, since the high frequency slices are very oscillatory, they are much less sparse in the curvelet dictionary.

Owing to the significantly faster performance of matrix completion compared to the curvelet-based method, we apply the former technique to an entire seismic data volume by interpolating each frequency slice in the 5-85 Hz band. Figure 3.6 shows the interpolation results in the case of 75% missing traces. In order to get the best rank values for interpolating the full seismic line, we first performed the interpolation for the frequency slices at 10 Hz and 60 Hz. The best rank values we obtained for these two slices were 30 and 60, respectively. Keeping this in mind, we work with all of the monochromatic frequency slices and adjust the rank linearly from 30 to 60 when moving from low to high frequencies. The running time is 2 h 18 min using SPG-LR on a 2 quad-core 2.6 GHz Intel processor with 16 GB memory and implicit multithreading via LAPACK libraries. We can see that we have low reconstruction error, with little coherent energy in the residual, when 75% of the sources are missing. Figure 3.7 shows the qualitative measurement of recovery for all frequencies in the energy band. We can further mitigate such coherent residual energy by exploiting additional structures in the data, such as symmetry, as in Kumar et al. [2014].
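Throughout these experiments, recovery quality is reported as an SNR in dB. The sketch below assumes the standard definition used in the interpolation literature (higher is better); it is included for reference only.

```python
import numpy as np

def snr_db(f_true, f_rec):
    """Recovery quality in dB: -20 * log10(||f - f_rec|| / ||f||)."""
    return -20.0 * np.log10(np.linalg.norm(f_true - f_rec)
                            / np.linalg.norm(f_true))

rng = np.random.default_rng(6)
f = rng.standard_normal(1000)
print(snr_db(f, f + 0.05 * rng.standard_normal(1000)))  # small residual,
                                                        # hence high SNR
```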
Remark

It is important to note that if we omit the first two principles of matrix completion by interpolating the signal in the source-receiver domain, as discussed previously, we obtain very poor results, as shown in Figure 3.8. Similar to CS-based interpolation, choosing an appropriate transform domain for matrix and tensor completion is vital to ensure successful recovery.

Table 3.1: Curvelet versus matrix completion (MC). Real data results for completing a frequency slice of size 401 × 401 with 50% and 75% missing sources, at 10 Hz (low frequency) and 60 Hz (high frequency). SNR, computational time, and number of iterations are shown for varying levels of η = 0.08, 0.1.

10 Hz                  Curvelets            MC
                       η=0.08    η=0.1     η=0.08    η=0.1
50%  SNR (dB)          18.2      17.3      18.6      17.7
     time (s)          1249      1020      15        10
     iterations        123       103       191       124
75%  SNR (dB)          13.5      13.2      13.0      13.3
     time (s)          1637      1410      8.5       8
     iterations        162       119       105       104

60 Hz                  Curvelets            MC
                       η=0.08    η=0.1     η=0.08    η=0.1
50%  SNR (dB)          10.5      10.4      12.5      12.4
     time (s)          1930      1549      19        13
     iterations        186       152       169       118
75%  SNR (dB)          6.0       5.9       6.9       7.0
     time (s)          3149      1952      15        10
     iterations        284       187       152       105

Figure 3.6: Missing-trace interpolation. Top: fully sampled data and 75% subsampled common receiver gather. Bottom: recovery and residual results with an SNR of 9.4 dB.

Figure 3.7: Qualitative performance of 2D seismic data interpolation for the 5-85 Hz frequency band for 50% and 75% subsampled data.

Figure 3.8: Recovery results using matrix-completion techniques. Left: interpolation in the source-receiver domain, low-frequency SNR 3.1 dB. Right: difference between true and interpolated slices. Since the sampling artifacts in the source-receiver domain do not increase the singular values, matrix completion in this domain is unsuccessful. This example highlights the necessity of having the appropriate principles of low-rank recovery in place before a seismic signal can be interpolated effectively.

Gulf of Mexico data set

In this case, we remove 80% of the sources and perform the interpolation on a frequency spectrum of 5-30 Hz. Figure 3.10 shows the reconstruction error of the rank-minimization based approach for frequency slices at 7 Hz and 20 Hz. For visualization purposes, we only show a subset of the interpolated data corresponding to the square block in Figure 3.9, but we interpolate the monochromatic slice over all sources and receivers. Even in the highly subsampled case of 80%, we are still able to recover to high SNRs of 14.2 dB and 10.5 dB, respectively, although we start losing coherent energy in the residual as a result of the high subsampling ratio. These results indicate that, even in complex geological environments, low frequencies are still low rank in nature. This can also be seen since, for a continuous function, the smoother the function is (i.e., the more derivatives it has), the faster its singular values decay (see, for instance, Chang and Ha [1999]). For comparison purposes, we plot the frequency-wavenumber spectrum of the 20 Hz frequency slice in Figure 3.11, along with the corresponding spectra of the matrix with 80% of the sources removed periodically and uniformly at random. In this case, the volume is approximately three times aliased in the bandwidth of the original signal for periodic subsampling, while the randomly subsampled case has created noisy aliases.
The average sampling interval for both schemes is the same. As shown in this figure, the interpolated matrix has a significantly improved spectrum compared to the input. Figure 3.12 shows the interpolation result over a common receiver gather using rank-minimization based techniques. In this case, we set the rank parameter to 40 and use the same rank for all frequencies. The running time on a single frequency slice in this case is 7 min using SPG-LR and 1320 min using curvelets.

Figure 3.9: Gulf of Mexico data set. Top: fully sampled monochromatic slice at 7 Hz. Bottom left: fully sampled data (zoomed in the square block). Bottom right: 80% subsampled sources. For visualization purposes, the subsequent figures only show the interpolated result in the square block.

Figure 3.10: Reconstruction errors for frequency slices at 7 Hz (left) and 20 Hz (right) in the case of 80% subsampled sources. Rank-minimization based recovery with SNRs of 14.2 dB and 11.0 dB, respectively.

Figure 3.11: Frequency-wavenumber spectrum of the common receiver gather. Top left: fully sampled data. Top right: periodically subsampled data with 80% missing sources. Bottom left: uniformly random subsampled data with 80% missing sources. Bottom right: reconstruction of uniformly random subsampled data using rank-minimization based techniques. While periodic subsampling creates aliasing, uniform random subsampling turns the aliases into incoherent noise across the spectrum.

Figure 3.12: Gulf of Mexico data set, common receiver gather. Left: uniformly random subsampled data with 80% missing sources. Middle: reconstruction results using rank-minimization based techniques (SNR = 7.8 dB). Right: residual.

Synthetic fault model

In this setting, we remove 80% of the sources and display the results in Figure 3.13. For simplicity, we only perform rank-minimization based interpolation on this data set. In this case, we set the rank parameter to 30 and use the same value for all frequencies. Even though the presence of faults makes the geological environment complex, we are still able to successfully reconstruct the data volume using rank-minimization based techniques, which is also evident in the low coherency of the data residual (Figure 3.13).

Figure 3.13: Missing-trace interpolation (80% subsampling) in the case of geological structures with a fault. Left: 80% subsampled data. Middle: after interpolation (SNR = 23 dB). Right: difference.

3.6.2 3D seismic data

Single-layer reflector data

Before proceeding to a more realistically sized data set, we first test the performance of SPG-LR matrix completion and the tensor completion method of Kreimer et al. [2013] on a small, synthetic data set generated from a simple, single-reflector model. We only use a frequency slice at 4 Hz.
We normalize the volume to unit norm and randomly remove 50% of the sources from the data. For the alternating direction method of multipliers (ADMM) tensor method, we complete the data volumes in the midpoint-offset domain, which is the same domain used in Kreimer et al. [2013]. In the context of our framework, we note that the midpoint-offset domain for recovering 3D frequency slices has the same recovery-enhancing properties as for recovering 2D frequency slices, as mentioned previously. Specifically, missing-source sampling tends to increase the rank of the individual source and receiver matricizations in this domain, making completion via rank minimization possible in midpoint-offset where it is not in source-receiver. In the original source-receiver domain, removing $(x_{\text{src}}, x_{\text{rec}})$ points from the tensor does not increase the singular values in the $x_{\text{src}}$ and $x_{\text{rec}}$ matricizations, and hence the reconstruction quality will suffer. On the other hand, for the matrix completion case, the midpoint-offset conversion is a tight frame that acts on the left and right singular vectors of the matricized tensor $F_{(x_{\text{src}}, x_{\text{rec}})}$ and thus does not affect the rank of this particular matricization. Also in this case, we consider the effects of windowing the input data on interpolation quality and speed. We let ADMM-w denote the ADMM method with a window size of w and an additional overlap of approximately 20%. In our experiments, we consider w = 10 (small windows), w = 25 (large windows), and w = 50 (no windowing).

In the ADMM method, the two parameters of note are λ, which controls the relative penalty between the data misfit and the nuclear norm, and β, which controls the speed of convergence of the individual matrices $X_{(i)}$ to the tensor F. The λ, β parameters proposed in Kreimer et al. [2013] do not appear to work for our problems, as using the stated parameters penalizes the nuclear norm terms too much relative to the data residual term, resulting in the solution tensor converging to X = 0. Instead, we estimate the optimal λ, β parameters by cross-validation, which involves removing 20% of the 50% known sources, creating a so-called "test set", and using the remaining data points as input data. We use various combinations of λ, β to solve Problem (3.3), using 50 iterations, and compare the SNR of the interpolant on the test set in order to determine the best parameters, i.e., we estimate the optimal λ, β without reference to the unknown entries of the tensor.
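The train/test split used for this cross-validation is simple to set up. The sketch below shows the masking logic only; `run_solver` and `test_snr` are hypothetical stand-ins for a solve of Problem (3.3) and the test-set SNR evaluation, faked here so the script runs.

```python
import numpy as np

rng = np.random.default_rng(7)
n_src = 50
observed = rng.random(n_src) < 0.5           # the ~50% of sources actually shot

# Hold out 20% of the observed sources as a "test set"; the remaining
# observed sources form the training input handed to the solver.
obs_idx = np.flatnonzero(observed)
test_idx = rng.choice(obs_idx, size=max(1, len(obs_idx) // 5), replace=False)
train = observed.copy()
train[test_idx] = False

def run_solver(train_mask, lam):
    """Hypothetical stand-in for one solve of Problem (3.3) at this lambda."""
    return None

def test_snr(recovery, lam):
    """Hypothetical stand-in: SNR of `recovery` on the held-out sources,
    faked here as a unimodal function of lambda for illustration."""
    return -abs(np.log10(lam) + 2.0)

best_lam, best_score = None, -np.inf
for lam in 10.0 ** np.arange(-4.0, 2.0):     # exponentially spaced trials
    score = test_snr(run_solver(train, lam), lam)
    if score > best_score:
        best_score, best_lam = score, lam
print("selected lambda:", best_lam)          # chosen without the unseen traces
```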
Owing to the large computational cost of the "no window" case, we scan over values of λ increasing exponentially and fix β = 0.5. For the windowed cases, we scan over exponentially increasing values of λ, β for a single window and use the estimated λ, β for interpolating the other windows. For SPG-LR, we set our internal rank parameter to 20 and allow the algorithm to run for 1000 iterations. As shown in Aravkin et al. [2014a], as long as the chosen rank is sufficiently large, further increasing the rank parameter will not significantly change the results. We summarize our results in Table 3.2 and display them in Figure 3.14.

Table 3.2: Single reflector data results. The recovery quality (in dB) and the computational time (in minutes) are reported for each method. The quality suffers significantly as the window size decreases, due to the smaller redundancy of the input data, as discussed previously.

Method     SNR (dB)   Solve time   Parameter selection time   Total time
SPG-LR     25.5       0.9          N/A                        0.9
ADMM-50    20.8       87.4         320                        407.4
ADMM-25    16.8       4.4          16.4                       20.8
ADMM-10    10.9       0.1          0.33                       0.43

Even disregarding the time spent selecting ideal parameters, SPG-LR matrix completion drastically outperforms the ADMM method on this small example. The tensor-based, per-dimension windowing approach also degrades the overall reconstruction quality, as the algorithm is unable to take advantage of the redundancy of the full data volume once the windows are sufficiently small. There is a very prominent tradeoff between recovery speed and reconstruction quality as the size of the windows becomes smaller, owing to the expensive nature of the ADMM approach itself for large data volumes and to the inherent redundancy in the full data volume, which makes interpolation possible and which is decreased by windowing.

Figure 3.14: ADMM data fit and recovery quality (SNR) for single reflector data, common receiver gather: (a) true data, (b) subsampled data, (c) SPG-LR (34.5 dB), (d) ADMM-50 (26.6 dB), (e) ADMM-25 (21.4 dB), (f) ADMM-10 (16.2 dB). Middle row: recovered slices; bottom row: residuals corresponding to each method in the middle row. Tensor-based windowing appears to visibly degrade the results, even with overlap.

BG Compass data

Owing to the smoothness of the data at lower frequencies, we uniformly downsample the individual frequency slices in the receiver coordinates without introducing aliasing. This reduces the overall computational complexity while simultaneously preserving the recovery quality. The 4.68 Hz, 7.34 Hz and 12.3 Hz slices were downsampled to 101 × 101, 101 × 101 and 201 × 201 receiver grids, respectively. For these problems, the data was subsampled along the source coordinates by removing 25%, 50%, and 75% of the shots.

In order to apply matrix completion without windowing to the entire data set, the data was organized as a matrix using the low-rank promoting organization described previously. We used the Jellyfish and SPG-LR implementations to complete the resulting incomplete matrix and compared these methods to the ADMM tensor method and to LMaFit, an alternating least-squares approach to matrix completion detailed in Wen et al. [2012]. LMaFit is a fast matrix completion solver that avoids nuclear norm penalization but must be given an appropriate rank parameter in order to achieve reasonable results. We use the code available from the authors' website. SPG-LR, ADMM, and LMaFit were run on a 2 quad-core 2.6 GHz Intel processor with 16 GB memory and implicit multithreading via LAPACK libraries, while Jellyfish was run on a dual Xeon X650 CPU (6 × 2 cores) with 24 GB of RAM and explicit multithreading. The hardware configurations of these two environments are very similar, which results in SPG-LR and Jellyfish performing comparably.

For the Jellyfish experiments, the model parameter μ, which plays the same role as the λ parameter above, and the optimization parameters (initial step size and step decay) were selected by validation, which required 120 iterations of the optimization procedure for each (frequency, subsampling ratio) pair. The maximum rank value was set to the rank value used in the SPG-LR results.
For the SPG-LR experiments, we interpolated a subsection of the data for various rank values and arrived at 120, 150, and 200 as the best rank parameters for the respective frequencies. We perform the same validation techniques on the rank parameter k of LMaFit. In order to focus solely on comparing computational times, we omit the parameter selection times for the ADMM method.

The results for 75% missing sources in Figure 3.15 demonstrate that, even in this sparsely sampled regime, matrix completion methods can successfully recover the missing shot data at these low frequencies. Table 3.3 gives an extensive summary of our results for different subsampling ratios and frequencies. The comparative results between Jellyfish and SPG-LR agree with the theoretical results that establish the equivalence of the BPDN$_\sigma$ and QP$_\lambda$ formulations. The runtime values include the parameter estimation procedure, which was carried out individually in each case. As we have seen previously, the ADMM approach does not perform well, either in terms of computational time or in terms of recovery.

In our experiments, we noticed that successful parameter combinations work well for other problems too. Hence we can argue that, in a real-world problem, once a parameter combination is selected, it can be used for different instances, or it can serve as an initial point for a local parameter search.

Matrix completion with windowing

When windowing the data, we use the same matricizations of the data as discussed previously, but now split the volume into nonoverlapping windows and use matrix completion on the resulting windows of data individually, as sketched after this subsection. We used Jellyfish for matrix completion on the individual windows and, again, cross-validation to select our parameters. We performed the experiments with two different window sizes. For the large window case, the matricization was partitioned into 4 segments along rows and columns, totalling 16 windows. For the small window case, the matricization was split into 16 segments along rows and columns, yielding 256 windows. This windowing is distinctly different from the windowing explored for the single-layer model, since here we are windowing the matricized form of the tensor, in the $(x_{\text{src}}, x_{\text{rec}})$ unfolding, as opposed to the per-dimension windowing in the previous section. The windows created in this way contain much more sampled data than in the tensor-windowing case, yet are still small enough to be processed efficiently.

The results in Figure 3.16 suggest that, for this particular form of windowing, the matrix completion results are particularly degraded by only using small windows of data at a time. As mentioned previously, since we are relying on a high redundancy (with respect to the SVD) in the underlying and sampled data to perform matrix completion, we reduce the overall redundancy of the input data by partitioning it. On the other hand, the real-world benefits of windowing in this context become apparent when the data cannot fit into memory, at the cost of reconstruction quality. In this case, windowing allows us to partition the problem, offsetting the I/O cost that would result from memory paging. Based on these results, whenever possible, we strive to include as much data as possible in a given problem in order to recover the original matrix/tensor adequately.
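The windowing scheme itself amounts to tiling the matricized slice and calling a completion solver per tile. Below is a schematic sketch of our own, with `complete_block` as a placeholder for any per-window solver such as Jellyfish; the identity stand-in simply returns the observed entries so the script runs.

```python
import numpy as np

def complete_windows(B, mask, n_blocks, complete_block):
    """Tile a matricized slice into n_blocks x n_blocks windows and run a
    matrix-completion solver on each window independently (the 16- and
    256-window partitions in the text correspond to n_blocks = 4 and 16)."""
    m, n = B.shape
    out = np.zeros_like(B)
    rs = np.linspace(0, m, n_blocks + 1, dtype=int)
    cs = np.linspace(0, n, n_blocks + 1, dtype=int)
    for i in range(n_blocks):
        for j in range(n_blocks):
            win = (slice(rs[i], rs[i + 1]), slice(cs[j], cs[j + 1]))
            out[win] = complete_block(B[win], mask[win])
    return out

# Identity stand-in "solver": swap in Jellyfish or SPG-LR per window in a
# real experiment.
identity_solver = lambda Bw, Mw: Bw

rng = np.random.default_rng(8)
M = (rng.random((400, 400)) > 0.75).astype(float)
B = rng.standard_normal((400, 400)) * M
_ = complete_windows(B, M, 4, identity_solver)
```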
Figure 3.15: BG 5D seismic data, 12.3 Hz, 75% missing sources: (a) true data, (b) SPG-LR (8.3 dB), (d) Jellyfish (9.2 dB), (e) ADMM (-1.48 dB), (f) LMaFit (6.3 dB). Middle row: interpolation results; bottom row: residuals.

3.7 Discussion

As the above results demonstrate, the L, R matrix completion approach significantly outperforms the ADMM tensor-based approach, owing to its avoidance of SVD computations and the minimal duplication of variables compared to the latter method. For the simple synthetic data, the ADMM method is able to achieve a recovery SNR similar to matrix completion, albeit at a much larger computational cost. For realistically sized data sets, the difference between the two methods can mean the difference between hours and days to produce an adequate result. As for the difference between SPG-LR and Jellyfish matrix completion, both return results of similar quality, which agrees with the fact that both are based on L, R factorizations, and the ranks used in these experiments are identical.

Figure 3.16: BG 5D seismic data, 4.68 Hz. Comparison of interpolation results with and without windowing using Jellyfish for 75% missing sources: no windowing (SNR 16.7 dB), large windows (14.5 dB), small windows (8.5 dB). Top row: interpolation results for differing window sizes; bottom row: residuals.

Table 3.3: 3D seismic data results. The recovery quality (in dB) and the computational time (in minutes) are reported for each method.

Frequency   Missing    SPG-LR          Jellyfish       ADMM           LMaFit
            sources    SNR     Time    SNR     Time    SNR    Time    SNR    Time
4.68 Hz     75%        15.9    84      16.34   36      0.86   1510    14.7   204
            50%        20.75   96      19.81   82      3.95   1510    17.5   91
            25%        21.47   114     19.64   124     9.17   1510    18.9   66
7.34 Hz     75%        11.2    84      11.99   52      0.39   1512    10.7   183
            50%        15.2    126     15.05   146     1.71   1512    14.1   37
            25%        16.3    138     15.31   195     4.66   1512    14.3   21
12.3 Hz     75%        7.3     324     9.34    223     0.06   2840    8.1    814
            50%        12.6    438     12.12   706     0.21   2840    11.1   72
            25%        14.02   450     12.90   1295    0.42   2840    11.3   58

Compared to these two methods, LMaFit converges much faster in regimes where more data is available, while producing a lower quality result. When there is very little data available, as is typical in realistic seismic acquisition scenarios, the algorithm has issues converging. We note that, since it is a sparse linear algebra method, Jellyfish tends to outperform SPG-LR when the number of missing traces is high. This sparse linear algebra approach can conceivably be employed with the SPG-LR machinery.
In these examples, we have not made any attempt to explicitly parallelize the SPG-LR or ADMM methods, instead relying on the efficient dense linear algebra routines used in Matlab, whereas Jellyfish is an inherently parallelized method. Without automatic mechanisms for parameter selection as in SPG-LR, the Jellyfish, ADMM, and LMaFit algorithms rely on cross-validation techniques that involve solving many problem instances at different parameters. The inherently parallel nature of Jellyfish allows it to solve each problem instance very quickly, and it thus achieves performance very similar to SPG-LR. LMaFit converges very quickly when there is sufficient data, but slows down significantly in scenarios with very little data. The ADMM method, on the other hand, scales much more poorly for large data volumes and spends much more time on parameter selection than the other methods. In practice, however, we can assume that optimally chosen parameters for one frequency slice will likely work well for neighbouring frequency slices, and thus the parameter selection time can be amortized over the whole volume.

In our experiments, aside from the simple 3D layer model and the Nelson dataset, the geological models used were not low rank; that is, the models had complex geology and were not simply horizontally layered media. Through the use of these low-rank techniques, we are exploiting the low-rank structure of the data volumes arising from the data acquisition process, not merely any low-rank structure present in the models themselves. As the temporal frequency increases, the inherent rank of the resulting frequency slices increases, which makes low-rank interpolation more challenging. Despite this observation, we still achieve reasonable results for higher frequencies using our methods.

As predicted by our theoretical considerations, windowing has a negative effect on the generated results in the situation where the earth model permits a low-rank representation that is reflected in the midpoint-offset domain. For earth models that are not inherently low rank, such as those with salt bodies, we can still recover the low-frequency slices, as shown by the examples, without performing windowing on the data sets. As a general rule of thumb, we advise incorporating as much of the input data as possible into a given matrix-completion problem, but clearly there is a tradeoff between the size of the data windows, the amount of memory available to process such volumes, and the inherent complexity of the model. Additionally, one should avoid methods that needlessly create extraneous copies of the data when working with large-scale volumes.

Here we have also demonstrated the importance of the theoretical components of signal recovery using matrix and tensor completion methods. By ignoring these principles of matrix completion, a practitioner can unintentionally find herself in a disadvantageous scenario and produce suboptimal results, without a guiding theory to remedy the situation. However, by choosing an appropriate transform domain in which to complete the matrix or tensor, we can successfully employ this rank-minimizing machinery to interpolate a signal with missing traces in a computationally efficient manner.

From a practitioner's point of view, the purpose of this interpolation machinery is to remove the acquisition footprint from missing-trace data that is used in further downstream seismic processes such as migration and full waveform inversion.
These techniques can help mitigate the lack of data coverage in certain areas that would otherwise have created artifacts or non-physical regions in a seismic image.

3.8 Conclusion

Building upon existing knowledge of compressive sensing as a successful signal recovery paradigm, this work has outlined the necessary components of using matrix and tensor completion methods for interpolating large-scale seismic data volumes. As we have demonstrated numerically, without the necessary components of a low-rank domain, a rank-increasing sampling scheme, and a rank-minimizing optimization scheme, matrix completion-based techniques cannot successfully recover subsampled seismic signals. Once all of these ingredients are in place, however, we can use existing convex solvers to recover the fully sampled data volume. Since such solvers invariably involve computing singular value decompositions of large matrices, we have presented two alternative factorization-based formulations that scale much more efficiently than their strictly convex counterparts when the data volumes are large. We have shown that our factorization-based matrix completion approach is very competitive compared to existing curvelet-based methods for 2D seismic data and to alternating direction method of multipliers tensor-based methods for 3D seismic data.

From a practical point of view, this theoretical framework is exceedingly flexible. We have shown the effectiveness of the midpoint-offset organization for 2D data and of the $(x_{\text{src}}, x_{\text{rec}})$ matrix organization for 3D data for promoting low-rank structure in the data volumes, but it is conceivable that other seismic data organizations could also be useful in this regard, e.g., midpoint-offset-azimuth. Our optimization framework also allows us to operate on large-scale data without having to select a large number of parameters and without having to resort to small windows of data, which may degrade the recovery results. In the seismic context, reusing the interpolated results from lower frequencies as a warm start for interpolating data at higher frequencies can further reduce the overall computational costs. The proposed approach to matrix completion and the Jellyfish method are very promising for large-scale data sets and can conceivably be applied to interpolate wide-azimuth data sets as well.

Chapter 4

Source separation for simultaneous towed-streamer marine acquisition—a compressed sensing approach

(A version of this chapter has been published in Geophysics, 2016, vol. 80, pages WD73-WD88.)

4.1 Summary

Apart from performing seismic data interpolation, as shown in previous chapters, rank-minimization based techniques have shown great potential in dealing with seismic data acquired in a simultaneous fashion. In a marine environment, simultaneous acquisition is an economic way to sample seismic data and speed up acquisition, wherein single and/or multiple source vessels fire sources at near-simultaneous or slightly random times, resulting in overlapping shot records. The current paradigm for simultaneous towed-streamer marine acquisition incorporates "low variability" in source firing times, i.e., 0-1 or 2 seconds, since both the sources and receivers are moving. This results in a low degree of randomness in simultaneous data, which is challenging to separate (into its constituent sources) using compressed sensing based separation techniques, since randomization is the key to successful recovery via compressed sensing.
In this chapter, we address the challenge of source separation for simultaneous towed-streamer acquisitions via two compressed sensing based approaches: sparsity promotion and rank minimization. We illustrate the performance of both the sparsity-promoting and the rank-minimization based techniques by simulating two simultaneous towed-streamer acquisition scenarios: over/under and simultaneous long offset. A field data example from the Gulf of Suez for the over/under acquisition scenario is also included. We observe that the proposed approaches give good and comparable recovery qualities of the separated sources, but the rank-minimization technique outperforms the sparsity-promoting technique in terms of computational time and memory. We also compare these two techniques with the NMO-based median filtering type approach.

4.2 Introduction

The benefits of simultaneous source marine acquisition are manifold: it allows the acquisition of improved-quality seismic data at standard (conventional) acquisition turnaround, or a reduced turnaround time while maintaining similar quality, or a combination of both advantages. In simultaneous marine acquisition, a single source vessel or multiple source vessels fire sources at near-simultaneous or slightly random times, resulting in overlapping shot records [de Kok and Gillespie, 2002, Beasley, 2008, Berkhout, 2008b, Hampson et al., 2008, Moldoveanu and Quigley, 2011, Abma et al., 2013], as opposed to the non-overlapping shot records of conventional marine acquisition. A variety of simultaneous source survey designs have been proposed for towed-streamer and ocean-bottom acquisitions, where small-to-large random time delays between multiple sources have been used [Beasley, 2008, Moldoveanu and Fealy, 2010, Mansour et al., 2012c, Abma et al., 2013, Wason and Herrmann, 2013b, Mosher et al., 2014].

An instance of low variability in source firing times, e.g., 0-1 (or 2) seconds, is the over/under (or multi-level) source acquisition [Hill et al., 2006, Moldoveanu et al., 2007, Lansley et al., 2007, Long, 2009, Hegna and Parkes, 2012, Torben Hoy, 2013]. The benefits of acquiring and processing over/under data are clear: the recorded bandwidth is extended at both the low and high ends of the spectrum, since the depths of the sources produce complementary ghost functions, avoiding deep notches in the spectrum. The over/under acquisition allows separation of the up- and down-going wavefields at the source (or receiver) using a vertical pair of sources (or receivers) to determine wave direction. Simultaneous long offset (SLO) acquisition is another variation of simultaneous towed-streamer acquisition, where an extra source vessel is deployed, sailing one spread-length ahead of the main seismic vessel [Long et al., 2013]. The SLO technique improves upon conventional acquisition since it provides longer coverage in offsets, less equipment downtime (doubling the vessel count inherently reduces the streamer length by half), easier maneuvering, and shorter line turns.

Simultaneous acquisition (e.g., over/under and SLO) results in seismic interferences, or source crosstalk, that degrade the quality of the migrated images. Therefore, an effective (simultaneous) source separation technique is required, which aims to recover unblended, interference-free data, as acquired during conventional acquisition, from simultaneous data.
The challenge of source separation (or deblending) has been addressed by many researchers [Stefani et al., 2007, Moore et al., 2008, Akerberg et al., 2008, Huo et al., 2009], wherein the key observation has been that as long as the sources are fired at suitably randomly dithered times, the resulting interferences (or source crosstalk) will appear noise-like in specific gather domains such as common-offset and common-receiver, turning the separation problem into a (random) noise removal procedure. Inversion-type algorithms [Moore, 2010, Abma et al., 2010, Mahdad et al., 2011, Doulgeris et al., 2012, Baardman and van Borselen, 2013] take advantage of sparse representations of coherent seismic signals. Wason and Herrmann [2013a]; Wason and Herrmann [2013b] proposed an alternate sampling strategy for simultaneous acquisition (time-jittered marine) that leverages ideas from compressed sensing (CS), addressing the deblending problem through a combination of tailored (blended) acquisition design and sparsity-promoting recovery via convex optimization using one-norm constraints. This represents a scenario of high variability in source firing times—e.g., > 1 second—resulting in irregular shot locations.

One source separation technique is normal-moveout-based median filtering, where the key idea is as follows: i) transform the blended data into the midpoint-offset domain; ii) perform semblance analysis on common-midpoint gathers to pick the normal moveout (NMO) velocities, followed by NMO corrections; iii) perform median filtering along the offset directions and then apply inverse NMO corrections. One of the major assumptions in the described workflow is that the seismic events become flat after NMO corrections; however, this can be challenging when the geology is complex and/or noise is present in the data. Therefore, the above process, along with the velocity analysis, is repeated a couple of times to obtain a velocity model good enough to eventually separate the simultaneous data.

Recently, rank-minimization based techniques have been used for source separation by Maraschini et al. [2012] and Cheng and Sacchi [2013]. The general idea is to exploit the low-rank structure of seismic data when it is organized in a matrix. Low-rank structure refers to a small number of nonzero singular values, or quickly decaying singular values. Maraschini et al. [2012] followed the rank-minimization based approach proposed by Oropeza and Sacchi [2011], who identified that seismic temporal frequency slices organized into a block Hankel matrix form, in ideal conditions, a matrix of rank k, where k is the number of different plane waves in the window of analysis. Oropeza and Sacchi [2011] showed that additive random noise increases the rank of the block Hankel matrix and presented an iterative algorithm that resembles seismic data reconstruction with the method of projection onto convex sets, where they use a low-rank approximation of the Hankel matrix via the randomized singular value decomposition [Liberty et al., 2007, Halko et al., 2011a] to interpolate seismic temporal frequency slices. While this technique may be effective, the approach requires embedding the data into an even larger space, where each dimension of size n is mapped to a matrix of size n×n. Consequently, these approaches are applied on small data windows, where one has to choose the size of these windows. Although mathematically desirable, due to the seismic signal being stationary in sufficiently small windows, Kumar et al.
[2015a] showed that the act of windowing, from a matrix-rank point of view, degrades the quality of reconstruction in the case of missing-trace interpolation. Choosing window sizes a priori is also a difficult task, as it is not altogether obvious how to ensure that the resulting sub-volume is approximately a plane wave.

4.2.1 Motivation

The success of CS hinges on randomization of the acquisition, as presented in our previous work on simultaneous source acquisition [Mansour et al., 2012c, Wason and Herrmann, 2013b], which represents a case of high variability in source firing times—e.g., within a range of 1-20 seconds—resulting in overlapping shot records that lie on irregular spatial grids. Consequently, this made our method applicable to marine acquisition with ocean-bottom cables/nodes. Successful separation of simultaneous data by sparse inversion via one-norm minimization, in this high-variability scenario, motivated us to analyze the performance of our separation algorithm for the low-variability, simultaneous towed-streamer acquisitions. In this chapter, we address the challenge of source separation for two types of simultaneous towed-streamer marine acquisition—over/under and simultaneous long offset. We also compare the sparsity-promoting separation technique with separation via a rank-minimization based technique, since the latter is computationally faster and more memory efficient, as shown by Kumar et al. [2015a] for missing-trace interpolation.

4.2.2 Contributions

Our contributions in this work are the following. First, we propose a practical framework for source separation based upon compressed sensing (CS) theory, where we outline the necessary conditions for separating the simultaneous towed-streamer data using sparsity-promoting and rank-minimization techniques. Second, we show that source separation using the rank-minimization based framework includes a "transform domain" in which we exploit the low-rank structure of seismic data. We further establish that in simultaneous towed-streamer acquisition each monochromatic frequency slice of the fully sampled blended data matrix with periodic firing times has low-rank structure in the proposed transform domain. However, uniformly random firing-time delays increase the rank of the resulting frequency slice in this transform domain, which is a necessary condition for successful recovery via rank-minimization based techniques.

Third, we show that seismic frequency slices in the proposed transform domain exhibit low-rank structure at low frequencies, but not at high frequencies. Therefore, in order to exploit the low-rank structure at higher frequencies, we adopt the Hierarchical Semi-Separable matrix representation (HSS) method proposed by Chandrasekaran et al. [2006] to represent frequency slices. Finally, we combine the (SVD-free) matrix factorization approach recently developed by Lee et al. [2010a] with the Pareto curve approach proposed by Berg and Friedlander [2008]. This renders the framework suitable for large-scale seismic data, since it avoids the computation of the singular value decomposition (SVD), a necessary step in traditional rank-minimization based methods, which is prohibitively expensive for large matrices.

We simulate two simultaneous towed-streamer acquisitions—over/under and simultaneous long offset—and also use a field data example for over/under acquisition. We compare the recovery in terms of separation quality, computational time and memory usage.
In addition, we also make comparisons with the NMO-based median filtering type technique proposed by Chen et al. [2014].

4.3 Theory

Compressed sensing is a signal processing technique that allows a signal to be sampled at a sub-Nyquist rate and offers three fundamental principles for successful reconstruction of the original signal from relatively few measurements. The first principle utilizes the prior knowledge that the underlying signal of interest is sparse or compressible in some transform domain—i.e., only a small number k of the transform coefficients are nonzero, or the signal can be well approximated by the k largest-in-magnitude transform coefficients. The second principle is based upon a sampling scheme that breaks the underlying structure—i.e., decreases the sparsity of the original signal in the transform domain. Once the above two principles hold, the third component, a sparsity-promoting optimization problem, can be solved in order to recover the fully sampled signal. It is well known that seismic data admit sparse representations by curvelets that capture "wavefront sets" efficiently (see e.g., Smith [1998], Candès and Demanet [2005], Hennenfent and Herrmann [2006a] and the references therein).

For high-resolution data represented by the N-dimensional vector $f_0 \in \mathbb{R}^N$, which admits a sparse representation $x_0 \in \mathbb{C}^P$ in some transform domain characterized by the operator $S \in \mathbb{C}^{P \times N}$ with $P \geq N$, the sparse recovery problem involves solving an underdetermined system of equations:

$$b = Ax_0, \tag{4.1}$$

where $b \in \mathbb{C}^n$, $n \ll N \leq P$, represents the compressively sampled data of $n$ measurements, and $A \in \mathbb{C}^{n \times P}$ represents the measurement matrix. We denote by $x_0$ a sparse synthesis coefficient vector of $f_0$. When $x_0$ is strictly sparse—i.e., only $k < n$ nonzero entries in $x_0$—sparsity-promoting recovery can be achieved by solving the $\ell_0$ minimization problem, which is a combinatorial problem and quickly becomes intractable as the dimension increases. Instead, the basis pursuit denoise (BPDN) convex optimization problem

$$\min_{x \in \mathbb{C}^P} \|x\|_1 \quad \text{subject to} \quad \|b - Ax\|_2 \leq \epsilon \tag{BPDN}$$

can be used to recover $\tilde{x}$, an estimate of $x_0$. Here, $\epsilon$ represents the error bound on the least-squares misfit and the $\ell_1$ norm $\|x\|_1$ is the sum of the absolute values of the elements of the vector $x$. The matrix $A$ can be composed of the product of an $n \times N$ sampling (or acquisition) matrix $M$ and the sparsifying operator $S$ such that $A := MS^H$; here $H$ denotes the Hermitian transpose. Consequently, the measurements are given by $b = Ax_0 = Mf_0$. A seismic line with $N_s$ sources, $N_r$ receivers, and $N_t$ time samples can be reshaped into an $N$-dimensional vector $f$, where $N = N_s \times N_r \times N_t$. For simultaneous towed-streamer acquisition, given two unblended data vectors $x_1$ and $x_2$ and (blended) measurements $b$, we can redefine Equation 4.1 as

$$\overbrace{\begin{bmatrix} MT_1S^H & MT_2S^H \end{bmatrix}}^{A} \overbrace{\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}^{x} = b, \tag{4.2}$$

where $T_1$ and $T_2$ are the firing-time delay operators, which apply uniformly random time delays to the first and second source, respectively. Note that accurate knowledge of the firing times is essential for successful recovery by the proposed source separation techniques. We wish to recover a sparse approximation $\tilde{f}$ of the discretized wavefield $f$ (corresponding to each source) from the measurements $b$. This is done by solving the BPDN sparsity-promoting program, using the SPG$\ell_1$ solver [see Berg and Friedlander, 2008, Hennenfent et al., 2008, for details], yielding $\tilde{f} = S^H\tilde{x}$ for each source.
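To make the roles of the operators in Equation 4.2 concrete, the sketch below implements a minimal frequency-domain blending operator in NumPy: the firing-time delay operators T1 and T2 become per-shot phase shifts exp(-2*pi*i*f*t), and the blended measurement is the sum of the two delayed records. All array sizes, the delay range, and the function names are illustrative assumptions rather than the implementation behind the reported experiments.

import numpy as np

def blend(d1, d2, t1, t2, dt):
    # Blend two conventional data volumes of shape (nt, nrec, nshot)
    # by delaying every shot record with its firing time (in seconds)
    # and summing the two sources, i.e., the action of M [T1 T2].
    nt = d1.shape[0]
    f = np.fft.rfftfreq(nt, d=dt)                 # temporal frequencies (Hz)
    D1 = np.fft.rfft(d1, axis=0)                  # to temporal-frequency domain
    D2 = np.fft.rfft(d2, axis=0)
    # a time delay is a linear phase; this shift is circular, so a real
    # implementation would zero-pad the records before transforming
    P1 = np.exp(-2j * np.pi * np.outer(f, t1))[:, None, :]
    P2 = np.exp(-2j * np.pi * np.outer(f, t2))[:, None, :]
    return np.fft.irfft(D1 * P1 + D2 * P2, n=nt, axis=0)

# toy example: 16 shots with uniformly random firing delays in [0, 1) s
rng = np.random.default_rng(0)
nt, nrec, nshot, dt = 512, 32, 16, 0.004
d1 = rng.standard_normal((nt, nrec, nshot))
d2 = rng.standard_normal((nt, nrec, nshot))
b = blend(d1, d2, rng.uniform(0, 1, nshot), rng.uniform(0, 1, nshot), dt)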
Sparsity is not the only structure seismic data exhibit when three- or five-dimensional seismic data are organized as a vector. High-dimensional seismic data volumes can also be represented as matrices or tensors, where the low-rank structure of seismic data can be exploited [Trickett and Burroughs, 2009, Oropeza and Sacchi, 2011, Kreimer and Sacchi, 2012a, Silva and Herrmann, 2013b, Aravkin et al., 2014b]. This low-rank property of seismic data leads to the notion of matrix completion theory, which offers a reconstruction strategy for an unknown matrix X from known subsets of its entries [Candès and Recht, 2009, Recht et al., 2010a]. The success of the matrix completion framework hinges on the fact that the regularly sampled target dataset should exhibit low-rank structure in a rank-revealing "transform domain", while subsampling should destroy that low-rank structure in the transform domain.

4.3.1 Rank-revealing "transform domain"

Following the same analogy as CS, the main challenge in applying matrix completion techniques to the source separation problem is to find a "transform domain" wherein: i) fully sampled conventional (or unblended) seismic data have low-rank structure—i.e., quickly decaying singular values; and ii) blended seismic data have high-rank structure—i.e., slowly decaying singular values. When these properties hold, rank-minimization techniques (used in matrix completion) can be used to recover the deblended signal. Kumar et al. [2013a] showed that the frequency slices of unblended seismic data do not exhibit low-rank structure in the source-receiver (s-r) domain, since strong wavefronts extend diagonally across the s-r plane. However, transforming the data into the midpoint-offset (m-h) domain results in a vertical alignment of the wavefronts, thereby reducing the rank of the frequency slice matrix. The midpoint-offset domain is a coordinate transformation defined as:

$$x_{\text{midpoint}} = \frac{1}{2}(x_{\text{source}} + x_{\text{receiver}}), \qquad x_{\text{offset}} = \frac{1}{2}(x_{\text{source}} - x_{\text{receiver}}).$$

These observations motivate us to exploit the low-rank structure of seismic data in the midpoint-offset domain for simultaneous towed-streamer acquisition. Figures 4.1a and 4.1c show a monochromatic frequency slice (at 5 Hz) for simultaneous acquisition with periodic firing times in the source-receiver (s-r) and midpoint-offset (m-h) domains, while Figures 4.1b and 4.1d show the same for simultaneous acquisition with random firing-time delays. Note that we use source-receiver reciprocity to convert each monochromatic frequency slice of the towed-streamer acquisition to a split-spread type acquisition, which is required by our current implementation of rank-minimization based techniques for 2-D seismic acquisition. For 3-D seismic data acquisition, however, where seismic data exhibit 5-D structure, we can follow the strategy proposed by Kumar et al. [2015a], where a different matricization is used as the transform domain to exploit the low-rank structure of seismic data. Therefore, in 3-D seismic data acquisition we do not have to work in the midpoint-offset domain, which removes the requirement of source-receiver reciprocity.

As illustrated in Figure 4.1, simultaneously acquired data with periodic firing times preserve continuity of the waveforms in the s-r and m-h domains, which inherently does not change the rank of blended data compared to unblended data. Introducing random time delays destroys continuity of the waveforms in the s-r and m-h domains, thus increasing the rank of the blended data matrix drastically, which is a necessary condition for rank-minimization based algorithms to work effectively.
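As a concrete illustration of this coordinate transformation, the sketch below maps a monochromatic frequency slice from the source-receiver grid to the midpoint-offset grid by index arithmetic. Since the factor 1/2 only rescales the axes, the integer indices m = s + r and h = s - r realize the same reorganization; the helper name and the choice to leave unmapped entries at zero are our own assumptions.

import numpy as np

def sr_to_mh(slice_sr):
    # Map a monochromatic frequency slice sampled on a common n-point
    # (source, receiver) grid into the midpoint-offset (m-h) domain.
    # Up to an axis scaling, midpoint = s + r and offset = s - r.
    n = slice_sr.shape[0]
    mh = np.zeros((2 * n - 1, 2 * n - 1), dtype=slice_sr.dtype)
    s, r = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    mh[s + r, s - r + n - 1] = slice_sr   # entries off the grid stay zero
    return mh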
To illustrate this behaviour, we plot in Figure 4.2a the decay of the singular values of a 5 Hz monochromatic frequency slice extracted from the periodic and randomized simultaneous acquisitions in the s-r and m-h domains, respectively. Note that uniformly random firing-time delays do not noticeably change the decay of the singular values in the source-receiver (s-r) domain, as expected, but significantly slow down the decay rate in the m-h domain. Similar trends are observed for a monochromatic frequency slice at 40 Hz in Figure 4.2b. Following the same analogy, Figures 4.2c and 4.2d show how randomization in acquisition destroys the sparse structure of seismic data in the source-channel (or source-offset) domain—i.e., slow decay of the curvelet coefficients—hence favouring recovery via sparsity-promotion in this domain. Similarly, for simultaneous long offset acquisition, we exploit the low-rank structure of seismic data in the m-h domain, and the sparse structure in the source-channel domain.

Seismic frequency slices exhibit low-rank structure in the m-h domain at low frequencies, but the same is not true for data at high frequencies. This is because in the low-frequency slices the vertical alignment of the wavefronts can be accurately approximated by a low-rank representation. On the other hand, high-frequency slices include a variety of wave oscillations that increase the rank, even though the energy remains focused around the diagonal [Kumar et al., 2013a].

Figure 4.1: Monochromatic frequency slice at 5 Hz in the source-receiver (s-r) and midpoint-offset (m-h) domain for blended data (a,c) with periodic firing times and (b,d) with uniformly random firing times for both sources.

Figure 4.2: Decay of singular values for a frequency slice at (a) 5 Hz and (b) 40 Hz of blended data. Source-receiver domain: blue—periodic, red—random delays. Midpoint-offset domain: green—periodic, cyan—random delays. Corresponding decay of the normalized curvelet coefficients for a frequency slice at (c) 5 Hz and (d) 40 Hz of blended data, in the source-channel domain.

To illustrate this phenomenon, we plot a monochromatic frequency slice at 40 Hz in the s-r domain and the m-h domain for over/under acquisition in Figure 4.3. When analyzing the decay of the singular values for high-frequency slices in the s-r domain and the m-h domain (Figure 4.2b), we observe that the singular value decay is slower for the high-frequency slice than for the low-frequency slice. Therefore, rank-minimization in the high-frequency range requires extended formulations that incorporate the low-rank structure.

To exploit the low-rank structure of high-frequency data, we rely on the Hierarchical Semi-Separable matrix representation (HSS) method proposed by Chandrasekaran et al. [2006] to represent frequency slices. The key idea in the HSS representation is that certain full-rank matrices, e.g., matrices that are diagonally dominant with energy decaying along the off-diagonals, can be represented by a collection of low-rank sub-matrices. Kumar et al. [2013a] showed the possibility of finding accurate low-rank approximations of sub-matrices of the high-frequency slices by partitioning the data into the HSS structure for missing-trace interpolation. Jumah and Herrmann [2014] showed that HSS representations can be used to reduce the storage and computational cost of the estimation of primaries by sparse inversion. They combined the HSS representation with the randomized SVD proposed by Halko et al. [2011c] to accelerate the matrix-vector multiplications that are required for sparse inversion.
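The rank-revealing diagnostic behind Figure 4.2 reduces to comparing normalized singular-value decay curves of the same slice in the two domains. A minimal version, reusing the hypothetical sr_to_mh helper sketched earlier, could read:

import numpy as np

def sv_decay(A):
    # Normalized singular values of a (possibly rectangular) matrix.
    s = np.linalg.svd(A, compute_uv=False)
    return s / s[0]

def effective_rank(A, tol=0.01):
    # Proxy for rank: count of singular values above tol times the largest.
    return int((sv_decay(A) > tol).sum())

def compare_domains(slice_periodic, slice_random):
    # Random firing-time delays should barely change the s-r decay but
    # markedly slow the m-h decay, as in Figures 4.2a and 4.2b.
    for name, sl in (("periodic", slice_periodic), ("random", slice_random)):
        print(name, "s-r:", effective_rank(sl),
                    "m-h:", effective_rank(sr_to_mh(sl)))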
4.3.2 Hierarchical semi-separable matrix representation (HSS)

The HSS structure first partitions a matrix into diagonal and off-diagonal sub-matrices. The same partitioning structure is then applied recursively to the diagonal sub-matrices only. To illustrate the HSS partitioning, we consider a 2-D monochromatic high-frequency data matrix at 40 Hz in the s-r domain. We show the first level of partitioning in Figure 4.4a and the second level in Figure 4.4b, in their corresponding source-receiver domains. Figures 4.5a and 4.5b display the first-level off-diagonal sub-blocks, Figure 4.5c is the diagonal sub-block, and the corresponding decay of the singular values is displayed in Figure 4.6. We can clearly see that the off-diagonal sub-matrices have low-rank structure, while the diagonal sub-matrices have higher rank. Further partitioning of the diagonal sub-blocks (Figure 4.4b) allows us to find better low-rank approximations. The same argument holds for the simultaneous long offset acquisition. Therefore, for low-variability acquisition scenarios, each frequency slice is first partitioned using HSS and then deblended in its respective m-h domain, as shown for missing-trace interpolation by Kumar et al. [2013a].

Figure 4.3: Monochromatic frequency slice at 40 Hz in the s-r and m-h domain for blended data (a,c) with periodic firing times and (b,d) with uniformly random firing times for both sources.

Figure 4.4: HSS partitioning of a high-frequency slice at 40 Hz in the s-r domain: (a) first level, (b) second level, for randomized blended acquisition.

Figure 4.5: (a,b,c) First-level sub-block matrices (from Figure 4.4a).

Figure 4.6: Decay of singular values of the HSS sub-blocks in the s-r domain: red—Figure 4.5a, black—Figure 4.5b, blue—Figure 4.5c.
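To make the HSS recursion concrete, the sketch below splits a matrix into its diagonal and off-diagonal blocks and recurses on the diagonal blocks only. It is a structural illustration under our own naming; practical HSS implementations store low-rank factorizations of the off-diagonal blocks rather than the blocks themselves.

import numpy as np

def hss_partition(A, levels):
    # Collect the off-diagonal blocks (kept as-is at each level) and the
    # final diagonal blocks after `levels` rounds of splitting; only the
    # diagonal blocks are partitioned further, as in Figure 4.4.
    off_diag, diag = [], [A]
    for _ in range(levels):
        next_diag = []
        for D in diag:
            i, j = D.shape[0] // 2, D.shape[1] // 2
            off_diag += [D[:i, j:], D[i:, :j]]    # low-rank candidates
            next_diag += [D[:i, :j], D[i:, j:]]   # split again next level
        diag = next_diag
    return off_diag, diag

On a 40 Hz frequency slice, the off-diagonal blocks returned by such a partitioning are the ones whose singular values decay quickly (Figure 4.6), which is what makes a per-block factorized recovery effective.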
One of the limitations of matrix completion type approaches for large-scale seismic data is the nuclear-norm projection, which inherently involves the computation of singular value decompositions (SVD). Aravkin et al. [2014b] showed that the computation of the SVD is prohibitively expensive for large-scale data such as seismic data; therefore, we propose a matrix-factorization based approach to avoid this expensive computation [see Aravkin et al., 2014b, for details]. In the next section, we introduce the matrix completion framework and explore its necessary extension to separate large-scale simultaneous seismic data.

4.3.3 Large-scale seismic data: SPG-LR framework

Let $X_0$ be a low-rank matrix in $\mathbb{C}^{n \times m}$ and $\mathcal{A}$ be a linear measurement operator that maps from $\mathbb{C}^{n \times m} \to \mathbb{C}^p$ with $p \ll n \times m$. Under the assumption that the blending process increases the rank of the matrix $X_0$, the source separation problem is to find the matrix of lowest possible rank that agrees with the observations. The rank-minimization problem involves solving the following problem for $\mathcal{A}$, up to a given tolerance $\epsilon$:

$$\min_{X} \operatorname{rank}(X) \quad \text{subject to} \quad \|\mathcal{A}(X) - b\|_2 \leq \epsilon,$$

where the rank is defined as the maximum number of linearly independent rows or columns of a matrix, and $b$ is a set of blended measurements. For simultaneous towed-streamer acquisition, we follow Equation 4.2 and redefine our system of equations as

$$\overbrace{\begin{bmatrix} MT_1S^H & MT_2S^H \end{bmatrix}}^{\mathcal{A}} \overbrace{\begin{bmatrix} X_1 \\ X_2 \end{bmatrix}}^{X} = b,$$

where $S$ is the transformation operator from the s-r domain to the m-h domain. Recht et al. [2010b] showed that, under certain general conditions on the operator $\mathcal{A}$, the solution to the rank-minimization problem can be found by solving the following nuclear-norm minimization problem:

$$\min_{X} \|X\|_* \quad \text{subject to} \quad \|\mathcal{A}(X) - b\|_2 \leq \epsilon, \tag{BPDN}$$

where $\|X\|_* = \|\sigma\|_1$ and $\sigma$ is the vector of singular values. Unfortunately, for large-scale data, solving the BPDN problem is difficult, since it requires repeated projections onto the set $\|X\|_* \leq \tau$, which means repeated SVD or partial SVD computations. Therefore, in this paper, we avoid computing singular value decompositions (SVD) of the matrices and use an extension of the SPG$\ell_1$ solver [Berg and Friedlander, 2008] developed for the BPDN problem in Aravkin et al. [2013b]. We refer to this extension as SPG-LR in the rest of the paper. The SPG-LR algorithm finds the solution to the BPDN problem by solving a sequence of LASSO (least absolute shrinkage and selection operator) subproblems:

$$\min_{X} \|\mathcal{A}(X) - b\|_2 \quad \text{subject to} \quad \|X\|_* \leq \tau, \tag{LASSO$_\tau$}$$

where $\tau$ is updated by traversing the Pareto curve. The Pareto curve defines the optimal trade-off between the two-norm of the residual and the one-norm of the solution [Berg and Friedlander, 2008]. Solving each LASSO subproblem requires a projection onto the nuclear-norm ball $\|X\|_* \leq \tau$ in every iteration, by performing a singular value decomposition and then thresholding the singular values. For large-scale seismic problems, it becomes prohibitively expensive to carry out such a large number of SVDs. Instead, we adopt a recent factorization-based approach to nuclear-norm minimization [Rennie and Srebro, 2005a, Lee et al., 2010a, Recht and Ré, 2011]. The factorization approach parametrizes the matrices $(X_1, X_2) \in \mathbb{C}^{n \times m}$ as the products of two low-rank factors $(L_1, L_2) \in \mathbb{C}^{n \times k}$ and $(R_1, R_2) \in \mathbb{C}^{m \times k}$ such that

$$X = \begin{bmatrix} L_1R_1^H \\ L_2R_2^H \end{bmatrix}. \tag{4.3}$$

Here, $k$ represents the rank of the $L$ and $R$ factors. The optimization scheme can then be carried out using the factors $(L_1, L_2)$ and $(R_1, R_2)$ instead of $(X_1, X_2)$, thereby significantly reducing the size of the decision variable from $2nm$ to $2k(n+m)$ when $k \ll m, n$. Rennie and Srebro [2005a] showed that the nuclear norm obeys the relationship

$$\|X\|_* \leq \frac{1}{2}\left\|\begin{bmatrix} L_1 \\ R_1 \end{bmatrix}\right\|_F^2 + \frac{1}{2}\left\|\begin{bmatrix} L_2 \\ R_2 \end{bmatrix}\right\|_F^2 =: \Phi(L_1, R_1, L_2, R_2), \tag{4.4}$$

where $\|\cdot\|_F^2$ is the squared Frobenius norm of the matrix—i.e., the sum of its squared entries. Consequently, the LASSO subproblem can be replaced by

$$\min_{L_1, R_1, L_2, R_2} \|\mathcal{A}(X) - b\|_2 \quad \text{subject to} \quad \Phi(L_1, R_1, L_2, R_2) \leq \tau, \tag{4.5}$$

where the projection onto $\Phi(L_1, R_1, L_2, R_2) \leq \tau$ is easily achieved by multiplying each factor $(L_1, L_2)$ and $(R_1, R_2)$ by the scalar $\sqrt{2\tau/\Phi(L_1, R_1, L_2, R_2)}$. Equation 4.4, applied to each HSS sub-matrix in the m-h domain, guarantees that $\|X\|_* \leq \tau$ for any solution of 4.5. Once the optimization problem is solved, each sub-matrix in the m-h domain is transformed back into the s-r domain, where we concatenate all the sub-matrices to get the deblended monochromatic frequency data matrices. One of the advantages of the HSS representation is that it works with recursive partitioning of a matrix, and the sub-matrices can be solved in parallel, speeding up the optimization.
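The projection at the heart of each factorized LASSO iteration reduces to a single rescaling of the factors; below is a minimal sketch with names of our own choosing, where the scalar is chosen so that Φ of the rescaled factors equals τ exactly (Φ is quadratic in the factors).

import numpy as np

def project_factors(L1, R1, L2, R2, tau):
    # Project onto {Phi(L1, R1, L2, R2) <= tau}, with Phi the halved sum
    # of squared Frobenius norms that upper-bounds ||X||_* (Equation 4.4).
    phi = 0.5 * (np.linalg.norm(L1)**2 + np.linalg.norm(R1)**2 +
                 np.linalg.norm(L2)**2 + np.linalg.norm(R2)**2)
    if phi <= tau:
        return L1, R1, L2, R2
    c = np.sqrt(tau / phi)                    # Phi(c*L, c*R, ...) = c^2 * Phi
    return c * L1, c * R1, c * L2, c * R2     # rescaled factors satisfy Phi = tau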
4.4 Experiments

We perform source separation for two simultaneous towed-streamer acquisition scenarios—over/under and simultaneous long offset—by generating synthetic datasets on complex geological models using the IWAVE [Symes et al., 2011a] time-stepping acoustic simulation software, and we also use a field dataset from the Gulf of Suez. Source separation for over/under acquisition is tested on two different datasets. The first dataset is simulated on the Marmousi model [Bourgeois et al., 1991], which represents a complex layered model with steeply dipping reflectors that make the data challenging. With a source (and channel/receiver) sampling of 20.0 m, one dataset is generated with a source depth of 8.0 m (Figures 4.7a and 4.7d), while the other dataset has the source at 12.0 m depth (Figures 4.7b and 4.7e), resulting in 231 sources and 231 channels. The temporal length of each dataset is 4.0 s with a sampling interval of 0.004 s. The second dataset is a field data example from the Gulf of Suez. In this case, the first source is placed at 5.0 m depth (Figures 4.8a and 4.8d) and the second source is placed at 10.0 m depth (Figures 4.8b and 4.8e). The source (and channel) sampling is 12.5 m, resulting in 178 sources and 178 channels with a time sampling interval of 0.004 s.

The simultaneous long offset acquisition is simulated on the BP salt model [Billette and Brandsberg-Dahl, 2004], where the presence of salt bodies makes the data challenging. The two source vessels are 6.0 km apart and the streamer length is 6.0 km. Both datasets (for source 1 and source 2) contain 361 sources and 361 channels with a spatial interval of 12.5 m, where the source and streamer depth is 6.25 m. The temporal length of each dataset is 6.0 s with a sampling interval of 0.006 s. A single shot gather from each dataset is shown in Figures 4.9a and 4.9b, and the corresponding channel gathers are shown in Figures 4.9d and 4.9e. The datasets for each source in both acquisition scenarios are (simply) summed for simultaneous acquisition with periodic firing times, while uniformly random time delays between 0-1 second are applied to each source for the randomized simultaneous acquisition. Figures 4.7c, 4.8c and 4.9c show the randomized blended shot gathers for the Marmousi, Gulf of Suez and BP datasets, respectively. As illustrated in the figures, both sources fire at random times (independently of each other) within the interval of 0-1 second; hence, the difference between the firing times of the sources is always less than 1 second. The corresponding randomized blended channel gathers are shown in Figures 4.7f, 4.8f and 4.9f. Note that the speed of the vessels in both acquisition scenarios is no different from the current practical speed of vessels in the field.

For deblending via rank-minimization, a second level of HSS partitioning on each frequency slice in the s-r domain was sufficient for successful recovery in both acquisition scenarios. After transforming each sub-block into the m-h domain, deblending is performed by solving the nuclear-norm minimization formulation (BPDN) on each sub-block, using 350 iterations of SPG-LR. In order to choose an appropriate rank value, we first perform deblending for frequency slices at 0.2 Hz and 125 Hz. For the over/under acquisition simulated on the Marmousi model, the best rank values are 30 and 80 for these two frequency slices, respectively.

Figure 4.7: Original shot gather of (a) source 1, (b) source 2, and (c) the corresponding blended shot gather for simultaneous over/under acquisition simulated on the Marmousi model. (d, e) Corresponding common-channel gathers for each source and (f) the blended common-channel gather.
The best rank values for the Gulf of Suez dataset are 20 and 100, respectively. For simultaneous long offset acquisition, the best rank values are 10 and 90 for frequency slices at 0.15 Hz and 80 Hz, respectively. Hence, we adjust the rank linearly within these ranges when moving from low to high frequencies, for each acquisition scenario. For deblending via sparsity-promotion, we use the BPDN formulation to minimize the $\ell_1$ norm (instead of the nuclear norm), where the transformation operator $S$ is the 2-D curvelet operator. Here, we run 350 iterations of SPG$\ell_1$.
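The linear rank schedule just described amounts to a one-line interpolation between the endpoint ranks; the defaults below are the values quoted for the Marmousi experiment, and the function name is our own.

import numpy as np

def rank_schedule(freqs, f_lo=0.2, f_hi=125.0, k_lo=30, k_hi=80):
    # Linearly interpolate the factorization rank between the values
    # tuned at the lowest and highest frequency slices.
    return np.rint(np.interp(freqs, [f_lo, f_hi], [k_lo, k_hi])).astype(int)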
For the over/under acquisition scenario simulated on the Marmousi model, Figures 4.10a and 4.10c show the deblended shot gathers via rank-minimization, and Figures 4.10e and 4.10g show the deblended shot gathers via sparsity-promotion, respectively. The deblended common-channel gathers via rank-minimization and sparsity-promotion are shown in Figures 4.11a, 4.11c and Figures 4.11e, 4.11g, respectively. For the Gulf of Suez field dataset, Figures 4.12 and 4.13 show the deblended gathers and difference plots in the common-shot and common-channel domains, respectively. The corresponding deblended gathers and difference plots in the common-shot and common-channel domains for the simultaneous long offset acquisition scenario are shown in Figures 4.14 and 4.15.

Figure 4.8: Original shot gather of (a) source 1, (b) source 2, and (c) the corresponding blended shot gather for simultaneous over/under acquisition from the Gulf of Suez dataset. (d, e) Corresponding common-channel gathers for each source and (f) the blended common-channel gather.

Figure 4.9: Original shot gather of (a) source 1, (b) source 2, and (c) the corresponding blended shot gather for simultaneous long offset acquisition simulated on the BP salt model. (d, e) Corresponding common-channel gathers for each source and (f) the blended common-channel gather.

As illustrated by the results and their corresponding difference plots, both CS-based approaches of rank-minimization and sparsity-promotion are able to deblend the data for the low-variability acquisition scenarios fairly well. In all three datasets, the average SNRs for separation via sparsity-promotion are slightly better than for rank-minimization, but the difference plots show that the recovery via rank-minimization is equivalent to the sparsity-promoting based recovery, in that it is able to recover most of the coherent energy. Also, rank-minimization outperforms the sparsity-promoting technique in terms of computational time and memory usage, as shown in Tables 4.1, 4.2 and 4.3. Both CS-based recoveries are better for the simultaneous long offset acquisition than for the over/under acquisition scenario. A possible explanation for this improvement is the long offset distance, which increases randomization in the simultaneous acquisition—a more favourable scenario for recovery by CS-based approaches. Figure 4.16 demonstrates the advantage of the HSS partitioning, where the SNRs of the deblended data are significantly improved.

Table 4.1: Comparison of computational time (in hours), memory usage (in GB) and average SNR (in dB) using sparsity-promoting and rank-minimization based techniques for the Marmousi model.

            Time (h)   Memory (GB)   SNR (dB)
Sparsity    167        7.0           16.7, 16.7
Rank        12         2.8           15.0, 14.8

Table 4.2: Comparison of computational time (in hours), memory usage (in GB) and average SNR (in dB) using sparsity-promoting and rank-minimization based techniques for the Gulf of Suez dataset.

            Time (h)   Memory (GB)   SNR (dB)
Sparsity    118        6.6           14.6
Rank        8          2.6           12.8

4.4.1 Comparison with NMO-based median filtering

We also compare the performance of our CS-based deblending techniques with deblending using the NMO-based median filtering technique proposed by Chen et al. [2014], where we work on a common-midpoint gather from each acquisition scenario. For the over/under acquisition simulated on the Marmousi model, Figures 4.17a and 4.17e show the blended common-midpoint gathers, and deblending using the median filtering technique is shown in Figures 4.17b and 4.17f. The corresponding deblended common-midpoint gathers from the two CS-based techniques are shown in Figures 4.17(c,d,g,h). Figure 4.18 shows the blended and deblended common-midpoint gathers for the field data from the Gulf of Suez. We observe that recoveries via the proposed CS-based approaches are comparable to the recovery from the median filtering technique. Similarly, Figure 4.19 shows the results for the simultaneous long offset acquisition simulated on the BP salt model. Here, the CS-based techniques result in slightly improved recoveries.

4.4.2 Remark

It is important to note that we run the CS-based source separation algorithms only once; however, we can always perform a few more runs of the algorithms, where we first subtract the deblended source 1 and source 2 from the acquired blended data and then re-run the algorithms to deblend the energy in the residual data. Hence, the recovery can be further improved as necessary.
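In pseudocode form, such a multi-pass scheme deblends the residual and accumulates the per-source estimates; blend_op and deblend below are placeholders for whichever simulated acquisition operator and separation algorithm are in use, not functions defined in this chapter.

def deblend_with_passes(b, blend_op, deblend, n_pass=2):
    # blend_op(d1, d2): forward simultaneous-acquisition operator;
    # deblend(b): one pass of separation returning estimates (d1, d2).
    d1, d2 = deblend(b)
    for _ in range(n_pass - 1):
        residual = b - blend_op(d1, d2)   # blended energy not yet explained
        e1, e2 = deblend(residual)        # separate the leftover energy
        d1, d2 = d1 + e1, d2 + e2
    return d1, d2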
Since separation via rank-minimization is computationally faster than the sparsity-based technique, multiple passes through the data are a computationally viable option for the former deblending technique.

4.5 Discussion

The above experiments demonstrate the successful implementation of the proposed CS-based approaches of rank-minimization and sparsity-promotion for source separation in low-variability, simultaneous towed-streamer acquisitions. The recovery is comparable for both approaches; however, separation via rank-minimization is significantly faster and more memory efficient. This is further enhanced by incorporating the HSS partitioning, since it allows the exploitation of the low-rank structure in the high-frequency regime and renders the extension to large-scale data feasible. Note that in the current implementation, we work with each temporal frequency slice and perform the source separation individually. The separation results can be further enhanced by propagating information from the previously recovered frequency slice to the next frequency slice, as shown by Mansour et al. [2013] for seismic data interpolation.

Table 4.3: Comparison of computational time (in hours), memory usage (in GB) and average SNR (in dB) using sparsity-promoting and rank-minimization based techniques for the BP model.

            Time (h)   Memory (GB)   SNR (dB)
Sparsity    325        7.0           32.0, 29.4
Rank        20         2.8           29.4, 29.0

Figure 4.10: Deblended shot gathers and difference plots (from the Marmousi model) of source 1 and source 2: (a,c) deblending using HSS based rank-minimization and (b,d) the corresponding difference plots; (e,g) deblending using curvelet-based sparsity-promotion and (f,h) the corresponding difference plots.

Figure 4.11: Deblended common-channel gathers and difference plots (from the Marmousi model) of source 1 and source 2: (a,c) deblending using HSS based rank-minimization and (b,d) the corresponding difference plots; (e,g) deblending using curvelet-based sparsity-promotion and (f,h) the corresponding difference plots.

The success of CS hinges on randomization of the acquisition.
Although the low degree of randomization (e.g., delays between 0 and 1 second) in simultaneous towed-streamer acquisitions seems favourable for source separation via CS-based techniques, high variability in the firing times enhances the recovery quality of the separated seismic data volumes, as shown in Wason and Herrmann [2013a]; Wason and Herrmann [2013b] for ocean-bottom cable/node acquisition with continuous recording. One of the advantages of the proposed CS-based techniques is that they do not require velocity estimation, which can be a challenge for data with complex geologies. However, the proposed techniques do require accurate knowledge of the random firing times.

So far, we have not considered the case of missing traces (sources and/or receivers); however, incorporating this scenario into the current framework is straightforward. This makes the problem a joint deblending and interpolation problem. In reality, seismic data are typically irregularly sampled along the spatial axes; therefore, future work includes working with non-uniform sampling grids. Finally, we envisage that our methods can, in principle, be extended to separate 3-D blended seismic data volumes.

Figure 4.12: Deblended shot gathers and difference plots (from the Gulf of Suez dataset) of source 1 and source 2: (a,c) deblending using HSS based rank-minimization and (b,d) the corresponding difference plots; (e,g) deblending using curvelet-based sparsity-promotion and (f,h) the corresponding difference plots.

Figure 4.13: Deblended common-channel gathers and difference plots (from the Gulf of Suez dataset) of source 1 and source 2: (a,c) deblending using HSS based rank-minimization and (b,d) the corresponding difference plots; (e,g) deblending using curvelet-based sparsity-promotion and (f,h) the corresponding difference plots.

4.6 Conclusions

We have presented two compressed sensing based methods for source separation for simultaneous towed-streamer type acquisitions, such as the over/under and the simultaneous long offset acquisition. Both compressed sensing based approaches of rank-minimization and sparsity-promotion give comparable deblending results; however, the former approach is readily scalable to large-scale blended seismic data volumes and is computationally faster. This can be further enhanced by incorporating the HSS structure with factorization-based rank-regularized optimization formulations, along with improved recovery quality of the separated seismic data. We have combined the Pareto curve approach for optimizing BPDN formulations with SVD-free matrix factorization methods to solve the nuclear-norm optimization formulation, which avoids the expensive computation of the singular value decomposition (SVD), a necessary step in traditional rank-minimization based methods.
We find that our proposed techniques are comparable to the commonly used NMO-based median filtering techniques.

Figure 4.14: Deblended shot gathers and difference plots (from the BP salt model) of source 1 and source 2: (a,c) deblending using HSS based rank-minimization and (b,d) the corresponding difference plots; (e,g) deblending using curvelet-based sparsity-promotion and (f,h) the corresponding difference plots.

Figure 4.15: Deblended common-channel gathers and difference plots (from the BP salt model) of source 1 and source 2: (a,c) deblending using HSS based rank-minimization and (b,d) the corresponding difference plots; (e,g) deblending using curvelet-based sparsity-promotion and (f,h) the corresponding difference plots.

Figure 4.16: Signal-to-noise ratio (dB) over the frequency spectrum for the deblended data from the Marmousi model. Red, blue curves—deblending without HSS; cyan, black curves—deblending using second-level HSS partitioning. Solid lines—separated source 1, + marker—separated source 2.

Figure 4.17: Blended common-midpoint gathers of (a) source 1 and (e) source 2 for the Marmousi model. Deblending using (b,f) NMO-based median filtering, (c,g) rank-minimization and (d,h) sparsity-promotion.

Figure 4.18: Blended common-midpoint gathers of (a) source 1, (e) source 2 for the Gulf of Suez dataset. Deblending using (b,f) NMO-based median filtering, (c,g) rank-minimization and (d,h) sparsity-promotion.

Figure 4.19: Blended common-midpoint gathers of (a) source 1, (e) source 2 for the BP salt model. Deblending using (b,f) NMO-based median filtering, (c,g) rank-minimization and (d,h) sparsity-promotion.

Chapter 5

Large-scale time-jittered simultaneous marine acquisition: rank-minimization approach

5.1 Summary

In this chapter, we present a computationally tractable rank-minimization algorithm to separate simultaneous time-jittered continuous recordings for a 3D ocean-bottom cable survey.
We show experimentally that the proposed algorithm is computationally tractable for 3D time-jittered marine acquisition.

A version of this chapter has been published in the proceedings of the SEG Annual Meeting, 2016, Denver, USA.

5.2 Introduction

Simultaneous source marine acquisition mitigates the challenges posed by conventional marine acquisition in terms of sampling and survey efficiency, since more than one shot can be fired at the same time [Beasley et al., 1998, de Kok and Gillespie, 2002, Berkhout, 2008a]. The final objective of source separation is to obtain interference-free shot records. Wason and Herrmann [2013b] have shown that the challenge of separating simultaneous data can be addressed through a combination of tailored single- (or multiple-) source simultaneous acquisition design and curvelet-based sparsity-promoting recovery. The idea is to design a pragmatic time-jittered marine acquisition scheme where acquisition time is reduced and spatial sampling is improved by separating overlapping shot records and interpolating jittered coarse source locations to a fine source sampling grid. While the proposed sparsity-promoting approach recovers densely sampled conventional data reasonably well, it poses computational challenges, since curvelet-based sparsity-promoting methods can become computationally intractable—in terms of speed and memory storage—especially for large-scale 5D seismic data volumes.

Recently, nuclear-norm minimization based methods have shown the potential to overcome this computational bottleneck [Kumar et al., 2015a]; hence, these methods have been used successfully for source separation [Maraschini et al., 2012, Cheng and Sacchi, 2013, Kumar et al., 2015b]. The general idea is that conventional seismic data can be well approximated in some rank-revealing transform domain where the data exhibit low-rank structure, or fast decay of singular values. Therefore, in order to use nuclear-norm minimization based algorithms for source separation, the acquisition design should increase the rank or slow the decay of the singular values. In Chapter 4, we used a nuclear-norm minimization formulation to separate simultaneous data acquired from an over/under acquisition design, where the separation is performed on each monochromatic data matrix independently. However, by virtue of the design of the simultaneous time-jittered marine acquisition, we formulate a nuclear-norm minimization formulation that works in the temporal-frequency domain—i.e., using all monochromatic data matrices together. One of the computational bottlenecks of working with the nuclear-norm minimization formulation is the computation of singular values. Therefore, in this chapter we combine the modified nuclear-norm minimization approach with the factorization approach recently developed by Lee et al. [2010a]. Experimental results on a synthetic 5D data set demonstrate the successful implementation of the proposed methodology.

5.3 Methodology

The simultaneous source separation problem can be perceived as a rank-minimization problem.
In this chapter, we follow the time-jittered marine acquisition setting proposed by Wason and Herrmann [2013b], where a single source vessel sails across an ocean-bottom array firing two airgun arrays at jittered source locations and time instances, with receivers recording continuously (Figure 5.1). This results in a continuous time-jittered simultaneous data volume.

A conventional 5D seismic data volume can be represented as a tensor $\mathbf{D} \in \mathbb{C}^{n_f \times n_{rx} \times n_{sx} \times n_{ry} \times n_{sy}}$, where $(n_{sx}, n_{sy})$ and $(n_{rx}, n_{ry})$ represent the numbers of sources and receivers along the x, y coordinates and $n_f$ represents the number of frequencies. The aim is to recover the data volume $\mathbf{D}$ from the continuous time-domain simultaneous data volume $\mathbf{b} \in \mathbb{C}^{n_T \times n_{rx} \times n_{ry}}$ by finding a minimum-rank solution $\mathbf{D}$ that satisfies the system of equations $\mathcal{A}(\mathbf{D}) = \mathbf{b}$. Here, $\mathcal{A}$ represents a linear sampling-transformation operator, $n_T < n_t \times n_{sx} \times n_{sy}$ is the total number of time samples in the continuous time-domain simultaneous data volume, and $n_t$ is the total number of time samples in the conventional seismic data. Note that the operator $\mathcal{A}$ maps $\mathbf{D}$ to a lower-dimensional simultaneous data volume $\mathbf{b}$, since the acquisition process superimposes shot records shifted with respect to their firing times.

The sampling-transformation operator $\mathcal{A}$ is defined as $\mathcal{A} = \mathcal{M}\mathcal{R}\mathcal{S}$, where the operator $\mathcal{S}$ permutes the tensor coordinates from $(n_{rx}, n_{sx}, n_{ry}, n_{sy})$ (the rank-revealing domain, Figure 5.2a) to $(n_{rx}, n_{ry}, n_{sx}, n_{sy})$ (the standard acquisition ordering, Figure 5.2b), and its adjoint reverses this permutation. The restriction operator $\mathcal{R}$ subsamples the conventional data volume at the jittered source locations (Figure 5.2c), and the sampling operator $\mathcal{M}$ maps the conventional subsampled temporal-frequency domain data to the simultaneous time-domain data (Figure 5.2d). Note that Figure 5.2d represents a time slice from the continuous (simultaneous) data volume, where the stars represent the locations of the jittered sources in the simultaneous acquisition.

Figure 5.1: Aerial view of the 3D time-jittered marine acquisition. Here, we consider one source vessel with two airgun arrays firing at jittered times and locations. Starting from point a, the source vessel follows the acquisition path shown by black lines and ends at point b. The receivers are placed at the ocean bottom (red dashed lines).

Figure 5.2: Schematic representation of the sampling-transformation operator A during the forward operation. The adjoint of the operator A follows accordingly. (a, b, c) represent a monochromatic data slice from the conventional data volume and (d) represents a time slice from the continuous data volume.

Rank-minimization formulations require that the target data set exhibit a low-rank structure, or fast decay of singular values. Consequently, the sampling-restriction ($\mathcal{M}\mathcal{R}$) operation should increase the rank or slow the decay of the singular values. Since there is no unique notion of rank for tensors, we can choose the rank of different matricizations of $\mathbf{D}$ [Kreimer and Sacchi, 2012b], where the idea is to create the matrix $\mathbf{D}^{(i)}$ by grouping the dimensions of $\mathbf{D}$ specified by $i$ and vectorizing them along the rows, while vectorizing the other dimensions along the columns.

Figure 5.3: Monochromatic slice at 10.0 Hz. Fully sampled data volume and simultaneous data volume matricized as (a, c) i = (nsx, nsy), and (b, d) i = (nrx, nsx). (e) Decay of singular values. Notice that fully sampled data organized as i = (nsx, nsy) has slow decay of the singular values (solid red curve) compared to the i = (nrx, nsx) organization (solid blue curve). However, the sampling-restriction operator slows the decay of the singular values in the i = (nrx, nsx) organization (dotted blue curve) compared to the i = (nsx, nsy) organization (dotted red curve), which is a favorable scenario for the rank-minimization formulation.
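In NumPy terms, forming D^(i) from a monochromatic 4-D slice is a transpose followed by a reshape. The sketch below is our own illustration, assuming the input axes are ordered (rx, sx, ry, sy); note that exchanging the row and column groups merely transposes the matrix and leaves the singular values unchanged.

import numpy as np

def matricize(D, group):
    # Matricize a monochromatic 4-D slice with axes (rx, sx, ry, sy):
    # the axes named in `group` are vectorized along the rows and the
    # remaining axes along the columns.
    axes = {"rx": 0, "sx": 1, "ry": 2, "sy": 3}
    rows = [axes[a] for a in group]
    cols = [ax for ax in range(4) if ax not in rows]
    nrow = int(np.prod([D.shape[ax] for ax in rows]))
    return np.transpose(D, rows + cols).reshape(nrow, -1)

# the two matricizations compared in Figure 5.3:
# matricize(D, ["sx", "sy"])   # i = (n_sx, n_sy)
# matricize(D, ["rx", "sx"])   # i = (n_rx, n_sx)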
In this work, we consider the matricizations proposed by Silva and Herrmann [2013a], where $i = (n_{sx}, n_{sy})$—i.e., placing both source coordinates along the rows (Figure 5.3a)—or $i = (n_{rx}, n_{sx})$—i.e., placing the receiver-x and source-x coordinates along the rows (Figure 5.3b). As we see in Figure 5.3e, the matricization $i = (n_{sx}, n_{sy})$ has higher rank, or slower decay of the singular values (solid red curve), compared to the matricization $i = (n_{rx}, n_{sx})$ (solid blue curve). The sampling-restriction operator removes random columns in the matricization $i = (n_{sx}, n_{sy})$ (Figure 5.3c); as a result, the overall singular values decay faster (dotted red curve). This is because missing columns set singular values to zero, which is the opposite of what rank-minimization algorithms require. On the other hand, the sampling-restriction operator removes random blocks in the matricization $i = (n_{rx}, n_{sx})$ (Figure 5.3d), thereby slowing down the decay of the singular values (dotted blue curve). This scenario is much closer to the matrix completion problem [Recht et al., 2010b], where samples are removed at random points of a matrix. Therefore, we address the source separation problem by exploiting the low-rank structure in the matricization $i = (n_{rx}, n_{sx})$.

Since rank-minimization problems are NP-hard and therefore computationally intractable, Recht et al. [2010b] showed that solutions to rank-minimization problems can be found by solving a nuclear-norm minimization problem. Silva and Herrmann [2013a] showed that for seismic data interpolation the sampling operator $\mathcal{M}$ is separable; hence, data can be interpolated by working on each monochromatic data tensor independently. In continuous time-jittered marine acquisition, however, the sampling operator $\mathcal{M}$ is nonseparable, as it is a combined time-shifting and shot-jittering operator, so we cannot perform source separation independently over different monochromatic data tensors. Therefore, we formulate the nuclear-norm minimization over the temporal-frequency domain as follows:

$$\min_{\mathbf{D}} \sum_{j}^{n_f} \|\mathbf{D}_j^{(i)}\|_* \quad \text{subject to} \quad \|\mathcal{A}(\mathbf{D}) - \mathbf{b}\|_2 \leq \epsilon, \tag{5.1}$$

where $\sum_{j}^{n_f} \|\cdot\|_* = \sum_{j}^{n_f} \|\sigma_j\|_1$ and $\sigma_j$ is the vector of singular values for each monochromatic data matricization. One of the main drawbacks of the nuclear-norm minimization problem is that it involves computation of the singular-value decomposition (SVD) of the matrices, which is prohibitively expensive for large-scale seismic data. Therefore, we avoid the direct approach to the nuclear-norm minimization problem and follow a factorization-based approach [Rennie and Srebro, 2005a, Lee et al., 2010a, Recht and Ré, 2011]. The factorization-based approach parametrizes each monochromatic data matrix $\mathbf{D}^{(i)}$ as the product of two low-rank factors $\mathbf{L}^{(i)} \in \mathbb{C}^{(n_{rx} \cdot n_{sx}) \times k}$ and $\mathbf{R}^{(i)} \in \mathbb{C}^{(n_{ry} \cdot n_{sy}) \times k}$ such that $\mathbf{D}^{(i)} = \mathbf{L}^{(i)}\mathbf{R}^{(i)H}$, where $k$ represents the rank of the underlying matrix and $H$ denotes the Hermitian transpose. Note that the tensors $\mathbf{L}, \mathbf{R}$ can be formed by concatenating the matrices $\mathbf{L}^{(i)}, \mathbf{R}^{(i)}$, respectively. The optimization scheme can then be carried out using the tensors $\mathbf{L}, \mathbf{R}$ instead of $\mathbf{D}$, thereby significantly reducing the size of the decision variable from $n_{rx} \times n_{ry} \times n_{sx} \times n_{sy} \times n_f$ to $2k \times n_{rx} \times n_{sx} \times n_f$ when $k \leq n_{rx} \times n_{sx}$. Following Rennie and Srebro [2005a], the sum of nuclear norms obeys the relationship

$$\sum_{j}^{n_f} \|\mathbf{D}_j^{(i)}\|_* \leq \sum_{j}^{n_f} \frac{1}{2}\left\|\begin{bmatrix} \mathbf{L}_j^{(i)} \\ \mathbf{R}_j^{(i)} \end{bmatrix}\right\|_F^2,$$

where $\|\cdot\|_F^2$ is the squared Frobenius norm of the matrix (the sum of its squared entries).
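A small numerical check of this bound, together with the surrogate Φ that replaces the sum of nuclear norms when optimizing over the factors of all frequency slices jointly, might look as follows (all names and sizes are illustrative):

import numpy as np

def phi(Ls, Rs):
    # Factorized surrogate: sum over frequency slices of
    # 0.5 * ||[L_j ; R_j]||_F^2, an upper bound on sum_j ||L_j R_j^H||_*.
    return sum(0.5 * (np.linalg.norm(L)**2 + np.linalg.norm(R)**2)
               for L, R in zip(Ls, Rs))

rng = np.random.default_rng(0)
Ls = [rng.standard_normal((30, 5)) for _ in range(4)]   # 4 toy "frequencies"
Rs = [rng.standard_normal((20, 5)) for _ in range(4)]
sum_nuc = sum(np.linalg.svd(L @ R.T, compute_uv=False).sum()
              for L, R in zip(Ls, Rs))
assert sum_nuc <= phi(Ls, Rs) + 1e-9   # the Rennie and Srebro bound holds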
5.4 Experiments & results

We test the efficacy of our method by simulating a synthetic 5D data set using the BG Compass velocity model (provided by the BG Group), which is a geologically complex and realistic model. We also quantify the cost savings associated with simultaneous acquisition in terms of an improved spatial-sampling ratio, defined as the ratio between the spatial grid interval of the observed simultaneous time-jittered acquisition and the spatial grid interval of the recovered conventional acquisition. The speed-up in acquisition is measured using the survey-time ratio (STR), proposed by Berkhout [2008a], which measures the ratio of the time of conventional acquisition to that of simultaneous acquisition.

Using a time-stepping finite-difference modelling code provided by Chevron, we simulate a conventional 5D data set of dimensions 2501 × 101 × 101 × 40 × 40 ($n_t \times n_{rx} \times n_{ry} \times n_{sx} \times n_{sy}$) over a survey area of approximately 4 km × 4 km. The conventional time-sampling interval is 4.0 ms, and the source- and receiver-sampling interval is 6.25 m. We use a Ricker wavelet with a central frequency of 15.0 Hz as the source function. Figure 5.4a shows a conventional common-shot gather. Applying the sampling-transformation operator ($\mathcal{A}$) to the conventional data generates approximately 65 minutes of 3D continuous time-domain simultaneous seismic data, 30 seconds of which is shown in Figure 5.4b. By virtue of the design of the simultaneous time-jittered acquisition, the simultaneous data volume $\mathbf{b}$ is 4-times subsampled compared to conventional acquisition. Consequently, the spatial sampling of the recovered data is improved by a factor of 4, and the acquisition time is reduced by the same factor.

Simply applying the adjoint of the sampling operator $\mathcal{M}$ to the simultaneous data $\mathbf{b}$ results in strong interferences from other sources, as shown in Figure 5.4c. Therefore, to recover the interference-free conventional seismic data volume from the simultaneous time-jittered data, we solve the factorization based nuclear-norm minimization formulation. We perform the source separation for a range of rank $k$ values of the two low-rank factors $\mathbf{L}^{(i)}, \mathbf{R}^{(i)}$ and find that $k = 100$ gives the best signal-to-noise ratio (SNR) of the recovered conventional data. Figure 5.4d shows the recovered shot gather, with an SNR of 20.8 dB, and the corresponding residual is shown in Figure 5.4e. As illustrated, we are able to separate the shots while also interpolating the data to the finer grid of 6.25 m. To establish that we lose very little coherent energy during source separation, we amplify the residual plot by a factor of 8 (Figure 5.4e). The late-arriving events, which are often weak in energy, are also separated reasonably well. The computational efficiency of the rank-minimization approach—in terms of memory storage—in comparison to the curvelet-based sparsity-promoting approach is approximately a factor of 7.2 when compared with 2D curvelets and 24 when compared with 3D curvelets.
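The recovery quality quoted above is a signal-to-noise ratio of the separated volume against the conventionally acquired reference. The chapter does not spell out the formula, so the definition below is the common convention and only an assumption on our part.

import numpy as np

def snr(reference, estimate):
    # SNR in dB: 20*log10(||d|| / ||d - d_hat||); higher is better.
    return 20 * np.log10(np.linalg.norm(reference) /
                         np.linalg.norm(reference - estimate))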
5.5 Conclusions

We propose a factorization based nuclear-norm minimization formulation for the simultaneous source separation and interpolation of 5D seismic data volumes. Since the sampling-transformation operator is nonseparable in the simultaneous time-jittered marine acquisition, we formulate the factorization based nuclear-norm minimization problem over the entire temporal-frequency domain, contrary to solving each monochromatic data matrix independently. We show that the proposed methodology is able to separate and interpolate the data to a fine underlying grid reasonably well. The proposed approach is memory efficient in comparison to the curvelet-based sparsity-promoting approach.

Figure 5.4: Source separation recovery. A shot gather from the (a) conventional data; (b) a section of 30 seconds from the continuous time-domain simultaneous data b; (c) recovered data by applying the adjoint of the sampling operator M; (d) data recovered via the proposed formulation (SNR = 20.8 dB); (e) difference of (a) and (d), where amplitudes are magnified by a factor of 8 to illustrate a very small loss in coherent energy.

Chapter 6

Enabling affordable omnidirectional subsurface extended image volumes via probing

6.1 Summary

Image gathers as a function of subsurface offset are an important tool for the inference of rock properties and velocity analysis in areas of complex geology. Traditionally, these gathers are thought of as multidimensional correlations of the source and receiver wavefields. The bottleneck in computing these gathers lies in the fact that one needs to store, compute, and correlate these wavefields for all shots in order to obtain the desired image gathers. Therefore, the image gathers are typically only computed for a limited number of subsurface points and for a limited range of subsurface offsets, which may cause problems in complex geological areas with large geologic dips. We overcome the increasing computational and storage costs of extended image volumes by introducing a formulation that avoids explicit storage and removes the customary and expensive loop over shots found in conventional extended imaging. As a result, we end up with a matrix-vector formulation from which different image gathers can be formed and with which amplitude-versus-angle and wave-equation migration velocity analyses can be performed without requiring prior information on the geologic dips. Aside from demonstrating the formation of two-way extended image gathers for different purposes and at greatly reduced costs, we also present a new approach to conduct automatic wave-equation based migration-velocity analysis. Instead of focusing in particular offset directions and on preselected subsets of subsurface points, our method focuses every subsurface point for all subsurface offset directions using a randomized probing technique. As a consequence, we obtain good velocity models at low cost for complex models without the need to provide information on the geologic dips.

A version of this chapter has been published in Geophysical Prospecting, 2016, ISSN 1365-2478.

6.2 Introduction

Seismic reflection data are a rich source of information about the subsurface, and by studying both dynamic and kinematic properties of the data we can infer both large-scale velocity variations and local rock properties. While seismic data volumes—as a function of time, source and receiver positions—contain all reflected and refracted events, it is often more convenient to map the relevant events present in pre-stack data to their respective positions in the subsurface.
This stripping away of the propagation effects leads to the definition of an image volume or extended image, as a function of depth, lateral position and some redundant spatial coordinate(s), that has the same (or higher) dimensionality as the original data volume. This mapping can be thought of as a coordinate transform, depending on the large-scale background velocity model features, that maps reflection events observed in data collected at the surface to focussed secondary point sources tracing out the respective reflectors. The radiation pattern of these point sources reveals the angle-dependent reflection coefficient and can be used to infer the local rock properties. Errors in the large-scale background velocity model features are revealed through the failure of the events to fully focus. This principle forms the basis of many velocity-model analysis procedures.

Over the past decades, various methods have been proposed for computing and exploiting image volumes or slices thereof, the so-called image gathers [Claerbout, 1970, Doherty and Claerbout, 1974, de Bruin et al., 1990, Symes and Carazzone, 1991, ten Kroode et al., 1994, Chauris et al., 2002, Biondi and Symes, 2004, Sava and Vasconcelos, 2011, Koren and Ravve, 2011]. These approaches differ in the way the redundant coordinate is introduced and the method used to transform data volumes into image volumes. Perhaps the best-known examples are normal-moveout corrected midpoint gathers, where simple move-out corrections transform observed data into volumes that can be migrated into extended images that are a function of time, midpoint position and surface offset [Claerbout, 1985b].

While this type of surface-offset pre-stack image has been used for migration-velocity analysis and reservoir characterization, these extended images may suffer from unphysical artifacts. For this reason, extended images are nowadays formed as functions of the subsurface offset, rather than the surface offset, as justified by Stolk et al. [2009], who showed that this approach produces artifact-free image gathers in complex geological settings. So far, wave-equation based constructions of these extended images are based on the double-square-root equation [Claerbout, 1970, Doherty and Claerbout, 1974, Biondi et al., 1999, Prucha et al., 1999, Duchkov and Maarten, 2009], which produces image volumes as a function of depth and virtual subsurface source and receiver positions.

With the advent of reverse-time migration [Whitmore et al., 1983, Baysal et al., 1983, Levin, 1984], these one-way, and therefore angle-limited, approaches are gradually being replaced by methods based on the two-way wave equation, which is able to produce gathers as a function of depth and horizontal or vertical offset, or both [Biondi and Symes, 2004, Sava and Vasconcelos, 2011]. This is achieved by forward propagating the source and backward propagating the data and subsequently correlating these source and receiver wavefields at non-zero offset/time lag. Depending on the extended imaging conditions [Sava and Biondi, 2004, Sava and Fomel, 2006], various types of subsurface extended image gathers can be formed by restricting the multi-dimensional cross-correlations between the propagated source and receiver wavefields to certain specified coordinate directions. The motivation to depart from horizontal offset only is that steeply dipping events do not optimally focus in the horizontal direction, even for a kinematically correct velocity model.
A temporal shift instead of a subsurface offset is sometimes also used, as it is more computationally efficient [MacKay and Abma, 1992, Sava and Fomel, 2006].

Up to this point, most advancements in image-gather based velocity-model building and reservoir characterization are due to improved sampling of the extended images and/or more accurate wave propagators. However, these improvements carry a heavy price tag since the computations typically involve the solution of the forward and adjoint wave equation for each shot, followed by subsequent cross-correlations. As a result, computational and memory requirements grow uncontrollably, and many practical implementations of subsurface-offset images are therefore restricted (e.g., by allowing only for lateral interaction over a short distance), and the gathers are computed for a subset of image points only. Unfortunately, forming full subsurface extended image volumes rapidly becomes prohibitively expensive even for small two-dimensional problems.

Extended images play an important role in wave-equation migration-velocity analysis (WEMVA), where velocity model updates are calculated by minimizing an objective function that measures the coherency of image gathers [Symes and Carazzone, 1991, Shen and Symes, 2008]. Regrettably, the computational and storage costs associated with this approach easily become unmanageable unless we restrict ourselves to a few judiciously chosen subsurface points [Yang and Sava, 2015]. As a result, we end up with an expensive method, which needs to loop over all shot records and allows for the computation of subsurface-offset gathers at only a limited number of subsurface points and directions. This is problematic because the effectiveness of WEMVA is impeded for dipping reflectors that do not focus as well along horizontal offsets.

Extended images also serve as input to amplitude-versus-angle (AVA) or amplitude-versus-offset (AVO) analysis methods that derive from the (linearized) Zoeppritz equations [Aki and Richards, 1980]. In this situation, the challenge is to produce reliable amplitude-preserving angle/offset gathers in complex geological environments (see e.g., de Bruin et al. [1990], van Wijngaarden [1998]) from primaries only or, more recently, from surface-related multiples, as demonstrated by Lu et al. [2014]. In the latter case, good angular illumination can be obtained from conventional marine towed-streamer acquisitions with dense receiver sampling. As with WEMVA, these amplitude analyses are biased when reflectors are dipping, calling for corrections dependent on geological dips that are generally unknown [Brandsberg-Dahl et al., 2003].

We present a new wave-equation based factorization principle that removes computational bottlenecks and gives us access to the kinematics and amplitudes of full subsurface-offset extended images without carrying out explicit cross-correlations between source and receiver wavefields for each shot. We accomplish this by carrying out the correlations implicitly among all wavefields via actions of full extended image volumes on certain probing/test vectors. Depending on the choice of these probing vectors, these actions allow us to compute common-image-point (CIP), common-image (CIG) and dip-angle gathers at certain predefined subsurface points in 2- and 3-D, as well as objective functions for WEMVA. None of these require storage of wavefields or loops over shots. The paper proceeds as follows.
After introducing the governing equations of continuous-space monochromatic extended image volumes and their relation to migrated images, CIPs and CIGs, we derive the two-way equivalent of the double-square-root equation and move to a discrete setting that reveals a wave-equation based factorization of discrete full extended image volumes. Next, we show how this factorization can be used to extract information on local geological dips and radiation patterns for AVA purposes, and how full-offset extended image volumes can be used to carry out automatic WEMVA. Each application is illustrated by carefully selected stylized numerical examples in 2- and 3-D.

6.2.1 Notation

In this paper, we use lower/upper case letters to represent scalars, e.g., $x_m, R$, and lower case boldface letters to represent vectors (i.e., one-dimensional quantities), e.g., $\mathbf{x}, \mathbf{x}'$. We denote matrices and tensors using upper case boldface letters, e.g., $\mathbf{E}, \widetilde{\mathbf{E}}, \mathbf{R}$. The subscript $i$ indexes frequencies, $j$ indexes sources and/or receivers, and $k$ indexes the subsurface grid points. 2-D refers to seismic volumes with one source, one receiver, and one time dimension; 3-D refers to seismic volumes with two source, two receiver, and one time dimension.

6.3 Anatomy & physics

Before describing the proposed methodology of extracting information from full subsurface-offset extended image volumes using the proposed probing technique, we first review the governing equations. We denote the full extended image volumes as

$$e(\omega,\mathbf{x},\mathbf{x}') = \int_{D_s} d\mathbf{x}_s\; u(\omega,\mathbf{x},\mathbf{x}_s)\,\overline{v(\omega,\mathbf{x}',\mathbf{x}_s)}, \tag{6.1}$$

where the overline denotes complex conjugation, and $u$ and $v$ are source and receiver wavefields as a function of the frequency $\omega \in \Omega \subset \mathbb{R}$, subsurface positions $\mathbf{x},\mathbf{x}' \in D \subset \mathbb{R}^n$ ($n = 2$ or $3$) and source positions $\mathbf{x}_s \in D_s \subset \mathbb{R}^{n-1}$. These monochromatic wavefields are obtained by solving

$$H(m)\,u(\omega,\mathbf{x},\mathbf{x}_s) = q(\omega,\mathbf{x},\mathbf{x}_s), \tag{6.2}$$

$$H(m)^*\,v(\omega,\mathbf{x},\mathbf{x}_s) = \int_{D_r} d\mathbf{x}_r\; d(\omega,\mathbf{x}_r,\mathbf{x}_s)\,\delta(\mathbf{x}-\mathbf{x}_r), \tag{6.3}$$

where $H(m) = \omega^2 m(\mathbf{x}) + \nabla^2$ is the Helmholtz operator with Sommerfeld boundary conditions, the symbol $*$ represents the conjugate transpose (adjoint), $m$ is the squared slowness, $q$ is the source function and $d$ represents the reflection data at receiver positions $\mathbf{x}_r \in D_r \subset \mathbb{R}^{n-1}$.

Equations (6.1-6.3) define a linear mapping from a $2n-1$ dimensional data volume $d(\omega,\mathbf{x}_r,\mathbf{x}_s)$ to a $2n+1$ dimensional image volume $e(\omega,\mathbf{x},\mathbf{x}')$. Conventional migrated images are obtained by applying the zero-time/offset imaging condition [Sava and Vasconcelos, 2011]

$$r(\mathbf{x}) = \int_{\Omega} d\omega\; e(\omega,\mathbf{x},\mathbf{x}). \tag{6.4}$$

Once we have the full extended image, we can extract all conceivable image and angle gathers by applying appropriate imaging conditions [Sava and Vasconcelos, 2011]. For instance, conventional 2-D common-image gathers (CIGs), as a function of lateral position $x_m$, depth $z$ and horizontal offset $h_x$, are defined as

$$I_{\mathrm{CIG}}(z,h_x;x_m) = \int_{\Omega} d\omega\; e\bigl(\omega,(x_m-h_x,z)^T,(x_m+h_x,z)^T\bigr), \tag{6.5}$$

where $T$ denotes the transpose.

Similarly, time-shifted common-image-point (CIP) gathers at subsurface points $\mathbf{x}_k$, as a function of all spatial offset vectors $\mathbf{h}$ and temporal shifts $\Delta t$, become

$$I^{\mathrm{ext}}_{\mathrm{CIP}}(\mathbf{h},\Delta t;\mathbf{x}_k) = \int_{\Omega} d\omega\; e(\omega,\mathbf{x}_k,\mathbf{x}_k+\mathbf{h})\,e^{\imath\omega\Delta t}, \tag{6.6}$$

where $\imath = \sqrt{-1}$. Note that we departed in this expression from the usual symmetric definition $e(\omega,\mathbf{x}_k-\mathbf{h},\mathbf{x}_k+\mathbf{h})$ because it turns out to be more natural for our shot-based computations. Because we consider these full-subsurface-offset gathers at zero time shift only, we apply the zero-time imaging condition to the CIPs, i.e., $I_{\mathrm{CIP}}(\mathbf{h};\mathbf{x}_k) = I^{\mathrm{ext}}_{\mathrm{CIP}}(\mathbf{h},\Delta t = 0;\mathbf{x}_k)$. These gathers form the theoretical basis for AVA and WEMVA, and a small indexing sketch of how they sit inside the image volume follows.
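To make these imaging conditions concrete, the following Python sketch extracts the migrated image, a horizontal-offset CIG and a CIP from a dense 4-D image volume. Storing the volume explicitly like this is only feasible for tiny models, and the array ordering `e[z, x, z', x']` together with the toy grid sizes are our own illustrative assumptions.

```python
import numpy as np

# Toy image volume after summation over frequencies: e[z, x, z', x'],
# stored densely (assumed ordering; only feasible for tiny grids).
nz, nx = 40, 50
e = np.random.default_rng(1).standard_normal((nz, nx, nz, nx))

# Zero-time/offset imaging condition (eq. 6.4): r(x) = e(x, x)
iz, ix = np.arange(nz)[:, None], np.arange(nx)[None, :]
r = e[iz, ix, iz, ix]                                       # shape (nz, nx)

# Horizontal-offset CIG at midpoint xm (eq. 6.5): same depth, offsets +/- h
xm, nh = 25, 10
h = np.arange(-nh, nh + 1)
cig = np.stack([e[np.arange(nz), xm - hi, np.arange(nz), xm + hi] for hi in h],
               axis=1)                                      # shape (nz, 2*nh+1)

# CIP at a single point (zk, xk) (eq. 6.6 at zero time shift):
# one "column" of the volume, i.e. e(x_k, x') for all x'
zk, xk = 20, 25
cip = e[zk, xk]                                             # shape (nz, nx)
```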
Moving to a discrete setting, we have $N_s$ sources $\{\mathbf{x}_{s,j}\}_{j=1}^{N_s} \in D_s$, $N_r$ receivers $\{\mathbf{x}_{r,j}\}_{j=1}^{N_r} \in D_r$, and $N_f$ frequencies $\{\omega_i\}_{i=1}^{N_f} \in \Omega$, and we discretize the domain $D$ using a rectangular grid with a total of $N_x$ grid points $\{\mathbf{x}_k\}_{k=1}^{N_x}$. We organize the source and receiver wavefields in tensors of size $N_f \times N_x \times N_s$ with elements $u_{ikj} \equiv u(\omega_i,\mathbf{x}_k,\mathbf{x}_{s,j})$ and $v_{ikj} \equiv v(\omega_i,\mathbf{x}_k,\mathbf{x}_{s,j})$. For the $i$th frequency, we can represent the wavefields for all sources as complex-valued $N_x \times N_s$ matrices $\mathbf{U}_i$ and $\mathbf{V}_i$, where each column of the matrix represents a monochromatic source experiment.

Full $2n+1$ dimensional image volumes can now be represented as a 3-D tensor with elements $e_{ikk'} \equiv e(\omega_i,\mathbf{x}_k,\mathbf{x}_{k'})$. A slice through this tensor at frequency $i$ is an $N_x \times N_x$ matrix, which can be expressed as an outer product of the matrices $\mathbf{U}_i$ and $\mathbf{V}_i$:

$$\mathbf{E}_i = \mathbf{U}_i\mathbf{V}_i^*, \tag{6.7}$$

where $*$ denotes the complex conjugate transpose. The matrix $\mathbf{E}_i$ is akin to Berkhout's reflectivity matrices [Berkhout, 1993], except that $\mathbf{E}_i$ captures vertical interactions as well since it is derived from the two-way wave equation. These source and receiver wavefields obey the following discretized Helmholtz equations:

$$\mathbf{H}_i(\mathbf{m})\,\mathbf{U}_i = \mathbf{P}_s^*\mathbf{Q}_i, \tag{6.8}$$

$$\mathbf{H}_i(\mathbf{m})^*\,\mathbf{V}_i = \mathbf{P}_r^*\mathbf{D}_i, \tag{6.9}$$

where $\mathbf{H}_i(\mathbf{m})$ represents the discretized Helmholtz operator for frequency $\omega_i$, with absorbing boundary conditions and a gridded squared slowness $\mathbf{m}$. The $N_s \times N_s$ matrix $\mathbf{Q}_i$ represents the sources (i.e., each column is a source function), the $N_r \times N_s$ matrix $\mathbf{D}_i$ contains the reflection data (i.e., each column is a monochromatic shot gather after subtraction of the direct arrival), and the matrices $\mathbf{P}_s, \mathbf{P}_r$ sample the wavefields at the source and receiver positions (hence, their transposes inject the sources and receivers into the grid). Remark that these are the discretized versions of equations (6.2, 6.3).

Substituting relations (6.8, 6.9) into the definition of the extended image (6.7) yields

$$\mathbf{H}_i\mathbf{E}_i\mathbf{H}_i = \mathbf{P}_s^*\mathbf{Q}_i\mathbf{D}_i^*\mathbf{P}_r. \tag{6.10}$$

This defines a natural two-way analogue of the well-known double-square-root equation. From equation (6.10), we derive the following expression for monochromatic full extended image volumes:

$$\mathbf{E}_i = \mathbf{H}_i^{-1}\mathbf{P}_s^*\mathbf{Q}_i\mathbf{D}_i^*\mathbf{P}_r\mathbf{H}_i^{-1}, \tag{6.11}$$

which is a discrete analogue of the linear mapping from data to image volumes defined in equations (6.1-6.3). Note that for co-located sources and receivers ($\mathbf{P}_r = \mathbf{P}_s$) and ideal discrete point sources ($\mathbf{Q}$ is the identity matrix), we find that $\mathbf{E}$ is complex symmetric (i.e., $\Re(\mathbf{E}^*) = \Re(\mathbf{E})$ and $\Im(\mathbf{E}^*) = -\Im(\mathbf{E})$) because of source-receiver reciprocity.

The usual zero-time/offset imaging condition translates to

$$\mathbf{r} = \sum_{i=1}^{N_f} \mathrm{diag}(\mathbf{E}_i), \tag{6.12}$$

where $\mathbf{r}$ is the discretized reflectivity and $\mathrm{diag}(\mathbf{A})$ denotes the diagonal elements of $\mathbf{A}$ organized in a vector. Various image gathers are embedded in our extended image volumes, as illustrated in Figure 6.1.

As we will switch between continuous and discrete notation throughout the paper, the correspondence between the image volume $e(\omega,\mathbf{x},\mathbf{x}')$ and the matrices $\mathbf{E}_i$ is listed in Table 6.1. For simplicity, we will drop the frequency dependence from our notation for the remainder of the paper and implicitly assume that all quantities are monochromatic, with the understanding that all computations can be repeated as needed for multiple frequencies. We will also assume that the zero-time imaging condition is applied by summing over the frequencies, followed by taking the real part. We further note that full extended image volumes can be severely aliased in case of insufficient source-receiver sampling. Therefore, one would in practice only extract gathers from image volumes in well-sampled directions. A toy-sized numerical illustration of this discrete factorization follows.
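The sketch below builds the monochromatic factorization of equations 6.7-6.12 with dense stand-ins. The dense random matrix `H` merely mocks a discrete Helmholtz operator, and none of the quantities are physically meaningful; they are only dimensionally consistent with the equations above.

```python
import numpy as np

rng = np.random.default_rng(2)
Nx, Ns, Nr = 120, 15, 20        # grid points, sources, receivers (toy sizes)

# Mock discrete Helmholtz operator (dense, shifted to be safely invertible);
# in practice this is a sparse finite-difference matrix with absorbing boundaries.
H = rng.standard_normal((Nx, Nx)) + 1j * rng.standard_normal((Nx, Nx)) \
    + 20 * np.eye(Nx)
Ps = np.eye(Nx)[rng.choice(Nx, Ns, replace=False)]   # source sampling (Ns x Nx)
Pr = np.eye(Nx)[rng.choice(Nx, Nr, replace=False)]   # receiver sampling (Nr x Nx)
Q = np.eye(Ns)                                       # impulsive point sources
D = rng.standard_normal((Nr, Ns)) + 1j * rng.standard_normal((Nr, Ns))  # "data"

# Source and receiver wavefields (eqs. 6.8, 6.9)
U = np.linalg.solve(H, Ps.T @ Q)
V = np.linalg.solve(H.conj().T, Pr.T @ D)

# Monochromatic extended image volume (eq. 6.7): an Nx x Nx matrix that is
# never formed explicitly for realistic grids
E = U @ V.conj().T

# Zero-time/offset imaging condition (eq. 6.12), single frequency
r = np.diag(E)
```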
6.4 Computational aspects

Of course, we can never hope to explicitly form the complete image volumes owing to the enormous computational and storage costs associated with these volumes, which are quadratic in the number of grid points $N_x$. In particular, we will discuss the computation of monochromatic image volumes $\mathbf{E}_i$ and drop the subscript $i$ for the remainder of the section.

To avoid forming extended images $\mathbf{E}$ explicitly, we instead propose to probe these volumes by right-multiplying them with $N_x \times K$ sampling matrices $\mathbf{W} = [\mathbf{w}_1,\ldots,\mathbf{w}_K]$, where $K$ denotes the number of samples and $\mathbf{w}_k$ denotes a single probing or sampling vector. After sampling, the reduced image volume $\widetilde{\mathbf{E}}$ reads

$$\widetilde{\mathbf{E}} = \mathbf{E}\mathbf{W} = \mathbf{H}^{-1}\mathbf{P}_s^*\mathbf{Q}\mathbf{D}^*\mathbf{P}_r\mathbf{H}^{-1}\mathbf{W}. \tag{6.13}$$

Our main contribution is that we can compute these compressed volumes efficiently with Algorithm 1 or 2. As one can see, these computations derive from wave-equation based factorizations that avoid storage and loops over all shots.

Algorithm 1: Compute the matrix-vector multiplication of the image volume matrix with given vectors $\mathbf{W} = [\mathbf{w}_1,\ldots,\mathbf{w}_K]$. The computational cost is $2N_s$ wave-equation solves plus the cost of correlating the wavefields.
1. compute all the source wavefields $\mathbf{U} = \mathbf{H}^{-1}\mathbf{P}_s^*\mathbf{Q}$;
2. compute all the receiver wavefields $\mathbf{V} = \mathbf{H}^{-*}\mathbf{P}_r^*\mathbf{D}$;
3. compute the weights $\widetilde{\mathbf{Y}} = \mathbf{V}^*\mathbf{W}$;
4. compute the product $\widetilde{\mathbf{E}} = \mathbf{U}\widetilde{\mathbf{Y}}$.

Algorithm 2: Compute the matrix-vector multiplication of the image volume matrix with given vectors $\mathbf{W} = [\mathbf{w}_1,\ldots,\mathbf{w}_K]$. The computational cost is $2K$ wave-equation solves plus the cost of correlating the data matrices.
1. compute $\widetilde{\mathbf{U}} = \mathbf{H}^{-1}\mathbf{W}$ and sample this wavefield at the receiver locations, $\widetilde{\mathbf{D}} = \mathbf{P}_r\widetilde{\mathbf{U}}$;
2. correlate the result with the data, $\widetilde{\mathbf{W}} = \mathbf{D}^*\widetilde{\mathbf{D}}$, to get the source weights;
3. use the source weights to generate the simultaneous sources $\widetilde{\mathbf{Q}} = \mathbf{Q}\widetilde{\mathbf{W}}$;
4. compute the resulting wavefields $\widetilde{\mathbf{E}} = \mathbf{H}^{-1}\mathbf{P}_s^*\widetilde{\mathbf{Q}}$.

While both algorithms produce the same compressed image volume, Algorithm 1 corresponds to the traditional way of computing image volumes, where all source and receiver wavefields are computed first and subsequently cross-correlated. Algorithm 2, on the other hand, produces exactly the same result without looping over all shots and without carrying out explicit cross-correlations. It arrives at this result by cross-correlating the data and source wavefields and by solving only two wave equations per probing vector instead of solving the forward and adjoint wave equations for each shot. This means that our method (Algorithm 2) gains computationally as long as the number of probings is small compared to the number of sources ($K < N_s$). The choice of the probing vectors depends on the application.

For completeness, we summarize the computational complexities of the two schemes in Table 6.2 in terms of the number of sources $N_s$, receivers $N_r$, subsurface sample points $N_x$ and desired number of subsurface offsets in each direction $N_{h_{\{x,y,z\}}}$. The equivalence of the two algorithms is illustrated by the sketch below.
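Continuing the dense toy setup from the previous sketch, the following code contrasts the two algorithms: both produce the identical compressed volume $\widetilde{\mathbf{E}} = \mathbf{E}\mathbf{W}$, but Algorithm 2 only solves against the $K$ probing vectors. The mock matrices are, as before, assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
Nx, Ns, Nr, K = 120, 40, 40, 3   # K probing vectors, K << Ns

H = rng.standard_normal((Nx, Nx)) + 1j * rng.standard_normal((Nx, Nx)) \
    + 20 * np.eye(Nx)            # mock Helmholtz operator
Ps = np.eye(Nx)[:Ns]             # co-located sources/receivers at first points
Pr = np.eye(Nx)[:Nr]
Q = np.eye(Ns)
D = rng.standard_normal((Nr, Ns)) + 1j * rng.standard_normal((Nr, Ns))
W = rng.standard_normal((Nx, K))                    # probing vectors

# Algorithm 1: all wavefields first (2*Ns solves), then correlate
U = np.linalg.solve(H, Ps.T @ Q)
V = np.linalg.solve(H.conj().T, Pr.T @ D)
E_tilde_1 = U @ (V.conj().T @ W)

# Algorithm 2: probe first (2*K solves), no loop over shots
U_t = np.linalg.solve(H, W)                         # K solves
D_t = Pr @ U_t                                      # sample at receivers
W_t = D.conj().T @ D_t                              # correlate with the data
Q_t = Q @ W_t                                       # simultaneous-source weights
E_tilde_2 = np.linalg.solve(H, Ps.T @ Q_t)          # K more solves

assert np.allclose(E_tilde_1, E_tilde_2)            # identical compressed volume
```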
6.5 Case study 1: computing gathers

We now describe how common-image-point gathers (CIPs) and common-image gathers (CIGs) can be formed with Algorithm 2.

CIPs: According to Algorithm 2, a CIP at midpoint $\mathbf{x}_k$ can be computed efficiently by extracting the corresponding column from the monochromatic matrix $\mathbf{E}$ representing the full extended image. We achieve this at the cost of only two wave-equation solves by setting $\mathbf{w}_k$ to a cardinal basis vector with its non-zero entry corresponding to the location of a single point source at $\mathbf{x}_k$. Because of the proposed factorization, the number of required wave-equation solves is reduced from twice the number of shots to only two per subsurface point, representing an order-of-magnitude improvement. As long as the number of subsurface points is not too large, this reduction allows for targeted quality control with omnidirectional extended image gathers.

For reasonable background velocity models, we can even compute these image gathers simultaneously, as long as the corresponding subsurface points are spatially separated. In this case, the probing vectors $\mathbf{w}_k$ correspond to simultaneous subsurface sources without encoding. As a result, we may introduce cross-talk in the gathers, but we expect very little interference when the locations being probed are sufficiently far away from each other. Even though this cross-talk may interfere with the ability to visually inspect the CIPs, we will see that these interferences can be rendered into incoherent noise via randomized source encoding, as shown by van Leeuwen et al. [2011] for full waveform inversion (FWI), a property we will later use in automated migration-velocity analyses.

CIGs: We can also extract a common-image gather at a lateral position $x_k$ by (densely) sampling the image volumes at $\mathbf{x}_k = (x_k, z_k)^T$ at all depth levels. In this configuration, the sampling matrix takes the form $\mathbf{W} = \mathbf{W}_x \otimes \mathbf{W}_z$, where $\mathbf{W}_x = (0,0,\ldots,1,\ldots,0)^T$ is a cardinal basis vector as before, with a single non-zero entry located at the index corresponding to the midpoint position $x_k$, and $\mathbf{W}_z = \mathbf{I}$ is the identity matrix, sampling all grid points in the vertical direction. The resulting gathers $\widetilde{\mathbf{E}} = \mathbf{E}\mathbf{W}$ are now a function of $(z,\Delta x,\Delta z)$ and contain both vertical and lateral offsets. We can form conventional image gathers by extracting a slice of the volume at $\Delta z = 0$. The computational cost per CIG in 2-D is roughly the same as for conventionally computed CIGs: $2N_z$ vs. $2N_s$ wave-equation solves, where $N_z$ denotes the number of samples in depth. In 3-D, however, the proposed way of computing the gathers via Algorithm 2 is an order of magnitude faster, because the number of sources in a 3-D seismic acquisition is an order of magnitude bigger while the number of samples in depth stays the same.

As with the CIPs, we can generate these gathers simultaneously by sampling various lateral positions at the same time for each depth level. A minimal construction of the Kronecker sampling matrix is sketched below.
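The following sketch builds this Kronecker sampling matrix, assuming the image volume is vectorized with depth as the fastest dimension (grid index $x \cdot n_z + z$); the grid sizes are illustrative.

```python
import numpy as np

# Sampling matrix for a CIG at lateral index xk on an (nz x nxl) grid:
# W = Wx kron Wz selects all depth levels at one lateral position.
nz, nxl = 30, 25
xk = 12

Wx = np.zeros((nxl, 1)); Wx[xk, 0] = 1.0   # cardinal vector at the midpoint
Wz = np.eye(nz)                            # identity: all depth levels
W = np.kron(Wx, Wz)                        # (nxl*nz) x nz probing matrix

# Applied to an image volume E of size (nxl*nz) x (nxl*nz), E @ W returns the
# gathers as a function of all offsets, with one column per probed depth level.
```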
6.5.1 Numerical results in 2-D

To illustrate the discussed methodology, we compute various gathers on a central part of the Marmousi model. We use a grid spacing of 10 m, 81 co-located sources and receivers with 50 m spacing, and frequencies between 5 and 25 Hz with 0.5 Hz spacing. The source wavelet is a Ricker wavelet with a peak frequency of 15 Hz. The wavefields are modeled using a 9-point finite-difference discretization of the Helmholtz operator with a sponge boundary condition. The direct wave is removed prior to computing the image gathers.

Figure 6.2 shows the migrated image for wrong (a) and correct (b) background velocity models. At three locations, indicated by the $*$ symbol, we extract the CIP gathers, shown in (c) for a background velocity model that is too low and in (d) for the correct velocity. As a result of the cross-correlations between various events in the propagated wavefields, several spurious events are present, so the recovered CIP gathers are only meaningful for interpretation close to the image point. However, as expected, most of the energy concentrates along the normal to the reflector. We can generate these three CIPs simultaneously at the cost of generating one CIP by defining the sampling vector $\mathbf{w}_k$ to represent the three point sources simultaneously. As we mentioned before, this may result in cross-talk between the CIPs; however, in this case the events do not significantly interfere because they are separated laterally, as shown in Figure 6.2 (c,d).

To illustrate the benefits of the proposed scheme, we also report the computational time (in seconds) and memory (in MB) required to compute a single common-image-point gather using Algorithms 1 and 2. The results are shown in Table 6.3. We can see that, even for a small toy model, the probing technique reduces the computational time and memory requirement by factors of 20 and 30, respectively.

Finally, Figures 6.2 (e-f) contain CIGs at two lateral positions for a background velocity that is too low (e) and correct (f). In this example, we generated the image volume for each depth level simultaneously for all lateral positions. When creating many gathers, such simultaneous probing can substantially reduce the required computational cost.

6.5.2 Numerical results in 3-D

Notwithstanding the achieved speedup and memory reduction in 2-D, the proposed probing method outlined in Algorithm 2 is a true enabler in 3-D, where there is no realistic hope of storing full extended image volumes whose size is quadratic in the number of grid points $N_x$. Besides, the number of sources also becomes quadratic for full-azimuth acquisitions. As in the 2-D case, our probing technique is a key enabler, allowing us to compute CIPs without allocating exorbitant amounts of memory and computational resources. To illustrate our claim, we compute a single CIP for the Compass velocity model provided to us by the BG Group. Figures 6.3 and 6.4 contain both vertical and lateral cross-sections of this complicated 3D velocity model, which contains $131 \times 101 \times 101$ grid points. The model is 780 m deep and 2.5 km across in both lateral directions.

We generated data from this velocity model following an ocean-bottom node configuration, where sources are placed at the water surface along the x and y directions with a sampling interval of 75 m. The receivers are placed at the sea bed with a sampling of 50 m, resulting in a data volume with 1156 sources and 2601 receivers. We generated this marine data volume with a 3-D time-harmonic Helmholtz solver, based on a 27-point discretization, perfectly matched layer boundary conditions, and a Ricker wavelet with a central frequency of 15 Hz. During the simulations and imaging, we used 15 frequencies ranging from 5 to 12 Hz with a sampling interval of 0.5 Hz. For further details on the employed wave-equation solver, we refer to Lago et al. [2014], van Leeuwen and Herrmann [2014].

Figures 6.3b and 6.5 show an example of a full CIP gather extracted at z = 390, x = 1250 and y = 1250 m. Aside from behaving as expected, the observed running time with Algorithm 2 is 1500 times faster compared to the corresponding time needed by Algorithm 1.
While Algorithms 1 and 2 both lend themselves well to parallelization over frequencies, shots, and probing vectors, our method avoids a loop over 1156 shots while also avoiding explicit formation of the extended image, which would require the allocation of a matrix with $10^{16}$ entries.

6.6 Case study 2: dip-angle gathers

Aside from their use for kinematic quality control during velocity model building, common-image gathers (CIGs) also contain dynamic amplitude-versus-angle (AVA) information on reflecting interfaces that can serve to invert associated rock properties [Mahmoudian and Margrave, 2009]. For this purpose, various definitions of angle-domain CIGs have been proposed [de Bruin et al., 1990, van Wijngaarden, 1998, Rickett and Sava, 2002, Sava and Fomel, 2003, Kuhel and Sacchi, 2003, Biondi and Symes, 2004, Sava and Vasconcelos, 2011]. Extending the work of de Bruin et al. [1990] to include corrections for the geological dip $\theta$, we extract angle-dependent reflection coefficients from a subsurface point $\mathbf{x}_0$ by evaluating the following integral over frequencies and offsets:

$$R(\mathbf{x}_0,\alpha;\theta) \;\propto\; \int_{\Omega} d\omega \int_{-h_{\max}}^{h_{\max}} dh\; e\bigl(\omega,\mathbf{x}_0,\mathbf{x}_0 + h\,\mathbf{n}(\theta)^{\perp}\bigr)\, e^{\imath\omega\sin(\alpha)h/v(\mathbf{x})}. \tag{6.14}$$

We use the $\propto$ symbol to indicate that this expression holds up to a proportionality constant.

In this expression, $\alpha$ is the angle of incidence with respect to the normal (see Figure 6.6), $v(\mathbf{x})$ is the local background velocity used to convert subsurface offsets to angles, and $\mathbf{n}(\theta)^{\perp}$ denotes the tangent vector to the reflector defining the offset vector in this direction ($h\,\mathbf{n}(\theta)^{\perp}$). The integral is carried out over the effective offset range denoted by $h_{\max}$, which decreases for deeper parts of the model. The integral over frequencies corresponds to the zero-time imaging condition. As illustrated in Figure 6.6, the normal $\mathbf{n}(\theta) = (\sin\theta,\cos\theta)^T$ and tangent vectors to a reflecting interface depend on the geological dip $\theta$, which is unknown in practice. Unfortunately, ignoring this factor may lead to erroneous amplitudes in cases where this dip is steep. This means that we need to extract CIGs, following the procedure outlined above, as well as estimates of the geological dip from the extended image volumes.

Following Brandsberg-Dahl et al. [2003], we use the stack power to estimate the geological dip at subsurface point $\mathbf{x}_k$ via

$$\hat{\theta} = \underset{\theta}{\mathrm{argmax}} \int_{-h_{\max}}^{h_{\max}} dh \left| \int_{\Omega} d\omega\; e\bigl(\omega,\mathbf{x}_k,\mathbf{x}_k + h\,\mathbf{n}(\theta)\bigr) \right|^2. \tag{6.15}$$

This maximization is based on the assumption that the above integral attains a maximum value when we collect energy from the image volume at time zero and along a direction that corresponds to the true geologic dip. Both integrals in equations 6.14 and 6.15 require information on the monochromatic extended image volumes at subsurface position $\mathbf{x}_k$ only, i.e., $e(\omega,\mathbf{x}_k,\mathbf{x}')$ for $\omega \in \Omega$, to which we have ready access via probing as described in Algorithm 2. The resulting gathers $e(\omega,\mathbf{x}_k,\mathbf{x}')$ contain the required information to estimate the local geologic-dip corrected angle-dependent reflection coefficient $R(\alpha;\mathbf{x}_k,\hat{\theta})$, where the correction is carried out with the dip estimate $\hat{\theta}$ that maximizes the stack power. Since we have access to all subsurface offsets at no additional computational cost, there is no need to select a maximum subsurface offset $h_{\max}$ a priori, even though the effective subsurface offset decreases with depth due to the finite aperture of the seismic data acquisition. A minimal sketch of the stack-power dip scan follows.
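Below is a minimal sketch of the stack-power scan of equation 6.15 on a synthetic stand-in for a time-zero imaged CIP gather. The Gaussian ridge, grid sizes and offset range are assumptions chosen only to mimic energy focused along the normal of an 11 degree reflector.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Synthetic time-zero CIP gather cip[dz, dx] whose energy is focused along
# the reflector normal n(theta) = (sin(theta), cos(theta)) for an 11-deg dip.
nh = 41; c = nh // 2
dz, dx = np.meshgrid(np.arange(nh) - c, np.arange(nh) - c, indexing="ij")
true_dip = np.deg2rad(11.0)
# the tangential component dx*cos(theta) - dz*sin(theta) vanishes on the normal
cip = np.exp(-(dx * np.cos(true_dip) - dz * np.sin(true_dip)) ** 2)

def stack_power(cip, theta, hmax=15.0):
    """Energy collected along offsets h * n(theta), cf. equation 6.15."""
    h = np.linspace(-hmax, hmax, 61)
    rows = c + h * np.cos(theta)            # dz = h * cos(theta)
    cols = c + h * np.sin(theta)            # dx = h * sin(theta)
    vals = map_coordinates(cip, np.vstack([rows, cols]), order=1)
    return float(np.sum(np.abs(vals) ** 2))

thetas = np.deg2rad(np.linspace(-45.0, 45.0, 181))
dip_hat = thetas[np.argmax([stack_power(cip, th) for th in thetas])]
print(f"estimated dip: {np.rad2deg(dip_hat):.1f} degrees")   # close to 11
```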
6.6.1 Numerical results

To illustrate the proposed method of computing angle-domain common-image gathers (CIGs), we compare the modulus (for plotting reasons) of the estimated reflection coefficients $|R(\alpha;\mathbf{x}_k,\theta_k)|$ with the theoretical PP-reflection coefficients predicted by the Zoeppritz equations [Koefoed, 1955, Shuey, 1985].

To make this comparison, we used a finite-difference time-domain acoustic modelling code [Symes et al., 2011b] to generate three synthetic data sets for increasingly complex models, namely a two-layer velocity and constant density model (Figures 6.7 (a,b)); a four-layer model with properties taken from de Bruin et al. [1990] (Figures 6.8 (a,b)); and a two-layer laterally varying velocity and density model, i.e., one horizontal reflector and one dipping reflector (Figures 6.9 (a,b)). The purpose of these three experiments is to (i) verify the velocity-change-only angle-dependent reflection coefficients; (ii) study the effects of density and the decreasing effective horizontal offset with depth; and (iii) illustrate the effect of the geologic dip on the reflection coefficients. We used a Ricker wavelet with a peak frequency of 15 Hz as the source signature. In all three examples, seismic data are simulated using a split-spread acquisition. The gathers used in the AVA analyses are obtained using Algorithm 2 and discrete versions of equations 6.14 and 6.15 for smoothed versions of the layered velocity models.

The estimated angle-dependent reflection coefficients for the first model are displayed in Figure 6.7 (c). We can see that these estimates, extracted from our extended image volumes, match the theoretical reflection coefficients according to the Zoeppritz equations, after a single amplitude scaling, fairly well up to angles of 50°, which is reasonably close to the effective aperture angle (depicted by the black line in the figures of the AVA curves). Beyond these angles, the estimates no longer follow the Zoeppritz predictions.

The results for the deeper four-layer model, depicted in Figures 6.8 (c, d, e, f), clearly show the imprint of smaller horizontal offsets with depth. From the AVA plots in Figure 6.8, we can observe that the reflection coefficients for the first and second reflectors are well matched up to 50° and 40°, and, for the third and fourth reflectors, to only 20°. We also confirm that the angle-dependent reflection coefficient associated with a change in density is approximately flat (see Figure 6.8 (e)). The finite aperture of the data accounts for the discrepancy beyond these angles.

To illustrate our method's ability to correct for the geological dip, we consider a common-image-point (CIP) gather at x = 2250 m and z = 960 m. As we can see from Figure 6.9 (c), this CIP is well focused and aligned with the local geologic dip of the reflector. The corresponding stack power is plotted in Figure 6.9 (d), which attains maximum power at $\hat{\theta} = 10.8°$, an estimate close to the actual dip of 11°. To demonstrate the effect of ignoring the geological dip when computing angle gathers, we evaluate equation 6.14 for $\theta = 0$. The results included in Figure 6.10 clearly illustrate the benefit of incorporating the dip information in angle-domain image gathers, as it allows for a more accurate estimation of the angle-dependent reflection coefficients.
This experimental result supports our claim that the proposed extended-image-volume framework lends itself well to estimating geologic-dip corrected angle gathers. As we mentioned before, the proposed method also reaps the computational benefits of probing extended image volumes as outlined in Algorithm 2.

6.7 Case study 3: wave-equation migration-velocity analysis (WEMVA)

Aside from providing localized information on the kinematics and dynamics, full subsurface-offset extended image volumes also lend themselves well to automatic velocity analyses that minimize some global focusing objective [Symes and Carazzone, 1991, Biondi and Symes, 2004, Shen and Symes, 2008, Sava and Vasconcelos, 2011, Mulder, 2014, Yang and Sava, 2015]. For this reason, wave-equation migration-velocity analysis (WEMVA) can be considered another important application where subsurface image gathers are used extensively. In this case, the aim is to build kinematically correct background velocity models that either promote similarity amongst different surface-offset gathers, as in the original work by Symes and Carazzone [1991] on Differential Semblance, or that aim to focus at zero subsurface offset, an approach promoted by recent work on WEMVA [Brandsberg-Dahl et al., 2003, Shen and Symes, 2008, Symes, 2008b]. While these wave-equation based methods are less prone to imaging artifacts [Stolk and Symes, 2003], they are computationally expensive, restricting the number of directions in which the subsurface offsets can be calculated. This practical limitation may affect our ability to handle unknown geological dips [Sava and Vasconcelos, 2011, Yang and Sava, 2015].

Before presenting an alternative approach that overcomes these practical limitations, let us first formulate an instance of WEMVA based on the probing technique outlined in Algorithm 2. In this discrete case, WEMVA corresponds to minimizing

$$\min_{\mathbf{m}} \sum_{k=1}^{N_x} \|\mathbf{S}_k\mathbf{E}(\mathbf{m})\mathbf{w}_k\|_2^2, \tag{6.16}$$

with $\mathbf{E}(\mathbf{m}) = \sum_{i\in\Omega}\mathbf{E}_i(\mathbf{m})$. As before, the $\mathbf{E}_i(\mathbf{m})$ denote monochromatic extended image volumes for the background velocity model $\mathbf{m}$. With this definition, we absorb the zero-time imaging condition by summing over frequencies. The vector $\mathbf{w}_k$ represents a point source whose non-zero entry corresponds to the $k$th subsurface grid point. We compute this sum efficiently by first probing each monochromatic extended image volume with the vector $\mathbf{w}_k$ and then summing over frequencies, instead of summing the monochromatic full-subsurface image volumes first and probing afterwards. Finally, the diagonal matrix $\mathbf{S}_k$ penalizes defocussed energy by applying a weighting function. Often, this weight is chosen proportional to the lateral subsurface offset [Shen and Symes, 2008]. The main costs of this approach (cf. equation 6.16) are determined by the number of gathers, which equals the number of grid points. Of course, WEMVA is conducted in practice with a subset of the $N_x$ grid points, the number of which depends on the complexity of the subsurface.

To arrive at an alternative, more cost-effective formulation that offers the flexibility to focus in all offset directions, we take a different tack by using the fact that focussed extended image volumes commute with diagonal weighting matrices [Kumar et al., 2013b, Symes, 2014] that penalize off-diagonal energy, i.e., we have

$$\mathbf{E}\,\mathrm{diag}(\mathbf{s}) \approx \mathrm{diag}(\mathbf{s})\,\mathbf{E} \tag{6.17}$$

for a given weighting vector $\mathbf{s}$.
For the specific case where the entries in $\mathbf{s}$ correspond to the lateral positions of each grid point in the model, forcing the above commutation relation corresponds to the objective of equation 6.16 with weights proportional to the horizontal subsurface offset. However, other options are also available, including focusing in all offset directions. The optimization problem can now be written as

$$\min_{\mathbf{m}} \left\{ \phi(\mathbf{m}) = \|\mathbf{E}(\mathbf{m})\,\mathrm{diag}(\mathbf{s}) - \mathrm{diag}(\mathbf{s})\,\mathbf{E}(\mathbf{m})\|_F^2 \right\}, \tag{6.18}$$

where $\|\mathbf{A}\|_F^2 = \sum_{i,j}|a_{i,j}|^2$ denotes the squared Frobenius norm. Minimization of this norm forces $\mathbf{E}$ to focus as a function of the velocity model $\mathbf{m}$.

For obvious reasons, this formulation is impractical, because even in 2-D we cannot hope to store the extended image volumes $\mathbf{E}$, whose size is quadratic in the number of grid points ($\mathbf{E}$ is an $N_x \times N_x$ matrix). However, we can still minimize the above objective function via randomized trace estimation [Avron and Toledo, 2011], a technique that also underlies phase-encoding techniques in full-waveform inversion [van Leeuwen et al., 2011]. With this technique, equation 6.18 can be evaluated to arbitrary accuracy via actions of $\mathbf{E}$ on random vectors $\mathbf{w}$. With this approximation, the WEMVA objective becomes

$$\phi(\mathbf{m}) \approx \tilde{\phi}(\mathbf{m}) = \frac{1}{K}\sum_{k=1}^{K} \|\mathbf{R}(\mathbf{m})\mathbf{w}_k\|_2^2, \tag{6.19}$$

where $\mathbf{R}(\mathbf{m}) = \mathbf{E}(\mathbf{m})\,\mathrm{diag}(\mathbf{s}) - \mathrm{diag}(\mathbf{s})\,\mathbf{E}(\mathbf{m})$. While other choices are possible, we select the $\mathbf{w}_k$ as Gaussian vectors with independent, identically distributed random entries with zero mean and unit variance. For this choice, $\tilde{\phi}(\mathbf{m})$ and $\phi(\mathbf{m})$ are equal in expectation, which means that the above sample average is unbiased [van Leeuwen et al., 2011, Haber et al., 2012]. As we know from source encoding in full-waveform inversion [Krebs et al., 2009, van Leeuwen et al., 2011, Haber et al., 2012], good approximations can be obtained for small sample sizes $K \ll N_x$. We will study the quality of this approximation below.

The gradient of the approximate objective is given by

$$\nabla\tilde{\phi}(\mathbf{m}) = \frac{1}{K}\sum_{k=1}^{K} \bigl(\nabla\mathbf{E}(\mathbf{m},\mathrm{diag}(\mathbf{s})\mathbf{w}_k) - \mathrm{diag}(\mathbf{s})\,\nabla\mathbf{E}(\mathbf{m},\mathbf{w}_k)\bigr)^* \mathbf{R}(\mathbf{m})\mathbf{w}_k, \tag{6.20}$$

where

$$\nabla\mathbf{E}(\mathbf{m},\mathbf{y}) = \frac{\partial\,\mathbf{E}(\mathbf{m})\mathbf{y}}{\partial\mathbf{m}}$$

is the Jacobian of $\mathbf{E}(\mathbf{m})\mathbf{y}$. We do not form this Jacobian matrix explicitly, but instead compute its action on a vector as follows:

$$\nabla\mathbf{E}(\mathbf{m},\mathbf{y})\,\delta\mathbf{m} = -\omega^2\bigl(\mathbf{E}(\mathbf{m})\,\mathrm{diag}(\widetilde{\mathbf{w}}) + \mathbf{H}(\mathbf{m})^{-1}\mathrm{diag}(\widetilde{\mathbf{e}})\bigr)\,\delta\mathbf{m}, \tag{6.21}$$

where $\widetilde{\mathbf{w}} = \mathbf{H}(\mathbf{m})^{-1}\mathbf{y}$ and $\widetilde{\mathbf{e}} = \mathbf{E}(\mathbf{m})\mathbf{y}$. The computation of the action of the adjoint of the Jacobian follows naturally.

We can now employ an iterative gradient-based method to find a minimizer of $\phi(\mathbf{m})$ by using a $K$-term approximation of this objective and its gradient [van Leeuwen et al., 2011, Haber et al., 2012]. To remove possible bias from using a fixed set of random probing vectors, we redraw these vectors after each gradient update. Remaining errors can be controlled by increasing $K$ [Haber et al., 2012, Friedlander and Schmidt, 2012, van Leeuwen and Herrmann, 2014].

6.7.1 Numerical results

We test the proposed wave-equation migration-velocity analysis (WEMVA) formulation on synthetic examples, illustrating the efficacy of the probing techniques. In all experiments, we use a 9-point finite-difference discretization of the 2-D Helmholtz equation to simulate the wavefields. The direct wave is removed from the data prior to performing the velocity analysis. To regularize the inversion, we parameterize the model using cubic B-splines [Symes, 2008a]. We solve the resulting optimization problem with the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method [Nocedal and Wright, 2000]. Before turning to the examples, a small sketch of the randomized trace estimation behind equation 6.19 is given below.
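Below is a minimal sketch of this randomized trace estimation on a small dense stand-in for $\mathbf{E}$ (real-valued for simplicity); it only illustrates that the sample average of equation 6.19 approaches the Frobenius-norm objective of equation 6.18 as $K$ grows.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 300                                  # toy size; E is N x N
# Dense stand-in for a (roughly focused) image volume: entries decay away
# from the diagonal.
E = rng.standard_normal((N, N)) * np.exp(-0.5 * np.abs(
        np.arange(N)[:, None] - np.arange(N)[None, :]))
s = rng.standard_normal(N)               # weighting vector (e.g., lateral position)
R = E * s[None, :] - s[:, None] * E      # R = E diag(s) - diag(s) E

phi_true = np.linalg.norm(R, "fro") ** 2

def phi_approx(K):
    """Unbiased estimate (1/K) sum_k ||R w_k||^2 with Gaussian probes w_k."""
    W = rng.standard_normal((N, K))
    return np.sum((R @ W) ** 2) / K

for K in (5, 20, 80):
    print(K, phi_approx(K) / phi_true)   # ratio tends to 1 as K grows
```

Since $\mathbb{E}[\|\mathbf{R}\mathbf{w}\|_2^2] = \|\mathbf{R}\|_F^2$ for Gaussian $\mathbf{w}$ with identity covariance, the estimator is unbiased for any $K$, and its variance shrinks as $K$ increases.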
Need for full subsurface-offset image volumes

Our first experiment is designed to illustrate the main benefit of working with full subsurface offsets in all directions in situations where both horizontal and vertical reflectors are present (Figure 6.11 (a), adapted from [Yang and Sava, 2015]). For this purpose, we juxtapose the focussing, for a velocity model with the correct kinematics, of common-image gathers (CIGs) against focussing with common-image-point gathers (CIPs) for a model with vertical and horizontal reflectors. While both CIG and CIP gathers can be used to form WEMVA objectives, their performance can be quite different. Ideally, CIGs measure focusing along offsets in the direction of the geologic dip for subsurface points sampled along spatial coordinates perpendicular to this direction, denoted by the yellow lines in Figure 6.11 (a). The green dots denote the locations of the selected CIPs.

Since prior knowledge of geologic dips is typically not available, why not devise a WEMVA scheme that focusses in all spatial directions? In that way, we are guaranteed to focus, as illustrated in Figure 6.11 (b-d) for the horizontal reflector and in Figure 6.11 (e-g) for the vertical reflector. From these figures it is clear that CIGs do not focus (Figure 6.11 (c,e)), despite the fact that the velocity model is correct. This lack of focusing may lead to erroneous biases during CIG-based WEMVA, a problem altogether avoided when we work with CIPs (Figure 6.11 (d,f)). Because CIPs are sensitive to focusing in all directions, there is no need to focus in time [Sava and Vasconcelos, 2011]. While the advantages of CIP-based WEMVA are clear, we cheated by selecting CIPs on top of the reflecting interfaces, whose positions are also generally not known in advance. This is where the randomized sampling comes to our rescue. By treating all subsurface points as random amplitude-encoded sources, we are able to form a CIP-based WEMVA objective that does not require explicit knowledge of the location of the reflecting interfaces.

Quality of the stochastic approximation

As with FWI, randomly encoded sources (e.g., via random Gaussian weights) provide a vehicle to approximate prohibitively expensive wave-equation based objectives (cf. equations 6.18 and 6.19) to controllable accuracy by increasing $K$. Following Haber et al. [2012], we evaluate, for a small subset (Figure 6.12 (a)) of the Marmousi model [Bourgeois et al., 1991], the true objective $\phi(\mathbf{m}_0 + \alpha\delta\mathbf{m})$ (solid blue line) and the approximate objective $\tilde{\phi}(\mathbf{m}_0 + \alpha\delta\mathbf{m})$ (denoted by the error bars) as a function of $\alpha$ in the direction of the gradient, i.e., $\delta\mathbf{m} = -\nabla\phi$, evaluated at the starting model $\mathbf{m}_0$ depicted in Figure 6.12 (b). The results for $K = 10$ and $K = 80$ are included in Figures 6.12 (c,d). We calculated the error bars from 5 independent realizations. As we increase $K$, the true objective is better approximated, as reflected in tighter and better-centred (compared to the true objective) error bars.
These results also show that we can substantially reduce the computational costs while approximating the true objective function accurately.

WEMVA on the Marmousi model

To validate our approach to WEMVA based on random probing and full subsurface-offset common-image-point gathers (CIPs), we minimize the approximate objective of equation 6.19 with a quasi-Newton method, using approximate evaluations of the objective (equation 6.18) and gradients (equation 6.20). We choose the Marmousi model plotted in Figure 6.13 (a) because of its complexity and relatively steep reflectors. The model is 3.0 km deep and 9.2 km wide, sampled at 12 m. We acquired synthetic data for this model using a split-spread acquisition geometry, resulting in 767 sources and receivers sampled at a 12 m interval. The data simulation and inversion are carried out over 201 frequencies, sampled at 0.1 Hz, ranging from 5 to 25 Hz and scaled by a Ricker wavelet with a central frequency of 10 Hz.

We use a highly smoothed starting model with small (or no) lateral variations to start the inversions for different numbers of probing vectors, K = 10 and K = 100, respectively. To regularize the inversion, we use B-splines sampled at x = 48 m and z = 48 m in the lateral and vertical directions. Figures 6.13 (c, d) show the inverted models after 25 L-BFGS iterations. We can clearly see that even 10 probing vectors are good enough to reveal the structural information. Compared to computing the full image gathers (which would need 2 x 767 PDE solves per evaluation), this reduces the computational cost and memory use by roughly a factor of 60.

Encouraged by this result, we conduct a second experiment for a poor starting model (Figure 6.14 (b)) and for data with fewer low frequencies (8-25 Hz). In this case, we use a slightly higher number of probing vectors (K = 100) to reduce the error in approximating the true objective function. Figures 6.12 (c, d) show that the errors for small K are relatively large when the starting model is poor; this is the case for small $\alpha$ in Figure 6.12. To regularize the inversion, we use B-splines sampled at x = 96 m and z = 96 m in the lateral and vertical directions in order to recover the smooth (low-frequency) component of the true velocity model. We can clearly see in Figure 6.14 (c) that we build up the low-frequency component of the velocity model. Figure 6.14 (d) shows our velocity estimate overlaid with a contour plot of the true velocity perturbation. We can see that, despite the missing low frequencies, the inverted model captures the shallow complexity of the model reasonably well.

6.8 Discussion

To our knowledge, this work represents the first instance of deriving a discrete two-way equivalent of Claerbout's double-square-root equation [Claerbout, 1970, 1985a] that enables us to compute full subsurface-offset extended image volumes. Contrary to approaches based on the one-way wave equation, which are dip-limited and march along depth, our method provides access to extended image volumes via actions of a wave-equation based factorization on certain probing vectors. This factorization enables matrix-vector products with matrices that encode wavefield interactions between arbitrary pairs of points in the subsurface and that cannot be formed explicitly.
As a result, we arrive at a formulation that is computationally feasible and that offers new perspectives on the design and implementation of workflows that exploit information embedded in various types of subsurface extended images.

Depending on the choice of the probing vectors, we either obtain local information, tied to individual subsurface points, that can serve as quality control for velocity analyses or as input to localized amplitude-versus-offset analysis, or global information that, as we have shown, can be used to drive automatic velocity analyses. Aside from guiding kinematic inversions through focusing, we expect that our randomized probings of extended image volumes also provide information on the rock properties. Further extensions of the methodology include forming fully elastic image volumes, which turns our matrix representation into a tensor representation. We leave such generalizations to a future paper.

6.9 Conclusions

Extended subsurface image volumes carry information on interactions between pairs of subsurface points, encoding essential information on the kinematics (to be used during velocity analysis) and the dynamics, which serve as input to inversions of the rock properties. While conceptually beautiful, full subsurface-offset image volumes have not yet been considered in practice because these objects are too large to be formed explicitly. Through a wave-equation based factorization, we avoid explicit computations by forming matrix-vector products instead, which only require two wave-equation solves each, thereby removing the customary and expensive loop over shots found in conventional extended imaging. This approach leads to significant computational gains in certain situations, since we are no longer constrained by costs that scale with the number of shots and the number of subsurface points and offsets visited during the cross-correlation calculations. Instead, we circumvent these expensive explicit computations by carrying out the correlations implicitly through the wave-equation solves. As a result, we end up with a matrix-vector formulation from which different image gathers can be formed and with which amplitude-versus-angle and wave-equation migration-velocity analyses can be performed without requiring prior information on the geologic dip. We showed that these operations can be accomplished at affordable computational costs.

By means of concrete examples, we demonstrated how localized information on focussing and scattering amplitudes can be revealed by forming different extended image volumes in both 2-D and 3-D. Because full-subsurface extended image volumes are quadratic in the number of gridded subsurface parameters, it would be difficult if not impossible to obtain these results by conventional methods. We also verified that our matrix-vector formulation lends itself well to automatic migration-velocity analysis if Gaussian random probing vectors are used. These vectors act as simultaneous sources and allow for significant computational gains in the evaluation of global focusing objectives that are key to migration-velocity analysis. Instead of focussing in a particular offset direction, as in most current approaches, our objective and its gradient force full-subsurface extended image volumes to focus in all offset directions at all subsurface points. It accomplishes this by forcing a commutation relation between the extended image volume and a matrix that expresses the Euclidean distance between points in the subsurface.
The examples show that the computational costs can be controlled by probing. Application of this new automatic migration-velocity analysis technique to a complex synthetic shows encouraging results, in particular in regions with steep geological dips.

Table 6.1: Correspondence between continuous and discrete representations of the image volume. Here, $\omega$ represents frequency, $\mathbf{x}$ represents subsurface positions, and $(k, k')$ represent the subsurface grid points.

                       Continuous                                              Discrete
  full image volume    $e(\omega_i,\mathbf{x},\mathbf{x}')$                    $\mathbf{E}_i$
  migrated image       $\int_\Omega d\omega\; e(\omega,\mathbf{x},\mathbf{x})$   $\sum_{i=1}^{N_f}\mathrm{diag}(\mathbf{E}_i)$
  CIP                  $e(\omega_i,\mathbf{x}_k,\mathbf{x}_{k'})$              $e_{ikk'}$

Table 6.2: Computational complexity of the two schemes in terms of the number of sources $N_s$, receivers $N_r$, sample points $N_x$ and desired number of subsurface offsets in each direction $N_{h_{\{x,y,z\}}}$.

                   # of PDE solves    flops
  conventional     $2N_s$             $N_s N_{h_x} N_{h_y} N_{h_z}$
  this paper       $2N_x$             $N_r N_s$

Table 6.3: Comparison of the computational time (in seconds) and memory (in megabytes) for computing a CIP gather on a central part of the Marmousi model. We can see the significant difference in time and memory using the probing technique compared to the conventional method, and we expect this difference to be greatly exacerbated for realistically sized models.

                   time (s)    memory (MB)
  conventional     456         152
  this paper       23          5.1

Figure 6.1: Different slices through the 4-dimensional image volume e(z, z', x, x') around z = zk and x = xk. (a) Conventional image e(z, z, x, x); (b) image gather for horizontal and vertical offset e(z, z', xk, x'); (c) image gather for horizontal offset e(z, z, xk, x'); and (d) image gather for a single scattering point e(zk, z', xk, x'). (e-g) show how these slices are organized in the matrix representation of e.

Figure 6.2: Migrated images for a wrong (a) and the correct (b) background velocity are shown with 3 locations at which we extract CIPs for a wrong (c) and the correct (d) velocity. The CIPs contain many events that do not necessarily focus. However, these events are located along the line normal to the reflectors. Therefore, it seems feasible to generate multiple CIPs simultaneously as long as they are well separated laterally. A possible application of this is the extraction of CIGs at various lateral positions. CIGs at x = 1500 m and x = 2500 m for a wrong (e) and the correct (f) velocity indeed show little evidence of crosstalk, allowing us to compute several CIGs at the cost of a single CIG.

Figure 6.3: (a) Compass 3D synthetic velocity model provided to us by the BG Group. (b) A CIP gather at (x, y, z) = (1250, 1250, 390) m. The proposed method (Algorithm 2) is 1500 times faster than the classical method (Algorithm 1) in generating the CIP gather.

Figure 6.4: Cross-sections of the Compass 3D velocity model (Figure 6.3 (a)) along the (a) x and (b) y directions.

Figure 6.5: Slices extracted along the horizontal (a,b) and vertical (c) offset directions from the CIP gather shown in Figure 6.3 (b).

Figure 6.6: Schematic depiction of the scattering point and related positioning of the reflector.

Figure 6.7: (a) Horizontal one-layer velocity model and (b) constant density model. The CIP location is x = 1250 m and z = 400 m. (c) Modulus of the angle-dependent reflectivity coefficients at the CIP. The black lines are included to indicate the effective aperture at depth.
The red lines are the theoretical reflectivity coefficients and the blue lines are the wave-equation based reflectivity coefficients.

Figure 6.8: Angle-dependent reflectivity coefficients in case of the horizontal four-layer (a) velocity and (b) density model at x = 1250 m. Modulus of the angle-dependent reflectivity coefficients at (c) z = 200 m, (d) z = 600 m, (e) z = 1000 m, (f) z = 1400 m.

Figure 6.9: Estimation of the local geological dip. (a,b) Two-layer model. (c) CIP gather at x = 2250 m and z = 960 m overlaid on the dipping model. (d) Stack power versus dip angle. We can see that the maximum stack power corresponds to a dip value of 10.8°, which is close to the true dip value of 11°.

Figure 6.10: Modulus of the angle-dependent reflectivity coefficients in the two-layer model at z = 300 and 960 m and x = 2250 m. (a) Reflectivity coefficients at z = 300 m and x = 2250 m. Reflectivity coefficients at z = 960 m (b) with no dip (θ = 0°) and (c) with the dip obtained via the method described above (θ = 10.8°).

Figure 6.11: Comparison of working with CIGs versus CIPs. (a) True velocity model. The yellow lines indicate the locations along which we computed the CIGs and the green dots are the locations where we extracted the CIPs. (b,c) CIGs extracted along the vertical and horizontal offset directions in case of the vertical reflector. (d) CIP extracted at the vertical reflector (z = 1.2 km, x = 1 km). (e,f) CIGs extracted along the vertical and horizontal offset directions in case of the horizontal reflector. (g) CIP extracted at the horizontal reflector (z = 1.5 km, x = 4.48 km).

Figure 6.12: Randomized trace estimation. (a,b) True and initial velocity models. Objective functions for WEMVA based on the Frobenius norm, as a function of the velocity perturbation, using the complete matrix (blue line) and error bars of the approximated objective function evaluated via 5 different random probings with (c) K = 10 and (d) K = 80 for the Marmousi model.

Figure 6.13: WEMVA on the Marmousi model with the probing technique for a good starting model. (a,b) True and initial velocity models. Inverted models using (c) K = 10 and (d) K = 100, respectively. We can clearly see that even 10 probing vectors are good enough to start revealing the structural information.

Figure 6.14: WEMVA on the Marmousi model with the probing technique for a poor starting velocity model and an 8-25 Hz frequency band. (a,b) True and initial velocity models. (c) Inverted model using K = 100. (d) Inverted velocity model overlaid with a contour plot of the true model perturbation. We can see that we capture the shallow complexity of the model reasonably well when working with a realistic seismic acquisition and inversion scenario.

Chapter 7

Conclusions

In this chapter I summarize the main contributions of this thesis, propose follow-up work, discuss some limitations, and outline possible extensions.

7.1 Main contributions

In this thesis, I have developed fast computational techniques for large-scale seismic applications such as missing-trace interpolation, simultaneous source separation, and wave-equation based migration-velocity analysis.
In Chapters 2 and 3, I designed a large-scale singular value decomposition (SVD)-free factorization based rank-minimization approach, which builds upon the existing knowledge of compressed sensing as a successful signal-recovery paradigm, and outlined the necessary components of a low-rank domain, a rank-increasing sampling scheme, and an SVD-free rank-minimizing optimization scheme for successful missing-trace interpolation. Note that for the matrix completion approach, the limiting component at large scale is the nuclear-norm projection, which requires the computation of SVDs. For large-scale 5D seismic data, it is prohibitively expensive to compute singular value decompositions because the underlying matrix has tens of thousands or even millions of rows and columns. Hence, we designed an SVD-free factorization based approach for missing-trace interpolation and source separation. In Chapters 4 and 5, I extended the SVD-free rank-minimization framework to remove source cross-talk during simultaneous source acquisition, because subsequent seismic data processing and imaging assumes well-separated data. In Chapter 6, I proposed a matrix-vector formulation that overcomes the computational and storage costs of full subsurface-offset extended image volumes, which are prohibitively expensive to form for large-scale seismic imaging problems. This matrix-vector formulation avoids explicit storage of full subsurface-offset extended image volumes and removes the expensive loop over shots found in conventional extended imaging [Sava and Vasconcelos, 2011].

The remainder of this section provides more details about the aforementioned contributions.

7.1.1 SVD-free factorization based matrix completion

Current efforts towards imaging subsalt structures under complex overburdens have led to a move towards wide-azimuth towed streamer (WATS) acquisition. Due to budgetary and/or physical constraints, WATS is typically coarsely sampled (sub-sampled) along sources and/or receivers. However, most processing and imaging techniques require densely sampled seismic data on a regular periodic grid, and thus call for large-scale seismic data interpolation techniques. Following ideas from the field of compressed sensing [Donoho, 2006b, Candès et al., 2006], a variety of methodologies, each based on various mathematical techniques, have been proposed to interpolate seismic data, where the underlying principle is to exploit the sparse or low-rank structure of seismic data in a transform domain [Hennenfent et al., 2010, Kreimer, 2013]. However, these previous compressed sensing (CS)-based approaches, using sparsity or rank-minimization, incur computational difficulties when applied to large-scale seismic data volumes. For instance, methods that involve redundant transforms, such as curvelets [Hennenfent et al., 2010], or that add additional dimensions, such as taking outer products of tensors [Kreimer, 2013], are computationally no longer tractable for large data volumes with four or more dimensions.

From a theoretical point of view, the success of rank-minimization based techniques depends upon two main principles, namely a low-rankifying transform and a sub-Nyquist sampling strategy that subdues coherent aliases in the rank-revealing transform domain.
As identified in Chapter 2 (Figures 2.2 and 2.3), for 2D seismic data acquisition, seismic frequency slices exhibit low-rank structure (fast decay of the singular values) in the midpoint-offset domain, whereas subsampling increases the rank in the midpoint-offset domain, i.e., the singular values decay slowly. Hence, we can use rank-minimization based techniques to reconstruct the low-rank representation of seismic data in the midpoint-offset domain. Similarly, as shown in Chapter 3, 4D monochromatic frequency slices (extracted from a 3D seismic data acquisition) exhibit a low-rank structure in the (source-x, receiver-x) matricization compared to other possible matricizations, whereas subsampling destroys the low-rank structure in the (source-x, receiver-x) matricization (Figures 3.4 and 3.5). Here, matricization refers to an operation that reshapes a tensor into a matrix along specific dimensions. When these two principles hold, i.e., a low-rank domain and a rank-increasing sampling scheme, rank-minimization formulations allow the recovery of missing traces.

From a practical standpoint, we need a large-scale seismic data interpolation framework that can handle seismic data volumes on the order of $10^{10}$ to $10^{12}$ measurements. Unfortunately, one of the limitations of rank-minimization based techniques for large-scale seismic problems is the nuclear-norm projection, which inherently involves prohibitively expensive computations of singular value decompositions (SVDs). For this reason, I proposed, in Chapters 2 and 3, a practical framework for recovering missing traces in large-scale seismic data volumes using a matrix-factorization based approach [Lee et al., 2010b, Recht and Ré, 2011], where I avoided the need for expensive computations of SVDs. I showed that the proposed SVD-free factorization based framework can interpolate large-scale seismic data volumes with high fidelity from significantly subsampled, and therefore economically and environmentally friendly, seismic data volumes. In Chapter 3, I also demonstrated that the proposed factorization framework allows me to work with the fully sampled seismic data volume without relying on a windowing operation that divides 5D data up into small cubes, followed by normal-moveout corrections designed to remove the curvature of seismic reflection events, one of the more recent approaches in the missing-trace interpolation literature [Kreimer, 2013]. The reported results in Chapters 2 and 3 on realistic 3D and synthetic 5D seismic data interpolation demonstrate that, while the SVD-free factorization based approaches are comparable (in reconstruction quality) to the existing matrix completion [Becker et al., 2011, Wen et al., 2012] and sparsity-promotion based techniques that use sparsity in transform domains such as wavelets and curvelets [Herrmann and Hennenfent, 2008b], they significantly outperform these techniques in terms of computational time and memory requirements. A minimal illustration of the matricization underlying this low-rank structure is sketched below.
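As an illustration of the matricization referred to above, the following sketch unfolds a monochromatic 4-D frequency slice into its (source-x, receiver-x) matricization; the dimension ordering of the input tensor and the toy sizes are assumptions.

```python
import numpy as np

# Monochromatic 4-D frequency slice: freq_slice[src_x, src_y, rec_x, rec_y]
nsx, nsy, nrx, nry = 20, 20, 30, 30
freq_slice = np.random.default_rng(7).standard_normal((nsx, nsy, nrx, nry))

# (source-x, receiver-x) matricization: group (src_x, rec_x) along the rows
# and (src_y, rec_y) along the columns before unfolding into a matrix
X = freq_slice.transpose(0, 2, 1, 3).reshape(nsx * nrx, nsy * nry)

# The singular-value decay of this unfolding is what the rank-minimization
# formulation exploits; subsampling slows this decay.
sv = np.linalg.svd(X, compute_uv=False)
```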
Large-scale simultaneous source separation

Practitioners have also proposed simultaneous seismic data acquisition to mitigate sampling-related issues and improve the quality of seismic data, wherein one or more source vessels fire sources at near-simultaneous or slightly random times. This results in overlapping and missing shot records, as opposed to the non-overlapping shot records of conventional marine acquisition. Although simultaneous-source surveys yield improved-quality recorded seismic data volumes at a reduced acquisition turnaround time, the cost of this reduced acquisition is coherent artifacts from seismic cross-talk. These artifacts may degrade the quality of the final migrated images, because subsequent seismic data processing and imaging assume well-separated data acquired in the conventional (non-overlapping) way. Therefore, a practical (simultaneous) source separation and interpolation technique is required, which separates the overlapping shots and interpolates the missing shots. Following the compressed sensing principles outlined in Chapters 1 and 2 for the matrix completion framework, I found that fully sampled conventional seismic data has low-rank structure, i.e., quickly decaying singular values in the midpoint-offset domain, whereas overlapping and missing seismic data results in high-rank structure, i.e., slowly decaying singular values. For this reason, in Chapters 4 and 5, I developed a highly parallelizable singular value decomposition (SVD)-free matrix-factorization approach for source separation and missing-trace interpolation, and I showed that the proposed framework regains the low-rank structure of the separated and interpolated seismic data volumes. I tested the SVD-free framework on two different seismic data acquisition scenarios, namely static (receivers fixed at the ocean bottom) and dynamic (receivers moving with the sources) geometries. For the dynamic geometry, in Chapter 4, I investigated two instances with low variability in the source firing times, e.g., between 0 and 1 (or 2) seconds, namely over/under and simultaneous long offset acquisition (Figures 4.7 and 4.9). In Chapter 5, I investigated an instance of the static geometry with larger variability in the source firing times, e.g., more than 1 second, where a single source vessel sails across an ocean-bottom array firing two airgun arrays at jittered source locations and time instances, with the receivers recording continuously (Figure 5.1). For both cases, I showed that the proposed SVD-free rank-minimization framework is able to separate and interpolate the large-scale seismic data onto a desired grid with negligible loss of coherent energy, where the reconstruction quality is comparable to sparsity-promoting [Wason and Herrmann, 2013b] and NMO-based median filtering techniques [Chen et al., 2014]. I also demonstrated that the proposed SVD-free, factorization-based rank-minimization approach for source separation outperforms sparsity-promoting techniques by an order of magnitude in computational speed while using only 1/20th of the memory.
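For intuition, the following toy sketch shows how continuous recording with jittered firing times blends individual shot records into one long record; deblending then amounts to inverting this linear operator under a low-rank constraint. The function and its arguments are hypothetical and not the acquisition operator used in Chapters 4 and 5:

    import numpy as np

    def blend(shots, fire_times, dt, rec_len):
        """Sum shot records into one continuous record after delaying each
        by its jittered firing time. shots: (n_shots, nt, n_rec) array;
        fire_times in seconds; assumes every shot fits inside rec_len samples."""
        n_shots, nt, n_rec = shots.shape
        out = np.zeros((rec_len, n_rec))
        for s in range(n_shots):
            i0 = int(round(fire_times[s] / dt))
            out[i0:i0 + nt, :] += shots[s]   # overlapping shots add coherently
        return out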
Missing-trace interpolation without windowing

Seismic data is often acquired in large volumes, which makes its processing computationally expensive. For this reason, seismic practitioners follow workflows that involve: i) a normal-moveout (NMO) correction to remove the curvature of seismic reflection events, so that the events become approximately linear in small enough windows; ii) dividing the data into overlapping spatial-temporal windows; iii) performing a Fourier transform along the time coordinate within each window; iv) using a matrix- or tensor-based technique to interpolate the individual monochromatic slices; v) performing the inverse Fourier transform along the frequency coordinate within each window; and vi) combining all the windows to obtain the reconstructed seismic data volume. While this approach is computationally feasible, since it can be readily parallelized and has been used with success, several issues arise during the windowing process: i) one needs a reasonably accurate root-mean-square velocity model to perform the NMO correction, which may be difficult to compute when data is missing; ii) proper window-size selection depends upon the complexity of the seismic data, i.e., a large window size for linear reflection events and a smaller window size for complex seismic reflection events with curvature; and iii) averaging operations are required along the overlapping windows (see [Kreimer, 2013] for details). These issues are important because they may lead to underperformance.

One of the main contributions of this thesis was the design and implementation of an SVD-free matrix-factorization approach that exploits the inherent redundancy in seismic data while avoiding the customary preprocessing steps of windowing and NMO corrections. Seismic data is inherently redundant because we collect data on the same subsurface at different angles. In this thesis, I used this inherent redundancy to exploit the low-rank structure of seismic data, where the idea is to represent the complete seismic data volume using only the few singular vectors corresponding to the largest singular values. Note that this redundancy can only be exploited when the data is organized in a particular manner, namely by exploiting the low-rank structure of seismic data in the midpoint-offset domain for 2D seismic surveys, or in the (source-x, receiver-x) matricization for 3D seismic surveys. When data volumes are organized in this way, the singular values decay rapidly, which reflects the intrinsic redundancy exhibited by seismic data.

To illustrate this inherent redundancy, I considered a split-spread seismic acquisition over a complex geological model provided by the BG Group, where each source is recorded by all the receivers. This simulation resulted in seismic data with 1024 time samples, 60 x 60 sources, and 60 x 60 receivers. From this seismic volume, I extracted a monochromatic 4D tensor at 10 Hz, where the size of the tensor is 60 x 60 x 60 x 60, followed by an analysis of the decay of the singular values as a function of window size. To compare recovery within windows against interpolation over the complete, non-windowed survey, I computed the singular values as follows: i) I performed the (source-x, receiver-x) matricization (as explained in Chapter 3) on the 10 Hz monochromatic slice, resulting in a matrix with 3600 rows and 3600 columns; ii) I defined window sizes of 100, 200, 300, 400, 600, 900, 1200, 1800, and 3600; iii) for each window size, I extracted all possible sub-matrices and computed their corresponding singular values; and iv) I concatenated all the singular values of the sub-windowed matrices for a given window size and sorted them in descending order. In Figure 7.1(a), I compare the decay of the singular values of the 10 Hz monochromatic slice for the different window sizes, where we can see that i) the fully sampled, non-windowed data has the fastest decay of singular values, and ii) smaller window sizes result in a slower decay of the singular values, i.e., we need relatively more singular values to approximate the underlying fully sampled seismic data.

Figure 7.1: To understand the inherent redundancy of seismic data, we analyze the decay of normalized singular values for the windowed versus non-windowed cases. Fully sampled seismic data volumes have the fastest decay of singular values, whereas smaller window sizes result in a slower decay rate of the singular values.
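A minimal sketch of this experiment, assuming X holds the 3600 x 3600 (source-x, receiver-x) matricization of the 10 Hz slice, is:

    import numpy as np

    def windowed_singular_values(X, w):
        """Concatenate the singular values of all w-by-w blocks of X,
        sorted in descending order (w must divide X.shape[0])."""
        n = X.shape[0]
        vals = []
        for i in range(0, n, w):
            for j in range(0, n, w):
                vals.append(np.linalg.svd(X[i:i + w, j:j + w],
                                          compute_uv=False))
        return np.sort(np.concatenate(vals))[::-1]

    # Compare the decay across window sizes (3600 = the full matrix):
    # for w in (100, 200, 300, 400, 600, 900, 1200, 1800, 3600):
    #     s = windowed_singular_values(X, w)
    #     print(w, s[:5] / s[0])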
This simple experiment demonstrates that fully sampled, non-windowed monochromatic slices exhibit low-rank structure, because we can approximate them using the few singular vectors that correspond to the largest singular values with minimal loss of coherent seismic energy. Therefore, non-windowed seismic data should be used during missing-trace interpolation and source separation with rank-minimization based techniques, avoiding the customary windowing- and NMO-based preprocessing steps.

7.1.2 Enabling computation of omnidirectional subsurface extended image volumes

Image gathers contain kinematic and dynamic information on the interactions between pairs of subsurface points. We can use this information to design wave-equation based velocity analysis, amplitude-versus-angle inversion to estimate rock properties, and target-oriented imaging using redatuming techniques in geologically challenging environments to overcome the effects of complex overburden. Unfortunately, forming full-subsurface offset extended image volumes is prohibitively expensive because they are quadratic in the image size and therefore impossible to store. Apart from the storage impediments, the costs associated with computing extended image volumes scale with the number of shots, the number of subsurface points, and the number of subsurface offsets visited during the cross-correlation calculations. In Chapter 6, I proposed a computationally and memory-efficient way of gleaning information from full-subsurface offset extended image volumes, where I first organized the full-subsurface offset extended image volume as a matrix. Then, I computed the action of this matrix on a given vector without explicitly constructing the image volume. The proposed matrix-vector product removes the expensive loop over shots found in conventional methods, thus leading to significant computational and memory gains in certain situations. Using this matrix-vector formulation, I formed image gathers along all offset directions to perform amplitude-versus-angle and automatic wave-equation migration-velocity analyses without requiring prior information on the geologic dip. By means of concrete examples, I demonstrated that we can extract localized scattering-amplitude information from image volumes computed on 2D and 3D velocity models. I further validated the potential of probing techniques to perform automatic wave-equation migration-velocity analysis on a complex synthetic model with steep geological dips.
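The idea behind the matrix-vector product can be sketched in a few lines. Assuming (hypothetically) that monochromatic source and receiver wavefields are available as tall matrices S and R with one column per shot, the extended image volume is E = R S^H, and its action on a probing vector never requires forming E:

    import numpy as np

    # Hypothetical wavefields on an n-point subsurface grid for n_s shots:
    # S[:, k] is the source wavefield of shot k, R[:, k] the back-propagated
    # receiver wavefield. The full volume E = R @ S.conj().T is never formed.
    n, n_s = 10_000, 100
    rng = np.random.default_rng(1)
    S = rng.standard_normal((n, n_s)) + 1j * rng.standard_normal((n, n_s))
    R = rng.standard_normal((n, n_s)) + 1j * rng.standard_normal((n, n_s))

    def probe(S, R, w):
        """Action of E = R S^H on w via two tall-skinny products:
        O(n * n_s) work and memory instead of O(n^2)."""
        return R @ (S.conj().T @ w)

    w = np.zeros(n, dtype=complex)
    w[2_345] = 1.0           # point probe: extracts one column of E,
    gather = probe(S, R, w)  # i.e., a common-image-point gather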
7.2 Follow-up work

Although I tested the proposed SVD-free, factorization-based rank-minimization framework on realistic 3D and complex 5D synthetic data volumes, and compared it to existing matrix/tensor completion and sparsity-promoting interpolation techniques (such as curvelets), it would be interesting to see its application to a realistic 5D data set observed over complex geological structures such as salt. Since our SVD-free rank-minimization framework works with the full seismic data volumes without windowing, it is computationally infeasible to interpolate the complete data on a single computing node, i.e., to perform the interpolation in serial mode. Hence, we need to devise a parallel interpolation framework that can exploit the inherent redundancy of seismic data without performing the windowing operation, and that overcomes the computational bottleneck of handling large-scale seismic data volumes where the number of unknowns is of the order of 10^10 to 10^12. Finally, it would be good to see the impact of 5D seismic interpolation on various seismic post-processing workflows, such as surface-related multiple estimation, wavefield decomposition, migration, waveform inversion, target imaging, and uncertainty analysis.

7.3 Current limitations

One of the main requirements of the current rank-minimization approach is to find a transform domain in which the target recovered data exhibits low-rank structure. In this thesis, I showed that 2D monochromatic seismic slices exhibit low-rank structure in the midpoint-offset domain at the lower frequencies, but not at the higher frequencies. This behaviour is due to the increase in wave oscillations as we move from low- to high-frequency slices in the midpoint-offset domain, even though the energy of the wavefronts remains focused around the diagonal (see Figures 2.2 and 2.3 in Chapter 2). Therefore, interpolation via rank minimization in the high-frequency regime requires extended formulations that incorporate the low-rank structure. Kumar et al. [2013a] proposed to address this issue for 2D seismic data acquisition using the Hierarchically Semi-Separable (HSS) matrix representation [Chandrasekaran et al., 2006]. Although the initial results of HSS-based techniques for 3D seismic data interpolation and source separation are encouraging, it would be interesting to see their benefit for interpolating the higher frequencies in large-scale, realistic 5D seismic data volumes generated by 3D seismic acquisitions. Moreover, one of the limitations of the proposed SVD-free matrix-factorization approach is finding the rank parameter k associated with each low-rank factor. I estimated it using cross-validation techniques (see Chapter 3 for more details) in all the examples presented in this thesis; however, this is still not a practical approach when dealing with large-scale subsampled seismic data volumes, since it involves finding an appropriate small volume of seismic data on which to run the cross-validation, a computationally expensive process.
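A minimal sketch of such a cross-validation loop, reusing the hypothetical lr_complete routine from Section 7.1.1, hides a fraction of the observed entries and picks the rank with the lowest holdout error:

    import numpy as np

    def select_rank(data, mask, ranks, holdout_frac=0.1, seed=0):
        """Hide a fraction of the observed entries, fit the factorization
        for each candidate rank, and keep the rank with the smallest
        misfit on the hidden entries."""
        rng = np.random.default_rng(seed)
        obs = np.argwhere(mask)
        hold = obs[rng.random(len(obs)) < holdout_frac]
        train = mask.copy()
        train[hold[:, 0], hold[:, 1]] = False
        errs = {}
        for k in ranks:
            est = lr_complete(data * train, train, k=k)
            diff = (est[hold[:, 0], hold[:, 1]]
                    - data[hold[:, 0], hold[:, 1]])
            errs[k] = np.linalg.norm(diff)
        return min(errs, key=errs.get), errs

The expense noted above is visible here: every candidate rank requires a full factorization fit, which is why running this on a representative sub-volume is itself costly at scale.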
7.4 Future extensions

7.4.1 Extracting on-the-fly information

While the traditional interpolation and/or source separation approaches can deal with missing information and/or source cross-talk during seismic data acquisition, these processing methods produce massive seismic data volumes, on the order of terabytes to petabytes. This overwhelming amount of data makes subsequent imaging and inversion workflows daunting for realistic data sets, since extracting the various data gathers, such as common-source and common-receiver gathers, can be very challenging (input/output costs), apart from the cost of storing the massive volumes of interpolated and/or separated seismic data on disk. To overcome this computational and memory burden, we can design a fast, resilient, and scalable workflow, where we first compress the seismic data volumes using an SVD-free rank-minimizing optimization scheme. Then, we can access the information from the compressed volumes on the fly, i.e., extract the common-source and/or common-receiver gathers from the low-rank factors without forming the fully sampled seismic data volumes during the objective and gradient calculations of an inversion framework.
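A sketch of this on-the-fly access, under the simplifying assumption that a monochromatic slice is stored as low-rank factors L (n_src x k) and R (n_rec x k) in (source, receiver) ordering, is:

    import numpy as np

    # The full slice X = L @ R.T is never formed; single gathers are
    # reconstructed directly from the factors.
    def source_gather(L, R, isrc):
        """Row isrc of X: a monochromatic common-source gather."""
        return L[isrc, :] @ R.T   # O(n_rec * k) work

    def receiver_gather(L, R, irec):
        """Column irec of X: a monochromatic common-receiver gather."""
        return L @ R[irec, :]     # O(n_src * k) work

Because each gather costs only a thin matrix-vector product, the factors can be kept in memory while terabyte-scale dense volumes never need to be written to disk.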
7.4.2 Compressing full-subsurface offset extended image volumes

Even though the probing techniques circumvent the computational and storage requirements of forming the full-subsurface offset extended image volumes, we only have access to the information in the image volumes at the subsurface locations defined by the probing vectors. As a consequence, the proposed framework of probing vectors is only beneficial when the number of probing vectors is very small compared to the number of sources. To circumvent the computational requirement of forming the full-subsurface offset extended image volumes at every point in the subsurface, I proposed to exploit the low-rank structure of the image volumes. To understand this low-rank behaviour, I analyzed the decay of the singular values on a small section of the Marmousi model (Figure 7.2(a)). I chose this particular part of the model because it contains steeply dipping reflectors with strong lateral variations in the velocity. For this 2D model, the monochromatic full-subsurface offset extended image volume is a four-dimensional tensor with dimensions depth z, offset x, horizontal lag δx, and vertical lag δz. As outlined in Chapter 3, there is no unique generalization of the SVD to tensors and, as a result, no unique notion of rank for tensors. However, rank can be computed on the different matricizations of a tensor, where matricization reshapes a tensor into a matrix along specific dimensions. To analyze the decay of the singular values of each monochromatic full-subsurface offset extended image volume, I matricize this 4D tensor so that the depth z and vertical lag δz coordinates (z, δz) are grouped along the rows, and the offset x and horizontal lag δx coordinates are grouped along the columns. Figure 7.2(b) shows the matricized tensor and Figure 7.2(c) shows the decay of its singular values. I also tested the decay of the singular values for all possible matricizations, i.e., (z, δx), (z, x), (z, δz), and (δx, δz), and found that the (z, δz) matricization gives the fastest decay of the singular values. I further approximated the full-subsurface offset extended image volume with its first 10 singular vectors computed using the (z, δz) matricization, which results in a reconstruction error of 10^-4. This example demonstrates that, even for highly complex geological structures, the full-subsurface offset extended image volume over all subsurface points exhibits low-rank structure, which can be exploited to compress the image volume while still gleaning information from it during wave-equation migration-velocity analyses.

Figure 7.2: To visualize the low-rank nature of image volumes, I form a full-subsurface offset extended image volume using a subsection of the Marmousi model and analyze the decay of its singular values. (a) Complex subsection of the Marmousi model with steeply dipping reflectors and strong lateral variations in the velocity; (b) corresponding full-subsurface offset extended image volume at 5 Hz; (c) decay of the singular values, showing that only the first 10 singular vectors are required to reach a reconstruction error of 10^-4.

7.4.3 Comparison of MVA and FWI

Apart from performing velocity analysis using extended image volumes, which is known as migration velocity analysis (MVA) in the seismic literature, full-waveform inversion (FWI) is another useful tool for inverting for the velocity model of the subsurface. FWI is a nonlinear data-fitting procedure whose aim is to obtain velocity updates by minimizing the mismatch between the observed and predicted seismic data, where the predicted data is generated from an initial guess of the subsurface by solving a wave equation [Virieux and Operto, 2009]. Both MVA and FWI have their own pitfalls, which can lead to spurious artifacts in the inverted velocity. For example, FWI is mainly driven by the turning waves, whereas MVA is mainly driven by the reflected waves. FWI has a deeper depth of penetration for the lower frequencies but not for the higher frequencies, whereas MVA can provide velocity updates in the deeper part of the model at both lower and higher frequencies. Another well-known drawback of FWI is the occurrence of local minima in the misfit functional, which leads to erroneous artifacts in the velocity updates. This can be circumvented either by recording low frequencies in the observed data [Virieux and Operto, 2009], which are generally absent from seismic data, or by starting from a good initial velocity model that is sufficiently close to the true model. In future work, I would like to develop an inversion framework that has flavors of both FWI and MVA and helps us overcome some of these pitfalls.
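For reference, the FWI data-fitting problem mentioned above is commonly written in the standard least-squares form below (the notation is mine, following the general formulation surveyed in Virieux and Operto [2009]):

    \min_{\mathbf{m}} \; \phi(\mathbf{m})
      = \frac{1}{2} \sum_{i=1}^{n_s}
        \left\| \mathcal{F}_i(\mathbf{m}) - \mathbf{d}_i \right\|_2^2,

where m denotes the subsurface model parameters, d_i the observed data for source i, and F_i(m) the data predicted for source i by solving the wave equation. The non-convexity of F in m is the source of the local minima discussed above.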
Bibliography

AAPG. Seismic data display. American Association of Petroleum Geologists Wiki, January 24, 2017. URL http://wiki.aapg.org/Seismic data display.
R. Abma, T. Manning, M. Tanis, J. Yu, and M. Foster. High quality separation of simultaneous sources by sparse inversion. In 72nd EAGE Conference and Exhibition, 2010.
R. Abma, A. Ford, N. Rose-Innes, H. Mannaerts-Drew, and J. Kommedal. Continued development of simultaneous source acquisition for ocean bottom surveys. In 75th EAGE Conference and Exhibition, 2013.
P. Akerberg, G. Hampson, J. Rickett, H. Martin, and J. Cole. Simultaneous source separation by sparse radon transform. SEG Technical Program Expanded Abstracts, 27(1):2801–2805, 2008.
K. Aki and P. G. Richards. Quantitative seismology. Freeman and Co., New York, 1980.
A. Aravkin, M. Friedlander, F. Herrmann, and T. van Leeuwen. Robust inversion, dimensionality reduction, and randomized sampling. Mathematical Programming, 134(1):101–125, 2012.
A. Aravkin, J. Burke, and M. Friedlander. Variational properties of value functions. SIAM Journal on Optimization, 23(3):1689–1717, 2013a.
A. Aravkin, R. Kumar, H. Mansour, B. Recht, and F. J. Herrmann. Fast methods for denoising matrix completion formulations, with applications to robust seismic data interpolation. SIAM Journal on Scientific Computing, 36(5):S237–S266, 2014a.
A. Y. Aravkin, J. V. Burke, and M. P. Friedlander. Variational properties of value functions. SIAM Journal on Optimization, 23(3):1689–1717, 2013b.
A. Y. Aravkin, R. Kumar, H. Mansour, B. Recht, and F. J. Herrmann. Fast methods for denoising matrix completion formulations, with applications to robust seismic data interpolation. SIAM Journal on Scientific Computing, 36(5):S237–S266, 2014b.
H. Avron and S. Toledo. Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix. Journal of the ACM, 58(2):P1–P16, 2011.
R. H. Baardman and R. G. van Borselen. Method and system for separating seismic sources in marine simultaneous shooting acquisition. Patent Application EP 2592439 A2, 2013.
V. Bardan. Trace interpolation in seismic data processing. Geophysical Prospecting, 35(4):343–358, 1987.
E. Baysal, D. D. Kosloff, and J. W. Sherwood. Reverse time migration. Geophysics, 48(11):1514–1524, 1983.
C. J. Beasley. A new look at marine simultaneous sources. The Leading Edge, 27(7):914–917, 2008.
C. J. Beasley, R. E. Chambers, and Z. Jiang. A new look at simultaneous sources. SEG Technical Program Expanded Abstracts, 17(1):133–135, 1998.
S. R. Becker, E. J. Candès, and M. C. Grant. Templates for convex cone problems with applications to sparse signal recovery. Mathematical Programming Computation, 3(3):165–218, 2011.
E. van den Berg and M. P. Friedlander. Probing the Pareto frontier for basis pursuit solutions. SIAM Journal on Scientific Computing, 31(2):890–912, 2008.
E. van den Berg and M. P. Friedlander. Sparse optimization with least-squares constraints. SIAM Journal on Optimization, 21(4):1201–1229, 2011.
A. Berkhout. Changing the mindset in seismic data acquisition. The Leading Edge, 27(7):924–938, 2008a.
A. Berkhout and D. Verschuur. Imaging of multiple reflections. Geophysics, 71(4):SI209–SI220, 2006.
A. J. Berkhout. A unified approach to acoustical reflection imaging. I: The forward model. The Journal of the Acoustical Society of America, 93(4):2005–2016, 1993.
A. J. Berkhout. Changing the mindset in seismic data acquisition. The Leading Edge, 27(7):924–938, 2008b.
F. J. Billette and S. Brandsberg-Dahl. The 2004 BP velocity benchmark. In 67th EAGE Conference & Exhibition, 2004.
B. Biondi and W. W. Symes. Angle-domain common-image gathers for migration velocity analysis by wavefield-continuation imaging. Geophysics, 69(5):1283, 2004.
B. Biondi, P. Sava, et al. Wave-equation migration velocity analysis. In 69th Annual International Meeting, Society of Exploration Geophysicists, pages 1723–1726, 1999.
A. Bourgeois, M. Bourget, P. Lailly, M. Poulet, P. Ricarte, and R. Versteeg. Marmousi, model and data. In 1990 Workshop on Practical Aspects of Seismic Data Inversion, EAGE, 1991.
S. Brandsberg-Dahl, M. de Hoop, and B. Ursin. Focusing in dip and AVA compensation on scattering-angle/azimuth common image gathers. Geophysics, 68(1):232–254, 2003.
S. Burer and R. D. Monteiro. Local minima and convergence in low-rank semidefinite programming. Mathematical Programming, 103(3):427–444, 2005.
S. Burer and R. D. C. Monteiro. Local minima and convergence in low-rank semidefinite programming. Mathematical Programming, 103, 2003.
J. Caldwell and C. Walker. An overview of marine seismic operations. The International Association of Oil and Gas Producers, January 24, 2017. URL http://www.ogp.org.uk/pubs/448.pdf.
E. J. Candès and L. Demanet. The curvelet representation of wave propagators is optimally sparse. Communications on Pure and Applied Mathematics, 58(11):1472–1528, 2005.
E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
E. J. Candès and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies. IEEE Transactions on Information Theory, 52(12):5406–5425, 2006.
E. J. Candès, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, 2006.
E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM, 58(3), 2011.
E. J. Candès, T. Strohmer, and V. Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 66(8):1241–1274, 2013.
A. Canning and G. Gardner. Reducing 3-D acquisition footprint for 3-D DMO and 3-D prestack migration. Geophysics, 63(4):1177–1183, 1998.
S. Chandrasekaran, P. Dewilde, M. Gu, W. Lyons, and T. Pals. A fast solver for HSS representations via sparse matrices. SIAM Journal on Matrix Analysis and Applications, 29(1):67–81, 2006.
C.-H. Chang and C.-W. Ha. Sharp inequalities of singular values of smooth kernels. Integral Equations and Operator Theory, 35(1):20–27, 1999.
H. Chauris, M. S. Noble, G. Lambaré, and P. Podvin. Migration velocity analysis from locally coherent events in 2-D laterally heterogeneous media, part I: Theoretical aspects. Geophysics, 67(4):1202–1212, 2002.
Y. Chen, J. Yuan, Z. Jin, K. Chen, and L. Zhang. Deblending using normal moveout and median filtering in common-midpoint gathers. Journal of Geophysics and Engineering, 11(4):045012, 2014.
J. Cheng and M. D. Sacchi. Separation of simultaneous source data via iterative rank reduction. In SEG Technical Program Expanded Abstracts, pages 88–93, 2013.
J. Claerbout. Imaging the Earth's interior. Blackwell Scientific Publishers, 1985a.
J. F. Claerbout. Coarse grid calculations of waves in inhomogeneous media with application to delineation of complicated seismic structure. Geophysics, 35(3):407–418, 1970.
J. F. Claerbout. Fundamentals of geophysical data processing. Pennwell Books, Tulsa, OK, 1985b.
W. J. Curry. Interpolation with Fourier radial adaptive thresholding. In 79th Annual International Meeting, SEG, Expanded Abstracts, pages 3259–3263, 2009.
C. de Bruin, C. Wapenaar, and A. Berkhout. Angle-dependent reflectivity by means of prestack migration. Geophysics, 55(9):1223, 1990.
R. de Kok and D. Gillespie. A universal simultaneous shooting technique. In 64th EAGE Conference and Exhibition, 2002.
L. Demanet. Curvelets, Wave Atoms, and Wave Equations. PhD thesis, California Institute of Technology, 2006a.
L. Demanet. Curvelets, Wave Atoms, and Wave Equations. PhD thesis, California Institute of Technology, 2006b.
S. M. Doherty and J. F. Claerbout. Velocity analysis based on the wave equation. Technical Report 1, Stanford Exploration Project, 1974.
D. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006a.
D. L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006b.
P. Doulgeris, K. Bube, G. Hampson, and G. Blacquiere. Convergence analysis of a coherency-constrained inversion for the separation of blended data. Geophysical Prospecting, 60(4):769–781, 2012.
A. A. Duchkov and V. Maarten. Velocity continuation in the downward continuation approach to seismic imaging. Geophysical Journal International, 176(3):909–924, 2009.
A. Duijndam, M. Schonewille, and C. Hindriks. Reconstruction of band-limited signals, irregularly sampled along one spatial direction. Geophysics, 64(2):524–538, 1999.
B. Engquist and L. Ying. Fast directional multilevel algorithms for oscillatory kernels. SIAM Journal on Scientific Computing, 29(4):1710–1737, 2007.
J.-M. Enjolras. Total pioneers cable-less 3D seismic surveys in Uganda. Oil in Uganda, January 24, 2017. URL http://www.oilinuganda.org/features/environment/uganda-pioneers-3d-seismic-surveys.html.
M. Fazel. Matrix rank minimization with applications. PhD thesis, Stanford University, 2002.
M. Figueiredo, R. Nowak, and S. Wright. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE Journal of Selected Topics in Signal Processing, 1(4):586–597, 2007.
M. Friedlander, H. Mansour, R. Saab, and O. Yilmaz. Recovering compressively sampled signals using partial support information. IEEE Transactions on Information Theory, 58(1), 2011.
M. P. Friedlander and M. Schmidt. Hybrid deterministic-stochastic methods for data fitting. SIAM Journal on Scientific Computing, 34(3):A1380–A1405, 2012.
S. Funk. Netflix update: Try this at home, December 2006. URL http://sifter.org/~simon/journal/20061211.html.
S. Gandy, B. Recht, and I. Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27(2):025010, 2011.
R. Giryes, M. Elad, and Y. C. Eldar. The projected GSURE for automatic parameter tuning in iterative shrinkage methods. Applied and Computational Harmonic Analysis, 30(3):407–422, 2011.
D. Gross. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57:1548–1566, 2011.
E. Haber, M. Chung, and F. Herrmann. An effective method for parameter estimation with PDE constraints with multiple right-hand sides. SIAM Journal on Optimization, 22(3):739–757, 2012.
N. Halko, P.-G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011a.
N. Halko, P.-G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011b.
N. Halko, P.-G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011c.
G. Hampson, J. Stefani, and F. Herkenhoff. Acquisition using simultaneous sources. The Leading Edge, 27(7):918–923, 2008.
S. Hegna and G. E. Parkes. Method for acquiring and processing marine seismic data to extract and constructively use the up-going and down-going wave-fields emitted by the source(s). US Patent Application 13/686,408, November 2012.
G. Hennenfent and F. J. Herrmann. Seismic denoising with nonuniformly sampled curvelets. Computing in Science & Engineering, 8(3):16–25, 2006a.
G. Hennenfent and F. J. Herrmann. Application of stable signal recovery to seismic data interpolation. In SEG Technical Program Expanded Abstracts, volume 25, pages 2797–2801, 2006b.
G. Hennenfent, E. van den Berg, M. P. Friedlander, and F. J. Herrmann. New insights into one-norm solvers from the Pareto curve. Geophysics, 73(4):A23–A26, 2008.
G. Hennenfent, L. Fenelon, and F. J. Herrmann. Nonequispaced curvelet transform for seismic data reconstruction: A sparsity-promoting approach. Geophysics, 75(6):WB203–WB210, 2010.
F. J. Herrmann and G. Hennenfent. Non-parametric seismic data recovery with curvelet frames. Geophysical Journal International, 173(1):233–248, 2008a.
F. J. Herrmann and G. Hennenfent. Non-parametric seismic data recovery with curvelet frames. Geophysical Journal International, 173:233–248, 2008b.
F. J. Herrmann, M. P. Friedlander, and O. Yilmaz. Fighting the curse of dimensionality: Compressive sensing in exploration seismology. IEEE Signal Processing Magazine, 29(3):88–100, 2012a.
F. J. Herrmann, M. P. Friedlander, and O. Yilmaz. Fighting the curse of dimensionality: Compressive sensing in exploration seismology. IEEE Signal Processing Magazine, 29:88–100, 2012b.
D. Hill, C. Combee, and J. Bacon. Over/under acquisition and data processing: the next quantum leap in seismic technology? First Break, 24(6):81–95, 2006.
P. Huber. Robust Statistics. Wiley, 1981.
S. Huo, Y. Luo, and P. Kelamis. Simultaneous sources separation via multi-directional vector-median filter. SEG Technical Program Expanded Abstracts, 28(1):31–35, 2009.
P. Jain, R. Meka, and I. Dhillon. Guaranteed rank minimization via singular value projection. In NIPS, 2010.
P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, STOC '13, pages 665–674, 2013.
B. Jumah and F. J. Herrmann. Dimensionality-reduced estimation of primaries by sparse inversion. Geophysical Prospecting, 62(5):972–993, 2014.
M. N. Kabir and D. Verschuur. Restoration of missing offsets by parabolic radon transform. Geophysical Prospecting, 43(3):347–368, 1995.
B. Kanagal and V. Sindhwani. Rank selection in low-rank matrix approximations: A study of cross-validation for NMFs. Advances in Neural Information Processing Systems, 2010.
O. Koefoed. On the effect of Poisson's ratios of rock strata on the reflection coefficients of plane waves. Geophysical Prospecting, 3(4):381–387, 1955.
Z. Koren and I. Ravve. Full-azimuth subsurface angle domain wavefield decomposition and imaging part I: Directional and reflection image gathers. Geophysics, 76(1):S1–S13, 2011.
J. Krebs, J. Anderson, D. Hinkley, R. Neelamani, S. Lee, A. Baumstein, and M. Lacasse. Fast full-wavefield seismic inversion using encoded sources. Geophysics, 74(6):P177–P188, 2009.
N. Kreimer. Multidimensional seismic data reconstruction using tensor analysis. 2013.
N. Kreimer and M. Sacchi. A tensor higher-order singular value decomposition for prestack seismic data noise reduction and interpolation. Geophysics, 77(3):V113–V122, 2012a.
N. Kreimer and M. D. Sacchi. A tensor higher-order singular value decomposition for prestack seismic data noise reduction and interpolation. Geophysics, 77(3):V113–V122, 2012b.
N. Kreimer and M. D. Sacchi. Tensor completion via nuclear norm minimization for 5D seismic data reconstruction. In 83rd Annual International Meeting, SEG, Expanded Abstracts, pages 1–5, 2012c.
N. Kreimer, A. Stanton, and M. D. Sacchi. Tensor completion based on nuclear norm minimization for 5D seismic data reconstruction. Geophysics, 78(6):V273–V284, 2013.
H. Kuhel and M. Sacchi. Least-squares wave-equation migration for AVP/AVA inversion. Geophysics, 68(1):262–273, 2003.
R. Kumar, H. Mansour, A. Y. Aravkin, and F. J. Herrmann. Reconstruction of seismic wavefields via low-rank matrix factorization in the hierarchical-separable matrix representation. In 84th Annual International Meeting, SEG, Expanded Abstracts, pages 3628–3633, 2013a.
R. Kumar, T. van Leeuwen, and F. J. Herrmann. Efficient WEMVA using extended images. In SEG Workshop on Advances in Model Building, Imaging, and FWI, Houston, 2013b.
R. Kumar, A. Y. Aravkin, E. Esser, H. Mansour, and F. J. Herrmann. SVD-free low-rank matrix factorization: wavefield reconstruction via jittered subsampling and reciprocity. In 76th Conference and Exhibition, EAGE, Extended Abstracts, 2014.
R. Kumar, C. D. Silva, O. Akalin, A. Y. Aravkin, H. Mansour, B. Recht, and F. J. Herrmann. Efficient matrix completion for seismic data reconstruction. Geophysics, 80(5):V97–V114, 2015a.
R. Kumar, H. Wason, and F. J. Herrmann. Source separation for simultaneous towed-streamer marine acquisition - a compressed sensing approach. Geophysics, 80(6):WD73–WD88, 2015b.
R. Lago, A. Petrenko, Z. Fang, and F. J. Herrmann. Fast solution of time-harmonic wave-equation for full-waveform inversion. In EAGE Annual Conference Proceedings, 2014.
K. L. Lange, R. J. A. Little, and J. M. G. Taylor. Robust statistical modeling using the t distribution. Journal of the American Statistical Association, 84(408):881–896, 1989.
R. M. Lansley, M. Berraki, and M. M. M. Gros. Seismic array with spaced sources having variable pressure. US Patent Application 12/998,723, September 2007.
A. Lee, B. Recht, R. Salakhutdinov, N. Srebro, and J. A. Tropp. Practical large-scale optimization for max-norm regularization. In Advances in Neural Information Processing Systems, 2010a.
J. Lee, B. Recht, R. Salakhutdinov, N. Srebro, and J. Tropp. Practical large-scale optimization for max-norm regularization. In Advances in Neural Information Processing Systems 23, pages 1297–1305, 2010b.
S. A. Levin. Principle of reverse-time migration. Geophysics, 49(5):581–583, 1984.
C. Li, C. C. Mosher, L. C. Morley, Y. Ji, and J. D. Brewer. Joint source deblending and reconstruction for seismic data. In SEG Technical Program Expanded Abstracts, pages 82–87, 2013.
E. Liberty, F. Woolfe, P.-G. Martinsson, V. Rokhlin, and M. Tygert. Randomized algorithms for the low-rank approximation of matrices. Proceedings of the National Academy of Sciences, 104(51):20167–20172, 2007.
Z. Lin and S. Wei. A block Lanczos with warm start technique for accelerating nuclear norm minimization algorithms. CoRR, abs/1012.0365, 2010.
A. Long. A new seismic method to significantly improve deeper data character and interpretability. 2009.
A. S. Long, E. von Abendorff, M. Purves, J. Norris, and A. Moritz. Simultaneous long offset (SLO) towed streamer seismic acquisition. In 75th EAGE Conference & Exhibition, 2013.
S. Lu, N. Whitmore, A. Valenciano, and N. Chemingui. Illumination from 3D imaging of multiples: An analysis in the angle domain. In 84th SEG Annual Meeting, Denver, USA, Expanded Abstracts, 2014.
S. MacKay and R. Abma. Imaging and velocity analysis with depth-focusing analysis. Geophysics, 57(12):1608–1622, 1992.
A. Mahdad, P. Doulgeris, and G. Blacquiere. Separation of blended data by iterative estimation and subtraction of blending interference noise. Geophysics, 76(3):Q9–Q17, 2011.
F. Mahmoudian and G. F. Margrave. A review of angle domain common image gathers. Technical report, University of Calgary, 2009.
M. W. Mahoney. Randomized algorithms for matrices and data. Foundations and Trends in Machine Learning, 3(2):123–224, 2011.
J. Mairal, M. Elad, and G. Sapiro. Sparse representation for color image restoration. IEEE Transactions on Image Processing, 17(1):53–69, 2008.
H. Mansour, R. Saab, P. Nasiopoulos, and R. Ward. Color image desaturation using sparse reconstruction. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 778–781, 2010.
H. Mansour, F. J. Herrmann, and O. Yilmaz. Improved wavefield reconstruction from randomized sampling via weighted one-norm minimization. Submitted to Geophysics, 2012a.
H. Mansour, H. Wason, T. T. Lin, and F. J. Herrmann. Randomized marine acquisition with compressive sampling matrices. Geophysical Prospecting, 60(4):648–662, 2012b.
H. Mansour, H. Wason, T. T. Lin, and F. J. Herrmann. Randomized marine acquisition with compressive sampling matrices. Geophysical Prospecting, 60(4):648–662, 2012c.
H. Mansour, F. J. Herrmann, and O. Yilmaz. Improved wavefield reconstruction from randomized sampling via weighted one-norm minimization. Geophysics, 78(5):V193–V206, 2013.
M. Maraschini, R. Dyer, K. Stevens, and D. Bird. Source separation by iterative rank reduction - theory and applications. In 74th EAGE Conference and Exhibition, 2012.
R. A. Maronna, D. Martin, and V. Yohai. Robust Statistics. Wiley Series in Probability and Statistics. Wiley, 2006.
B. Mishra, G. Meyer, F. Bach, and R. Sepulchre. Low-rank optimization with trace norm penalty. SIAM Journal on Optimization, 23(4):2124–2149, 2013.
N. Moldoveanu and S. Fealy. Multi-vessel coil shooting acquisition. Patent Application US 20100142317 A1, 2010.
N. Moldoveanu and J. Quigley. Random sampling for seismic acquisition. In 73rd EAGE Conference & Exhibition, 2011.
N. Moldoveanu, L. Combee, M. Egan, G. Hampson, L. Sydora, and W. Abriel. Over/under towed-streamer acquisition: A method to extend seismic bandwidth to both higher and lower frequencies. The Leading Edge, 26:41–58, 2007.
I. Moore. Simultaneous sources - processing and applications. In 72nd EAGE Conference and Exhibition, 2010.
I. Moore, B. Dragoset, T. Ommundsen, D. Wilson, C. Ward, and D. Eke. Simultaneous source separation using dithered sources. SEG Technical Program Expanded Abstracts, 27(1):2806–2810, 2008.
C. Mosher, C. Li, L. Morley, Y. Ji, F. Janiszewski, R. Olson, and J. Brewer. Increasing the efficiency of seismic data acquisition via compressive sensing. The Leading Edge, 33(4):386–391, 2014.
W. Mulder. Subsurface offset behaviour in velocity analysis with extended reflectivity images. Geophysical Prospecting, 62(1):17–33, 2014.
R. N. Neelamani, C. E. Krohn, J. R. Krebs, J. K. Romberg, M. Deffenbaugh, and J. E. Anderson. Efficient seismic forward modeling using simultaneous random sources and sparsity. Geophysics, 75(6):WB15–WB27, 2010.
J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 2000.
V. Oropeza and M. Sacchi. Simultaneous seismic data denoising and reconstruction via multichannel singular spectrum analysis. Geophysics, 76(3):V25–V32, 2011.
A. B. Owen and P. O. Perry. Bi-cross-validation of the SVD and the nonnegative matrix factorization. The Annals of Applied Statistics, 3(2):564–594, 2009.
S. Oymak, A. Jalali, M. Fazel, Y. C. Eldar, and B. Hassibi. Simultaneously structured models with application to sparse and low-rank matrices. IEEE Transactions on Information Theory, 2012.
M. Prucha, B. Biondi, W. Symes, et al. Angle-domain common image gathers by wave-equation migration. In 69th Annual International Meeting, Society of Exploration Geophysicists, pages 824–827, 1999.
B. Recht. A simpler approach to matrix completion. The Journal of Machine Learning Research, 7:3413–3430, 2011.
B. Recht and C. Ré. Parallel stochastic gradient algorithms for large-scale matrix completion. In Optimization Online, 2011.
B. Recht and C. Ré. Parallel stochastic gradient algorithms for large-scale matrix completion. Mathematical Programming Computation, 5(2):201–226, 2013.
B. Recht, M. Fazel, and P. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010a.
B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum rank solutions to linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010b.
J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd International Conference on Machine Learning, ICML '05, pages 713–719, 2005a.
J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd International Conference on Machine Learning, ICML '05, pages 713–719, 2005b.
J. Rickett and P. Sava. Offset and angle domain common image-point gathers for shot-profile migration. Geophysics, 67(3):883–889, 2002.
RigZone. How does marine seismic work. RigZone, January 24, 2017. URL http://www.rigzone.com/training/insight.asp-insight id=303.
R. Rockafellar and R.-B. Wets. Variational Analysis, volume 317. Springer, 1998.
M. Sacchi, T. Ulrych, and C. Walker. Interpolation and extrapolation using a high-resolution discrete Fourier transform. IEEE Transactions on Signal Processing, 46(1):31–38, 1998.
M. Sacchi, S. Kaplan, and M. Naghizadeh. F-x Gabor seismic data reconstruction. In 71st Conference and Exhibition, EAGE, Extended Abstracts, 2009.
M. D. Sacchi and B. Liu. Minimum weighted norm wavefield reconstruction for AVA imaging. Geophysical Prospecting, 53(6):787–801, 2005.
P. Sava and S. Fomel. Angle-domain common-image gathers by wavefield continuation methods. Geophysics, 68(3):1065–1074, 2003.
P. Sava and I. Vasconcelos. Extended imaging conditions for wave-equation migration. Geophysical Prospecting, 59(1):35–55, 2011.
P. C. Sava and B. Biondi. Wave-equation migration velocity analysis. I. Theory. Geophysical Prospecting, 52:593–606, 2004.
P. C. Sava and S. Fomel. Time-shift imaging condition in seismic migration. Geophysics, 71(6):S209–S217, 2006.
H. Schaeffer and S. Osher. A low patch-rank interpretation of texture. SIAM Journal on Imaging Sciences, 6(1):226–262, 2013.
P. Shen and W. W. Symes. Automatic velocity analysis via shot profile migration. Geophysics, 73(5):VE49–VE59, 2008.
R. Shuey. A simplification of the Zoeppritz equations. Geophysics, 50(4):609–614, 1985.
M. Signoretto, R. Van de Plas, B. De Moor, and J. A. Suykens. Tensor versus matrix completion: a comparison with application to spectral data. Signal Processing Letters, IEEE, 18(7):403–406, 2011.
C. D. Silva and F. J. Herrmann. Hierarchical Tucker tensor optimization - applications to 4D seismic data interpolation. In EAGE Annual Conference Proceedings, 2013a.
C. D. Silva and F. J. Herrmann. Structured tensor missing-trace interpolation in the Hierarchical Tucker format. In SEG Technical Program Expanded Abstracts, volume 32, pages 3623–3627, 2013b.
C. D. Silva and F. J. Herrmann. Hierarchical Tucker tensor optimization - applications to 4D seismic data interpolation. In EAGE Annual Conference Proceedings, 2013c.
S. Sinha, P. S. Routh, P. D. Anno, and J. P. Castagna. Spectral decomposition of seismic data with continuous-wavelet transform. Geophysics, 70(6):P19–P25, 2005.
H. F. Smith. A Hardy space for Fourier integral operators. Journal of Geometric Analysis, 8(4):629–653, 1998.
N. Srebro. Learning with matrix factorizations. PhD thesis, Massachusetts Institute of Technology, 2004.
J.-L. Starck, M. Elad, and D. Donoho. Image decomposition via the combination of sparse representation and a variational approach. IEEE Transactions on Image Processing, 14(10), 2005.
J. Stefani, G. Hampson, and F. Herkenhoff. Acquisition using simultaneous sources. In 69th EAGE Conference and Exhibition, 2007.
C. C. Stolk and W. W. Symes. Smooth objective functionals for seismic velocity inversion. Inverse Problems, 19:73–89, 2003.
C. C. Stolk, M. V. de Hoop, and W. W. Symes. Kinematics of shot-geophone migration. Geophysics, 74(6):WCA19–WCA34, 2009.
M. Stoll. A Krylov-Schur approach to the truncated SVD. Linear Algebra and its Applications, 436(8):2795–2806, 2012.
R. Sun and Z.-Q. Luo. Guaranteed matrix completion via non-convex factorization. arXiv preprint, http://arxiv.org/abs/1411.8003, 2014.
W. Symes. Approximate linearized inversion by optimal scaling of prestack depth migration. Geophysics, 73(2):R23–R35, 2008a.
W. Symes. Seismic inverse problems: recent developments in theory and practice. In A. Louis, S. Arridge, and B. Rundell, editors, Proceedings of the Inverse Problems from Theory to Applications Conference, pages 2–6. IOP Publishing, 2014.
W. W. Symes. Migration velocity analysis and waveform inversion. Geophysical Prospecting, 56(6):765–790, 2008b.
W. W. Symes and J. J. Carazzone. Velocity inversion by differential semblance optimization. Geophysics, 56(5):654–663, 1991.
W. W. Symes, D. Sun, and M. Enriquez. From modelling to inversion: designing a well-adapted simulator. Geophysical Prospecting, 59(5):814–833, 2011a.
W. W. Symes, D. Sun, and M. Enriquez. From modelling to inversion: designing a well-adapted simulator. Geophysical Prospecting, 59(5):814–833, 2011b.
A. ten Kroode, D.-J. Smit, and A. Verdel. Linearized inverse scattering in the presence of caustics. In SPIE's 1994 International Symposium on Optics, Imaging, and Instrumentation, pages 28–42. International Society for Optics and Photonics, 1994.
P. Torben Hoy. A step change in seismic imaging - using a unique ghost free source and receiver system. CSEG GeoConvention, 2013.
D. Trad. Five-dimensional interpolation: Recovering from acquisition constraints. Geophysics, 74(6):V123–V132, 2009.
S. Trickett and L. Burroughs. Prestack rank-reducing noise suppression: Theory. Society of Exploration Geophysicists, 2009.
S. Trickett, L. Burroughs, A. Milton, L. Walton, and R. Dack. Rank reduction based trace interpolation. In 80th Annual International Meeting, SEG, Expanded Abstracts, pages 3829–3833, 2010.
S. Trickett, L. Burroughs, and A. Milton. Interpolation using Hankel tensor completion. In 83rd Annual International Meeting, SEG, Expanded Abstracts, pages 3634–3638, 2013.
T. van Leeuwen and F. J. Herrmann. 3D frequency-domain seismic inversion with controlled sloppiness. SIAM Journal on Scientific Computing, 36(5):S192–S217, 2014.
T. van Leeuwen, A. Y. Aravkin, and F. J. Herrmann. Seismic waveform inversion by stochastic optimization. International Journal of Geophysics, 2011.
A. van Wijngaarden. Imaging and Characterization of Angle-dependent Seismic Reflection Data. PhD thesis, Delft University of Technology, 1998.
B. Vandereycken. Low-rank matrix completion by Riemannian optimization. SIAM Journal on Optimization, 23(2):1214–1236, 2013.
J. Virieux and S. Operto. An overview of full-waveform inversion in exploration geophysics. Geophysics, 74(6):WCC1–WCC26, 2009.
J. Wang, M. Ng, and M. Perz. Seismic data interpolation by greedy local radon transform. Geophysics, 75(6):WB225–WB234, 2010.
L. Wang, J. Gao, W. Zhao, and X. Jiang. Nonstationary seismic deconvolution by adaptive molecular decomposition. In Geoscience and Remote Sensing Symposium (IGARSS), pages 2189–2192. IEEE, 2011.
In Geoscience and Remote Sensing Symposium (IGARSS), pages2189–2192. IEEE Transactions on Information Theory, 2011. → pages 50167H. Wason and F. J. Herrmann. Ocean bottom seismic acquisition via jittered sampling. In EAGE,06 2013a. doi: 10.3997/2214-4609.20130379. URL https://www.slim.eos.ubc.ca/Publications/Public/Conferences/EAGE/2013/wason2013EAGEobs/wason2013EAGEobs.pdf. → pages 79, 96H. Wason and F. J. Herrmann. Time-jittered ocean bottom seismic acquisition. 32:1–6, 9 2013b.doi: 10.1190/segam2013-1391.1. URL https://www.slim.eos.ubc.ca/Publications/Public/Conferences/SEG/2013/wason2013SEGtjo/wason2013SEGtjo.pdf. → pages 5, 78, 79, 80, 97, 105, 106, 146Z. Wen, W. Yin, and Y. Zhang. Solving a low-rank factorization model for matrix completion bya nonlinear successive over-relaxation algorithm. Mathematical Programming Computation, 4(4):333–361, 2012. → pages 70, 145N. Whitmore et al. Iterative depth migration by backward time propagation. In 1983 SEGAnnual Meeting. Society of Exploration Geophysicists, 1983. → pages 113T. Yang and P. Sava. Image-domain wavefield tomography with extended common-image-pointgathers. Geophysical Prospecting, 63(5):1086–1096, 2015. ISSN 1365-2478. doi:10.1111/1365-2478.12204. URL http://dx.doi.org/10.1111/1365-2478.12204. → pages 114, 124, 126Y. Yang, J. Ma, and S. Osher. Seismic data reconstruction via matrix completion. Inverseproblem and imaging, 7(4):1379–1392, 2013. → pages 49, 50168` #### Cite Citation Scheme: Citations by CSL (citeproc-js) #### Embed Customize your widget with the following options, then copy and paste the code below into the HTML of your page to embed this item in your website. ``` ``` <div id="ubcOpenCollectionsWidgetDisplay"> <script id="ubcOpenCollectionsWidget" src="{[{embed.src}]}" data-item="{[{embed.item}]}" data-collection="{[{embed.collection}]}" `https://iiif.library.ubc.ca/presentation/dsp.24.1-0355266/manifest`
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8729423880577087, "perplexity": 1928.2460440519517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247482478.14/warc/CC-MAIN-20190217192932-20190217214932-00198.warc.gz"}
https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLI-B5/527/2016/
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLI-B5, 527–531, 2016, https://doi.org/10.5194/isprs-archives-XLI-B5-527-2016
15 Jun 2016

# EXPERIMENTAL ASSESSMENT OF THE QUANERGY M8 LIDAR SENSOR

M.-A. Mittet, H. Nouira, X. Roynard, F. Goulette, and J.-E. Deschaud

• MINES ParisTech, PSL Research University, Centre for robotics, 60 Bd St Michel 75006 Paris, France

Keywords: Quanergy M8 / LIDAR / range imaging / calibration / point cloud / assessment

Abstract. In this paper, experiments with the Quanergy M8 scanning LIDAR system are reported. The distance measurement obtained with the Quanergy M8 can be influenced by different factors, and measurement errors can originate from different sources. The environment in which the measurements are performed has an influence (temperature, light, humidity, etc.), and errors can also arise from the system itself. It is therefore necessary to determine the influence of these parameters on the quality of the distance measurements. For this purpose, several studies are presented and analyzed. First, we studied the temporal stability of the sensor by analyzing observations over time. Second, the quality of the distance measurement was assessed; the aim of this step is to detect systematic errors in measurements as a function of range. Several series of measurements were conducted, at different ranges and in different conditions (indoor and outdoor). Finally, we studied the consistency between the different beams of the LIDAR.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8104625940322876, "perplexity": 4637.177686849494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104141372.60/warc/CC-MAIN-20220702131941-20220702161941-00757.warc.gz"}
http://www.integreat.ca/NOTES/CALG/04.02.html
Course: College Algebra
Topic: Exponential and Logarithmic Functions
Subtopic: Logarithmic Functions and Graphs

Overview

Since much of the material in this lesson is review from an intermediate algebra course, our focus here will be to expand on the basic topics, bump up the level of difficulty, and get logarithmic functions down really well. As you work through the material, try to do what you can algebraically as well as take advantage of technology where appropriate.

Objectives

By the end of this topic you should know and be prepared to be tested on:

• 4.2.1 Understand the basic logarithmic function f(x) = log_b(x) and its graph, including being able to produce its graph manually and knowing that it is the inverse of the function g(x) = b^x
• 4.2.2 Evaluate f(x) = log_b(x) for given values of b and x, both manually and electronically
• 4.2.3 Convert from logarithmic form to exponential form and from exponential form to logarithmic form
• 4.2.4 Know the features of the graph of f(x) = log_b(x), including basic shape, domain, range, intercepts, and asymptotes
• 4.2.5 Understand the effects of a, c, and d in y = a·log_b(x − c) + d on the graph of f(x) = log_b(x), including reflections, stretches and shrinks, and vertical and horizontal shifts (translations)
• 4.2.6 Understand the common logarithm function log(x) and the natural logarithm function ln(x), including some history of their origins and applications in science
• 4.2.7 Understand that the common and natural logarithmic functions are the inverses of the functions 10^x and e^x, respectively
• 4.2.8 Convert common and natural logarithms from logarithmic form to exponential form and from exponential form to logarithmic form
• 4.2.9 Produce the graphs of the common and natural logarithmic functions both manually and electronically
• 4.2.10 Evaluate the common and natural logarithmic functions for given values of x, both manually and electronically

Terminology

Define: logarithmic function, common logarithm, natural logarithm

Supplemental Resources (optional)

If you need supplemental tutorial videos with examples relevant to this section, go to James Sousa's MathIsPower4U and search for the topics below. Some will be review material from Intermediate Algebra and some will be at the more challenging level of this course.
"Using the Definition of a Logarithm" "Evaluating Logarithmic Expressions" "Graphing Logarithmic Functions"
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8493717312812805, "perplexity": 2756.007189463695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257939.82/warc/CC-MAIN-20190525084658-20190525110658-00355.warc.gz"}
https://www.physicsforums.com/threads/saw-this-question-online.629111/
# Saw this question online.

• #1
The amount of water in a tank doubles every minute. If the tank was full at the 1 hour mark, when was the tank half full?
=============================
It's not homework, I'm just trying to get the actual (reasoned) answer.
=============================
Here is the answer I argued (though, I'm not sure):
2^x = (2^59)/2
2^x = 2^58
x = 58
=============================
But almost everyone argues: if it doubles every minute, isn't it half full the minute before it's full? i.e. 59.
=============================
Pretty sure I'm incorrect, I just want to know for sure.

## Answers and Replies

• #2
The argument of everyone (not yours) is correct. Your working is correct as well; it's just that your first equation is wrong.

• #3
Mentallic, Homework Helper

V = k·2^x, where x is in minutes and k is some unknown constant (the volume of the tank at the start of the timer, x = 0). At x = 60 the tank is full; therefore, if we denote the full volume of the tank as V_f, then V_f = k·2^60. Now, we want to know when the tank is half full, which is V_f/2. Clearly, V_f/2 = k·2^60/2 = k·2^59. If we compare this to the original equation, x = 59 minutes, as you would expect.

I know this explanation is long-winded, but I hope it got the point across as to how you should have set up the equation.
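A quick simulation makes the answer concrete (this sketch is our own illustration; the starting volume is arbitrary, since only ratios matter):

```python
# Track the water volume minute by minute; it doubles each minute.
volume = 1.0  # arbitrary starting volume at t = 0
history = [volume]
for minute in range(60):
    volume *= 2
    history.append(volume)

full = history[60]
# First minute at which the tank held half the final volume:
print(next(t for t, v in enumerate(history) if v == full / 2))  # 59
```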
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.922664225101471, "perplexity": 2791.343613446811}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630081.36/warc/CC-MAIN-20210625085140-20210625115140-00492.warc.gz"}
https://11011110.github.io/blog/2016/10/14/kolakoski-sequence-via.html
After posting about a recursive generator for the Kolakoski sequence yesterday, I found the following alternative and non-recursive algorithm, which generates the same sequence in a linear number of bit operations with a logarithmic number of bits of storage. Like the previous one, this can be understood as traversing an infinite tree. The tree is defined using essentially the same rules as before: two consecutive nodes on the same level of the tree have the same number of children if and only if they have the same parent. However, it extends infinitely upward rather than infinitely downward, with two children for each node on the leftmost path. Each level of the tree has the same sequence of degrees, which equals the Kolakoski sequence starting from its second position (the missing first position is why this starts at x = y = -1 rather than zero). The algorithm traverses the bottom level of the tree in left-to-right order. As it does, the variable x maintains, in its binary representation, the numbers of children modulo 2 of the path of nodes extending infinitely upwards from the current leaf. All but finitely many (logarithmically many as a function of the position of the leaf) of these bits are zero, so (after the first step) x is a non-negative finite number with logarithmically many bits. The variable y, similarly, maintains a sequence of bits corresponding to the nodes on the same path that are 0 when the node is a left child of a degree-2 node and 1 otherwise. Again, finitely many of these bits are 1, so y is non-negative with logarithmically many bits. The bit manipulation tricks of the algorithm merely update these two variables to describe the next path in the traversal. Update 10/16: Fast C++ implementation by Jörg Arndt
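For readers who want something executable alongside the description above, here is a minimal reference generator for the Kolakoski sequence. This is our own sketch of the straightforward construction, which stores the whole prefix (linear memory), not the logarithmic-space bit-manipulation algorithm the post describes:

```python
def kolakoski(n):
    """Return the first n terms of the Kolakoski sequence 1,2,2,1,1,2,..."""
    s = [1, 2, 2]
    k = 2  # s[k] is the length that the next run must have
    while len(s) < n:
        next_symbol = 3 - s[-1]         # runs alternate between 1s and 2s
        s.extend([next_symbol] * s[k])  # append a run of length s[k]
        k += 1
    return s[:n]

print(kolakoski(12))  # [1, 2, 2, 1, 1, 2, 1, 2, 2, 1, 2, 2]
```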
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8027874231338501, "perplexity": 289.9324125968515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00007.warc.gz"}
https://www.physicsforums.com/threads/solutions-of-a-diophantine-equation.389779/
# Solutions of a diophantine equation

1. Mar 25, 2010

### zetafunction

Given the diophantine polynomial equation $$f(x) \equiv 0 \pmod{p}$$ is the number of solutions up to a given N approximately $$\sum_{j\le N}e^{2ip\pi f(j)}$$? The idea is that the sum takes its maximum value every time p divides f(j) for some integer j.

2. Mar 27, 2010

### Petek

I replied yesterday (March 26), but the reply was lost with the server problems. I'll try again:

Your summation expression appears to have a typo. If n is any integer, then $$e^{2ip\pi n} = 1$$ so the expression always sums to N. Perhaps you meant to divide by p in the exponent instead of multiplying by p. Questions of this sort are discussed in the first few sections of Number Theory by Borevich and Shafarevich.

1. You might want to test your formula with $f(x) = x^p - x$, since all natural numbers are solutions.

2. In general, your sum will be a complex number. In what sense do you want to consider a complex number to approximate the number of solutions?

Petek
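To make Petek's correction concrete, here is a small numerical check (our own illustration, with an arbitrarily chosen cubic): dividing by p in the exponent and averaging over a multiplier a turns the sum into an exact indicator of p | f(j), because the p-th roots of unity sum to zero unless the exponent vanishes mod p:

```python
import cmath

def count_roots_mod_p(f, p, N):
    # Direct count of 1 <= j <= N with f(j) divisible by p.
    return sum(1 for j in range(1, N + 1) if f(j) % p == 0)

def exp_sum_count(f, p, N):
    # (1/p) * sum_{a=0}^{p-1} e^(2*pi*i*a*f(j)/p) equals 1 when p | f(j)
    # and 0 otherwise, so summing over j counts the solutions exactly.
    s = sum(cmath.exp(2j * cmath.pi * a * (f(j) % p) / p)
            for j in range(1, N + 1) for a in range(p))
    return (s / p).real

p, N = 7, 50
f = lambda x: x**3 + x + 1
print(count_roots_mod_p(f, p, N), round(exp_sum_count(f, p, N)))  # equal
```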
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8907600045204163, "perplexity": 583.0031079765769}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807089.35/warc/CC-MAIN-20171124051000-20171124071000-00409.warc.gz"}
http://summer.mruni.eu/crhb88e/ley4xnq.php?9933db=order-of-a-matrix-2-5-7-is
How to calculate the rank of a matrix: for example, the rank of the matrix

[1 2 3]
[2 4 6]
[0 0 0]

is 1, as the second row is proportional to the first and the third row does not have a non-zero element.

"A matrix is a rectangular array of numbers. The size and shape of the array is given by the number of rows and columns it contains, called its order. So a matrix with 3 rows and 2 columns is described as having order 3 by 2. This is not the same as a matrix of order 2 by 3, which has 2 rows and 3 columns." A matrix is a two-dimensional data structure where numbers are arranged into rows and columns: the horizontal lines of elements constitute the rows, and the vertical lines of elements are said to constitute the columns of the matrix. The order (or dimensions, or size) of a matrix indicates the number of rows and the number of columns: a matrix having m rows and n columns is called a matrix of order m × n, or simply an m × n matrix (read as an "m by n" matrix), and it has mn elements. For example, a matrix with 3 rows and 4 columns is a 3 × 4 (pronounced "three by four") matrix, a matrix with 3 rows and 6 columns has order 3 × 6 (read "3 by 6"), and the matrix [8 a 5; −3 15 b] has order 2 × 3, since the number of rows (m) is 2 and the number of columns (n) is 3.

Row and column matrices: A = [1 2 4 5] is a row matrix of order 1 × 4. A matrix having only one column is called a column matrix; thus A = [a_ij]_(m×n) is a column matrix if n = 1, and hence its order is m × 1. An example of a column matrix is A = [−1, 2, −4, 5]^T.

A system of two linear equations in two unknowns x and y can be written in matrix form as AX = B, so that X = A⁻¹B. Since matrix multiplication is not commutative, the inverse matrix should be applied on the same side of each side of the matrix equation. To find an inverse matrix in general, set up a double matrix broken into two pieces of equal size: on the left side, fill in the elements of the original matrix; on the right side, fill in the elements of the identity matrix. Then use row operations to convert the left side into the identity matrix; after this is complete, the inverse of the original matrix will be on the right side of the double matrix.

Matrix multiplication: we match the 1st members, multiply them, likewise for the 2nd and 3rd members, and finally sum them up, e.g. (1, 2, 3) · (7, 9, 11) = 1×7 + 2×9 + 3×11 = 58. If we multiply a 2 × 3 matrix with a 3 × 1 matrix, the product matrix is 2 × 1. The dot product is performed for each row of A and each column of B until all combinations of the two are complete: the dot product of row 1 of A and column 1 of B gives c_(1,1) of matrix C, the dot product of row 1 of A and column 2 of B gives c_(1,2), and so on.

Worked examples on order:

- Ex 3.2, 22: assume X, Y, Z, W and P are matrices of order 2 × n, 3 × k, 2 × p, n × 3, and p × k respectively. If n = p, then the order of the matrix 7X − 5Z is (A) p × 2, (B) 2 × n, (C) n × 3, (D) p × n. For instance, with X = [1 0; 2 3] we get 7X = [7 0; 14 21], and with 5Z = [0 10; 15 5] we get 7X − 5Z = [7 −10; −1 16]. The order of X equals the order of 7X, and the order of Z equals the order of 5Z, so 7X − 5Z has order 2 × n.
- Misc 11: find the matrix X so that X · [1 2 3; 4 5 6] = [−7 −8 −9; 2 4 6]. The matrix given on the R.H.S. of the equation is a 2 × 3 matrix, and the one given on the L.H.S. is also a 2 × 3 matrix; therefore X has to be a 2 × 2 matrix, say X = [a b; c d].
- Question 1: if A = [1 2 3], then its order is 1 × 3 (not 3 × 2, 3 × 1, or 2 × 2).
- Let M × [1 1; 0 2] = [1 2], where M is a matrix. (i) The order of M is 1 × 2, so let M = [x y]. (ii) Then [x + 0, x + 2y] = [1, 2]; comparing entries gives x = 1 and y = 1/2.

Minors and adjoints: M_ij is the minor of the a_ij-th element of the given matrix. Let A be a square matrix of order n; the adjoint of A is defined as the transpose of the matrix of cofactors (signed minors) of A and is denoted by adj A. If A is any square matrix of order 2, then adj(adj A) = A; in general, adj(adj A) = |A|^(n−2) A.

Characteristic polynomial: let A be any square matrix of order n × n and I a unit matrix of the same order. Then |A − λI| is called the characteristic polynomial of the matrix.

Determinants: to find a 2 × 2 determinant we use a simple formula that uses the entries of the 2 × 2 matrix. A 2 × 2 determinant is much easier to compute than the determinants of larger matrices, like 3 × 3 matrices, and 2 × 2 determinants can be used to find the area of a parallelogram and to determine the invertibility of a 2 × 2 matrix.

Hadamard matrices: up to equivalence, there is a unique Hadamard matrix of orders 1, 2, 4, 8, and 12. There are 5 inequivalent matrices of order 16, 3 of order 20, 60 of order 24, and 487 of order 28, and millions of inequivalent matrices are known for orders 32, 36, and 40.

Spiral order: given a matrix of m × n elements (m rows, n columns), return all elements of the matrix in spiral order. Time complexity is O(m·n), since that much time is required to traverse the matrix; space complexity is O(1), as no extra space is required. Method 3 (DFS recursive approach): another recursive approach is to consider DFS movement within the matrix (right → down → left → up → right → … → end), modifying the matrix itself so that the DFS algorithm can mark each cell it visits.

Software notes: in R, matrices are objects in which the elements are arranged in a two-dimensional rectangular layout, and they contain elements of the same atomic types; in MATLAB, a matrix is a two-dimensional array of numbers. Graphics software uses the concept of a matrix to process linear transformations to render images, and matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions.
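As a quick check of the rank example above (our own sketch, not part of the original page), NumPy reports both the order and the rank directly:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],
              [0, 0, 0]])

print(A.shape)                   # (3, 3), the order of the matrix
print(np.linalg.matrix_rank(A))  # 1: row 2 is proportional to row 1,
                                 #    and row 3 has no non-zero element
```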
On the basic theorems regarding transpositions, we proved that for any transposition $\alpha = (ab)$ of $\{1, 2, \ldots, n\}$, since $\alpha \neq \epsilon$ we must have $\mathrm{order}(\alpha) \geq 2$; in fact (Theorem 2) $\mathrm{order}(\alpha) = 2$.

Further worked problems:

- If I is the unit matrix of order 2 × 2, find the matrix M such that M − 2I = 3[−1 0; 4 1].
- Find the matrix M such that 5M + 3I = 4[2 −5; 0 −3].
- If [1 4; −2 3] + 2M = 3[3 2; 0 −3], find the matrix M.
- If matrix A = (9, 1, 7, 8) and matrix B = (1, 7, 5, 12), find the matrix C such that 5A + 5B + 2C is a null matrix.
- Given [2 1; −3 4] X = [7; 6], solve for X.
- Give an example of a 5 × 7 matrix A with nullity(A) = 2 and an example of a 5 × 7 matrix A with nullity(A) = 7, and show that a 5 × 7 matrix A must have 2 ≤ nullity(A) ≤ 7.

For a 2 × 2 matrix the inverse is simple: swap the positions of a and d, put negatives in front of b and c, and divide everything by the determinant (ad − bc). The identity matrix I can be large or small (2 × 2, 100 × 100, whatever): it has 1s on the main diagonal and 0s everywhere else, and it is special because multiplying by it leaves a matrix unchanged, A × I = A and I × A = A.

Since a matrix of order m × n has mn elements, we can say that if the number of elements in a matrix is prime, then it must be a row matrix or a column matrix, of order 1 × p or p × 1.

Finally, the order in which we parenthesize a matrix product does not change the result, but it does affect the number of simple arithmetic operations needed to compute the product, i.e. the efficiency; for example, suppose A is a 10 × 30 matrix, B is a 30 × 5 matrix, and C is a 5 × 60 matrix.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8500051498413086, "perplexity": 730.3842723588969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992721.31/warc/CC-MAIN-20210513014954-20210513044954-00140.warc.gz"}
https://mathoverflow.net/questions/344675/why-does-this-matrix-have-zero-determinant
Why does this matrix have zero determinant?

This curious identity arose from studying reductions of the maximal ideal in a certain monomial algebra. It can be proved "by hand" (i.e., using Macaulay2), but I am seeking a more conceptual understanding and related references if they exist.

Let $R$ be a commutative ring. For two vectors $v=(a,b,c,d), w=(A,B,C,D)\in R^4$, we define $v\star w:= (aA,aB+bA,bB, cC,cD+dC,dD)\in R^6$. Given any 3 vectors $v_1,v_2,v_3\in R^4$, we can form a $6\times 6$ matrix $M$ whose rows are $v_i\star v_j$, $1\leq i\leq j\leq 3$. Then: $\det(M)=0$.

It is not clear to me how to explain this. The kernel of $M$ is a column vector of degree $6$ polynomials, so the relations are quite complicated.

Question: Is there a way to conceptually explain the vanishing of $\det(M)$? Have you seen similar identities?

Three vectors $v_1,v_2,v_3$ lie in a hyperplane $H:\alpha x+\beta y+\gamma z+\delta t=0$; in this plane we have $Q(v,v):=(\alpha x+\beta y)^2-(\gamma z+\delta t)^2=0$ for all $v\in H$. Thus by polarization $Q(v,w)=\frac 14 (Q(v+w,v+w)-Q(v-w,v-w))=0$ for all $v,w\in H$, which yields a relation between the columns of your matrix: if $v=(a,b,c,d), w=(A,B,C,D)$, then $Q(v,w)=\alpha^2 aA+\beta^2 bB+\alpha\beta(aB+bA)-\gamma^2 cC-\delta^2 dD-\gamma\delta(cD+dC)$.

• So in other words you are saying that the vector $(\alpha^2,\alpha\beta,\beta^2,-\gamma^2,-\gamma\delta,-\delta^2)$ is in the kernel of the matrix. Since the $\alpha,\beta,\gamma,\delta$ are cubic in the vectors $v_1,v_2,v_3$, this explains the degree $6$ polynomials. – Zach Teitler Oct 27 '19 at 8:40
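The identity is easy to test numerically. The sketch below is our own; it assumes the six rows of M come from the unordered pairs i ≤ j, which is consistent with the star product being symmetric and M being 6 × 6:

```python
import numpy as np
from itertools import combinations_with_replacement

def star(v, w):
    a, b, c, d = v
    A, B, C, D = w
    return np.array([a*A, a*B + b*A, b*B, c*C, c*D + d*C, d*D])

rng = np.random.default_rng(1)
v = rng.integers(-9, 10, size=(3, 4)).astype(float)

# The 6 rows v_i * v_j over the unordered pairs i <= j.
M = np.array([star(v[i], v[j])
              for i, j in combinations_with_replacement(range(3), 2)])
print(M.shape)           # (6, 6)
print(np.linalg.det(M))  # ~0, up to floating-point round-off
```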
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9822824001312256, "perplexity": 93.0557055689636}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250613416.54/warc/CC-MAIN-20200123191130-20200123220130-00479.warc.gz"}
https://www.genevo.com/fi/how-lidar-laser-works/
### How does laser speed measurement actually work?

The police officer chooses the vehicle he wants to measure in the laser gun's viewfinder. Then he presses the trigger, and the laser sends impulses in the invisible, infrared part of the spectrum towards this vehicle. The beam is emitted in very short intervals (pulses). If we aim the laser beam at a shiny object, it is reflected back from it almost instantaneously. The reflected beam is captured by the optics, and a light-sensitive element (a photodiode) converts it into an electrical signal. Based on the pulse's return time, the lidar calculates the distance to the object. The ratio of the distance traveled to the elapsed time is computed from the average of several measurements, and from these values the device calculates the speed. The measuring point is most often the headlight, grille, or registration plate, which tends to be very shiny. By pressing the trigger, the device is activated and the vehicle's speed is measured within about one to two seconds. The speed reading then immediately appears on the display.

The laser is able to measure a vehicle up to a distance of 1 km, but by default it measures at a distance of 50-400 meters. A light-colored vehicle is easier to measure with a laser than a dark-colored vehicle. The technical parameters of laser guns used in Europe vary from country to country. A portable radar detector can detect laser guns; however, it usually does not provide enough time for safe deceleration. More about how radar detectors and anti-radars work can be found here. The only 100% protection against laser speed measurement is an active laser jammer.
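The underlying arithmetic is simple: one-way distance is the round-trip pulse time multiplied by the speed of light and halved, and speed follows from the change in distance over the gap between measurements. A short sketch of ours, with invented pulse timings purely for illustration:

```python
C = 299_792_458.0  # speed of light, m/s

def one_way_distance(round_trip_time_s):
    # The pulse travels to the target and back, so halve the path.
    return C * round_trip_time_s / 2

# Two hypothetical pulse returns measured 0.3 s apart:
d1 = one_way_distance(2.00e-6)  # about 299.8 m
d2 = one_way_distance(1.94e-6)  # about 290.8 m
speed_kmh = (d1 - d2) / 0.3 * 3.6
print(round(speed_kmh, 1))      # about 107.9 km/h
```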
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8393813371658325, "perplexity": 1038.1800407676553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500151.93/warc/CC-MAIN-20230204173912-20230204203912-00134.warc.gz"}
https://socratic.org/questions/what-is-the-equation-of-the-normal-line-of-f-x-5x-3-2x-2-3x-1-at-x-8
# What is the equation of the normal line of f(x)=5x^3-2x^2-3x-1 at x=-8?

Jul 30, 2017

The equation of the normal is $x + 989 y + 2635693 = 0$.

#### Explanation:

At $x = -8$,

$f(-8) = 5{(-8)}^{3} - 2{(-8)}^{2} - 3(-8) - 1 = -2560 - 128 + 24 - 1 = -2665$

Hence we are seeking the normal at $(-8, -2665)$.

The slope of the tangent is given by $f'(-8)$, and as $f'(x) = 15{x}^{2} - 4x - 3$, the slope of the tangent is $15{(-8)}^{2} - 4(-8) - 3 = 989$.

Hence the slope of the normal is $-\frac{1}{989}$, and the equation of the normal is

$y + 2665 = -\frac{1}{989}(x + 8)$, i.e. $x + 989y + 2635693 = 0$
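The computation is easy to verify symbolically; here is a short SymPy check (our addition, not part of the original answer):

```python
from sympy import symbols, diff, simplify

x, y = symbols('x y')
f = 5*x**3 - 2*x**2 - 3*x - 1

x0 = -8
y0 = f.subs(x, x0)                  # -2665
m_tangent = diff(f, x).subs(x, x0)  # 15*64 + 32 - 3 = 989
m_normal = -1 / m_tangent           # -1/989

# Point-slope form of the normal, cleared of the fraction:
normal = simplify(989 * ((y - y0) - m_normal * (x - x0)))
print(normal)  # x + 989*y + 2635693
```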
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9664689302444458, "perplexity": 3340.112032887703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487660269.75/warc/CC-MAIN-20210620084505-20210620114505-00202.warc.gz"}
https://infoscience.epfl.ch/record/120289
Journal article

# Reconstruction of ion temperature profiles from single chord NPA measurements on the TCV tokamak

The flux of charge exchange (CX) neutrals measured by neutral particle analysers (NPAs) is the line integral along the viewline of the NPA and contains information about the ion energy distribution of the observed plasma. On the Tokamak à Configuration Variable (TCV), a single chord NPA is used to scan the plasma cross section by vertically displacing a reproducible discharge across its fixed line of sight. The ion temperature inferred from the passive CX flux as a function of the distance of the NPA chord to the magnetic axis is used to obtain an ion temperature profile T_i(rho). To model the neutral source, simulations of neutral particle penetration from the edge and of the neutralization processes are reported. In plasmas with thermalized ion populations, the NPA hydrogen or deuterium temperature profiles agree with the carbon ion temperature profile measured by charge exchange recombination spectroscopy. Matching the simulation with synchronous NPA measurements of two plasma species provides absolute profiles of neutral particle densities and the isotopic composition of the plasma, which are required for transport analysis. With further modelling, the ion temperature profile may be iteratively reconstructed from the CX spectrum without displacing the plasma.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9412274360656738, "perplexity": 2537.6224427601114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608652.65/warc/CC-MAIN-20170526090406-20170526110406-00333.warc.gz"}
https://www.physicsforums.com/threads/power-efficiency-problem.106717/
# Power efficiency problem

1. Jan 14, 2006

### endeavor

A 3.25 × 10³ kg aircraft takes 12.5 min to achieve its cruising altitude of 10.0 km and cruising speed of 850 km/h. If the plane's engines deliver, on the average, 1500 hp of power during this time, what is the efficiency of the aircraft engines?

I initially believed the power out would be (kinetic energy + potential energy)/time, and the efficiency would be (power out)/(1500 hp). But after my calculations, I got 14.6% efficiency, when it should be 48.7% efficiency. What did I do wrong?

2. Jan 14, 2006

### Pengwuino

I calculated 48.8% efficiency. Did you...

1) Change minutes to seconds?
2) Change km to m?
3) Change km/h to m/s?
4) Convert the engine power from hp to watts? (1 metric hp = 735.49875 W)

You have to do each of those in this problem.

3. Jan 14, 2006

### Gamma

Check your units. Did you convert the power in hp to watts? 1 hp = 746 watts.

4. Jan 14, 2006

### Gamma

Looks like we typed at the same time.

5. Jan 14, 2006

### endeavor

I don't know what I did wrong. I'm pretty sure I converted everything. It must have been a calculation error, because I just got the right answer.
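For reference, the whole computation in a few lines (our own summary of the thread's arithmetic, using 1 hp = 746 W as Gamma suggests):

```python
m = 3.25e3         # aircraft mass, kg
h = 10.0e3         # cruising altitude, m
v = 850 / 3.6      # cruising speed, m/s (850 km/h)
t = 12.5 * 60      # climb time, s
g = 9.81           # m/s^2
P_in = 1500 * 746  # engine power, W

# Useful power = (kinetic + potential energy gained) / time.
P_out = (0.5 * m * v**2 + m * g * h) / t
print(P_out / P_in)  # ~0.488, i.e. about 48.8% efficient
```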
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8278549909591675, "perplexity": 4476.5378758406405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00316-ip-10-171-10-70.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/91641/find-a-parametrization-of-the-curve-x-frac23-y-frac23-1-and
# Find a parametrization of the curve $x^{\frac{2}{3}} + y^{\frac{2}{3}} = 1$ and use it to compute the area of the interior. I have the following homework question that I know the answer to $(3\pi/8)$, however, I don't understand how to get this answer. The question: Find a parametrization of the curve $x^{\frac{2}{3}} + y^{\frac{2}{3}} = 1$ and use it to compute the area of the interior. http://www2.math.umd.edu/~jmr/241/lineint2.htm I don't understand though. I understand how they make $u = x^{\frac{1}{2}}$ and $y = y^{\frac{1}{2}}$ so they could get $u^2 + v^2 = 1$, which is a nice trick. I can see how they parameterized that to $u = \cos t$, $v = \sin t$ from $t = 0$ to $t = 2\pi$. I just don't get what they do next. - you should read the section directly above it "the connection with area" that explains how to compute area as is done in example 5 –  yoyo Dec 15 '11 at 2:37 Since I presume you already know $\cos^2 u+\sin^2 u=1$, set $x^\frac23=\cos^2 u$ (and similarly for $y$), and solve for $x$ and $y$ in those two equations. The curve you have, BTW, is called an astroid. –  J. M. Dec 15 '11 at 2:40 You want $u=x^{\frac 13},\ v=y^{\frac 13}$ –  Ross Millikan Dec 15 '11 at 4:50 They are using "Green's Theorem" to compute areas using line integrals. Roughly Green's theorem tells you how to turn a double integral into a line integral or vice-versa. A little more detail: Let $C$ be the boundary of some 2D region $R$ and let $C$ be oriented counter-clockwise. In addition, suppose that $P(x,y)$ and $Q(x,y)$ have continuous first partials. Then $$\iint_R \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\,dA = \int_C P\,dx+Q\,dy$$ Next, $\iint_R 1\,dA$ computes the area of $R$ just as $\int_a^b 1\,dt$ computes the length of $[a,b]$, $\int_C 1\,ds$ computes the arc length of $C$, $\iiint_E 1\,dV$ computes the volume of $E$, etc. If you can pick out $P$ and $Q$ such that $\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}=1$, then by Green's theorem, the corresponding line integral will compute the area of $R$. Note that, for example, $\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}=1$ when $Q=x$ and $P=0$ or $Q=0$ and $P=-y$ or $Q=x/2$ and $P=-y/2$. Therefore, $$\iint_R 1\,dA = \int_C x\,dy = \int_C -y\,dx = \int_C -y/2\,dx+x/2\,dy$$ So if $R$ is the region bounded by $C$ where $C$ is the curve $x^{2/3}+y^{2/3}=1$. Then $\int_C x\,dy$ will compute the area of $R$ (given $C$ is oriented counter-clockwise). So we parametrize $C$ using ${\bf r}(t) = \langle x(t),y(t) \rangle = \langle \cos^3(t), \sin^3(t) \rangle$ and $0 \leq t \leq 2\pi$. [Note: $x^{2/3}+y^{2/3} = (\cos^3(t))^{2/3}+(\sin^3(t))^{2/3} = \cos^2(t)+\sin^2(t)=1$.] Thus $x = \cos^3(t)$ and $dy = y'(t)\,dt = 3\sin^2(t)\cos(t)\,dt$. Therefore, $$\mathrm{Area}(R) = \iint_R 1\,dA = \int_C x\,dy = \int_0^{2\pi} \cos^3(t) \cdot 3\sin^2(t)\cos(t)\,dt = \frac{3\pi}{8}$$ See Wolfram Alpha - indefinite and Wolfram Alpha - definite for the details regarding the evaluation of this integral. - Bill, you are just the man today! –  user13327 Dec 15 '11 at 3:45
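The final integral is routine to verify with a computer algebra system; here is a short SymPy check of the area computation (our addition, not part of the original thread):

```python
from sympy import symbols, cos, sin, diff, integrate, pi

t = symbols('t')
x = cos(t)**3
y = sin(t)**3

# Area = the line integral of x dy around C, counterclockwise.
area = integrate(x * diff(y, t), (t, 0, 2*pi))
print(area)  # 3*pi/8
```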
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9798075556755066, "perplexity": 175.02448886267362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413510834349.43/warc/CC-MAIN-20141017015354-00107-ip-10-16-133-185.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/121388-y-function-x.html
# Math Help - Y a function of X?

1. ## Y a function of X?

Could someone please assist and advise whether the following equation determines Y to be a function of X: y^2 = x + 3? Thanks much!

2. Hi jay1,

If $y^2=x+3$, then $y=f(x)=\pm \sqrt{x+3}$.

$f(x)=\pm \sqrt{x+3}$ describes a parabola with vertex on the x-axis, opening to the right. It assigns two y-values to every x > -3, so by definition it is not a function.
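A quick numerical way to see the failure of the vertical line test (our own illustration):

```python
import math

x = 1.0
roots = (math.sqrt(x + 3), -math.sqrt(x + 3))
print(roots)  # (2.0, -2.0): one input x gives two outputs y,
              # so y is not a function of x
```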
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8768893480300903, "perplexity": 507.4516113351442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398459333.27/warc/CC-MAIN-20151124205419-00042-ip-10-71-132-137.ec2.internal.warc.gz"}
https://iop.figshare.com/articles/_Upper_panel_participation_ratio_a_href_http_iopscience_iop_org_0953_4075_46_14_145302_article_jpb46/1012170/1
## Upper panel: participation ratio (7) as functions of time for the parameters of figure 1 Figure 2. Upper panel: participation ratio (7) as functions of time for the parameters of figure 1. Lower panel: the same, but with $F = 0.015$, $F_x/F_y=(\sqrt{5}-1)/4\approx 0.309$. Dashed lines show the participation ratio calculated by using the tight-binding model. Abstract We analyse dynamics of a quantum particle in a square lattice in the Hall configuration beyond the single-band approximation. For vanishing gauge (magnetic) field this dynamics is defined by the inter-band Landau–Zener tunnelling, which is responsible for the phenomenon known as the electric breakdown. We show that in the presence of a gauge field this phenomenon is absent, at least, in its common sense. Instead, the Landau–Zener tunnelling leads to the appearance of a finite current which flows in the direction orthogonal to the vector of a potential (electric) field. CC BY 4.0
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9434968829154968, "perplexity": 813.3387602780607}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540529745.80/warc/CC-MAIN-20191211021635-20191211045635-00418.warc.gz"}
https://en.wikipedia.org/wiki/Palm%E2%80%93Khintchine_theorem
# Palm–Khintchine theorem In probability theory, the Palm–Khintchine theorem, the work of Conny Palm and Aleksandr Khinchin, expresses that a large number of not necessarily Poissonian renewal processes combined will have Poissonian properties.[1] It is used to generalise the behaviour of users or clients in queuing theory. It is also used in dependability and reliability modelling of computing and telecommunications. According to Heyman and Sobel (2003), the theorem describes that the superposition of a large number of independent equilibrium renewal processes, each with a small intensity, behaves asymptotically like a Poisson process: Let $\{N_{i}(t),t\geq 0\},i=1,2,\ldots,m$ be independent renewal processes and $\{N(t),t>0\}$ be the superposition of these processes. Denote by $X_{2jm}$ the time between the first and the second renewal epochs in process $j$. Define $N_{jm}(t)$ as the $j$th counting process, $F_{jm}(t)=P(X_{2jm}\leq t)$, and $\lambda_{jm}=1/E(X_{2jm})$. If the following assumptions hold: 1) For all sufficiently large $m$: $\lambda_{1m}+\lambda_{2m}+\cdots+\lambda_{mm}=\lambda<\infty$ 2) Given $\epsilon>0$, for every $t>0$ and sufficiently large $m$: $F_{jm}(t)<\epsilon$ for all $j$ then the superposition $N_{0m}(t)=N_{1m}(t)+N_{2m}(t)+\cdots+N_{mm}(t)$ of the counting processes approaches a Poisson process as $m\to\infty$. ## References 1. ^ Daniel P. Heyman, Matthew J. Sobel (2003). "5.8 Superposition of Renewal Processes". Stochastic Models in Operations Research: Stochastic Processes and Operating Characteristics.
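A small simulation sketch of the statement (Python with NumPy; the Uniform inter-arrival law, the equilibrium start, and all parameter values are illustrative choices, not part of the theorem):

```python
# Superpose m sparse, independent equilibrium renewal processes with
# Uniform(0, a) inter-arrival times (mean a/2) and count arrivals in [0, T].
# For large m the total count should look Poisson: mean ≈ variance.
import numpy as np

rng = np.random.default_rng(0)
m, T, reps = 200, 10.0, 4000
a = 2.0 * m                       # mean inter-arrival a/2 = m, intensity 1/m each

def first_epoch():
    """Equilibrium forward-recurrence time for Uniform(0, a) inter-arrivals."""
    u = rng.uniform()
    return a * (1.0 - np.sqrt(1.0 - u))   # inverse CDF of density (1 - x/a)/(a/2)

counts = np.empty(reps)
for r in range(reps):
    n = 0
    for _ in range(m):
        t = first_epoch()
        while t < T:
            n += 1
            t += rng.uniform(0, a)        # subsequent inter-arrival times
    counts[r] = n

print(counts.mean(), counts.var())        # both ≈ T * m * (1/m) = 10
```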
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 18, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9803948402404785, "perplexity": 870.2480287782581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123491.79/warc/CC-MAIN-20170423031203-00207-ip-10-145-167-34.ec2.internal.warc.gz"}
https://lavelle.chem.ucla.edu/forum/search.php?author_id=12763&sr=posts
## Search found 34 matches Sat Jun 09, 2018 5:53 pm Forum: Shape, Structure, Coordination Number, Ligands Topic: Charge of ligands Replies: 5 Views: 310 ### Re: Charge of ligands I find using the Lewis structures helpful especially when calculating the coordination complex number and the oxidation number Sat Jun 09, 2018 5:49 pm Forum: Identifying Acidic & Basic Salts Topic: Final Replies: 3 Views: 329 ### Re: Final I don't think we need to know this. I would look at the outline and see if it is mentioned. Sat Jun 09, 2018 5:48 pm Forum: Shape, Structure, Coordination Number, Ligands Topic: Ligand bind to metal Replies: 2 Views: 136 ### Re: Ligand bind to metal I don't think they break because to rotate you have to break the pi bond and that takes too much energy. Sun Jun 03, 2018 1:07 pm Forum: Hybridization Topic: Hybridization and it's relation to valence electrons Replies: 3 Views: 151 ### Re: Hybridization and it's relation to valence electrons Each C will have three regions of electron density and will need three hybrid orbitals. Trigonal Planar makes each C sp2 hybridized. There will be 3 electrons in the 2sp2 region and 1 unpaired electron in 2p which is higher in energy than the hybridized region. The unhybridized 2p orbital on each C ... Sun Jun 03, 2018 1:03 pm Forum: Lewis Structures Topic: Test Q.8 [ENDORSED] Replies: 7 Views: 349 ### Re: Test Q.8[ENDORSED] A good trick is to notice that whenever there is more than one oxygen there is probably resonance because that bond can be in different positions. Also, a bond in resonance between a single and a double bond has a length between the two, around 1.33, so it is longer than a double bond but shorter than a single bond. Sun Jun 03, 2018 1:00 pm Forum: Biological Examples Topic: chemotherapy example [ENDORSED] Replies: 4 Views: 332 ### Re: chemotherapy example[ENDORSED] Cisplatin forms a coordination compound with DNA when the two Cl's bind with two Guanine bases in DNA, and this binding stops cell division. The Cl's bind with the lone pair available on the nitrogen on the Guanine to form a cation. This makes the cis more effective than the trans because the trans h... Thu May 24, 2018 9:40 pm Forum: Ionic & Covalent Bonds Topic: ionic and covalent character Replies: 7 Views: 735 ### Re: ionic and covalent character Ionic Character has more to do with Electronegativity because the greater the difference in electronegativity the more similar the covalent bond is to an ionic one because the charges are distributed unevenly to a greater degree. Covalent Character has more to do with Polarizability which relates to... Thu May 24, 2018 9:35 pm Forum: Lewis Structures Topic: Test 3 [ENDORSED] Replies: 4 Views: 435 ### Re: Test 3[ENDORSED] I found it helpful to read chapter 3 in the textbook. We also discussed electronegativity, polarizability, ionic and covalent character and bond strength and length. Thu May 24, 2018 9:33 pm Forum: Lewis Structures Topic: 3.45 Replies: 3 Views: 212 ### Re: 3.45 The ones that most likely contribute to the resonance hybrid probably exhibit about the same formal charges. Oxygen is normally a reason that a Lewis Dot structure has resonance and the structures that contribute to the hybrid are the ones that have the lowest possible charges overall. For example, ... 
Sun May 20, 2018 4:58 pm Forum: Formal Charge and Oxidation Numbers Topic: formal charge Replies: 11 Views: 595 ### Re: formal charge I know you can just count the bonds for the bonding electrons but I find it helpful to do the 1/2 times the bonding electrons just to make sure. Sun May 20, 2018 4:55 pm Forum: Lewis Structures Topic: Dots vs lines to represent electrons Replies: 8 Views: 397 ### Re: Dots vs lines to represent electrons I prefer using the dots because it is helpful to calculate formal charge and keep track of valence electrons. Sun May 20, 2018 4:47 pm Forum: Lewis Structures Topic: 3.55 Replies: 6 Views: 203 ### Re: 3.55 The radical also normally goes on the central atom. Wed May 09, 2018 12:14 pm Forum: Heisenberg Indeterminacy (Uncertainty) Equation Topic: Delta X Replies: 4 Views: 305 ### Delta X I know when delta x is given as +/- 3 for example you multiply the number by 2 for the equation, but when you are given the radius of an atom do you plug in the radius or the diameter into the equation? Wed May 09, 2018 12:04 am Forum: Properties of Light Topic: 4b practice midterm Replies: 11 Views: 792 ### Re: 4b practice midterm Update: I forgot to add the kinetic energy to the work function before dividing by h! Tue May 08, 2018 11:58 pm Forum: Properties of Light Topic: 4b practice midterm Replies: 11 Views: 792 ### Re: 4b practice midterm I am still very confused and keep getting the wrong answer for this problem. I found v from wavelength=h/mv and then did 1/2m (mass of an electron) v^2 plus the converted work function and got 3.00x10^14 Sun May 06, 2018 10:34 pm Forum: Electron Configurations for Multi-Electron Atoms Topic: 4s and 3d Replies: 4 Views: 231 ### Re: 4s and 3d I was confused about this too, but if you think about it in terms of the lines he drew in class and how the gap gets smaller as you increase in energy level, it makes sense that 4 is higher energy than 3 and 5 is higher than 4, and why when we draw electron configurations for ions we remove them from h... Sun May 06, 2018 10:31 pm Forum: General Science Questions Topic: Solutions for tests Replies: 3 Views: 332 ### Re: Solutions for tests I would look on Chemistry Community because a lot of people have asked questions about the tests. Sun May 06, 2018 10:27 pm Forum: Wave Functions and s-, p-, d-, f- Orbitals Topic: half full? Replies: 5 Views: 207 ### Re: half full? This is similar to how, for Chromium and Copper, the electron configuration is more stable; for example, Cr [Ar] 3d5 4s1 is more stable than Cr [Ar] 3d4 4s2. Sun May 06, 2018 10:23 pm Forum: Trends in The Periodic Table Topic: Trends to Know Replies: 12 Views: 630 ### Re: Trends to Know I'm pretty sure this won't be on the midterm. The midterm covers Chapter 1 and 2 and the Fundamentals. Fri Apr 27, 2018 5:20 pm Forum: DeBroglie Equation Topic: 1.33 using De Broglie Replies: 3 Views: 194 ### Re: 1.33 using De Broglie Part C is where the question gets tricky, but if you recall the equation Ephoton - Work = E excess (or kinetic energy), we know the mass of the electron and the velocity so we can find the kinetic energy, and we know that it takes 1.66x10^-17 J to eject the electron from the previous problem, so if we rearrang... 
Fri Apr 27, 2018 5:16 pm Forum: DeBroglie Equation Topic: 1.33 using De Broglie Replies: 3 Views: 194 ### Re: 1.33 using De Broglie For part B the energy required to remove an electron is the energy of the photon, E=hv; plugging in v = 2.50 x 10^16 Hz and h, you get 1.66x10^-17 J Fri Apr 27, 2018 5:10 pm Forum: Wave Functions and s-, p-, d-, f- Orbitals Topic: Why does the 4s orbital come before the 3d? Replies: 7 Views: 412 ### Re: Why does the 4s orbital come before the 3d? 3d is at a lower energy level than 4s, therefore it comes before it in the electron configuration, because the configuration goes from lowest to highest energy level. Fri Apr 27, 2018 5:08 pm Forum: Photoelectric Effect Topic: Test 2 final question [ENDORSED] Replies: 15 Views: 774 ### Re: Test 2 final question[ENDORSED] The test also mentioned that it was a UV light which is important to acknowledge because if it was a longer wavelength then no electrons would be ejected. Sun Apr 22, 2018 9:31 pm Forum: Einstein Equation Topic: E=hv vs E=hf [ENDORSED] Replies: 4 Views: 1844 ### Re: E=hv vs E=hf[ENDORSED] I was also confused about this! I would argue that velocity and frequency are interchangeable especially because they have the same units. Sun Apr 22, 2018 9:28 pm Forum: Significant Figures Topic: Use of sigfigs Replies: 8 Views: 470 ### Re: Use of sigfigs Sig Figs are important, but I would argue that making sure you have the right units is also important. Especially when using the speed of light and disproving certain ideas like that an electron is drawn into the nucleus, correct significant figures are essential. Sun Apr 22, 2018 9:25 pm Forum: Molarity, Solutions, Dilutions Topic: Test 1 Q2 Replies: 2 Views: 211 ### Re: Test 1 Q2 I was also confused by this question!! I'm pretty sure we were given more information than we needed in the question and some numbers didn't need to be used. Sun Apr 22, 2018 9:24 pm Forum: General Science Questions Topic: Tools for remembering equations Replies: 5 Views: 589 ### Re: Tools for remembering equations Yeah, I asked Lavelle in class if we need to know the equations, and they will be provided on the exam. It would probably be helpful to memorize the units each symbol represents because it will be easier to do dimensional analysis and manipulate equations. Sun Apr 22, 2018 9:22 pm Forum: Properties of Light Topic: still don't uderrsatnd what a photon is [ENDORSED] Replies: 20 Views: 527 ### Re: still don't uderrsatnd what a photon is[ENDORSED] You just need to manipulate the equations and it's easy to see the relationship between photons and momentum, especially if you do dimensional analysis. Sun Apr 15, 2018 8:46 pm Forum: Balancing Chemical Reactions Topic: Balancing Chemical Reactions Replies: 5 Views: 301 ### Balancing Chemical Reactions Does anyone have any tips and tricks for quickly balancing equations? Sun Apr 15, 2018 8:44 pm Forum: Balancing Chemical Reactions Topic: Help with homework problem L.39 [ENDORSED] Replies: 2 Views: 227 ### Re: Help with homework problem L.39[ENDORSED] The total mass of tin oxide is: 28.35g-26.45g=1.90g Then you convert grams of Sn to moles of Sn and do the same thing for oxygen to get the molar ratio: 1.50g Sn x (1 mol Sn/118.71g Sn)=1.264 x10^-2 mol Sn (1.90-1.50)g O x (1 mol O/16.00 g O)=0.025 mol O Mole Ratio of Sn:O is 1:2 so the Empirical Fo... Sun Apr 15, 2018 8:38 pm Forum: SI Units, Unit Conversions Topic: writing out conversions in one long line vs. steps Replies: 16 Views: 805 ### Re: writing out conversions in one long line vs. 
steps I was taught to do it in one long line, but it is important to write out what chemicals you are using when converting mass and moles. This is a great technique when trying to cancel things out Fri Apr 06, 2018 1:27 pm Forum: SI Units, Unit Conversions Topic: Using units in calculations [ENDORSED] Replies: 4 Views: 188 ### Re: Using units in calculations[ENDORSED] In addition to units I would also add what element or molecule the value is connected to, especially when dealing with moles. Fri Apr 06, 2018 1:25 pm Forum: Balancing Chemical Reactions Topic: H.5: Balancing Equations Replies: 6 Views: 493 ### Re: H.5: Balancing Equations I would also recommend making a chart. It is also important to remember that even if an equation is given to you in a problem, if there are no coefficients you need to make sure it is balanced. Even if there are coefficients it is important to double check. Fri Apr 06, 2018 12:48 pm Forum: Limiting Reactant Calculations Topic: General Process [ENDORSED] Replies: 3 Views: 223 ### Re: General Process[ENDORSED] Also, it is helpful to convert the moles of the given reactants to the mass of the product you are looking for, because you can determine the limiting reactant and find the mass in one step.
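To illustrate that one-step approach, here is a minimal sketch (Python; the reaction 2 H2 + O2 -> 2 H2O and the gram amounts are made-up example values):

```python
# Limiting-reactant check for 2 H2 + O2 -> 2 H2O, converting each reactant
# directly to the mass of product it could make; the smaller mass wins.
M = {"H2": 2.016, "O2": 32.00, "H2O": 18.02}   # molar masses, g/mol
coeff = {"H2": 2, "O2": 1, "H2O": 2}           # stoichiometric coefficients
given = {"H2": 5.0, "O2": 32.0}                # grams on hand (example values)

yields = {}
for r, grams in given.items():
    mol_r = grams / M[r]                        # g -> mol reactant
    mol_p = mol_r * coeff["H2O"] / coeff[r]     # mol reactant -> mol product
    yields[r] = mol_p * M["H2O"]                # mol -> g product

limiting = min(yields, key=yields.get)
print(limiting, yields)   # O2 limits: 32 g O2 -> ~36.0 g H2O; 5 g H2 -> ~44.7 g
```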
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8226557970046997, "perplexity": 3647.1775743999992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897707.23/warc/CC-MAIN-20200708211828-20200709001828-00485.warc.gz"}
http://libros.duhnnae.com/2017/jul8/15014021866-Efficient-prime-counting-and-the-Chebyshev-primes.php
Efficient prime counting and the Chebyshev primes 1 FEMTO-ST - Franche-Comté Électronique Mécanique, Thermique et Optique - Sciences et Technologies 2 Télécom ParisTech Abstract: The function $\epsilon(x)=\operatorname{li}(x)-\pi(x)$ is known to be positive up to the very large Skewes number. Besides, according to Robin's work, the functions $\epsilon_{\theta}(x)=\operatorname{li}(\theta(x))-\pi(x)$ and $\epsilon_{\psi}(x)=\operatorname{li}(\psi(x))-\pi(x)$ are positive if and only if the Riemann hypothesis (RH) holds (the first and the second Chebyshev functions are $\theta(x)=\sum_{p \le x} \log p$ and $\psi(x)=\sum_{n=1}^{x} \Lambda(n)$, respectively; $\operatorname{li}(x)$ is the logarithmic integral, and $\mu(n)$ and $\Lambda(n)$ are the Möbius and the von Mangoldt functions). Negative jumps in the above functions $\epsilon$, $\epsilon_{\theta}$ and $\epsilon_{\psi}$ may potentially occur only at $x+1 \in \mathcal{P}$ (the set of primes). One denotes $j_p=\operatorname{li}(p)-\operatorname{li}(p-1)$ and one investigates the jumps $j_p$, $j_{\theta(p)}$ and $j_{\psi(p)}$. In particular, $j_p<1$ for $p$ … Author: Michel Planat - Patrick Solé - Source: https://hal.archives-ouvertes.fr/
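A quick empirical look at the first of these quantities (Python; this assumes mpmath's li and SymPy's primepi, and the sample points are arbitrary):

```python
# Check that eps(x) = li(x) - pi(x) is positive at a few small x
# (it is known to stay positive far beyond any range we can sample here).
from mpmath import li
from sympy import primepi

for x in [10, 100, 1000, 10**6]:
    eps = li(x) - int(primepi(x))
    print(x, float(eps))   # all positive for these x
```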
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9687797427177429, "perplexity": 1841.0281318860666}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815812.83/warc/CC-MAIN-20180224132508-20180224152508-00657.warc.gz"}
https://quantumcomputing.stackexchange.com/questions/21755/does-a-photonic-quantum-computer-control-a-single-photon
# Does a photonic quantum computer control a single photon? Does a photonic quantum computer control a single photon and use it to represent a single qubit? I think trapped-ion quantum computers use a single ion to represent a qubit. I would like to know how a photonic quantum computer represents each qubit. There are various ways in which one or multiple photons can be used to encode qubits. Potentially the most widely used encoding (at least when quantum communication is assumed to be within the scope of 'quantum computing' for this question) is the polarization-encoded photon. Here, a single photon is used as a qubit, where two orthogonal polarization directions are used as the computational basis of the encoded qubit. Another often used encoding within communication tasks is the 'time-bin' encoding. Here, the qubit can be regarded as a window in time during which a single photon can exist. The $$|0\rangle$$ state is then encoded as the photon being (roughly speaking) within the first half of the window, and the $$|1\rangle$$ state is encoded as the photon being in the latter half of the window. Oftentimes there are more stringent windows instead of a 'full half' for the states. Turning more to proper 'quantum computers', photons can be used in various ways to encode qubits. Another widely used single-photon encoding is the so-called 'rail' encoding when talking about integrated photonics (i.e. 'photons on a chip'). Here, there are two waveguides engraved into a chip through which a photon can travel. The qubit's computational basis is encoded as the photon being in the one or the other waveguide. It is, in principle, even possible to use both the polarization and the 'rail' encoding together at the same time, to encode two qubits into a single photon - this is an area of active research. This is sometimes called 'dual-rail' encoding, compared to the 'normal' rail encoding described above, which is then known as 'single-rail' encoding. There exist, however, also photonic quantum computers (at least on paper) that do not use a single photon to encode a qubit. As photons are bosons, they can be used to create bosonic codes like GKP, cat or binomial codes, although most research for implementations of these codes is being performed for harmonic oscillators in, for instance, superconductor devices. If we focus on computation, there are various ways of implementing gates. Relatively agnostic of the encoding (as long as it's single-photon) is the KLM protocol, which uses combinations of beam splitters and phase shift materials (which fall both within the scope of linear optics) to implement single-qubit gates. Two-qubit gates (namely, a conditional sign flip) are also implemented by linear elements, but the big drawback of this method is that it is non-deterministic. Focusing specifically on rail encodings, one can also have non-linear interactions between the two modes by bringing the two waveguides together in a birefringent (?) material. This method also allows one to implement two-qubit gates, by bringing the $$|1\rangle$$ waveguides of both qubits together in a similar fashion. I am not at all an expert on this matter, however. • Thank you. For quantum computers, not quantum communication, when using a single photon to encode a qubit, how do two photons do computing? Nov 1 at 18:51 • @david I've added a couple of thoughts, although I'm not very well versed on the matter unfortunately. – JSdJ Nov 3 at 12:38 • thank you for the explanation. Nov 3 at 23:02
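As a toy illustration of the polarization encoding described above (Python with NumPy; the basis labels and the rotation-by-wave-plate picture are simplifying assumptions):

```python
# Polarization-encoded qubit: |0> = horizontal |H>, |1> = vertical |V>.
# A wave plate acts as a 2x2 unitary on this two-dimensional space.
import numpy as np

H = np.array([1, 0], dtype=complex)   # |H>  ~ |0>
V = np.array([0, 1], dtype=complex)   # |V>  ~ |1>

def rotation(theta):
    """Unitary rotating the polarization plane by angle theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

psi = rotation(np.pi / 4) @ H         # diagonal polarization (|H> + |V>)/sqrt(2)
prob_V = abs(V.conj() @ psi) ** 2     # Born rule: probability of measuring |V>
print(psi, prob_V)                    # ~ [0.707, 0.707], 0.5
```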
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8023490905761719, "perplexity": 706.2395856285267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363301.3/warc/CC-MAIN-20211206133552-20211206163552-00331.warc.gz"}
https://proceedings.neurips.cc/paper/2020/hash/1bd413de70f32142f4a33a94134c5690-Abstract.html
#### Authors Alessandro Epasto, Mohammad Mahdian, Vahab Mirrokni, Emmanouil Zampetakis #### Abstract A soft-max function has two main efficiency measures: (1) approximation, which corresponds to how well it approximates the maximum function, and (2) smoothness, which shows how sensitive it is to changes of its input. Our goal is to identify the optimal approximation-smoothness tradeoffs for different measures of approximation and smoothness. This leads to novel soft-max functions, each of which is optimal for a different application. The most commonly used soft-max function, called the exponential mechanism, has an optimal tradeoff between approximation measured in terms of expected additive approximation and smoothness measured with respect to Renyi divergence. We introduce a soft-max function, called the piece-wise linear soft-max, with an optimal tradeoff between approximation, measured in terms of worst-case additive approximation, and smoothness, measured with respect to the l_q-norm. The worst-case approximation guarantee of the piece-wise linear mechanism enforces sparsity in the output of our soft-max function, a property that is known to be important in Machine Learning applications (Martins et al. '16, Laha et al. '18) and is not satisfied by the exponential mechanism. Moreover, the l_q-smoothness is suitable for applications in Mechanism Design and Game Theory, where the piece-wise linear mechanism outperforms the exponential mechanism. Finally, we investigate another soft-max function, called the power mechanism, with an optimal tradeoff between expected multiplicative approximation and smoothness with respect to the Renyi divergence, which provides improved theoretical and practical results in differentially private submodular optimization.
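For intuition about the sparsity property mentioned above, here is the standard (exponential) softmax next to sparsemax (Martins et al.), a well-known sparse, piece-wise linear alternative; this is an illustration only, not the paper's piece-wise linear mechanism:

```python
# softmax never outputs exact zeros; sparsemax (Euclidean projection onto the
# probability simplex) zeroes out small scores, giving a sparse distribution.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sparsemax(z):
    zs = np.sort(z)[::-1]                     # scores sorted in descending order
    k = np.arange(1, len(z) + 1)
    support = 1 + k * zs > np.cumsum(zs)      # coordinates kept in the support
    k_max = k[support][-1]
    tau = (np.cumsum(zs)[k_max - 1] - 1) / k_max
    return np.maximum(z - tau, 0.0)

z = np.array([1.0, 0.8, -1.0])
print(softmax(z))    # ~ [0.512, 0.419, 0.069]  — all strictly positive
print(sparsemax(z))  # = [0.6, 0.4, 0.0]        — exact zero on the small score
```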
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9644556045532227, "perplexity": 1109.0462698763697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154320.56/warc/CC-MAIN-20210802110046-20210802140046-00142.warc.gz"}
https://labs.tib.eu/arxiv/?author=S.%20Koenig
• ### Planck's dusty GEMS. IV. Star formation and feedback in a maximum starburst at z=3 seen at 60-pc resolution(1704.05853) July 6, 2017 astro-ph.GA We present an analysis of high-resolution ALMA interferometry of CO(4-3) line emission and dust continuum in the "Ruby" (PLCK_G244.8+54.9), a bright, gravitationally lensed galaxy at z = 3.0 discovered with the Planck all-sky survey. The Ruby is the brightest of Planck's Dusty GEMS, a sample of 11 of the brightest gravitationally lensed high-redshift galaxies on the extragalactic sub-mm sky. We resolve the high-surface-brightness continuum and CO line emission of the Ruby in several extended clumps along a partial, nearly circular Einstein ring with 1.4" diameter around a massive galaxy at z = 1.5. Local star-formation intensities are up to 2000 M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$, amongst the highest observed at high redshift, and clearly in the range of maximal starbursts. Gas-mass surface densities are a few $\times$ 10$^4$ M$_{\odot}$ pc$^{-2}$. The Ruby lies at, and in part even above, the starburst sequence in the Schmidt-Kennicutt diagram, and at the limit expected for star formation that is self-regulated through the kinetic energy injection from radiation pressure, stellar winds, and supernovae. We show that these processes can also inject sufficient kinetic energy and momentum into the gas to explain the turbulent line widths, which are consistent with marginally gravitationally bound molecular clouds embedded in a critically Toomre-stable disk. The star-formation efficiency is in the range 1-10% per free-fall time, consistent with the notion that the pressure balance that sets the local star-formation law in the Milky Way may well be universal out to the highest star-formation intensities. AGN feedback is not necessary to regulate the star formation in the Ruby, in agreement with the absence of a bright AGN component in the infrared and radio regimes. • ### Planck's Dusty GEMS. III. A massive lensing galaxy with a bottom-heavy stellar initial mass function at z=1.5(1703.02984) March 8, 2017 astro-ph.GA We study the properties of the foreground galaxy of the Ruby, the brightest gravitationally lensed high-redshift galaxy on the sub-millimeter sky as probed by the Planck satellite, and part of our sample of Planck's Dusty GEMS. The Ruby consists of an Einstein ring of 1.4" diameter at z = 3.005 observed with ALMA at 0.1" resolution, centered on a faint, red, massive lensing galaxy seen with HST/WFC3, which itself has an exceptionally high redshift, z = 1.525 $\pm$ 0.001, as confirmed with VLT/X-Shooter spectroscopy. Here we focus on the properties of the lens and the lensing model obtained with LENSTOOL. The rest-frame optical morphology of this system is strongly dominated by the lens, while the Ruby itself is highly obscured, and contributes less than 10% to the photometry out to the K band. The foreground galaxy has a lensing mass of (3.70 $\pm$ 0.35) $\times$ 10$^{11}$ M$_{\odot}$. Magnification factors are between 7 and 38 for individual clumps forming two image families along the Einstein ring. We present a decomposition of the foreground and background sources in the WFC3 images, and stellar population synthesis modeling with a range of star-formation histories for Chabrier and Salpeter initial mass functions (IMFs). Only the stellar mass range obtained with the latter agrees well with the lensing mass. 
This is consistent with the bottom-heavy IMFs of massive high-redshift galaxies expected from detailed studies of the stellar masses and mass profiles of their low-redshift descendants, and from models of turbulent gas fragmentation. This may be the first direct constraint on the IMF in a lens at z = 1.5, which is not a cluster central galaxy. • ### Planck's Dusty GEMS. II. Extended [CII] emission and absorption in the Garnet at z=3.4 seen with ALMA(1610.01169) Oct. 4, 2016 astro-ph.GA We present spatially resolved ALMA [CII] observations of the bright (flux density S=400 mJy at 350 microns), gravitationally lensed, starburst galaxy PLCK G045.1+61.1 at z=3.427, the "Garnet". This source is part of our set of "Planck's Dusty GEMS", discovered with the Planck's all-sky survey. Two emission-line clouds with a relative velocity offset of ~600 km/s extend towards north-east and south-west, respectively, of a small, intensely star-forming clump with a star-formation intensity of 220 Msun/yr/kpc^2, akin to maximal starbursts. [CII] is also seen in absorption, with a redshift of +350 km/s relative to the brightest CO component. [CII] absorption has previously only been found in the Milky Way along sightlines toward bright high-mass star-forming regions, and this is the first detection in another galaxy. Similar to Galactic environments, the [CII] absorption feature is associated with [CI] emission, implying that this is diffuse gas shielded from the UV radiation of the clump, and likely at large distances from the clump. Since absorption can only be seen in front of a continuum source, the gas in this structure can definitely be attributed to gas flowing towards the clump. The absorber could be part of a cosmic filament or merger debris being accreted onto the galaxy. We discuss our results also in light of the on-going debate of the origin of the [CII] deficit in dusty star-forming galaxies. • ### Planck's Dusty GEMS: Gravitationally lensed high-redshift galaxies discovered with the Planck survey(1506.01962) June 5, 2015 astro-ph.GA We present an analysis of 11 bright far-IR/submm sources discovered through a combination of the Planck survey and follow-up Herschel-SPIRE imaging. Each source has a redshift z=2.2-3.6 obtained through a blind redshift search with EMIR at the IRAM 30-m telescope. Interferometry obtained at IRAM and the SMA, and optical/near-infrared imaging obtained at the CFHT and the VLT reveal morphologies consistent with strongly gravitationally lensed sources. Additional photometry was obtained with JCMT/SCUBA-2 and IRAM/GISMO at 850 um and 2 mm, respectively. All objects are bright, isolated point sources in the 18 arcsec beam of SPIRE at 250 um, with spectral energy distributions peaking either near the 350 um or the 500 um bands of SPIRE, and with apparent far-infrared luminosities of up to 3x10^14 L_sun. Their morphologies and sizes, CO line widths and luminosities, dust temperatures, and far-infrared luminosities provide additional empirical evidence that these are strongly gravitationally lensed high-redshift galaxies. We discuss their dust masses and temperatures, and use additional WISE 22-um photometry and template fitting to rule out a significant contribution of AGN heating to the total infrared luminosity. Six sources are detected in FIRST at 1.4 GHz. 
Four have flux densities brighter than expected from the local far-infrared-radio correlation, but in the range previously found for high-z submm galaxies, one has a deficit of FIR emission, and six are consistent with the local correlation. The global dust-to-gas ratios and star-formation efficiencies of our sources are predominantly in the range expected from massive, metal-rich, intense, high-redshift starbursts. An extensive multi-wavelength follow-up programme is being carried out to further characterize these sources and the intense star-formation within them. • A detailed study is presented of the expected performance of the ATLAS detector. The reconstruction of tracks, leptons, photons, missing energy and jets is investigated, together with the performance of b-tagging and the trigger. The physics potential for a variety of interesting physics processes, within the Standard Model and beyond, is examined. The study comprises a series of notes based on simulations of the detector and physics processes, with particular emphasis given to the data expected from the first years of operation of the LHC at CERN. • ### Simultaneous NIR/sub-mm observation of flare emission from SgrA*(0811.2753) Nov. 17, 2008 astro-ph We report on a successful, simultaneous observation and modeling of the sub-millimeter to near-infrared flare emission of the Sgr A* counterpart associated with the super-massive black hole at the Galactic center. Our modeling is based on simultaneous observations that have been carried out on 03 June, 2008 using the NACO adaptive optics (AO) instrument at the ESO VLT and the LABOCA bolometer at the APEX telescope. Inspection and modeling of the light curves show that the sub-mm follows the NIR emission with a delay of 1.5+/-0.5 hours. We explain the flare emission delay by an adiabatic expansion of the source components. • ### An evolving hot spot orbiting around Sgr A*(0810.0138) Oct. 1, 2008 astro-ph Here we report on recent near-infrared observations of the Sgr A* counterpart associated with the super-massive ~ 4x10^6 M_sun black hole at the Galactic Center. We find that the May 2007 flare shows the highest sub-flare contrast observed until now, as well as evidence for variations in the profile of consecutive sub-flares. We modeled the flare profile variations according to the elongation and change of the shape of a spot due to differential rotation within the accretion disk. • ### Coordinated multi-wavelength observations of Sgr A*(0810.0168) Oct. 1, 2008 astro-ph We report on recent near-infrared (NIR) and X-ray observations of Sagittarius A* (Sgr A*), the electromagnetic manifestation of the ~4x10^6 solar masses super-massive black hole (SMBH) at the Galactic Center. The goal of these coordinated multi-wavelength observations is to investigate the variable emission from Sgr A* in order to obtain a better understanding of the underlying physical processes in the accretion flow/outflow. The observations have been carried out using the NACO adaptive optics (AO) instrument at the European Southern Observatory's Very Large Telescope (July 2005, May 2007) and the ACIS-I instrument aboard the Chandra X-ray Observatory (July 2005). We report on a polarized NIR flare synchronous with an 8x10^33 erg/s X-ray flare in July 2005, and a further flare in May 2007 that shows the highest sub-flare to flare contrast observed until now. The observations can be interpreted in the framework of a model involving a temporary disk with a short jet. 
In the disk component, flux density variations can be explained by hot spots on relativistic orbits around the central SMBH. The variations of the sub-structures of the May 2007 flare are interpreted as a variation of the hot spot structure due to differential rotation within the disk. • ### Coordinated mm/sub-mm observations of Sagittarius A* in May 2007(0810.0177) Oct. 1, 2008 astro-ph At the center of the Milky Way, with a distance of ~8 kpc, the compact source Sagittarius A* (SgrA*) can be associated with a supermassive black hole of ~4x10^6 solar masses. SgrA* shows strong variability from the radio to the X-ray wavelength domains. Here we report on simultaneous NIR/sub-millimeter/X-ray observations from May 2007 that involved the NACO adaptive optics (AO) instrument at the European Southern Observatory's Very Large Telescope, the Australian Telescope Compact Array (ATCA), the US mm-array CARMA, the IRAM 30m mm-telescope, and other telescopes. We concentrate on the time series of mm/sub-mm data from CARMA, ATCA, and the MAMBO bolometer at the IRAM 30m telescope.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9053936004638672, "perplexity": 3215.575140377076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747774.97/warc/CC-MAIN-20201205104937-20201205134937-00063.warc.gz"}
https://www.physicsforums.com/threads/quantum-field-theory-profound-insight-antiparticles.900572/
# A Quantum Field theory profound insight antiparticles 1. Jan 16, 2017 ### binbagsss Hi, I have recently begun studying quantum field theory and have just seen how the quantization of the complex scalar field, noting that there is invariance of the action under a phase rotation, shows the existence of antiparticles. I just have a couple of questions; apologies in advance if they are stupid: - I'm after some more physical intuition on the phase rotations? I can't find much in a Google search - Probably a stupid question: are there any other field types with other invariances that demonstrate the existence of antiparticles in the same way? Many thanks 2. Jan 16, 2017 ### Staff: Mentor Can you give a reference? 3. Jan 17, 2017 ### binbagsss is it incorrect? 4. Jan 17, 2017 ### stevendaryl Staff Emeritus What you may be thinking of is this: • From the Lagrangian of the complex scalar field, you can use Noether's theorem to derive a conserved current. • This current, unlike that of the Schrodinger equation, is not always positive. • So it cannot be interpreted as a particle probability density. • But it can be interpreted as a charge density, with contributions from both negatively charged and positively charged particles. 5. Jan 17, 2017 ### vanhees71 Any useful QFT textbook is a reference. The line of argument roughly goes like this: To define a Poincare covariant local (micro causal) QFT you need a combination of positive- and negative-frequency solutions of free-field equations of motion. Here local means that you have field operators that transform under Poincare transformations (represented by unitary representations of the covering group of the proper orthochronous Lorentz group) as their classical counterparts, i.e., for a scalar field $$\hat{U}(\Lambda) \hat{\phi}(x) \hat{U}^{\dagger}(\Lambda) = \hat{\phi}(\Lambda^{-1} x),$$ where $\Lambda$ is an arbitrary orthochronous Lorentz transformation (and analogously for space-time translations). In order to have a Hamiltonian that is bounded from below, which we need to have a stable ground state, the negative-frequency modes have to enter as a term proportional to a creation operator and the positive-frequency modes as a term proportional to an annihilation operator (in non-relativistic QFT the field consists only of a superposition of annihilation operators!). In this way the negative-frequency solutions appear as positive-energy modes of either the same particles as represented by the annihilation operators associated with the positive-frequency modes (then for a scalar field you have an Hermitean scalar field) or a different kind of particles, which necessarily have the same mass as the particles but with the opposite charge of the conserved Noether current from the then possible U(1) symmetry (invariance under global phase rotations of the field). These are called the anti-particles associated with the particles. This analysis goes through for particles with any spin $s \in \{0,1/2,1,3/2,\ldots \}$. It also follows from the representation theory of the Poincare group that half-integer (integer) spin particles must be quantized as fermions (bosons), which is the famous spin-statistics theorem. Further, any local microcausal QFT with Hamiltonian bounded from below is also necessarily symmetric under the "grand reflection" CPT (charge conjugation, parity=spatial reflections, and time reversal), which is the famous Pauli-Lueders CPT theorem. 
There's no necessity for C, P, T, or CP to be conserved for such a QFT, and indeed the weak interaction violates all of these discrete symmetries, but of course not CPT. 6. Jan 17, 2017 ### binbagsss Mmm, I have Noether's theorem roughly as: if S is invariant up to a surface term under some transformation, this implies there is a conserved current. The complex scalar field leaves S invariant under a complex phase rotation. We find the conserved current. We then operate with H and Q, H the Hamiltonian and Q the conserved charge associated with the conserved current described above, on both a+|0> and b+|0>, where, from the quantisation of the complex scalar field, a+ is the creation operator for particle type 1 and b+ is the creation operator for particle type 2, and the outcome is that operating with H returns the same energy (mass), whilst operating with Q returns the same charge but with a different sign, thus showing the existence of particles and antiparticles. 7. Jan 17, 2017 ### stevendaryl Staff Emeritus Yes, what you're saying is right. I was saying that even at the level of a classical field theory, the current for a complex scalar field must involve charge densities of both signs.
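For reference, the current and charge being discussed take the following standard form (sign and normalization conventions vary between textbooks): $$j^{\mu} = i\left(\phi^{*}\,\partial^{\mu}\phi - (\partial^{\mu}\phi^{*})\,\phi\right), \qquad \hat{Q} = \int \mathrm{d}^{3}x\, \hat{j}^{0} = \int \frac{\mathrm{d}^{3}p}{(2\pi)^{3}} \left(\hat{a}^{\dagger}_{\mathbf{p}}\hat{a}_{\mathbf{p}} - \hat{b}^{\dagger}_{\mathbf{p}}\hat{b}_{\mathbf{p}}\right),$$ so that $[\hat{Q},\hat{a}^{\dagger}_{\mathbf{p}}]=+\hat{a}^{\dagger}_{\mathbf{p}}$ and $[\hat{Q},\hat{b}^{\dagger}_{\mathbf{p}}]=-\hat{b}^{\dagger}_{\mathbf{p}}$: the $a$-quanta and $b$-quanta carry the same mass but opposite charges.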
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9567897915840149, "perplexity": 794.0361213362826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948587496.62/warc/CC-MAIN-20171216084601-20171216110601-00091.warc.gz"}
https://math.stackexchange.com/questions/2251090/galois-group-of-an-irreducible-separable-polynomial-be-abelian-then-each-of
# If the Galois group of an irreducible, separable polynomial is abelian, does each root of the polynomial generate the splitting field? Let $f(x)\in k[x]$ be an irreducible, separable polynomial, and let $E$ be the splitting field of $f(x)$; then $E/k$ is a Galois extension. If $Gal(E/k)$ is abelian, then is it true that $E=k(a)$ for every root $a$ of $f(x)$? My Attempt: As every subgroup of $G:=Gal(E/k)$ is normal, for any root $a$ of $f(x)$, $k(a)$ is Galois over $k$; then if $H:=Gal(E/k(a))$, we have $|H|=[E:k(a)]$ and $[G:H]=|Gal(k(a)/k)|=[k(a):k]=n$. Also, as $n:=\deg f =[k(a):k]$ divides $[E:k]=|G|$ and $G$ is abelian, $G$ has a subgroup $K$ of order $n$; let $F$ be the fixed field of $K$, then $E/F$ and $F/k$ are Galois and $[E:F]=|K|=n$. But this is not leading anywhere. • @GalPorat : what definition ? do you mean we don't need the automorphism group to be abelian etc. ? – user228169 Apr 25 '17 at 8:08 • Related 1, 2 – Jyrki Lahtonen Apr 25 '17 at 10:25 The proof goes as follows: Since $\operatorname{Gal}(E/k)$ is abelian, any subgroup is normal, hence any intermediate field of $E/k$ is normal. In particular $k(a)/k$ is normal and by the definition of a normal extension, it follows that all roots of $f$ are contained in $k(a)$. Hence $E=k(a)$. • Ah yes ! Thanks a lot – user228169 Apr 25 '17 at 8:27 • Short and to the point. Nice. +1 – DonAntonio Apr 25 '17 at 8:41 Since $f$ is irreducible, the Galois group is transitive on the roots. In a transitive commutative permutation group, for any $x$, $Stab_G(x) = \{id\}$ : Let $x$ be a root. If some element $g$ fixes $x$ and $y$ is another root, then there is an $h$ such that $h(x)=y$ and so $g(y) = gh(x) = hg(x) = h(x) = y$. Therefore if $g$ fixes $x$, it fixes every root, and so $g$ must be the identity. This shows that $k(x) = E^{Stab_G(x)} = E^{\{id\}} = E$
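Two concrete examples illustrate the dichotomy. For $f=x^4+x^3+x^2+x+1$ over $\mathbb{Q}$ (the 5th cyclotomic polynomial), the Galois group is $(\mathbb{Z}/5\mathbb{Z})^{\times}$, which is abelian, and indeed any single primitive 5th root of unity already generates the full splitting field $\mathbb{Q}(\zeta_5)$. By contrast, for $f=x^3-2$ the Galois group is $S_3$, which is non-abelian, and the root $\sqrt[3]{2}$ generates the real field $\mathbb{Q}(\sqrt[3]{2})$, which is strictly smaller than the splitting field $\mathbb{Q}(\sqrt[3]{2},\zeta_3)$.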
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.994854211807251, "perplexity": 118.73552337817293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317817.76/warc/CC-MAIN-20190823020039-20190823042039-00077.warc.gz"}
https://math.stackexchange.com/questions/1490120/let-a-be-a-bounded-open-set-in-mathbbrn-give-an-example-where-int-b
Let $A$ be a bounded open set in $\mathbb{R}^n$. Give an example where $\int_{\bar{A}} f$ exists but $\int_A f$ does not. Integration in the Riemann sense. Let $A$ be a bounded open set in $\mathbb{R}^n$; let $f: \mathbb{R}^n\to \mathbb{R}$ be a bounded continuous function. Give an example where $\int_{\bar{A}} f$ exists but $\int_A f$ does not. I cannot think of any such example. I would greatly appreciate it if anyone can provide me with some insight. • In which sense are you defining integration, Lebesgue or Riemann? In the Lebesgue sense this result is nonsensical, since $f 1_A$ will necessarily also be integrable if $f$ is. – Ian Oct 21 '15 at 2:59 • Sorry, should've made it clear, I mean Riemann. – nomadicmathematician Oct 21 '15 at 3:14 • @Ian: this might be silly, but is it obvious that $1_A$ is measurable if $1_{\overline{A}}$ is? – Giovanni Oct 21 '15 at 3:14 • @Giovanni $A$ is open and $\overline{A}$ is closed, so the indicator functions are definitely measurable. – Ian Oct 21 '15 at 3:15 • @takecare How do you define Riemann integration on an open set? (All notions of Riemann integration that I understand depend on compactness.) – Ian Oct 21 '15 at 3:16 I don't think this statement makes sense as formulated. In particular, if $f$ is integrable on $\overline{A}$, then its set of discontinuity points has measure zero. The restriction of $f$ to $A$ will not add any discontinuity points (because it can be given by the composition of $f$ with the inclusion map, and the inclusion map is continuous). So the set of discontinuity points of the restriction will also have measure zero, so $f$ will be integrable on $A$ as well (if that notion even makes sense in the Riemann setting). A reformulation (assuming we've somehow sensibly defined Riemann integration on an open set) is as follows. Let $C$ be a fat Cantor set, let $A=[0,1] \setminus C$, and then consider $f=1_A$. Then $f$ is actually continuous on $A$, because it is constant there. But no matter what, $f$ cannot be Riemann integrable on $\overline{A}$, because its set of discontinuity points is $C$ which has positive measure. • I don't really understand your example. $f$ is supposed to be continuous on $\mathbb R^n$, not only on $A$. Besides, $f$ is supposed to be integrable on $\bar A$. – user251257 Oct 21 '15 at 4:00 • @user251257 I agree, I had to change some of the original hypotheses somewhat because as formulated they do not make sense. In particular, if $f$ is integrable on $\overline{A}$, then no discontinuities are introduced by restricting to $A$, so by the Lebesgue criterion, $f$ is also integrable on $A$ (whatever that even means in the Riemann case). – Ian Oct 21 '15 at 4:21 • @user251257 Er, what? $A$ is a bounded open set. $f=1$ will be integrable on both $\overline{A}$ and $A$. There is no way to satisfy the OP's condition by having the integrals blow up. – Ian Oct 21 '15 at 4:33
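A minimal numerical sketch of the fat Cantor set used in that answer (Python; the removal fractions $4^{-n}$ are one standard choice):

```python
# Smith–Volterra–Cantor ("fat Cantor") set: at step n, delete an open middle
# interval of length 4^{-n} from each remaining closed interval of [0, 1].
# Total removed length = sum_n 2^{n-1} * 4^{-n} = 1/2, so the set kept has
# measure 1/2 > 0 even though it contains no interval.
intervals = [(0.0, 1.0)]
for n in range(1, 20):
    gap = 4.0 ** (-n)
    new = []
    for a, b in intervals:
        mid = (a + b) / 2
        new += [(a, mid - gap / 2), (mid + gap / 2, b)]
    intervals = new

measure = sum(b - a for a, b in intervals)
print(measure)   # ≈ 0.500001, approaching 1/2
```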
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9781813025474548, "perplexity": 142.47484331995554}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986648481.7/warc/CC-MAIN-20191014003258-20191014030258-00045.warc.gz"}
http://mathhelpforum.com/calculus/208033-rc-circuit.html
# Math Help - RC-Circuit 1. ## RC-Circuit a) The current in a wire is defined as the derivative of the charge: I(t)=Q'(t). The quantity of charge Q in coulombs (C) that has passed through a point in a wire up to time t (measured in seconds) is given by $Q(t)=Q_{\text{max}}\left(1-e^{-\frac{1}{RC}t}\right)$. At what time is the charge the maximum charge, $Q_{\text{max}}$? At what time is the charge minimum? Determine $\lim_{t\to\infty}Q(t)$. 2. ## Re: RC-Circuit To find when $Q(t)=Q_{\text{max}}$, we may set: $Q_{\text{max}}=Q_{\text{max}}\left(1-e^{-\frac{1}{RC}t} \right)$ Now, solve for $t$.
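A short SymPy check (assuming $R, C, Q_{\text{max}} > 0$):

```python
# Q(t) = Qmax * (1 - exp(-t/(R*C))) -> Qmax only as t -> oo, and Q(0) = 0,
# so the charge is minimal at t = 0 and the maximum is reached in the limit.
from sympy import symbols, exp, limit, oo

t = symbols('t', positive=True)
R, C, Qmax = symbols('R C Q_max', positive=True)

Q = Qmax * (1 - exp(-t / (R * C)))
print(limit(Q, t, oo))       # Q_max
print(Q.subs(t, 0))          # 0
```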
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9260697960853577, "perplexity": 1243.6821363089464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663467.44/warc/CC-MAIN-20140930004103-00041-ip-10-234-18-248.ec2.internal.warc.gz"}
http://openstudy.com/updates/55db53c4e4b02e69bf5cff6c
## anonymous one year ago Can someone please help me step by step with this problem? x^(-2/3) (2-x)^2 - 6x^1/3 (2-x) 1. anonymous • one year ago Without knowing exactly what you're trying to do, we can't help much. I'd guess you're wanting to simplify the given expression. $x^{-2/3}(2-x)^2-6x^{1/3}(2-x)$ For starters, there's a common factor of $$2-x$$, so let's pull that out: $(2-x)\left[x^{-2/3}(2-x)-6x^{1/3}\right]$ Next, notice that $$-\dfrac{2}{3}=-\dfrac{3}{3}+\dfrac{1}{3}$$. Since $$x^{a+b}=x^ax^b$$, this means that $$x^{-2/3}=x^{-3/3}x^{1/3}=x^{-1}x^{1/3}$$, and so the two bracketed terms share another common factor of $$x^{1/3}$$. Pulling that out gives $x^{1/3}(2-x)\left[x^{-1}(2-x)-6\right]$ What else can you do?
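Finishing the simplification, the bracket equals $x^{-1}(2-x)-6=(2-7x)/x$, so the whole expression reduces to $x^{-2/3}(2-x)(2-7x)$. A quick numerical spot-check (Python; the sample point is arbitrary):

```python
# Spot-check that x^(-2/3)(2-x)^2 - 6 x^(1/3)(2-x) equals
# x^(-2/3)(2-x)(2-7x) at an arbitrary point x > 0.
x = 0.37
original = x**(-2/3) * (2 - x)**2 - 6 * x**(1/3) * (2 - x)
factored = x**(-2/3) * (2 - x) * (2 - 7*x)
print(original, factored)   # both print the same value (≈ -1.87)
```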
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9746406674385071, "perplexity": 2254.3483714398035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00020-ip-10-171-10-70.ec2.internal.warc.gz"}
https://ntnuopen.ntnu.no/ntnu-xmlui/handle/11250/227501/browse?type=subject&value=angular+momentum
• #### Different solutions in generation of angular momentum during imitation jump take-offs among diverse level ski jumpers  (Master thesis, 2017) Introduction: During the push-off motion in ski jumping, the athlete strives to generate both a high vertical velocity and a correct angular momentum. The latter is important as it brings the athlete into the flight position. ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8723161816596985, "perplexity": 3735.166146301258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487641593.43/warc/CC-MAIN-20210618200114-20210618230114-00620.warc.gz"}
http://mathhelpforum.com/trigonometry/213121-compound-angles-help-print.html
# Compound angles HELP!

• Feb 14th 2013, 03:33 PM Kosky1 Compound angles HELP!

Hi, I'm new to this. I've been given a question on compound angles and I really don't know how to go about it. Any help would be greatly appreciated: using compound angles, prove that $\cos(x-\pi)+\sin(x+\tfrac{\pi}{2})=0$. Thanks, Shaun

• Feb 14th 2013, 04:25 PM Soroban Re: Compound angles HELP!

Hello, Shaun! You are expected to know these Compound Angle Identities:
. . $\sin(A \pm B) \:=\:\sin A\cos B \pm \cos A\sin B$
. . $\cos(A \pm B) \:=\:\cos A\cos B \mp \sin A\sin B$

Quote: Using compound angles prove that: . $\cos(x-\pi)+\sin(x+\tfrac{\pi}{2})\:=\:0$

We have: . $\cos(x - \pi) \:=\:\cos(x)\cos(\pi) + \sin(x)\sin(\pi)$
. . . . . . . . . . . . . . . . $=\;\cos(x)\!\cdot\!(\text{-}1) + \sin(x)\!\cdot\!(0)$
. . . . . . . . . . . . . . . . $=\;\text{-}\cos(x)$

We have: . $\sin(x +\tfrac{\pi}{2}) \;=\;\sin(x)\cos(\tfrac{\pi}{2}) + \cos(x)\sin(\tfrac{\pi}{2})$
. . . . . . . . . . . . . . . . $=\;\sin(x)\!\cdot\!(0) + \cos(x)\!\cdot\!(1)$
. . . . . . . . . . . . . . . . $=\;\cos(x)$

Therefore: . $\cos(x-\pi) + \sin(x + \tfrac{\pi}{2}) \;=\;\text{-}\cos(x) + \cos(x) \;=\;0$

• Feb 14th 2013, 04:47 PM Kosky1 Re: Compound angles HELP!

Ah, thank you. I had those identities right in front of me! Had a complete mind block. Thank you ever so much!
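Since the identity is exact, a numerical spot-check in Python should show the left side vanishing to floating-point precision:

```python
import math

# cos(x - pi) = -cos(x) and sin(x + pi/2) = cos(x), so the sum vanishes
lhs = lambda x: math.cos(x - math.pi) + math.sin(x + math.pi / 2)

print(all(abs(lhs(x)) < 1e-12 for x in [-2.0, -0.5, 0.0, 1.0, 3.7]))  # True
```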
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9494987726211548, "perplexity": 250.6848537153191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320206.44/warc/CC-MAIN-20170623235306-20170624015306-00295.warc.gz"}
http://math.stackexchange.com/questions/76985/checking-the-binary-relations-symmetric-antisymmetric-and-etc
# Checking the binary relations, symmetric, antisymmetric and etc

This is my first post. My homework was to check each of the tables and find out whether they are reflexive, symmetric, antisymmetric and transitional. I would appreciate your help. I need someone to check my results; I really don't know why I do what I do, especially with antisymmetric and transitional.

1: reflexive: yes, symmetric: no, antisymmetric: yes and transitional: yes
2: reflexive: yes, symmetric: yes, antisymmetric: no and transitional: yes
3: reflexive: no, symmetric: yes, antisymmetric: no and transitional: no
4: reflexive: yes, symmetric: no, antisymmetric: yes and transitional: yes
5: reflexive: yes, symmetric: no, antisymmetric: yes and transitional: no

I read all the info about these relations, but it is hard for me. So what's the story: to rule out the antisymmetric case, do I need to find at least one exception? Should I do the same with transitional?

- If anyone can edit my post, please insert/correct the image :) – CrystalCrystalNYPD Oct 29 '11 at 19:21
- I believe you mean "transitive", not "transitional". – Austin Mohr Oct 29 '11 at 19:59
- Yes, transitive. I haven't written math problems in English before :) It should be transitive, but I cannot change it due to my low reputation level, and the image would disappear :O – CrystalCrystalNYPD Oct 29 '11 at 20:41

Reflexive: there are no zeros on the diagonal. Symmetric: the table has to be symmetric. Antisymmetric: if you reflect the table across the diagonal (I mean a mirror symmetry, where the diagonal is the mirror), then every 1 goes to a 0 (but a 0 can go to a 0). Transitive: I can't think of any smart method of checking that. You just check directly: take element #1 (and then all the rest) and look at all the 1s in its row. If there is a 1 in the column of, say, element #3 (you have to check all the 1s), you look at row #3 and check that for every 1 in that row there is a 1 in row #1 - it can be done at a glance, so it is not that bad. If you want to say 'yes', you have to check everything. But if while checking you find that something is 'wrong', then you just say 'no', because one exception is absolutely enough. There is no such thing as 'yes, but...' in mathematics :) You are wrong about antisymmetric: it does not mean 'asymmetric'. It means that if there is (1,5) in the relation, then (5,1) is not. Reflexive and symmetric are OK. I'm too lazy to check the last part.

You are correct about reflexivity and symmetry. The main diagonal is all $1$'s, so the relation is reflexive, and the matrix is symmetric about the main diagonal, so the relation is symmetric. A relation $R$ is antisymmetric if $$(a R b\text{ AND }bRa)\implies (a=b)\;.$$ For your first relation you have $1R3$ and $3R1$, and yet $3\ne 1$, so this relation is not antisymmetric. For the same reason the second and fourth relations are not antisymmetric. The third relation is not antisymmetric because $1R5$ and $5R1$, even though $1 \ne 5$. The fifth is not antisymmetric because $2R5$ and $5R2$, even though $2 \ne 5$. Thus, none of the relations is antisymmetric. The visual test is to see whether there is a pair of $1$'s that are symmetric with respect to the main diagonal; if there are $-$ and we found such a pair for every one of these five relations $-$ then the relation is not antisymmetric. There is no simple test for transitivity. For the first relation, note that $2R3$ and $3R1$, but it's not true that $2R1$, so the first relation is not transitive.
In the third relation $1R4$ and $4R2$, but it's not true that $1R2$, so the relation is not transitive. In the fourth relation $2R3$ and $3R1$, but it isn't true that $2R1$, so the relation is not transitive. In the last relation $3R2$ and $2R1$, but it's not the case that $3R1$, so the relation is not transitive. The second relation is, as you say, transitive. One way to demonstrate this is to notice that in this relation every odd number in $\{1,2,3,4,5\}$ is related to every odd number, and every even number to every even number, but no odd number is related to any even number or vice versa. Thus, if $aRb$ and $bRc$, then either $a,b$, and $c$ are all odd, or they're all even, and in either case it's true that $aRc$.
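For readers who would rather automate these checks than eyeball the tables, a small Python sketch over a 0/1 matrix:

```python
def relation_properties(M):
    """Check a 0/1 relation matrix M, where M[i][j] == 1 means i R j."""
    n = len(M)
    idx = range(n)
    reflexive     = all(M[i][i] for i in idx)
    symmetric     = all(M[i][j] == M[j][i] for i in idx for j in idx)
    antisymmetric = all(i == j or not (M[i][j] and M[j][i])
                        for i in idx for j in idx)
    transitive    = all(M[i][k] or not (M[i][j] and M[j][k])
                        for i in idx for j in idx for k in idx)
    return reflexive, symmetric, antisymmetric, transitive

# Example: equality on 3 elements has all four properties.
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(relation_properties(I3))  # (True, True, True, True)
```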
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9553759694099426, "perplexity": 269.1016097081878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701150206.8/warc/CC-MAIN-20160205193910-00019-ip-10-236-182-209.ec2.internal.warc.gz"}
https://lavelle.chem.ucla.edu/forum/search.php?author_id=10889&sr=posts
## Search found 50 matches

Thu Mar 15, 2018 2:07 pm Forum: Galvanic/Voltaic Cells, Calculating Standard Cell Potentials, Cell Diagrams Topic: Cell Diagram Question from Wednesday's (03/15/18) Lecture Replies: 2 Views: 174 ### Re: Cell Diagram Question from Wednesday's (03/15/18) Lecture The line separates the electrode from the solution (each line in a cell diagram marks a phase boundary).

Thu Mar 15, 2018 2:04 pm Forum: Galvanic/Voltaic Cells, Calculating Standard Cell Potentials, Cell Diagrams Topic: Coefficients when writing cell diagrams Replies: 3 Views: 217 ### Re: Coefficients when writing cell diagrams No. Cell diagrams are simply a summary of the players in the redox reactions.

Thu Mar 15, 2018 2:00 pm Forum: Reaction Mechanisms, Reaction Profiles Topic: Water in Mechanism Replies: 5 Views: 280 ### Re: Water in Mechanism If it is the solvent, then you can leave it out of the rate law. This is because solvents are present in such high concentrations that their concentrations don't change much during the reaction.

Sat Mar 10, 2018 7:46 pm Forum: Balancing Redox Reactions Topic: 14.5 Part a Replies: 4 Views: 253 ### Re: 14.5 Part a If there are any H+ ions, you need to convert them to H2O using OH-.

Sat Mar 10, 2018 7:45 pm Forum: Calculating Standard Reaction Entropies (e.g., Using Standard Molar Entropies) Topic: Change in Enthalpy vs. Change in Entropy [ENDORSED] Replies: 2 Views: 307 ### Re: Change in Enthalpy vs. Change in Entropy[ENDORSED] Enthalpy and entropy are both state properties: their changes depend only on the initial and final states (heat and work, by contrast, are path-dependent).

Sat Mar 10, 2018 7:38 pm Forum: Concepts & Calculations Using First Law of Thermodynamics Topic: 8.49 Replies: 2 Views: 186 ### Re: 8.49 Using the 8.314 results in the correct units.

Sun Mar 04, 2018 4:27 am Forum: Method of Initial Rates (To Determine n and k) Topic: K' Replies: 7 Views: 402 ### Re: K' You can also use the formulas given in the equations sheet. Just make sure you understand the formulas and which order reaction to use them for.

Sun Mar 04, 2018 4:26 am Forum: Reaction Mechanisms, Reaction Profiles Topic: Independent of rate Replies: 3 Views: 196 ### Re: Independent of rate The rate is independent of the concentration if the reaction is zero order with respect to that reactant.

Sun Mar 04, 2018 4:25 am Forum: General Rate Laws Topic: 15.13 part a? Replies: 3 Views: 207 ### Re: 15.13 part a? The rate equals k(concentration of H2)(concentration of I2). It is, in a way, k(concentration)^2.

Sun Feb 25, 2018 4:49 pm Forum: Galvanic/Voltaic Cells, Calculating Standard Cell Potentials, Cell Diagrams Topic: reducing power? Replies: 5 Views: 578 ### Re: reducing power? Reducing power is the ability of something to give up electrons so that those electrons can go reduce some other molecule.

Sun Feb 25, 2018 4:46 pm Forum: General Rate Laws Topic: rate of consumption sign Replies: 2 Views: 171 ### Re: rate of consumption sign The terms themselves imply the sign. If you have a negative rate of consumption, you would be forming the molecule; it doesn't make sense to call it the rate of consumption. The same goes for rate of formation.

Sun Feb 25, 2018 4:40 pm Forum: Galvanic/Voltaic Cells, Calculating Standard Cell Potentials, Cell Diagrams Topic: Strongly Reducing Metals Replies: 5 Views: 269 ### Re: Strongly Reducing Metals The lower the standard reduction potential, the more unlikely the metal will get reduced (since more negative E values result in more positive G values).
Sun Feb 18, 2018 11:59 pm Forum: Galvanic/Voltaic Cells, Calculating Standard Cell Potentials, Cell Diagrams Topic: 14.15.c Replies: 2 Views: 170 ### 14.15.c For this question, the overall equation is given as Cd(s) + 2 Ni(OH)3(s) --> Cd(OH)2(s) + 2 Ni(OH)2(s). However, in the cell diagram, there was a KOH. Where did the potassium come from?

Sun Feb 18, 2018 11:52 pm Forum: Galvanic/Voltaic Cells, Calculating Standard Cell Potentials, Cell Diagrams Topic: 14.13 d Replies: 1 Views: 173 ### 14.13 d What are the rules for balancing a redox equation with only one reactant/product? For example, the equation given in 14.13.d is Au+(aq) --> Au(s) + Au3+(aq). I thought Au+(aq) would be used in both half reactions, but the solutions manual chose to use Au(s) for both reactions instead.

Sun Feb 18, 2018 11:42 pm Forum: Work, Gibbs Free Energy, Cell (Redox) Potentials Topic: G=-nFE Replies: 3 Views: 255 ### G=-nFE Why is the "n" in this equation always positive even when there is a negative change in moles?

Wed Feb 07, 2018 10:54 pm Forum: Gibbs Free Energy Concepts and Calculations Topic: 9.51 [ENDORSED] Replies: 3 Views: 339 ### Re: 9.51[ENDORSED] deltaG = deltaH - TdeltaS. If deltaG is negative, then the reaction is spontaneous. Since deltaH is negative, the reaction will be spontaneous for all positive values of deltaS and small negative values of deltaS.

Wed Feb 07, 2018 10:45 pm Forum: Third Law of Thermodynamics (For a Unique Ground State (W=1): S -> 0 as T -> 0) and Calculations Using Boltzmann Equation for Entropy Topic: 9.25 Replies: 2 Views: 158 ### Re: 9.25 They are asking for molar entropy, so there are 6.02E23 molecules.

Wed Feb 07, 2018 10:37 pm Forum: Third Law of Thermodynamics (For a Unique Ground State (W=1): S -> 0 as T -> 0) and Calculations Using Boltzmann Equation for Entropy Topic: 9.25 Replies: 1 Views: 116 ### Re: 9.25 The question asks for molar entropy, and there are 6.02E23 molecules. 6 orientations and 6.02E23 molecules result in W = 6^(6.02E23). Plug it into Boltzmann's equation and you will get the answer.

Sun Feb 04, 2018 8:13 pm Forum: Phase Changes & Related Calculations Topic: enthalpy and entropy when it comes to spontaneous reactions Replies: 6 Views: 348 ### Re: enthalpy and entropy when it comes to spontaneous reactions It depends on the temperature.

Sun Feb 04, 2018 8:04 pm Forum: Entropy Changes Due to Changes in Volume and Temperature Topic: 8.53 Replies: 2 Views: 121 ### Re: 8.53 The problem gives all reactants and products. As for the phases, everything is in gas form at the given temperatures. Water can be liquid, but the problem specifies that the reaction uses water vapor.

Sat Feb 03, 2018 7:22 pm Forum: Entropy Changes Due to Changes in Volume and Temperature Topic: Which value of R to use? Replies: 3 Views: 167 ### Re: Which value of R to use? The method you mentioned should work as long as you make sure that you end up with the units that correspond to the variable you are solving for.

Fri Jan 26, 2018 4:41 pm Forum: Phase Changes & Related Calculations Topic: 8.41 Replies: 6 Views: 343 ### Re: 8.41 Use the specific heat capacity of ice when you are raising the temperature of ice. Because the ice is already at 0C, you don't need to use the specific heat capacity of ice because the ice doesn't get warmer than 0C. Instead, use the heat of fusion to calculate the energy needed to melt the ice, the...
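For the 9.25 posts above, the arithmetic is quick to verify: with $W = 6^{N_A}$ for one mole, $S = k_B \ln W = N_A k_B \ln 6 = R \ln 6$. A small Python check (assuming six equally likely orientations per molecule):

```python
import math

R = 8.314  # gas constant, J/(K*mol), equal to N_A * k_B

# W = 6^(N_A) per mole, so S = k_B*ln(6^N_A) = N_A*k_B*ln(6) = R*ln(6)
S_molar = R * math.log(6)
print(f"{S_molar:.1f} J/(K*mol)")  # ~14.9 J/(K*mol)
```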
Fri Jan 26, 2018 4:29 pm Forum: Phase Changes & Related Calculations Topic: When to use Kelvin or Celsius Replies: 10 Views: 1385 ### Re: When to use Kelvin or Celsius If the equation requires deltaT, then it doesn't matter whether you use K or C. If the equation requires T and not deltaT, then usually you will need to use K. Make sure to keep track of your units and that after cancelling, your ending units are actually what you are looking for.

Fri Jan 26, 2018 4:26 pm Forum: Reaction Enthalpies (e.g., Using Hess's Law, Bond Enthalpies, Standard Enthalpies of Formation) Topic: Refresher on sig figs Replies: 3 Views: 224 ### Re: Refresher on sig figs The values given in the appendix are accurate to the hundredths place. When you add them, your answer will still be accurate to the hundredths place. It happens that with the hundredths place, you will have 6 sig figs.

Sun Jan 21, 2018 1:56 am Forum: Heat Capacities, Calorimeters & Calorimetry Calculations Topic: change in internal energy Replies: 4 Views: 190 ### Re: change in internal energy deltaU = q + w. When no work is done, deltaU=q. When no heat is transferred, deltaU=w.

Sun Jan 21, 2018 1:48 am Forum: Thermodynamic Definitions (isochoric/isometric, isothermal, isobaric) Topic: Reversible and Irreversible Replies: 4 Views: 267 ### Re: Reversible and Irreversible For irreversible expansions against a constant external pressure, w=-(Pext)(deltaV). For reversible isothermal expansions, w=-nRTln(V2/V1).

Sun Jan 21, 2018 1:42 am Forum: Thermodynamic Systems (Open, Closed, Isolated) Topic: Difference between Closed and Isolated Replies: 10 Views: 1091 ### Re: Difference between Closed and Isolated In an isolated system, neither particles nor heat can be transferred. In a closed system, heat can be transferred but particles cannot.

Sun Jan 21, 2018 1:41 am Forum: Reaction Enthalpies (e.g., Using Hess's Law, Bond Enthalpies, Standard Enthalpies of Formation) Topic: 8.65 Replies: 2 Views: 122 ### Re: 8.65 The solution stops at 2NO(g) + 3/2O2(g) --> N2O5(g) because you can use the enthalpy of formation of NO(g) to figure out the enthalpy of formation of N2O5(g). For 2NO(g) + 3/2O2(g) --> N2O5(g), deltaH is -169.2 kJ/mol. Thus, 1(deltaHfN2O5) - 2(deltaHfNO) = -169.2 kJ/mol. Using the appendix, 1(deltaH...

Sun Jan 21, 2018 1:30 am Forum: Calculating Work of Expansion Topic: Reversible vs Isothermal Replies: 3 Views: 248 ### Re: Reversible vs Isothermal A reversible isothermal expansion is one in which the volume changes slowly at a constant temperature. For an ideal gas at constant temperature, deltaU=0, so q=-w.

Sat Jan 20, 2018 1:04 am Forum: Heat Capacities, Calorimeters & Calorimetry Calculations Topic: 8.37 Replies: 3 Views: 187 ### 8.37 For homework problem 8.37b, we are asked to calculate the enthalpy of vaporization of ethanol (given that 21.2kJ vaporized 22.45 grams). The answer key gives the answer as 43.5kJ/mol, but would 0.944kJ/g also be correct? In other words, what units are we supposed to use for enthalpy of vaporization ...
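For the 8.37 post just above, both numbers are consistent; they simply use different bases, and enthalpies of vaporization are conventionally reported per mole. Converting with ethanol's molar mass (about 46.07 g/mol) recovers the answer key's value; a quick Python check:

```python
q = 21.2   # heat required, kJ
m = 22.45  # mass of ethanol vaporized, g
M = 46.07  # molar mass of ethanol, g/mol

per_gram = q / m         # ~0.944 kJ/g
per_mole = per_gram * M  # ~43.5 kJ/mol, matching the answer key
print(f"{per_gram:.3f} kJ/g = {per_mole:.1f} kJ/mol")
```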
The number of moles of HCl remains the same between intended and actual solutions. From earlier in the course, we learned that M1V1=M2V2. Thus, using...

Mon Nov 27, 2017 9:14 pm Forum: Lewis Structures Topic: N2H6 Replies: 2 Views: 1305 ### Re: N2H6 The nitrogen is already very unhappy having 9 outer shell electrons in the single bond. I imagine that it will become suicidal if you decide to give it 10 outer shell electrons with the double bond. It wasn't even supposed to have an expanded octet in the first place. Don't force the relationship or...

Mon Nov 27, 2017 9:03 pm Forum: Equilibrium Constants & Calculating Concentrations Topic: 11.1 Homework Replies: 2 Views: 178 ### Re: 11.1 Homework Simply changing the concentration of the reactants or products will not change the equilibrium constant. If you add more reactant, then the reaction will shift to the right. If you add more product, then the reaction will shift to the left. In the end (after you wait a while for the reaction to reac...

Wed Nov 22, 2017 1:28 am Forum: Hybridization Topic: 4.95 composition of the pi bond Replies: 2 Views: 174 ### Re: 4.95 composition of the pi bond When carbon forms single bonds, it is usually in either sp, sp2, or sp3 hybridization. The single bond is a sigma bond. If the carbon is either sp or sp2 hybridized, then it will have one (in the case of sp2) or two (in the case of sp) unhybridized p-orbitals. The extra p orbitals will form double b...

Wed Nov 22, 2017 1:19 am Forum: Naming Topic: Ligand IUPAC naming Replies: 1 Views: 117 ### Re: Ligand IUPAC naming He said that there is a newer method, but we will be using the older one.

Mon Nov 13, 2017 7:49 pm Forum: Determining Molecular Shape (VSEPR) Topic: AXE Replies: 3 Views: 217 ### Re: AXE The notation is AX_E_, with the _ indicating the number of terminal atoms (X) or number of lone pairs (E). If there are no lone pairs, simply omit the E (you will end up with AX_). If there is one lone pair, omit the subscript (you will end up with AX_E). If there is more than one lone pair, you wil...

Mon Nov 13, 2017 7:38 pm Forum: Determining Molecular Shape (VSEPR) Topic: CO2 vs. H2O Replies: 6 Views: 1396 ### Re: CO2 vs. H2O 1. Draw VSEPR structures. 2. Draw dipole moments. 3. CO2 has dipole moments, but they cancel out because they are equal in magnitude and opposite in direction. Therefore, CO2 is a nonpolar molecule despite having polar bonds. H2O has dipole moments, but they do not cancel out because they are equal ...

Wed Nov 08, 2017 11:40 pm Forum: Lewis Structures Topic: Exceptions Replies: 4 Views: 311 ### Re: Exceptions Chromium and copper, because they only need one electron to fill half or all of the d orbitals.

Wed Nov 08, 2017 11:38 pm Forum: Ionic & Covalent Bonds Topic: Polar vs. Non-polar Replies: 7 Views: 434 ### Re: Polar vs. Non-polar If the molecule is not symmetric, then it is (generally) polar because the dipole moments (if any) do not cancel out. If the molecule is symmetric but the outside atoms are not the same, then it is polar because the dipole moments do not cancel out. If the molecule is symmetric and the outside atoms...

Mon Oct 30, 2017 9:24 pm Forum: Electron Configurations for Multi-Electron Atoms Topic: 2.63 Replies: 2 Views: 209 ### Re: 2.63 In general, the elements to the top right of the periodic table have higher ionization energies. Conversely, the elements to the bottom left have lower ionization energies. The elements given in the problem all follow this trend.
Just rank them by which ones are higher up or further to the right on the periodic table.

Mon Oct 30, 2017 9:06 pm Forum: Electron Configurations for Multi-Electron Atoms Topic: Valence-shell Configuration for Transition Metals Replies: 1 Views: 165 ### Valence-shell Configuration for Transition Metals For HW problem 2.55 c, we are asked to write the valence-shell configuration for the Group 5 transition metals. I got (n-1)d3 ns2, but the answer key says (n-1)d5 ns2. Does the question mean to ask for the fifth group of transition metals or is there some special property of Group 5 metals that make...

Mon Oct 23, 2017 10:30 pm Forum: Quantum Numbers and The H-Atom Topic: "No two electrons are the same..." [ENDORSED] Replies: 5 Views: 461 ### Re: "No two electrons are the same..."[ENDORSED] No two electrons in a single atom can have the same four quantum numbers.

Mon Oct 23, 2017 10:26 pm Forum: Quantum Numbers and The H-Atom Topic: Abbreviating e- configurations [ENDORSED] Replies: 7 Views: 554 ### Re: Abbreviating e- configurations[ENDORSED] It depends on what the question is asking for. I would use the noble gas configuration when the question asks for it.

Fri Oct 20, 2017 3:25 am Forum: Properties of Light Topic: Angstrom Replies: 3 Views: 343 ### Re: Angstrom It can refer to either. The Angstrom is a unit of length.

Fri Oct 20, 2017 3:20 am Forum: Properties of Light Topic: Color of visible light Replies: 11 Views: 698 ### Re: Color of visible light No, it is not necessary.

Fri Oct 13, 2017 12:46 am Forum: Properties of Light Topic: Prefix Conversion Replies: 12 Views: 661 ### Re: Prefix Conversion I will usually convert the nm into m first because most ratios use m (such as the speed of light). After I finish solving the problem, I make sure that the answer is in the units that the question asks for. If the question does not specify, I will usually just leave it in meters.

Wed Oct 11, 2017 11:05 pm Forum: Properties of Light Topic: Relationship between the frequency of electromagnetic radiation and electrical field Replies: 2 Views: 222 ### Relationship between the frequency of electromagnetic radiation and electrical field On number 1.3, the question asks for what happens when the frequency of electromagnetic radiation decreases. The solutions manual says that the correct answer is "The extent of the change in the electrical field at a given point decreases". Why is this the correct answer?

Fri Oct 06, 2017 11:34 am Forum: Limiting Reactant Calculations Topic: M9 Replies: 3 Views: 247 ### Re: M9 For net ionic equations, the soluble compounds will dissolve in water. Then, if the separated ions are able to form insoluble compounds, we will have a precipitate. In this problem, the precipitate is Cu(OH)2. The only molecules represented in the net ionic equations are precipitates and the aqueous...

Fri Oct 06, 2017 11:22 am Forum: SI Units, Unit Conversions Topic: Sig Figs [ENDORSED] Replies: 16 Views: 2101 ### Re: Sig Figs[ENDORSED] There are 3 sig figs. The 0's to the left of nonzero numbers do not count as sig figs. However, if there were 0's to the right of nonzero numbers, those 0's would be significant.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.831297755241394, "perplexity": 4533.485328900687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890566.2/warc/CC-MAIN-20200706222442-20200707012442-00366.warc.gz"}
https://www.miniphysics.com/electric-current-ss.html
# Electric Current (SS)

Electric charge in motion is called electric current, and it forms the basis of current electricity. Static electricity, or electrostatics, on the other hand, involves charges at rest.

Electric current (I) is the rate of flow of charge (Q).

• SI unit: Ampere (A)
• Can be measured by an ammeter (must be connected in SERIES in the circuit)

$$I = \frac{Q}{t}$$

A current of one ampere is a flow of charge at the rate of one coulomb per second. For electric current in a metal conductor (a solid), the charge carriers are electrons. For historical reasons, the direction of the conventional current is always taken to be opposite to the direction in which the electrons actually move.

• Current in gases and liquids generally consists of a flow of positive ions in one direction together with a flow of negative ions in the opposite direction.

Electric current generates a magnetic field. The strength of the magnetic field depends on the magnitude of the electric current. Current electricity consists of any movement of electric charge carriers, such as subatomic charged particles (e.g. electrons having negative charge, protons having positive charge), ions (atoms that have lost or gained one or more electrons), or holes (electron deficiencies that may be thought of as positive particles).

• If the direction of the current (charge flow) is fixed, it is known as a direct current. If the motion of the electric charges is periodically reversed, it is called an alternating current.

## Analogy to river:

In order to help you understand the concept of current better, you can think of a river. Current in an electric circuit is similar to water flowing through the river.

## Examples:

An electric current in a wire involves the movement of 1. electrons 2. atoms 3. molecules 4. protons

A. Electric current in a wire (solid conductor) involves the movement of electrons.

The lower part of a cloud has a positive charge. The cloud discharges in a flash of lightning. In which direction do electrons and conventional current flow?

As the cloud is positively charged, negative charges (electrons) flow upwards. Therefore, the conventional current flows in the opposite direction (downwards).

A battery moves a charge of 60 C around a circuit at a constant rate in a time of 20 s. What is the current in the circuit?

Current is the rate of movement of charge. \begin{aligned} I &= \frac{Q}{t} \\ &= \frac{60}{20} \\ &= 3.0 \, C/s \\ &= 3.0 \, A \end{aligned}
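The same arithmetic in code, extended with a count of the charge carriers (a small Python sketch; 1.602e-19 C is the elementary charge):

```python
Q = 60.0       # charge moved, C
t = 20.0       # time taken, s
e = 1.602e-19  # elementary charge, C

I = Q / t          # 3.0 A, as in the worked example above
electrons = Q / e  # ~3.7e20 electrons pass the point
print(I, electrons)
```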
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.994855523109436, "perplexity": 611.4643275604449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948868.90/warc/CC-MAIN-20230328170730-20230328200730-00680.warc.gz"}
https://www.physicsforums.com/threads/planewave-cutoff-energy-convergence.549372/
# Planewave cutoff energy convergence!

1. Nov 10, 2011 ### mayerz Hello everyone, I have some problems with the convergence of the planewave cutoff energy. I tried to find the correct PWcE for a system with 2 atoms. I've determined the minimum k-point mesh with success, but when I compute the total energy varying the PWcE, it seems that convergence is not achieved until very high values of PWcE. Should I change the pseudopotentials for the atoms? I use DACAPO. I'm Mexican by the way, so sorry if I made a mistake while I wrote this. best,

2. Nov 11, 2011 ### Useful nucleus If the pseudopotential is hard then you may not achieve convergence until reaching a very high cutoff (~700+ eV). On the other extreme, ultrasoft pseudopotentials tend to converge at much lower values. Another factor is the type of element you are studying. Check the manual of your code and the recommended values for the pseudopotential.

3. Nov 11, 2011 ### mayerz Thanks for your response, but in fact I'm using ultrasoft pseudopotentials (Vanderbilt). It seems that the total energy converges at 400 eV, but when I compute the total energy with higher PWcE, the total E falls again to lower values. I didn't allow the atoms to relax, so the only convergence criterion is the total energy. Should I change to a relaxation calculation? best

4. Nov 11, 2011 ### Useful nucleus There is no need to conduct the energy convergence test in a relaxation simulation. A single point calculation is good enough to test energy convergence. Can you tell us the total energy difference per atom between the 400 eV cutoff calculation and the more conservative one?

5. Nov 11, 2011 ### mayerz Of course! If I compute the total energy difference between the calculations done with a planewave cutoff energy of 320 and 400 eV, I get about 36 kJ/mol; in the interval 400-480 eV (PWcE) I get about 4 kJ/mol; and in the PWcE range 480-700 eV, I get 17 kJ/mol (I've already done some calculations with higher PWcE, up to 880 eV, and at this point (starting at 700) the plot seems to be almost asymptotic). But it seems kind of weird to me, because I was hoping for a lower PWcE value. Does it make any sense? Could I take the 400 eV value as the planewave cutoff energy? By the way, I use a 6-atom periodic model! Last edited: Nov 11, 2011
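One way to make the "asymptotic" judgement systematic is to accept the smallest cutoff after which every further increase changes the total energy by less than a chosen tolerance. A Python sketch (the absolute energies below are hypothetical, reconstructed only from the differences quoted in the thread; the tolerance is likewise an arbitrary choice):

```python
# (cutoff in eV, total energy in kJ/mol); energies are made up but
# reproduce the reported differences: 36, 4, 17, then ~flat.
runs = [(320, -1000.0), (400, -1036.0), (480, -1040.0),
        (700, -1057.0), (880, -1058.0)]

def converged_cutoff(runs, tol):
    """Smallest cutoff after which all further changes stay below tol."""
    for i in range(len(runs) - 1):
        diffs = [abs(b[1] - a[1]) for a, b in zip(runs[i:], runs[i + 1:])]
        if all(d < tol for d in diffs):
            return runs[i][0]
    return None

print(converged_cutoff(runs, tol=5.0))  # 700, matching the observation above
```

This also explains the poster's confusion: the 400-480 eV difference alone (4 kJ/mol) looks converged, but the later 17 kJ/mol jump means the plateau really starts near 700 eV.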
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.909853994846344, "perplexity": 1404.795633421376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689615.28/warc/CC-MAIN-20170923085617-20170923105617-00609.warc.gz"}
https://www.striveworks.com/blog/pruning-resnets-for-fun-and-profit
# Pruning ResNets for Fun and Profit

### What are Neural Networks?

Neural networks are highly parameterized functions which can be trained to fit/represent/model data. Parameterized models are ubiquitous; even simple linear regression, fitting $$f(x) = mx+b$$ to find the best fit line, is parameterized by the (learnable) slope $$m$$ and intercept $$b$$. In practice, neural networks are typically far more complex than simple linear regression. Typical neural network models are a composition of several to many simpler parameterized functions often referred to as layers. We tend to think of data flowing through a neural network layer-by-layer; the output of one layer becomes the input to the subsequent layer. These iterative transformations and the many learnable parameters in neural networks give them their expressive power–or, the ability to approximate functions and learn distributions of data.

### What is Neural Network Model Pruning?

Neural network models are often over-parameterized and frequently contain millions or even billions of learned parameters. For some of the most difficult machine learning tasks, these massive models may be required to achieve state of the art accuracy (or even just acceptably good accuracy). However, in most cases, models can be greatly simplified and still maintain most of their accuracy. Neural network pruning is one method for simplifying a neural network model.

Figure 1 illustrates the pruning process. In this example, the full model's layer has 3 inputs ($$x_1$$, $$x_2$$, and $$x_3$$), 3 outputs ($$y_1$$, $$y_2$$, and $$y_3$$), and 9 learned weights ($$w_{i,j}$$ for $$i, j\in {1, 2, 3}$$). The output $$y_1$$ is a weighted sum of the inputs: $y_1 = w_{1,1}x_1 + w_{2,1}x_2 + w_{3,1}x_3.$ If it's the case that $$|w_{1,1}| + |w_{2,1}| + |w_{3,1}|$$ is very close to zero, then $$y_1$$ will be very close to zero for most inputs (assuming none of the $$x_i$$'s are very large).

### Figure 1: A simple example of pruning. Removing one output feature reduces the number of learned weights (edges) and the amount of computation required to compute all the layer outputs.

If this is the case, then $$y_1$$ varies very little with the network's input and thus has little impact on the network's predictions. Pruning exploits this insensitivity by simply removing the output $$y_1$$ and the weights used to compute it ($$w_{1,1}$$, $$w_{2,1}$$, and $$w_{3,1}$$); note this is equivalent to removing the weights or simply setting $$y_1$$ to be a constant value of 0.

Technical Note: If we remove the weights $$w_{1,1}$$, $$w_{2,1}$$, and $$w_{3,1}$$, thereby removing the output $$y_1$$, then we also need to adjust the layer immediately following the one pictured. This is because the subsequent layer originally expected three inputs ($$y_1$$, $$y_2$$, and $$y_3$$) but is now only receiving two inputs. The result will be that a few weights, which were applied solely to $$y_1$$, are no longer needed and can (must) also be removed.
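To make this concrete, here is a minimal PyTorch sketch of the Figure 1 surgery: removing the output feature $$y_1$$ from a small linear layer and shrinking the following layer to match. (This is an illustration of the idea, not the implementation used later in this post.)

```python
import torch

# Original 3 -> 3 layer and a following 3 -> 2 layer.
layer1 = torch.nn.Linear(3, 3, bias=False)
layer2 = torch.nn.Linear(3, 2, bias=False)

keep = [1, 2]  # indices of the surviving outputs (y2 and y3); y1 is pruned

pruned1 = torch.nn.Linear(3, len(keep), bias=False)
pruned2 = torch.nn.Linear(len(keep), 2, bias=False)
with torch.no_grad():
    pruned1.weight.copy_(layer1.weight[keep, :])  # drop y1's row of weights
    pruned2.weight.copy_(layer2.weight[:, keep])  # drop the columns fed by y1

x = torch.randn(5, 3)
out = pruned2(pruned1(x))  # same computation as before, minus y1's contribution
```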
### Figure 2: A high-level view of ResNet18 which comprises 8 residual blocks.

ResNet models are deep neural networks that are composed of a sequence of residual blocks. Figure 2 shows a high-level view of ResNet18, which is composed of 8 residual blocks. In a traditional convolutional neural network, the data flows along a single path forward, from one layer to the next. The residual blocks, which form the basis of ResNets, create skip connections allowing input data to bypass (or skip) some of the computation that would otherwise be done in the block–effectively creating two forward paths along which the data flows. One purpose of these skip connections is to shorten the path from input to output (or, more specifically, from output to input during a backwards pass) in an attempt to mitigate the vanishing gradient problem (see the technical note for all the gory details). As a result, ResNets maintain the expressive power of deep neural networks without suffering (as badly as other deep networks) from vanishing gradients.

### Figure 3: The computational path through this residual block preserves the size and shape of the input tensor allowing for the final point-wise addition of the block's input tensor with the block's output tensor.

Technical Note (vanishing gradient): The vanishing gradient problem refers to a phenomenon where updates to weights (especially in early layers of a very deep network) become infinitesimally small, preventing them from learning meaningful features. To illustrate the cause of the vanishing gradient, and how skip connections lessen the risk of it occurring, consider a traditional neural network with 5 layers: $$f_1, f_2, f_3, f_4,$$ and $$f_5$$ where for any input $$x_0$$ the output is: $y=f_5(f_4(f_3(f_2(f_1(x_0))))).$ For notational simplicity, let's denote $$y_i=f_i(f_{i-1}(…f_1(x_0)))$$ so that $$y=y_5=f_5(y_4),$$ etc. And, to abuse notation slightly, we'll simultaneously use $$f_i$$ to be the function which represents the $$i^{\text{th}}$$ layer and the learned weights (or parameters) contained in that layer. Weight updates at each layer will have the form: $f_i\leftarrow f_i - \lambda \frac{\partial L}{\partial f_i}$ where the partial derivative (gradient) is evaluated at that layer's input, $$y_{i-1}$$, and $$L$$ represents whatever loss function is being optimized. As we update the weights in every layer, we compute the gradient for each layer (recycling any computation we can to avoid doing it twice).

$\frac{\partial L}{\partial f_5} \rightarrow \text{directly computed}$

$\frac{\partial L}{\partial f_4} = \frac{\partial L}{\partial f_5}\cdot \frac{\partial f_5}{\partial f_4} \rightarrow \text{(recycled) (computed)}$

$\frac{\partial L}{\partial f_3} = \frac{\partial L}{\partial f_5}\cdot\frac{\partial f_5}{\partial f_4}\cdot\frac{\partial f_4}{\partial f_3}$

$\frac{\partial L}{\partial f_2} = \frac{\partial L}{\partial f_5}\cdot\frac{\partial f_5}{\partial f_4}\cdot\frac{\partial f_4}{\partial f_3}\cdot\frac{\partial f_3}{\partial f_2}$

$\frac{\partial L}{\partial f_1} = \frac{\partial L}{\partial f_5}\cdot\frac{\partial f_5}{\partial f_4}\cdot\frac{\partial f_4}{\partial f_3}\cdot\frac{\partial f_3}{\partial f_2}\cdot\frac{\partial f_2}{\partial f_1}$

If each of the terms $$\frac{\partial f_i}{\partial f_{i-1}}$$ is small (e.g. smaller in absolute value than 1) then the product of many such terms quickly becomes vanishingly small and, as a result, the weights of the first layer(s) are essentially static. If we insert a skip connection between layers 1 and 5 (so that the input to layer 5 consists of the output of layer 4 added to the output of layer 1) then our network description is altered slightly (and so are the derivatives).
The network becomes: $y=f_5(f_1(x_0) + f_4(f_3(f_2(f_1(x_0))))).$ And the gradients (partial derivatives) become:

$\frac{\partial L}{\partial f_5} \rightarrow \text{ (unchanged)}$

$\frac{\partial L}{\partial f_4} = \frac{\partial L}{\partial f_5}\cdot \frac{\partial f_5}{\partial f_4}\rightarrow \text{ (unchanged)}$

$\frac{\partial L}{\partial f_3} = \frac{\partial L}{\partial f_5}\cdot\frac{\partial f_5}{\partial f_4}\cdot\frac{\partial f_4}{\partial f_3}\rightarrow \text{ (unchanged)}$

$\frac{\partial L}{\partial f_2} = \frac{\partial L}{\partial f_5}\cdot\frac{\partial f_5}{\partial f_4}\cdot\frac{\partial f_4}{\partial f_3}\cdot\frac{\partial f_3}{\partial f_2}\rightarrow \text{ (unchanged)}$

$\frac{\partial L}{\partial f_1} = \frac{\partial L}{\partial f_5}\left(\frac{\partial f_5}{\partial f_1} + \frac{\partial f_5}{\partial f_4}\cdot\frac{\partial f_4}{\partial f_3}\cdot\frac{\partial f_3}{\partial f_2}\cdot\frac{\partial f_2}{\partial f_1}\right)$

In this case, most of the updates are the same. However, for layer 1, there are two terms (reflecting the two paths data follows to the network output). The first term has only two products (so vanishing gradient is not an issue) and the second term is the same as we previously saw (which may suffer from the vanishing gradient). The heuristic to keep in mind is that the number of steps required to get from a layer's input to the network's output is the number of products (of partial derivatives) that will be computed in making the weight update. If that number of steps is very high, then it is easily possible for the gradient at that layer to vanish. By inserting skip connections, one keeps the (minimal) number of steps required to get to the output relatively small.

Figure 3 shows the structure of a basic, shape-preserving residual block. In this case, the computation within the block creates an output tensor which has the same shape and size as the block's input tensor. This allows for the coordinate-wise addition of the input tensor with the output tensor. Note that there are several types of residual blocks and the internal structure of the blocks varies somewhat; the blocks illustrated here are the simplest such blocks and are referred to as Basic Blocks. More complex blocks may contain additional Convolution-BatchNorm-ReLU groups before applying the pointwise addition. The important feature that all residual blocks share is the pointwise addition of an output tensor with the block's input tensor.

Sometimes the computation within a residual block can alter the shape or size of the tensor moving through the block, see Figure 4. This may happen, for example, if the number of learned kernels in the final convolution layer (the number of output channels) isn't the same as the number of input channels to the first convolution layer; in this case, the height and width of the output tensor may match the height and width of the input tensor, but the depths of the two tensors differ. Additionally, if any convolution layer within the block has a stride that is not 1, then the output tensor's height and width will be smaller than the input tensor's height and width. In order to make this final point-wise addition feasible, something must be done to reshape either the input tensor, the output tensor, or both.

### Figure 4: Sometimes the computational path through a residual block alters the size and shape of data passing through the block. In these cases, without modification, the block's input tensor cannot be added point-wise to the output tensor.
The solution which is typically employed by ResNet architectures is to insert, wherever necessary, a lightweight convolution layer (1x1 convolution) with a BatchNorm chaser (see Figure 5). Doing this allows for the needed flexibility to reshape the input and maintain the feasibility of the final pointwise addition.

### Figure 5: By inserting a relatively lightweight resampling function (convolution) the network can reshape the input tensor to match the output tensor's shape making the point-wise addition possible.

### Preparing to Train and Prune ResNets

To prune a convolutional neural network one selects (via any number of criteria) convolutional filters/kernels to remove. Since each convolutional filter/kernel produces a channel (or slice) of the layer's output tensor, removing a kernel has the immediate impact of reducing the number of channels in the output tensor. Thus, if we were to remove a kernel from the second convolutional layer in a residual block that originally preserved shapes (e.g. as in Figure 3) we suddenly find ourselves in a position where shapes have been altered (e.g. as in Figure 4). To prepare for inevitably changing shapes, we ensure that every residual block contains a 1x1 convolution layer (as shown in Figure 5) and optionally also includes the batch norm layer depicted there. We use that convolutional "through" layer to resize the block's input to match the shape of the block's output; as we prune the block's internal layers, we adapt the through layer so that output shapes always match.

Pruning the first convolutional layer in any residual block is significantly easier. When we remove a kernel from the first convolutional layer, the only other layers affected are the subsequent batch norm layer (which maintains per-channel means and variances) and the subsequent convolution layer (whose kernels expect a certain number of input channels).

Ideally, when pruning a neural network, the kernels selected for removal are those which are not impacting the model's ability to make predictions. However, in practice, rather than having no impact, those kernels have a small impact on model performance, and pruning many kernels simultaneously can have a larger impact on performance. As a result, after pruning a model, it is important to allow for a modest amount of additional training–often called fine-tuning. Compared to the cost of training a full model to convergence, the cost of fine-tuning a pruned model is very small. When viewed as a whole, the training and pruning process follows this pattern (a minimal sketch of the loop is given below):

1. Train the full model (as usual, typically long).
2. Prune a modest amount of the model.
3. Resume training (pruned model, typically short).
4. If the pruned model recovers sufficient accuracy, repeat from step 2.
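A minimal Python sketch of that loop (the three callables stand in for whatever training, pruning, and validation routines a given codebase provides; the default iteration counts mirror the experiment reported later in this post):

```python
def train_and_prune(model, train_steps, prune_step, accuracy,
                    target_acc, full_iters=40_000, finetune_iters=8_000):
    """Iterative train -> prune -> fine-tune (steps 1-4 above).

    train_steps(model, n)  -- run n optimization batches in place
    prune_step(model, p)   -- remove ~p of the remaining conv kernels
    accuracy(model)        -- validation accuracy in [0, 1]
    """
    train_steps(model, full_iters)           # 1. train the full model
    while True:
        prune_step(model, 0.05)              # 2. prune ~5% of current kernels
        train_steps(model, finetune_iters)   # 3. short fine-tune
        if accuracy(model) < target_acc:     # 4. stop once accuracy is lost;
            return model                     #    a real implementation would
                                             #    keep the last passing model
```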
### Why not just train a pre-pruned network?

At first glance it may seem like this process (train a full model, prune, fine-tune, repeat) is unnecessarily complicated. Why don't we just start with the smaller model, training it from scratch? The first challenge one encounters is simple: how do you know what the right starting size of the model should be? If one initially chooses a model which is too small, the accuracy may simply never be good enough and one would have to retrain with a larger network (again becoming an iterative process which doesn't simplify the original one). If the initial model is still too large, one could again prune (again becoming iterative).

In practice, and for reasons which are not fully understood, training a smaller model from scratch tends to produce worse models than training a very large model and subsequently pruning. In other words, if we applied the above pruning process to find the right starting size, then randomly initialized the small model and trained it, it would be very unlikely that we would achieve comparable accuracy. One conjecture as to the source of this problem is the challenge of getting good weight initializations. In [3] the authors refer to this as the lottery ticket hypothesis. Roughly, the idea is that by starting with a large (fat) model, i.e. one with many kernels in each layer, one effectively buys many lottery tickets (each kernel being a lottery ticket). Before training starts, it is impossible to know which will be the winners, but once the winners have been determined, the rest are no longer needed and can be discarded. Then, trying to train from a small model is akin to trying to win the lottery by only purchasing a minimal number of tickets. The authors showed that if they stored the original weights, trained a model to convergence, determined which kernels were winners, then pruned the model (to only include the winners), reset the winners to their original weights, and re-trained, they could recover the highly accurate model. However, their result is still not conclusive, as it was later shown to not always be effective, and in practical terms it wasn't very useful (because learning which tickets were winners required training the full model). So, the short answer as to why we follow this iterative approach is simple: in practice, not only does it work, but it works better than any other strategy currently known.

### How to Select Kernels to Prune

There is a wide body of research around how to select which kernels to prune from any given neural network model; in this section we describe very briefly some of that research. Pruning strategies roughly fall into two categories: data-agnostic pruning and data-aware pruning. One of the earliest proposed pruning strategies [5] involved locally approximating a loss function and estimating the impact on the loss function if any particular neuron were to be pruned–pruning the neurons with the least negative impact.

A recent data-aware approach was proposed initially as a means to attempt to explain neural network predictions. This approach [2] was to remove convolutional kernels one-by-one and measure the impact on the predictive ability for each class. The goal was that by identifying a small number of kernels essential to accurately predict a class, one could explain what it was about those kernels which made that prediction possible. While the method turned out not to lead to easy explanations, a novel side-effect was that pruning could be done aggressively and in a single shot if one only kept a few kernels that were the most important for each class, e.g. at each layer one might keep the three most important (critical) kernels for each class–this frequently included a lot of overlap, as kernels were often critical for more than one class. With a small amount of fine-tuning the models were shown to recover most of their accuracy. The real limitation with this approach, however, was the computational expense to determine which filters were critical.
The model would have to be re-validated once for each kernel in the network (removing them one at a time and validating); for deep models with many, many kernels (and for large datasets) this computation may be prohibitively expensive.

Data-agnostic pruning strategies are very popular because they tend to be sufficiently effective and relatively inexpensive. We previously hinted that if the sum of the absolute values of a kernel's weights was small then that kernel would produce an output that was almost always close to 0–and is therefore not needed. This is indeed a common pruning strategy, and there are (at least) two ways to look at "how small" a kernel's weights are; pruning is accomplished either by removing all the kernels that are smaller than a threshold, or by removing the smallest kernels regardless of their nominal size. The first way to "size" a kernel is to compute the max norm of that kernel, also called the $\ell_\infty$ norm, or simply, the largest absolute value of any weight in the kernel. This strategy appears in several papers including [6]; it is easy to implement and typically quite effective. A second method for measuring the size of a kernel is to instead compute the kernel's $\ell_1$ norm, or the sum of the absolute values of the weights. Again, after computing the sizes of the kernels one either prunes kernels whose size falls below a threshold or just the smallest few kernels in the network. This strategy is (obviously) very similar to the first but is also frequently used, including in [4].

Less common are data-agnostic strategies which do not rely on kernel size. One such strategy [1] revolves around identifying redundant kernels and removing redundancy. They identify redundant kernels by computing the cosine similarity of the outputs of kernels given random inputs. If the cosine similarity is high, then for many inputs the outputs of the two kernels are nearly identical, and therefore both kernels are not needed. Surprisingly, or maybe not, the work of [7] demonstrated that simply randomly pruning kernels is comparably effective to any of these data-agnostic strategies–a testament to how plastic (adaptable) these neural network models are.

So, how should you choose which kernels to prune? In light of [7] one might be tempted to think that they should simply randomly prune; however, the real take-away is that all of these strategies–at least on standard, benchmark datasets–perform their desired function. In all these cases, model sizes were dramatically shrunk with minimal loss in accuracy. So, in reality, the answer is: you should choose whichever method works for your data and your model, keeping in mind that it is a good idea to be flexible. (The $\ell_1$ scoring step, the simplest of these, is sketched below.)
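As a concrete illustration, here is a minimal PyTorch sketch of $\ell_1$-norm scoring for a single convolution layer (the helper names are ours, not taken from any of the cited papers):

```python
import torch

def l1_kernel_scores(conv: torch.nn.Conv2d) -> torch.Tensor:
    # conv.weight has shape (out_channels, in_channels, kH, kW);
    # score each output kernel by the l1 norm of its weights.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def smallest_kernels(conv: torch.nn.Conv2d, k: int) -> torch.Tensor:
    # indices of the k lowest-scoring kernels: the candidates to prune
    return torch.argsort(l1_kernel_scores(conv))[:k]

conv = torch.nn.Conv2d(64, 128, kernel_size=3)
print(smallest_kernels(conv, k=6))  # 6 kernels this strategy would remove
```

Turning the selected indices into an actual smaller model then means rebuilding the layer with fewer output channels, copying the surviving weights, and shrinking the next layer's expected input channels, exactly as described in the section on preparing ResNets for pruning.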
## Real Impact of Pruning

Neural network pruning has (at least) two impacts. First, a pruned model is necessarily a subset of its un-pruned parent. This means that the pruned model has a strict subset of the weights of its parent and therefore less expressive capacity. As an analogy, recall the function $$f(x)=mx+b$$ for simple linear regression. If we prune one of the parameters, for example the intercept, we are left with $$g(x)= mx$$. Anything expressible by $$g(x)$$ is equally expressible by $$f(x)$$ (setting $$b=0$$). But, there are some things which can be expressed by $$f(x)$$ that cannot be expressed by $$g(x)$$. While a pruned model must be less expressive than its parent, it is not necessarily the case that a pruned model must be less accurate than its parent. In our analogy, for example, if the data we were representing naturally had an intercept of 0, then both models would be equally accurate. In practice it is hard (maybe impossible) to know how accurate a pruned model can (or will) be until it is pruned and tested.

Second, since a pruned model contains a strict subset of the weights of its un-pruned parent, it is necessarily the case that less computation must be done to compute the pruned network's output than its parent's. As a result of requiring less computation, the inference speed of a pruned model is at least as fast (usually faster) than its parent's. There are many subtle factors which impact how much (if any) speed-up will be seen, including: how much of the network was pruned (the more that is pruned, the more likely it is to see a larger speed-up), how well the network's computation is parallelized (pruning can impact the parallelization efficiency positively or negatively), I/O speeds (if it takes longer to load data than to do the computation, then doing less computation will not produce a speed-up because it doesn't affect the I/O speed), etc. In the following sections, we demonstrate the real effects of training and iteratively pruning a ResNet18 model on a dataset containing images of vehicles–the task being to identify the predominant color of the vehicle.

### Minimal Loss of Model Accuracy

First we examine the impacts on the accuracy of our model as we train and iteratively prune. Figure 6 shows the validation accuracy over time as we perform this iterative procedure. The full model is updated through 40,000 iterations (batches of data) achieving an accuracy of just over 95% on the validation dataset. We then prune approximately 5% of all the convolutional kernels in the model and continue training. After the initial pruning little or no accuracy is lost and the model continues to learn. We continue to prune approximately 5% of the model (removing approximately 5% of the convolutional kernels) and immediately fine-tune the model for 8000 batches. Note that, after the initial pruning operations, the accuracy (if lost at all) is recovered significantly more quickly than 8000 iterations–indicating that we could have achieved similar results with less fine-tuning. By the final few pruning iterations we performed, the initial drop in accuracy becomes more pronounced, although in every case the model is able to recover to at least 95% accuracy.

### Figure 6: The validation accuracy of a ResNet18 model as it is trained. The first 40,000 batches are training a full model. Then, pruning iterations begin. Each pruning iteration consists of removing approximately 5% of the convolutional kernels from the existing ResNet18 model and then fine-tuning (re-training) for 8000 additional batches.

### Reduction in Model Size

In Figure 6 we estimated the percentage of convolutional kernels removed from the model. Note, this estimate is rough for two reasons: first, we can only remove an integer number of kernels (so we almost never remove precisely 5% of the model), and second, we are actually attempting to remove 5% of the current model at each step, so as the model shrinks, 5% of the current model is a decreasing fraction of the original model. But, removing entire kernels is not the only impact on the model. As previously discussed, when we remove some kernels, the remaining kernels in subsequent layers shrink (to accept inputs with fewer channels). In Figure 7 we illustrate the relative impact on model size as the pruning occurs.
### Reduction in Model Size

In Figure 6 we estimated the percentage of convolutional kernels removed from the model. Note, this estimate is rough for two reasons: first, we can only remove an integer number of kernels (so we almost never remove precisely 5% of the model), and second, we are actually attempting to remove 5% of the current model at each step, so as the model shrinks, 5% of the current model is a decreasing fraction of the original model. But removing entire kernels is not the only impact on the model. As previously discussed, when we remove some kernels, the remaining kernels in subsequent layers shrink (to accept inputs with fewer channels). In Figure 7 we illustrate the relative impact on model size as the pruning occurs. This model size is measured by examining the memory (disk space) required to store the model (its learned parameters). Since the exact model size is less important, we show here the size of pruned models relative to the full model (a ratio of pruned model to full model). This demonstrates that we can remove almost 80% (79.1%) of the model's learned parameters without negatively impacting the predictive performance.

### Figure 7: The relative model size as the model is initially trained (full model) then iteratively pruned. Note, the size referred to here is the relative size of the learned weights and is directly proportional to the number of learned weights in the pruned model. Specifically, the final model has 20.9% the number of learned parameters of the full model; in other words, almost 80% of the learned weights were removed (pruned) from the full model.

### Improvement in Inference Time

While the direct impacts of a smaller model size may not be obvious (e.g., a smaller memory footprint requiring less power), pruning a model also has a positive effect on the speed at which the model can make predictions. Figure 8 illustrates how much additional throughput the model can produce: specifically, our model which has been pruned by almost 80%, with no loss in accuracy, can make almost 65% more predictions per second than the full model.

### Figure 8: As the model shrinks, inference time improves and more images can be processed in a fixed amount of time. With nearly 80% of the model pruned (and no noticeable loss in validation accuracy) we observe almost a 65% increase in the number of images that can be processed per second.
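Throughput numbers like those behind Figure 8 can be collected with a simple timing loop. A sketch (the warm-up and repetition counts are arbitrary choices of ours, and on a GPU the device must be synchronized before reading the clock):

```python
import time
import torch

@torch.no_grad()
def images_per_second(model, batch, n_warmup=10, n_timed=100):
    """Rough inference throughput: images processed per second."""
    model.eval()
    for _ in range(n_warmup):          # warm up caches / autotuned kernels
        model(batch)
    if batch.is_cuda:
        torch.cuda.synchronize()       # don't time queued-but-unfinished work
    start = time.perf_counter()
    for _ in range(n_timed):
        model(batch)
    if batch.is_cuda:
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    return n_timed * batch.shape[0] / elapsed
```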
## Conclusion

Pruned models are (almost always) better, in about every way that actually matters, than their full, over-parameterized counterparts. Pruned models are less likely to overfit the data; pruned models (may) maintain all or most of the accuracy of the full model; and pruned models require less memory, use less energy, and run inference faster.

## References

1. Babajide O. Ayinde and Jacek M. Zurada. 2018. Building efficient convnets using redundant feature pruning. arXiv preprint arXiv:1802.07653 (2018).
2. Mihaela Dimovska and Travis Johnston. 2019. A Novel Pruning Method for Convolutional Neural Networks Based off Identifying Critical Filters. In Proceedings of the Practice and Experience in Advanced Research Computing on Rise of the Machines (PEARC19). Association for Computing Machinery, New York, NY, USA, Article 63, 1–7.
3. Jonathan Frankle and Michael Carbin. 2019. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. International Conference on Learning Representations (ICLR). Available: https://arxiv.org/abs/1803.03635.
4. Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. 2016. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. arXiv preprint arXiv:1607.03250 (2016).
5. Yann Le Cun, John S. Denker, and Sara A. Solla. 1990. Optimal brain damage. In Advances in Neural Information Processing Systems. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 598–605.
6. Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. 2016. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710 (2016).
7. Deepak Mittal, Shweta Bhardwaj, Mitesh M. Khapra, and Balaraman Ravindran. 2018. Recovering from Random Pruning: On the Plasticity of Deep Convolutional Neural Networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 848–857.

Travis Johnston

Travis Johnston is a Senior Data Scientist at Striveworks. He holds a PhD in Mathematics from the University of South Carolina. Before joining Striveworks at the beginning of 2021, he was a Postdoctoral Researcher at the University of Delaware and a Staff Research Scientist at Oak Ridge National Lab. He is the author/co-author of many scientific publications in mathematics, machine learning, and high performance computing and has mentored many undergraduate and graduate students as well as several postdocs.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.8199832439422607, "perplexity": 754.050097760256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711108.34/warc/CC-MAIN-20221206124909-20221206154909-00094.warc.gz"}
https://www.physicsforums.com/threads/low-pass-filter-uniform-gauss-distribution.871002/
# Low pass filter (uniform/gauss distribution)

• #1

## Homework Statement

I have data x, which is uniformly distributed. After using a discrete low pass filter I get output data u, which has a Gauss distribution. What is the explanation: why, using that filter, can we get a Gauss distribution from a uniform distribution?

## Homework Equations

Low pass filter formula (from the transfer function given in post #4 below):
$$u(n) = 0.111\,x(n) + 0.889\,u(n-1)$$

## The Attempt at a Solution

Filtered data starts to approach the mean value of the input data.

• #2

collinsmark (Homework Helper, Gold Member)

Discrete time filtering of noisy signals essentially involves the addition of scaled, random numbers. Each input sample can be thought of as being associated with a random number (conforming to a random variable with some known distribution, such as a uniform distribution, for example). Each output sample involves scaling these random numbers by some amount and adding them together. In an IIR filter (like this one), this also involves summing new, scaled inputs (new random numbers) together with scaled, previous outputs (old random numbers). The point of what I'm trying to say here is it involves the summing of random numbers.

-----

Let's take a step back a little and talk about the summing of random variables.

Flip a fair coin. It's either heads or tails. Let's assign 0 to tails and 1 to heads. It's uniformly distributed between 0 and 1.

0 [p = 1/2] (tails)
1 [p = 1/2] (heads)

Now flip two coins (or the same coin twice) and add the results together. The distribution isn't uniform any more. The possible outcomes are:

0 [p = 1/4] (both tails)
1 [p = 2/4] (one head, one tail)
2 [p = 1/4] (both heads)

Now flip the coin three times. Possible outcomes:

0 [p = 1/8] (all tails)
1 [p = 3/8] (exactly one head)
2 [p = 3/8] (exactly two heads)
3 [p = 1/8] (all heads)

Four flips:

0 [p = 1/16] (TTTT)
1 [p = 4/16] (HTTT --or-- THTT --or-- TTHT --or-- TTTH)
2 [p = 6/16] (HHTT --or-- HTHT --or-- HTTH --or-- THHT --or-- THTH --or-- TTHH)
3 [p = 4/16] (HHHT --or-- HHTH --or-- HTHH --or-- THHH)
4 [p = 1/16] (HHHH)

Notice that as we sum together more random flips, the distribution function gets smoother and closer to a bell shape. What would happen if we increased the number of flips to a very large number?

---

Now back to the discrete filter. Things might be a little more complicated here because some of the random variables are scaled before they are added. What does the Central Limit Theorem, or something very much like it, tailored to this situation, have to say about this?
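A short simulation makes the coin-flip picture concrete (a sketch in Python/NumPy; the sample counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum n coin flips, many times over, and watch the histogram turn bell-shaped.
for n in (1, 2, 4, 16):
    sums = rng.integers(0, 2, size=(100_000, n)).sum(axis=1)
    probs = np.bincount(sums, minlength=n + 1) / len(sums)
    print(f"n={n:2d} flips:", np.round(probs, 3))
```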
• #3

collinsmark (Homework Helper, Gold Member)

I don't want you to get bogged down with the Central Limit Theorem. It does apply in the case of adding up all the coin flips. But I only mentioned it in my above post as a point of reference (and for the fact that the Central Limit Theorem involves the Gaussian distribution function). But it doesn't really apply directly in the case of the discrete filter, because there is weighting and scaling going on. The distribution of the output of your discrete filter is not truly Gaussian, even if it is pretty close.

If you wish to find the true, mathematically accurate probability distribution of the filter's output, it may help to study what happens when you add two random variables together, each with a different probability distribution, and calculate the distribution of the sum. (Hint: Discrete convolution is involved in the distribution function of the sum.) [Edit: Actually it's continuous (not discrete) convolution that is involved in the probability density function of the sum. The filter performs discrete convolution when it's doing its filtering thing, but if we want to solve for the probability density functions, continuous convolution is involved.]

Take that a step further and iteratively ["recursively" might be the better word] convolve the weighted, previous distribution function of the output with the weighted distribution function of the input. The mathematics can get scary, but if you can do it, you'll have your precise answer. I'm not sure if this is what you are looking for, or if you are trying to find a more subjective (less precise) answer. [Edited for clarification.]

Last edited:

• #4

I like your approach to this question. I hadn't thought that the output is a sum of two different distributions. So actually the output isn't a Gaussian distribution, it just looks like one?

The filter output can be calculated recursively using the difference equation:
$$y(n) = \sum_{k=0}^{M} b_k x(n-k)+\sum_{k=1}^{N} a_k y(n-k)$$
Or using the convolution operation:
$$y(n) = h(n) * x(n),$$
where $h(n)$ is the impulse response. In my case the transfer function of the IIR discrete time filter is:
$$H(z) =\frac {0.111} {1-0.889z^{-1}}$$
So then I tested a few things in Matlab:

Code:
clear all, clc
n = 1000;
x = -0.3 + (0.9-(-0.3)).*rand(n,1);   % uniform input on [-0.3, 0.9]
y = filter(0.111, [1 -0.889], x);     % output of the IIR low-pass filter
figure(1)
subplot(211)
plot(x)
title('Input data')
ylabel('x(n)')
subplot(212)
plot(y)
title('Output data')
ylabel('y(n)')
figure(2)
subplot(211)
hist(x,24)
title('Input data histogram')
subplot(212)
hist(y,24)
title('Output data histogram')
figure(3)
a = [1, -0.889];
b = [0.111];
[h,t] = impz(b,a,1000);               % impulse response of the filter
subplot(211)
stem(t,h);
title('Impulse response h(n)')
subplot(212)
y1 = conv(h,x);                       % filtering as a discrete convolution
plot(y1(1:n),'-r')
ylabel('y(n)')

And this is the result with the convolution formula. There I tested that I can get the same results with the convolution formula and the original filter formula. Is it all OK?

But I guess that is not exactly what I need to explain the output distribution. As I understand your hints, I need to do a convolution operation with two different distribution functions. I know the probability density function for the uniform distribution (input data):
$$F(x) =\frac {x-a+1} {b-a+1}, \quad x\in [a, b]$$
But it's not clear to me what the "weighted, previous distribution function of the output" is. How can I describe it with a probability density function?

• #5

collinsmark (Homework Helper, Gold Member)

> I like your approach to this question. I hadn't thought that the output is a sum of two different distributions. So actually the output isn't a Gaussian distribution, it just looks like one?

Right. If my intuition and memory serve me, the probability density function of the filter's output will not be Gaussian unless $T$ approaches infinity. (Don't ask me to prove that though.) In this exercise, $T$ is 9. The number 9 is pretty close to infinity, which is why the output looks sort of Gaussian, but it's not quite there, so the distribution function is not truly Gaussian. By the way, if the probability density function of the input were Gaussian, then the filter's output would be Gaussian regardless of what $T$ is. But that's a special case that does not apply to this exercise.
> The filter output can be calculated recursively using the difference equation [...] In my case the transfer function of the IIR discrete time filter is: $$H(z) =\frac {0.111} {1-0.889z^{-1}}$$ So then I tested a few things in Matlab [...] Is it all OK?

Yes, I think that looks right to me. Except if I were you I would change your 0.111 and 0.889 numbers to 1/9 and 8/9 (or at least drag out the precision a little longer). Filters, whether they are analog or digital (including discrete time filters), can in some cases be pretty sensitive to precision and rounding errors.

Of course, filters involve convolution as part of their very operation. But the process for finding the mathematically precise probability density function (pdf) of a filter's output uses a different process than what the filter uses directly, even though they both involve convolution. Both processes use convolution. But they use it in different ways. More on this below.

> But I guess that is not exactly what I need to explain the output distribution. As I understand your hints, I need to do a convolution operation with two different distribution functions.

Yes, the process involves convolving two different probability density functions. And more so, it involves repeating this process a large (perhaps approaching infinity) number of times.

> I know the probability density function for the uniform distribution (input data): $$F(x) =\frac {x-a+1} {b-a+1}, \quad x\in [a, b]$$

Firstly, that's closer to the cumulative distribution function, not the probability density function. Secondly, you don't need the '1's. I suggest getting rid of those. I would say the cumulative distribution function $F_x(x)$ for a uniform random variable is
$$F_x(x) = \begin{cases} 0 & x < a \\ \frac{x - a}{b - a} & a \leq x \leq b \\ 1 & x > b \end{cases}$$
To find the probability density function (PDF), just take the derivative of that. It should give you a constant, non-zero value between $a$ and $b$ and 0 elsewhere. So the probability density function $f_x$ is
$$f_x = \begin{cases} 0 & x < a \\ \frac{1}{b - a} & a \leq x \leq b \\ 0 & x > b \end{cases}$$

> But it's not clear to me what the "weighted, previous distribution function of the output" is. How can I describe it with a probability density function?

I'll try to get you started.

Suppose we had a random variable that was uniformly random between 0 and 1. Let's call that variable $x$. Let's call its PDF $f_x$. Since the function $f_x$ is a probability density function, the area under the curve must be 1. So in this case, the amplitude of $f_x$ is 1 between $0 \leq x \leq 1$ and zero everywhere else.

Probability density function for our example $f_x$:
$$f_x = \begin{cases} 0 & x < 0 \\ 1 & 0 \leq x \leq 1 \\ 0 & x > 1 \end{cases}$$

So what if we were to multiply $x$ by $\frac{1}{10}$? What is the probability density function, $f_{x/10}$, of that? If you were to guess that it would be $\frac{1}{10} f_x$, that would seem deceptively simple. And it would also be wrong. The maximum possible value for $x$ is 1.
So the maximum possible value for $\frac{1}{10}x$ is $\frac{1}{10}$. And we also need to scale the amplitude by the inverse of our constant so that the area under the curve remains 1.

Probability density function for our example $f_{x/10}$:
$$f_{x/10} = \begin{cases} 0 & x < 0 \\ 10 & 0 \leq x \leq 0.1 \\ 0 & x > 0.1 \end{cases}$$

So when we multiply our random variable by a small fraction, the resulting probability density function gets squished up; it gets narrower. But the height of the function grows taller (keeping the area under the curve a constant 1).

---

This is the sort of thing you will have to do in your problem. The input signal gets multiplied by $1/9$. So you'll have to find a new probability density function of the input $x$ that is squished to 1/9th and 9 times taller. You'll then convolve that result with the PDF of the filter's previous output, scaled by $8/9$.

----

It might help to start from the beginning. $u_0 = x_0$, so $u_0$ has the same PDF as the input. To find the PDF of $u_1$ you'll need to convolve $f_{x/9}$ with $f_{8 u_0/9}$. Note that the PDF of $u_1$ is not uniform. It is certainly not Gaussian either. From here on out, you'll need to treat the PDFs of the outputs as having arbitrary shapes, since they're neither Gaussian nor uniform, but something sort of in between (sort of). Keep on going indefinitely with the pattern. And good luck!

Last edited:

• #6

collinsmark (Homework Helper, Gold Member)

Oh, and by the way, even though you are working with a discrete time filter, the PDFs involved are continuous functions (unless you are artificially imposing digital constraints such as those associated with a 32-bit floating point value or some such. But I don't think that is what you are interested in). The convolutions that I was talking about in my last post were continuous convolutions, not discrete.

So when you calculate the output stream by convolving the input stream with $h[n]$, that is discrete convolution. That's fine. That's what discrete time filters do. But that doesn't tell you anything about the probability distribution of the output.

When you are finding the resulting PDFs, calculating $f_{u[n]} = f_{8 u[n-1]/9} \ast f_{x/9}$, that is continuous convolution, to be done with integration.

Last edited:

• #7

Input distribution:
$$f_x = \begin{cases} 0, & x \lt -0.3 \\ \frac {1} {1.2}, & -0.3 \leq x \leq 0.9 \\ 0, & x \gt 0.9 \end{cases}$$
Input distribution scaled by $1/9$:
$$f_{x/9} = \begin{cases} 0, & x \lt -\frac {1} {30} \\ 7.5, & -\frac {1} {30} \leq x \leq 0.1 \\ 0, & x \gt 0.1 \end{cases}$$
Filter's previous output scaled by $8/9$:
$$f_{8u_{0}/9} = \begin{cases} 0, & x \lt -\frac {4} {135} \\ 8\frac {7} {16}, & -\frac {4} {135} \leq x \leq \frac {4} {45} \\ 0, & x \gt \frac {4} {45} \end{cases}$$
Then I need to calculate:
$$f_{u_{n}}(z) = \int_{-\infty}^{\infty} f_{8u_{n-1}/9}(x)\, f_{x/9}(z-x) \, dx$$
First iteration, n = 1:
$$f_{u_{1}}(z) = \int_{-\infty}^{\infty} f_{8u_{0}/9}(x)\, f_{x/9}(z-x) \, dx$$
But I have 1000 random numbers (input); does that mean that to calculate the resulting PDF I need to do n = 1000 iterations?
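The recursion can also be checked numerically by discretizing the densities on a grid (a sketch in Python/NumPy; the grid spacing, tolerance, and iteration cap are arbitrary choices of ours, and `np.convolve` stands in for the continuous convolution):

```python
import numpy as np

dx = 0.001
x = np.arange(-1.0, 1.0, dx)

def uniform_pdf(a, b):
    return np.where((x >= a) & (x <= b), 1.0 / (b - a), 0.0)

f_x9 = uniform_pdf(-0.3 / 9, 0.9 / 9)    # pdf of x/9
f_u  = uniform_pdf(-0.3, 0.9)            # pdf of u_0 = x_0

for n in range(1, 301):
    f_8u9  = (9 / 8) * np.interp(9 * x / 8, x, f_u)      # pdf of (8/9) u_{n-1}
    f_next = np.convolve(f_8u9, f_x9, mode="same") * dx  # pdf of u_n
    if np.max(np.abs(f_next - f_u)) < 1e-6:              # pdf stopped changing
        print(f"pdf essentially fixed after {n} iterations")
        break
    f_u = f_next
```

In such a simulation the density settles into its limiting shape long before n = 1000, so in practice one iterates until the pdf stops changing rather than once per input sample.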
• #8

collinsmark (Homework Helper, Gold Member)

> Input distribution: [...] But I have 1000 random numbers (input); does that mean that to calculate the resulting PDF I need to do n = 1000 iterations?

Good work so far! Before we go on though, I must ask: were you really tasked with finding the actual (mathematically exact) probability density function? In your original post, the question was asking for a reason why the filter's output's probability density function looked Gaussian (or at least Gaussian-like) when the input's probability density function was uniform.

I ask because I think you are very close to being able to answer 1. why the filter's output's probability density function goes from being uniform to one that looks sort-of Gaussian-like, and 2. why the filter's output's probability density function will never be actually Gaussian, even if it is sort of close. On the other hand, solving for the (mathematically exact) probability density function is a lot tougher. (Heck, I'm not even sure I'm up for that challenge.)

---

You might want to try going through a few iterations to see what happens to $f_{u_n}$ at each iteration. I think that will help show 1) above, where the function starts out as uniform, then starts to look more Gaussian (sort of) with each iteration. (More on the second answer later.)

The following might help with the second answer, and also if you actually did wish to solve for the actual function. If we wanted to scale a random variable by some amount, say by $\frac{8}{9}$, and we know the original probability density function, say we call it $f_{u_n}(x)$, then the probability density function of the scaled random variable is
$$f_{8u_n/9}(x) = \frac{9}{8}f_{u_n}(9x/8)$$
Make sure that makes sense to you before we move on. (And take note for later [see below] that if the original function is Gaussian, the density function of the scaled random variable is also Gaussian, albeit with a different mean and standard deviation.)

After an infinite number of iterations (as $n \rightarrow \infty$), the probability density function is not going to change anymore. In other words, $f_{8u_n/9}(x) = f_{8u_{(n-1)}/9}(x)$. And with that bit of insight, we can say
$$f_{u_n}(x) = \int_{-\infty}^\infty \frac{9}{8}f_{u_n}(9z/8)\, f_{x/9}(x-z)\,dz$$
And that's the integral equation that will ultimately give you your answer. Notice that there is only one function that is unknown, and that is $f_{u_n}()$. You already know what $f_{x/9}()$ is; you've calculated that above (I did some variable substitution, btw).

What that integral equation is asking is as follows: "Suppose I had an original function and then scaled it a little bit, in a known way, to make a new function, then I convolved that with this other, known function such that the result is equal to the original function.
What [original] function meets these requirements?"

I'm confident that the integral equation has a solution. But I'm not confident that I have the mathematical aptitude to solve for it. (Nor am I positive that it can be represented in closed form.) [Edit: I just came up with an idea or two on how this equation might be solved. But I haven't worked out the details yet.]

Even without solving for the actual function, you might be able to convince yourself why the solution is not Gaussian. Is it possible to convolve a Gaussian function with one and only one other non-Gaussian function* and produce a truly Gaussian function?

*[Edit: and not something trivial like an impulse either. I mean convolved with a uniform function, for example.]

Last edited:

• #9

> were you really tasked with finding the actual (mathematically exact) probability density function?

Actually, I think that I don't need the mathematically exact probability density function; that's why I didn't ask for it. Heh, we started to speak about probability density functions, and then I thought that I could solve for the actual function. But I think I won't be able to solve that continuous convolution integral. So I will try to explain the question in my homework without the mathematically exact output probability density function.

By the way, there is one thing I can't get:
$$f_{8u_{n}/9}(x)=\frac {9}{8}f_{u_{n}}(9x/8)$$
Why and how does the scale factor $8/9$ turn into $9/8$ on the right-hand side of the equation?

When I find out whether my explanation is OK or not, I will post it here.

• #10

collinsmark (Homework Helper, Gold Member)

> Why and how does the scale factor $8/9$ turn into $9/8$ on the right-hand side of the equation?

Multiplying the function's input by a number greater than 1 (i.e., the $\frac{9}{8}$ in $f(9 x /8)$) squishes the function along the x-axis. With this scaling factor, the output of the function when $x = \frac{8}{9}$ produces the same output as the original function when $x = 1$. Scaling the output of the function by the same factor ensures that the area under the curve remains 1.

You have already calculated the relevant functions, $f_x(x)$, $f_{x/9}(x)$, and $f_{8x/9}(x)$, back in post #7. As a sanity check, go back and plug some numbers into them and ensure that
$$f_{x/9}(x) = 9 f_x(9x)$$
and
$$f_{8x/9}(x) = \frac{9}{8} f_x(9x/8).$$
As an additional sanity check you should be able to confirm that the area under each curve is equal to 1.

[Edit: regarding my above notation, recall that $u_0 = x_0$, thus $f_{8u_0/9}(x) = f_{8x/9}(x)$. As far as the integral equation goes though, we don't know what $f_{u_n}(x)$ is as $n \rightarrow \infty$, yet. That is the unknown function for which the equation is trying to solve (and we haven't solved that yet). But we do know, whatever that function is, that $f_{8u_n/9}(x) = \frac{9}{8} f_{u_n}(9x/8)$.]

Last edited:

• #11

Now I understand that part. Thanks. I think I calculated $f_{8u_{0}/9}$ wrong in post #7. It should be $f_{x}$ scaled by $8/9$, because $u_{0}=x_{0}$. But I did something else there.

> You might want to try going through a few iterations to see what happens to $f_{u_n}$ at each iteration. I think that will help show 1) above, where the function starts out as uniform, then starts to look more Gaussian (sort of) with each iteration.
Then I tried to solve a few discrete convolution iterations in Matlab. Here are the results. Something looks incorrect there. Of course, I can see how, from two uniform distributions, the sum of the distributions starts to look like a Gaussian. But the values seem incorrect. The green line is a Gaussian pdf with the experimental mean and std from the output data histogram. I think (and actually you said it) that when n goes to infinity (or even, as in my case, n = 1000), my output pdf should be similar to that Gaussian pdf, but in my case the amplitude is about 1, which is not close to the Gaussian's max amplitude (about 4). I guess $f_{8u_{0}/9}$ is wrong. It's late now; I will check it one more time tomorrow.

And I think it will be OK to explain the output without the mathematically exact output probability density function. The graphic with the iterations shows very well how the convolution works and how the sum of distributions starts to look like a Gaussian. I just need to correct the values there, I guess.

• #12

I went through my calculations.

Input distribution:
$$f_x = \begin{cases} 0, & x \lt -0.3 \\ \frac {1} {1.2}, & -0.3 \leq x \leq 0.9 \\ 0, & x \gt 0.9 \end{cases}$$
Input distribution scaled by $1/9$:
$$f_{x/9} = \begin{cases} 0, & x \lt -\frac {1} {30} \\ 7.5, & -\frac {1} {30} \leq x \leq 0.1 \\ 0, & x \gt 0.1 \end{cases}$$
In the first iteration the filter's previous output scaled by $8/9$ is actually $f_{x}$ scaled by $8/9$:
$$f_{8u_{0}/9} = \begin{cases} 0, & x \lt -\frac {4} {15} \\ \frac {15} {16}, & -\frac {4} {15} \leq x \leq 0.8 \\ 0, & x \gt 0.8 \end{cases}$$

Matlab code:

Code:
clear all, clc
T = 0.001;                                   % grid spacing
t  = -0.267:T:0.801;
t2 = -0.267*2:T:0.801*2;
x1  = 7.5*(t>-1/30)  - 7.5*(t>0.1);          % f_{x/9} on grid t
x11 = 7.5*(t2>-1/30) - 7.5*(t2>0.1);         % f_{x/9} on grid t2
u0  = 0.9375*(t>-4/15) - 0.9375*(t>0.8);     % f_{8u0/9}
u1  = T*conv(u0,x1);                         % f_{u1}
figure(1)
subplot(211)
plot(t,x1,'red',t,u0,'blue')
hold on
plot((-0.267*2):T:(0.801*2),u1,'green')
grid on
xlim([-0.4 1])
legend('pdf x1','pdf u0','pdf u1=x1*u0')
u2 = T*conv(u1,x11);                         % next iteration
figure(2)
subplot(211)
plot(t2,x11,'red',t2,u1,'blue')
hold on
plot((-0.267*2*2):T:(0.801*2*2),u2,'green')
grid on
xlim([-0.4 1])
legend('pdf x1','pdf u1','pdf u2=x1*u1')

n = 1: $f_{u[1]} = f_{8 u[0]/9} \ast f_{x/9}$
n = 2: $f_{u[2]} = f_{8 u[1]/9} \ast f_{x/9}$

And now, writing this post, I realized where my mistake is. I didn't scale $f_{u[1]}$. I used $f_{u[1]}$ at n = 2, but I need $f_{8 u[1]/9}$. Ehhh..

Last edited:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8856070637702942, "perplexity": 680.9211086492586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496227.25/warc/CC-MAIN-20200329201741-20200329231741-00226.warc.gz"}
https://math.stackexchange.com/questions/1492704/is-a-positive-series-convergent-if-the-terms-decrease
# Is a positive series convergent if the terms decrease?

Suppose positive real numbers $n_1>n_2>n_3>n_4>\cdots$ are given and you form the sum $n_1+n_2+n_3+n_4+\cdots$. Is it possible to determine, on the basis of this information alone, whether or not the series will converge?

• No: $\sum\frac{1}{n}$ won't converge whereas $\sum\frac{1}{n^2}$ will. – Balloon Oct 22 '15 at 18:23
• If it won't converge, will it then necessarily diverge to infinity? – St.Clair Bij Oct 22 '15 at 18:25
• Yes, because you are adding positive terms: the case where the limit doesn't exist is excluded. – Balloon Oct 22 '15 at 18:26
• Thank you for your answers. I don't see how adding positive terms will necessarily lead to infinity, since $\sum \frac{1}{n^2}$ is also positive but does not lead to infinity. – St.Clair Bij Oct 22 '15 at 18:31
• I didn't say that; I just said there are two possibilities, convergence to a positive value or divergence (the case $\sum(-1)^n$, where the limit doesn't exist, cannot be encountered if you consider positive numbers). – Balloon Oct 22 '15 at 18:38

No, the series might converge or diverge. The two classic examples are the harmonic series, $\sum\limits_{n=1}^\infty {\frac{1}{n}}$, which diverges, and the series $\sum\limits_{n=1}^\infty {\frac{1}{n^2}}$, which converges to $\pi^2/6$.
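To see why the hypotheses allow both behaviors: since the terms are positive, the partial sums increase monotonically, so the series converges exactly when the partial sums are bounded. The standard dyadic-grouping and telescoping bounds exhibit both cases:

$$\sum_{n=1}^{2^K} \frac{1}{n} \;\ge\; 1 + \sum_{k=0}^{K-1} 2^k \cdot \frac{1}{2^{k+1}} \;=\; 1 + \frac{K}{2} \;\longrightarrow\; \infty,$$

$$\sum_{n=1}^{\infty} \frac{1}{n^2} \;\le\; 1 + \sum_{n=2}^{\infty} \frac{1}{n(n-1)} \;=\; 1 + \sum_{n=2}^{\infty}\left(\frac{1}{n-1}-\frac{1}{n}\right) \;=\; 2.$$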
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.850250780582428, "perplexity": 337.2854218372992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038085599.55/warc/CC-MAIN-20210415125840-20210415155840-00352.warc.gz"}
https://www.physicsforums.com/threads/parallel-axis-theorem.103160/
# Parallel Axis Theorem

1. Dec 7, 2005

### amcavoy

I know what the parallel axis theorem is, but I'm a little confused about when to use it. I recently had a problem where a hoop was rolling down an incline; I used the parallel axis theorem to find the translational acceleration and got it correct. However, I had a problem about a spool being pulled by a string (think of a yo-yo being pulled on the ground), and when I set up the equations I got the wrong answer using the P.A.T. For instance, I had:

$$F-F_{S}=ma$$

$$RF_{S}-rF=I_{CM}\alpha$$

Why is it $I_{CM}$ here instead of $I_{P}$? I have solved the problem already and know the answer, I just can't see why the parallel axis theorem is not used.

LINK: http://show.imagehosting.us/show/971155/0/nouser_971/T0_-1_971155.jpeg

2. Dec 7, 2005

### Staff: Mentor

It just depends on what you take as your axis of rotation. If you take the center of mass, then I and torques will be about that point. (And you'll have no need for the parallel axis theorem.) But you are certainly free to use the point of contact with the floor as your instantaneous axis of rotation. But if you do, be sure to take torques about that point as well. In this case you'll need to use the parallel axis theorem to find the rotational inertia about that point. Done correctly, you'll get the same answer either way.
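The rolling hoop makes the equivalence concrete (a standard worked example; $\theta$ is the incline angle, $M$ and $R$ the hoop's mass and radius, and $I_{CM}=MR^2$ for a hoop). Taking torques about the center of mass, with friction force $f$:

$$Mg\sin\theta - f = Ma, \qquad fR = I_{CM}\,\alpha = MR^2\frac{a}{R} \;\Rightarrow\; f = Ma,$$

so $a = \frac{1}{2}g\sin\theta$. Taking torques about the contact point instead, the parallel axis theorem gives $I_P = I_{CM} + MR^2 = 2MR^2$, and

$$MgR\sin\theta = I_P\,\alpha = 2MR^2\frac{a}{R} \;\Rightarrow\; a = \frac{1}{2}g\sin\theta,$$

the same answer either way.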
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8973529934883118, "perplexity": 199.47906342931117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00584-ip-10-171-10-108.ec2.internal.warc.gz"}
http://theinfolist.com/php/SummaryGet.php?FindGo=Well-founded_set
In mathematics, a binary relation R is called well-founded (or wellfounded) on a class X if every non-empty subset S ⊆ X has a minimal element with respect to R, that is, an element m not related by sRm (for instance, "s is not smaller than m") for any s ∈ S. In other words, a relation is well founded if

$$(\forall S \subseteq X)\;\left[S \neq \emptyset \implies (\exists m \in S)\, (\forall s \in S)\, \lnot(sRm)\right]$$

Some authors include an extra condition that R is set-like, i.e., that the elements less than any given element form a set.

Equivalently, assuming the axiom of dependent choice, a relation is well-founded if it contains no countably infinite descending chains: that is, there is no infinite sequence $x_0, x_1, x_2, \ldots$ of elements of X such that $x_{n+1} \mathrel{R} x_n$ for every natural number n.

In order theory, a partial order is called well-founded if the corresponding strict order is a well-founded relation. If the order is a total order then it is called a well-order. In set theory, a set x is called a well-founded set if the set membership relation is well-founded on the transitive closure of x. The axiom of regularity, which is one of the axioms of Zermelo–Fraenkel set theory, asserts that all sets are well-founded.

A relation R is converse well-founded, upwards well-founded or Noetherian on X, if the converse relation R⁻¹ is well-founded on X. In this case R is also said to satisfy the ascending chain condition. In the context of rewriting systems, a Noetherian relation is also called terminating.

Induction and recursion

An important reason that well-founded relations are interesting is that a version of transfinite induction can be used on them: if (X, R) is a well-founded relation, P(x) is some property of elements of X, and we want to show that P(x) holds for all elements x of X, it suffices to show that:

If x is an element of X and P(y) is true for all y such that y R x, then P(x) must also be true.

That is,

$$(\forall x \in X)\,\big[(\forall y \in X)\,(y \mathrel{R} x \implies P(y)) \implies P(x)\big] \implies (\forall x \in X)\,P(x)$$

Well-founded induction is sometimes called Noetherian induction, after Emmy Noether (Bourbaki, N. (1972). Elements of Mathematics. Commutative Algebra. Addison-Wesley).

On par with induction, well-founded relations also support construction of objects by transfinite recursion. Let (X, R) be a set-like well-founded relation and F a function that assigns an object F(x, g) to each pair of an element x ∈ X and a function g on the initial segment $\{y : y \mathrel{R} x\}$ of X. Then there is a unique function G such that for every x ∈ X,

$$G(x) = F\left(x,\, G\vert_{\{y\,:\, y \mathrel{R} x\}}\right)$$

That is, if we want to construct a function G on X, we may define G(x) using the values of G(y) for y R x.

As an example, consider the well-founded relation (N, S), where N is the set of all natural numbers, and S is the graph of the successor function x ↦ x+1. Then induction on S is the usual mathematical induction, and recursion on S gives primitive recursion. If we consider the order relation (N, <), we obtain complete induction and course-of-values recursion. The statement that (N, <) is well-founded is also known as the well-ordering principle.
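For a finite, set-like relation the recursion scheme can be written down directly (a Python sketch; representing the relation by a predecessor map, with names of our choosing):

```python
def wf_recursion(pred, F):
    """Return G satisfying G(x) = F(x, {y: G(y) for y R x})."""
    cache = {}
    def G(x):
        if x not in cache:
            # Well-foundedness guarantees termination: there is no
            # infinite descending chain of predecessors.
            cache[x] = F(x, {y: G(y) for y in pred(x)})
        return cache[x]
    return G

# Example: (N, S) with S the successor graph gives primitive recursion.
pred = lambda n: [n - 1] if n > 0 else []
fact = wf_recursion(pred, lambda n, g: 1 if n == 0 else n * g[n - 1])
print(fact(5))  # 120
```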
There are other interesting special cases of well-founded induction. When the well-founded relation is the usual ordering on the class of all ordinal numbers, the technique is called transfinite induction. When the well-founded set is a set of recursively-defined data structures, the technique is called structural induction. When the well-founded relation is set membership on the universal class, the technique is known as ∈-induction. See those articles for more details.

Examples

Well-founded relations which are not totally ordered include:

* the positive integers $\{1, 2, 3, \ldots\}$, with the order defined by $a < b$ if and only if $a$ divides $b$ and $a \neq b$.
* the set of all finite strings over a fixed alphabet, with the order defined by $s < t$ if and only if $s$ is a proper substring of $t$.
* the set $\mathbb N \times \mathbb N$ of pairs of natural numbers, ordered by $(n_1, n_2) < (m_1, m_2)$ if and only if $n_1 < m_1$ and $n_2 < m_2$.
* the set of all regular expressions over a fixed alphabet, with the order defined by $s < t$ if and only if $s$ is a proper subexpression of $t$.
* any class whose elements are sets, with the relation $\in$ ("is an element of"). This is the axiom of regularity.
* the nodes of any finite directed acyclic graph, with the relation $R$ defined such that $a \mathrel{R} b$ if and only if there is an edge from $a$ to $b$.

Examples of relations that are not well-founded include:

* the negative integers $\{-1, -2, -3, \ldots\}$, with the usual order, since any unbounded subset has no least element.
* the set of strings over a finite alphabet with more than one element, under the usual (lexicographic) order, since the sequence "B" > "AB" > "AAB" > "AAAB" > … is an infinite descending chain. This relation fails to be well-founded even though the entire set has a minimum element, namely the empty string.
* the rational numbers (or reals) under the standard ordering, since, for example, the set of positive rationals (or reals) lacks a minimum.

Other properties

If $(X, <)$ is a well-founded relation and $x$ is an element of $X$, then the descending chains starting at $x$ are all finite, but this does not mean that their lengths are necessarily bounded. Consider the following example: Let $X$ be the union of the positive integers and a new element ω, which is bigger than any integer. Then $X$ is a well-founded set, but there are descending chains starting at ω of arbitrarily great (finite) length; the chain ω, $n-1$, $n-2$, ..., 2, 1 has length $n$ for any $n$.

The Mostowski collapse lemma implies that set membership is universal among the extensional well-founded relations: for any set-like well-founded relation $R$ on a class $X$ which is extensional, there exists a class $C$ such that $(X, R)$ is isomorphic to $(C, \in)$.

Reflexivity

A relation $R$ is said to be reflexive if $a \mathrel{R} a$ holds for every $a$ in the domain of the relation. Every reflexive relation on a nonempty domain has infinite descending chains, because any constant sequence is a descending chain. For example, in the natural numbers with their usual order ≤, we have $1 \geq 1 \geq 1 \geq \cdots$. To avoid these trivial descending sequences, when working with a partial order ≤, it is common to apply the definition of well-foundedness (perhaps implicitly) to the alternate relation < defined such that $a < b$ if and only if $a \leq b$ and $a \neq b$. More generally, when working with a preorder ≤, it is common to use the relation < defined such that $a < b$ if and only if $a \leq b$ and $b \not\leq a$.
In the context of the natural numbers, this means that the relation <, which is well-founded, is used instead of the relation ≤, which is not. In some texts, the definition of a well-founded relation is changed from the definition above to include these conventions.

References

* Just, Winfried and Weese, Martin (1998). Discovering Modern Set Theory. I. American Mathematical Society.
* Hrbáček, Karel and Jech, Thomas (1999). Introduction to Set Theory, 3rd edition, "Well-founded relations", pages 251–255. Marcel Dekker. ISBN 0-8247-7915-0.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.981061577796936, "perplexity": 1076.5833513434984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039526421.82/warc/CC-MAIN-20210421065303-20210421095303-00067.warc.gz"}
https://socratic.org/questions/58ed71c9b72cff633c630180
# Question #30180

Apr 12, 2017

X = -3, Y = +6

#### Explanation:

This is basically a math problem. Given $X {H}_{3}$, we know the valence of X must be -3 if H = +1. Then $Y {X}_{2}$ means that the valence of Y is $- 2 \cdot X$; so $Y = + 6$. If the hydrogen ion is actually -1 instead of +1, these signs would be reversed.
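Spelled out, the reasoning is charge neutrality applied twice:

$$XH_3:\quad X + 3(+1) = 0 \;\Rightarrow\; X = -3$$

$$YX_2:\quad Y + 2(-3) = 0 \;\Rightarrow\; Y = +6$$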
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9078510403633118, "perplexity": 1382.873677547644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669546.24/warc/CC-MAIN-20191016190431-20191016213931-00110.warc.gz"}
http://www.thefullwiki.org/Deformation_(mechanics)
# Deformation (mechanics)

In continuum mechanics, deformation or strain is the change in the metric properties of a continuous body B in the displacement from an initial placement κ0(B) to a final placement κ(B). A change in the metric properties means that a curve drawn in the initial body placement changes its length when displaced to a curve in the final placement. If none of the curves changes length, it is said that a rigid body displacement occurred.

A strain field associated with a displacement is defined, at any point, by the change in length of the tangent vectors representing the speeds of arbitrarily parametrized curves passing through that point. A basic geometric result, due to Fréchet, von Neumann and Jordan, states that, if the lengths of the tangent vectors fulfill the axioms of a norm and the parallelogram law, then the length of a vector is the square root of the value of the quadratic form associated, by the polarization formula, with a positive definite bilinear map called the metric tensor.

Different equivalent choices may be made for the expression of a strain field depending on whether it is defined in the initial or in the final placement and on whether the metric tensor or its dual is considered.

In a continuous body, a deformation field results from a stress field induced by applied forces or from changes in the temperature field inside the body. The relation between stresses and induced strains is expressed by elastic constitutive equations, e.g., Hooke's law for linear elastic materials. Deformations which are recovered after the stress field has been removed are called elastic deformations. In this case, the continuum completely recovers its original configuration. On the other hand, irreversible deformations, which remain even after stresses have been removed, are called plastic deformations. Such deformations occur in material bodies after stresses have attained a certain threshold value known as the elastic limit or yield stress, and are the result of slip, or dislocation, mechanisms at the atomic level. Deformation is measured in units of length.

## Strain

Strain is the geometrical measure of deformation representing the relative displacement between particles in the material body. It measures how much a given displacement differs locally from a rigid-body displacement (Jacob Lubliner). Strain defines the amount of stretch or compression along material line elements or fibers, the normal strain, and the amount of distortion associated with the sliding of plane layers over each other, the shear strain, within a deforming body (David Rees). Strain is a dimensionless quantity, which can be expressed as a decimal fraction, a percentage or in parts-per notation. It can manifest as elongation, shortening, volume changes, or angular distortion.[1]

The state of strain at a material point of a continuum body is defined as the totality of all the changes in length of material lines or fibers, the normal strain, which pass through that point, and also the totality of all the changes in the angle between pairs of lines initially perpendicular to each other, the shear strain, radiating from this point.
However, it is sufficient to know the normal and shear components of strain on a set of three mutually perpendicular directions. If there is an increase in length of the material line, the normal strain is called tensile strain; otherwise, if there is reduction or compression in the length of the material line, it is called compressive strain.

### Strain measures

Depending on the amount of strain, or local deformation, the analysis of deformation is subdivided into three deformation theories:

1. Finite strain theory, also called large strain theory or large deformation theory, deals with deformations in which both rotations and strains are arbitrarily large. In this case, the undeformed and deformed configurations of the continuum are significantly different and a clear distinction has to be made between them. This is commonly the case with elastomers, plastically-deforming materials and other fluids and biological soft tissue.
2. Infinitesimal strain theory, also called small strain theory, small deformation theory, small displacement theory, or small displacement-gradient theory, where strains and rotations are both small. In this case, the undeformed and deformed configurations of the body can be assumed identical. The infinitesimal strain theory is used in the analysis of deformations of materials exhibiting elastic behavior, such as materials found in mechanical and civil engineering applications, e.g. concrete and steel.
3. Large-displacement or large-rotation theory, which assumes small strains but large rotations and displacements.

In each of these theories the strain is then defined differently. The engineering strain is the most common definition applied to materials used in mechanical and structural engineering, which are subjected to very small deformations. On the other hand, for some materials subjected to large deformations, e.g. elastomers and polymers, the engineering definition of strain is not applicable, e.g. for typical engineering strains greater than 1% (David Rees, page 41); thus other, more complex definitions of strain are required, such as stretch, logarithmic strain, Green strain, and Almansi strain.

The Cauchy strain or engineering strain is expressed as the ratio of total deformation to the initial dimension of the material body on which the forces act. The engineering normal strain or engineering extensional strain e of a material line element or fiber axially loaded is expressed as the change in length ΔL per unit of the original length L of the line element or fiber. The normal strain is positive if the material fibers are stretched and negative if they are compressed. Thus, we have

$$e=\frac{\Delta L}{L}=\frac{\ell -L}{L}$$

where ℓ is the final length of the fiber. The engineering shear strain is defined as the change in the angle between two material line elements initially perpendicular to each other in the undeformed or initial configuration.

The stretch ratio or extension ratio is a measure of the extensional or normal strain of a differential line element, which can be defined at either the undeformed configuration or the deformed configuration. It is defined as the ratio between the final length ℓ and the initial length L of the material line.

$$\lambda=\frac{\ell}{L}$$

The extension ratio is related to the engineering strain by

$$e=\frac{\ell-L}{L}=\lambda-1$$

This equation implies that the normal strain is zero, so that there is no deformation, when the stretch is equal to unity.
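The engineering strain and stretch ratio, together with the logarithmic, Green, and Almansi measures defined next, are easy to compare numerically (a small Python sketch; the sample lengths are arbitrary):

```python
import math

L, l = 2.0, 2.5                      # initial and final fiber lengths

e       = (l - L) / L                # engineering (Cauchy) strain
lam     = l / L                      # stretch ratio, lambda = 1 + e
eps_log = math.log(lam)              # logarithmic (true/Hencky) strain
eps_G   = 0.5 * (lam**2 - 1)         # Green strain
eps_E   = 0.5 * (1 - 1 / lam**2)     # Euler-Almansi strain

# At e = 0.25 the measures already differ noticeably:
# e = 0.25, lambda = 1.25, eps_log ≈ 0.2231, eps_G = 0.28125, eps_E = 0.18
print(e, lam, eps_log, eps_G, eps_E)
```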
The stretch ratio is used in the analysis of materials that exhibit large deformations, such as elastomers, which can sustain stretch ratios of 3 or 4 before they fail. On the other hand, traditional engineering materials, such as concrete or steel, fail at much lower stretch ratios.

The logarithmic strain ε is also called natural strain, true strain or Hencky strain. Considering an incremental strain (Ludwik)

$$\delta \varepsilon=\frac{\delta \ell}{\ell}$$

the logarithmic strain is obtained by integrating this incremental strain:

$$\varepsilon=\int_{L}^{\ell}\frac{\delta \ell}{\ell}=\ln\left(\frac{\ell}{L}\right)=\ln \lambda=\ln(1+e)=e-\frac{e^2}{2}+\frac{e^3}{3}-\cdots$$

where e is the engineering strain. The logarithmic strain provides the correct measure of the final strain when deformation takes place in a series of increments, taking into account the influence of the strain path (David Rees).

The Green strain is defined as

$$\varepsilon_G=\frac{1}{2}\left(\frac{\ell^2-L^2}{L^2}\right)=\frac{1}{2}(\lambda^2-1)$$

The Green strain is addressed in more detail in the article on finite strain theory.

The Euler-Almansi strain is defined as

$$\varepsilon_E=\frac{1}{2}\left(\frac{\ell^2-L^2}{\ell^2}\right)=\frac{1}{2}\left(1-\frac{1}{\lambda^2}\right)$$

The Euler-Almansi strain is addressed in more detail in the finite strain theory.

## Description of deformation

It is convenient to identify a reference configuration or initial geometric state of the continuum body, which all subsequent configurations are referenced from. The reference configuration need not be one the body will ever actually occupy. Often, the configuration at t = 0 is considered the reference configuration, κ0(B). The configuration at the current time t is the current configuration. For deformation analysis, the reference configuration is identified as the undeformed configuration, and the current configuration as the deformed configuration. Additionally, time is not considered when analyzing deformation, so the sequence of configurations between the undeformed and deformed configurations is of no interest.

The components Xi of the position vector X of a particle in the reference configuration, taken with respect to the reference coordinate system, are called the material or reference coordinates. On the other hand, the components xi of the position vector x of a particle in the deformed configuration, taken with respect to the spatial coordinate system of reference, are called the spatial coordinates.

There are two methods for analysing the deformation of a continuum. One description is made in terms of the material or referential coordinates, called the material description or Lagrangian description. A second description of deformation is made in terms of the spatial coordinates; it is called the spatial description or Eulerian description.

There is continuity during deformation of a continuum body in the sense that:

• The material points forming a closed curve at any instant will always form a closed curve at any subsequent time.
• The material points forming a closed surface at any instant will always form a closed surface at any subsequent time and the matter within the closed surface will always remain within.

## Displacement

Figure 1. Motion of a continuum body.

A change in the configuration of a continuum body results in a displacement. The displacement of a body has two components: a rigid-body displacement and a deformation.
A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration $\kappa_0(\mathcal B)$ to a current or deformed configuration $\kappa_t(\mathcal B)$ (Figure 1). If after a displacement of the continuum there is a relative displacement between particles, a deformation has occurred. On the other hand, if after displacement of the continuum the relative displacement between particles in the current configuration is zero, then there is no deformation and a rigid-body displacement is said to have occurred.

The vector joining the positions of a particle P in the undeformed configuration and deformed configuration is called the displacement vector, written $\mathbf u(\mathbf X, t) = u_i\mathbf e_i$ in the Lagrangian description, or $\mathbf U(\mathbf x, t) = U_J\mathbf E_J$ in the Eulerian description.

A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. It is convenient to do the analysis of deformation or motion of a continuum body in terms of the displacement field. In general, the displacement field is expressed in terms of the material coordinates as

$$\mathbf u(\mathbf X,t) = \mathbf b(\mathbf X,t)+\mathbf x(\mathbf X,t) - \mathbf X \qquad \text{or}\qquad u_i = \alpha_{iJ}b_J + x_i - \alpha_{iJ}X_J$$

or in terms of the spatial coordinates as

$$\mathbf U(\mathbf x,t) = \mathbf b(\mathbf x,t)+\mathbf x - \mathbf X(\mathbf x,t) \qquad \text{or}\qquad U_J = b_J + \alpha_{Ji}x_i - X_J$$

where $\alpha_{Ji}$ are the direction cosines between the material and spatial coordinate systems with unit vectors $\mathbf E_J$ and $\mathbf e_i$, respectively. Thus

$$\mathbf E_J \cdot \mathbf e_i = \alpha_{Ji}=\alpha_{iJ}$$

and the relationship between $u_i$ and $U_J$ is then given by

$$u_i=\alpha_{iJ}U_J \qquad \text{or} \qquad U_J=\alpha_{Ji}u_i$$

Knowing that

$$\mathbf e_i = \alpha_{iJ}\mathbf E_J$$

then

$$\mathbf u(\mathbf X,t)=u_i\mathbf e_i=u_i(\alpha_{iJ}\mathbf E_J)=U_J\mathbf E_J=\mathbf U(\mathbf x,t)$$

It is common to superimpose the coordinate systems for the undeformed and deformed configurations, which results in $\mathbf b = 0$, and the direction cosines become Kronecker deltas:

$$\mathbf E_J \cdot \mathbf e_i = \delta_{Ji}=\delta_{iJ}$$

Thus, we have

$$\mathbf u(\mathbf X,t) = \mathbf x(\mathbf X,t) - \mathbf X \qquad \text{or}\qquad u_i = x_i - \delta_{iJ}X_J = x_i - X_i$$

or in terms of the spatial coordinates as

$$\mathbf U(\mathbf x,t) = \mathbf x - \mathbf X(\mathbf x,t) \qquad \text{or}\qquad U_J = \delta_{Ji}x_i - X_J = x_J - X_J$$

The partial differentiation of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor $\mathbf u\nabla_{\mathbf X}$. Thus we have:

$$\begin{align} \mathbf u(\mathbf X,t) &= \mathbf x(\mathbf X,t) - \mathbf X \\ \mathbf u\nabla_{\mathbf X} &= \mathbf x\nabla_{\mathbf X} - \mathbf I \\ \mathbf u\nabla_{\mathbf X} &= \mathbf F - \mathbf I \end{align} \qquad \text{or} \qquad \begin{align} u_i &= x_i-\delta_{iJ}X_J = x_i - X_i \\ \frac{\partial u_i}{\partial X_K} &= \frac{\partial x_i}{\partial X_K}-\delta_{iK} \end{align}$$

where $\mathbf F$ is the deformation gradient tensor.
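These identities can be sanity-checked numerically for a simple homogeneous deformation (a Python/NumPy sketch; the particular mapping $\mathbf x = \mathbf F\mathbf X$ is an arbitrary example of ours):

```python
import numpy as np

F = np.array([[1.1, 0.2],
              [0.0, 0.9]])           # an arbitrary 2-D deformation gradient

def u(X):                            # displacement field for x = F @ X
    return F @ X - X

# Differentiate u numerically with respect to the material coordinates:
h = 1e-6
X0 = np.array([0.5, 2.0])
grad_u = np.column_stack([(u(X0 + h * e) - u(X0 - h * e)) / (2 * h)
                          for e in np.eye(2)])

print(np.allclose(grad_u, F - np.eye(2)))   # u∇_X = F − I  →  True
```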
Thus we have

\begin{align}
\mathbf U(\mathbf x,t) &= \mathbf x - \mathbf X(\mathbf x,t) \\
\mathbf U\nabla_{\mathbf x} &= \mathbf I - \mathbf X\nabla_{\mathbf x} \\
\mathbf U\nabla_{\mathbf x} &= \mathbf I - \mathbf F^{-1}
\end{align}
\qquad \text{or} \qquad
\begin{align}
U_J &= \delta_{Ji}x_i-X_J = x_J - X_J \\
\frac{\partial U_J}{\partial x_k} &= \delta_{Jk}-\frac{\partial X_J}{\partial x_k}
\end{align}
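As a quick numerical check of the two gradient identities (a sketch I am adding, not from the article; the simple-shear example is my own choice):

```python
import numpy as np

# Homogeneous simple shear: x1 = X1 + g*X2, x2 = X2, x3 = X3.
# The deformation gradient F = dx/dX is constant, so both displacement
# gradients can be evaluated directly and compared against F.
g = 0.5
F = np.array([[1.0, g, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
I = np.eye(3)

grad_u_material = F - I                  # material displacement gradient, F - I
grad_U_spatial = I - np.linalg.inv(F)    # spatial displacement gradient, I - F^(-1)

print(grad_u_material)
print(grad_U_spatial)
```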
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 23, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9712544679641724, "perplexity": 773.9176850148051}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648594.80/warc/CC-MAIN-20180323200519-20180323220519-00002.warc.gz"}
https://science.sciencemag.org/content/251/4997/1054
Reports

# Reduction of Deepwater Formation in the Greenland Sea During the 1980s: Evidence from Tracer Data

Science 01 Mar 1991: Vol. 251, Issue 4997, pp. 1054-1056 DOI: 10.1126/science.251.4997.1054

## Abstract

Hydrographic observations and measurements of the concentrations of chlorofluorocarbons (CFCs) have suggested that the formation of Greenland Sea Deep Water (GSDW) slowed down considerably during the 1980s. Such a decrease is related to weakened convection in the Greenland Sea and thus could have significant impact on the properties of the waters flowing over the Scotland-Iceland-Greenland ridge system into the deep Atlantic. Study of the variability of GSDW formation is relevant for understanding the impact of the circulation in the European Polar seas on regional and global deep water characteristics. New long-term multitracer observations from the Greenland Sea show that GSDW formation indeed was greatly reduced during the 1980s. A box model of deepwater formation and exchange in the European Polar seas tuned by the tracer data indicates that the reduction rate of GSDW formation was about 80 percent and that the start date of the reduction was between 1978 and 1982.
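The abstract refers to a box model of deepwater formation and exchange tuned by tracer data. Purely to illustrate the general idea, and emphatically not the authors' model, here is a toy one-box tracer balance; the box structure, rates, and every number below are hypothetical placeholders:

```python
import numpy as np

# Toy one-box balance: dC_deep/dt = (Q/V) * (C_surf - C_deep), stepped
# yearly. Cutting the ventilation rate Q mimics reduced deepwater
# formation; the deep tracer concentration then lags the surface signal.
V = 1.0e15                                   # box volume (m^3), made up
years = np.arange(1960, 1991)
c_surf = np.linspace(0.0, 1.0, len(years))   # rising surface CFC signal, normalized

def integrate(q_of_year):
    c_deep, out = 0.0, []
    for t, cs in zip(years, c_surf):
        c_deep += (q_of_year(t) / V) * (cs - c_deep)   # one 1-year step
        out.append(c_deep)
    return out

full = integrate(lambda t: 5.0e13)                          # constant ventilation
cut = integrate(lambda t: 5.0e13 if t < 1980 else 1.0e13)   # ~80% cut after 1980
print(full[-1], cut[-1])   # the weakly ventilated box ends up less tagged
```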
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9352463483810425, "perplexity": 3061.6375521882082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528141.87/warc/CC-MAIN-20190722154408-20190722180408-00483.warc.gz"}
https://www.physicsforums.com/threads/real-analysis-convergence-l-u-b.184096/
# Real analysis- Convergence/l.u.b

1. Sep 11, 2007

### Scousergirl

I'm having a little difficulty understanding Epsilon in the definition of convergence. From what the book says it is any small real number greater than zero (as small as you can imagine???). Also, since I don't quite grasp what this epsilon is and how it helps define convergence, I am having difficulty applying it to the following problem:

Let b = least upper bound of a set S (S is a subset of the real numbers) that is bounded and non-empty. Then, given epsilon greater than 0, there exists an s in S such that (b-epsilon) <= s <= b.

I started by proving that there exists an s in S, but I cannot figure out how to relate this all to epsilon. What is confusing me, I guess, is the actual definition of S. Can the set {1*, b} satisfy the requirements of S (it is a subset of the real numbers, bounded above by b and non-empty)? But then how do we show that the statement is true for s = 1*? Also, b doesn't have to be part of S, right?

Last edited: Sep 11, 2007

2. Sep 11, 2007

### Dick

A bounded set has many upper bounds. The least upper bound b is the one that is low enough that if you pick any number less than b (b-Epsilon), no matter how small the Epsilon, then that number is not an upper bound. And yes, b doesn't have to be in S. After that, you confuse me. What's 1*?

3. Sep 11, 2007

### Scousergirl

Ok, I think I understand the least upper bound part of the question, I just don't know how to go about proving that ANY subset of the real numbers (S) that is non-empty and bounded above by b must contain an element s such that (b-epsilon) <= s <= b. 1* is a Dedekind cut (the real number 1 basically). Doesn't a set, say S = {1, b}, satisfy these conditions? Yet if I choose a small epsilon, 1 is not necessarily greater than or equal to (b-epsilon).

4. Sep 11, 2007

### Dick

Mmm. That's the DEFINITION of least upper bound. I.e. it's the definition of 'b'. You don't have to prove there is such an s. If b is the least upper bound of S there is such an s for every epsilon. Otherwise it's not a least upper bound. A Dedekind cut is two subsets of the rationals with special properties. Neither subset is even bounded. The least upper bound of S = {1, b} is b if b > 1 and 1 if b < 1. I think this concept is much less complicated than you think it is.

5. Sep 11, 2007

### Scousergirl

hmmm... I think maybe I am confusing multiple concepts here. The question that I am having difficulty with is how to prove that such an s exists for all subsets of the real numbers: (b-epsilon) <= s <= b. I tried tackling it by contradiction: Assume there does not exist an s in S such that (b-epsilon) <= s. Thus for all s in S, s < (b-epsilon). Is this a contradiction to the fact that b is the least upper bound of S?

6. Sep 11, 2007

### Dick

Such an s does not exist for all subsets of the real numbers. Some aren't bounded. So there is no b. On the other hand, for this: "Assume there does not exist an s in S such that (b-epsilon) <= s. Thus for all s in S, s < (b-epsilon). Is this a contradiction to the fact that b is the least upper bound of S?" Yes, that contradicts b being the least upper bound. But this is getting really confusing for me as well as for you, it's getting existential. Try dealing with well-defined subsets of the reals and figuring out what the least upper bound is and what it means. It sounds much more confusing in the abstract than what it really is. Trust me.
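For reference, the characterization of the least upper bound that the thread circles around can be written out in a few lines. This worked proof is an addition for clarity, not part of the original exchange:

```latex
\textbf{Claim.} Let $S \subseteq \mathbb{R}$ be non-empty and bounded above,
and let $b = \sup S$. Then for every $\varepsilon > 0$ there exists $s \in S$
with $b - \varepsilon < s \le b$.

\textbf{Proof.} Suppose not: there is some $\varepsilon > 0$ such that no
$s \in S$ satisfies $s > b - \varepsilon$. Then $s \le b - \varepsilon$ for
all $s \in S$, so $b - \varepsilon$ is an upper bound of $S$ that is strictly
smaller than $b$, contradicting the fact that $b$ is the \emph{least} upper
bound. $\blacksquare$
```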
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9730895161628723, "perplexity": 308.515838539825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818696681.94/warc/CC-MAIN-20170926193955-20170926213955-00524.warc.gz"}
https://deepai.org/publication/a-linear-time-n-0-4-approximation-for-longest-common-subsequence
A Linear-Time n^0.4-Approximation for Longest Common Subsequence

We consider the classic problem of computing the Longest Common Subsequence (LCS) of two strings of length n. While a simple quadratic algorithm has been known for the problem for more than 40 years, no faster algorithm has been found despite an extensive effort. The lack of progress on the problem has recently been explained by Abboud, Backurs, and Vassilevska Williams [FOCS'15] and Bringmann and Künnemann [FOCS'15] who proved that there is no subquadratic algorithm unless the Strong Exponential Time Hypothesis fails. This has led the community to look for subquadratic approximation algorithms for the problem. Yet, unlike the edit distance problem for which a constant-factor approximation in almost-linear time is known, very little progress has been made on LCS, making it a notoriously difficult problem also in the realm of approximation. For the general setting, only a naive O(n^ε/2)-approximation algorithm with running time Õ(n^2-ε) has been known, for any constant 0 < ε ≤ 1. Recently, a breakthrough result by Hajiaghayi, Seddighin, Seddighin, and Sun [SODA'19] provided a linear-time algorithm that yields a O(n^0.497956)-approximation in expectation, improving upon the naive O(√(n))-approximation for the first time. In this paper, we provide an algorithm that in time O(n^2-ε) computes an Õ(n^2ε/5)-approximation with high probability, for any 0 < ε ≤ 1. Our result (1) gives an Õ(n^0.4)-approximation in linear time, improving upon the bound of Hajiaghayi, Seddighin, Seddighin, and Sun, (2) provides an algorithm whose approximation scales with any subquadratic running time O(n^2-ε), improving upon the naive bound of O(n^ε/2) for any ε, and (3) instead of only in expectation, succeeds with high probability.

1 Introduction

The longest common subsequence (LCS) of two strings and is the longest string that appears as a subsequence of both strings. The length of the LCS of and , which we denote by , is one of the most fundamental measures of similarity between two strings and has drawn significant interest in the last five decades, see, e.g. [35, 6, 26, 27, 30, 32, 10, 31, 11, 36, 22, 16, 28, 2, 19, 4, 20, 3, 33, 34, 24].
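The "simple quadratic algorithm" the abstract refers to is the textbook dynamic program. A standard sketch follows (my own code, not taken from the paper):

```python
def lcs_length(x: str, y: str) -> int:
    """Classic O(|x|*|y|) dynamic program for the LCS length."""
    m = len(y)
    prev = [0] * (m + 1)          # DP row for the previous prefix of x
    for i in range(1, len(x) + 1):
        cur = [0] * (m + 1)       # DP row for x[:i]
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                cur[j] = prev[j - 1] + 1
            else:
                cur[j] = max(prev[j], cur[j - 1])
        prev = cur
    return prev[m]

assert lcs_length("ABCBDAB", "BDCABA") == 4   # classic textbook example
```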
On strings of length , the LCS problem can be solved exactly in quadratic time using a classical dynamic programming approach [35]. Despite an extensive line of research, the quadratic running time has been improved only by logarithmic factors [30]. This lack of progress is explained by a recent result showing that any truly subquadratic algorithm for LCS would falsify the Strong Exponential Time Hypothesis (SETH); this has been proven independently by Abboud et al. [2] and by Bringmann and Künnemann [19]. Further work in this direction shows that even a high polylogarithmic speedup for LCS would have surprising consequences [4, 3]. For the closely related edit distance the situation is similar, as the classic quadratic running time can be improved by logarithmic factors, but any truly subquadratic algorithm would falsify SETH [12].

These strong hardness results naturally bring up the question whether LCS or edit distance can be efficiently approximated (namely, whether an algorithm with truly subquadratic time for any constant can produce a good approximation in the worst case). In the last two decades, significant progress has been made towards designing efficient approximation algorithms for edit distance [14, 13, 15, 9, 7, 21, 23, 29, 17]; the latest achievement is a constant-factor approximation in almost-linear time [8]. (By almost-linear we mean time for a constant that can be chosen arbitrarily small.)

For LCS the picture is much more frustrating. The LCS problem has a simple -approximation algorithm with running time for any constant , and it has a trivial -approximation algorithm with running time for strings over alphabet . Yet, improving upon these naive bounds has evaded the community until very recently, making LCS a notoriously hard problem to approximate. In 2019, Rubinstein et al. [33] presented a subquadratic-time -approximation, where is the ratio of the string length to the length of the optimal LCS. For binary alphabet, Rubinstein and Song [34] recently improved the -approximation. In the general case (where and the alphabet size are arbitrary), the naive -approximation in near-linear time was recently beaten by Hajiaghayi et al. [24], who designed a linear-time algorithm that computes an -approximation in expectation. (By near-linear we mean time , where hides polylogarithmic factors in . While the SODA proceedings version of [24] claimed a high probability bound, the newer corrected arXiv version [25] only claims that the algorithm outputs an -approximation in expectation; personal communication with the authors confirms that the result indeed holds only in expectation, see also Remark 14.) Nonetheless, the gap between the upper bound provided by Hajiaghayi et al. [24] and the recent results on hardness of approximation [1, 5] remains huge.

1.1 Our Contribution

We present a randomized -approximation for LCS running in linear time , where the approximation guarantee holds with high probability. (We say that an event happens with high probability (w.h.p.) if it has probability at least , where the constant can be chosen in advance.) More generally, we obtain a tradeoff between approximation guarantee and running time: For any we achieve approximation ratio in time . Formally we prove the following:

Theorem 1. There is a randomized algorithm that, given strings of length and a time budget , with high probability computes a multiplicative -approximation of the length of the LCS of and in time .

The improvement over the state of the art can be summarized as follows:
1. An improved approximation ratio for the linear time regime: from [24] to ;
2. The first algorithm which improves upon the naive bound with high probability;
3. A generalization to running time , breaking the naive approximation ratio in general.

2 Technical Overview

We combine classic exact algorithms for LCS with different subsampling strategies to develop several algorithms that work in different regimes of the problem. A combination of these algorithms then yields the full approximation algorithm.

Our Algorithm 1 covers the regime of short LCS, i.e., when the LCS has length at most for an appropriate constant depending on the running time budget. In this regime, we decrease the length of the string by subsampling. This naturally allows us to run classic exact algorithms for LCS on the subsampled string (which now has significantly smaller size) and the original string , while not deteriorating the LCS between the two strings too much.

For the remaining parts of the algorithm, the strings and are split into substrings and of length where denotes the total running time budget. For any block we write for the length of the LCS of and . We call a set with and a block sequence. Since we can assume the LCS of and to be long, it follows that there exists a good "block-aligned LCS", more precisely there exists a block sequence with large LCS sum . Now, a natural approach is to compute estimates for all blocks and to determine the maximum sum over all block sequences . Once we have estimates , the maximum can be computed by dynamic programming in time , which is for our choice of . In the following we describe three different strategies to compute estimates . The major difficulty is that on average per block we can only afford time to compute an estimate .

The first strategy focuses on matching pairs. A matching pair of strings is a pair of indices such that . We write for the number of matching pairs of the strings and . Our Algorithm 2 works well if some block sequence has a large total number of matching pairs . Here the key observation (Lemma 7) is that for each block there exists a symbol that occurs at least times in both and . If is large, matching this symbol provides a good approximation for . Unfortunately, since we can afford only running time per block, finding a frequent symbol is difficult. We develop as a new tool an algorithm that w.h.p. finds a frequent symbol in each block with an above-average number of matching pairs, see Lemma 8.

For our remaining two strategies we can assume the optimal LCS to be large and to be small (i.e., every block sequence has a small total number of matching pairs). In our Algorithm 3, we analyze the case where is large. Here we pick some diagonal and run our basic approximation algorithm on each block along the diagonal. Since there are diagonals, an above-average diagonal has a total LCS of . If is large then this provides a good estimation of the LCS. The main difficulty is how to find an above-average diagonal. A random diagonal has a good LCS sum in expectation, but not necessarily with good probability. Our solution is a non-uniform sampling, where we first test random blocks until we find a block with large LCS, and then choose the diagonal containing this seed block. This sampling yields an above-average diagonal with good probability.
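The dynamic program over block estimates mentioned above (maximizing the total estimate over block sequences) is a standard exercise; here is a compact sketch with my own naming, not the paper's code:

```python
def best_block_sequence(T):
    """Maximum sum of T[i][j] over block sequences, i.e. index pairs
    (i_1, j_1), ..., (i_k, j_k) strictly increasing in both coordinates.
    Runs in O(k^2) for a k x k grid of non-negative estimates."""
    k = len(T)
    # pref[i][j] = best sum using only blocks (i', j') with i' <= i, j' <= j
    # (1-indexed; row/column 0 represents the empty sequence, value 0).
    pref = [[0] * (k + 1) for _ in range(k + 1)]
    for i in range(1, k + 1):
        for j in range(1, k + 1):
            end_here = T[i - 1][j - 1] + pref[i - 1][j - 1]
            pref[i][j] = max(end_here, pref[i - 1][j], pref[i][j - 1])
    return pref[k][k]

assert best_block_sequence([[1, 0], [0, 2]]) == 3   # chain both blocks
assert best_block_sequence([[0, 5], [4, 0]]) == 5   # anti-diagonal blocks cannot chain
```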
Recall that there always exists a block sequence with large LCS sum (see Lemma 11). The idea of our Algorithm 4 is to focus on a uniformly random subset of all blocks, where each block is picked with probability . Then on each picked block we can spend more time (specifically time ) to compute an estimate . Moreover, we still find a -fraction of . We analyze this algorithm in terms of and (the choice of depends on these two parameters) and show that it works well in the complementary regimes of Algorithms 1-3.

Comparison with the Previous Approach of Hajiaghayi et al. [24]

The general approach of splitting and into blocks and performing dynamic programming over estimates was introduced by Hajiaghayi et al. [24]. Moreover, our Algorithm 1 has essentially the same guarantees as [24, Algorithm 1], but ours is a simple combination of generic parts that we reuse in our later algorithms, thus simplifying the overall algorithm. Our Algorithm 2 follows the same idea as [24, Algorithm 3], in that we want to find a frequent symbol in and and match only this symbol to obtain an estimate . Hajiaghayi et al. find a frequent symbol by picking a random symbol in each block ; in expectation appears at least times in and . In order to obtain with high probability guarantees, we need to develop a new tool for finding frequent symbols not only in expectation but even with high probability, see Lemma 8 and Remark 14. The remainder of the approach differs significantly; our Algorithms 3 and 4 are very different compared to [24, Algorithms 2 and 4]. In the following we discuss their ideas.

In [24, Algorithm 2], they argue about the alphabet size, splitting the alphabet into frequent and infrequent letters. For infrequent letters the total number of matching pairs is small, so augmenting a classic exact algorithm by subsampling works well. Therefore, they can assume that every letter is frequent and thus the alphabet size is small. We avoid this line of reasoning.

Finally, [24, Algorithm 4] is their most involved algorithm. Assuming that their other algorithms have failed to produce a sufficiently good approximation, they show that each part and can be turned into a semi-permutation by a little subsampling. Then by leveraging Dilworth's theorem and Turán's theorem they show that most blocks have an LCS length of at least ; this can be seen as a triangle inequality for LCS and is their most novel contribution. This results in a highly non-trivial algorithm making clever use of combinatorial machinery.

We show that these ideas can be completely avoided, by instead relying on classic algorithms based on matching pairs augmented by subsampling. Specifically, we replace their combinatorial machinery by our Algorithms 3 and 4 described above (recall that Algorithm 3 considers a non-uniformly sampled random diagonal while Algorithm 4 subsamples the set of blocks to be able to spend more time per block). We stress that our solution completely avoids the concept of semi-permutation or any heavy combinatorial machinery as used in [24, Algorithm 4], while providing a significantly improved approximation guarantee.

Organization of the Paper. Section 3 introduces notation and a classical algorithm by Hunt and Szymanski. In Section 4 we present our new tools, in particular for finding frequent symbols. Section 5 contains our main algorithm, split into four parts that are presented in Sections 5.1, 5.3, 5.4, and 5.5, and combined in Section 5.6. In the appendix, for completeness we sketch the algorithm by Hunt and Szymanski (Appendix A) and we present pseudocode for all our algorithms (Appendix B).
3 Preliminaries For we write . By the notation and we hide factors of the form . We use “with high probability” (w.h.p.) to denote probabilities of the form , where the constant can be chosen in advance. String Notation. A string over alphabet is a finite sequence of letters in . We denote its length by and its -th letter by . We also denote by the substring consisting of letters . For any indices the string forms a subsequence of . For strings we denote by the length of the longest common subsequence of and . In this paper we study the problem of approximating for given strings of length . We focus on the length , however, our algorithms can be easily adapted to also reconstruct a subsequence attaining the output length. If are clear from the context, we may replace by . Throughout the paper we assume that the alphabet is (this is without loss of generality after a -time preprocessing). Matching Pairs. For a symbol , we denote the number of times that appears in by , and call this the frequency of in . For strings and , a matching pair is a pair with . We denote the number of matching pairs by . If are clear from the context, we may replace by . Observe that . Using this equation we can compute in time . Hunt and Szymanski [27] solved the LCS problem in time . More precisely, their algorithm can be viewed as having a preprocessing phase that only reads  and runs in time , and a query phase that reads and and takes time . Theorem 2 (Hunt and Szymanski [27]). We can preprocess a string in time . Given a string and a preprocessed string , we can compute their LCS in time . For convenience, we provide a proof sketch of their theorem in Appendix A. 4 New Basic Tools 4.1 Basic Approximation Algorithm Throughout this section we abbreviate and . We start with the basic approximation algorithm that is central to our approach; most of our later algorithms use this as a subroutine. This algorithm subsamples the string and then runs Hunt and Szymanski’s algorithm (Theorem 2). Lemma 3 (Basic Approximation Algorithm). Let . We can preprocess in time . Given , the preprocessed string , and , in expected time we can compute a value that w.h.p. satisfies . Proof. In the preprocessing phase, we run the preprocessing of Theorem 2 on . Fix a constant . If , then in the query phase we simply run Theorem 2, solving LCS exactly in time . Otherwise, denote by a random subsequence of , where each letter is removed independently with probability (i.e., kept with probability ) for . Note that by our assumption on . We can sample in expected time , since the difference from one unremoved letter to the next is geometrically distributed, and geometric random variates can be sampled in expected time , see, e.g., [18]. Note that this subsampling yields and . In the query phase, we sample and then run the query phase of Theorem 2 on and . This runs in time , which is in expectation. Finally, consider a fixed LCS of and , namely for some and . Each letter survives the subsampling to with probability . Therefore, we can bound from below by a binomial random variable (the correct terminology is that statistically dominates ). Since is a sum of independent -variables, multiplicative Chernoff applies and yields . If then and , and thus . Otherwise, if , then we can only bound . In both cases, we have with high probability. ∎ The above lemma behaves poorly if , due to the “” in the approximation guarantee. We next show that this can be avoided, at the cost of increasing the running time by an additive . 
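For intuition, the Hunt and Szymanski approach of Theorem 2 reduces LCS to a longest increasing subsequence over matching pairs. The following is my own compact rendering of that classic idea, not the pseudocode from the paper's Appendix A:

```python
import bisect
from collections import defaultdict

def hunt_szymanski_lcs(x: str, y: str) -> int:
    """LCS length in roughly O((n + M) log n) time, where M is the
    number of matching pairs of x and y."""
    # Preprocessing of y: occurrence lists per symbol.
    occ = defaultdict(list)
    for j, c in enumerate(y):
        occ[c].append(j)
    # tails[k] = smallest y-position that can end a common subsequence
    # of length k+1; the LCS length is the final length of tails.
    tails = []
    for c in x:
        # Matches of one x-position are visited in decreasing y-order,
        # so two of them can never be chained together.
        for j in reversed(occ.get(c, ())):
            k = bisect.bisect_left(tails, j)
            if k == len(tails):
                tails.append(j)
            else:
                tails[k] = j
    return len(tails)

assert hunt_szymanski_lcs("ABCBDAB", "BDCABA") == 4
```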
Lemma 4 (Generalised Basic Approximation Algorithm). Given and , in expected time we can compute a value that w.h.p. satisfies . Proof. We run the basic approximation algorithm from Lemma 3, which computes a value . Additionally, we compute the number of matching pairs in time . If , then there exists a matching pair, which yields a common subsequence of length 1. Therefore, if we set . In the proof of Lemma 3 we showed that if then w.h.p. we have . We now argue differently in the case . If , then and we are done. If , then there must exist at least one matching pair, so , so the second part of our algorithm yields . Hence, in all cases w.h.p. we have . ∎ We now turn towards the problem of deciding for given and whether . To this end, we repeatedly call the basic approximation algorithm with geometrically decreasing approximation ratio . Note that with decreasing approximation ratio we get a better approximation guarantee at the cost of higher running time. The idea is that if the LCS is much shorter than the threshold , then already approximation ratio allows us to detect that . This yields a running time bound depending on the gap . Lemma 5 (Basic Decision Algorithm). Let . We can preprocess in time . Given , the preprocessed , and a number , in expected time we can w.h.p. correctly decide whether . Our algorithm has no false positives (and w.h.p. no false negatives). Proof. In the preprocessing phase, we run the preprocessing of Lemma 3. In the query phase, we repeatedly call the query phase of Lemma 3, with geometrically decreasing values of : 1. Preprocessing: Run the preprocessing of Lemma 3. 2. For : 1. Run the query phase of Lemma 3 with parameter to obtain an estimate . 2. If : return “ 3. If : return “ Let us first argue correctness. Since Lemma 3 computes a common subsequence of , we have . Thus, if , we correctly infer . Moreover, w.h.p.  satisfies . Therefore, if , we can infer , and this decision is correct with high probability. Finally, in the last iteration (where ), we have , and thus one of or must hold, so the algorithm indeed returns a decision. The expected time of the query phase of Lemma 3 is . Since decreases geometrically, the total expected time of our algorithm is dominated by the last call. If , the last call is at the latest for . This yields running time . If , note that for any we have , and thus we return “”. Because we decrease by a factor 2 in each iteration, the last call satisfies . Hence, the expected running time is . If then this time bound simplifies to . If , then also , and the time bound becomes . In both cases we can bound the expected running time by the claimed , since . ∎ 4.2 Approximating the Number of Matching Pairs Recall that for given strings of length the number of matching pairs can be computed in time , which is linear in the input size. However, later in the paper we will split into substrings and into substrings , each of length , and we will need estimates of the numbers of matching pairs . In this setting, the input size is still (the total length of all strings and ) and the output size is (all numbers ), but we are not aware of any algorithm computing the numbers in near-linear time in the input plus output size .555In fact, one can show conditional lower bounds from Boolean matrix multiplication that rule out near-linear time for computing all ’s unless the exponent of matrix multiplication is . Therefore, we devise an approximation algorithm for estimating the number of matching pairs. Lemma 6. For write and . 
Given the strings and a number , we can compute values that w.h.p. satisfy , in total expected time . This yields a near-linear-time constant-factor approximation of all above-average : By setting , in expected time we obtain a constant-factor approximation of all values with .

Proof. The algorithm works as follows.

1. Graph Construction: Build a three-layered graph on vertex set , where has a node for every string , has a node for every string , and has a node for any and . Put an edge from to iff . Similarly, put an edge from to iff . Note that all frequencies and thus all edges of this graph can be computed in total time . For and , we denote by their common neighbors. Note that any represents all matching pairs of symbol in and , and the number of these matching pairs is .

2. Subsampling: We sample a subset by removing each node independently with probability , where .

3. Determine Common Neighbors: For each enumerate all pairs of neighbors and . For each such 2-path, add to an initially empty set . This step computes the sets in time proportional to their total size.

4. Output: Return the values .

Correctness: To analyze this algorithm, we consider the numbers . Observe that we have , since each corresponds to at least and at most matching pairs of and . It therefore suffices to show that is close to . Using Bernoulli random variables to express whether survives the subsampling, we write

$$\tilde M_{ij}=\sum_{(\sigma,\ell,r)\in U_{ij}}\frac{2^{\ell+r}}{p_{\ell,r}}\cdot \mathrm{Ber}(p_{\ell,r}).$$

This yields an expected value of , so by Markov's inequality we obtain with probability at least . Since is a linear combination of independent Bernoulli random variables, we can also easily express its variance as

$$\mathbb V[\tilde M_{ij}]=\sum_{(\sigma,\ell,r)\in U_{ij}}\left(\frac{2^{\ell+r}}{p_{\ell,r}}\right)^2 p_{\ell,r}(1-p_{\ell,r})=\sum_{(\sigma,\ell,r)\in U_{ij}}2^{\ell+r}\cdot 2^{\ell+r}\left(\frac{1}{p_{\ell,r}}-1\right).$$

We now use the definition of to bound

$$2^{\ell+r}\left(\frac{1}{p_{\ell,r}}-1\right)=2^{\ell+r}\left(\max\left\{1,\frac{q}{2^{\ell+r+3}}\right\}-1\right)=\max\left\{0,\frac{q}{8}-2^{\ell+r}\right\}\le \frac{q}{8}.$$

This yields . We now use Chebyshev's inequality on and to obtain

$$\Pr\left[\tilde M_{ij}<\overline M_{ij}/2\right]\le \frac{q}{2\,\overline M_{ij}}.$$

In case , we have and hence . Otherwise, in case , we can only use the trivial . Hence, each inequality and individually holds with probability at least . Finally, we boost the success probability by repeating the above algorithm times and returning for each the median of all computed values .

Running Time: Steps 1 and 2 can be easily seen to run in time . Steps 3 and 4 run in time proportional to the total size of all sets , which we claim to be at most in expectation. Over repetitions, we obtain a total expected running time of . (We remark that here we consider a succinct output format, where only the non-zero numbers are listed; otherwise additional time of is required to output the numbers .)

It remains to prove the claimed bound of . Since , from the definition of we infer . Therefore,

$$\mathbb E\left[\sum_{i,j}|\tilde U_{ij}|\right]\le \mathbb E\left[\frac{8}{q}\sum_{i,j}\tilde M_{ij}\right]=\frac{8}{q}\sum_{i,j}\overline M_{ij}\le \frac{8}{q}\sum_{i,j}M_{ij}=\frac{8M}{q}. \qquad\square$$
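To make the Lemma 6 estimator concrete, here is an illustrative single-pair version (my own simplification: it enumerates the symbol classes of one pair of strings directly, whereas the lemma's speedup comes from sampling once, globally, in the layered graph over all blocks):

```python
import math
import random
from collections import Counter

def estimate_pair_classes(x: str, y: str, q: float) -> float:
    """Unbiased estimate of Mbar(x, y) = sum over common symbols of
    2^(l+r), where 2^l <= freq_x(sigma) < 2^(l+1) and similarly for r.
    Mbar satisfies Mbar <= M(x, y) <= 4*Mbar, so a constant-factor
    estimate of Mbar is also one for the number of matching pairs.
    Each class is kept with probability p = min(1, 2^(l+r+3)/q)."""
    fx, fy = Counter(x), Counter(y)
    est = 0.0
    for sigma in fx.keys() & fy.keys():
        l = int(math.log2(fx[sigma]))
        r = int(math.log2(fy[sigma]))
        p = min(1.0, 2.0 ** (l + r + 3) / q)
        if random.random() < p:
            est += 2.0 ** (l + r) / p   # inverse-probability weighting
    return est
```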
4.3 Single Symbol Approximation Algorithm

For strings that have a large number of matching pairs , some symbol must appear often in and in . This yields a common subsequence using (several repetitions of) a single alphabet symbol.

Lemma 7 (Cf. Lemma 6.6.(ii) in [20] or Algorithm 3 in [24]). For any there exists a symbol that appears at least times in and in . Therefore, in time we can compute a common subsequence of of length at least . In particular, we can compute a value that satisfies .

Proof. Let be maximal such that some symbol appears at least times in and at least times in . Let for . Since no symbol appears more than times in and in , we have . We can thus bound

$$M=M(x,y)=\sum_{\sigma\in\Sigma}\#_\sigma(x)\cdot\#_\sigma(y)\le \sum_{\sigma\in\Sigma_x}k\cdot\#_\sigma(y)+\sum_{\sigma\in\Sigma_y}\#_\sigma(x)\cdot k\le 2kn,$$

since the frequencies sum up to at most , and similarly for . It follows that . Computing , and a symbol attaining , in time is straightforward. ∎

We devise a variant of Lemma 7 in the following setting. For strings , we write , and . We want to find for each block a frequent symbol in and , or equivalently we want to find a common subsequence of and using a single alphabet symbol. Similarly to Lemma 6, we relax Lemma 7 to obtain a fast running time.

Lemma 8. Given and any , we can compute for each a number such that w.h.p. . The algorithm runs in total expected time .

Proof. We run the same algorithm as in Lemma 6, except that in Step 4 for each with non-empty set we let be the maximum of over all . For each empty set , we implicitly set , i.e., we output a sparse representation of all non-zero values . The running time analysis is the same as in Lemma 6.

For the upper bound on , since appears at least times in and at least times in , there is a common subsequence of and of length at least . Thus, we have .

For the lower bound on , fix and order the tuples in ascending order of , obtaining an ordering . For we let and . Recall that , and observe that we can pick with

$$\sum_{(\sigma,\ell,r)\in S}2^{\ell+r}\ge \overline M_{ij}/2 \qquad\text{and}\qquad \sum_{(\sigma,\ell,r)\in L}2^{\ell+r}\ge \overline M_{ij}/2. \tag{1}$$

Then we have

$$\frac{\overline M_{ij}}{2}\le \sum_{(\sigma,\ell,r)\in S}2^{\ell+r}=\sum_{(\sigma,\ell,r)\in S}2^{\min\{\ell,r\}}\cdot 2^{\max\{\ell,r\}}\le 2^{\min\{\ell_h,r_h\}}\sum_{(\sigma,\ell,r)\in S}2^{\max\{\ell,r\}}.$$

Note that for any the symbol appears at least times in or in , and thus the sum on the right hand side is at most . Rearranging, this yields , where we used as in the proof of Lemma 6. In particular, due to our ordering we have for any :

$$2^{\min\{\ell,r\}}\ge 2^{\min\{\ell_h,r_h\}}\ge \frac{M_{ij}}{16m}. \tag{2}$$

Consider the number of nodes in surviving the subsampling, i.e., . If , then some node in survived, and thus by (2) the computed value is at least . It thus remains to analyze . In case some has , we have with probability 1. Otherwise all have and thus . In this case, we write as a sum of independent Bernoulli random variates in the form . In particular,

$$\mathbb E[Z]=\sum_{(\sigma,\ell,r)\in L}\frac{2^{\ell+r+3}}{q}\overset{(1)}{\ge}\frac{4\,\overline M_{ij}}{q}\ge \frac{M_{ij}}{q}.$$

Since is a sum of independent -variables, multiplicative Chernoff applies and yields . We thus obtain

$$\Pr[Z>0]\ge 1-\Pr[Z\le \mathbb E[Z]/2].$$

In case , we obtain , and thus we have with probability at least . Otherwise, in case , we can only use the trivial bound . In any case, we have with probability at least . Similar to the proof of Lemma 6, we run independent repetitions of this algorithm and return for each the maximum of all computed values , to boost the success probability and finish the proof. ∎

5 Main Algorithm

In this section we prove Theorem 1. First we show that Theorem 9 implies Theorem 1, and then in the remainder of this section we prove Theorem 9.

Theorem 9 (Main Result, Relaxation). Given strings of length and a time budget , in expected time we can compute a number such that and w.h.p. .

Recall Theorem 1.

Proof of Theorem 1 assuming Theorem 9. Note that the difference between Theorems 1 and 9 is that the latter allows expected running time and has an additional slack of logarithmic factors in the running time. In order to remove the expected running time, we abort the algorithm from Theorem 9 after time steps. By Markov's inequality, we can choose the hidden constants and log factors such that the probability of aborting is at most . We boost the success probability of this adapted algorithm by running independent repetitions and returning the maximum over all computed values .
This yields an -approximation with high probability in time . To remove the log factors in the running time, as the first step in our algorithm we subsample the given strings , keeping each symbol independently with probability , resulting in subsampled strings . Since any common subsequence of is also a common subsequence of , the estimate that we compute for satisfies . Moreover, if then by a Chernoff bound with high probability we have , so that an -approximation on also yields an -approximation on . Otherwise, if , then in order to compute a -approximation it suffices to compute an LCS of length 1, which is just a matching pair and can be found in time (assuming that the alphabet is ). This yields an algorithm that computes a value such that w.h.p. . The algorithm runs in time , and this running time bound holds deterministically, i.e., with probability 1. Hence, we proved Theorem 1. ∎

It remains to prove Theorem 9. Our algorithm is a combination of four methods that work well in different regimes of the problem, see Sections 5.1, 5.3, 5.4, and 5.5. We will combine these methods in Section 5.6.

5.1 Algorithm 1: Small L

Algorithm 1 works well if the LCS is short. It yields the following result.

Theorem 10 (Algorithm 1). We can compute in expected time an estimate that w.h.p. satisfies .

Proof. Our Algorithm 1 works as follows.

1. Run Lemma 7 on and .
2. Run Lemma 4 on and with .
3. Output the larger of the two common subsequence lengths computed in Steps 1 and 2.

Running Time: Step 1 runs in time . Step 2 runs in expected time . Since we have , so the expected running time is .

Upper Bound: Steps 1 and 2 compute common subsequences, so the computed estimate satisfies .

Approximation Guarantee: Note that Step 1 guarantees and Step 2 guarantees w.h.p. . If then and , so we solved the problem exactly. Otherwise we have and
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9706261157989502, "perplexity": 577.9814571756353}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301592.29/warc/CC-MAIN-20220119215632-20220120005632-00093.warc.gz"}
https://www.physicsforums.com/threads/faradays-cage.440674/
1. Oct 22, 2010

### alphaomega@ho

hello, I have a very simple question: a hollow sphere has the property that the electric field inside it, due to its being charged, is zero. Is it zero in the whole interior? for example, near the inside surface of the sphere, the electric field of the nearest charge will be greater than the electric field of the charge which is at the other side... what about non-regular hollow objects, is the electric field also zero then (because of the geometry)? thank you very much!

2. Oct 22, 2010

### zhermes

If it's a net-uncharged, conducting sphere then yes. If you are so close to the surface of the sphere that the charge density is no longer uniform, but instead you can "see" individual charges, then no: the field won't be identically zero, but still very close. Still zero.

3. Oct 23, 2010

### alphaomega@ho

what do you mean by net-uncharged? I find it difficult to understand why in a non-regular object the field within is zero. I've seen the derivation for a hollow sphere, with those solid angles which cause opposite field vectors (the reason why the field is zero) => but in a non-regular hollow object this isn't true, so what's the reason for having no net electric field inside that object? for example a car... thank you very much

4. Oct 23, 2010

### zhermes

The argument makes no assumption about the particular shape/features of the hollow object except that it is closed, conducting, and uncharged. Think about the inner surface of the conducting object. If there was still a residual electric field there, then electrons on the surface would feel a force and therefore move. Thus the system would not be in equilibrium. The only possible equilibrium configuration, and the lowest energy configuration (thus the preferred configuration), is if there is no electric field at the inner surface.

5. Oct 23, 2010

### alphaomega@ho

so the conclusion: a hollow object (no matter what shape) with an equally distributed charge over its surface (dq/dA is the same everywhere) has an electric field inside with a net value of 0. thank you very much

6. Oct 23, 2010

### zhermes

Yes, except that $$\frac{dq}{dA}$$ isn't necessarily constant over the surface. I think it will only be constant in the case of zero divergence in the external field, across the surface. Can anyone reading this confirm or deny?

7. Oct 23, 2010

### alphaomega@ho

I mean that dq/dA must be equal over the outer surface. Suppose all the charge q is spread out over 1/100 of the outer surface somewhere; then inside the object there will be a net electric field, since there is no other charge which creates an internal field opposite to the dE created by the charge on that part of the surface, I think... what did you mean by zero divergence in the external field?

8. Oct 26, 2010

### alphaomega@ho

is the statement I made above correct? thank you!

9. Oct 26, 2010

### zhermes

Consider a point-charge located outside the hollow conductor (of arbitrary shape). The induced charge density on the sphere surface will be non-homogeneous, even though the electric field inside the cavity would be zero.

10. Oct 27, 2010

### alphaomega@ho

something like the picture for example? that I do understand, but a non-homogeneous dq/dA over the outer surface I find very strange, but I'll accept it... thank you
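The uniformly charged shell case from the opening posts is easy to check numerically. Below is a small sketch I am adding (not part of the thread): it sums Coulomb contributions, in arbitrary units with constants dropped, from a uniformly charged unit spherical shell, and shows the field vanishes at an interior point but not at an exterior one:

```python
import numpy as np

def field_from_uniform_shell(point, n=200_000, seed=0):
    """E-field (up to a constant factor) at `point` due to a uniformly
    charged unit spherical shell, by Monte Carlo summation over n
    sample charges of strength 1/n each."""
    rng = np.random.default_rng(seed)
    # Uniform points on the unit sphere via normalized Gaussian vectors.
    src = rng.normal(size=(n, 3))
    src /= np.linalg.norm(src, axis=1, keepdims=True)
    r = point - src                          # vectors from sources to the point
    d = np.linalg.norm(r, axis=1, keepdims=True)
    return (r / d**3).mean(axis=0)           # sum of Coulomb terms

print(field_from_uniform_shell(np.array([0.3, -0.1, 0.2])))  # ~ (0, 0, 0) inside
print(field_from_uniform_shell(np.array([0.0, 0.0, 2.0])))   # nonzero outside
```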
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9222405552864075, "perplexity": 1198.131317127091}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170884.63/warc/CC-MAIN-20170219104610-00406-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.mathxplain.com/calculus-3/matrices-and-vectors/the-equation-of-the-line-and-the-equation-of-the-plane
Contents of this Calculus 3 episode: The equation of the line, The equation of the plane, Normal vector, Vector between 2 points, Distance of 2 points, Direction vector, System of equations of the line.

Text of slideshow

The equation of a line and the equation of a plane

It's time to do some geometry. There's nothing to worry about, it's just a few little things. Let's start with vectors and lines in the plane.

EQUATION OF A LINE: If is a point on the line, and is the normal vector of the line:

Just a reminder: the normal vector of a line is a non-zero vector that is perpendicular to that line.

VECTOR BETWEEN 2 POINTS: If and are points, then the vector between these points:

DISTANCE OF 2 POINTS: If and are points, then the distance between these points:

It's all the same in space, except there are three coordinates.

EQUATION OF A PLANE: If is on the plane and is the normal vector of the plane:

Just a reminder: the normal vector of a plane is a non-zero vector that is perpendicular to that plane.

VECTOR BETWEEN 2 POINTS: If and are points, then the vector between these points:

DISTANCE OF 2 POINTS: If and are points, then the distance between these points:

Let's try to come up with the equation of a line in space. This would be useful for us, but it is not included in this list. Unfortunately there will be some problems with it, but let's try anyway.

Let's find the equation of the line where point is on the line, and is the direction vector of the line. Here, we have to use the direction vector instead of the normal vector, because in space it is not obvious which vector is perpendicular to the line. The direction vector, on the other hand, is specific; only its length may vary.

If is an arbitrary point on the line, then . This vector is a multiple of the line's direction vector .

If , then we divide by it; if it is zero, then .
If , then we divide by it; if it is zero, then .
If , then we divide by it; if it is zero, then .

All of them are equal to , therefore they must be equal to each other, too. This is the system of equations of a line in space.

Let's see an example: Find the equation of the line where point is on the line, and is the direction vector of the line. Here is the system of equations of the line: Unfortunately, will cause some trouble. In such cases

Next, let's see a typical exercise.

Find the equation of a line in the plane, where point is on the line, and the line is perpendicular to the line described by the equation of

Find the equation of a plane in space, where point is on the plane and the plane is perpendicular to the line described by the following system of equations:

The normal vector of line is . We can make use of this vector if we rotate it by 90°, because then it will be the normal vector of the line we are trying to define. To rotate a vector in the plane by 90°, we swap its coordinates, and multiply one of them by −1. We have the normal vector, so the equation of the line is:

Let's see what we can do over here. The normal vector of the plane happens to be the direction vector of the line. The direction vector of the line: The normal vector of the plane: Here comes the equation of the plane:

And finally, another typical exercise. Find the equation of a line in the plane, where points and are on the line. Find the equation of the plane in space, where points , , and are on the plane.
If is a point on the line and is the normal vector of the line, the equation of the line will be:

If is a point on the plane and is the normal vector of the plane, the equation of the plane will be:

We have plenty of points, but we don't have a single normal vector, so we have to make one. Let's rotate this by 90°, and that gives us the normal vector. To rotate a vector in the plane by 90°, we swap its coordinates, and multiply one of them by −1. The equation of the line:

We will have a problem with the plane here. In space there is no such thing as rotating a vector by 90°. We have to figure out something else to get the plane's normal vector. We would need a vector that is perpendicular to the triangle determined by points , , and . This vector will be the so-called cross product.
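The two constructions described above translate directly into a few lines of code. This is my own numerical sketch (the episode itself works symbolically): rotating a 2D normal vector by 90° for the line case, and taking the cross product of two edge vectors for the plane case.

```python
import numpy as np

def rotate90(v):
    """Rotate a 2D vector by 90 degrees: swap coordinates, negate one."""
    return np.array([-v[1], v[0]])

def plane_through(P, Q, R):
    """Return (n, d) for the plane through P, Q, R written as n . x = d;
    the normal n is the cross product of two edge vectors."""
    n = np.cross(Q - P, R - P)
    return n, n @ P

P, Q, R = np.array([1., 0., 0.]), np.array([0., 2., 0.]), np.array([0., 0., 3.])
n, d = plane_through(P, Q, R)
print(n, d)                      # n = (6, 3, 2), d = 6
assert np.isclose(n @ Q, d) and np.isclose(n @ R, d)
```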
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9007354974746704, "perplexity": 319.6013673324001}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107874026.22/warc/CC-MAIN-20201020162922-20201020192922-00161.warc.gz"}
http://www.math.ubc.ca/~andrewr/CLP/clp_1_dc/sec_invTrig.html
We are now going to consider the problem of finding the derivatives of the inverses of trigonometric functions. Now is a very good time to go back and reread Section 0.6 on inverse functions — especially Definition 0.6.4. Most importantly, given a function $f(x)\text{,}$ its inverse function $f^{-1}(x)$ only exists, with domain $D\text{,}$ when $f(x)$ passes the “horizontal line test”, which says that for each $Y$ in $D$ the horizontal line $y=Y$ intersects the graph $y=f(x)$ exactly once. (That is, $f(x)$ is a one-to-one function.) Let us start by playing with the sine function and determine how to restrict the domain of $\sin x$ so that its inverse function exists.
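To preview where this standard construction leads, here is a worked sketch added for completeness (the text itself develops the details step by step):

```latex
\text{Restrict } \sin \text{ to } \left[-\tfrac{\pi}{2},\tfrac{\pi}{2}\right],
\text{ where it is increasing, hence one-to-one; call the inverse }
\arcsin x, \text{ with domain } [-1,1].

\text{If } y=\arcsin x, \text{ then } \sin y = x.
\text{ Differentiating implicitly: } \cos y\cdot\frac{dy}{dx}=1,
\text{ so } \frac{dy}{dx}=\frac{1}{\cos y}
=\frac{1}{\sqrt{1-\sin^2 y}}=\frac{1}{\sqrt{1-x^2}},

\text{where } \cos y=\sqrt{1-\sin^2 y}\ge 0
\text{ because } y\in\left[-\tfrac{\pi}{2},\tfrac{\pi}{2}\right].
```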
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8621153235435486, "perplexity": 108.0749823761531}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806509.31/warc/CC-MAIN-20171122065449-20171122085449-00207.warc.gz"}
https://tex.stackexchange.com/questions/253295/difference-between-framebox-and-boxed/253297#253297
# Difference between \framebox{} and \boxed{}

I want to put an equation in a frame box. Is it the same if I first create a framed text box and put the equation inside it, or should I use the \boxed{} command?

\documentclass[a4paper]{article} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsfonts} \begin{document} \framebox{$a^2+b^2=c^2$} $\boxed{a^2+b^2=c^2}$ \end{document}

Package amsmath defines \boxed:

\newcommand{\boxed}[1]{\fbox{\m@th$\displaystyle#1$}}

\framebox and \fbox are just different interfaces for the same internal \@frameb@x, which actually makes the box. \framebox has more options. Thus, the main difference between \framebox{$...$} and $\boxed{...}$ is that \boxed sets \displaystyle, whereas it has to be done manually in the former variant: \framebox{$\displaystyle ...$}.

Another difference appears if \mathsurround is not zero. This space is inserted when TeX enters and leaves inline math mode. It is intended as a separation of math from the surrounding text. Inside the box it does not make sense, and \boxed removes it by setting \mathsurround to zero via \m@th.

A test file, which illustrates the differences:

\documentclass{article} \usepackage[T1]{fontenc} \usepackage{lmodern} \usepackage{amsmath} \begin{document} \newcommand*{\vs}{\mathrel{\text{vs.}}} \begin{align*} \text{\ttfamily\string\framebox\{\$\dots\$\}} &\vs \text{\ttfamily\string\boxed\{\dots\}} \\ \framebox{$\frac{123}{456}$} &\vs \boxed{\frac{123}{456}} \\ \setlength{\mathsurround}{1em}\framebox{$X$} &\vs \setlength{\mathsurround}{1em}\boxed{X} \end{align*} \end{document}

A simulation of \boxed would be:

\framebox{% \setlength{\mathsurround}{0pt}% $\displaystyle ...$% }

• Thanks for the helpful reply! By the way, would you please tell me where I can find the definition of \boxed? Jul 2 '15 at 10:43
• @ZhangZe kpsewhich amsmath.sty (or locate the package file amsmath.sty by means of the OS), then line 307 (in version 2013/01/14 v2.14). Jul 2 '15 at 11:06
• Okay. Got it! Jul 2 '15 at 11:17

For such questions I just load the package lua-visual-debug. You can see clearly that they are both the same:

% arara: lualatex \documentclass[a4paper]{article} \usepackage{amsmath} \usepackage{blindtext} \usepackage{lua-visual-debug} \parindent0pt \begin{document} \blindtext \framebox{$\displaystyle a^2+b^2=c^2$} % displaystyle needed here, as it is set in \boxed by default. It just changes the horizontal spacing of the powers a bit. $\boxed{a^2+b^2=c^2}$ \blindtext \end{document}

The two boxes come out identical in spacing and alignment, so both approaches behave the same in practice. For the syntactical equivalence, see Heiko's answer.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9729504585266113, "perplexity": 3421.111016614425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358078.2/warc/CC-MAIN-20211127013935-20211127043935-00135.warc.gz"}
https://link.springer.com/article/10.1140%2Fepjc%2Fs10052-010-1237-2
The European Physical Journal C, Volume 66, Issue 1–2, pp 271–281

# Probing scalar particle and unparticle couplings in $$e^{+}e^{-}\to t\bar{t}$$ with transversely polarized beams

Regular Article - Theoretical Physics

## Abstract

In searching for indications of new-physics scalar particle and unparticle couplings in $$e^{+}e^{-}\to t\bar{t}$$, we consider the role of transversely polarized initial beams at e+e- colliders. By using a general relativistic spin density matrix formalism for describing the particles' spin states, we find analytical expressions for the differential cross section of the process with t or $$\bar{t}$$ polarization measured, including the anomalous coupling contributions. Thanks to the transversely polarized initial beams, these contributions are first-order anomalous-coupling corrections to the Standard Model (SM) contributions. We present and analyze the main features of the SM and anomalous coupling contributions. We show how differences between SM and anomalous coupling contributions provide means to search for anomalous coupling manifestations at future e+e- linear colliders.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8842342495918274, "perplexity": 1830.208350467347}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424247.30/warc/CC-MAIN-20170723042657-20170723062657-00522.warc.gz"}
http://eprints.adm.unipi.it/1980/
# Fast fraction-free triangularization of Bezoutians with applications to sub-resultant chain computation

Bini, D. A. and Gemignani, Luca (1997) Fast fraction-free triangularization of Bezoutians with applications to sub-resultant chain computation. Technical Report del Dipartimento di Informatica. Università di Pisa, Pisa, IT.

An algorithm for the computation of the LU factorization over the integers of an $n\times n$ Bezoutian $B$ is presented. The algorithm requires $O(n^2)$ arithmetic operations and involves integers having at most $O(n\log nc)$ bits, where $c$ is an upper bound of the moduli of the integer entries of $B$. As an application, by using the correlations between Bezoutians and the Euclidean scheme, we devise a new division-free algorithm for the computation of the polynomial pseudo-remainder sequence of two polynomials of degree at most $n$ in $O(n^2)$ arithmetic operations. The growth of the length of the integers involved in the computation is kept at the minimum value, i.e., $O(n\log nc)$ bits, no matter whether the sequence is normal or not, where $c$ is an upper bound of the moduli of the input coefficients. The algorithms, which rely on the Bareiss technique and on the properties of the Schur complements of Bezoutians, improve on the previous ones.
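The Bareiss technique mentioned above keeps the whole elimination over the integers. As a quick illustration of the idea, here is a generic O(n^3) fraction-free elimination sketch in Python (my own illustration, not the authors' structured O(n^2) algorithm for Bezoutians; it assumes simple row pivoting suffices):

def bareiss_triangularize(M):
    """Fraction-free (Bareiss) elimination of an integer matrix.

    Returns an upper-triangular integer matrix; with no row swaps the last
    pivot equals det(M) (each swap flips the sign). Every division below is
    exact, which is what keeps all intermediate entries integral.
    """
    n = len(M)
    M = [row[:] for row in M]   # work on a copy
    prev = 1                    # previous pivot, the running "denominator"
    for k in range(n - 1):
        if M[k][k] == 0:        # simple row pivoting
            for r in range(k + 1, n):
                if M[r][k] != 0:
                    M[k], M[r] = M[r], M[k]
                    break
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Bareiss update: exact integer division by the previous pivot
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return M

print(bareiss_triangularize([[2, 3, 1], [4, 7, 7], [6, 18, 22]])[-1][-1])  # -52, the determinant

The paper's contribution is to reach the same kind of fraction-free triangularization in O(n^2) operations by using the properties of the Schur complements of Bezoutians, which the generic scheme above ignores.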
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9493411779403687, "perplexity": 447.2503042050648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794869732.36/warc/CC-MAIN-20180527170428-20180527190428-00200.warc.gz"}
https://www.physicsforums.com/threads/joint-probability-of-partitioned-vectors.763723/
# Joint probability of partitioned vectors

1. Jul 28, 2014

### scinoob

Hi everybody, I apologize if this question is too basic but I did 1 hour of solid Google searching and couldn't find an answer and I'm stuck. I'm reading Bishop's Pattern Recognition and Machine Learning and in the second chapter he introduces partitioned vectors. Say, if X is a D-dimensional vector, it can be partitioned like: X = [Xa, Xb] where Xa is the first M components of X and Xb is the remaining D-M components of X. I have no problem with this simple concept. Later in the same chapter he talks about conditional and marginal multivariate Gaussian distributions and he uses the notation p(Xa, Xb). I'm trying to understand how certain integrals involving this notation are expanded but I'm actually struggling to understand even this expression. It seems to suggest that we're denoting the joint probability of the components of Xa and the components of Xb. But those are just the components of X anyway! What is the difference between P(Xa, Xb) and P(X)? It will be more helpful for me if we considered a more concrete example. Say, X = [X1, X2, X3, X4] and Xa = [X1, X2] while Xb = [X3, X4]. Now, the joint probability P(X) would simply be P(X1, X2, X3, X4), right? What is P(Xa, Xb) in this case?

2. Jul 28, 2014

### mathman

My guess: in later chapters he discusses Xa and Xb as separate entities.

3. Jul 28, 2014

### gill1109

There is no difference between p(Xa, Xb) and p(X), because X = (Xa, Xb). It starts to get interesting when we introduce marginal and conditional probability densities e.g. p(Xa | Xb) and p(Xb). Obviously, p(Xa, Xb) = p(X) = p(Xa | Xb) · p(Xb). You get p(Xb) from p(X) by integrating out over Xa. NB use small "p" for probability density. Use capital "P" for probability.
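To make gill1109's point concrete, here is a small numerical check (my own sketch; the conditional and marginal below are the standard partitioned-Gaussian formulas, not quoted from Bishop's text): for a 4-dimensional Gaussian with Xa = (X1, X2) and Xb = (X3, X4), the joint density factors as p(xa, xb) = p(xa | xb) p(xb).

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Sigma = A @ A.T + 4 * np.eye(4)      # random symmetric positive-definite covariance
mu = rng.standard_normal(4)

x = rng.standard_normal(4)           # test point x = (xa, xb)
xa, xb = x[:2], x[2:]
mua, mub = mu[:2], mu[2:]
Saa, Sab = Sigma[:2, :2], Sigma[:2, 2:]
Sba, Sbb = Sigma[2:, :2], Sigma[2:, 2:]

# Conditional Xa | Xb = xb and marginal Xb (standard partitioned-Gaussian results)
cond_mean = mua + Sab @ np.linalg.solve(Sbb, xb - mub)
cond_cov = Saa - Sab @ np.linalg.solve(Sbb, Sba)

p_joint = multivariate_normal(mu, Sigma).pdf(x)
p_factored = (multivariate_normal(cond_mean, cond_cov).pdf(xa)
              * multivariate_normal(mub, Sbb).pdf(xb))
print(p_joint, p_factored)  # the two numbers agree: p(X) = p(Xa, Xb) = p(Xa|Xb) p(Xb)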
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.902511715888977, "perplexity": 1285.774250229462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886123359.11/warc/CC-MAIN-20170823190745-20170823210745-00110.warc.gz"}
https://www.aimsciences.org/journal/2155-3289/2021/11/4
American Institute of Mathematical Sciences ISSN: 2155-3289 eISSN: 2155-3297

Numerical Algebra, Control and Optimization, December 2021, Volume 11, Issue 4

2021, 11(4): 487-493 doi: 10.3934/naco.2020039

Abstract: As is known, the equation $A\varphi = f$ with injective compact operator has a unique solution for all $f$ in the range $R(A)$. Unfortunately, the right-hand side $f$ is never known exactly, so we take approximate data $f_{\delta}$ and use the perturbed problem $\alpha \varphi + A\varphi = f_{\delta}$, where the solution $\varphi_{\alpha\delta}$ depends continuously on the data $f_{\delta}$, and the bounded inverse operator $(\alpha I + A)^{-1}$ approximates the unbounded operator $A^{-1}$, though not stably. In this work we obtain the convergence of the approximate solution $\varphi_{\alpha\delta}$ of the perturbed equation to the exact solution $\varphi$ of the initial equation, provided $\alpha$ tends to zero together with $\dfrac{\delta}{\sqrt{\alpha}}$.

2021, 11(4): 495-512 doi: 10.3934/naco.2020040

Abstract: The direct scheme method is applied to construct an asymptotic approximation of any order to the solution of a singularly perturbed optimal control problem with scalar state, controlled via a second-order linear ODE and two fixed end points. Error estimates for the state and control variables and for the functional are obtained. An illustrative example is given.

2021, 11(4): 513-531 doi: 10.3934/naco.2020053

Abstract: In an attempt to improve the theoretical complexity of large-update methods, in this paper we propose a primal-dual interior-point method for the $P_{\ast}(\kappa)$-horizontal linear complementarity problem. The method is based on a class of parametric kernel functions. We show that the corresponding algorithm has $O\left( \left( 1+2\kappa \right) p^{2}n^{\frac{2+p}{2\left( 1+p\right) }}\log \frac{n}{\epsilon }\right)$ iteration complexity for large-update methods, and with a special choice of the parameter $p$ we match the best known iteration bound for the $P_{\ast}(\kappa)$-horizontal linear complementarity problem, namely $O\left(\left( 1+2\kappa \right) \sqrt{n}\log n\log \frac{n}{\epsilon }\right)$. We illustrate the performance of the proposed kernel function by some comparative numerical results obtained by applying our algorithm with five kernel functions.

2021, 11(4): 533-554 doi: 10.3934/naco.2020054

Abstract: Solving a bi-criterion fractional stochastic program using an existing multi-criteria decision-making tool demands substantial effort and is time-consuming. There are many financial situations in which a nonlinear fractional program, arising from the study of fractional stochastic programming, must be solved. Often management does not need an optimal solution; an approximate solution can give a good starting point for decision making or for running a new model to find an intermediate or final solution. To this end, the author introduces a new linear approximation technique for solving a fractional stochastic programming (CCP) problem. After introducing the problem, the equivalent deterministic form of the fractional nonlinear programming problem is developed. To solve the problem, a fuzzy goal programming model of the equivalent deterministic form of the fractional stochastic program is provided, and then the process of defuzzification and linearization of the problem is presented. A sample test problem is solved for presentation purposes. There are some limitations to the proposed approach: (1) the solution obtained from this type of modeling is approximate, and (2) preparing the approximate model of the problem may take some time for beginners.

2021, 11(4): 555-566 doi: 10.3934/naco.2020055

Abstract: The aim of this paper is to study the problem of constrained controllability for a distributed parabolic linear system evolving in a spatial domain $\Omega$, using the Reverse Hilbert Uniqueness Method (RHUM approach) introduced by Lions in 1988. It consists in finding the control $u$ that steers the system from an initial state $y_{0}$ to a state between two prescribed functions. We give some definitions and properties concerning this concept, and then we solve the problem, which relies on computing a control with minimum cost, in the case $\omega = \Omega$ and in the regional case where $\omega$ is a part of $\Omega$.

2021, 11(4): 567-578 doi: 10.3934/naco.2020056

Abstract: Biometric characteristics have been used for many decades, particularly in the detection of crimes and in investigations. The rapid development of image processing has brought great progress in biometric feature recognition, which is used in many areas of life, especially when the recognition is built as a computer system. The target of this research is to set up a left-foot biometric system by hybridization between image processing and the artificial bee colony (ABC) algorithm for feature selection. The approach is novel because hybridization algorithms combining footprint recognition with artificial bee colony assessment are rare in the literature. The suggested system is tested on ninety live-captured colored footprint images that compose the visual database. The database was classified into nine clusters and normalized for use in the later stages. The features database is constructed from the visual database off-line. The system starts with a comparison operation between the foot-tip image features extracted on-line and the visual database features. The outcome of this process is either a rejection or an acceptance message. The results of the proposed work reflect the accuracy and integrity of the output. This is due to the careful choice of features as well as the use of the artificial bee colony and data clustering, which decreased the complexity and raised the recognition rate to 100%. Our outcomes show the precision of the proposed procedure compared with other methods in the field of biometric recognition.

2021, 11(4): 579-611 doi: 10.3934/naco.2020057

Abstract: The integration of flexible AC transmission system (FACTS) controllers into an existing power network outweighs the numerous alternatives for enhancing the damping of the inter-area/intra-area oscillations of the network. This paper provides a rigorous analysis of the damping of oscillations in a power network. It utilizes a shunt-connected voltage source converter (VSC) based FACTS device to enhance the system operating characteristics. A comprehensive mathematical model of the system has been developed to demonstrate the system behavior under different loading conditions. A novel hybrid augmented grey wolf optimization-particle swarm optimization (AGWO-PSO) is proposed for the coordinated design of the static synchronous compensator (STATCOM) controller and power system stabilizers (PSSs). A multi-objective function, comprising damping-ratio improvement and drifting the real parts of the system poles to the left-hand side of the S-plane, has been developed, and the effectiveness of the proposed algorithms has been analyzed by monitoring the system performance under different loading conditions. Eigenvalue analysis and the damping behavior of the system states under perturbation are presented for the proposed algorithms under different loading conditions, and the performance evaluation of the proposed algorithms is carried out in terms of execution time and convergence characteristics.

2021, 11(4): 613-631 doi: 10.3934/naco.2020058

Abstract: An interior-point modified Nelder-Mead method for nonlinearly constrained optimization is described. This method neither uses nor estimates objective function or constraint gradients. A modified logarithmic barrier function is used. The method generates a sequence of points which converges to KKT point(s) under mild conditions, including existence of a Slater point. Numerical results are presented that show the algorithm performs well in practice.

2021, 11(4): 633-644 doi: 10.3934/naco.2021001

Abstract: In this paper, Lyapunov's artificial small parameter method (LASPM) with continuous particle swarm optimization (CPSO) is presented and used for solving nonlinear differential equations. The proposed method, LASPM-CPSO, is based on estimating the $\varepsilon$ parameter in LASPM through a PSO algorithm with a proposed objective function. Three different examples are used to evaluate the proposed method LASPM-CPSO and compare it with the classical method LASPM over different intervals of the domain. The results for the maximum absolute error (MAE) and mean squared error (MSE) obtained on the given examples show the reliability and efficiency of the proposed LASPM-CPSO method compared to the classical method LASPM.

2021, 11(4): 645-663 doi: 10.3934/naco.2021002

Abstract: In this work, we propose a new approach for solving the linear-quadratic optimal control problem, where the quality criterion is a quadratic function which can be convex or non-convex. In this approach, we transform the continuous optimal control problem into a quadratic optimization problem using the Cauchy discretization technique, then solve it with the active-set method. In order to study the efficiency and accuracy of the proposed approach, we developed an implementation in MATLAB and performed numerical experiments on several convex and non-convex linear-quadratic optimal control problems. The simulation results show that our method is more accurate and more efficient than the method using the classical Euler discretization technique. Furthermore, it is shown that our method converges quickly to the optimal control of the continuous problem found analytically using Pontryagin's maximum principle.

2021, 11(4): 665-676 doi: 10.3934/naco.2021003

Abstract: In this paper, we propose a new alternate gradient (AG) method to solve a class of optimization problems with orthogonality constraints. In particular, our AG method alternately takes several gradient reflection steps followed by one gradient projection step. It is proved that any accumulation point of the iterates generated by the AG method satisfies the first-order optimality condition. Numerical experiments show that our method is efficient.

2021, 11(4): 677-685 doi: 10.3934/naco.2021012

Abstract: An efficiently preconditioned Newton-like method for the computation of the eigenpairs of large and sparse nonsymmetric matrices is proposed. A sequence of preconditioners based on the Broyden-type rank-one update formula is constructed for the solution of the linearized Newton system. The properties of the preconditioned matrix are investigated. Numerical results are given which reveal that the new proposed algorithms are efficient.

2021 CiteScore: 1.9
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8599947690963745, "perplexity": 684.192832441184}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104506762.79/warc/CC-MAIN-20220704232527-20220705022527-00690.warc.gz"}
http://www.potto.org/dieCasting/node38.html
### Transition from laminar to turbulent

It is commonly assumed that the flow is turbulent in the shot sleeve, in the runner system, and for the duration of the filling of the cavity. Further, it is also assumed that the model can reasonably represent the turbulence structure. These assumptions are examined herein. We examine the flow in 1) the shot sleeve, 2) the runner system, and 3) the mold cavity. Note that even if turbulence exists in some regions, it does not necessarily mean that the whole flow field is turbulent.

[Figure: transition to turbulence for an instantaneously started flow in a circular pipe; the abscissa is time and the ordinate is the Reynolds number at which transition occurs.] The figure exhibits the transition to a turbulent flow for an instantaneously started flow in a circular pipe. The points on the graph show the transition to turbulence. The figure demonstrates that a long time, measured in several seconds, is required for the flow pattern to become turbulent, and that the transition does not occur below a certain critical Reynolds number (known as the critical Reynolds number for steady state). It also shows that a considerable time has to elapse before transition to turbulence occurs, even for a relatively large Reynolds number. The geometry in die casting, however, is different, and therefore the transition is expected to occur at different times. Our present knowledge of this area is very limited. Yet a similar transition delay is expected to occur after the "instantaneous" start-up, and it will probably be measured in seconds. The flow in die casting is in many situations very short (on the order of milliseconds), and therefore the transition to a turbulent flow is not expected to occur.

After the liquid metal is poured, it normally reposes for some time, in the range of tens of seconds. This is known in the scientific literature as the quieting time, during which any existing turbulence is reduced and, after enough time (measured in seconds), eliminated. Hence, any turbulence created during the filling of the shot sleeve has "disappeared". Now we can examine the question: is the flow turbulent during the slow plunger velocity stage (see the figure below)?

[Figure: wave propagating along the shot sleeve during the slow plunger stage.] Clearly, the flow in the substrate (ahead of the wave) is still (almost zero velocity), and therefore turbulence does not exist there. The Reynolds number behind the wave is above the critical Reynolds number (which is in the range of 2000-3000). The typical time for the wave to travel to the end of the shot sleeve is in the range of a second. At present there are no experiments on the flow behind the wave (the free interface, as opposed to a solid interface, remains to be discussed). An estimation can be made by looking at what is known in the literature about the transition to turbulence in an instantaneously started pipe flow. It has been shown [flow:wygy] that the flow changes from laminar to turbulent in an abrupt manner for a flow with a supercritical Reynolds number. A typical velocity of the propagating front (the transition between laminar and turbulent) is about the same as the mean velocity of the flow. Hence, it is reasonable to assume that the turbulence is confined to a small zone at the wave front, since the wave travels faster than the mean velocity. (Author's note: should we insert a graph about the relation between the mean velocity and the wave velocity in the shot sleeve?) Note that the thickness of the transition layer is a monotone increasing function of time (traveling distance).

The Reynolds number in the shot sleeve, based on the diameter, is in a range which means that the boundary layer has not developed much. Therefore, the flow can be assumed to be almost a plug flow, with the exception of the front region. (Author's note: what about the solid layer that is skimmed to "increase" the plunger head, and its effects on the flow?)
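As a rough illustration of the Reynolds numbers involved (the material properties and dimensions below are my own plausible assumptions, not values from the text), the estimate Re = rho*v*D/mu for molten aluminum in a shot sleeve lands well above the 2000-3000 critical range quoted above:

# Illustrative Reynolds-number estimate for the shot sleeve (assumed values only)
rho = 2400.0   # kg/m^3, approximate density of molten aluminum
mu = 1.3e-3    # Pa*s, approximate dynamic viscosity of molten aluminum
D = 0.07       # m, assumed shot-sleeve diameter
v = 0.4        # m/s, assumed slow-stage plunger speed

Re = rho * v * D / mu
print(f"Re = {Re:.3g}")   # ~5e4, far above the critical range of 2000-3000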
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9311309456825256, "perplexity": 516.8929486369796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119646352.2/warc/CC-MAIN-20141024030046-00310-ip-10-16-133-185.ec2.internal.warc.gz"}
https://mran.revolutionanalytics.com/snapshot/2020-01-27/web/packages/scoringRules/vignettes/gettingstarted.html
R package scoringRules: Getting started

2019-08-20

This vignette provides a two-page introduction to the scoringRules package. For a more detailed introduction, see the paper 'Evaluating probabilistic forecasts with scoringRules' (Jordan, Krüger, and Lerch, Journal of Statistical Software, forthcoming), which is available as a further vignette to scoringRules.

Background

Forecasting a continuous random variable, such as temperature (in degrees Celsius) or the inflation rate (in percent), is important in many situations. Since forecasts are usually surrounded by uncertainty, it makes sense to state forecast distributions, rather than point forecasts. The scoringRules package offers mathematical tools to evaluate a forecast distribution, $$F$$, after an outcome $$y$$ has realized. To this end, we use scoring rules which assign a penalty score $$S(y, F)$$ to the realization/forecast pair; a smaller score corresponds to a better forecast. See Gneiting and Katzfuss ('Probabilistic Forecasting', Annual Review of Statistics and Its Application, 2014) for an introduction to the relevant statistical literature.

scoringRules supports two types of forecast distributions, $$F$$: distributions given by a parametric family (such as Normal or Gamma), and distributions given by a simulated sample. Furthermore, the package covers the two most popular scoring rules $$S$$: the logarithmic score (LogS), given by $\text{LogS}(y, F) = -\log f(y),$ where $$f$$ is the density conforming to $$F$$, and the continuous ranked probability score (CRPS), given by $\text{CRPS}(y, F) = \int_{-\infty}^\infty (F(z) - \mathbf{1}(y \le z))^2 dz,$ where $$\mathbf{1}(A)$$ is the indicator function of the event $$A$$. The LogS is typically easy to compute, and we include it mainly for reference purposes. By contrast, the CRPS can be very tricky to compute analytically, and cumbersome to approximate numerically. The scoringRules package implements several new methods to tackle this challenge. Specifically, we include many previously unknown analytical expressions, and incorporate recent findings on how to best compute the CRPS of a simulated forecast distribution.

Example 1: Parametric forecast distribution

Suppose the forecast distribution is $$\mathcal{N}(2, 4)$$, and an outcome of zero realizes. The following piece of code computes the CRPS for this situation:

library(scoringRules)
# CRPS of a normal distribution with mean = standard deviation = 2, outcome is zero
crps(y = 0, family = "normal", mean = 2, sd = 2)
## [1] 1.204883

As documented under ?crps, the parametric family used in the example is one of the following choices currently implemented in scoringRules:

• Normal, Mixture of normals, Two-piece normal, Log-normal, Generalized truncated/censored normal, (Scaled) Student t, Generalized truncated/censored Student t, Laplace, Log-Laplace, Logistic, Log-logistic, Generalized truncated/censored logistic, Exponential, Exponential with point mass, Two-piece exponential, Beta, Gamma, Uniform, Generalized extreme value, Generalized Pareto, Binomial, Hypergeometric, Negative binomial, Poisson

We provide analytical CRPS formulas in Appendix A of the vignette 'Evaluating Probabilistic Forecasts with scoringRules'. Whenever possible, our syntax and parametrization closely follow base R. For distributions not supported by base R, we have created documentation pages with details; see for example ?f2pnorm.
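For readers without R at hand, the analytic CRPS of a normal forecast is easy to reproduce. A small Python sketch (my own, using the well-known closed form CRPS(y, N(mu, sigma^2)) = sigma * [z (2 Phi(z) - 1) + 2 phi(z) - 1/sqrt(pi)] with z = (y - mu)/sigma) recovers the value above:

import math

def crps_normal(y, mean, sd):
    """Closed-form CRPS of a normal forecast N(mean, sd^2) at outcome y."""
    z = (y - mean) / sd
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # standard normal density
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))            # standard normal CDF
    return sd * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

print(crps_normal(0, 2, 2))  # 1.204883..., matching the R output above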
Example 2: Simulated forecast distribution

Via data(gdp_mcmc), the user can load a sample data set with forecasts and realizations for the growth rate of US gross domestic product, a widely regarded economic indicator. The following plot shows the histogram of a simulated forecast distribution for the fourth quarter of 2012. [Histogram figure omitted.]

Let's now compute the CRPS for this simulated forecast distribution:

# Load data
data(gdp_mcmc)
# Get forecast distribution for 2012:Q4
dat <- gdp_mcmc$forecasts[, "X2012Q4"]
# Get realization for 2012:Q4
y <- gdp_mcmc$actuals[, "X2012Q4"]
# Compute CRPS of simulated sample
crps_sample(y = y, dat = dat)
## [1] 0.9058803
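The sample-based computation can be sketched the same way. A common estimator (the standard energy form; my own sketch, not necessarily the exact default used by crps_sample) is CRPS ~ (1/m) sum_i |x_i - y| - (1/(2 m^2)) sum_{i,j} |x_i - x_j|:

import numpy as np

def crps_sample_py(y, dat):
    """CRPS estimate from a forecast sample dat and a scalar outcome y."""
    dat = np.asarray(dat, dtype=float)
    term1 = np.mean(np.abs(dat - y))
    term2 = 0.5 * np.mean(np.abs(dat[:, None] - dat[None, :]))
    return term1 - term2

# A synthetic normal sample approximately reproduces the analytic value above:
rng = np.random.default_rng(1)
print(crps_sample_py(0.0, rng.normal(2, 2, size=2000)))  # near 1.2049, up to sampling noise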
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8263913989067078, "perplexity": 2707.8139114360242}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00343.warc.gz"}
https://scholars.ttu.edu/en/publications/comparison-of-the-z%CE%B3-supsup-jets-to-%CE%B3-jets-cross-sections-in-pp-c
# Comparison of the Z/γ* + jets to γ + jets cross sections in pp collisions at √s = 8 TeV

The CMS collaboration

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

## Abstract

A comparison of the differential cross sections for the processes Z/γ* + jets and photon (γ) + jets is presented. The measurements are based on data collected with the CMS detector at √s = 8 TeV, corresponding to an integrated luminosity of 19.7 fb−1. The differential cross sections and their ratios are presented as functions of pT. The measurements are also shown as functions of the jet multiplicity. Differential cross sections are obtained as functions of the ratio of the Z/γ* pT to the sum of all jet transverse momenta and of the ratio of the Z/γ* pT to the leading jet transverse momentum. The data are corrected for detector effects and are compared to simulations based on several QCD calculations. [Figure not available: see full text.]

Original language: English; Article number: 128; Pages: 1-46; Number of pages: 46; Journal: Journal of High Energy Physics; Volume: 2015; Issue: 10; https://doi.org/10.1007/JHEP10(2015)128; Published: Oct 1 2015

## Keywords

• Beyond Standard Model
• Hadron-Hadron Scattering
• Jets
• Photon production
• QCD
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9949411749839783, "perplexity": 3440.712729825861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363229.84/warc/CC-MAIN-20211206012231-20211206042231-00150.warc.gz"}
https://www2.math.binghamton.edu/p/pow/problem6f21
Problem 6 (due Monday, November 22)

Let $a_1=3/2$ and $a_{n+1}=a_n^2-a_n+1$. Compute the sum $\frac{1}{a_1}+\frac{1}{a_2}+\frac{1}{a_3}+\ldots .$

Two solutions were received: from Ashton Keith and Pluto Wang. Both solvers show that if $a_1=a>1$ then the infinite sum is equal to $1/(a-1)$. Pluto's solution is essentially the same as our original solution. Ashton's solution is based on the same idea, but uses it slightly differently, and some claims are not justified with sufficient rigor. For more details see the following link: Solution.
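A quick numerical check of the claimed value (my own sketch): since $a_{n+1}-1 = a_n(a_n-1)$, we have $\frac{1}{a_n} = \frac{1}{a_n-1} - \frac{1}{a_{n+1}-1}$, so the partial sums telescope to $\frac{1}{a_1-1} - \frac{1}{a_{N+1}-1}$, which tends to $1/(a_1-1) = 2$ for $a_1 = 3/2$.

from fractions import Fraction

a = Fraction(3, 2)
s = Fraction(0)
for _ in range(10):   # a_n grows doubly exponentially, so 10 terms already suffice
    s += 1 / a
    a = a * a - a + 1

print(float(s))       # 2.0 to machine precision: the sum is 1/(a_1 - 1) = 2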
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9287527799606323, "perplexity": 1087.7938691040847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711376.47/warc/CC-MAIN-20221209011720-20221209041720-00307.warc.gz"}
https://aviation.stackexchange.com/questions/34287/how-much-hydraulic-fluid-is-in-a-jet-airliner
# How much hydraulic fluid is in a jet airliner? Hopefully the title is self-explanatory. I am wondering how much hydraulic fluid - in mass and volume - is carried in normal operations for a jet (the bigger the aircraft in your answer the better).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8817226886749268, "perplexity": 740.1851647046816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107890028.58/warc/CC-MAIN-20201025212948-20201026002948-00426.warc.gz"}
http://mathhelpforum.com/calculus/80619-test-convergence.html
Thread: test for convergence

1. test for convergence

Show that the sum of 1 - cos(π/n) from n = 1 to infinity converges or diverges.

Second question: show that the sum of e^(-n/(n+1)) from n = 1 to infinity converges or diverges. I think it converges. Is it okay if I use the comparison test by comparing the series to another series, and can I use the integral test to see if the series being compared converges?

2. Doesn't the second one diverge by the Nth term test (test for divergence), or am I mistaken?

3. Can someone please clearly show/explain the answers to the two questions?

4. #1: We're going to make use of the fact that for all $x \in (0, \pi)$, we have: $0 < 1 - \cos x < \tfrac{1}{2}x^2$. To prove this, consider $f(x) = 1 - \cos x - \tfrac{1}{2}x^2$ and prove that it is a decreasing function (indeed, $f'(x) = \sin x - x < 0$ for $x > 0$). Thus, $f(x) < f(0) = 0 \ \Leftrightarrow \ 1 - \cos x < \tfrac{1}{2}x^2$. So now we have the inequality: $0 < 1 - \cos \tfrac{\pi}{n} < \frac{\pi^2}{2n^2}$ since $\frac{\pi}{n} \in (0, \pi)$ for all $n \geq 2$ (the single term $n = 1$ does not affect convergence). Now use comparison.

#2: Use the divergence test. If $\lim_{n \to \infty} a_n$ is not equal to 0, then the series $\sum a_n$ diverges.

5. thanks thanks

6. How'd you get $\tfrac{1}{2}x^2$?

7. It's simply a fact: $0 < 1 - \cos x < \frac{x^2}{2}$ for $0 < x < \pi$
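A numeric sanity check of both conclusions (my own sketch): the partial sums of 1 - cos(π/n) level off, consistent with the comparison against π²/(2n²), while the terms e^(-n/(n+1)) approach e^(-1), so the second series fails the divergence test.

import math

for N in (10**3, 10**4, 10**5):
    print(N, sum(1 - math.cos(math.pi / n) for n in range(1, N + 1)))
# The printed partial sums stabilize as N grows: the first series converges.

print(math.exp(-1000 / 1001))  # ~0.3679, not 0, so sum of e^(-n/(n+1)) diverges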
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9799875617027283, "perplexity": 388.30851360022734}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542693.41/warc/CC-MAIN-20161202170902-00428-ip-10-31-129-80.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-algebra/121141-product-basis.html
# Math Help - Product Basis? 1. ## Product Basis? I have this theorem: Theorem 33.2: Let $\{e_i \}$ and $\{ e^i \}$ be dual bases for $V$ and $V^*$. Then the set of tensor products: $\{ e_{i_1} \otimes \ldots \otimes e_{i_p} \otimes e^{j_1} \otimes \ldots \otimes e^{j_q}, \ i_1, \ldots , i_p,j_1, \ldots , j_q=1, \ldots , N \}$ forms a basis for $\vartheta_q^p(V)$ (the set of all tensors of order $(p,q)$ on a vector space $V$). The proof starts by proving that the set is linearly independent. It does this as follows: "We shall prove that the set of tensor products is a linearly independent generating set for $\vartheta_q^p(V)$. To prove that the set is linearly independent, let $A^{i_1, \ldots, i_p}_{j_1, \ldots , j_q} e_{i_1} \otimes \ldots \otimes e_{i_p} \otimes e^{j_1} \otimes \ldots \otimes e^{j_q}=0$ where the RHS is the 0 tensor". Firstly, what is $A^{i_1, \ldots, i_p}_{j_1, \ldots , j_q} e_{i_1}$? Is it a linear map going from $V$ to $V^*$ with the bases outlined in the question? Secondly, how will this prove that the set is linearly independent? 2. Originally Posted by Showcase_22 I have this theorem: The proof starts by proving that the set is linearly independent. It does this as follows: "We shall prove that the set of tensor products is a linearly independent generating set for $\vartheta_q^p(V)$. To prove that the set is linearly independent, let $A^{i_1, \ldots, i_p}_{j_1, \ldots , j_q} e_{i_1} \otimes \ldots \otimes e_{i_p} \otimes e^{j_1} \otimes \ldots \otimes e^{j_q}=0$ where the RHS is the 0 tensor". Firstly, what is $A^{i_1, \ldots, i_p}_{j_1, \ldots , j_q} e_{i_1}$? Is it a linear map going from $V$ to $V^*$ with the bases outlined in the question? Secondly, how will this prove that the set is linearly independent? $A^{i_1, \ldots, i_p}_{j_1, \ldots , j_q}$ is a scalar, for each set of indices $\{i_1, \ldots, i_p,j_1, \ldots , j_q\}$. You should read this expression as $\sum A^{i_1, \ldots, i_p}_{j_1, \ldots , j_q}(e_{i_1} \otimes \ldots \otimes e_{i_p} \otimes e^{j_1} \otimes \ldots \otimes e^{j_q})$. The summation convention is being used, which means that you are meant to sum (from 1 to N) over each index that appears both as a superscript and as a subscript. So that expression simply denotes a linear combination of the given tensors. To prove that they are linearly independent, you put a linear combination of them equal to 0, and then you are aiming to show that each coefficient $A^{i_1, \ldots, i_p}_{j_1, \ldots , j_q}$ is zero.
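To complete the linear-independence argument for the smallest nontrivial case (my own worked step, for p = q = 1): a tensor of order (1,1) acts on a (covector, vector) pair, and feeding in dual basis elements isolates each coefficient. Suppose $A^{i}_{j}\, e_i \otimes e^j = 0$ (summation convention). Using $e_i(e^k)=\delta_i^{\,k}$ and $e^j(e_l)=\delta^{j}_{\,l}$,

$0 = \left(A^{i}_{j}\, e_i \otimes e^j\right)(e^k, e_l) = A^{i}_{j}\,\delta_i^{\,k}\,\delta^{j}_{\,l} = A^{k}_{l} \quad \text{for all } k, l = 1, \dots, N,$

so every coefficient vanishes and the products $e_i \otimes e^j$ are linearly independent. The general $(p,q)$ case repeats this with $p$ covectors and $q$ vectors.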
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.967481791973114, "perplexity": 105.92996475733307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
http://universeinproblems.com/index.php/Initial_perturbations_in_the_Universe
# Initial perturbations in the Universe

## Fluctuations power spectrum: non-relativistic approach

### Problem 1
problem id: per16
Construct the correlation function of the Fourier components of the relative density fluctuations which satisfies the cosmological principle.

### Problem 2
problem id: per17
Express the correlation function of the relative density fluctuations through the power spectrum of these fluctuations.

## Quantum fluctuations of fields in inflationary Universe

### Problem 3
problem id: infl_vac_fluc0
Estimate the amplitude of vacuum fluctuations of the free massless scalar quantum field $\varphi(\vec{x},t)$ with characteristic momenta $q$ and frequencies $w_q=q$ on a background Minkowski metric.

### Problem 4
problem id: infl_vac_fluc1
Considering the free massless scalar quantum field as a set of quantum harmonic oscillators, refine the estimate obtained in problem #infl_vac_fluc0.

### Problem 5
problem id:
Representing the inflaton field as a superposition of a uniform scalar field $\varphi_b(t)$ and a small perturbation $\psi(\vec{x},t)$ on the background of the unperturbed FRW metric, obtain the equations of motion of the small perturbation $\psi(\vec{x},t)$, assuming that the action for the perturbation is quadratic.

### Problem 6
problem id:
Perform a qualitative analysis of the equation for small perturbations of the inflaton from the previous problem in the different regimes of inflation.

### Problem 7
problem id: infl_vac_per_app
Estimate the amplitude of vacuum fluctuations at the moment when the cosmological perturbations exit beyond the horizon.

### Problem 8
problem id:
Demonstrate that the inflationary stage provides amplification of vacuum fluctuations of the inflaton field.

### Problem 9
problem id:
Consider the difference in the sequence of events during the evolution of cosmological perturbations in the radiation-dominated stage, or during the stage of domination of nonrelativistic matter, and in inflationary cosmology.

### Problem 10
problem id: field_inf_Mink
Demonstrate that in the slow-roll regime at the beginning of inflation the inflaton field behaves like a massless scalar field in Minkowski space.

### Problem 11
problem id:
What is the initial state of the inflaton quantum field in terms of the creation and annihilation operators?

### Problem 12
problem id:
For modes beyond the horizon at the inflationary stage, obtain a qualitative solution to the equation obtained in problem #field_inf_Mink.

### Problem 13
problem id: inf_chi1
Obtain the exact solution to the equation from problem #field_inf_Mink at the inflationary stage. Consider the case of modes below and beyond the horizon.

### Problem 14
problem id: infl_vac_fluc_hor-p
Demonstrate that the power spectrum $\mathcal{P}_k(\varphi)$ of the modes which cross the horizon is the same as for a free massless scalar field (see problem #infl_vac_fluc1).

### Problem 15
problem id: eq_infPhi
Obtain the equation connecting the gravitational potential $\Phi$, the background inflaton scalar field $\varphi(t)$ and its perturbation $\psi(\vec{x},t)$.

### Problem 16
problem id: inf-z-u
Obtain the equations of evolution of scalar perturbations generated by the inflaton field perturbation in the case when there are no other matter fields in the Universe. What form does the solution take for a mode under the horizon, and what does it imply?

### Problem 17
problem id: inf-z-u1
Construct the solution of the equation obtained in the previous problem for the case of inflation in the slow-roll approximation. Find the spatial curvature of hypersurfaces of constant inflaton field $\mathcal{R}$ in this regime.

### Problem 18
problem id:
Find the power spectrum $\mathcal{P}_\varphi(k)$ of the spatial curvature of hypersurfaces generated by fluctuations of the inflaton field in the comoving reference frame.

### Problem 19
problem id:
Express the amplitude of the scalar perturbations $\Delta_{\mathcal{P}} \equiv \sqrt{|\mathcal{P}_\varphi(k)|}$ generated by fluctuations of the inflaton field through the potential of this field.

### Problem 20
problem id:
Express the amplitude of the scalar perturbations $\Delta_{\mathcal{P}}$ generated by fluctuations of the inflaton field for the potential $V(\varphi)=g\varphi^n$.

### Problem 21
problem id:
Find the power spectrum of the initial perturbations $\mathcal{P}_h^{(T)}$ generated at the inflationary stage.

### Problem 22
problem id:
Construct the relation between the power amplitudes of the primary gravitational and scalar perturbations generated at the inflationary stage.
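As a hedged sketch of where the first problems lead (standard textbook estimates, not the site's official solutions): treating the massless field as a set of oscillators with mode functions $\propto 1/\sqrt{2\omega_q}$, the variance accumulated per logarithmic interval of momenta near $q$ is

$\langle \varphi^2 \rangle = \int \frac{d^3 q}{(2\pi)^3}\,\frac{1}{2q} \quad\Rightarrow\quad \frac{d\langle \varphi^2 \rangle}{d\ln q} = \frac{q^2}{4\pi^2}, \qquad \delta\varphi_q \sim \frac{q}{2\pi},$

and since a mode exits the horizon during inflation when its physical momentum is $q \sim H$, the frozen fluctuations acquire the familiar amplitude $\mathcal{P}_k(\varphi) \simeq (H/2\pi)^2$.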
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 54, "x-ck12": 0, "texerror": 0, "math_score": 0.9034082293510437, "perplexity": 1404.7495523940772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190134.25/warc/CC-MAIN-20170322212950-00535-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.hepdata.net/search/?q=&collaboration=ALICE&page=30
Showing 3 of 293 results

#### Charged-particle multiplicity measurement in proton-proton collisions at $\sqrt{s}=0.9$ and 2.36 TeV with ALICE at LHC

The collaboration Aamodt, K. ; Abel, N. ; Abeysekara, U. ; et al. Eur.Phys.J.C 68 (2010) 89-108, 2010. Inspire Record 852450

Charged-particle production was studied in proton-proton collisions collected at the LHC with the ALICE detector at centre-of-mass energies 0.9 TeV and 2.36 TeV in the pseudorapidity range |$\eta$| < 1.4. In the central region (|$\eta$| < 0.5), at 0.9 TeV, we measure charged-particle pseudorapidity density dNch/deta = 3.02 $\pm$ 0.01 (stat.) $^{+0.08}_{-0.05}$ (syst.) for inelastic interactions, and dNch/deta = 3.58 $\pm$ 0.01 (stat.) $^{+0.12}_{-0.12}$ (syst.) for non-single-diffractive interactions. At 2.36 TeV, we find dNch/deta = 3.77 $\pm$ 0.01 (stat.) $^{+0.25}_{-0.12}$ (syst.) for inelastic, and dNch/deta = 4.43 $\pm$ 0.01 (stat.) $^{+0.17}_{-0.12}$ (syst.) for non-single-diffractive collisions. The relative increase in charged-particle multiplicity from the lower to higher energy is 24.7% $\pm$ 0.5% (stat.) $^{+5.7}_{-2.8}$% (syst.) for inelastic and 23.7% $\pm$ 0.5% (stat.) $^{+4.6}_{-1.1}$% (syst.) for non-single-diffractive interactions. This increase is consistent with that reported by the CMS collaboration for non-single-diffractive events and larger than that found by a number of commonly used models. The multiplicity distribution was measured in different pseudorapidity intervals and studied in terms of KNO variables at both energies. The results are compared to proton-antiproton data and to model predictions.

23 data tables

Measured pseudorapidity dependence of DN/DETARAP for INEL collisions at a centre-of-mass energy of 900 GeV. Measured pseudorapidity dependence of DN/DETARAP for NSD collisions at a centre-of-mass energy of 900 GeV. Measured pseudorapidity dependence of DN/DETARAP for INEL collisions at a centre-of-mass energy of 2360 GeV.

#### Charged-particle multiplicity measurement in proton-proton collisions at $\sqrt{s}=7$ TeV with ALICE at LHC

The collaboration Aamodt, K. ; Abel, N. ; Abeysekara, U. ; et al. Eur.Phys.J.C 68 (2010) 345-354, 2010. Inspire Record 852264

The pseudorapidity density and multiplicity distribution of charged particles produced in proton-proton collisions at the LHC, at a centre-of-mass energy $\sqrt{s} = 7$ TeV, were measured in the central pseudorapidity region |$\eta$| < 1. Comparisons are made with previous measurements at $\sqrt{s}$ = 0.9 TeV and 2.36 TeV. At $\sqrt{s}$ = 7 TeV, for events with at least one charged particle in |$\eta$| < 1, we obtain dNch/deta = 6.01 $\pm$ 0.01 (stat.) $^{+0.20}_{-0.12}$ (syst.). This corresponds to an increase of 57.6% $\pm$ 0.4% (stat.) $^{+3.6}_{-1.8}$% (syst.) relative to collisions at 0.9 TeV, significantly higher than calculations from commonly used models. The multiplicity distribution at 7 TeV is described fairly well by the negative binomial distribution.

6 data tables

Charged-particle pseudorapidity densities at central pseudorapidity (ETRAP from -1.0 to 1.0) for the INEL>0 class of events. Data are also given for the lower energy ALICE data. Relative increase in pseudorapidity density between the different energies. Multiplicity distribution normalized to the bin width in the pseudorapidity region -1.0 to 1.0 for INEL>0 collisions at a centre-of-mass energy of 7000 GeV. See the paper arXiv:1004.3034 for the lower energy data. Note that the statistical as well as the systematic uncertainties are strongly correlated between neighbouring points. See text of paper for details.

#### First proton-proton collisions at the LHC as observed with the ALICE detector: Measurement of the charged particle pseudorapidity density at s**(1/2) = 900-GeV

The collaboration Aamodt, K ; Abel, N ; Abeysekara, U ; et al. Eur.Phys.J.C 65 (2010) 111-125, 2010. Inspire Record 838352

On 23rd November 2009, during the early commissioning of the CERN Large Hadron Collider (LHC), two counter-rotating proton bunches were circulated for the first time concurrently in the machine, at the LHC injection energy of 450 GeV per beam. Although the proton intensity was very low, with only one pilot bunch per beam, and no systematic attempt was made to optimize the collision optics, all LHC experiments reported a number of collision candidates. In the ALICE experiment, the collision region was centred very well in both the longitudinal and transverse directions and 284 events were recorded in coincidence with the two passing proton bunches. The events were immediately reconstructed and analyzed both online and offline. We have used these events to measure the pseudorapidity density of charged primary particles in the central region. In the range |$\eta$| < 0.5, we obtain dNch/deta = 3.10 $\pm$ 0.13 (stat.) $\pm$ 0.22 (syst.) for all inelastic interactions, and dNch/deta = 3.51 $\pm$ 0.15 (stat.) $\pm$ 0.25 (syst.) for non-single diffractive interactions. These results are consistent with previous measurements in proton-antiproton interactions at the same centre-of-mass energy at the CERN SppS collider. They also illustrate the excellent functioning and rapid progress of the LHC accelerator, and of both the hardware and software of the ALICE experiment, in this early start-up phase.

2 data tables

Pseudorapidity dependence of DN/DETARAP in Inelastic (INEL) and Non-Single-Diffractive (NSD) collisions. Note that the plot in the paper shows only statistical errors. Pseudorapidity density for |ETARAP|<0.5 for Inelastic (INEL) and Non-Single-Diffractive (NSD) collisions.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9911998510360718, "perplexity": 3983.7219453812777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057424.99/warc/CC-MAIN-20210923135058-20210923165058-00677.warc.gz"}
http://bestwwws.com/error-function/complementary-error-function-derivative.php
# Complementary Error Function Derivative

## Contents

The inverse error function can be extended to the disk |z| < 1 of the complex plane using the Maclaurin series
$$\operatorname{erf}^{-1}(z)=\sum_{k=0}^{\infty}\frac{c_{k}}{2k+1}\left(\frac{\sqrt{\pi}}{2}z\right)^{2k+1},\qquad c_{0}=1,\quad c_{k}=\sum_{m=0}^{k-1}\frac{c_{m}\,c_{k-1-m}}{(m+1)(2m+1)},$$
and for |z| < 1 we have $\operatorname{erf}\left(\operatorname{erf}^{-1}(z)\right)=z$.

Use the erfc function to replace 1 - erf(x) for greater accuracy when erf(x) is close to 1; this substitution maintains accuracy by avoiding roundoff errors for large values of x (e.g., compute erfc(0.35) directly rather than 1 - erf(0.35)).

Taylor series: the error function is an entire function; it has no singularities (except that at infinity) and its Taylor expansion always converges. It is odd, $\operatorname{erf}(-z)=-\operatorname{erf}(z)$, and for any complex number z it satisfies the conjugate symmetry $\operatorname{erf}(\bar{z})=\overline{\operatorname{erf}(z)}$, where the bar denotes complex conjugation. Another approximation is given by
$$\operatorname{erf}(x)\approx\operatorname{sgn}(x)\sqrt{1-\exp\left(-x^{2}\,\frac{4/\pi+ax^{2}}{1+ax^{2}}\right)}$$
for a suitable small constant a.

Asymptotic expansion: a useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real x is
$$\operatorname{erfc}(x)=\frac{e^{-x^{2}}}{x\sqrt{\pi}}\left[1+\sum_{n=1}^{\infty}(-1)^{n}\frac{(2n-1)!!}{(2x^{2})^{n}}\right].$$
This series diverges for every finite x, and its meaning as an asymptotic expansion is that, for any $N\in\mathbb{N}$, one has
$$\operatorname{erfc}(x)=\frac{e^{-x^{2}}}{x\sqrt{\pi}}\sum_{n=0}^{N-1}(-1)^{n}\frac{(2n-1)!!}{(2x^{2})^{n}}+R_{N}(x),$$
where the remainder $R_{N}(x)$ is comparable in magnitude to the first omitted term. This allows one to choose the fastest approximation suitable for a given application.

The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function. Definite integrals involving erf and erfc are collected in Prudnikov et al. (1990, p. 123, eqns. 2.8.19.8 and 2.8.19.11).

Implementations: Haskell: an erf package exists that provides a typeclass for the error function and implementations for the native (real) floating point types. Java: Apache commons-math provides implementations of erf and erfc for real arguments. IDL: provides both erf and erfc for real and complex arguments. Python: SciPy includes implementations of erf, erfc, erfi, and related functions for complex arguments in scipy.special; a complex-argument erf is also available in arbitrary-precision arithmetic libraries. MATLAB: erfc also accepts an input K representing an integer larger than -2 (number, symbolic number, symbolic variable, symbolic expression, symbolic function, symbolic vector, or symbolic matrix); tall arrays are supported, and the standard normal probability distribution can also be found using the Statistics and Machine Learning Toolbox function normcdf.

[Figure: plots of the integrand exp(-z^2) and erf(z) in the complex plane; positive integer values of Im(f) are shown with thick blue lines, negative integer values with thick red lines.]

References: Cambridge, England: Cambridge University Press, pp. 209-214, 1992. Cody, W. J. (March 1993), "Algorithm 715: SPECFUN - a portable FORTRAN package of special function routines and test drivers", ACM Trans. Math. Software. Prudnikov, A.P.; Brychkov, Yu.A.; Marichev, O.I. (1990). A Course in Modern Analysis, 4th ed. Spanier, J. and Oldham, K.B., "The Error Function and Its Complement" and related chapters, Chs. 40 and 41 in An Atlas of Functions. Weisstein, Eric W., "Erfc", MathWorld--A Wolfram Web Resource.
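To make the asymptotic expansion concrete, here is a short Python sketch (an editorial addition, not part of the original page; the function name and the four-term truncation are arbitrary choices) comparing a few terms of the series against the standard library's math.erfc:

```python
import math

def erfc_asymptotic(x: float, n_terms: int = 4) -> float:
    """Partial sum of erfc(x) ~ exp(-x^2)/(x*sqrt(pi)) * [1 + sum_n (-1)^n (2n-1)!!/(2x^2)^n].

    The series is asymptotic (divergent for fixed x), so only a few terms
    should be used, and only for reasonably large x.
    """
    total, term = 1.0, 1.0
    for n in range(1, n_terms):
        term *= -(2 * n - 1) / (2 * x * x)   # ratio of consecutive terms
        total += term
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * total

for x in (2.0, 3.0, 4.0):
    print(f"x={x}: erfc={math.erfc(x):.6e}, asymptotic={erfc_asymptotic(x):.6e}")
```

Even at x = 2 the four-term truncation agrees with math.erfc to roughly one percent, and the agreement improves rapidly as x grows, which is exactly the behavior the expansion promises.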
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8707320094108582, "perplexity": 3697.652137596442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257651007.67/warc/CC-MAIN-20180324210433-20180324230433-00431.warc.gz"}
http://mathhelpforum.com/algebra/80441-equation-ellipse-standard-form.html
# Math Help - Equation of Ellipse in Standard Form

1. ## Equation of Ellipse in Standard Form

I have to write the equation in standard form of the ellipse with foci (8,0) and (-8,0) if the minor axis has y-intercepts of 2 and -2. I know that the standard form of the equation is x^2/a^2 + y^2/b^2 = 1. This is a horizontal ellipse with center (0,0). The length of the minor axis is 4 units, so I think c = 4 and c^2 = 16. I don't know where to go to find a^2 and b^2. Can you help? Thanks. Joanie

2. Originally Posted by Joanie: I have to write the equation in standard form of the ellipse with foci (8,0) and (-8,0) if the minor axis has y-intercepts of 2 and -2. I know that the standard form of the equation is x^2/a^2 + y^2/b^2 = 1. In any ellipse, the distance between the center of the ellipse and either focus is $c=\sqrt{a^2-b^2}.$ You know $b$ (it is half the length of the minor axis), and you know the distance between the center and the foci. So you can solve for $a^2.$

3. ## Response to Equation of an Ellipse

Thanks. I will do the problem and have you check my answer. Joanie

4. Originally Posted by Joanie: I have to write the equation in standard form of the ellipse with foci (8,0) and (-8,0) if the minor axis has y-intercepts of 2 and -2. I know that the standard form of the equation is x^2/a^2 + y^2/b^2 = 1. This is a horizontal ellipse with center (0,0). The length of the minor axis is 4 units, so I think c = 4 and c^2 = 16. I don't know where to go to find a^2 and b^2. Can you help? Thanks. Joanie

Hi Joanie, $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$. The two foci are (-c, 0) and (+c, 0). So c would have to be 8, wouldn't it? Not 4. The y-intercepts are at (0, b) and (0, -b), so b would have to be 2 and $b^2=4$. Using $c^2=a^2-b^2$, you can easily solve for $a^2$. Now just plug and chug.
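For concreteness, here is the arithmetic the replies leave to the reader, worked out in a short LaTeX block (an editorial completion of the exercise, not part of the original thread):

```latex
c = 8, \qquad b = 2, \qquad a^{2} = b^{2} + c^{2} = 4 + 64 = 68,
\qquad \text{so} \qquad \frac{x^{2}}{68} + \frac{y^{2}}{4} = 1.
```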
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9217727780342102, "perplexity": 382.37303828527774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657132883.65/warc/CC-MAIN-20140914011212-00234-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://math.stackexchange.com/questions/95543/is-there-a-topological-space-which-is-star-compact-but-not-star-countable
# Is there a topological space which is star compact but not star countable?

A topological space $X$ is said to be star compact if whenever $\mathscr{U}$ is an open cover of $X$, there is a compact subspace $K$ of $X$ such that $X = \operatorname{St}(K,\mathscr{U})$. A topological space $X$ is said to be star countable if whenever $\mathscr{U}$ is an open cover of $X$, there is a countable subspace $K$ of $X$ such that $X = \operatorname{St}(K,\mathscr{U})$. We know that there are some topological spaces which are star countable but not star compact. However I don't know whether star compact implies star countable. Is there a topological space which is star compact but not star countable?

Added: $St(K, \mathscr{U})=\cup\{u\in \mathscr{U}: u \cap K \neq \emptyset\}$

- Could you provide a definition of $\operatorname{St}(K,\mathscr{U})$, please? – user20266 Jan 1 '12 at 9:25
$St(K, \mathscr{U})=\cup\{u\in \mathscr{U}: u \cap K \neq \emptyset\}$ – Paul Jan 1 '12 at 9:49
Such a space (if it exists) cannot be normal, because for $T_4$ spaces star compact is equivalent to star finite, so certainly star countable. – Henno Brandsma Jan 1 '12 at 10:46
It also doesn't have countable extent, and hence it can't be countably compact. However, every star compact space is pseudocompact. So the space must be pseudocompact but not countably compact! – Paul Jan 1 '12 at 10:58
The definition of $\operatorname{St}(K, \mathscr{U})$ added at the end of your post does not say what you believe it says. You define a set $V\subseteq\mathscr{U}$ by $V=\{u\in \mathscr{U}: u \cap K \neq \emptyset\}$, then what is $\cup V$... I guess you mean something like $\operatorname{St}(K, \mathscr{U})=\bigcup\limits_{u\in V}u$. – Did Jan 1 '12 at 12:28

Let $X=\Big(\beta\omega_1\times(\omega_2+1)\Big)\setminus\Big((\beta\omega_1\setminus \omega_1)\times\{\omega_2\}\Big)$ as a subspace of $\beta\omega_1\times(\omega_2+1)$; I claim that $X$ is star compact. Let $\mathscr{U}$ be an open cover of $X$. For each $\xi\in\omega_1$ there are $U_\xi\in\mathscr{U}$ and $\alpha_\xi\in\omega_2$ such that $$\langle \xi,\omega_2\rangle\in \{\xi\}\times(\alpha_\xi,\omega_2]\subseteq U_\xi\;.$$ Let $\alpha=\sup_\xi\alpha_\xi<\omega_2$, and let $K=\beta\omega_1\times\{\alpha+1\}$; $K$ is compact, and $$\omega_1\times \{\omega_2\}\subseteq \operatorname{st}(K,\mathscr{U})\;,$$ since $U_\xi\subseteq \operatorname{st}(K,\mathscr{U})$ for each $\xi\in\omega_1$. The ordinal space $\omega_2$ is countably compact, so $\beta\omega_1\times\omega_2$ is countably compact and therefore star finite, and there is a finite $F\subseteq \beta\omega_1\times\omega_2$ such that $\beta\omega_1\times\omega_2\subseteq\operatorname{st}(F,\mathscr{U})$. But then $K\cup F$ is compact, and $\operatorname{st}(K\cup F,\mathscr{U})=X$, as desired.

However, $X$ is not star countable. To see this, let $$\mathscr{U}=\{\beta\omega_1\times\omega_2\}\cup\Big\{\{\xi\}\times(\omega_2+1):\xi\in\omega_1\Big\}\;;$$ $\mathscr{U}$ is certainly an open cover of $X$, but if $C$ is any countable subset of $X$, we can choose $\xi\in\omega_1$ such that $C\cap\big(\{\xi\}\times (\omega_2+1)\big)=\varnothing$, and then $\langle \xi,\omega_2\rangle\notin\operatorname{st}(C,\mathscr{U})$.

This is a modification of Example 2.1 of Yan-Kui Song, On $\mathcal{K}$-Starcompact Spaces, Bull. Malays. Math. Soc. (2) 30(1) (2007), 59-64, which is available as a PDF here.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9708205461502075, "perplexity": 148.71709835673383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246659483.51/warc/CC-MAIN-20150417045739-00199-ip-10-235-10-82.ec2.internal.warc.gz"}
https://www.varsitytutors.com/hotmath/hotmath_help/topics/complement-of-an-event.html
# Complement of an Event

If event $A$ is a subset of a sample space $S$, the complement of $A$ contains the elements of $S$ that are not members of $A$. The symbol for the complement of an event $A$ is $\overline{A}$.

Example 1: Consider the experiment of rolling a single die. $S=\{1,2,3,4,5,6\}$. Let $A$ be the event that the roll yields a number greater than $4$: $A=\{5,6\}$. Then $\overline{A}=\{1,2,3,4\}$.

Remember: $P(S)=1$ and $A\cup \overline{A}=S$, so $P(A\cup \overline{A})=1$. Since $A$ and $\overline{A}$ are mutually exclusive, $P(A\cup \overline{A})=P(A)+P(\overline{A})=1$. Therefore, $P(\overline{A})=1-P(A)$.

Example 2: When it is not raining, the probability of the New Orleans Saints winning a football game is $\frac{7}{10}$. What is the probability of the Saints winning if it is raining? $P(\text{Saints winning with rain}) = 1 - P(\text{Saints winning without rain}) = 1-\frac{7}{10}=\frac{3}{10}$
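The complement rule in Example 1 can be spelled out in a few lines of Python (an illustrative sketch added by the editor; exact fractions are used to avoid floating-point noise):

```python
from fractions import Fraction

# Example 1: one roll of a die, A = "roll is greater than 4"
S = {1, 2, 3, 4, 5, 6}
A = {5, 6}
A_bar = S - A                       # complement of A within S: {1, 2, 3, 4}

p_A = Fraction(len(A), len(S))      # 1/3
p_A_bar = 1 - p_A                   # 2/3, i.e. P(A-bar) = 1 - P(A)
print(sorted(A_bar), p_A, p_A_bar)
```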
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 24, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8329402804374695, "perplexity": 511.7465595963053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645280.4/warc/CC-MAIN-20180317174935-20180317194935-00139.warc.gz"}
http://math.stackexchange.com/questions/138869/find-the-critical-points-of-fx-x2-2x
# Find the critical points of $f(x) = |x^2 - 2x|$

Find the critical points of $f(x) = |x^2 - 2x|$. The only way I was able to evaluate this was to draw the graph of $y = x^2 - 2x$ and, when the parabola was going under the $x$ axis, I mirrored it above the $x$ axis. I observed that the critical points were $x = 0, 1, 2$. How do I evaluate this function properly, using maybe left and right hand side limits or some kind of calculus/differentiation, if I were going to justify my critical points beyond "because they are on my graph"? Perhaps I can justify $0$ and $2$ by factorizing $f(x)$ to $|x(x - 2)|$, but how about $x = 1$?

Generally speaking, you want to write absolute values of functions piecewise to get your hands dirty and solve for critical points. This means we have to find the sign of the function between zeros and apply the definition of absolute value. As you noted, there are zeros at 0 and 2; these are the only two zeros of $x^2-2x$. We can check signs between and outside these points to find that $x^2-2x>0$ if $x>2$ or $x<0$, and $x^2-2x<0$ if $0<x<2$. Therefore our function $f$ may be written $$f(x)=\begin{cases}x^2-2x & \text{if } x\leq 0 \\ 2x-x^2 & \text{if } 0\leq x\leq 2 \\ x^2-2x & \text{if } 2\leq x \end{cases}$$ To find critical points, we must find points where the derivative does not exist or equals zero. In this case, it suffices to differentiate each piece; if the derivatives do not agree at the endpoints, then the derivative does not exist at the endpoint. $$f'(x)=\begin{cases}2x-2 & \text{if } x< 0 \\ \text{undefined} & \text{if } x= 0 \\ 2-2x & \text{if } 0< x< 2 \\ \text{undefined} & \text{if } x= 2 \\ 2x-2 & \text{if } 2< x \end{cases}$$ So we see that the derivative is undefined at $0$ and $2$. It is defined everywhere else, but it is easy to check that the only point where the derivative is zero is $x=1$. (By the way, the comparison of the derivatives to the left and right of $0$, $2$ is the same as evaluating the left-hand derivative and right-hand derivative.) (EDIT: The definition of critical point may vary in what cases we include, but many standard calculus texts include non-existence in the definition for optimization.)

Writing $x^2-2\,x=x(x-2)$, we see that $x^2-2\,x\ge0$ if $x\ge2$ or $x\le0$, and $x^2-2\,x<0$ if $0<x<2$. Then $$f(x)=\begin{cases} x^2-2\,x & \text{if }x\le0,\\ 2\,x-x^2 & \text{if }0<x<2,\\ x^2-2\,x & \text{if }x\ge2, \end{cases}\quad f'(x)=\begin{cases} 2\,x-2 & \text{if }x<0,\\ 2-2\,x & \text{if }0<x<2,\\ 2\,x-2 & \text{if }x>2. \end{cases}$$ $f$ has no derivative at $x=0$ and $x=2$. The usual definition of critical point is a point $x$ such that $f'(x)=0$. With this definition the only critical point is $x=1$, since $f'(1)=0$. It is a local maximum. The points $x=0$ and $x=2$ are not critical (with this definition), since the derivative does not exist at those points. They are however (global) minima, since $f(0)=f(2)=0$ and $f(x)>0$ if $x\ne0,2$.

From all the books I have read, critical points are those points where the original function is defined and the derivative is 0 OR the derivative is undefined. – Graphth Apr 30 '12 at 13:44
Perhaps the distinction is in "calculus" and "real analysis". For example, the calculus books by Larson, Hostetler and Edwards and also Edwards and Penney, and also Varberg, Purcell, and Rigdon all include where the derivative is undefined (as long as the original function is defined).
– Graphth Apr 30 '12 at 13:58
@Graphth I understand that from the point of view of teaching calculus it might be desirable to define a critical point as one where the derivative vanishes or exists and equals 0. But I am more at ease with the "real analysis" definition. – Julián Aguirre Apr 30 '12 at 15:01
Sure, that's fine, but telling someone about the "usual" definition of critical point when the one they usually see is different, at best won't confuse and will do no good, and at worst will confuse. If they take real analysis, they'll learn the "usual" definition. For now, their definition is not the "usual". – Graphth May 1 '12 at 15:39

This solution is different than the others shown. First, consider the simple function $f(x) = |x|$. We want to know $f'(x)$. That's actually pretty simple to do, since $f'(x)$ gives the slope of the tangent line. That is -1 for $x < 0$ and 1 for $x > 0$. At $x = 0$, there is a sharp point in $f(x)$, so $f'(x)$ does not exist. Or, another way to see it, the left hand limit of the difference quotient is -1 and the right hand limit is +1, so the limit itself (the derivative) does not exist. Therefore, a very nice shorthand way of writing the derivative is $$f'(x) = \frac{x}{|x|}.$$ This is -1 when $x$ is negative, undefined when $x$ is 0, and 1 when $x$ is positive, just as I described above. When you have a more complicated function inside the absolute values, just use the chain rule. If $f(x) = |x^2 - 2x|$, then $$f'(x) = \frac{x^2 - 2x}{|x^2 - 2x|} \cdot \frac{d}{dx} (x^2 - 2x) = \frac{x^2 - 2x}{|x^2 - 2x|} (2x - 2) = \frac{2x(x - 1)(x - 2)}{|x^2 - 2x|}.$$ Now, critical points are those where the derivative is 0 or undefined, and the original function is defined. Since the function $f(x)$ is defined for all $x$, find where the derivative is 0 or undefined. Just find the places where the numerator is 0 or the denominator is 0. That's pretty simple here: $x = 0, 1, 2$.
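The piecewise analysis in the answers above can be checked mechanically. Here is a small SymPy sketch (an editorial addition, assuming SymPy is installed) that recovers the interior critical point x = 1 and the mismatched one-sided derivatives at x = 0:

```python
import sympy as sp

x = sp.symbols('x', real=True)
inner = x**2 - 2*x

d_outside = sp.diff(inner, x)    # derivative of x^2 - 2x, valid where x < 0 or x > 2
d_inside  = sp.diff(-inner, x)   # derivative of 2x - x^2, valid where 0 < x < 2

print(sp.solve(d_inside, x))     # [1]  -> the interior critical point
# The one-sided derivatives disagree at x = 0 (and similarly at x = 2),
# so f'(x) does not exist there:
print(d_outside.subs(x, 0), d_inside.subs(x, 0))   # -2 and 2
```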
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9529159069061279, "perplexity": 142.59833980723388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276305.39/warc/CC-MAIN-20160524002116-00113-ip-10-185-217-139.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/what-do-think-is-photon-a-quanta.123874/
# Is the photon a quantum?

1. Jun 16, 2006

### AGBROKO EJOVWOKE

What do you think: is the photon a quantum? It seems true to me that it is, but I wish to gather solid evidence. I was thinking of gathering evidence from the photoelectric effect; please see if you can give me additional insight.

2. Jun 16, 2006

### mathman

The basic argument (Einstein) was that increasing intensity led only to more electrons if they could be emitted. However, if the wavelength was too long to produce photoelectrons, increasing intensity didn't help.
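mathman's argument is captured by Einstein's photoelectric equation, quoted here as standard textbook physics rather than from the thread:

```latex
K_{\max} = h\nu - W
```

If the photon energy $h\nu$ is below the work function $W$ (that is, the wavelength is too long), no photoelectrons are emitted no matter how intense the light; above it, raising the intensity increases only the number of emitted electrons, not their maximum kinetic energy $K_{\max}$. A classical wave picture cannot reproduce this behavior, while the quantum (photon) picture does.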
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9539921283721924, "perplexity": 3680.8827447822873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118552.28/warc/CC-MAIN-20170423031158-00630-ip-10-145-167-34.ec2.internal.warc.gz"}
https://jascoinc.com/applications/microvolume-analysis-dna/
# Micro-volume Analysis of a λ DNA Using a One Drop Accessory

September 12, 2017

## Introduction

For many biological samples, it is much more convenient, cost effective, and efficient to use microvolumes for structural and quantitative studies. The SAF-850 One Drop is a microsampling accessory that enables fluorescence measurements to be obtained with only 5 µL of sample. The highly reproducible baseline allows for either simple spectrum measurements or quantitative analysis of multiple samples. This application note demonstrates the use of the One Drop accessory to obtain fluorescence data and create a calibration curve of λDNA labeled with PicoGreen®.

## Experimental

Measurement Conditions:

| Parameter | Fluorescence Spectra | Calibration |
| --- | --- | --- |
| Excitation Wavelength | 485 nm | 480 nm |
| Emission Wavelength | - | 523 nm |
| Excitation Bandwidth | 10 nm | 20 nm |
| Emission Bandwidth | 10 nm | 20 nm |
| Data Interval | 0.5 nm | - |
| Response Time | 2 sec | 1 sec |
| Scan Speed | 100 nm/min | - |
| Sensitivity | 620 V | 700 V |

## Keywords

FP0010, FP-8200, Fluorescence, Microsampling, SAF-850 One Drop

## Results

The emission spectra of λDNA for a variety of concentrations are shown in Figure 2 and indicate that the maximum emission wavelength is 523 nm. The fluorescence intensity at the maximum emission wavelength (523 nm) was then measured five times for each sample to ensure measurement reproducibility. Table 1 shows these measurement reproducibility results for each sample concentration.

Table 1. Measurement reproducibility results.

| Concentration [ng/mL] | 1 | 2 | 3 | 4 | 5 | Average | SD | CV (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 53.3 | 52.4 | 55.9 | 53.1 | 55.2 | 54.0 | 1.49 | 2.8 |
| 1 | 63.6 | 68.1 | 65.9 | 66.5 | 65.1 | 65.8 | 1.68 | 2.6 |
| 5 | 110.3 | 106.7 | 105.1 | 104 | 110 | 107.2 | 2.86 | 2.7 |
| 10 | 157.6 | 155.5 | 156.1 | 153 | 151.7 | 154.8 | 2.39 | 1.5 |
| 50 | 447.1 | 465.3 | 460.2 | 455.8 | 469 | 459.5 | 8.56 | 1.9 |
| 100 | 865.9 | 856.9 | 848.3 | 850.6 | 853.9 | 855.1 | 6.86 | 0.8 |
| 500 | 3842.7 | 3831 | 3858 | 3828.9 | 3811 | 3834.3 | 17.42 | 0.5 |
| 1000 | 7766.2 | 7992.1 | 7925.3 | 7972.8 | 7805.7 | 7892.4 | 101.15 | 1.3 |

A calibration curve was then created using the average value for each concentration from the reproducibility results in Table 1 and is shown in Figure 3. The equation of the line fit to the calibration curve is y = 7.780·c + 57.5211. The correlation coefficient is 0.9999 and the standard error is 6.819, indicating a good fit.

## Conclusion

This application note demonstrates the measurement reproducibility of the One Drop microsampling accessory by obtaining a calibration curve with good linearity over a wide concentration range.

## Required Products and Software

- FP-8200/8300/8500/8600/8700 Spectrofluorometer
- SAF-850/851 One-Drop Measurement Accessory
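As a cross-check of the calibration line, the following short Python sketch (an editorial addition, not part of the application note) fits the Table 1 averages with numpy.polyfit and reproduces the reported slope and intercept:

```python
import numpy as np

# Averages from Table 1: concentration (ng/mL) vs. mean fluorescence intensity
conc = np.array([0, 1, 5, 10, 50, 100, 500, 1000], dtype=float)
avg  = np.array([54.0, 65.8, 107.2, 154.8, 459.5, 855.1, 3834.3, 7892.4])

slope, intercept = np.polyfit(conc, avg, 1)   # ordinary least-squares line
r = np.corrcoef(conc, avg)[0, 1]
print(f"y = {slope:.3f}*c + {intercept:.4f},  r = {r:.4f}")
# -> approximately y = 7.780*c + 57.52 with r ~ 0.9999, matching the reported fit
```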
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8363661766052246, "perplexity": 3118.573457461187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251799918.97/warc/CC-MAIN-20200129133601-20200129163601-00356.warc.gz"}
https://zbmath.org/?q=an:1175.47050
## Common fixed point theorems for occasionally weakly compatible mappings satisfying a generalized contractive condition. (English) Zbl 1175.47050

The authors give some common fixed point theorems for occasionally weakly compatible mappings in symmetric spaces satisfying a generalized contractive condition, based on a result of G. Jungck and B. E. Rhoades [Fixed Point Theory 7, No. 2, 287–296 (2006); erratum ibid. 9, No. 1, 383–384 (2008; Zbl 1118.47045)].

### MSC:

47H10 Fixed-point theorems
54H25 Fixed-point and coincidence theorems (topological aspects)
47H09 Contraction-type mappings, nonexpansive mappings, $A$-proper mappings, etc.

Zbl 1118.47045
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8500261902809143, "perplexity": 3980.4216793929345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499634.11/warc/CC-MAIN-20230128121809-20230128151809-00483.warc.gz"}
https://www.physicsforums.com/threads/determining-sample-size-needed-to-test-hypothesis.115440/
# Determining sample size needed to test hypothesis

1. Mar 24, 2006

### Orikon

This is a statistics and probability question. I've been trying to figure this out for hours but I'm getting nowhere: I have to test the hypothesis that the true average value of a sample is 6.2. I will accept the null hypothesis with 95% confidence if the true average value is indeed 6.2, or reject the null hypothesis with 95% confidence if the true average is 6.0. I need to calculate the number of samples needed to test the hypothesis with the required accuracy, as well as the cut-off level, assuming a standard deviation of 0.4. I have never done a problem where the average value or sample size wasn't given; any help on how to get started would be greatly appreciated.

2. Mar 24, 2006

### ksinclair13

Can you type in the exact problem? I think I can help you on this, but I need to know the problem word for word first :). Usually, you are given the margin of error for these kinds of problems... Last edited: Mar 24, 2006

3. Mar 24, 2006

### Orikon

Sure: An engineer wants to test the hypothesis that the true average voltage threshold V of a diode is 6.2 Volts as labeled. He will accept the null hypothesis with 95% confidence if the true average voltage is indeed 6.2 Volts, and will reject the null hypothesis with 95% confidence if the true average voltage is instead 6.0 Volts. Calculate the number of diodes that he will need to measure and the cut-off level for the voltage in order to test the hypothesis with the required accuracy. Assume the standard deviation of the diodes is 0.4 V.

4. Mar 24, 2006

### ksinclair13

Thank you :-) Okay, if you were to write out a 95% confidence interval for this, how would you write it? 6.2 +- ?? Think about it. If you reject H0 at 6.0, what value do you think goes in where the question marks are? Last edited: Mar 24, 2006

5. Mar 24, 2006

### Orikon

Ah ok, that would be 0.2, which is the margin of error, right? Then I can solve the equation for the sample size... I got a value of 11 using a one-tailed test. Does that sound right to you, or do you think this is two-tailed? Many thanks :) By the way, I'm still not sure how I would find the cutoff value; any ideas on that?

6. Mar 24, 2006

### ksinclair13

It sounds one-sided, although it doesn't really say. Regardless, I don't think your answer is correct. I think you derived the correct equation, but I think you used the z* value of 1.645 (90% confidence) instead of 1.960 (95% confidence). Perhaps my memory has failed me...

7. Mar 24, 2006

### Orikon

Actually, according to my handy table here, 1.645 is for 95% one-sided confidence; 1.96 is used for two-sided 95% confidence (it corresponds to 97.5%). Anyways, thanks a lot, I can't believe I spent so much time on that lol. As for the cutoff level, what do they mean by that? I would guess 6.0, but that seems too easy...
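The margin-of-error calculation the thread converges on can be written out in a few lines of Python (an editorial sketch using the thread's own numbers; a full two-error-rate design would account for both Type I and Type II error, which the thread does not do):

```python
from math import ceil, sqrt

z_star = 1.645   # one-sided 95% critical value, as settled at the end of the thread
sigma  = 0.4     # assumed standard deviation (volts)
delta  = 0.2     # gap between the hypothesized means: 6.2 V - 6.0 V

n = ceil((z_star * sigma / delta) ** 2)
cutoff = 6.2 - z_star * sigma / sqrt(n)   # one-sided rejection threshold below the null mean

print(n, round(cutoff, 3))   # -> 11 diodes, cutoff of roughly 6.00 V
```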
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9432762265205383, "perplexity": 508.2291266284744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00353-ip-10-171-10-70.ec2.internal.warc.gz"}
http://math.stackexchange.com/users/13130/dave-l-renfro?tab=activity
# Dave L. Renfro less info reputation 1236 bio website location Iowa City, IA, USA age 55 member for 3 years, 1 month seen 18 hours ago profile views 1,750 Primary Mathematical Interests: Study and use of negligible sets, especially involving porosity notions and fractal dimension notions. Application and refinements of the Baire category method for proving existence. Classical point set theory and real function theory. History of nowhere differentiable continuous functions and history of the topics above. My email address has the form "first-name period last-name at YAHOO period COM". # 1,594 Actions Aug27 revised Determine if these correspondences on ${\mathbb Q}$ define functions Made title less tautologous sounding and more specific to question being asked Aug26 comment Book on calculus of several variables. @Bungo: I too like Lang's Calculus of Several Variables (see here), and there has been at least one math StackExchange question about Lang's calculus books -- Would it be fine to use Serge Lang's two Calculus books as textbooks for freshman as Maths major?. Aug25 comment What should I learn next? You might want to look over some of the calculus references I gave in answer to the Mathematics Educators StackExchange question Extremely “hard” books (or handouts) for undergrad studies. Aug25 comment Countable and uncountable sets in Riemann integration I think this is one of those things that you need to correctly and explicitly rephrase. For example, "a set" is singular tense, and since you didn't include any quantifiers in your (final) question, I assumed $i$ was fixed. Also (as another example of apples and oranges), you wrote near the beginning "the number of terms in the sum above ...", but when I look at "the sum above" I see a sum with $n$ terms (the index goes from $i=0$ to $i=n).$ In this regard, see Jonas Meyer's first comment. FYI, the equivalent method of using upper and lower Darboux sums might make more intuitive sense. Aug25 comment Characterization of sets of differentiability I am deleting my first two comments here, and my first comment in my answer, because I think they could be too much of a confusing distraction for others coming to this page in the future, in light of the fact that I used (by accident) the symbol $D(f)$ to denote the complement what you used $D(f)$ to denote. Aug25 comment Countable and uncountable sets in Riemann integration You asked: What is the number of elements in a set $[x_i, \, x_{i+1}]$ as $\delta \rightarrow 0.$ By the Cantor Intersection Theorem, the limit of a strictly decreasing sequence of closed and bounded intervals is a singleton set. That said, I think you're trying to mix apples and oranges here. Aug22 revised Characterization of sets of differentiability added 8239 characters in body Aug22 comment Characterization of sets of differentiability Oops, sorry about the notation mix-up. I did in fact get things backwards from what you originally posted! I've fixed this in the update I'm now posting. Aug21 answered Characterization of sets of differentiability Aug21 comment Countable and uncountable sets in Riemann integration The limit of the number of elements is $c$ (cardinality of the continuum), while the number of elements in the "limiting set" is $1.$ Aug19 revised Is there a theory of integration in elementary terms for definite integrals? A relevant additional reference is added Aug19 comment Assumptions in Word Problems (Calculus) How about this? 
"The volume of a spherical balloon that is being inflated with gas is increasing at the rate of 800 cubic centimeters per minute. How fast is the radius of the balloon increasing at the instant the radius is (a) 30 centimeters and (b) 60 centimeters, assuming the stated rate of increase of the balloon's volume holds for all radii belonging to some open interval containing 30 and 60?" Note this takes care of what is probably a more significant issue, namely the fact that as the pressure in the balloon increases, the density of the air in the balloon increases. Aug18 answered Is there a theory of integration in elementary terms for definite integrals? Aug18 answered Readings on more general/abstract notions of induction related to logic Aug18 comment I need help finding a rigorous Pre-calculus textbook Fairly rigorous is the very widely used (in the U.S., from the late 1960s through the 1970s) Dolciani text Modern Introductory Analysis. Also worth looking at is Olmsted's Prelude to Calculus and Linear Algebra (1968). Aug18 comment I need help finding a rigorous Pre-calculus textbook FYI, I taught 6 precalculus courses in the 1990s using an earlier version of this book: Brown/Robbins, Advanced Mathematics: A Precalculus Course, revised edition, 1987. Aug11 comment where can I get math book reviews? Notices of the American Mathematical Society (freely available) has book reviews. Also, Bulletin of the American Mathematical Society has a lot of book reviews (freely available back to the 1890s). Aug7 comment Jordan Measure and Lebesgue Measure Regarding "no characterisation of Riemann integrable functions", Jordan measure can be used to give a characterization and in fact the pre-Lebesgue characterizations used this: A bounded function $f:[a,b] \rightarrow {\mathbb R}$ is Riemann integrable if and only if for each $\epsilon > 0$ the set of points at which the oscillation of $f$ is greater than or equal to $\epsilon$ has Jordan measure zero. Indeed, it is this characterization that H. J. S. Smith used in showing that certain functions were not Riemann integrable in his 1875 paper where the first Cantor set (essentially) appeared. Aug6 comment Product of Borel $\sigma$-algebras vs Borel $\sigma$-algebra of product See my 5 May 2002 sci.math.research post. I'm not sure if your specific question is answered there (I don't have time right now to check), but even if it doesn't, it should give you enough leads to find an answer. Aug6 comment Elementary ways to calculate the arc length of the Cantor function (and singular function in general) See Richard Brian Darst, Some Cantor sets and Cantor functions, Mathematics Magazine 45 #1 (January 1972), 2-7. For a summary of this paper, see item #11 in my Bibliography for Singular Functions.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9254978895187378, "perplexity": 522.7660085367786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835505.87/warc/CC-MAIN-20140820021355-00026-ip-10-180-136-8.ec2.internal.warc.gz"}
https://worldwidescience.org/topicpages/c/coherent+velocity+fields.html
#### Sample records for coherent velocity fields

1. Coherent hybrid electromagnetic field imaging Science.gov (United States) Cooke, Bradly J [Jemez Springs, NM]; Guenther, David C [Los Alamos, NM] 2008-08-26 An apparatus and corresponding method for coherent hybrid electromagnetic field imaging of a target, where an energy source is used to generate a propagating electromagnetic beam, an electromagnetic beam splitting means to split the beam into two or more coherently matched beams of about equal amplitude, and where the spatial and temporal self-coherence between each two or more coherently matched beams is preserved. Two or more differential modulation means are employed to modulate each two or more coherently matched beams with a time-varying polarization, frequency, phase, and amplitude signal. An electromagnetic beam combining means is used to coherently combine said two or more coherently matched beams into a coherent electromagnetic beam. One or more electromagnetic beam controlling means are used for collimating, guiding, or focusing the coherent electromagnetic beam. One or more apertures are used for transmitting and receiving the coherent electromagnetic beam to and from the target. A receiver is used that is capable of square-law detection of the coherent electromagnetic beam. A waveform generator is used that is capable of generation and control of time-varying polarization, frequency, phase, or amplitude modulation waveforms and sequences. A means of synchronizing time-varying waveforms is used between the energy source and the receiver. Finally, a means of displaying the images created by the interaction of the coherent electromagnetic beam with the target is employed.

2. Velocities of Auroral Coherent Echoes at 12 and 144 MHz Science.gov (United States) Koustov, A. V.; Danskin, D. W.; Makarevitch, R. A.; Uspensky, M. V.; Janhunen, P.; Nishitani, N.; Nozawa, N.; Lester, M.; Milan, S. Two Doppler coherent radar systems are currently working at Hankasalmi, Finland, the STARE and CUTLASS radars operating at 144 MHz and 12 MHz, respectively. The STARE beam 3 is nearly co-located with the CUTLASS beam 5, providing an opportunity for echo velocity comparison along the same direction but at significantly different radar frequencies. In this study we consider one event when STARE radar echoes are detected at the same ranges as CUTLASS radar echoes. The observations are complemented by EISCAT measurements of the ionospheric electric field and electron density behavior at one range of 900 km. Two separate situations are studied; for the first one, CUTLASS observed F-region echoes (including the range of the EISCAT measurements) while for the second one CUTLASS observed E-region echoes. In both cases STARE E-region measurements were available. We show that F-region CUTLASS velocities agree well with the convection component along the CUTLASS radar beam while STARE velocities are sometimes smaller by a factor of 2-3. For the second case, STARE velocities are found to be either smaller or larger than CUTLASS velocities, depending on range. Plasma physics of E- and F-region irregularities is discussed in an attempt to explain the inferred relationship between the various velocities. Special attention is paid to ionospheric refraction, which is important for the detection of 12-MHz echoes.

3. Perturbative coherence in field theory International Nuclear Information System (INIS) Aldrovandi, R.; Kraenkel, R.A.
1987-01-01 A general condition for coherent quantization by perturbative methods is given, because the basic field equations of a field theory are not always derivable from a Lagrangian. It is seen that non-Lagrangian models may have well defined vertices, provided they satisfy what they call the 'coherence condition', which is less stringent than the condition for the existence of a Lagrangian. They note that Lagrangian theories are perturbatively coherent, in the sense that they have well defined vertices, and that they satisfy that condition automatically. (G.D.F.) [pt

4. Coherence measures in automatic time-migration velocity analysis International Nuclear Information System (INIS) Maciel, Jonathas S; Costa, Jessé C; Schleicher, Jörg 2012-01-01 Time-migration velocity analysis can be carried out automatically by evaluating the coherence of migrated seismic events in common-image gathers (CIGs). The performance of gradient methods for automatic time-migration velocity analysis depends on the coherence measures used as the objective function. We compare the results of four different coherence measures: conventional semblance, differential semblance, an extended differential semblance using differences of more distant image traces, and the product of the latter with conventional semblance. In our numerical experiments, the objective functions based on conventional semblance and on the product of conventional semblance with extended differential semblance provided the best velocity models, as evaluated by the flatness of the resulting CIGs. The method can be easily extended to anisotropic media. (paper)

5. Coherent control of the group velocity in a dielectric slab doped with duplicated two-level atoms Science.gov (United States) Ziauddin; Chuang, You-Lin; Lee, Ray-Kuang; Qamar, Sajid 2016-01-01 Coherent control of reflected and transmitted pulses is investigated theoretically through a slab doped with atoms in a duplicated two-level configuration. When a strong control field and a relatively weak probe field are employed, coherent control of the group velocity is achieved via changing the phase shift ϕ between control and probe fields. Furthermore, the peak values in the delay time of the reflected and transmitted pulses are also studied by varying the phase shift ϕ.

6. Complete destructive interference of partially coherent fields NARCIS (Netherlands) Gbur, G.J.; Visser, T.D.; Wolf, E. 2004-01-01 A three-point source model is used to study the interference of wavefields which are mutually partially coherent. It is shown that complete destructive interference of the fields is possible in such a "three-pinhole interferometer" even if the sources are not fully coherent with respect to each other.

7. Neutron stars velocities and magnetic fields Science.gov (United States) Paret, Daryel Manreza; Martinez, A. Perez; Ayala, Alejandro.; Piccinelli, G.; Sanchez, A. 2018-01-01 We study a model that explains neutron star velocities due to the anisotropic emission of neutrinos. Strong magnetic fields present in neutron stars are the source of the anisotropy in the system. To compute the velocity of the neutron star we model its core as composed of strange quark matter and analyze the properties of a magnetized quark gas at finite temperature and density.
Specifically, we have obtained the electron polarization and the specific heat of magnetized fermions as functions of the temperature, chemical potential, and magnetic field, which allow us to study the velocity of the neutron star as a function of these parameters.

8. Controlling Casimir force via coherent driving field Science.gov (United States) 2016-04-01 A four-level atom-field configuration is used to investigate the coherent control of the Casimir force between two identical plates made up of chiral atomic media and separated by a vacuum of width d. The electromagnetic chirality-induced negative refraction is obtained via atomic coherence. The behavior of the Casimir force is investigated using the Casimir-Lifshitz formula. It is noticed that the Casimir force can be switched from repulsive to attractive and vice versa via coherent control of the driving field. This switching feature provides new possibilities of using the repulsive Casimir force in the development of new emerging technologies, such as micro-electro-mechanical and nano-electro-mechanical systems, i.e., MEMS and NEMS, respectively.

9. Jovian cloud structure and velocity fields International Nuclear Information System (INIS) Mitchell, J.L.; Terrile, R.J.; Collins, S.A.; Smith, B.A.; Muller, J.P.; Ingersoll, A.P.; Hunt, G.E.; Beebe, R.F. 1979-01-01 A regional comparison of the cloud structures and velocity fields (meridional as well as zonal velocities) in the jovian atmosphere (scales > 200 km) as observed by the Voyager 1 imaging system is given. It is shown that although both hemispheres of Jupiter show similar patterns of diminishing and alternating eastward and westward jets as one progresses polewards, there is a pronounced asymmetry in the structural appearance of the two hemispheres. (UK)

10. Uncondensed atoms in the regime of velocity-selective coherent population trapping International Nuclear Information System (INIS) Il’ichov, L. V.; Tomilin, V. A. 2016-01-01 We consider the model of a Bose condensate in the regime of velocity-selective coherent population trapping. As a result of interaction between particles, some fraction of atoms is outside the condensate, remaining in the coherent trapping state. These atoms are involved in brief events of intense interaction with external resonant electromagnetic fields. Intense induced and spontaneous transitions are accompanied by the exchange of momenta between atoms and radiation, which is manifested as migration of atoms in the velocity space. The rate of such migration is calculated. A nonlinear kinetic equation for the many-particle statistical operator for uncondensed atoms is derived under the assumption that correlations of atoms with different momenta are insignificant. The structure of its steady-state solution leads to certain conclusions about the above-mentioned migration pattern taking the Bose statistics into consideration. With allowance for statistical effects, we derive nonlinear integral equations for the frequencies controlling the migration. The results of numerical solution of these equations are represented in the weak interatomic interaction approximation.

11. The Musca cloud: A 6 pc-long velocity-coherent, sonic filament Science.gov (United States) Hacar, A.; Kainulainen, J.; Tafalla, M.; Beuther, H.; Alves, J. 2016-03-01 Filaments play a central role in the molecular clouds' evolution, but their internal dynamical properties remain poorly characterized.
To further explore the physical state of these structures, we have investigated the kinematic properties of the Musca cloud. We have sampled the main axis of this filamentary cloud in 13CO and C18O (2-1) lines using APEX observations. The different line profiles in Musca show that this cloud presents a continuous and quiescent velocity field along its ~6.5 pc of length. With internal gas kinematics dominated by thermal motions (i.e. σ_NT/c_s ≲ 1) and large-scale velocity gradients, these results reveal Musca as the longest velocity-coherent, sonic-like object identified so far in the interstellar medium. The transonic properties of Musca present a clear departure from the predicted supersonic velocity dispersions expected in Larson's velocity dispersion-size relationship, and constitute the first observational evidence of a filament fully decoupled from the turbulent regime over multi-parsec scales. This publication is based on data acquired with the Atacama Pathfinder Experiment (APEX). APEX is a collaboration between the Max-Planck-Institut fuer Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory (ESO programme 087.C-0583). The reduced datacubes as FITS files are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/587/A97

12. Temporal Changes of the Photospheric Velocity Fields Czech Academy of Sciences Publication Activity Database Klvaňa, Miroslav; Švanda, Michal; Bumba, Václav 2005-01-01 Roč. 29, č. 1 (2005), s. 89-98 ISSN 0351-2657. [Hvar astrophysical colloquium /7./: Solar activity cycle and global phenomena. Hvar, 20.09.2004-24.09.2004] R&D Projects: GA ČR GA205/04/2129 Institutional research plan: CEZ:AV0Z10030501 Keywords: Solar photosphere * velocity fields * tidal waves Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics

13. Coherent polarization driven by external electromagnetic fields International Nuclear Information System (INIS) Apostol, M.; Ganciu, M. 2010-01-01 The coherent interaction of the electromagnetic radiation with an ensemble of polarizable, identical particles with two energy levels is investigated in the presence of external electromagnetic fields. The coupled non-linear equations of motion are solved in the stationary regime and in the limit of small coupling constants. It is shown that an external electromagnetic field may induce a macroscopic occupation of both the energy levels of the particles and the corresponding photon states, governed by a long-range order of the quantum phases of the internal motion (polarization) of the particles. A lasing effect is thereby obtained, controlled by the external field. Its main characteristics are estimated for typical atomic matter and atomic nuclei. For atomic matter the effect may be considerable (for usual external fields), while for atomic nuclei the effect is extremely small (practically insignificant), due to the great disparity in the coupling constants. In the absence of the external field, the solution, which is non-analytic in the coupling constant, corresponds to a second-order phase transition (super-radiance), which was previously investigated.

14. Resonant coherent ionization in grazing ion/atom-surface collisions at high velocities Energy Technology Data Exchange (ETDEWEB) Garcia de Abajo, F J [Dept. de Ciencias de la Computacion e Inteligencia Artificial, Facultad de Informatica, Univ.
del Pais Vasco, San Sebastian (Spain); Pitarke, J M [Materia Kondentsatuaren Fisika Saila, Zientzi Fakultatea, Euskal Herriko Univ., Bilbo (Spain) 1994-05-01 The resonant coherent interaction of a fast ion/atom with an oriented crystal surface under grazing incidence conditions is shown to contribute significantly to ionize the probe for high enough velocities and motion along a random direction. The dependence of this process on both the distance to the surface and the velocity of the projectile is studied in detail. We focus on the case of hydrogen moving with a velocity above 2 a.u. Comparison with other mechanisms of charge transfer, such as capture from inner shells of the target atoms, permits us to draw some conclusions about the charge state of the outgoing projectiles. (orig.) 15. Resonant coherent ionization in grazing ion/atom-surface collisions at high velocities International Nuclear Information System (INIS) Garcia de Abajo, F.J.; Pitarke, J.M. 1994-01-01 The resonant coherent interaction of a fast ion/atom with an oriented crystal surface under grazing incidence conditions is shown to contribute significantly to ionize the probe for high enough velocities and motion along a random direction. The dependence of this process on both the distance to the surface and the velocity of the projectile is studied in detail. We focus on the case of hydrogen moving with a velocity above 2 a.u. Comparison with other mechanisms of charge transfer, such as capture from inner shells of the target atoms, permits us to draw some conclusions about the charge state of the outgoing projectiles. (orig.) 16. Completeness for coherent states in a magnetic–solenoid field International Nuclear Information System (INIS) Bagrov, V G; Gavrilov, S P; Gitman, D M; Górska, K 2012-01-01 This paper completes our study of coherent states in the so-called magnetic–solenoid field (a collinear combination of a constant uniform magnetic field and Aharonov–Bohm solenoid field) presented in Bagrov et al (2010 J. Phys. A: Math. Theor. 43 354016, 2011 J. Phys. A: Math. Theor. 44 055301). Here, we succeeded in proving nontrivial completeness relations for non-relativistic and relativistic coherent states in such a field. In addition, we solve here the relevant Stieltjes moment problem and present a comparative analysis of our coherent states and the well-known, in the case of pure uniform magnetic field, Malkin–Man’ko coherent states. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Coherent states: mathematical and physical aspects’. (paper) 17. Linear algebraic theory of partial coherence: discrete fields and measures of partial coherence. Science.gov (United States) Ozaktas, Haldun M; Yüksel, Serdar; Kutay, M Alper 2002-08-01 A linear algebraic theory of partial coherence is presented that allows precise mathematical definitions of concepts such as coherence and incoherence. This not only provides new perspectives and insights but also allows us to employ the conceptual and algebraic tools of linear algebra in applications. We define several scalar measures of the degree of partial coherence of an optical field that are zero for full incoherence and unity for full coherence. The mathematical definitions are related to our physical understanding of the corresponding concepts by considering them in the context of Young's experiment. 18. 
A stochastic differential equation framework for the turbulent velocity field DEFF Research Database (Denmark) Barndorff-Nielsen, Ole Eiler; Schmiegel, Jürgen We discuss a stochastic differential equation, as a modelling framework for the turbulent velocity field, that is capable of capturing basic stylized facts of the statistics of velocity increments. In particular, we focus on the evolution of the probability density of velocity increments... 19. Inverse Doppler shift and control field as coherence generators for the stability in superluminal light Science.gov (United States) Ghafoor, Fazal; Bacha, Bakht Amin; Khan, Salman 2015-05-01 A gain-based four-level atomic medium for the stability in superluminal light propagation using control field and inverse Doppler shift as coherence generators is studied. In regimes of weak and strong control field, a broadband and multiple controllable transparency windows are, respectively, identified with significantly enhanced group indices. The observed Doppler effect for the class of high atomic velocity of the medium is counterintuitive in comparison to the effect of the class of low atomic velocity. The intensity of each of the two pump fields is kept less than the optimum limit reported in [M. D. Stenner and D. J. Gauthier, Phys. Rev. A 67, 063801 (2003), 10.1103/PhysRevA.67.063801] for stability in the superluminal light pulse. Consequently, superluminal stable domains with the generated coherence are explored. 20. Dependence on relative magnitude of probe and coherent field the condition Ω ≫ G. Here, by using the exact analytical expressions of ... The presence of rotational and vibrational states makes the study of LWI/AWI ... Doppler free condition, keeping the absorption on the coherent field minimum. Here ... where Ec and Ep are the electric field for the coupling and probe fields respectively. 1. Investigation of optical currents in coherent and partially coherent vector fields DEFF Research Database (Denmark) Angelsky, O. V.; Gorsky, M. P.; Maksimyak, P. P. 2011-01-01 We present the computer simulation results of the spatial distri-bution of the Poynting vector and illustrate motion of micro and nanopar-ticles in spatially inhomogeneously polarized fields. The influence of phase relations and the degree of mutual coherence of superimposing waves...... by polarization characteristics of an optical field alone, using nanoscale me-tallic particles has been shown experimentally.... 2. Effects of spontaneously generated coherence on the group velocity in a V system International Nuclear Information System (INIS) Bai Yanfeng; Guo Hong; Han, Dingan; Sun Hui 2005-01-01 We show how the application of an incoherent pumping can produce a variety of effects on the propagation of a weak electromagnetic pulse in a V system with spontaneously generated coherence (SGC). There exists an incoherent pumping rate which makes the group velocity reach the extremum near the region of two-photon resonant excitation. The existence of SGC is just the cause for the occurrence of the extremum, and it may also be regarded as a knob which can be used to manipulate light propagation from subluminal to superluminal 3. 
Patch near field acoustic holography based on particle velocity measurements DEFF Research Database (Denmark) Zhang, Yong-Bin; Jacobsen, Finn; Bi, Chuan-Xing 2009-01-01 Patch near field acoustic holography (PNAH) based on sound pressure measurements makes it possible to reconstruct the source field near a source by measuring the sound pressure at positions on a surface that is comparable in size to the source region of concern. Particle velocity is an alternative...... examines the use of particle velocity as the input of PNAH. Because the particle velocity decays faster toward the edges of the measurement aperture than the pressure does and because the wave number ratio that enters into the inverse propagator from pressure to velocity amplifies high spatial frequencies... 4. Turbulent structure of thermal plume. Velocity field International Nuclear Information System (INIS) Guillou, B.; Brahimi, M.; Doan-kim-son 1986-01-01 An experimental investigation and a numerical study of the dynamics of a turbulent plume rising from a strongly heated source are described. This type of flow is met in thermal effluents (air, vapor) from, e.g., cooling towers of thermal power plants. The mean and fluctuating values of the vertical component of the velocity were determined using a Laser-Doppler anemometer. The measurements allow us to distinguish three regions in the plume: a developing region near the source, an intermediate region, and a self-preserving region. The characteristics of each zone have been determined. In the self-preserving zone, especially, the turbulence level on the axis and the entrainment coefficient are almost twice the values observed in jets. The numerical model proposed takes into account an important phenomenon, the intermittency, observed in the plume. This model, established with the self-preserving hypothesis, brings out analytical laws. These laws and the predicted velocity profile are in agreement with the experimental evolutions [fr 5. High-flow-velocity and shear-rate imaging by use of color Doppler optical coherence tomography NARCIS (Netherlands) van Leeuwen, T. G.; Kulkarni, M. D.; Yazdanfar, S.; Rollins, A. M.; Izatt, J. A. 1999-01-01 Color Doppler optical coherence tomography (CDOCT) is capable of precise velocity mapping in turbid media. Previous CDOCT systems based on the short-time Fourier transform have been limited to maximum flow velocities of the order of tens of millimeters per second. We describe a technique, based on 6. Teleportation of atomic states with a weak coherent cavity field Institute of Scientific and Technical Information of China (English) Zheng Shi-Biao 2005-01-01 A scheme is proposed for the teleportation of an unknown atomic state. The scheme is based on the resonant interaction of atoms with a coherent cavity field. The mean photon-number of the cavity field is much smaller than one and thus the cavity decay can be effectively suppressed. Another advantage of the scheme is that only one cavity is required. 7. Variational multi-valued velocity field estimation for transparent sequences DEFF Research Database (Denmark) Ramírez-Manzanares, Alonso; Rivera, Mariano; Kornprobst, Pierre 2011-01-01 Motion estimation in sequences with transparencies is an important problem in robotics and medical imaging applications. In this work we propose a variational approach for estimating multi-valued velocity fields in transparent sequences. Starting from existing local motion estimators, we derive...
a variational model for integrating in space and time such local information in order to obtain a robust estimation of the multi-valued velocity field. With this approach, we can indeed estimate multi-valued velocity fields which are not necessarily piecewise constant on a layer – each layer can evolve... 8. Magnetic and Velocity Field Variations in the Active Regions NOAA ... Abstract. We study the magnetic and velocity field evolution in the two magnetically complex active regions NOAA 10486 and NOAA 10488 observed during October–November 2003. We have used the available data to examine net flux and Doppler velocity time profiles to identify changes associated with evolutionary and ... 9. The geostrophic velocity field in shallow water over topography Science.gov (United States) Charnock, Henry; Killworth, Peter D. 1998-01-01 A recent note (Hopkins, T.S., 1996. A note on the geostrophic velocity field referenced to a point. Continental Shelf Research 16, 1621-1630) suggests a method for evaluating absolute pressure gradients in stratified water over topography. We demonstrate that this method requires no along-slope bottom velocity, in contradiction to what is usually observed, and that mass is not conserved. 10. The velocity field induced by a helical vortex tube DEFF Research Database (Denmark) Fukumoto, Y.; Okulov, Valery 2005-01-01 The influence of finite-core thickness on the velocity field around a vortex tube is addressed. An asymptotic expansion of the Biot-Savart law is made to a higher order in a small parameter, the ratio of core radius to curvature radius, which consists of the velocity field due to lines of monopoles and dipoles arranged on the centerline of the tube. The former is associated with an infinitely thin core and is featured by the circulation alone. The distribution of vorticity in the core reflects on the strength of dipole. This result is applied to a helical vortex tube, and the induced velocity due... 11. Coherent field propagation between tilted planes. Science.gov (United States) Stock, Johannes; Worku, Norman Girma; Gross, Herbert 2017-10-01 Propagating electromagnetic light fields between nonparallel planes is of special importance, e.g., within the design of novel computer-generated holograms or the simulation of optical systems. In contrast to the extensively discussed evaluation between parallel planes, the diffraction-based propagation of light onto a tilted plane is more burdensome, since discrete fast Fourier transforms cannot be applied directly. In this work, we propose a quasi-fast algorithm (O(N^3 log N)) that deals with this problem. Based on a proper decomposition into three rotations, the vectorial field distribution is calculated on a tilted plane using the spectrum of plane waves. The algorithm works on equidistant grids, so neither nonuniform Fourier transforms nor an explicit complex interpolation is necessary. The proposed algorithm is discussed in detail and applied to several examples of practical interest. 12. Wide-field absolute transverse blood flow velocity mapping in vessel centerline Science.gov (United States) Wu, Nanshou; Wang, Lei; Zhu, Bifeng; Guan, Caizhong; Wang, Mingyi; Han, Dingan; Tan, Haishu; Zeng, Yaguang 2018-02-01 We propose a wide-field absolute transverse blood flow velocity measurement method in vessel centerline based on absorption intensity fluctuation modulation effect.
The difference between the light absorption capacities of red blood cells and background tissue under low-coherence illumination is utilized to realize the instantaneous and average wide-field optical angiography images. The absolute fuzzy connection algorithm is used for vessel centerline extraction from the average wide-field optical angiography. The absolute transverse velocity in the vessel centerline is then measured by a cross-correlation analysis according to instantaneous modulation depth signal. The proposed method promises to contribute to the treatment of diseases, such as those related to anemia or thrombosis. 13. Measuring average angular velocity with a smartphone magnetic field sensor Science.gov (United States) Pili, Unofre; Violanda, Renante 2018-02-01 The angular velocity of a spinning object is, by standard, measured using a device called a tachometer. However, by directly using it in a classroom setting, the activity is likely to appear as less instructive and less engaging. Indeed, some alternative classroom-suitable methods for measuring angular velocity have been presented. In this paper, we present a further alternative that is smartphone-based, making use of the real-time magnetic field (simply called B-field in what follows) data gathering capability of the B-field sensor of the smartphone device as the timer for measuring average rotational period and average angular velocity. The in-built B-field sensor in smartphones has already found a number of uses in undergraduate experimental physics. For instance, in elementary electrodynamics, it has been used to explore the well-known Biot-Savart law and in a measurement of the permeability of air. 14. Recent progress on the unified theory of polarization and coherence for stochastic electromagnetic fields DEFF Research Database (Denmark) Wang, Wei; Zhao, Juan; Hu, Xiaoying 2017-01-01 All optical fields undergo random fluctuation and the underlying theory referred to as coherence and polarization of optical fields has played a fundamental role as an important manifestation of the random fluctuations of the electric fields. In this paper, we reviewed our recent theoretical and experimental work on the unified theory of polarization and coherence including coherence tensor wave, degree of coherence tensor, degree of generalized Stokes parameters, and their applications including coherence tensor holography and two-point resolution of polarimetric imaging.... 15. POLARIZED LINE FORMATION IN NON-MONOTONIC VELOCITY FIELDS Energy Technology Data Exchange (ETDEWEB) Sampoorna, M.; Nagendra, K. N., E-mail: [email protected], E-mail: [email protected] [Indian Institute of Astrophysics, Koramangala, Bengaluru 560034 (India) 2016-12-10 For a correct interpretation of the observed spectro-polarimetric data from astrophysical objects such as the Sun, it is necessary to solve the polarized line transfer problems taking into account a realistic temperature structure, the dynamical state of the atmosphere, a realistic scattering mechanism (namely, the partial frequency redistribution—PRD), and the magnetic fields. In a recent paper, we studied the effects of monotonic vertical velocity fields on linearly polarized line profiles formed in isothermal atmospheres with and without magnetic fields. However, in general the velocity fields that prevail in dynamical atmospheres of astrophysical objects are non-monotonic.
Stellar atmospheres with shocks, multi-component supernova atmospheres, and various kinds of wave motions in solar and stellar atmospheres are examples of non-monotonic velocity fields. Here we present studies on the effect of non-relativistic non-monotonic vertical velocity fields on the linearly polarized line profiles formed in semi-empirical atmospheres. We consider a two-level atom model and PRD scattering mechanism. We solve the polarized transfer equation in the comoving frame (CMF) of the fluid using a polarized accelerated lambda iteration method that has been appropriately modified for the problem at hand. We present numerical tests to validate the CMF method and also discuss the accuracy and numerical instabilities associated with it. 16. Anomalous cross-field velocities in a CIV laboratory experiment International Nuclear Information System (INIS) Axnaes, I. 1988-10-01 The axial and radial ion velocities and the electron radial velocity are determined in a coaxial plasma gun operated under critical velocity conditions. The particle velocities are determined from probe measurements together with He I 3889 Å absolute intensity measurements and the consideration of the total momentum balance of the current sheet. The ions are found to move axially, and the electrons radially, much faster than predicted by the E×B drift in the macroscopic fields. These results agree with what can be expected from the instability processes which have earlier been proposed to operate in these experiments. It is therefore a direct experimental demonstration that instability processes have to be invoked not only for the electron heating, but also to explain the macroscopic velocities and currents. (author) 17. Dual-beam optical coherence tomography system for quantification of flow velocity in capillary phantoms Science.gov (United States) Daly, S. M.; Silien, C.; Leahy, M. J. 2012-03-01 The quantification of (blood) flow velocity within the vasculature has potent diagnostic and prognostic potential. Assessment of flow irregularities in the form of increased permeability (micro haemorrhaging), the presence of avascular areas, or conversely the presence of vessels with enlarged or increased tortuosity in the acral regions of the body may provide a means of non-invasive in vivo assessment. If assessment of dermal flow dynamics were performed in a routine manner, the existence and prevalence of ailments such as diabetes mellitus, psoriatic arthritis and Raynaud's condition may be confirmed prior to clinical suspicion. This may prove advantageous in cases wherein the efficacy of a prescribed treatment is dictated by a prompt diagnosis and to alleviate patient discomfort through early detection. Optical Coherence Tomography (OCT) is an imaging modality which utilises the principle of optical interferometry to distinguish between spatial changes in refractive index within the vasculature and thus formulate a multi-dimensional representation of the structure of the epi- and dermal skin layers. The use of the Doppler functionality has been the predominant force for the quantification of moving particles within media, elucidated via estimation of the phase shift in OCT A-scans. However, the theoretical formulation for the assessment of these phase shifts dictates that the angle between the incident light source and the vessel under question be known a priori; this may be achieved via excisional biopsy of the tissue segment in question, but is counter to the non-invasive premise of the OCT technique.
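The angular dependence noted at the end of the preceding abstract can be made explicit with the standard phase-resolved Doppler OCT relation, sketched below; the relation itself is textbook material rather than the cited system's implementation, and all parameter values are illustrative assumptions:

    import numpy as np

    # v = lambda0 * delta_phi / (4 * pi * n * T * cos(theta)), where theta is
    # the angle between the probe beam and the flow direction; the cos(theta)
    # factor is why the beam-vessel angle must be known a priori.
    def doppler_velocity(delta_phi, wavelength=1.3e-6, n=1.38,
                         t_ascan=1e-4, theta_deg=80.0):
        """Flow speed (m/s) from the phase shift (rad) between A-scans."""
        return wavelength * delta_phi / (
            4 * np.pi * n * t_ascan * np.cos(np.radians(theta_deg)))

    print(doppler_velocity(np.pi / 2))  # ~6.8 mm/s for these assumed values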
To address the issue of angular dependence, an alternate means of estimating absolute flow velocity is presented. The design and development of a dual-beam (db) system incorporating an optical switch mechanism for signal discrimination of two spatially disparate points enabling quasi-simultaneous multiple specimen scanning is described. A crosscorrelation (c 18. Velocity field calculation for non-orthogonal numerical grids Energy Technology Data Exchange (ETDEWEB) Flach, G. P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)] 2015-03-01 Computational grids containing cell faces that do not align with an orthogonal (e.g. Cartesian, cylindrical) coordinate system are routinely encountered in porous-medium numerical simulations. Such grids are referred to in this study as non-orthogonal grids because some cell faces are not orthogonal to a coordinate system plane (e.g. xy, yz or xz plane in Cartesian coordinates). Non-orthogonal grids are routinely encountered at the Savannah River Site in porous-medium flow simulations for Performance Assessments and groundwater flow modeling. Examples include grid lines that conform to the sloping roof of a waste tank or disposal unit in a 2D Performance Assessment simulation, and grid surfaces that conform to undulating stratigraphic surfaces in a 3D groundwater flow model. Particle tracking is routinely performed after a porous-medium numerical flow simulation to better understand the dynamics of the flow field and/or as an approximate indication of the trajectory and timing of advective solute transport. Particle tracks are computed by integrating the velocity field from cell to cell starting from designated seed (starting) positions. An accurate velocity field is required to attain accurate particle tracks. However, many numerical simulation codes report only the volumetric flowrate (e.g. PORFLOW) and/or flux (flowrate divided by area) crossing cell faces. For an orthogonal grid, the normal flux at a cell face is a component of the Darcy velocity vector in the coordinate system, and the pore velocity for particle tracking is attained by dividing by water content. For a non-orthogonal grid, the flux normal to a cell face that lies outside a coordinate plane is not a true component of velocity with respect to the coordinate system. Nonetheless, normal fluxes are often taken as Darcy velocity components, either naively or with accepted approximation. To enable accurate particle tracking or otherwise present an accurate depiction of the velocity field for a non 19. Reconstructing the velocity field beyond the local universe CSIR Research Space (South Africa) Johnston, R 2014-10-01 Full Text Available an estimate of the velocity field derived from the galaxy over-density d_g and the second makes use of the matter linear density power spectrum P_k. Using N-body simulations we find, with an SDSS-like sample (N_gal 33 per deg^2... 20. Channel flow analysis. [velocity distribution throughout blade flow field] Science.gov (United States) Katsanis, T. 1973-01-01 The design of a proper blade profile requires calculation of the blade row flow field in order to determine the velocities on the blade surfaces. An analysis theory is presented for several methods used for this calculation and associated computer programs that were developed are discussed.
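The cell-to-cell particle-track integration described in the non-orthogonal grid abstract above can be sketched as follows; this is an illustrative fixed-step RK4 integrator over an assumed analytic velocity field, not code from the cited report, where the velocity would instead be interpolated from cell-face fluxes divided by the water content:

    import numpy as np

    def velocity(p):
        x, y = p
        return np.array([-y, x])  # solid-body rotation as a stand-in field

    def track(seed, dt=0.01, steps=1000):
        """Integrate a particle track from a seed position with RK4."""
        p = np.array(seed, dtype=float)
        path = [p.copy()]
        for _ in range(steps):
            k1 = velocity(p)
            k2 = velocity(p + 0.5 * dt * k1)
            k3 = velocity(p + 0.5 * dt * k2)
            k4 = velocity(p + dt * k3)
            p += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            path.append(p.copy())
        return np.array(path)

    print(track([1.0, 0.0])[-1])  # position after ~10 radians of rotation

1.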
Transformations Based on Continuous Piecewise-Affine Velocity Fields DEFF Research Database (Denmark) Freifeld, Oren; Hauberg, Søren; Batmanghelich, Kayhan 2017-01-01 We propose novel finite-dimensional spaces of well-behaved transformations. The latter are obtained by (fast and highly-accurate) integration of continuous piecewise-affine velocity fields. The proposed method is simple yet highly expressive, effortlessly handles optional constraints (e.g., volum... 2. Three-dimensional instantaneous velocity field measurement using ... 2014-02-13 In the present study, a digital holography microscope has been developed to study the instantaneous 3D velocity field in a square channel of 1000 × 1000 μm^2 cross-section. The flow field is seeded with polystyrene microspheres of size d_p = 2.1 μm. The volumetric flow rate is set equal to 20 μl/min. 3. Results of verification and investigation of wind velocity field forecast. Verification of wind velocity field forecast model International Nuclear Information System (INIS) Ogawa, Takeshi; Kayano, Mitsunaga; Kikuchi, Hideo; Abe, Takeo; Saga, Kyoji 1995-01-01 In Environmental Radioactivity Research Institute, the verification and investigation of the wind velocity field forecast model 'EXPRESS-1' have been carried out since 1991. In fiscal year 1994, as the general analysis, the validity of weather observation data, the local features of wind field, and the validity of the positions of monitoring stations were investigated. The EXPRESS which adopted 500 m mesh so far was improved to 250 m mesh, the improvement of forecast accuracy was examined, and the comparison with another wind velocity field forecast model 'SPEEDI' was carried out. As a result, there are places where the correlation with other measurement points is high or low, and it was found that for the forecast of wind velocity field, by excluding the data of the points with low correlation or installing simplified observation stations to take their data in, the forecast accuracy is improved. The outline of the investigation, the general analysis of weather observation data and the improvements of wind velocity field forecast model and forecast accuracy are reported. (K.I.) 4. Sound field separation with sound pressure and particle velocity measurements DEFF Research Database (Denmark) Fernandez Grande, Efren; Jacobsen, Finn; Leclère, Quentin 2012-01-01 separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance......In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array, thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field... 5. Coherent error study in a retarding field energy analyzer International Nuclear Information System (INIS) Cui, Y.; Zou, Y.; Reiser, M.; Kishek, R.A.; Haber, I.; Bernal, S.; O'Shea, P.G.
2005-01-01 A novel cylindrical retarding electrostatic field energy analyzer for low-energy beams has been designed, simulated, and tested with electron beams of several keV, in which space charge effects play an important role. A cylindrical focusing electrode is used to overcome the beam expansion inside the device due to space-charge forces, beam emittance, etc. In this paper, we present the coherent error analysis for this energy analyzer with a beam envelope equation including space charge and emittance effects. The study shows that this energy analyzer can achieve very high resolution (with relative error of around 10^-5) if the coherent errors are removed by using proper focusing voltages. The theoretical analysis is compared with experimental results 6. Mass conservative fluid flow visualization for CFD velocity fields International Nuclear Information System (INIS) Li, Zhenquan; Mallinson, Gordon D. 2001-01-01 Mass conservation is a key issue for accurate streamline and stream surface visualization of flow fields. This paper complements an existing method (Feng et al., 1997) for CFD velocity fields defined at discrete locations in space that uses dual stream functions to generate streamlines and stream surfaces. Conditions for using the method have been examined and its limitations defined. A complete set of dual stream functions for all possible cases of the linear fields on which the method relies are presented. The results in this paper are important for developing new methods for mass conservative streamline visualization from CFD data and using the existing method 7. On the measurements of large scale solar velocity fields International Nuclear Information System (INIS) Andersen, B.N. 1985-01-01 A general mathematical formulation for the correction of the scattered light influence on solar Doppler shift measurements has been developed. This method has been applied to the straylight correction of measurements of solar rotation, limb effect, large scale flows and oscillations. It is shown that neglecting the straylight errors may cause spurious large scale velocity fields, oscillations and erroneous values for the solar rotation and limb effect. The influence of active regions on full disc velocity measurements has been studied. It is shown that a 13 day periodicity in the global velocity signal will be introduced by the passage of sunspots over the solar disc. With different types of low resolution apertures, other periodicities may be introduced. Accurate measurements of the center-to-limb velocity shift are presented for a set of magnetically insensitive lines well suited for solar velocity measurements. The absolute wavelength shifts are briefly discussed. The stronger lines have a 'supergravitational' shift of 300-400 m/s at the solar limb. The results may be explained by the presence of a 20-25 m/s poleward meridional flow and a latitudinal dependence of the granular parameters. Using a simple model it is shown that the main properties of the observations are explained by a 5% increase in the granular size with latitude. Data presented indicate that the resonance line K I, 769.9 nm has a small but significant limb effect of 125 m/s from center to limb 8.
Goos-Hänchen shift of partially coherent light fields in epsilon-near-zero metamaterials Science.gov (United States) Ziauddin; Chuang, You-Lin; Qamar, Sajid; Lee, Ray-Kuang 2016-05-01 The Goos-Hänchen (GH) shifts in the reflected light are investigated both for p- and s-polarized partially coherent light beams incident on epsilon-near-zero (ENZ) metamaterials. In contrast to the coherent counterparts, the magnitude of the GH shift becomes non-zero for a p-polarized partially coherent light beam, while the GH shift can be relatively large with a small degree of spatial coherence for an s-polarized partially coherent beam. Dependence on the beam width and the permittivity of ENZ metamaterials is also revealed for partially coherent light fields. Our results on the GH shifts provide a direction on the applications for partially coherent light sources in ENZ metamaterials. 9. Velocity fields and transition densities in nuclear collective modes Energy Technology Data Exchange (ETDEWEB) Stringari, S [Dipartimento di Matematica e Fisica, Libera Universita di Trento, Italy 1979-08-13 The shape of the deformations occurring in nuclear collective modes is investigated by means of a microscopic approach. Analytical solutions of the equations of motion are obtained by using simplified nuclear potentials. It is found that the structure of the velocity field and of the transition density of low-lying modes is considerably different from the predictions of irrotational hydrodynamic models. The low-lying octupole state is studied in particular detail by using the Skyrme force. 10. Newly velocity field of Sulawesi Island from GPS observation Science.gov (United States) Sarsito, D. A.; Susilo, Simons, W. J. F.; Abidin, H. Z.; Sapiie, B.; Triyoso, W.; Andreas, H. 2017-07-01 The Sulawesi microplate island is located at the famous triple junction of the Eurasian, Indo-Australian, and Philippine Sea plates. Under the influence of the northward-moving Australian plate and the westward motion of the Philippine plate, the island in the eastern part of Indonesia collides with the Eurasian plate and the Sunda block. These recent microplate tectonic motions can be quantitatively determined by GNSS-GPS measurements. We combine GNSS-GPS observation types (campaign and continuous) from 1997 to 2015 to derive a new velocity field for the area. Several strategies are applied and tested to obtain the optimum result, and we finally choose a regional strategy to reduce the error propagation contribution from global multi-baseline processing using GAMIT/GLOBK 10.5. Velocity fields are analyzed in the global reference frame ITRF 2008 and in local reference frames fixed alternately to the Eurasian plate - Sunda block, the Indo-Australian plate, and the Philippine Sea plate. The new results show a dense distribution of the velocity field. This information is useful for studying tectonic deformation in the geospatial era. 11. Inequivalent coherent state representations in group field theory Science.gov (United States) Kegeles, Alexander; Oriti, Daniele; Tomlin, Casey 2018-06-01 In this paper we propose an algebraic formulation of group field theory and consider non-Fock representations based on coherent states. We show that we can construct representations with an infinite number of degrees of freedom on compact manifolds. We also show that these representations break translation symmetry.
Since such representations can be regarded as quantum gravitational systems with an infinite number of fundamental pre-geometric building blocks, they may be more suitable for the description of effective geometrical phases of the theory. 12. Coherent feedback control of multipartite quantum entanglement for optical fields Energy Technology Data Exchange (ETDEWEB) Yan, Zhihui; Jia, Xiaojun; Xie, Changde; Peng, Kunchi [State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Opto-Electronics, Shanxi University, Taiyuan, 030006 (China) 2011-12-15 Coherent feedback control (CFC) of multipartite optical entangled states produced by a nondegenerate optical parametric amplifier is theoretically studied. The features of the quantum correlations of amplitude and phase quadratures among more than two entangled optical modes can be controlled by tuning the transmissivity of the optical beam splitter in the CFC loop. The physical conditions to enhance continuous variable multipartite entanglement of optical fields utilizing the CFC loop are obtained. The numeric calculations based on feasible physical parameters of realistic systems provide direct references for the design of experimental devices. 13. Quantum Coherence and Random Fields at Mesoscopic Scales International Nuclear Information System (INIS) Rosenbaum, Thomas F. 2016-01-01 We seek to explore and exploit model, disordered and geometrically frustrated magnets where coherent spin clusters stably detach themselves from their surroundings, leading to extreme sensitivity to finite frequency excitations and the ability to encode information. Global changes in either the spin concentration or the quantum tunneling probability via the application of an external magnetic field can tune the relative weights of quantum entanglement and random field effects on the mesoscopic scale. These same parameters can be harnessed to manipulate domain wall dynamics in the ferromagnetic state, with technological possibilities for magnetic information storage. Finally, extensions from quantum ferromagnets to antiferromagnets promise new insights into the physics of quantum fluctuations and effective dimensional reduction. A combination of ac susceptometry, dc magnetometry, noise measurements, hole burning, non-linear Fano experiments, and neutron diffraction as functions of temperature, magnetic field, frequency, excitation amplitude, dipole concentration, and disorder address issues of stability, overlap, coherence, and control. We have been especially interested in probing the evolution of the local order in the progression from spin liquid to spin glass to long-range-ordered magnet. 15. Coherent lidar modulated with frequency stepped pulse trains for unambiguous high duty cycle range and velocity sensing in the atmosphere DEFF Research Database (Denmark) Lindelöw, Per Jonas Petter; Mohr, Johan Jacob 2007-01-01 Range unambiguous high duty cycle coherent lidars can be constructed based on frequency stepped pulse train modulation, even continuously emitting systems could be envisioned. Such systems are suitable for velocity sensing of dispersed targets, like the atmosphere, at fast acquisition rates....... The lightwave synthesized frequency sweeper is a suitable generator yielding fast pulse repetition rates and stable equidistant frequency steps. Theoretical range resolution profiles of modulated lidars are presented.... 16. Optical Implementation of Non-locality with Coherent Light Fields for Quantum Communication OpenAIRE Lee, Kim Fook 2008-01-01 Polarization correlations of two distant observers are observed by using coherent light fields based on Stapp's formulation of nonlocality. Using a 50/50 beam splitter transformation, a vertically polarized coherent light field is found to be entangled with a horizontally polarized coherent noise field. The superposed light fields at each output port of the beam splitter are sent to two distant observers, where the fields are interfered and manipulated at each observer by using a quarter wave... 17. Coherent states field theory in supramolecular polymer physics Science.gov (United States) Fredrickson, Glenn H.; Delaney, Kris T. 2018-05-01 In 1970, Edwards and Freed presented an elegant representation of interacting branched polymers that resembles the coherent states (CS) formulation of second-quantized field theory. This CS polymer field theory has been largely overlooked during the intervening period in favor of more conventional "auxiliary field" (AF) interacting polymer representations that form the basis of modern self-consistent field theory (SCFT) and field-theoretic simulation approaches. Here we argue that the CS representation provides a simpler and computationally more efficient framework than the AF approach for broad classes of reversibly bonding polymers encountered in supramolecular polymer science. The CS formalism is reviewed, initially for a simple homopolymer solution, and then extended to supramolecular polymers capable of forming reversible linkages and networks. In the context of the Edwards model of a non-reacting homopolymer solution and one and two-component models of telechelic reacting polymers, we discuss the structure of CS mean-field theory, including the equivalence to SCFT, and show how weak-amplitude expansions (random phase approximations) can be readily developed without explicit enumeration of all reaction products in a mixture.
We further illustrate how to analyze CS field theories beyond SCFT at the level of Gaussian field fluctuations and provide a perspective on direct numerical simulations using a recently developed complex Langevin technique. 18. The Local Stellar Velocity Field via Vector Spherical Harmonics Science.gov (United States) Markarov, V. V.; Murphy, D. W. 2007-01-01 We analyze the local field of stellar tangential velocities for a sample of 42,339 nonbinary Hipparcos stars with accurate parallaxes, using a vector spherical harmonic formalism. We derive simple relations between the parameters of the classical linear model (Ogorodnikov-Milne) of the local systemic field and low-degree terms of the general vector harmonic decomposition. Taking advantage of these relationships, we determine the solar velocity with respect to the local stars of (V_X, V_Y, V_Z) = (10.5, 18.5, 7.3) ± 0.1 km s^-1 not corrected for the asymmetric drift with respect to the local standard of rest. If only stars more distant than 100 pc are considered, the peculiar solar motion is (V_X, V_Y, V_Z) = (9.9, 15.6, 6.9) ± 0.2 km s^-1. The adverse effects of harmonic leakage, which occurs between the reflex solar motion represented by the three electric vector harmonics in the velocity space and higher degree harmonics in the proper-motion space, are eliminated in our analysis by direct subtraction of the reflex solar velocity in its tangential components for each star. The Oort parameters determined by a straightforward least-squares adjustment in vector spherical harmonics are A = 14.0 ± 1.4, B = 13.1 ± 1.2, K = 1.1 ± 1.8, and C = 2.9 ± 1.4 km s^-1 kpc^-1. The physical meaning and the implications of these parameters are discussed in the framework of a general linear model of the velocity field. We find a few statistically significant higher degree harmonic terms that do not correspond to any parameters in the classical linear model. One of them, a third-degree electric harmonic, is tentatively explained as the response to a negative linear gradient of rotation velocity with distance from the Galactic plane, which we estimate at approximately -20 km s^-1 kpc^-1. A similar vertical gradient of rotation velocity has been detected for more distant stars representing the thick disk (z > 1 kpc 19. Velocity field measurements on high-frequency, supersonic microactuators Science.gov (United States) Kreth, Phillip A.; Ali, Mohd Y.; Fernandez, Erik J.; Alvi, Farrukh S. 2016-05-01 The resonance-enhanced microjet actuator which was developed at the Advanced Aero-Propulsion Laboratory at Florida State University is a fluidic-based device that produces pulsed, supersonic microjets by utilizing a number of microscale, flow-acoustic resonance phenomena. The microactuator used in this study consists of an underexpanded source jet that flows into a cylindrical cavity with a single, 1-mm-diameter exhaust orifice through which an unsteady, supersonic jet issues at a resonant frequency of 7 kHz. The flowfields of a 1-mm underexpanded free jet and the microactuator are studied in detail using high-magnification, phase-locked flow visualizations (microschlieren) and two-component particle image velocimetry. These are the first direct measurements of the velocity fields produced by such actuators. Comparisons are made between the flow visualizations and the velocity field measurements.
The results clearly show that the microactuator produces pulsed, supersonic jets with velocities exceeding 400 m/s for roughly 60% of their cycles. With high unsteady momentum output, this type of microactuator has potential in a range of flow control applications. 20. Velocity field measurement in micro-bubble emission boiling International Nuclear Information System (INIS) Ito, Daisuke; Saito, Yasushi; Natazuka, Jun 2017-01-01 Liquid inlet behavior to a heat surface in micro-bubble emission boiling (MEB) was investigated by flow measurement using particle image velocimetry (PIV). Subcooled pool boiling experiments under atmospheric pressure were carried out using a heat surface with a diameter of 10 mm. The upper end of a heater block made of copper was used as the heat surface. The working fluid was deionized water, and the subcooling was varied from 40 K to 70 K. Three K-type thermocouples were installed in the copper block to measure the temperature gradient, and the heat flux and wall superheat were estimated from these temperature data to make a boiling curve. The flow visualization around the heat surface was carried out using a high-speed video camera and a light sheet. The microbubbles generated in the MEB were used as tracer particles, and the velocity field was obtained by PIV analysis of the acquired image sequence. As a result, heat fluxes higher than the critical heat flux could be obtained in the MEB region. In addition, the distribution characteristics of the velocity in the MEB region were studied using the PIV results, and the location of the stagnation point in the velocity fields was discussed. (author) 1. Migration velocity analysis using pre-stack wave fields KAUST Repository Alkhalifah, Tariq Ali; Wu, Zedong 2016-01-01 Using both image and data domains to perform velocity inversion can help us resolve the long and short wavelength components of the velocity model, usually in that order. This translates to integrating migration velocity analysis into full waveform 2. The size effect of the quantum coherence in the transverse-field XY chain Energy Technology Data Exchange (ETDEWEB) Wang, Lu; Yang, Cui-hong; Wang, Jun-feng [Department of Physics, Nanjing University of Information Science & Technology, Nanjing 210044 (China); Lei, Shu-guo, E-mail: [email protected] [College of Science, Nanjing Tech University, Nanjing, 211816 (China) 2016-12-15 Based on the Wigner–Yanase skew information, the size effect of the quantum coherence in the ground state of the finite transverse-field spin-1/2 XY chain is explored. It is found that the first-order derivatives of the single-spin coherence and the two-spin local coherence both have scaling behaviors in the vicinity of the critical point. A simplified version of coherence is also studied and the same characteristics as its counterpart are found. 3. Coherence properties of the harmonic generation in intense laser field International Nuclear Information System (INIS) Salieres, P. 1995-01-01 This thesis presents an experimental and theoretical study of harmonic generation in an intense field and of the coherence properties of this radiation. The first part recalls the main characteristics of the harmonic spectrum. There follow experimental studies of the plateau extension with laser intensity, of harmonic generation by ions, and of the influence of the laser field on the generation efficiency. The second part presents the quantum model of harmonic generation in the tunnel regime that we have used for the calculation of the dipoles.
We compare the intensity dependence of several harmonics, emphasizing the characteristic behavior of the atomic phase. The theory of propagation is presented in the third part. After recalling the case of a perturbative polarization, we develop the case of polarization in the tunnel regime. With the help of numerical simulations, we show the influence of the atomic phase on the phase matching, and therefore on the conversion efficiency and the generation profiles in the medium. The importance of the geometry of the interaction is underlined. Part IV presents the study of the spatial coherence of the harmonic radiation. We first develop the consequences of phase-matching theory for the emission profiles. The comparison with experimental profiles is then detailed as a function of the different parameters (order of nonlinearity, laser intensity, position of the focus with respect to the gaseous medium). The study of spectral and temporal coherence in Part V begins with an experimental investigation of the effect of ionization on the spectra of low-order harmonics. We then present theoretical predictions of the preceding model for the spectral and temporal profiles of the highest-order harmonics, generated in the tunnel regime. Part VI is devoted to the XUV-source aspect of the harmonic radiation. General characteristics (number of photons, phase matching) are first detailed, and then we present the first experiments 4. Magnetic fields, velocity fields and brightness in the central region of the Solar disk Energy Technology Data Exchange (ETDEWEB) Tsap, T T 1978-05-15 The longitudinal magnetic fields, velocity fields and brightness at the center of the Solar disk are studied. Observations of the magnetic field, line-of-sight velocities and brightness have been made with the double magnetograph of the Crimean Astrophysical Observatory. It is found that the average magnetic field strength recorded in the iron line λ5233 Å is 18 G for the elements of N polarity and 23 G for the elements of S polarity. Magnetic elements with a field strength of more than 200 G are observed in some of the cases. There is a close correlation between the magnetic field distribution in the λ5250 Å Fe I and D_1 Na I lines, and between the magnetic field in λ5250 Å and the brightness in the K_3 Ca II line. The dimensions of the magnetic elements in the λ and D_1 Na I lines are equal. The comparison of the magnetic field with the radial velocity recorded in the λ5250 and 5233 Å lines has shown that radial velocities are close to zero in the regions of maximum longitudinal magnetic field. The chromospheric network-like pattern is observed in the brightness distribution of ten different spectral lines. 5. Coherent confinement of plasmonic field in quantum dot-metallic nanoparticle molecules. Science.gov (United States) Sadeghi, S M; Hatef, A; Fortin-Deschenes, Simon; Meunier, Michel 2013-05-24 Interaction of a hybrid system consisting of a semiconductor quantum dot and a metallic nanoparticle (MNP) with a laser beam can replace the intrinsic plasmonic field of the MNP with a coherently normalized field (coherent-plasmonic or CP field). In this paper we show how quantum coherence effects in such a hybrid system can form a coherent barrier (quantum cage) that spatially confines the CP field. This allows us to coherently control the modal volume of this field, making it significantly smaller or larger than that of the intrinsic plasmonic field of the MNP.
We investigate the spatial profiles of the CP field and discuss how the field barrier depends on the collective states of the hybrid system. 6. Coherent drift wave structures in sheared magnetic fields International Nuclear Information System (INIS) Morrison, P.J.; Horton, W. 1993-01-01 For the problem of calculating the coherent drift wave structures in sheared magnetic fields, the authors have found it useful to derive the governing nonlinear pde from a variational principle. The variational principle is based on the free energy functional F[φ] = ∫_V F(φ, ∇φ, x) dx dy. The method is applied to the vortex with speed u derived in Su et al., given by ∇²φ = (1 - v_d/u)φ - (S_m²/u²)(x - φ/u)(x - φ/2u)φ, where space is measured in units of ρ_s, φ = (eΦ/T_e)(L_n/ρ_s), and the magnetic shear parameter is S_m. While the linearized problem (φ ≪ ux) describes the usual shear-induced damping, nonlinear solutions with trapped flow (φ > u r_0) form nonlinear self-bound states, which are maxima of the free energy F. The authors discuss the analytic properties and the numerical procedures for solving these types of nonlinear pde's 8. Interband coherence response to electric fields in crystals: Berry-phase contributions and disorder effects Science.gov (United States) Culcer, Dimitrie; Sekine, Akihiko; MacDonald, Allan H. 2017-07-01 In solid state conductors, linear response to a steady electric field is normally dominated by Bloch state occupation number changes that are correlated with group velocity and lead to a steady state current. Recently it has been realized that, for a number of important physical observables, the most important response even in conductors can be electric-field induced coherence between Bloch states in different bands, such as that responsible for screening in dielectrics. Examples include the anomalous and spin-Hall effects, spin torques in magnetic conductors, and the minimum conductivity and chiral anomaly in Weyl and Dirac semimetals. In this paper we present a general quantum kinetic theory of linear response to an electric field which can be applied to solids with arbitrarily complicated band structures and includes the interband coherence response and the Bloch-state repopulation responses on an equal footing. One of the principal aims of our work is to enable extensive transport theory applications using computational packages constructed in terms of maximally localized Wannier functions. To this end we provide a complete correspondence between the Bloch and Wannier formulations of our theory. The formalism is based on density-matrix equations of motion, on a Born approximation treatment of disorder, and on an expansion in scattering rate to leading nontrivial order. Our use of a Born approximation omits some physical effects and represents a compromise between comprehensiveness and practicality.
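The Berry-curvature contribution mentioned in the preceding abstract can be illustrated with a minimal two-band example; the model, values, and normalization below are my own illustrative assumptions (sign conventions for the curvature vary across references), not the paper's formalism:

    import numpy as np

    # Lower-band Berry curvature of a 2D massive Dirac Hamiltonian
    # H = kx*sx + ky*sy + m*sz, which has the closed form
    # Omega(k) = -m / (2 * (kx^2 + ky^2 + m^2)^(3/2)).
    def berry_curvature(kx, ky, m=0.5):
        return -m / (2.0 * (kx**2 + ky**2 + m**2) ** 1.5)

    # Its k-space integral approaches a half-integer Chern contribution:
    k = np.linspace(-60.0, 60.0, 1201)
    kx, ky = np.meshgrid(k, k)
    dk = k[1] - k[0]
    print(berry_curvature(kx, ky).sum() * dk**2 / (2 * np.pi))  # ~ -0.5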
The quasiparticle bands are treated in a completely general manner that allows for arbitrary forms of the spin-orbit interaction and for the broken time reversal symmetry of magnetic conductors. We demonstrate that the interband response in conductors consists primarily of two terms: an intrinsic contribution due to the entire Fermi sea that captures, among other effects, the Berry curvature contribution to wave-packet dynamics, and an anomalous contribution caused 9. Coherent states of non-relativistic electron in the magnetic-solenoid field International Nuclear Information System (INIS) Bagrov, V G; Gavrilov, S P; Filho, D P Meira; Gitman, D M 2010-01-01 In the present work we construct coherent states in the magnetic-solenoid field, which is a superposition of the Aharonov-Bohm field and a collinear uniform magnetic field. In the problem under consideration there are two kinds of coherent states, those which correspond to classical trajectories which embrace the solenoid and those which do not. The constructed coherent states reproduce exactly classical trajectories, maintain their form under the time evolution and form a complete set of functions, which can be useful in semiclassical calculations. In the absence of the solenoid field these states are reduced to the well known in the case of uniform magnetic field Malkin-Man'ko coherent states. 11. Field Testing of an In-well Point Velocity Probe for the Rapid Characterization of Groundwater Velocity Science.gov (United States) Osorno, T.; Devlin, J. F. 2017-12-01 Reliable estimates of groundwater velocity are essential in order to best implement in-situ monitoring and remediation technologies. The In-well Point Velocity Probe (IWPVP) is an inexpensive, reusable tool developed for rapid measurement of groundwater velocity at the centimeter scale in monitoring wells. IWPVP measurements of groundwater speed are based on a small-scale tracer test conducted as ambient groundwater passes through the well screen and the body of the probe. Horizontal flow direction can be determined from the difference in tracer mass passing detectors placed in four funnel-and-channel pathways through the probe, arranged in a cross pattern.
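A schematic reduction of an IWPVP-style test might look as follows; the path length, timings, detector masses, and function names are hypothetical illustrations of the tracer-test logic just described, not the authors' processing code:

    import numpy as np

    def probe_speed(path_length_m, t_release_s, t_peak_s):
        """Apparent speed (m/d) from tracer release to detection-peak time."""
        return path_length_m / (t_peak_s - t_release_s) * 86400.0

    def flow_direction_deg(m_north, m_east, m_south, m_west):
        """Bearing (deg clockwise from north) from the four detector masses."""
        return np.degrees(np.arctan2(m_east - m_west, m_north - m_south)) % 360.0

    print(probe_speed(0.05, 0.0, 2160.0))          # ~2 m/d over a 5 cm path
    print(flow_direction_deg(0.9, 0.4, 0.1, 0.3))  # bearing just east of north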
The design viability of the IWPVP was confirmed using a two-dimensional numerical model in COMSOL Multiphysics, followed by a series of laboratory tank experiments in which IWPVP measurements were calibrated to quantify seepage velocities in both fine and medium sand. Lab results showed that the IWPVP was capable of measuring the seepage velocity in less than 20 minutes per test, when the seepage velocity was in the range of 0.5 to 4.0 m/d. Further, the IWPVP estimated the groundwater speed with a precision of ± 7%, and an accuracy of ± 14%, on average. The horizontal flow direction was determined with an accuracy of ± 15°, on average. Recently, a pilot field test of the IWPVP was conducted in the Borden aquifer, C.F.B. Borden, Ontario, Canada. A total of approximately 44 IWPVP tests were conducted within two 2-inch groundwater monitoring wells comprising a 5 ft. section of #8 commercial well screen. Again, all tests were completed in under 20 minutes. The velocities estimated from IWPVP data were compared to 21 Point Velocity Probe (PVP) tests, as well as Darcy-based estimates of groundwater velocity. Preliminary data analysis shows strong agreement between the IWPVP and PVP estimates of groundwater velocity. Further, both the IWPVP and PVP estimates of groundwater velocity appear to be reasonable when 12. Experimental investigation of the velocity field in buoyant diffusion flames using PIV and TPIV algorithm Science.gov (United States) L. Sun; X. Zhou; S.M. Mahalingam; D.R. Weise 2005-01-01 We investigated a simultaneous temporally and spatially resolved 2-D velocity field above a burning circular pan of alcohol using particle image velocimetry (PIV). The results obtained from PIV were used to assess a thermal particle image velocimetry (TPIV) algorithm previously developed to approximate the velocity field using the temperature field, simultaneously... 13. Migration velocity analysis using pre-stack wave fields KAUST Repository Alkhalifah, Tariq Ali 2016-08-25 Using both image and data domains to perform velocity inversion can help us resolve the long and short wavelength components of the velocity model, usually in that order. This translates to integrating migration velocity analysis into full waveform inversion. The migration velocity analysis part of the inversion often requires computing extended images, which is expensive when using conventional methods. As a result, we use pre-stack wavefield (the double-square-root formulation) extrapolation, which includes the extended information (subsurface offsets) naturally, to make the process far more efficient and stable. The combination of the forward and adjoint pre-stack wavefields provides us with update options that can be easily conditioned to improve convergence. We specifically use a modified differential semblance operator to split the extended image into a residual part for classic differential semblance operator updates and the image (Born) modelling part, which provides reflections for higher resolution information. In our implementation, we invert for the velocity and the image simultaneously through a dual objective function. Applications to synthetic examples demonstrate the features of the approach.
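Since several of the entries above rely on PIV, here is a generic single-window PIV displacement estimate via FFT cross-correlation; it is a minimal sketch of the standard technique, not code from any of the cited studies:

    import numpy as np

    def piv_displacement(win_a, win_b):
        """Integer-pixel displacement of win_b relative to win_a."""
        fa = np.fft.fft2(win_a - win_a.mean())
        fb = np.fft.fft2(win_b - win_b.mean())
        corr = np.fft.ifft2(fa.conj() * fb).real
        iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
        ny, nx = corr.shape
        return ((ix + nx // 2) % nx - nx // 2,
                (iy + ny // 2) % ny - ny // 2)

    rng = np.random.default_rng(1)
    a = rng.random((32, 32))
    b = np.roll(a, (3, 5), axis=(0, 1))  # known shift: 3 px down, 5 px right
    print(piv_displacement(a, b))  # -> (5, 3); velocity = shift * pixel_size / dt

14.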
Near field acoustic holography with particle velocity transducers DEFF Research Database (Denmark) Jacobsen, Finn; Liu, Yang 2005-01-01 of the pressure measured in a plane further away, high spatial frequency components corresponding to evanescent modes are not only amplified by the distance but also by the wave number ratio (k_z/k). By contrast, when the pressure is predicted close to the source on the basis of the particle velocity measured... 15. Strong field coherent control of atomic population transfer International Nuclear Information System (INIS) Trallero-Herrero, Carlos; Clow, Stephen D; Bergeman, Thomas; Weinacht, Thomas 2008-01-01 We demonstrate a population inversion in a three-level system via three-photon absorption from a single shaped ultrafast laser pulse. The optimal pulse shape for the inversion is discovered using closed-loop learning control and interpreted via pulse shape parameter scans and numerical integration of the Schroedinger equation. The population inversion is measured using a combination of spontaneous and stimulated emissions. Our results illustrate the importance of dynamic Stark shifts in coherent multi-photon excitation 16. Measurement of pressure distributions and velocity fields of water jet intake flow International Nuclear Information System (INIS) Jeong, Eun Ho; Yoon, Sang Youl; Kwon, Seong Hoon; Chun, Ho Hwan; Kim, Mun Chan; Kim, Kyung Chun 2002-01-01 Waterjet propulsion systems can avoid the cavitation problem that arises in conventional propeller propulsion systems. The main issue in designing a waterjet system is the boundary layer separation at the ramp and lip of the water inlet. The flow characteristics are highly dependent on the jet-to-velocity ratio (JVR) as well as the intake geometry. The present study is conducted in a wind tunnel to provide accurate pressure distributions at the inlet wall and velocity fields of the inlet and exit planes. A particle image velocimetry technique is used to obtain detailed velocity fields. Pressure distributions and velocity fields are discussed with accelerating and decelerating flow zones and the effect of the JVR 17. Magnetic field and temperature dependence of the critical vortex velocity in type-II superconducting films Energy Technology Data Exchange (ETDEWEB) Grimaldi, G; Leo, A; Cirillo, C; Attanasio, C; Nigro, A; Pace, S [CNR-INFM Laboratorio Regionale SuperMat, Via Salvador Allende, I-84081 Baronissi (Italy)], E-mail: [email protected] 2009-06-24 We study the vortex dynamics in the instability regime induced by high dissipative states well above the critical current in Nb superconducting strips. The magnetic field and temperature behavior of the critical vortex velocity corresponding to the observed dynamic instability is ascribed to intrinsic non-equilibrium phenomena. The Larkin-Ovchinnikov (LO) theory of electronic instability in high velocity vortex motion has been applied to interpret the temperature dependence of the critical vortex velocity. The magnetic field dependence of the vortex critical velocity shows new features in the low-field regime not predicted by LO. 18.
18. Protecting quantum coherence of two-level atoms from vacuum fluctuations of electromagnetic field
International Nuclear Information System (INIS)
Liu, Xiaobao; Tian, Zehua; Wang, Jieci; Jing, Jiliang
2016-01-01
In the framework of open quantum systems, we study the dynamics of a static polarizable two-level atom interacting with a bath of fluctuating vacuum electromagnetic field and explore under which conditions the coherence of the open quantum system is unaffected by the environment. For both single-qubit and two-qubit systems, we find that the quantum coherence cannot be protected from noise when the atom interacts with a non-boundary electromagnetic field. However, in the presence of a boundary, the dynamical conditions for quantum coherence to remain unaffected are fulfilled only when the atom is close to the boundary and is transversely polarizable. Otherwise, the quantum coherence can only be partially protected in the other polarization directions. -- Highlights: • We study the dynamics of a two-level atom interacting with a bath of fluctuating vacuum electromagnetic field. • For both single- and two-qubit systems, the quantum coherence cannot be protected from noise without a boundary. • Quantum coherence remains unaffected only when the atom is close to the boundary and is transversely polarizable. • Otherwise, the quantum coherence can only be partially protected in the other polarization directions.

19. Coherent states of quantum systems. [Hamiltonians, variable magnetic field, adiabatic approximation]
Energy Technology Data Exchange (ETDEWEB)
Trifonov, D A
1975-01-01
Time-evolution of coherent states and uncertainty relations for quantum systems are considered, as well as the relation between the various types of coherent states. The most general form of the Hamiltonians that keep the uncertainty products at a minimum is found using the coherent states. The minimum-uncertainty packets are shown to be coherent states of the nonstationary-system type. Two specific systems, namely a generalized N-dimensional oscillator and a charged particle moving in a variable magnetic field, are treated as examples. The adiabatic approximation to the uncertainty products for these systems is also discussed, and the minimality is found to be retained with exponential accuracy.

20. Bound states in quantum field theory and coherent states: A fresh look
International Nuclear Information System (INIS)
Misra, S.P.
1986-09-01
We consider here bound state equations in quantum field theory where the state explicitly includes radiation quanta as constituents, with the number of such quanta not fixed. The fully interacting system is dealt with through equal-time commutators/anticommutators of field operators. The multiparticle channel for the radiation field is approximated through coherent state representations. (author)
1. Temperature Field-Wind Velocity Field Optimum Control of Greenhouse Environment Based on CFD Model
Directory of Open Access Journals (Sweden)
Yongbo Li
2014-01-01
Full Text Available Computational fluid dynamics (CFD) is applied as the environmental control model, covering the whole greenhouse space. Basic environmental factors are set as the control objects; the field information is obtained by dividing the space into layers by height, and the numerical characteristics of each layer are used to describe the field information. Under natural ventilation conditions, real-time requirements, energy consumption, and distribution difference are selected as index functions. An adaptive simulated annealing algorithm is used to obtain the optimal control outputs. A comparison with fully open ventilation shows that the overall index can be reduced by 44.21%, and a certain mutual exclusiveness was found between the temperature and velocity fields during optimization. All the results indicate that applying a CFD model offers great advantages for improving the control accuracy of a greenhouse.

2. Velocity dependence of enhanced dynamic hyperfine field for Pd ions swiftly recoiling in magnetized Fe
International Nuclear Information System (INIS)
Stuchbery, A.E.; Ryan, G.C.; Bolotin, H.H.; Sie, S.H.
1980-01-01
The velocity dependence of the magnitude of the enhanced dynamic hyperfine magnetic field (EDF) manifest at nuclei of 108Pd ions swiftly recoiling through thin magnetized Fe has been investigated at ion velocities higher than have heretofore been examined for the heavier nuclides (i.e., at initial recoil velocities (v/Zv0) = 0.090 and 0.160, where v0 = c/137). These results for 108Pd, when taken in conjunction with those of prior similar measurements for 106Pd at lower velocities, and fitted to a power-law velocity dependence for the EDF, give for the Pd isotopes over the extended velocity range 1.74 <= (v/Zv0) <= 7.02 an exponent p = 0.41 ± 0.15, a result incompatible with previous attributions of a linear velocity dependence for the field.

3. Characterizing the original ejection velocity field of the Koronis family
Science.gov (United States)
Carruba, V.; Nesvorný, D.; Aljbaae, S.
2016-06-01
An asteroid family forms as a result of a collision between an impactor and a parent body. The fragments with ejection speeds higher than the escape velocity from the parent body can escape its gravitational pull. The cloud of escaping debris can be identified by the proximity of orbits in proper element, or frequency, domains. Obtaining estimates of the original ejection speed can provide valuable constraints on the physical processes occurring during collision, and can be used to calibrate impact simulations. Unfortunately, the proper elements of asteroid families are modified by gravitational and non-gravitational effects, such as resonant dynamics, encounters with massive bodies, and the Yarkovsky effect, so that information on the original ejection speeds is often lost, especially for older, more evolved families. It has recently been suggested that the distribution in proper inclination of the Koronis family may not have been significantly perturbed by local dynamics, and that information on the component of the ejection velocity perpendicular to the orbital plane (vW) may still be available, at least in part. In this work we estimate the magnitude of the original ejection speeds of Koronis members using the observed distribution in proper eccentricity and inclination, accounting for the spread caused by dynamical effects. Our results show that (i) the spread in the original ejection speeds is, to within a 15% error, inversely proportional to the fragment size, and (ii) the minimum ejection velocity is of the order of 50 m/s, with larger values possible depending on the orbital configuration at the break-up.
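The size dependence found in the Koronis entry above matches the parameterization commonly used in the family literature, in which the ejection-speed dispersion of fragments of diameter D scales as V_EJ·(5 km/D). A toy Monte Carlo sketch of such an ejection velocity field; the functional form is that standard convention, and the numbers are illustrative rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def ejection_speed_sigma(D_km, V_EJ=50.0):
    """Dispersion of ejection speeds (m/s) for fragments of diameter D (km),
    using the common sigma(D) = V_EJ * (5 km / D) parameterization."""
    return V_EJ * (5.0 / np.asarray(D_km, dtype=float))

def sample_vW(D_km, V_EJ=50.0):
    """Draw the out-of-plane ejection component vW for each fragment."""
    sigma = ejection_speed_sigma(D_km, V_EJ)
    return rng.normal(0.0, sigma)

diameters = np.array([2.0, 5.0, 10.0, 20.0])   # km, illustrative
print(ejection_speed_sigma(diameters))         # smaller fragments eject faster
print(sample_vW(diameters))
```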
4. Far-Field and Middle-Field Vertical Velocities Associated with Megathrust Earthquakes
Science.gov (United States)
Fleitout, L.; Trubienko, O.; Klein, E.; Vigny, C.; Garaud, J.; Shestakov, N.; Satirapod, C.; Simons, W.J.
2013-12-01
The recent megathrust earthquakes (Sumatra, Chile and Japan) have induced far-field postseismic subsidence with velocities from a few mm/yr to more than 1 cm/yr at distances from 500 to 1500 km from the earthquake epicentre, for several years following the earthquake. This subsidence is observed in Argentina, China, Korea, far-East Russia and in Malaysia and Thailand, as reported by Satirapod et al. (ASR, 2013). In the middle-field, a very pronounced uplift is localized on the flank of the volcanic arc facing the trench. This is observed over Honshu, in Chile and on the south-west coast of Sumatra. In Japan, the deformations prior to the Tohoku earthquake are well measured by the GSI GPS network: while the east coast was slightly subsiding, the west coast was rising. A 3D finite element code (Zebulon-Zset) is used to understand the deformations through the seismic cycle in the areas surrounding the last three large subduction earthquakes. The meshes designed for each region feature a broad spherical shell portion with a viscoelastic asthenosphere, refined close to the subduction zones. Using these finite element models, we find that the pattern of the predicted far-field vertical postseismic displacements depends upon the thicknesses of the elastic plate and of the low-viscosity asthenosphere. A low-viscosity asthenosphere at shallow depth, just below the lithosphere, is required to explain the subsidence at distances from 500 to 1500 km. A thick (for example 600 km) asthenosphere with a uniform viscosity predicts subsidence too far away from the trench. Slip on the subduction interface is unable to induce the observed far-field subsidence. However, a combination of relaxation in a low-viscosity wedge and slip or relaxation on the bottom part of the subduction interface is necessary to explain the observed postseismic uplift in the middle-field (volcanic arc area). The creep laws of the various zones used to explain the postseismic data can be injected in

5. Measurement of core velocity fluctuations and the dynamo in a reversed-field pinch
International Nuclear Information System (INIS)
Den Hartog, D.J.; Craig, D.; Fiksel, G.; Fontana, P.W.; Prager, S.C.; Sarff, J.S.; Chapman, J.T.
1998-01-01
Plasma flow velocity fluctuations have been directly measured in the high-temperature magnetically confined plasma in the Madison Symmetric Torus (MST) Reversed-Field Pinch (RFP). These measurements show that the flow velocity fluctuations are correlated with magnetic field fluctuations. This initial measurement is subject to limitations of spatial localization and other uncertainties, but is evidence for sustainment of the RFP magnetic field configuration by the magnetohydrodynamic (MHD) dynamo. Both the flow velocity and magnetic field fluctuations are the result of global resistive MHD modes of helicity m = 1, n = 5-10 in the core of MST. Chord-averaged flow velocity fluctuations are measured in the core of MST by recording the Doppler shift of impurity line emission with a specialized high-resolution, high-throughput grating spectrometer. Magnetic field fluctuations are recorded with a large array of small edge pickup coils, which allows spectral decomposition into discrete modes and subsequent correlation with the velocity fluctuation data.
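The MHD-dynamo evidence in the entry above rests on correlating velocity and magnetic fluctuation time series. A minimal sketch of such a zero-lag correlation estimate on synthetic signals; all variable names and numbers are illustrative, not the MST analysis chain:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic fluctuation records: a shared coherent mode plus noise, standing
# in for chord-averaged velocity and edge-coil magnetic signals.
t = np.linspace(0.0, 1e-3, 4096)                  # 1 ms record
mode = np.sin(2 * np.pi * 20e3 * t)               # 20 kHz global mode
v_tilde = mode + 0.5 * rng.standard_normal(t.size)
b_tilde = 0.8 * mode + 0.5 * rng.standard_normal(t.size)

# Normalized correlation at zero lag; the dynamo-relevant correlated product
# <v~ b~> vanishes for uncorrelated fluctuations.
rho = np.corrcoef(v_tilde, b_tilde)[0, 1]
print(f"correlation coefficient: {rho:.2f}")
```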
6. Quantitative imaging of cerebral blood flow velocity and intracellular motility using dynamic light scattering-optical coherence tomography
Science.gov (United States)
Lee, Jonghwan; Radhakrishnan, Harsha; Wu, Weicheng; Daneshmand, Ali; Climov, Mihail; Ayata, Cenk; Boas, David A
2013-06-01
This paper describes a novel optical method for label-free quantitative imaging of cerebral blood flow (CBF) and intracellular motility (IM) in the rodent cerebral cortex. This method is based on a technique that integrates dynamic light scattering (DLS) and optical coherence tomography (OCT), named DLS-OCT. The technique measures both the axial and transverse velocities of CBF, whereas conventional Doppler OCT measures only the axial one. In addition, the technique produces a three-dimensional map of the diffusion coefficient quantifying nontranslational motions. In the DLS-OCT diffusion map, we observed high-diffusion spots whose locations closely correspond to neuronal cell bodies and whose diffusion coefficient agreed with that of the motion of intracellular organelles reported in vitro in the literature. Therefore, the present method has enabled, for the first time to our knowledge, label-free imaging of the diffusion-like motion of intracellular organelles in vivo. As an example application, we used the method to monitor CBF and IM during a brief ischemic stroke, where we observed an induced persistent reduction in IM despite the recovery of CBF after stroke. This result supports the view that the IM measured in this study represents the cellular energy metabolism-related active motion of intracellular organelles rather than free diffusion of intracellular macromolecules.

7. Coherent states of a particle in a magnetic field and the Stieltjes moment problem
International Nuclear Information System (INIS)
Gazeau, J.P.; Baldiotti, M.C.; Gitman, D.M.
2009-01-01
A solution to a version of the Stieltjes moment problem is presented. Using this solution, we construct a family of coherent states of a charged particle in a uniform magnetic field. We prove that these states form an overcomplete set that is normalized and resolves the unity. With the help of these coherent states we construct the Fock-Bergmann representation related to the particle quantization. This quantization procedure takes into account the circle topology of the classical motion.

8. Coherent states of a particle in a magnetic field and the Stieltjes moment problem
Energy Technology Data Exchange (ETDEWEB)
Gazeau, J.P. [Instituto de Fisica, Universidade de Sao Paulo, Caixa Postal 66318-CEP, 05315-970 Sao Paulo, S.P. (Brazil)], E-mail: [email protected]; Baldiotti, M.C. [Instituto de Fisica, Universidade de Sao Paulo, Caixa Postal 66318-CEP, 05315-970 Sao Paulo, S.P. (Brazil)], E-mail: [email protected]; Gitman, D.M. [Instituto de Fisica, Universidade de Sao Paulo, Caixa Postal 66318-CEP, 05315-970 Sao Paulo, S.P. (Brazil)], E-mail: [email protected]
2009-05-11
A solution to a version of the Stieltjes moment problem is presented. Using this solution, we construct a family of coherent states of a charged particle in a uniform magnetic field. We prove that these states form an overcomplete set that is normalized and resolves the unity. With the help of these coherent states we construct the Fock-Bergmann representation related to the particle quantization. This quantization procedure takes into account the circle topology of the classical motion.
9. Magnetic field dependence of ultrasound velocity in high-Tc superconductors
International Nuclear Information System (INIS)
Higgins, M.J.; Goshorn, D.P.; Bhattacharya, S.; Johnston, D.C.
1989-01-01
The magnetic field dependence of ultrasound velocity in the superconductor La1.8Sr0.2CuO4-y is studied. The sound velocity anomaly near Tc is shown to be unambiguously related to superconductivity. Below Tc, the sound velocity is found to be sensitive to the dynamics of a pinned flux lattice. A combination of sound velocity and magnetization measurements suggests three regimes of pinning behavior. A generic pinning ''phase diagram'' is obtained in the superconducting state. An anomalous peak effect in the magnetization is also observed at intermediate field strengths.

10. Controlling the development of coherent structures in high speed jets and the resultant near field
Science.gov (United States)
Speth, Rachelle
and an increase on the non-flapping plane. Therefore, these thicker layers and higher-Reynolds-number jets may require actuators with a higher energy input (i.e. higher duty cycle, higher actuator temperature, more actuators) to ensure the excitation of the flow instability. The final parameter studied is the effect of Mach number on the development and decay of large-scale structures for no-control and control cases for Mach 0.9 and Mach 1.3 jets. For this exercise, the axisymmetric mode (m=0) was considered at excitation frequencies of St=0.05, 0.15, and 0.25, with emphasis on the evolution of coherent structures and their effects on the resultant near-field pressure map. Without control, the two jets have similar shear layer growth until the end of the potential core length of the subsonic case, at which point the subsonic jet spreads at a higher rate. For the controlled cases, relatively larger streamwise hairpin vortices have been noted for the subsonic cases than the supersonic cases, resulting in stronger entrainment of the ambient fluid. This increased entrainment in the subsonic cases causes a reduction in the normalized convective velocity, resulting in normalized values similar to those of the supersonic cases. As the excitation frequency is increased, more hairpin vortices are present and the normalized convective velocity is reduced for both subsonic and supersonic cases. (Abstract shortened by ProQuest.)

11. Dynamics of plasmonic field polarization induced by quantum coherence in quantum dot-metallic nanoshell structures
Science.gov (United States)
2014-09-01
When a hybrid system consisting of a semiconductor quantum dot and a metallic nanoparticle interacts with a laser field, the plasmonic field of the metallic nanoparticle can be normalized by the quantum coherence generated in the quantum dot. In this Letter, we study the states of polarization of such a coherent-plasmonic field and demonstrate how these states can reveal unique aspects of the collective molecular properties of the hybrid system formed via coherent exciton-plasmon coupling. We show that transitions between the molecular states of this system can lead to ultrafast polarization dynamics, including sudden reversal of the sense of variation of the plasmonic field and formation of circular and elliptical polarization.

12. Electric field and electron density thresholds for coherent auroral echo onset
International Nuclear Information System (INIS)
Kustov, A.V.; Uspensky, M.V.; Sofko, G.J.; Koehler, J.A.; Jones, G.O.L.; Williams, P.J.S.
1993-01-01
The authors study the threshold dependence of electron density and electric field for the observation of coherent auroral echo onset.
They make use of the Polar Geophysical Institute 83 MHz auroral radar and the EISCAT facility in Scandinavia to simultaneously obtain plasma parameter information and coherent scatter observations. They observe an electron density threshold of roughly 2.5×10^11 m^-3 for electric fields of 15-20 mV/m (near the Farley-Buneman instability threshold). For electric fields of 5-10 mV/m, echoes are not observed even at twice the previous electron density. Echo strength is observed to have other parametric dependences.

13. Modeling of velocity field for vacuum induction melting process
Institute of Scientific and Technical Information of China (English)
CHEN Bo; JIANG Zhi-guo; LIU Kui; LI Yi-yi
2005-01-01
A numerical simulation is presented for the recirculating flow during melting of an electromagnetically stirred alloy in a cylindrical induction furnace crucible. Inductive currents and electromagnetic body forces in the alloy under three different solenoid frequencies and three different melting powers were calculated, and the forces were then used in the fluid flow equations to simulate the flow of the alloy and the behavior of the free surface. The relationship between the height of the electromagnetic stirring meniscus, melting power, and solenoid frequency was derived based on the law of mass conservation. The results show that the inductive currents and the electromagnetic forces vary with the frequency, the melting power, and the physical properties of the metal. The velocity and the height of the meniscus increase with increasing melting power and decreasing solenoid frequency.

14. New Interpretations of Measured Antihydrogen Velocities and Field Ionization Spectra
International Nuclear Information System (INIS)
Pohl, T.; Sadeghpour, H. R.; Gabrielse, G.
2006-01-01
We present extensive Monte Carlo simulations, showing that cold antihydrogen (H) atoms are produced when antiprotons (p) are gently heated in the side wells of a nested Penning trap. The observed H with high energies, which had seemed to indicate otherwise, are instead explained by a surprisingly effective charge-exchange mechanism. We shed light on the previously measured field-ionization spectrum, and reproduce both the characteristic low-field power law and the enhanced H production at higher fields. The latter feature is shown to arise from H atoms too deeply bound to be described as guiding-center atoms, i.e., atoms with internally chaotic motion.

15. Group velocity measurement using spectral interference in near-field scanning optical microscopy
International Nuclear Information System (INIS)
Mills, John D.; Chaipiboonwong, Tipsuda; Brocklesby, William S.; Charlton, Martin D. B.; Netti, Caterina; Zoorob, Majd E.; Baumberg, Jeremy J.
2006-01-01
Near-field scanning optical microscopy provides a tool for studying the behavior of optical fields inside waveguides. In this experiment the authors measure directly the variation of group velocity between different modes of a planar slab waveguide as the modes propagate along the guide. The measurement is made using the spectral interference between pulses propagating inside the waveguide with different group velocities, collected using a near-field scanning optical microscope at different points down the guide and spectrally resolved. The results are compared to models of group velocities in simple guides.
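In spectral interferometry of the kind described in the entry above, two modes separated by a relative group delay Δτ produce spectral fringes of period Δω = 2π/Δτ, so the group-velocity difference over a propagation length follows from the fringe spacing. A minimal sketch on a synthetic spectrum; all parameters are illustrative, not the paper's values:

```python
import numpy as np

def group_delay_from_fringes(omega, spectrum):
    """Estimate the relative group delay (s) between two interfering modes
    from the dominant fringe period of the interference spectrum S(omega).
    omega must be uniformly sampled angular frequency (rad/s)."""
    d_omega = omega[1] - omega[0]
    s = spectrum - spectrum.mean()
    amp = np.abs(np.fft.rfft(s))
    amp[0] = 0.0                          # ignore any residual DC term
    # with d = d_omega/(2*pi), the conjugate axis is delay in seconds
    tau = np.fft.rfftfreq(omega.size, d=d_omega / (2 * np.pi))
    return tau[np.argmax(amp)]

# Synthetic check: two modes with 1 ps relative group delay
omega = np.linspace(2.3e15, 2.5e15, 4096)   # rad/s, illustrative band
S = 1.0 + np.cos(1e-12 * omega)
print(group_delay_from_fringes(omega, S))   # ~1e-12 s
# Dividing the propagation distance by the delay change between two probe
# positions along the guide gives the group-velocity difference.
```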
16. Near-field acoustic holography with sound pressure and particle velocity measurements
DEFF Research Database (Denmark)
Fernandez Grande, Efren
of the particle velocity has notable potential in NAH, and furthermore, combined measurement of sound pressure and particle velocity opens a new range of possibilities that are examined in this study. On this basis, sound field separation methods have been studied, and a new measurement principle based on double layer measurements of the particle velocity has been proposed. Also, the relation between near-field and far-field radiation from sound sources has been examined using the concept of the supersonic intensity. The calculation of this quantity has been extended to other holographic methods, and studied...

17. Linear velocity fields in non-Gaussian models for large-scale structure
Science.gov (United States)
Scherrer, Robert J.
1992-01-01
Linear velocity fields are examined in two types of physically motivated non-Gaussian models for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

18. Observation and analysis of abrupt changes in the interplanetary plasma velocity and magnetic field
Science.gov (United States)
Martin, R. N.; Belcher, J. W.; Lazarus, A. J.
1973-01-01
This paper presents a limited study of the physical nature of abrupt changes in the interplanetary plasma velocity and magnetic field based on 19 days of data from the Pioneer 6 spacecraft. The period was chosen to include a high-velocity solar wind stream and low-velocity wind. Abrupt events were accepted for study if the sum of the energy density in the magnetic field and velocity changes was above a specified minimum. A statistical analysis of the events in the high-velocity solar wind stream shows that Alfvenic changes predominate. This conclusion is independent of whether steady-state requirements are imposed on conditions before and after the event. Alfvenic changes do not dominate in the lower-speed wind. This study extends the plasma-field evidence for outwardly propagating Alfvenic changes to time scales as small as 1 min (scale lengths on the order of 20,000 km).
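A standard way to test whether an abrupt change is Alfvenic, as in the entry above, is a Walen-type comparison: convert the magnetic change to velocity units via b = δB/√(μ0ρ) and check δv ≈ ±b. A minimal sketch with synthetic inputs; the variable names and numbers are illustrative:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability (SI)

def walen_slope(dv, dB, rho):
    """Regression slope of the velocity change dv (m/s, 3-vector) against
    the Alfven-unit field change b = dB/sqrt(mu0*rho); a slope near +1 or
    -1 indicates an Alfvenic change."""
    b = dB / np.sqrt(MU0 * rho)
    return float(np.dot(dv, b) / np.dot(b, b))

# Illustrative numbers for a 5 cm^-3 proton solar wind
rho = 5e6 * 1.67e-27                        # kg/m^3
dB = np.array([1.0, -2.0, 0.5]) * 1e-9      # T
b = dB / np.sqrt(MU0 * rho)
dv = b + np.array([2.0, -1.0, 1.5]) * 1e3   # m/s: Alfvenic part + residual
print(walen_slope(dv, dB, rho))             # close to 1 => Alfvenic
```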
19. Temporal coherence of the acoustic field forward propagated through a continental shelf with random internal waves
Science.gov (United States)
Gong, Zheng; Chen, Tianrun; Ratilal, Purnima; Makris, Nicholas C
2013-11-01
An analytical model derived from normal mode theory for the accumulated effects of range-dependent multiple forward scattering is applied to estimate the temporal coherence of the acoustic field forward propagated through a continental-shelf waveguide containing random three-dimensional internal waves. The modeled coherence time scale of narrowband low-frequency acoustic field fluctuations after propagating through a continental-shelf waveguide is shown to decay with range as a power law with exponent -1/2 beyond roughly 1 km, to decrease with increasing internal wave energy, and to be consistent with measured acoustic coherence time scales. The model should provide a useful prediction of the acoustic coherence time scale as a function of internal wave energy in continental-shelf environments. The acoustic coherence time scale is an important parameter in remote sensing applications because it determines (i) the time window within which standard coherent processing such as matched filtering may be conducted, and (ii) the number of statistically independent fluctuations in a given measurement period, which determines the variance reduction possible by stationary averaging.

20. Quantum coherence dynamics of a three-level atom in a two-mode field
International Nuclear Information System (INIS)
Solovarov, N. K.
2008-01-01
The correlated dynamics of a three-level atom resonantly coupled to an electromagnetic cavity field is calculated (Λ, V, and L models). A diagrammatic representation of quantum dynamics is proposed for these models. As an example, Λ-atom dynamics is examined to demonstrate how the use of conventional von Neumann reduction leads to internal decoherence (disentanglement-induced decoherence) and to the absence of atomic coherence under multiphoton excitation. The predicted absence of atomic coherence is inconsistent with characteristics of an experimentally observed atom-photon entangled state. It is shown that the correlated reduction of a composite quantum system proposed in [18] qualitatively predicts the occurrence and evolution of atomic coherence under multiphoton excitation if a seed coherence is introduced into any subsystem (the atom or a cavity mode).

1. Magnetic and velocity fields MHD flow of a stretched vertical ...
African Journals Online (AJOL)
Analytical solutions for heat and mass transfer by laminar flow of a Newtonian, viscous, electrically conducting and heat generating/absorbing fluid on a continuously moving vertical permeable surface with buoyancy in the presence of a magnetic field and a first-order chemical reaction are reported. The solutions for magnetic ...

2. Wide-Field Vibrational Phase Contrast Imaging Based on Coherent Anti-Stokes Raman Scattering Holography
International Nuclear Information System (INIS)
Lv Yong-Gang; Ji Zi-Heng; Dong Da-Shan; Gong Qi-Huang; Shi Ke-Bin
2015-01-01
We propose and implement a wide-field vibrational phase contrast detection scheme to image the imaginary component of the third-order nonlinear susceptibility in a coherent anti-Stokes Raman scattering (CARS) microscope with full suppression of the non-resonant background. This technique is based on the unique ability to recover the phase of the generated CARS signal from holographic recording. By capturing the phase distributions of the generated CARS field from the sample and from the environment under resonant illumination, we demonstrate retrieval of the imaginary component in the CARS microscope and achieve background-free coherent Raman imaging. (paper)

3. Clear and Measurable Signature of Modified Gravity in the Galaxy Velocity Field
Science.gov (United States)
Hellwing, Wojciech A.; Barreira, Alexandre; Frenk, Carlos S.; Li, Baojiu; Cole, Shaun
2014-06-01
The velocity field of dark matter and galaxies reflects the continued action of gravity throughout cosmic history.
We show that the low-order moments of the pairwise velocity distribution v12 are a powerful diagnostic of the laws of gravity on cosmological scales. In particular, the projected line-of-sight galaxy pairwise velocity dispersion σ12(r) is very sensitive to the presence of modified gravity. Using a set of high-resolution N-body simulations, we compute the pairwise velocity distribution and its projected line-of-sight dispersion for a class of modified gravity theories: the chameleon f(R) gravity and Galileon gravity (cubic and quartic). The velocities of dark matter halos with a wide range of masses would exhibit deviations from general relativity at the (5-10)σ level. We examine strategies for detecting these deviations in galaxy redshift and peculiar velocity surveys. If detected, this signature would be a "smoking gun" for modified gravity.

4. Particle image velocimetry measurements of 2-dimensional velocity field around twisted tape
Energy Technology Data Exchange (ETDEWEB)
Song, Min Seop; Park, So Hyun; Kim, Eung Soo, E-mail: [email protected]
2016-11-01
Highlights: • Measurements of the flow field in a pipe with twisted tape were conducted by particle image velocimetry (PIV). • A novel matching index of refraction technique utilizing 3D printing and an oil mixture was adopted to make the test section transparent. • Undistorted particle images were clearly captured in the presence of the twisted tape. • The 2D flow field in the pipe with twisted tape revealed the characteristic two-peak velocity profile. - Abstract: Twisted tape is a passive component used to enhance heat exchange in various devices. It induces swirl flow that increases the mixing of fluid. Thus, ITER selected the twisted tape as one of the candidates for turbulence promotion in the divertor cooling. Previous studies mainly focused on the thermohydraulic performance of the twisted tape. As detailed data on the velocity field around the twisted tape were insufficient, a flow visualization study was performed to provide fundamental data on the velocity field. To visualize the flow in a complex structure, a novel matching index of refraction technique was used with 3-D printing and a mixture of anise and mineral oil. This technique enables the camera to capture undistorted particle images for velocity field measurement. Velocity fields at Reynolds numbers 1370-9591 for 3 different measurement planes were obtained through particle image velocimetry. The 2-dimensional averaged velocity field data were obtained from 177 pairs of instantaneous velocity fields. They reveal the characteristic two-peak flow motion in the axial direction. In addition, the normalized velocity profiles converge with increasing Reynolds number. Finally, the uncertainty of the resulting data was analyzed.

5. TAURUS observations of the emission-line velocity field of Centaurus A (NGC 5128)
International Nuclear Information System (INIS)
Taylor, K.; Atherton, P.D.
1983-01-01
Using TAURUS - an Imaging Fabry-Perot system in conjunction with the IPCS on the AAT - the authors have studied the velocity field of the Hα emission line at a spatial resolution of 1.7'' over the dark lane structure of Centaurus A. The derived velocity field is quite symmetrical and strongly suggests that the emission-line material is orbiting the elliptical component as a warped disc. (orig.)
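The pairwise velocity observable of the modified gravity entry above (item 3) can be estimated directly from simulation particles: project each pair's relative velocity onto its separation vector and take moments in radial bins. A brute-force sketch, illustrative rather than the authors' pipeline:

```python
import numpy as np

def pairwise_velocity_moments(pos, vel, r_edges):
    """Mean and dispersion of the radial pairwise velocity
    v12 = (v_i - v_j) . r_hat_ij, binned by pair separation.
    pos, vel: (N, 3) arrays; r_edges: bin edges.  O(N^2) - small N only."""
    n = len(pos)
    ii, jj = np.triu_indices(n, k=1)
    dr = pos[ii] - pos[jj]
    r = np.linalg.norm(dr, axis=1)
    v12 = np.einsum('ij,ij->i', vel[ii] - vel[jj], dr) / r
    mean = np.zeros(len(r_edges) - 1)
    sigma = np.zeros(len(r_edges) - 1)
    which = np.digitize(r, r_edges) - 1
    for b in range(len(r_edges) - 1):
        sel = which == b
        if sel.sum() > 1:
            mean[b] = v12[sel].mean()
            sigma[b] = v12[sel].std()   # sigma12(r)
    return mean, sigma
```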
6. Ultrasonic propagation velocity in magnetic and magnetorheological fluids due to an external magnetic field
International Nuclear Information System (INIS)
Bramantya, M A; Sawada, T; Motozawa, M
2010-01-01
The ultrasonic propagation velocity in a magnetic fluid (MF) and a magnetorheological fluid (MRF) changes with the application of an external magnetic field. The formation of clustering structures inside the MF and MRF clearly influences the ultrasonic propagation velocity. Therefore, we propose a qualitative analysis of these structures by measuring properties of ultrasonic propagation. Since MF and MRF are opaque, non-contact inspection using the ultrasonic technique can be very useful for analyzing their inner structures. In this study, we measured the ultrasonic propagation velocity in a hydrocarbon-based MF and MRF precisely. Based on these results, the clustering structures of these fluids are analyzed experimentally in terms of elapsed time dependence and the effect of external magnetic field strength. The results reveal hysteresis and anisotropy in the ultrasonic propagation velocity. We also discuss differences in ultrasonic propagation velocity between the MF and the MRF.

7. A simple measuring technique of surface flow velocity to analyze the behavior of velocity fields in hydraulic engineering applications
Science.gov (United States)
Tellez, Jackson; Gomez, Manuel; Russo, Beniamino; Redondo, Jose M.
2015-04-01
An important achievement in hydraulic engineering is the proposal and development of new techniques for the measurement of field velocities in hydraulic problems. The technological advances in digital cameras with high resolution and high speed found in the market, and the advances in digital image processing techniques, now provide a tremendous potential to measure and study the behavior of water surface flows. This technique was applied at the Laboratory of Hydraulics at the Technical University of Catalonia - Barcelona Tech to study the 2D velocity fields in the vicinity of a grate inlet. We used a platform to test grate inlet capacity with dimensions of 5.5 m long and 4 m wide, allowing a useful study zone of 5.5 m x 3 m, where the width is similar to an urban road lane. The platform allows the longitudinal slope to be modified from 0% to 10% and the transverse slope from 0% to 4%. Flow rates can reach 200 l/s. In addition, a high-resolution camera with 1280 x 1024 pixels and a maximum speed of 488 frames per second was used. A novel technique using particle image velocimetry to measure surface flow velocities has been developed and validated with the experimental data from the grate inlet capacity tests. In this case, the proposed methodology can become a useful tool to understand the velocity fields of the flow approaching the inlet, where traditional measuring equipment has serious problems and limitations. References: DigiFlow User Guide (2012). Russo, B., Gómez, M., & Tellez, J. (2013). Methodology to Estimate the Hydraulic Efficiency of Nontested Continuous Transverse Grates. Journal of Irrigation and Drainage Engineering, 139(10), 864-871. doi:10.1061/(ASCE)IR.1943-4774.0000625. Vila, T., Tellez, J., Sanchez, J.M., Sotillos, L., Diez, M., & Redondo, J.M. (2014). Diffusion in fractal wakes and convective thermoelectric flows. Geophysical Research Abstracts - EGU General Assembly 2014.
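The PIV entries in this listing (items 4 and 7 above) all rest on the same kernel: cross-correlate an interrogation window between consecutive frames and convert the correlation-peak displacement into velocity. A minimal FFT-based sketch; window contents, pixel pitch and frame interval are illustrative assumptions:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Displacement (dy, dx) in pixels of window win_b relative to win_a,
    from the peak of their FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around indices to signed shifts
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

def piv_velocity(win_a, win_b, pixel_size_m, dt_s):
    """Convert the pixel displacement into a velocity vector (m/s)."""
    dy, dx = piv_displacement(win_a, win_b)
    return dx * pixel_size_m / dt_s, dy * pixel_size_m / dt_s
```

Production PIV codes add sub-pixel peak interpolation and window overlap, but the displacement-by-correlation step above is the core of the measurement.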
8. [Dynamics of biomacromolecules in coherent electromagnetic radiation field]
Science.gov (United States)
Leshcheniuk, N S; Apanasevich, E E; Tereshenkov, V I
2014-01-01
It is shown that induced oscillations and periodic displacements of the equilibrium positions occur in biomacromolecules in the absence of electromagnetic radiation absorption, due to modulation of the interaction potential between atoms and groups of atoms forming the non-valence bonds in macromolecules by the external electromagnetic field. Such a "hyperoscillation" state inevitably causes changes in the biochemical properties of macromolecules and in conformational transformation times.

9. Radiography by selective detection of scatter field velocity components
Science.gov (United States)
Jacobs, Alan M. (Inventor); Dugan, Edward T. (Inventor); Shedlock, Daniel (Inventor)
2007-01-01
A reconfigurable collimated radiation detector, system and related method includes at least one collimated radiation detector. The detector has an adjustable collimator assembly including at least one feature, such as a fin, optically coupled thereto. Adjustment of the adjustable collimator selects particular directions of travel of scattered radiation emitted from an irradiated object which reach the detector. The collimated detector is preferably a collimated detector array, where the collimators are independently adjustable. The independent motion capability provides the ability to focus the image by selection of the desired scatter field components. When an array of reconfigurable collimated detectors is provided, separate image data can be obtained from each of the detectors and the respective images cross-correlated and combined to form an enhanced image.

10. Coherence and fluctuations in the interaction between moving atoms and a quantum field
International Nuclear Information System (INIS)
Hu, B.L.; Raval, A.
1998-01-01
Mesoscopic physics deals with three fundamental issues: quantum coherence, fluctuations and correlations. Here we analyze these issues for atom optics, using a simplified model of an assembly of atoms (or detectors, which are particles with some internal degree of freedom) moving in arbitrary trajectories in a quantum field. Employing the influence functional formalism, we study the self-consistent effect of the field on the atoms, and their mutual interactions via coupling to the field. We derive the coupled Langevin equations for the atom assemblage and analyze the relation of the dissipative dynamics of the atoms (detectors) to the correlations and fluctuations of the quantum field. This provides a useful theoretical framework for analyzing the coherent properties of atom-field systems. (author)

11. Electric field measurement in an atmospheric or higher pressure gas by coherent Raman scattering of nitrogen
International Nuclear Information System (INIS)
Ito, Tsuyohito; Kobayashi, Kazunobu; Hamaguchi, Satoshi; Mueller, Sarah; Luggenhoelscher, Dirk; Czarnetzki, Uwe
2009-01-01
The feasibility of electric field measurement based on field-induced coherent Raman scattering is demonstrated for the first time in a nitrogen-containing gas at atmospheric or higher pressure, including open air. The technique is especially useful for the determination of temporal and spatial profiles of the electric field in air-based microdischarges, where nitrogen is abundant. In our current experimental setup, the minimum detectable field strength in open air is about 100 V mm^-1, which is sufficiently small compared with the average field present in typical microdischarges.
No further knowledge of other gas/plasma parameters, such as the nitrogen density, is required. (fast track communication)

12. Drift velocities of 150-km Field-Aligned Irregularities observed by the Equatorial Atmosphere Radar
Directory of Open Access Journals (Sweden)
Yuichi Otsuka
2013-11-01
Full Text Available Between 130 and 170 km altitude in the daytime ionosphere, the so-called 150-km field-aligned irregularities (FAIs) have been observed since the 1960s at equatorial regions with several very high frequency (VHF) radars. We report statistical results on 150-km FAI drift velocities in the plane perpendicular to the geomagnetic field, acquired by analyzing the Doppler velocities of 150-km FAIs observed with the Equatorial Atmosphere Radar (EAR) at Kototabang, Indonesia, during the period from Aug. 2007 to Oct. 2009. We found that the southward/upward perpendicular drift velocity of the 150-km FAIs tends to decrease in the afternoon and that this feature is consistent with that of F-region plasma drift velocities over the magnetic equator. The zonal component of the 150-km FAI drift velocity is westward and decreases with time, whereas the F-region plasma drift velocity observed with the incoherent scatter radar at Jicamarca, Peru, which is westward, reaches a maximum at about noon. The southward/upward and zonal drift velocities of the 150-km FAIs are smaller than the F-region plasma drift velocity by approximately 3 m/s and 25 m/s, respectively, on average. The large difference between the 150-km FAI and F-region plasma drift velocities may not arise from a difference in the magnetic latitudes at which their electric fields are generated. Electric fields generated at the altitude at which the 150-km FAIs occur may not be negligible.

13. New Methods for Estimating Water Current Velocity Fields from Autonomous Underwater Vehicles
Science.gov (United States)
Kinsey, J. C.; Medagoda, L.
2016-02-01
Water current velocities are a crucial component of understanding oceanographic processes, and underwater robots, such as autonomous underwater vehicles (AUVs), provide a mobile platform for obtaining these observations. Estimating water current velocities requires both measurements of the water velocity, often obtained with an Acoustic Doppler Current Profiler (ADCP), as well as estimates of the vehicle velocity. Presently, vehicle velocities are supplied on the sea surface by GPS, or near the seafloor where Doppler Velocity Log (DVL) bottom-lock is available; however, this capability is unavailable in the mid-water column where DVL bottom-lock and GPS are unavailable. Here we present a method which calculates vehicle velocities using consecutive ADCP measurements in the mid-water using an extended Kalman filter (EKF). The correlation of the spatially changing water current states, along with mass transport and shear constraints on the water current field, is formulated using least-squares constraints. Results from the Sentry AUV from a mid-water surveying mission at Deepwater Horizon and a small-scale hydrothermal vent flux estimation mission suggest the method is suitable for real-time use. DVL data were denied to simulate mid-water missions and the results compared to ground-truth water velocity measurements estimated using DVL velocities. Results show quantifiable uncertainties in the water current velocities, along with similar performance for the DVL and no-DVL cases in the mid-water. This method has the potential to provide geo-referenced water velocity measurements from mobile ocean robots in the absence of GPS and DVL, as well as to estimate the uncertainty associated with the measurements.
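The core idea of the AUV entry above can be shown with a toy least-squares problem: each ADCP ping measures water velocity relative to the vehicle, so stacking pings whose depth cells overlap lets both the per-ping vehicle velocity and the water-column profile be solved jointly. The sketch below uses a batch solve with a simple smoothness prior instead of the authors' full EKF; every name and constraint weight is an illustrative simplification:

```python
import numpy as np

def solve_currents(adcp, first_cell, n_cells_total, smooth=1.0):
    """adcp[p, c]: one component of water-relative velocity seen by ping p
    in its c-th depth cell; first_cell[p]: index of that ping's first cell
    on a global water-column grid.  Unknowns: vehicle velocity per ping and
    water velocity per global cell, tied together by overlapping cells."""
    n_pings, n_bins = adcp.shape
    n_unknowns = n_pings + n_cells_total
    rows, rhs = [], []
    for p in range(n_pings):
        for c in range(n_bins):
            row = np.zeros(n_unknowns)
            row[n_pings + first_cell[p] + c] = 1.0   # water-cell velocity
            row[p] = -1.0                            # minus vehicle velocity
            rows.append(row)
            rhs.append(adcp[p, c])
    # smoothness prior (stand-in for the shear constraint) to regularize
    for k in range(n_cells_total - 1):
        row = np.zeros(n_unknowns)
        row[n_pings + k], row[n_pings + k + 1] = smooth, -smooth
        rows.append(row)
        rhs.append(0.0)
    # note: an overall constant offset is unobservable without GPS/DVL;
    # lstsq returns the minimum-norm solution, fixing that gauge freedom.
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol[:n_pings], sol[n_pings:]              # vehicle, water
```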
14. Dependence of EIA spectra on mutual coherence between coupling and probe fields in Cs atomic vapors
Energy Technology Data Exchange (ETDEWEB)
Kwon, Mi Rang; Kim, Kyoung Dae; Park, Hyun Deok; Kim, Jung Bog [Korea National University of Education, Chungwon (Korea, Republic of); Moon, Han Seb [Korea Research Institute of Standards and Science, Taejon (Korea, Republic of)
2002-03-01
We observed the dependence of electromagnetically induced absorption (EIA) spectra on the mutual coherence between the coupling and probe fields in the D2 Fg = 4 <-> Fe = 5 transition of Cs vapors at room temperature, where the coupling and probe fields were derived from either one laser source or two independent laser sources. Using one source with high mutual coherence, we found EIA spectral linewidths much narrower than 0.1 γ for a weak coupling field, and transparent spectra with linewidths narrower than 1 MHz within the subnatural absorption for a strong coupling field. On the other hand, when the two nearly mutually incoherent sources were used, the absorption profiles showed the same dependence on coupling power as the one-source spectra, but their linewidths were broad, on the order of the natural linewidth.

15. Full-field parallel interferometry coherence probe microscope for high-speed optical metrology
Science.gov (United States)
Safrani, A; Abdulhalim, I
2015-06-01
Parallel detection of several achromatic phase-shifted images is used to obtain a high-speed, high-resolution, full-field, optical coherence probe tomography system based on polarization interferometry. The high en-face imaging speed, short coherence gate, and high lateral resolution provided by the system are exploited to determine microbump height uniformity in an integrated semiconductor chip at 50 frames per second. The technique is demonstrated using the Linnik microscope, although it can be implemented on any polarization-based interference microscopy system.

16. Coherent and Semiclassical States of a Charged Particle in a Constant Electric Field
Science.gov (United States)
Adorno, T. C.; Pereira, A. S.
2018-05-01
The method of integrals of motion is used to construct families of generalized coherent states of a nonrelativistic spinless charged particle in a constant electric field. Families of states, differing in the values of their standard deviations at the initial time, are obtained. Depending on the initial values of the standard deviations, and also on the electric field, it turns out to be possible to identify some families with semiclassical states.

17. [Deep learning and neuronal networks in ophthalmology: Applications in the field of optical coherence tomography]
Science.gov (United States)
Treder, M; Eter, N
2018-04-19
Deep learning is increasingly becoming the focus of various imaging methods in medicine. Due to the large number of different imaging modalities, ophthalmology is particularly suited to this field of application. This article gives a general overview of the topic of deep learning and its current applications in the field of optical coherence tomography. For the benefit of the reader, it focuses on the clinical rather than the technical aspects.
18. Comparison of velocity and temperature fields for two types of spacers in an annular channel
Directory of Open Access Journals (Sweden)
Lávička David
2012-04-01
Full Text Available The paper deals with measurement of the flow field using a modern laser method (PIV) in an annular channel of very small dimensions - a fuel cell model. The velocity field was measured at several positions and planes around the spacer. The measurement was extended to record temperatures with thermocouples soldered into the stainless-steel tube wall, and focused on the cooling process of the preheated fuel cell tube model, where the tube was very slowly flooded with water. The main result of the paper is a comparison of two spacer designs with respect to the measured velocity and temperature fields.

19. Diffusion with intrinsic trapping in 2-d incompressible stochastic velocity fields
International Nuclear Information System (INIS)
Vlad, M.; Spineanu, F.; Misguich, J.H.; Balescu, R.
1998-10-01
A new statistical approach that applies to the high Kubo number regimes for particle diffusion in stochastic velocity fields is presented. This 2-dimensional model describes the partial trapping of the particles in the stochastic field. The results are close to the numerical simulations and also to the estimations based on percolation theory. (authors)

20. 3-D seismic velocity and attenuation structures in the geothermal field
Energy Technology Data Exchange (ETDEWEB)
Nugraha, Andri Dian [Global Geophysics Research Group, Faculty of Mining and Petroleum Engineering, Institute of Technology Bandung, Jalan Ganesha No. 10 Bandung, 40132 (Indonesia); Syahputra, Ahmad [Geophysical Engineering, Faculty of Mining and Petroleum Engineering, Institute of Technology Bandung, Jalan Ganesha No. 10 Bandung, 40132 (Indonesia); Fatkhan; Sule, Rachmat [Applied Geophysics Research Group, Faculty of Mining and Petroleum Engineering, Institute of Technology Bandung, Jalan Ganesha No. 10 Bandung, 40132 (Indonesia)
2013-09-09
We conducted delay-time tomography to determine 3-D seismic velocity structures (Vp, Vs, and the Vp/Vs ratio) using micro-seismic events in the geothermal field. The P- and S-wave arrival times of these micro-seismic events were used as input for the tomographic inversion. Our preliminary seismic velocity results show that the subsurface conditions of the geothermal field fairly well delineate the characteristics of the reservoir. We then extended our understanding of the subsurface physical properties by determining attenuation structures (Qp, Qs, and the Qs/Qp ratio) from the micro-seismic waveforms. We combined the seismic velocity and attenuation structures to obtain a much better interpretation of the reservoir characteristics. Our preliminary attenuation results show that the reservoir can be characterized more clearly by using the 3-D attenuation models of Qp, Qs, and the Qs/Qp ratio combined with the 3-D seismic velocity models of Vp, Vs, and the Vp/Vs ratio.
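In its simplest linearized form, delay-time tomography of the kind in the entry above reduces to a damped least-squares problem t = G·s, where G holds ray path lengths through model cells and s the cell slownesses. A minimal sketch that assumes G has already been built by a ray tracer; all shapes and the damping value are illustrative:

```python
import numpy as np

def damped_lsq_tomography(G, t_obs, s0, lam=0.1):
    """Solve min ||G s - t_obs||^2 + lam^2 ||s - s0||^2 for slowness s.
    G: (n_rays, n_cells) ray-length matrix from a ray tracer;
    t_obs: observed travel times; s0: starting slowness model."""
    n_cells = G.shape[1]
    A = np.vstack([G, lam * np.eye(n_cells)])
    b = np.concatenate([t_obs, lam * s0])
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    return s  # per-cell velocity is 1/s

# Vp/Vs follows from separate P and S inversions on the same cells:
# vp_vs = (1.0 / s_p) / (1.0 / s_s)
```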
1. Temperature and velocity measurement fields of fluids using a schlieren system
Science.gov (United States)
Martínez-González, Adrian; Guerrero-Viramontes, J A; Moreno-Hernández, David
2012-06-01
This paper proposes a combined method for two-dimensional temperature and velocity measurements in liquid and gas flow using a schlieren system. Temperature measurements are made by relating the intensity level of each pixel in a schlieren image to the corresponding knife-edge position measured at the exit focal plane of the schlieren system. The same schlieren images were also used to measure the velocity of the fluid flow. The measurement is made using particle image velocimetry (PIV): the PIV software used in this work analyzes motion between consecutive schlieren frames to obtain velocity fields. The proposed technique was applied to measure the temperature and velocity fields in the natural convection of water induced by a heated rectangular plate.

2. CONSTRAINING THE NFW POTENTIAL WITH OBSERVATIONS AND MODELING OF LOW SURFACE BRIGHTNESS GALAXY VELOCITY FIELDS
International Nuclear Information System (INIS)
Kuzio de Naray, Rachel; McGaugh, Stacy S.; Mihos, J. Christopher
2009-01-01
We model the Navarro-Frenk-White (NFW) potential to determine if, and under what conditions, the NFW halo appears consistent with the observed velocity fields of low surface brightness (LSB) galaxies. We present mock DensePak Integral Field Unit (IFU) velocity fields and rotation curves of axisymmetric and nonaxisymmetric potentials that are well matched to the spatial resolution and velocity range of our sample galaxies. We find that the DensePak IFU can accurately reconstruct the velocity field produced by an axisymmetric NFW potential and that a tilted-ring fitting program can successfully recover the corresponding NFW rotation curve. We also find that nonaxisymmetric potentials with fixed axis ratios change only the normalization of the mock velocity fields and rotation curves and not their shape. The shape of the modeled NFW rotation curves does not reproduce the data: these potentials are unable to simultaneously bring the mock data at both small and large radii into agreement with observations. Indeed, to match the slow rise of LSB galaxy rotation curves, a specific viewing angle of the nonaxisymmetric potential is required. For each of the simulated LSB galaxies, the observer's line of sight must be along the minor axis of the potential, an arrangement that is inconsistent with a random distribution of halo orientations on the sky.

3. Field's entropy in the atom-field interaction: Statistical mixture of coherent states
Energy Technology Data Exchange (ETDEWEB)
Zúñiga-Segundo, Arturo [Instituto Politécnico Nacional. ESFM Departamento de Física, Edificio 9 Unidad Profesional Adolfo López Mateos, CP 07738 CDMX (Mexico); Juárez-Amaro, Raúl [Universidad Tecnológica de la Mixteca, Apdo. Postal 71, Huajuapan de León, Oax., 69000 (Mexico); Aguilar-Loreto, Omar [Departamento de Ingenierías, CUCSur, Universidad de Guadalajara CP 48900, Autlán de Navarro, Jal. (Mexico); Moya-Cessa, Héctor M., E-mail: [email protected] [Instituto Nacional de Astrofísica, Óptica y Electrónica, Calle Luis Enrique Erro No. 1, Sta. Ma. Tonantzintla, Pue. CP 72840 (Mexico)
2017-04-15
We study the atom-field interaction when the field is in a mixture of coherent states. We show that in this case it is possible to calculate analytically the field entropy for times of the order of twice the collapse time. Such analytical results are obtained with the help of numerical analysis. We also give an expression in terms of Chebyshev polynomials for powers of density matrices. - Highlights: • We calculate the field entropy for times of the order of twice the collapse time. • We give a relation between powers of the density matrices of the subsystems. • Entropy operators for both subsystems are obtained.
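For reference alongside the NFW entry above (item 2), the circular velocity of an NFW halo has the textbook closed form V(r)^2 = V200^2 [ln(1+cx) − cx/(1+cx)] / (x [ln(1+c) − c/(1+c)]) with x = r/r200. A small sketch of that relation, with illustrative parameter values rather than anything fit in the paper:

```python
import numpy as np

def v_circ_nfw(r, r200, v200, c):
    """Circular velocity (same units as v200) of an NFW halo at radius r,
    given virial radius r200, virial velocity v200 and concentration c."""
    x = np.asarray(r, dtype=float) / r200
    mu = lambda y: np.log(1.0 + y) - y / (1.0 + y)   # enclosed-mass factor
    return v200 * np.sqrt(mu(c * x) / (x * mu(c)))

# Example: the steep inner rise characteristic of a cuspy halo
r = np.linspace(0.5, 50.0, 5)   # kpc, illustrative
print(v_circ_nfw(r, r200=100.0, v200=100.0, c=8.0))
```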
4. Dynamical Properties of Two Coupled Dissipative QED Cavities Driven by Coherent Fields
International Nuclear Information System (INIS)
Hou Bangpin; Sun Weili; Wang Shunjin; Wang Gang
2007-01-01
When two identical QED cavities driven by coherent fields are located in a uniform environment, then in addition to dissipation there appears an indirect coupling between the two cavities induced by the background fields. We investigate the effects of the coherent fields, the dissipation, and the incoherent coupling on the following dynamical properties of the system: photon transfer, reversible decoherence, quantum state transfer, etc. We find that the photons in the cavities do not leak completely into the environment, due to the collective coupling between the cavities and the environment, and that photons are transferred irreversibly from the cavity with more photons to the cavity with fewer, owing to the incoherent coupling, until they are equally distributed between the two cavities. The coherent field pumping of the two cavities increases the mean photon number and complements the revived magnitude of the reversible decoherence, but hinders quantum state transfer between the two cavities. The above phenomena may find applications in quantum communication and other basic fields.

5. A Unified Geodetic Vertical Velocity Field (UGVVF), Version 1.0
Science.gov (United States)
Schmalzle, G.; Wdowinski, S.
2014-12-01
Tectonic motion, volcanic inflation or deflation, as well as oil, gas and water pumping can induce vertical motion. In southern California these signals are inter-mingled. In tectonics, properly identifying regions that are contaminated by other signals can be important when estimating fault slip rates. Until recently, vertical deformation rates determined by high-precision Global Positioning Systems (GPS) had large uncertainties compared to the horizontal components and were rarely used to constrain tectonic models of fault motion. However, many continuously occupied GPS stations have been operating for ten or more years, often delivering uncertainties of ~1 mm/yr or less, providing better constraints for tectonic modeling. Various processing centers produced GPS time series and estimated vertical velocity fields, each with their own set of processing techniques and assumptions. We compare vertical velocity solutions estimated by seven data processing groups as well as two combined solutions (Figure 1). These groups include: Central Washington University (CWU) and New Mexico Institute of Technology (NMT), and their combined solution provided by the Plate Boundary Observatory (PBO) through the UNAVCO website. Also compared are the Jet Propulsion Laboratory (JPL) and Scripps Orbit and Permanent Array Center (SOPAC) and their combined solution provided as part of the NASA MEaSUREs project. Smaller velocity fields included are from Amos et al., 2014, processed at the Nevada Geodetic Laboratory, Shen et al., 2011, processed by UCLA and called the Crustal Motion Map 4.0 (CMM4) dataset, and a new velocity field provided by the University of Miami (UM). Our analysis includes estimating and correcting for systematic vertical velocity and uncertainty differences between groups. Our final product is a unified velocity field that contains the median values of the adjusted velocity fields and their uncertainties. This product will be periodically updated when new velocity fields
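The combination step described in the UGVVF entry above can be illustrated with a small sketch: align each group's solution to a reference via the median offset at common stations, then take the station-wise median across groups. This is illustrative logic only, not the UGVVF processing code:

```python
import numpy as np

def unify_vertical_velocities(solutions, ref="PBO"):
    """solutions: {group: {station: vertical velocity (mm/yr)}}.
    Returns {station: median velocity} after removing each group's median
    offset relative to the reference group at common stations."""
    ref_sol = solutions[ref]
    aligned = {}
    for group, sol in solutions.items():
        common = set(sol) & set(ref_sol)
        offset = np.median([sol[s] - ref_sol[s] for s in common]) if common else 0.0
        aligned[group] = {s: v - offset for s, v in sol.items()}
    stations = set().union(*(set(a) for a in aligned.values()))
    return {s: float(np.median([a[s] for a in aligned.values() if s in a]))
            for s in stations}
```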
6. Optically controlled locking of the nuclear field via coherent dark-state spectroscopy
Science.gov (United States)
Xu, Xiaodong; Yao, Wang; Sun, Bo; Steel, Duncan G; Bracker, Allan S; Gammon, Daniel; Sham, L J
2009-06-25
A single electron or hole spin trapped inside a semiconductor quantum dot forms the foundation for many proposed quantum logic devices. In group III-V materials, the resonance and coherence between two ground states of the single spin are inevitably affected by the lattice nuclear spins through the hyperfine interaction, while the dynamics of the single spin also influence the nuclear environment. Recent efforts have been made to protect the coherence of spins in quantum dots by suppressing the nuclear spin fluctuations. However, coherent control of a single spin in a single dot with simultaneous suppression of the nuclear fluctuations has yet to be achieved. Here we report the suppression of nuclear field fluctuations in a singly charged quantum dot to well below the thermal value, as shown by an enhancement of the single electron spin dephasing time T2*, which we measure using coherent dark-state spectroscopy. The suppression of nuclear fluctuations is found to result from a hole-spin-assisted dynamic nuclear spin polarization feedback process, where the stable value of the nuclear field is determined only by the laser frequencies at fixed laser powers. This nuclear field locking is further demonstrated in a three-laser measurement, indicating a possible enhancement of the electron spin T2* by a factor of several hundred. This is a simple and powerful method of enhancing the electron spin coherence time without the use of 'spin echo'-type techniques. We expect that our results will enable the reproducible preparation of the nuclear spin environment for repetitive control and measurement of a single spin with minimal statistical broadening.

7. Anomalous scaling of a passive vector advected by the Navier-Stokes velocity field
International Nuclear Information System (INIS)
Jurcisinova, E; Jurcisin, M; Remecky, R
2009-01-01
Using the field-theoretic renormalization group and the operator-product expansion, the model of a passive vector field (a weak magnetic field in the framework of kinematic MHD) advected by the velocity field governed by the stochastic Navier-Stokes equation with a Gaussian random stirring force, δ-correlated in time and with correlator proportional to k^(4-d-2ε), is investigated to first order in ε (one-loop approximation). It is shown that the single-time correlation functions of the advected vector field have anomalous scaling behavior, and the corresponding exponents are calculated in the isotropic case as well as in the case with large-scale anisotropy. The hierarchy of the anisotropic critical dimensions is briefly discussed, and the persistence of the anisotropy inside the inertial range is demonstrated on the behavior of the skewness and hyperskewness (dimensionless ratios of correlation functions) as functions of the Reynolds number Re.
It is shown that even though the present model of a passive vector field advected by a realistic velocity field is mathematically more complicated than, on the one hand, the corresponding models of a passive vector field advected by 'synthetic' Gaussian velocity fields and, on the other hand, the corresponding model of a passive scalar quantity advected by a velocity field driven by the stochastic Navier-Stokes equation, the final one-loop approximate asymptotic scaling behavior of the single-time correlation or structure functions of the advected fields is, in all these models, defined by the same anomalous dimensions (up to normalization).

8. Suppression of thermal noise in a non-Markovian random velocity field
International Nuclear Information System (INIS)
Ueda, Masahiko
2016-01-01
We study the diffusion of Brownian particles in a Gaussian random velocity field with short memory. By extending the derivation of an effective Fokker-Planck equation for the Langevin equation with weakly colored noise to a random velocity-field problem, we find that the effect of thermal noise on particles is suppressed by the existence of memory. We also find that the renormalization effect for the relative diffusion of two particles is stronger than that for single-particle diffusion. The results are compared with those of molecular dynamics simulations. (paper: classical statistical mechanics, equilibrium and non-equilibrium)

9. Determine of velocity field with PIV and CFD during the flow around of bridge piers
Directory of Open Access Journals (Sweden)
Picka D.
2013-04-01
The article describes the processing of the junior research project FAST-J-11-51/1456, which dealt with physical and CFD modelling of the velocity field during flow around bridge piers. Physical modelling was carried out in the Laboratory of Water Management Research of the Institute of Water Structures at the Brno University of Technology - Faculty of Civil Engineering. The laser-based measuring method PIV (Particle Image Velocimetry) was used to measure the velocity field in the profile of the bridge piers. The PIV results served as a basis for comparing the experimental data with CFD results for this type of flow in the commercial software ANSYS CFX.

10. Visualization of velocity field and phase distribution in gas-liquid two-phase flow by NMR imaging
International Nuclear Information System (INIS)
Matsui, G.; Monji, H.; Obata, J.
2004-01-01
NMR imaging has been applied in the field of fluid mechanics, mainly to single-phase flow, to visualize the instantaneous flow velocity field. In the present study, NMR imaging was used to visualize simultaneously both the instantaneous phase structure and the velocity field of gas-liquid two-phase flow. Two methods of NMR imaging were applied. One is useful for visualizing both one component of the liquid velocity and the phase distribution. This method was applied to horizontal two-phase flow and to a bubble rising in stagnant oil. It succeeded in producing images of the velocity field and phase distribution on the cross section of the pipe. The other method is used to visualize a two-dimensional velocity field. This method was applied to a bubble rising in stagnant water. The velocity field was visualized before and after the passage of a bubble at the measuring cross section. Furthermore, the distribution of liquid velocity was obtained. (author)
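The Brownian-diffusion problem summarized in the Ueda record above lends itself to a compact numerical illustration. The sketch below is not the paper's effective Fokker-Planck calculation: it is a minimal Euler-Maruyama simulation, assuming an overdamped particle advected by a frozen one-dimensional Gaussian random velocity field synthesized from random Fourier modes, with all parameter values chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen 1D Gaussian random velocity field u(x), built from random Fourier modes.
# (Illustrative stand-in for the correlated random velocity fields in the paper.)
n_modes = 32
k = rng.uniform(0.5, 5.0, n_modes)            # mode wavenumbers
phase = rng.uniform(0.0, 2 * np.pi, n_modes)  # random phases
amp = rng.normal(0.0, 1.0 / np.sqrt(n_modes), n_modes)

def u(x):
    """Velocity field sampled at particle positions x (1D array)."""
    return np.sum(amp[:, None] * np.cos(k[:, None] * x[None, :] + phase[:, None]), axis=0)

# Euler-Maruyama integration of dx = u(x) dt + sqrt(2 D) dW (overdamped Langevin).
D, dt, n_steps, n_particles = 0.1, 1e-3, 5000, 300
x = np.zeros(n_particles)
for _ in range(n_steps):
    x += u(x) * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=n_particles)

print("mean-square displacement:", np.mean(x ** 2))
```

Re-running with amp set to zero isolates the purely thermal spread, which gives a feel for how the advecting field modifies the effective diffusion.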
11. Electric field detection of coherent synchrotron radiation in a storage ring generated using laser bunch slicing
International Nuclear Information System (INIS)
Katayama, I.; Shimosato, H.; Bito, M.; Furusawa, K.; Adachi, M.; Zen, H.; Kimura, S.; Katoh, M.; Shimada, M.; Yamamoto, N.; Hosaka, M.; Ashida, M.
2012-01-01
The electric field of coherent synchrotron radiation (CSR) generated by laser bunch slicing in a storage ring has been detected by an electro-optic sampling method. The gate pulses for sampling are sent through a large-mode-area photonic-crystal fiber. The observed electric field profile of the CSR is in good agreement with the spectrum of the CSR observed using Fourier-transform far-infrared spectrometry, indicating good phase stability in the CSR. The longitudinal density profiles of electrons modulated by the laser pulses were evaluated from the electric field profile.

12. Particle size, magnetic field, and blood velocity effects on particle retention in magnetic drug targeting.
Science.gov (United States)
Cherry, Erica M; Maxim, Peter G; Eaton, John K
2010-01-01
A physics-based model of a general magnetic drug targeting (MDT) system was developed with the goal of understanding the practical limitations of MDT when electromagnets are the source of the magnetic field. The simulation tracks magnetic particles subject to gravity, drag force, magnetic force, and hydrodynamic lift in specified flow fields and external magnetic field distributions. A model problem was analyzed to determine the effect of drug particle size, blood flow velocity, and magnetic field gradient strength on the efficiency of holding particles stationary in a laminar Poiseuille flow modeling blood flow in a medium-sized artery. It was found that the particle retention rate increased with increasing particle diameter and magnetic field gradient strength, and decreased with increasing bulk flow velocity. The results suggest that MDT systems with electromagnets are unsuitable for use in small arteries because it is difficult to control particles smaller than about 20 μm in diameter.

13. Measurement of velocity field in pipe with classic twisted tape using matching refractive index technique
Energy Technology Data Exchange (ETDEWEB)
Song, Min Seop; Park, So Hyun; Kim, Eung Soo [Seoul National Univ., Seoul (Korea, Republic of)]
2014-10-15
Many researchers have conducted experiments and numerical simulations to measure or predict the Nusselt number or friction factor in a pipe with a twisted tape, while other studies have focused on heat transfer enhancement using various twisted tape configurations. However, since optical access to the inner space of a pipe with a twisted tape is limited, detailed flow field data have not been obtainable so far, and researchers have mainly relied on numerical simulations for such data. In this study, a 3D printing technique was used to manufacture a transparent test section for optical access, and a novel refractive-index-matching technique was used to eliminate optical distortion. These two combined techniques made it possible to measure the velocity profile with Particle Image Velocimetry (PIV). The measured velocity field data can be used either to understand the fundamental flow characteristics around a twisted tape or to validate turbulence models in Computational Fluid Dynamics (CFD). In this study, the flow field in the test section was measured for various flow conditions and was finally compared with numerically calculated data.
Velocity fields in a pipe with a classic twisted tape were measured using a particle image velocimetry (PIV) system. To obtain undistorted particle images, a novel optical technique, refractive index matching, was used, and it was shown that high-quality images can be obtained with this experimental equipment. The velocity data from the PIV were compared with the CFD simulations.

14. Measurement of velocity field in pipe with classic twisted tape using matching refractive index technique
International Nuclear Information System (INIS)
Song, Min Seop; Park, So Hyun; Kim, Eung Soo
2014-01-01
Many researchers have conducted experiments and numerical simulations to measure or predict the Nusselt number or friction factor in a pipe with a twisted tape, while other studies have focused on heat transfer enhancement using various twisted tape configurations. However, since optical access to the inner space of a pipe with a twisted tape is limited, detailed flow field data have not been obtainable so far, and researchers have mainly relied on numerical simulations for such data. In this study, a 3D printing technique was used to manufacture a transparent test section for optical access, and a novel refractive-index-matching technique was used to eliminate optical distortion. These two combined techniques made it possible to measure the velocity profile with Particle Image Velocimetry (PIV). The measured velocity field data can be used either to understand the fundamental flow characteristics around a twisted tape or to validate turbulence models in Computational Fluid Dynamics (CFD). In this study, the flow field in the test section was measured for various flow conditions and was finally compared with numerically calculated data. Velocity fields in a pipe with a classic twisted tape were measured using a particle image velocimetry (PIV) system. To obtain undistorted particle images, a novel optical technique, refractive index matching, was used, and it was shown that high-quality images can be obtained with this experimental equipment. The velocity data from the PIV were compared with the CFD simulations.

15. Methodology to estimate the relative pressure field from noisy experimental velocity data
International Nuclear Information System (INIS)
Bolin, C D; Raguin, L G
2008-01-01
The determination of intravascular pressure fields is important to the characterization of cardiovascular pathology. We present a two-stage method that solves the inverse problem of estimating the relative pressure field from noisy velocity fields measured by phase-contrast magnetic resonance imaging (PC-MRI) on an irregular domain with limited spatial resolution, and includes a filter for the experimental noise. For the pressure calculation, the Poisson pressure equation is solved by embedding the irregular flow domain in a regular domain. To lessen the propagation of the noise inherent in the velocity measurements, three filters - a median filter and two physics-based filters - are evaluated using a 2-D Couette flow. The two physics-based filters outperform the median filter in estimating the relative pressure field for realistic signal-to-noise ratios (SNR = 5 to 30). The most accurate pressure field results from a filter that applies three constraints simultaneously in a least-squares sense: consistency between the measured and filtered velocity fields, a divergence-free condition, and an additional smoothness condition.
This filter leads to a 5-fold gain in accuracy for the estimated relative pressure field compared to no noise filtering, under conditions consistent with PC-MRI of the carotid artery: SNR = 5, 20 × 20 discretized flow domain (25 × 25 computational domain).

16. VELOCITY FIELD OF COMPRESSIBLE MAGNETOHYDRODYNAMIC TURBULENCE: WAVELET DECOMPOSITION AND MODE SCALINGS
International Nuclear Information System (INIS)
Kowal, Grzegorz; Lazarian, A.
2010-01-01
We study compressible magnetohydrodynamic turbulence, which holds the key to many astrophysical processes, including star formation and cosmic-ray propagation. To account for the variations of the magnetic field in the strongly turbulent fluid, we use a wavelet decomposition of the turbulent velocity field into Alfven, slow, and fast modes, which presents an extension of the Cho and Lazarian decomposition approach based on Fourier transforms. The wavelets allow us to follow the variations of the local direction of the magnetic field and therefore improve the quality of the decomposition compared to the Fourier transforms, which are done in the mean-field reference frame. For each resulting component, we calculate the spectra and two-point statistics such as longitudinal and transverse structure functions, as well as higher-order intermittency statistics. In addition, we perform a Helmholtz-Hodge decomposition of the velocity field into incompressible and compressible parts and analyze these components. We find that the turbulence intermittency is different for different components, and we show that the intermittency statistics depend on whether the phenomenon was studied in the global reference frame related to the mean magnetic field or in the frame defined by the local magnetic field. The dependencies of the measures we obtained are different for different components of the velocity; for instance, we show that while the Alfven mode intermittency changes marginally with the Mach number, the intermittency of the fast mode is substantially affected by the change.

17. Time-depth and velocity trend analysis of the Wasagu field ...
African Journals Online (AJOL)
From this study of data sets from the Wasagu field in the Niger Delta, it has been found ... relief) cause big differences in bed velocities or where anisotropy is severe. ... of seismic data and checkshot data sets, which lie three wells, a relationship ...

18. Near field acoustic holography based on the equivalent source method and pressure-velocity transducers
DEFF Research Database (Denmark)
Zhang, Y.-B.; Chen, X.-Z.; Jacobsen, Finn
2009-01-01
The advantage of using the normal component of the particle velocity rather than the sound pressure in the hologram plane as the input of conventional spatial Fourier transform based near field acoustic holography (NAH) and also as the input of the statistically optimized variant of NAH has recen... generated by sources on the two sides of the hologram plane is also examined....

19. Factors controlling the field settling velocity of cohesive sediment in estuaries
DEFF Research Database (Denmark)
Pejrup, Morten; Mikkelsen, Ole
2010-01-01
... in the correlation of the description of W-50 and the controlling parameters from each area can be obtained. A generic algorithm describing the data from all the investigated areas is suggested. It works well within specific tidal areas but fails to give a generic description of the field settling velocity....
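The pressure-from-velocity step described in the Bolin and Raguin record (entry 15 above) can be illustrated separately from their embedding and filtering machinery. The sketch below rests on assumptions that are ours, not the authors': a steady, incompressible 2D flow on a periodic box, a spectral Poisson solver, and a synthetic Taylor-Green velocity field standing in for measured PC-MRI data.

```python
import numpy as np

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.cos(X) * np.sin(Y)            # synthetic Taylor-Green velocity field
v = -np.sin(X) * np.cos(Y)

k = 2.0 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
KX, KY = np.meshgrid(k, k, indexing="ij")

def ddx(f):  # spectral x-derivative
    return np.real(np.fft.ifft2(1j * KX * np.fft.fft2(f)))

def ddy(f):  # spectral y-derivative
    return np.real(np.fft.ifft2(1j * KY * np.fft.fft2(f)))

# Divergence of the steady momentum equation gives the pressure Poisson equation:
# lap(p) = -(u_x^2 + 2 u_y v_x + v_y^2), with density set to 1.
rhs = -(ddx(u) ** 2 + 2.0 * ddy(u) * ddx(v) + ddy(v) ** 2)

K2 = KX ** 2 + KY ** 2
K2[0, 0] = 1.0                       # avoid division by zero for the mean mode
p_hat = np.fft.fft2(rhs) / (-K2)
p_hat[0, 0] = 0.0                    # relative pressure: fix the mean to zero
p = np.real(np.fft.ifft2(p_hat))

p_exact = -0.25 * (np.cos(2 * X) + np.cos(2 * Y))
print("max abs error:", np.abs(p - p_exact).max())  # ~1e-15 for this clean field
```

With clean synthetic data the recovered relative pressure matches the analytic Taylor-Green pressure to machine precision; with noisy measured velocities the gradient terms amplify the noise, which is exactly where the paper's physics-based filters come in.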
20. Wet and gassy zones in a municipal landfill from P- and S-wave velocity fields
NARCIS (Netherlands)
Konstantaki, L.A.; Ghose, R.; Draganov, D.S.; Heimovaara, T.J.
2016-01-01
The knowledge of the distribution of leachate and gas in a municipal landfill is of vital importance to landfill operators performing improved landfill treatments, and for environmental protection and efficient biogas extraction. We have explored the potential of using the velocity fields of

1. Three-Dimensional Velocity Field De-Noising using Modal Projection
Science.gov (United States)
Frank, Sarah; Ameli, Siavash; Szeri, Andrew; Shadden, Shawn
2017-11-01
PCMRI and Doppler ultrasound are common modalities for imaging velocity fields inside the body (e.g. blood, air, etc.), and PCMRI is increasingly being used for other fluid mechanics applications where optical imaging is difficult. This type of imaging is typically applied to internal flows, which are strongly influenced by domain geometry. While these technologies are evolving, it remains that measured data are noisy and boundary layers are poorly resolved. We have developed a boundary modal analysis method to de-noise 3D velocity fields such that the resulting field is divergence-free and satisfies no-slip/no-penetration boundary conditions. First, two sets of divergence-free modes are computed based on domain geometry. The first set accounts for flow through "truncation boundaries", and the second set of modes has no-slip/no-penetration conditions imposed on all boundaries. The modes are calculated by minimizing the velocity gradient throughout the domain while enforcing a divergence-free condition. The measured velocity field is then projected onto these modes using a least-squares algorithm. This method is demonstrated on CFD simulations with artificial noise. Different degrees of noise and different numbers of modes are tested to reveal the capabilities of the approach. American Heart Association Award 17PRE33660202.

2. Velocity field measurements in an evaporating sessile droplet by means of micro-PIV technique
Directory of Open Access Journals (Sweden)
Yagodnitsyna Anna
2016-01-01
Velocity fields are measured in evaporating sessile droplets on two substrates with different contact angles and contact-angle hysteresis using the micro-resolution particle image velocimetry technique. Different flow patterns are observed at different stages of droplet evaporation: a flow with vortices and a radial flow. The flow structure is found to be similar for droplets on the different substrates.

3. Dynamics of the solar magnetic field. V. Velocities associated with changing magnetic fields
International Nuclear Information System (INIS)
Levine, R.H.; Nakagawa, Y.
1975-01-01
Methods of determining horizontal velocities from the magnetic induction equation on the basis of a time series of magnetogram observations are discussed. For the flare of 1972 August 7, it is shown that a previously developed method of predicting positions of likely flare activity provides reasonable agreement with observations. Limitations of this type of solution of the magnetic induction equation are pointed out, and unambiguous solutions, corresponding to phenomenological determinations of velocity patterns under various physical circumstances, are presented for simple magnetic configurations. Implications for the analysis of changes in a series of magnetogram observations are discussed.
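Several records above rest on PIV (the twisted-tape pipe measurements and the sessile-droplet study, among others). The core computation of any PIV code is the cross-correlation of two interrogation windows; below is a minimal FFT-based sketch with integer-pixel accuracy only (no subpixel peak fit, no window deformation), and the synthetic-image test is our own construction rather than anything from these studies.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement between two interrogation windows,
    estimated from the peak of their FFT-based cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts in the range [-N/2, N/2).
    return [p - s if p >= s // 2 else p for p, s in zip(peak, corr.shape)]

# Synthetic check: a random particle image shifted by (3, -5) pixels.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))
print(piv_displacement(img, shifted))  # -> [3, -5]
```

Dividing the displacement by the inter-frame time and multiplying by the image scale converts it to velocity; production PIV codes add subpixel peak interpolation and outlier validation on top of this kernel.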
4. Near-field Oblique Remote Sensing of Stream Water-surface Elevation, Slope, and Surface Velocity
Science.gov (United States)
Minear, J. T.; Kinzel, P. J.; Nelson, J. M.; McDonald, R.; Wright, S. A.
2014-12-01
A major challenge for estimating discharges during flood events or in steep channels is the difficulty and hazard inherent in obtaining in-stream measurements. One possible solution is to use near-field remote sensing to obtain simultaneous water-surface elevations, slope, and surface velocities. In this test case, we utilized Terrestrial Laser Scanning (TLS) to remotely measure water-surface elevations and slope, in combination with surface velocities estimated from particle image velocimetry (PIV) obtained by video camera and/or infrared camera. We tested this method at several sites in New Mexico and Colorado using independent validation data consisting of in-channel measurements from survey-grade GPS and Acoustic Doppler Current Profiler (ADCP) instruments. Preliminary results indicate that for relatively turbid or steep streams, TLS collects tens of thousands of water-surface elevations and slopes in minutes, much faster than conventional means and with relatively high precision, at least as good as continuous survey-grade GPS measurements. Estimated surface velocities from this technique are within 15% of the measured velocity magnitudes and within 10 degrees of the measured velocity direction (using extrapolation from the shallowest bin of the ADCP measurements). Accurately aligning the PIV results into Cartesian coordinates appears to be one of the main sources of error, primarily due to the sensitivity at these shallow oblique look angles and the low number of stationary objects available for rectification. Combining remotely sensed water-surface elevations, slope, and surface velocities produces simultaneous velocity measurements from a large number of locations in the channel and is more spatially extensive than traditional velocity measurements. These factors make this technique useful for improving estimates of flow measurements during flood flows and in steep channels while also decreasing the difficulty and hazard associated with making measurements in these

5. Field Dependent Coherence Length in the Superclean, High-κ Superconductor CeCoIn5
International Nuclear Information System (INIS)
DeBeer-Schmitt, L.; Eskildsen, M. R.; Dewhurst, C. D.; Hoogenboom, B. W.; Petrovic, C.
2006-01-01
Using small-angle neutron scattering, we have studied the flux-line lattice (FLL) in the superclean, high-κ superconductor CeCoIn5. The FLL undergoes a first-order symmetry and reorientation transition at ∼0.55 T at 50 mK. In addition, the FLL form factor in this material is found to be independent of the applied magnetic field, in striking contrast to the exponential decrease usually observed in superconductors. This result is consistent with a strongly field-dependent coherence length, proportional to the vortex separation.

6. Field test and theoretical analysis of electromagnetic pulse propagation velocity on crossbonded cable systems
DEFF Research Database (Denmark)
Jensen, Christian Flytkjær; Bak, Claus Leth; Gudmundsdottir, Unnur Stella
2014-01-01
In this paper, the electromagnetic pulse propagation velocity on a three-phase cable system, consisting of three single-core (SC) cables in flat formation with an earth continuity conductor, is studied. The propagation velocity is an important parameter for most travelling-wave off- and online
fault location methods and needs to be exactly known for optimal performance of these algorithm types. Field measurements are carried out on a 6.9 km and a 31.4 km 245 kV crossbonded cable system, and the results are analysed using modal decomposition theory. Several ways for determining...

7. Velocity-space particle loss in field-reversed theta pinches
International Nuclear Information System (INIS)
Hsiao, M.Y.
1983-01-01
A field-reversed theta pinch (FRTP) is a compact device for magnetic fusion. It has attracted much attention in recent years since encouraging experimental results have been obtained. However, the definite causes of the observed particle loss rate and plasma rotation are not well known. In this work, we study the velocity-space particle loss (VSPL), i.e., particle loss due to the existence of a loss region in velocity space, in FRTPs in order to gain a better understanding of the characteristics of this device.

8. The effect of spatially varying velocity field on the transport of radioactivity in a porous medium.
Science.gov (United States)
2016-10-01
In the event of an accidental leak of immobilized nuclear waste from an underground repository, the waste may come into contact with the flow of underground water and start migrating. Depending on the nature of the geological medium, the flow velocity of water may vary spatially. Here, we report a numerical study on the migration of radioactivity due to a space-dependent flow field. For a detailed analysis, seven different types of velocity profiles are considered and the corresponding concentrations are compared. Copyright © 2016 Elsevier Ltd. All rights reserved.

9. Viscosity estimation utilizing flow velocity field measurements in a rotating magnetized plasma
International Nuclear Information System (INIS)
Yoshimura, Shinji; Tanaka, Masayoshi Y.
2008-01-01
The importance of viscosity in determining plasma flow structures has been widely recognized. In laboratory plasmas, however, viscosity measurements have seldom been performed so far. In this paper we present and discuss a method for estimating the effective plasma kinematic viscosity from flow velocity field measurements. Imposing steady and axisymmetric conditions, we derive an expression for the radial flow velocity from the azimuthal component of the ion fluid equation. The expression contains the kinematic viscosity, the vorticity of the azimuthal rotation and its derivative, the collision frequency, the azimuthal flow velocity, and the ion cyclotron frequency. Therefore all quantities except the viscosity are given, provided that the flow field can be measured. We applied this method to a rotating magnetized argon plasma produced in the Hyper-I device. The flow velocity field measurements were carried out using a directional Langmuir probe installed in a tilting motor-drive unit. An inward radial ion flow, which is not driven in collisionless inviscid plasmas, was clearly observed. As a result, we found an anomalous viscosity whose value is two orders of magnitude larger than the classical one. (author)

10. A new GPS velocity field in the south-western Balkans: insights for continental dynamics
Science.gov (United States)
D'Agostino, N.; Avallone, A.; Duni, L.; Ganas, A.; Georgiev, I.; Jouanne, F.; Koci, R.; Kuka, N.; Metois, M.
2017-12-01
The Balkans peninsula is an area of active distributed deformation located at the southern boundary of the Eurasian plate.
Relatively low strain rates and logistical reasons have so far limited the characterization and definition of the active tectonics and crustal kinematics. The increasing number of GNSS stations belonging to national networks deployed for scientific and cadastral purposes now provides the opportunity to improve the knowledge of the crustal kinematics in this area and to define a cross-national velocity field that illuminates the active tectonic deformation. In this work we homogeneously processed data from the south-western Balkans and neighbouring regions using available RINEX files from scientific and cadastral networks (ALBPOS, EUREF, HemusNET, ITALPOS, KOPOS, MAKPOS, METRICA, NETGEO, RING, TGREF). In order to analyze and interpret station velocities relative to the Eurasian plate and to reduce the common-mode signal, we updated the Eurasian terrestrial reference frame described in Métois et al. 2015. Starting from this dataset, we present a new GPS velocity field covering the south-western part of the Balkan Peninsula. Using this new velocity field, we derive the strain-rate tensor to analyze the regional style of deformation. Our results (1) improve the picture of the general southward flow of the crust characterizing the south-western Balkans behind the contractional belt at the boundary with the Adriatic, and (2) provide new key elements for the understanding of continental dynamics in this part of the Eurasian plate boundary.

11. Stopping power for arbitrary angle between test particle velocity and magnetic field
International Nuclear Information System (INIS)
Cereceda, Carlo; Peretti, Michel de; Deutsch, Claude
2005-01-01
Using the longitudinal dielectric function derived previously for charged test particles in helical movement around magnetic field lines, the numerical convergence of the series involved is established, and the double numerical integrations over the wave-vector components are performed, yielding the stopping power for an arbitrary angle between the test particle velocity and the magnetic field. Calculations are performed for particle Larmor radii larger and shorter than the Debye length, i.e., for protons in a cold magnetized plasma and for thermonuclear α particles in a dense, hot, and strongly magnetized plasma. A strong decrease in the energy loss is found as the angle varies from 0 to π/2. The range of thermonuclear α particles as a function of the velocity angle with respect to the magnetic field is also given.

12. Electric-field control of magnetic domain-wall velocity in ultrathin cobalt with perpendicular magnetization.
Science.gov (United States)
Chiba, D; Kawaguchi, M; Fukami, S; Ishiwata, N; Shimamura, K; Kobayashi, K; Ono, T
2012-06-06
Controlling the displacement of a magnetic domain wall is potentially useful for information processing in magnetic non-volatile memories and logic devices. A magnetic domain wall can be moved by applying an external magnetic field and/or electric current, and its velocity depends on their magnitudes. Here we show that applying an electric field can change the velocity of a magnetic domain wall significantly. A field-effect device, consisting of a top-gate electrode, a dielectric insulator layer, and a wire-shaped ferromagnetic Co/Pt thin layer with perpendicular anisotropy, was used to observe this in a finite magnetic field. We found that the application of electric fields in the range of ±2-3 MV cm⁻¹ can change the magnetic domain-wall velocity in its creep regime (10⁻⁶-10⁻³ m s⁻¹) by more than an order of magnitude.
This significant change is due to electrical modulation of the energy barrier for magnetic domain-wall motion.

13. Modeling and Simulation of a Non-Coherent Frequency Shift Keying Transceiver Using a Field Programmable Gate Array (FPGA)
National Research Council Canada - National Science Library
Voskakis, Konstantinos
2008-01-01
...) receiver-transmitter in a Field Programmable Gate Array (FPGA). After introducing the theory behind the Non-Coherent BFSK demodulation implemented at the receiver, the design of the transmitter and receiver is illustrated...

14. Velocity Field Measurements of Human Coughing Using Time Resolved Particle Image Velocimetry
Science.gov (United States)
Khan, T.; Marr, D. R.; Higuchi, H.; Glauser, M. N.
2003-11-01
Quantitative fluid mechanics analysis of human coughing has been carried out using new Time-Resolved Particle Image Velocimetry (TRPIV). The study involves measurement of velocity-vector time histories and velocity profiles, and is focused on average normal human coughing. Some past work on cough mechanics has involved measurement of flow rates, tidal volumes, and sub-glottis pressure; however, data on the unsteady velocity vector field of the exiting, highly time-dependent jets have not been available. In this study, human cough waveform data are first acquired in vivo using conventional respiratory instrumentation for volunteers of different gender/age groups. The representative waveform is then reproduced with a coughing/breathing simulator (with or without a manikin) for TRPIV measurements and analysis. The results of this study would be useful not only for the design of indoor air quality and heating, ventilation, and air-conditioning systems, but also for devising means of protection against infectious diseases.

15. Measurement of the blood flow rate and velocity in coronary artery stenosis using intracoronary frequency domain optical coherence tomography: Validation against fractional flow reserve.
Science.gov (United States)
Zafar, Haroon; Sharif, Faisal; Leahy, Martin J
2014-12-01
The main objective of this study was to assess the blood flow rate and velocity in coronary artery stenosis using intracoronary frequency domain optical coherence tomography (FD-OCT). A correlation between fractional flow reserve (FFR) and FD-OCT derived blood flow velocity is also included in this study. A total of 20 coronary stenoses in 15 patients were assessed consecutively by quantitative coronary angiography (QCA), FFR, and FD-OCT. A percutaneous coronary intervention (PCI) optimization system was used in this study, which combines wireless FFR measurement and FD-OCT imaging in one platform. Stenoses were labelled severe if FFR ≤ 0.8. The blood flow rate and velocity in each stenosis segment were derived from volumetric analysis of the FD-OCT pullback images. The FFR value was ≤ 0.80 in 5 stenoses (25%). The mean blood flow rate in severe coronary stenoses (n = 5) was 2.54 ± 0.55 ml/s, compared to 4.81 ± 1.95 ml/s in stenoses with FFR > 0.8 (n = 15). A good and significant correlation between FFR and FD-OCT blood flow velocity in coronary artery stenosis (r = 0.74, p < 0.001) was found. The assessment of stenosis severity using FD-OCT derived blood flow rate and velocity has the ability to overcome many limitations of QCA and intravascular ultrasound (IVUS).
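The Zafar record above derives flow rate and velocity from volumetric analysis of the FD-OCT pullback images without spelling out the algebra. Whatever the exact pipeline, any such estimate ultimately rests on conservation of mass, v = Q/A. A minimal sketch follows; the lumen areas and flow rate below are fabricated for illustration, not data from the study.

```python
import numpy as np

# Hypothetical lumen cross-sectional areas (mm^2) segmented from successive
# FD-OCT pullback frames through a stenosis (illustrative values only).
areas_mm2 = np.array([7.8, 6.9, 5.2, 3.1, 2.4, 3.0, 5.5, 7.1])

flow_rate_ml_s = 2.54  # assumed mean flow rate in ml/s (1 ml = 1000 mm^3)

# Mean axial velocity at each frame from conservation of mass: v = Q / A.
v_mm_s = flow_rate_ml_s * 1000.0 / areas_mm2
print("peak mean velocity in the stenosis: %.1f cm/s" % (v_mm_s.max() / 10.0))
```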
16. Coherent gradient sensing method for measuring thermal stress field of thermal barrier coating structures
Directory of Open Access Journals (Sweden)
Kang Ma
2017-01-01
The coherent gradient sensing (CGS) method can be used to measure the slope of a reflective surface, and has the merits of full-field, non-contact, real-time measurement. In this study, the thermal stress field of thermal barrier coating (TBC) structures is measured by the CGS method. Two kinds of powders were sprayed onto Ni-based alloy using a plasma spraying method to obtain two groups of film-substrate specimens. The specimens were then heated with an oxy-acetylene flame. The resulting thermal mismatch between the film and substrate led to out-of-plane deformation of the specimen. The deformation was measured by the reflective CGS method, and the thermal stress field of the structure was obtained through calibration with the help of finite element analysis. Both the experimental and numerical results showed that the thermal stress field of TBC structures can be successfully measured by the CGS method.

17. The Coincident Coherence of Extreme Doppler Velocity Events with p-mode Patches in the Solar Photosphere.
Science.gov (United States)
McClure, Rachel Lee
2018-06-01
Observations of the solar photosphere show many spatially compact Doppler velocity events with short life spans and extreme values. In the IMaX spectropolarimetric inversion data from the first flight of the SUNRISE balloon in 2009, these striking flashes in the intergranule lanes, and the complementary outstanding values in the centers of granules, have line-of-sight Doppler velocity values in excess of 4 sigma from the mean. We conclude that values outside 4 sigma result from the superposition of the granulation flows and the p-modes. To determine how granulation and p-modes contribute to these outstanding Doppler events, I separate the two components using the Fast Fourier Transform. I produce the power spectrum of the spatial wave frequencies and their corresponding frequencies in time for each image, and create a k-omega filter to separate the two components. Using the filtered data, I test the hypothesis that extreme events occur because of strict superposition between the p-mode Doppler velocities and the granular velocities. I compare event counts from the observational data to those produced by random superposition of the two flow components and find that the observational event counts are consistent with the model event counts in the limit of small-number statistics. Poisson count probabilities of the observed event numbers are consistent with the expected model count probability distributions.

18. Velocity overshoot decay mechanisms in compound semiconductor field-effect transistors with a submicron characteristic length
International Nuclear Information System (INIS)
Jyegal, Jang
2015-01-01
Velocity overshoot is a critically important nonstationary effect utilized for the enhanced performance of submicron field-effect devices fabricated with high-electron-mobility compound semiconductors. However, the physical mechanisms of velocity-overshoot decay dynamics in these devices are not known in detail. Therefore, a numerical analysis is conducted for a typical submicron GaAs metal-semiconductor field-effect transistor in order to elucidate the physical mechanisms. It is found that there exist three different mechanisms, depending on device bias conditions.
Specifically, at large drain biases corresponding to the saturation drain-current (dc) region, the velocity overshoot suddenly begins to drop very sensitively due to the onset of a rapid decrease of the momentum relaxation time, not the mobility, arising from the effect of velocity-randomizing intervalley scattering. It then continues to drop rapidly and decays completely owing to severe mobility reduction caused by intervalley scattering. On the other hand, at small drain biases corresponding to the linear dc region, the velocity overshoot suddenly begins to drop very sensitively due to the onset of a rapid increase of thermal energy diffusion by electrons in the channel under the gate. It then continues to drop rapidly over a certain channel distance due to the increasing thermal-energy-diffusion effect, and later decays completely owing to a sharply decreasing electric field. Moreover, at drain biases close to the dc saturation voltage, the mechanism is a mixture of those of the two bias conditions above. It is suggested that a large secondary-valley energy separation is essential to increase the performance of submicron devices.

19. High-magnification velocity field measurements on high-frequency, supersonic microactuators
Science.gov (United States)
Kreth, Phil; Fernandez, Erik; Ali, Mohd; Alvi, Farrukh
2014-11-01
The Resonance-Enhanced Microjet (REM) actuator developed at our laboratory produces pulsed, supersonic microjets by utilizing a number of microscale flow-acoustic resonance phenomena. The microactuator used in this study consists of an underexpanded source jet flowing into a cylindrical cavity with a single orifice through which an unsteady, supersonic jet issues at a resonant frequency of 7 kHz. The flowfields of a 1 mm underexpanded free jet and the microactuator are studied in detail using high-magnification, phase-locked flow visualizations (micro-schlieren) and two-component particle image velocimetry. The challenges of these measurements at such small scales and supersonic velocities are discussed. The results clearly show that the microactuator produces supersonic pulsed jets with velocities exceeding 400 m/s. This is the first direct measurement of the velocity field and its temporal evolution produced by such actuators. Comparisons are made between the flow visualizations, the velocity field measurements, and simulations using implicit LES for a similar microactuator. With its high unsteady momentum output, this type of microactuator has potential in a range of flow control applications.

20. Measurement of electroosmotic and electrophoretic velocities using pulsed and sinusoidal electric fields.
Science.gov (United States)
Sadek, Samir H; Pimenta, Francisco; Pinho, Fernando T; Alves, Manuel A
2017-04-01
In this work, we explore two methods to simultaneously measure the electroosmotic mobility in microchannels and the electrophoretic mobility of micron-sized tracer particles. The first method is based on imposing a pulsed electric field, which allows electrophoresis and electroosmosis to be isolated at the startup and shutdown of the pulse, respectively. In the second method, a sinusoidal electric field is generated and the mobilities are found by minimizing the difference between the measured velocity of the tracer particles and the velocity computed from an analytical expression.
Both methods produced consistent results using polydimethylsiloxane microchannels and polystyrene microparticles, provided that the temporal resolution of the particle tracking velocimetry technique used to compute the velocity of the tracer particles is fast enough to resolve the diffusion time scale based on the characteristic channel length scale. Additionally, we present results with the pulse method for viscoelastic fluids, which show a more complex transient response with significant velocity overshoots and undershoots after the start and the end of the applied electric pulse, respectively. © 2016 The Authors. Electrophoresis published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

1. Velocity dependence of transient hyperfine field at Pt ions rapidly recoiling through magnetized Fe
International Nuclear Information System (INIS)
Stuchbery, A.E.; Ryan, C.G.; Bolotin, H.H.
1981-01-01
The velocity dependence of the transient hyperfine magnetic field acting at nuclei of ¹⁹⁶Pt ions rapidly recoiling through thin magnetized Fe was investigated at a number of recoil velocities. The state of interest (2₁⁺) was populated by Coulomb excitation using beams of 80- and 120-MeV ³²S and 150- and 220-MeV ⁵⁸Ni ions. The 2₁⁺ → 0₁⁺ γ-ray angular-distribution precession measurements were carried out in coincidence with backscattered projectiles. From these results, the strength of the transient field acting on Pt ions recoiling through magnetized Fe with average velocities in the extended range 2.14 ≤ v/v₀ ≤ 4.82 (v₀ = c/137) was found to be consistent with a linear velocity dependence, and to be incompatible with the specific v^(0.45±0.18) dependence which had previously been reported to account well for all ions in the mass range from oxygen through samarium. This seemingly singular behaviour for Pt and other ions in the Pt mass vicinity is discussed.

2. Transport properties of Dirac electrons in graphene based double velocity-barrier structures in electric and magnetic fields
International Nuclear Information System (INIS)
Liu, Lei; Li, Yu-Xian; Liu, Jian-Jun
2012-01-01
Using the transfer matrix method, transport properties in graphene-based double velocity-barrier structures under magnetic and electric fields are numerically studied. It is found that velocity barriers with a velocity ratio (the Fermi velocity inside the barrier to that outside the barrier) less than one (or greater than one) have properties similar to electrostatic wells (or barriers). Velocity barriers with a velocity ratio greater than one significantly enlarge the resonant tunneling region of electrostatic barriers. In the presence of a magnetic field, the plateau width of the Fano factor with a Poissonian value shortens (or broadens) for a velocity ratio less than one (or greater than one). When the Fermi energy is equal to the electrostatic barrier height, the conductivities and the Fano factors remain fixed for different values of the velocity ratio.
Highlights: ► We model graphene-based velocity-barrier structures in electric and magnetic fields. ► Velocity barriers for ξ < 1 (ξ > 1) have properties similar to electrostatic wells (barriers). ► Velocity barriers for ξ > 1 enlarge the resonant tunneling region of electrostatic barriers. ► The plateau width of the Fano factor shortens (broadens) for ξ < 1 (ξ > 1). ► The conductivity remains fixed at E_F = U_0 for different values of ξ.
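The graphene study above applies the transfer matrix method to Dirac electrons with velocity barriers. Reproducing the Dirac-fermion case would require care with the spinor matching conditions, so the sketch below instead illustrates the same machinery on the simpler non-relativistic 1D problem: 2×2 interface matrices for piecewise-constant potentials are composed, and the transmission probability is read off the total matrix. It is an illustration of the method under our own assumptions (Schrödinger electron, illustrative barrier geometry), not of the paper's model.

```python
import numpy as np

HBAR2_2M = 3.81  # hbar^2 / (2 m_e) in eV*Angstrom^2, approximate

def transmission(E, potentials, boundaries):
    """Transmission through piecewise-constant 1D potential regions.
    potentials: region potentials V_j in eV (len = len(boundaries) + 1);
    boundaries: interface positions in Angstrom. Assumes E > V in the outer leads."""
    k = np.sqrt((E - np.asarray(potentials, dtype=complex)) / HBAR2_2M)
    M = np.eye(2, dtype=complex)
    for j, x0 in enumerate(boundaries):
        r = k[j] / k[j + 1]
        m = 0.5 * np.array(
            [[(1 + r) * np.exp(1j * (k[j] - k[j + 1]) * x0),
              (1 - r) * np.exp(-1j * (k[j] + k[j + 1]) * x0)],
             [(1 - r) * np.exp(1j * (k[j] + k[j + 1]) * x0),
              (1 + r) * np.exp(-1j * (k[j] - k[j + 1]) * x0)]])
        M = m @ M                      # compose interface matrices left to right
    t = (k[0] / k[-1]) / M[1, 1]       # det(M) = k_0 / k_N for this construction
    return float(np.real(k[-1] / k[0]) * np.abs(t) ** 2)

# Double barrier: two 0.3 eV barriers, 20 Angstrom wide, enclosing a 50 Angstrom well.
V = [0.0, 0.3, 0.0, 0.3, 0.0]
xb = [0.0, 20.0, 70.0, 90.0]
E = np.linspace(0.005, 0.25, 500)
T = np.array([transmission(e, V, xb) for e in E])
i = int(np.argmax(T))
print("resonant peak: T = %.3f at E = %.3f eV" % (T[i], E[i]))
```

The sharp transmission peak near the quasi-bound level of the well is the resonant-tunneling behaviour that the abstract discusses in its Dirac-fermion setting.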
3. Investigation of Horizontal Velocity Fields in Stirred Vessels with Helical Coils by PIV
Directory of Open Access Journals (Sweden)
Volker Bliem
2014-01-01
Horizontal velocity flow fields were measured by particle image velocimetry in a stirred vessel with baffles and two helical coils for enlargement of the heat transfer area. The investigation was carried out in a cylindrical vessel with a flat base and two different stirrers (a radial-flow Rushton turbine and an axial-flow propeller stirrer). Combined velocity plots for flow fields at different locations are presented. It was found that helical coils change the flow pattern significantly. Measurements for the radial-flow Rushton turbine showed a strong deflection by the coils, leading to a mainly tangential flow pattern. Behind the baffles, large areas of unused heat transfer area were found. First results for the axial-flow propeller reveal an extensive absence of fluid movement in the horizontal plane. Improved design considerations for enhanced heat transfer by more compatible equipment compilation are proposed.

4. Experimental analysis of the velocity field in an annular channel with helicoidal wire
International Nuclear Information System (INIS)
Lemos, M.J.S. de.
1979-06-01
In general, nuclear reactor fuel elements are rod bundles with coolant flowing axially among them. LMFBRs (Liquid Metal Fast Breeder Reactors) have wire-wrapped fuel rods, with the wire working as spacer and mixer. The present work consists of an experimental analysis of the velocity field created by a typical LMFBR fuel rod placed in a cylinder, yielding an annular channel with a helicoidal wire. Using hot-wire anemometry, the main and secondary velocity fields were measured. The range of Re was from 2.2×10⁴ to 6.1×10⁴, for air. The aspect ratio, P/D, and the lead-to-diameter ratio, L/D, were 1.2 and 15, respectively. (Author)

5. The large-scale peculiar velocity field in flat models of the universe
International Nuclear Information System (INIS)
Vittorio, N.; Turner, M.S.
1986-10-01
The inflationary Universe scenario predicts a flat Universe and both adiabatic and isocurvature primordial density perturbations with the Zel'dovich spectrum. The two simplest realizations, models dominated by hot or cold dark matter, seem to be in conflict with observations. Flat models are examined with two components of mass density, where one of the components is smoothly distributed, and the large-scale (≥ 10 h⁻¹ Mpc) peculiar velocity field for these models is considered. For the smooth component, relativistic particles, a relic cosmological term, and light strings are considered. At present the observational situation is unsettled; but, in principle, the large-scale peculiar velocity field is a very powerful discriminator between these different models. 61 refs.

6. Field performance of an all-semiconductor laser coherent Doppler lidar
DEFF Research Database (Denmark)
Rodrigo, Peter John; Pedersen, Christian
2012-01-01
We implement and test what, to our knowledge, is the first deployable coherent Doppler lidar (CDL) system based on a compact, inexpensive all-semiconductor laser (SL). To demonstrate the field performance of our SL-CDL remote sensor, we compare a 36 h time series of averaged radial wind speeds
An excellent degree of correlation (R2=0.994 and slope=0.996) is achieved from a linear regression analysis of the CDL versus SA wind speed data. The lidar system is capable... 7. Entanglement of two atoms interacting with a dissipative coherent cavity field without rotating wave approximation International Nuclear Information System (INIS) Kang Guo-Dong; Fang Mao-Fa; Ouyang Xi-Cheng; Deng Xiao-Juan 2010-01-01 Considering two identical two-level atoms interacting with a single-model dissipative coherent cavity field without rotating wave approximation, we explore the entanglement dynamics of the two atoms prepared in different states using concurrence. Interestingly, our results show that the entanglement between the two atoms that initially disentangled will come up to a large constant rapidly, and then keeps steady in the following time or always has its maximum when prepared in some special Bell states. The model considered in this study is a good candidate for quantum information processing especially for quantum computation as steady high-degree atomic entanglement resource obtained in dissipative cavity 8. Collapse and Revival of an Atomic Beam Interacting with a Coherent State Light Field International Nuclear Information System (INIS) Ben, Li; Jing-Biao, Chen 2009-01-01 We report on the phenomena of the periodic spontaneous collapse and revival in the dynamics of an atomic beam interacting with a single-mode and coherent-state light field. Conventional collapse and revival by Eberly et al. [Phys. Rev. Lett. 44 (1980) 1323] are presented in the case of the evolution with time of the population inversion. Here, we study the evolution with coupling strength of population inversion. We define the collapse and revival coupling strengths as characteristic parameters to describe the above collapse and revival. Furthermore, we present the analytic formulas for the population inversion, the collapse and revival coupling strengths 9. Full-field optical coherence tomography using immersion Mirau interference microscope. Science.gov (United States) Lu, Sheng-Hua; Chang, Chia-Jung; Kao, Ching-Fen 2013-06-20 In this study, an immersion Mirau interference microscope was developed for full-field optical coherence tomography (FFOCT). Both the reference and measuring arms of the Mirau interferometer were filled with water to prevent the problems associated with imaging a sample in air with conventional FFOCT systems. The almost-common path interferometer makes the tomographic system less sensitive to environmental disturbances. En face OCT images at various depths were obtained with phase-shifting interferometry and Hariharan algorithm. This immersion interferometric method improves depth and quality in three-dimensional OCT imaging of scattering tissue. 10. Visualizing flow fields using acoustic Doppler current profilers and the Velocity Mapping Toolbox Science.gov (United States) Jackson, P. Ryan 2013-01-01 The purpose of this fact sheet is to provide examples of how the U.S. Geological Survey is using acoustic Doppler current profilers for much more than routine discharge measurements. These instruments are capable of mapping complex three-dimensional flow fields within rivers, lakes, and estuaries. Using the Velocity Mapping Toolbox to process the ADCP data allows detailed visualization of the data, providing valuable information for a range of studies and applications. 11. 
11. Coherent scattering of three-level atoms in the field of a bichromatic standing light wave
International Nuclear Information System (INIS)
Pazgalev, A.S.; Rozhdestvenskii, Yu.V.
1996-01-01
We discuss the coherent scattering of three-level atoms in the field of two standing light waves for two values of the spatial shift. In the case of zero spatial shift and equal frequency detunings of the standing waves, the problem of scattering of a three-level atom is reduced to the scattering of an effectively two-level atom. For the case of exact resonance between the waves and transitions, we give expressions for the population probabilities of the states of the three-level atom obtained in the short-interaction-time approximation. Depending on the initial population distribution over the states, different scattering modes are realized. In particular, we show that there can be initial conditions for which the three-level system does not interact with the field of the standing waves, with the result that there is no coherent scattering of atoms. In the case of standing waves shifted by π/2, there are two types of solution, depending on the values of the frequency detuning. For instance, when the light waves are detuned equally we give the exact solution for arbitrary relationships between the detuning and the standing-wave intensities, valid for any atom-field interaction time. The case of 'mirror' detunings and shifted standing waves is studied only numerically.

12. On the evolution of magnetic and velocity fields of an originating sunspot group
International Nuclear Information System (INIS)
Bachmann, G.
1978-01-01
Magnetographic measurements were made to derive longitudinal magnetic field strengths, line-of-sight velocities, and the brightness distribution in an originating sunspot group. These results and photographs of the group are used to compare the evolution of a relatively simple active region with our present ideas about the evolution of active regions in general. We found that the total magnetic flux increased from about 4×10²⁰ to 20×10²⁰ Mx over three days. The downward flow of gas in regions with stronger magnetic fields formed only after the magnetic field had already been bipolar for two days. The maximum velocity always occurred in the main spots of the preceding and subsequent parts of the sunspot group. Transformation into a flow pattern which looks like Evershed motion is observed in the main preceding sunspot after the formation of the penumbra. The generation of new active regions by concentration and amplification of magnetic fields under the action of supergranulation flow in photospheric layers cannot play an important role. On the contrary, the behaviour of the active region is in agreement with the conception of rising flux tubes, out of which the gas flows down. Our observations confirm that a magnetic field strength leading to the generation of sunspots is attained earlier in the preceding part of an originating active region than in its subsequent part. A series of subflares occurred in the active region when short-lived small magnetic structure elements emerged in the larger bipolar magnetic field. (author)

13. Depth-resolved incoherent and coherent wide-field high-content imaging (Conference Presentation)
Science.gov (United States)
So, Peter T.
2016-03-01
Recent advances in depth-resolved wide-field imaging techniques have enabled many high-throughput applications in biology and medicine.
Depth-resolved imaging of incoherent signals can be readily accomplished with structured light illumination or nonlinear temporal focusing. The integration of these high-throughput systems with novel spectroscopic resolving elements further enables high-content information extraction. We will introduce a novel near-common-path interferometer and demonstrate its uses in toxicology and cancer biology applications. The extension of incoherent depth-resolved wide-field imaging to coherent modalities is non-trivial. Here, we will cover recent advances in wide-field 3D-resolved mapping of refractive index, absorbance, and vibronic components in biological specimens.

14. Information Entropy Squeezing of a Two-Level Atom Interacting with Two-Mode Coherent Fields
Institute of Scientific and Technical Information of China (English)
LIU Xiao-Juan; FANG Mao-Fa
2004-01-01
From a quantum information point of view, we investigate the entropy squeezing properties of a two-level atom interacting with two-mode coherent fields via the two-photon transition. We discuss the influences of the initial state of the system on the atomic information entropy squeezing. Our results show that the squeezed component number, the squeezed direction, and the duration of the information entropy squeezing can be controlled by choosing the atomic distribution angle, the relative phase between the atom and the two-mode field, and the difference of the average photon numbers of the two field modes, respectively. Quantum information entropy is a remarkably precise measure of atomic squeezing.

15. Temporal Variability in Seismic Velocity at the Salton Sea Geothermal Field
Science.gov (United States)
Taira, T.; Nayak, A.; Brenguier, F.
2015-12-01
We characterize the temporal variability of the ambient noise wavefield and search for velocity changes associated with geothermal energy development activities at the Salton Sea Geothermal Field. Noise cross-correlations (NCFs) are computed for ~6 years of continuous three-component seismic data (December 2007 through January 2014) collected at 8 sites of the CalEnergy Subnetwork (EN network) with the MSNoise software (Lecocq et al., 2014, SRL). All seismic data were downloaded from the Southern California Earthquake Data Center. Velocity changes (dv/v) are obtained by measuring the time delay between 5-day stacks of NCFs and the reference NCF (the average over the entire 6-year period). The time history of dv/v is determined by averaging dv/v measurements over all station/channel pairs (252 combinations). Our preliminary dv/v measurement suggests a gradual increase in dv/v over the 6-year period in the frequency range 0.5-8.0 Hz. The resultant rate of velocity increase is about 0.01%/year. We also explore the frequency-dependent velocity change in 5 different frequency bands (0.5-2.0 Hz, 0.75-3.0 Hz, 1.0-4.0 Hz, 1.5-6.0 Hz, and 2.0-8.0 Hz) and find that the level of this long-term dv/v variability increases with frequency (i.e., the highest increase rate of ~0.15%/year in the 0.5-2.0 Hz band). This result suggests that the velocity changes mostly occurred at depths of ~500 m, assuming that the coda parts of the NCFs (~10-40 s depending on station distance) are predominantly composed of scattered surface waves, with the SoCal velocity model (Dreger and Helmberger, 1993, JGR). No clear seasonal variation of dv/v is observed in the frequency band 0.5-8.0 Hz.
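In the Salton Sea record above, dv/v comes from time delays between 5-day NCF stacks and a reference; MSNoise implements a moving-window cross-spectral technique for this. An alternative estimator that is easy to sketch is the stretching method, shown below on synthetic coda. The waveform, the assumed homogeneous velocity change, and all parameters are fabricated for the demonstration.

```python
import numpy as np

def dvv_stretching(ref, cur, t, eps_max=0.01, n_eps=201):
    """Stretching method: find the dilation eps that best maps the current
    trace onto the reference; dv/v = -eps for a homogeneous velocity change."""
    best_eps, best_cc = 0.0, -np.inf
    for eps in np.linspace(-eps_max, eps_max, n_eps):
        stretched = np.interp(t * (1.0 + eps), t, cur)
        cc = np.corrcoef(ref, stretched)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return -best_eps, best_cc

# Synthetic coda, and a "current" trace whose medium is 0.2% faster.
t = np.linspace(0.0, 40.0, 4000)
rng = np.random.default_rng(2)
ref = sum(np.sin(2 * np.pi * f * t + p)
          for f, p in zip(rng.uniform(0.5, 2.0, 8), rng.uniform(0.0, 2 * np.pi, 8)))
ref = ref * np.exp(-t / 20.0)
cur = np.interp(t * 1.002, t, ref)
dvv, cc = dvv_stretching(ref, cur, t)
print("dv/v = %+.2f%%  (cc = %.3f)" % (100.0 * dvv, cc))  # -> +0.20%
```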
16. Direct simultaneous measurement of intraglottal geometry and velocity fields in excised larynges.
Science.gov (United States)
Khosla, Sid; Oren, Liran; Ying, Jun; Gutmark, Ephraim
2014-04-01
Current theories regarding the mechanisms of phonation are based on assumptions about the aerodynamics between the vocal folds during the closing phase of vocal fold vibration. However, many of these fundamental assumptions have never been validated in a tissue model. In this study, the main objective was to determine the aerodynamics (velocity fields) and the geometry of the medial surface of the vocal folds during the closing phase of vibration. The main hypothesis is that intraglottal vortices are produced during vocal fold closing when the glottal duct has a divergent shape, and that these vortices are associated with negative pressures. Experiments used seven excised canine larynges. The particle imaging velocimetry (PIV) method was used to determine the velocity fields at low, mid, and high subglottal pressures for each larynx. Modifications were made to previously described PIV methodology to allow measurement of both the intraglottal velocity fields and the position of the medial aspects of the vocal folds. At relatively low subglottal pressures, little to no intraglottal vorticity was seen. At mid and high subglottal pressures, flow separation vortices occurred and produced maximum negative pressures, relative to atmospheric, of -2.6 to -14.6 cm H₂O. Possible physiological and surgical implications are discussed. Intraglottal vortices produce significant negative pressures at mid and high subglottal pressures. These vortices may be important in increasing the maximum flow declination rate and acoustic intensity. N/A. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

17. Calculation of acoustic field based on laser-measured vibration velocities on ultrasonic transducer surface
Science.gov (United States)
Hu, Liang; Zhao, Nannan; Gao, Zhijian; Mao, Kai; Chen, Wenyu; Fu, Xin
2018-05-01
Determination of the distribution of a generated acoustic field is valuable for studying ultrasonic transducers, including providing guidance for transducer design and a basis for analyzing their performance. A method for calculating the acoustic field based on laser-measured vibration velocities on the ultrasonic transducer surface is proposed in this paper. Without knowing the inner structure of the transducer, the acoustic field outside it can be calculated by solving the governing partial differential equation (PDE) of the field based on specified boundary conditions (BCs). In our study, the BC on the transducer surface, i.e. the distribution of the vibration velocity on the surface, is accurately determined by laser-scanning measurement of discrete points followed by a data-fitting computation. In addition, to ensure the calculation accuracy for the whole field, even in an inhomogeneous medium, a finite element method is used to solve the governing PDE based on the mixed BCs, including the discretely measured velocity data and other specified BCs. The method is first validated on numerical piezoelectric transducer models. The acoustic pressure distributions generated by a transducer operating in a homogeneous and an inhomogeneous medium, respectively, are both calculated by the proposed method and compared with the results from other existing methods. Then, the method is further experimentally validated with two actual ultrasonic transducers used for flow measurement in our lab.
The amplitude change of the output voltage signal from the receiver transducer due to changing the relative position of the two transducers is calculated by the proposed method and compared with the experimental data. This method can also provide the basis for complex multi-physical coupling computations where the effect of the acoustic field should be taken into account. 18. Magnetic field-aligned plasma expansion in critical ionization velocity space experiments International Nuclear Information System (INIS) Singh, N. 1989-01-01 Motivated by the recent Critical Ionization Velocity (CIV) experiments in space, the temporal evolution of a plasma cloud released in an ambient plasma is studied. Time-dependent Vlasov equations for both electrons and ions, along with the Poisson equation for the self-consistent electric field parallel to the ambient magnetic field, are solved. The initial cloud is assumed to consist of cold, warm, and hot electrons with temperatures T_c ≅ 0.2 eV, T_w ≅ 2 eV, and T_h ≅ 10 eV, respectively. It is found that the minority hot electrons escape the cloud, and their velocity distribution function shows the typical time-of-flight dispersion feature - that is, the larger the distance from the cloud, the larger is the average drift velocity of the escaping electrons. The majority warm electrons expand along the magnetic field line with the corresponding ion-acoustic speed. The combined effect of the escaping hot electrons and the expanding warm ones sets up an electric potential structure which accelerates the ambient electrons into the cloud. Thus, the energy loss due to the electron escape is partly replenished. The electric field distribution in the potential structure depends on the stage of the evolution; before the rarefaction waves propagating from the edges of the cloud reach its center, the electric fields point into the cloud. After this stage the cloud divides into two subclouds, each having its own bipolar electric field. Effects of collisions on the evolution of plasma clouds are also discussed. The relevance of the results of the calculations is discussed in the context of recent space experiments on CIV. 19. The effect of magnetic field configuration on particle pinch velocity in compact helical system (CHS) International Nuclear Information System (INIS) Iguchi, H.; Ida, K.; Yamada, H. 1994-01-01 Radial particle transport has been experimentally studied in the low-aspect-ratio heliotron/torsatron device CHS. A non-diffusive outward particle flow (inverse pinch) is observed in the magnetic configuration with the magnetic axis shifted outward, while an inward pinch, as in tokamaks, is observed with the magnetic axis shifted inward. This change in the direction of the anomalous particle flow is due neither to a reversal of the temperature gradient nor to one of the radial electric field. The observation suggests that the particle pinch velocity is sensitive to the magnetic field structure. (author) 20. Magnetic and velocity fields in a dynamo operating at extremely small Ekman and magnetic Prandtl numbers Science.gov (United States) Šimkanin, Ján; Kyselica, Juraj 2017-12-01 Numerical simulations of the geodynamo are becoming more realistic because of advances in computer technology. Here, the geodynamo model is investigated numerically at extremely low Ekman and magnetic Prandtl numbers using the PARODY dynamo code. These parameters are more realistic than those used in previous numerical studies of the geodynamo.
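For orientation, the two dimensionless parameters named in this abstract follow from core properties as E = ν/(Ω D²) and Pm = ν/η. A back-of-envelope evaluation with textbook estimates (not the simulation inputs) reproduces the oft-quoted Earth figures:

```python
# Order-of-magnitude Ekman and magnetic Prandtl numbers for Earth's core.
# All property values are rough textbook estimates, not simulation inputs.
nu = 1.0e-6        # kinematic viscosity of liquid iron, m^2/s
eta = 1.0          # magnetic diffusivity ~ 1/(mu0*sigma), m^2/s
Omega = 7.29e-5    # Earth's rotation rate, rad/s
D = 2.26e6         # outer-core shell thickness, m

E = nu / (Omega * D**2)    # Ekman number, ~3e-15
Pm = nu / eta              # magnetic Prandtl number, ~1e-6
print(f"E ~ {E:.1e}, Pm ~ {Pm:.1e}")
```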
Our model is based on the Boussinesq approximation, and the temperature gradient between the upper and lower boundaries is the source of convection. This study attempts to answer the question of how realistic geodynamo models are. Numerical results show that our dynamo belongs to the strong-field dynamos. The generated magnetic field is dipolar and large-scale, while convection is small-scale and sheet-like flows (plumes) are preferred to columnar convection. The scales of the magnetic and velocity fields are separated, which enables hydromagnetic dynamos to maintain the magnetic field at low magnetic Prandtl numbers. The inner core rotation rate is lower than that in previous geodynamo models. On the other hand, the dimensional magnitudes of the velocity and magnetic fields, and those of the magnetic and viscous dissipation, are larger than those expected in the Earth's core, due to the parameter range chosen. 1. Ab initio velocity-field curves in monoclinic β-Ga2O3 Science.gov (United States) Ghosh, Krishnendu; Singisetti, Uttam 2017-07-01 We investigate the high-field transport in monoclinic β-Ga2O3 using a combination of ab initio calculations and full band Monte Carlo (FBMC) simulation. The scattering rate calculation and the final state selection in the FBMC simulation use complete wave-vector (both electron and phonon) and crystal-direction-dependent electron-phonon interaction (EPI) elements. We propose and implement a semi-coarse version of the Wannier-Fourier interpolation method [Giustino et al., Phys. Rev. B 76, 165108 (2007)] for short-range non-polar optical phonon (EPI) elements in order to ease the computational requirement in FBMC simulation. During the interpolation of the EPI, the inverse Fourier sum over the real-space electronic grids is done on a coarse mesh while the unitary rotations are done on a fine mesh. This paper reports the high-field transport in monoclinic β-Ga2O3 with deep insight into the contribution of electron-phonon interactions and velocity-field characteristics for electric fields ranging up to 450 kV/cm in different crystal directions. A peak velocity of 2 × 10^7 cm/s is estimated at an electric field of 200 kV/cm. 2. Effects of magnetic field, sheared flow and ablative velocity on the Rayleigh-Taylor instability International Nuclear Information System (INIS) Li, D.; Zhang, W.L.; Wu, Z.W. 2005-01-01 It is found that the magnetic field has a stabilizing effect whereas the sheared flow has a destabilizing effect on the RT instability in the presence of a sharp interface. The RT instability occurs only in the long-wave region and can be completely suppressed if the stabilizing effect of the magnetic field dominates. The RT instability increases with wave number and flow shear, and acts much like a Kelvin-Helmholtz instability when the destabilizing effect of the sheared flow dominates. It is shown that both the ablation velocity and the magnetic field have a stabilizing effect on the RT instability in the presence of a continuous interface. The stabilizing effect of the magnetic field applies over the whole waveband and becomes more significant at short wavelengths. The RT instability can be completely suppressed by the combined effect of magnetic field and ablation velocity, so that the ICF target shell may not need to be accelerated to very high speed. The growth rate decreases as the density scale length increases. The stabilizing effect of the magnetic field is more significant for short density scale lengths. (author) 3.
Retinal pigment epithelium findings in patients with albinism using wide-field polarization-sensitive optical coherence tomography. Science.gov (United States) Schütze, Christopher; Ritter, Markus; Blum, Robert; Zotter, Stefan; Baumann, Bernhard; Pircher, Michael; Hitzenberger, Christoph K; Schmidt-Erfurth, Ursula 2014-11-01 To investigate the pigmentation characteristics of the retinal pigment epithelium (RPE) in patients with albinism using wide-field polarization-sensitive optical coherence tomography, compared with intensity-based spectral domain optical coherence tomography and fundus autofluorescence imaging. Five patients (10 eyes) with previously genetically diagnosed albinism and 5 healthy control subjects (10 eyes) were imaged by a wide-field polarization-sensitive optical coherence tomography system (scan angle: 40 × 40° on the retina), sensitive to melanin contained in the RPE, based on the polarization state of backscattered light. Conventional intensity-based spectral domain optical coherence tomography and fundus autofluorescence examinations were performed. Retinal pigment epithelium pigmentation was analyzed qualitatively and quantitatively based on depolarization assessed by polarization-sensitive optical coherence tomography. This study revealed strong evidence that polarization-sensitive optical coherence tomography specifically images melanin in the RPE. Depolarization of light backscattered by the RPE in patients with albinism was reduced compared with normal subjects. Heterogeneous RPE-specific depolarization characteristics were observed in patients with albinism. The reduction of depolarization observed in the light backscattered by the RPE in patients with albinism corresponds to the expected decrease of RPE pigmentation. The degree of depigmentation of the RPE is possibly associated with visual acuity. The findings suggest that different albinism genotypes result in heterogeneous levels of RPE pigmentation. Polarization-sensitive optical coherence tomography showed a heterogeneous appearance of RPE pigmentation in patients with albinism depending on different genotypes. 4. Coherence holography by achromatic 3-D field correlation of generic thermal light with an imaging Sagnac shearing interferometer. Science.gov (United States) Naik, Dinesh N; Ezawa, Takahiro; Singh, Rakesh Kumar; Miyamoto, Yoko; Takeda, Mitsuo 2012-08-27 We propose a new technique for achromatic 3-D field correlation that makes use of the characteristics of both axial and lateral magnifications of imaging through a common-path Sagnac shearing interferometer. With this technique, we experimentally demonstrate, for the first time to our knowledge, 3-D image reconstruction in coherence holography with generic thermal light. By virtue of the achromatic axial shearing implemented by the difference in axial magnifications in imaging, the technique enables coherence holography to reconstruct a 3-D object with an axial depth beyond the short coherence length of the thermal light. 5. Influence of spatial and temporal coherences on atomic resolution high angle annular dark field imaging Energy Technology Data Exchange (ETDEWEB) Beyer, Andreas, E-mail: [email protected]; Belz, Jürgen; Knaub, Nikolai; Jandieri, Kakhaber; Volz, Kerstin 2016-10-15 Aberration-corrected (scanning) transmission electron microscopy ((S)TEM) has become a widely used technique when information on the chemical composition is sought on an atomic scale.
To extract the desired information, complementary simulations of the scattering process are indispensable. Often the partial spatial and temporal coherences are neglected in the simulations, although they can have a huge influence on the high resolution images. Using binary gallium phosphide (GaP) as an example, we elucidate the influence of the source size and shape as well as the chromatic aberration on the high angle annular dark field (HAADF) intensity. We achieve a very good quantitative agreement between the frozen phonon simulation and experiment for different sample thicknesses when a Lorentzian source distribution is assumed and the effect of the chromatic aberration is considered. Additionally, the influence of amorphous layers introduced by the preparation of the TEM samples is discussed. Taking into account these parameters, the intensity in the whole unit cell of GaP, i.e. at the positions of the different atomic columns and in the region between them, is described correctly. With the knowledge of the decisive parameters, the determination of the chemical composition of more complex, multinary materials becomes feasible. - Highlights: • Atomic resolution high angle annular dark field images of gallium phosphide are compared quantitatively with simulated ones. • The influence of partial spatial and temporal coherence on the HAADF intensity is investigated. • The influence of amorphous layers introduced by the sample preparation is simulated. 6. Estimation of 3-D conduction velocity vector fields from cardiac mapping data. Science.gov (United States) Barnette, A R; Bayly, P V; Zhang, S; Walcott, G P; Ideker, R E; Smith, W M 2000-08-01 A method to estimate three-dimensional (3-D) conduction velocity vector fields in cardiac tissue is presented. The speed and direction of propagation are found from polynomial "surfaces" fitted to the space-time (x, y, z, t) coordinates of cardiac activity. The technique is applied to sinus rhythm and paced rhythm mapped with plunge needles at 396-466 sites in the canine myocardium. The method was validated on simulated 3-D plane and spherical waves. For simulated data, conduction velocities were estimated with an accuracy of 1%-2%. In experimental data, estimates of conduction speeds during paced rhythm were slower than those found during normal sinus rhythm. Vector directions were also found to differ between different types of beats. The technique was able to distinguish between premature ventricular contractions and sinus beats and between sinus and paced beats. The proposed approach to computing velocity vector fields provides an automated, physiological, and quantitative description of local electrical activity in 3-D tissue. This method may provide insight into abnormal conduction associated with fatal ventricular arrhythmias. 7. Field transients of coherent terahertz synchrotron radiation accessed via time-resolving and correlation techniques Energy Technology Data Exchange (ETDEWEB) Pohl, A.; Hübers, H.-W. [Humboldt-Universität zu Berlin, Institute of Physics, Newtonstraße 15, 12489 Berlin (Germany); Institute of Optical Sensor Systems, German Aerospace Center (DLR), Rutherfordstrasse 29, 12489 Berlin (Germany); Semenov, A. [Institute of Optical Sensor Systems, German Aerospace Center (DLR), Rutherfordstrasse 29, 12489 Berlin (Germany); Hoehl, A.; Ulm, G. [Physikalisch-Technische Bundesanstalt (PTB), Abbestraße 2-12, 10587 Berlin (Germany); Ries, M.; Wüstefeld, G. [Helmholz-Zentrum Berlin, Albert-Einstein-Str.
15, 12489 Berlin (Germany); Ilin, K.; Thoma, P.; Siegel, M. [Institute of Micro- and Nanoelectronic Systems, Karlsruhe Institute of Technology (KIT), Hertzstrasse 16, 76187 Karlsruhe (Germany) 2016-03-21 Decaying oscillations of the electric field in repetitive pulses of coherent synchrotron radiation in the terahertz frequency range were evaluated by means of time-resolving and correlation techniques. Comparative analysis of real-time voltage transients of the electrical response and interferograms, which were obtained with an ultrafast zero-bias Schottky diode detector and a Martin-Puplett interferometer, delivers close values of the pulse duration. Consistent results were obtained via the correlation technique with a pair of Golay cell detectors and a pair of resonant polarisation-sensitive superconducting detectors integrated on one chip. The duration of the terahertz synchrotron pulses does not closely correlate with the duration of the single-cycle electric field expected from the varying size of the electron bunches. We largely attribute the difference to charge density oscillations in the electron bunches and to the low-frequency spectral cut-off imposed by both the synchrotron beamline and the coupling optics of our detectors. 8. Wide-field optical coherence tomography based microangiography for retinal imaging Science.gov (United States) Zhang, Qinqin; Lee, Cecilia S.; Chao, Jennifer; Chen, Chieh-Li; Zhang, Thomas; Sharma, Utkarsh; Zhang, Anqi; Liu, Jin; Rezaei, Kasra; Pepple, Kathryn L.; Munsen, Richard; Kinyoun, James; Johnstone, Murray; van Gelder, Russell N.; Wang, Ruikang K. 2016-02-01 Optical coherence tomography angiography (OCTA) allows for the evaluation of functional retinal vascular networks without a need for contrast dyes. For sophisticated monitoring and diagnosis of retinal diseases, an OCTA capable of providing wide-field, high-definition images of the retinal vasculature in a single image is desirable. We report OCTA with motion tracking through an auxiliary real-time line scan ophthalmoscope that is clinically feasible for imaging functional retinal vasculature in patients, with a coverage of more than 60 degrees of retina while still maintaining high definition and resolution. We demonstrate six illustrative cases with unprecedented details of vascular involvement in retinal diseases. In each case, OCTA yields images of the normal and diseased microvasculature at all levels of the retina, with higher resolution than observed with fluorescein angiography. Wide-field OCTA technology will be an important next step in augmenting the utility of OCT technology in clinical practice. 9. Evolution of Nanowire Transmon Qubits and Their Coherence in a Magnetic Field Science.gov (United States) Luthi, F.; Stavenga, T.; Enzing, O. W.; Bruno, A.; Dickel, C.; Langford, N. K.; Rol, M. A.; Jespersen, T. S.; Nygård, J.; Krogstrup, P.; DiCarlo, L. 2018-03-01 We present an experimental study of flux- and gate-tunable nanowire transmons with state-of-the-art relaxation time, allowing quantitative extraction of flux and charge noise coupling to the Josephson energy. We evidence coherence sweet spots for charge, tuned by voltage on a proximal side gate, where first-order sensitivity to switching two-level systems and background 1/f noise is minimized. Next, we investigate the evolution of a nanowire transmon in a parallel magnetic field up to 70 mT, the upper bound set by the closing of the induced gap.
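The flux tuning behind such sweet-spot physics follows, for a symmetric-junction transmon, the standard dispersion f01 ≈ sqrt(8·EJ·EC·|cos(πΦ/Φ0)|) − EC. The toy evaluation below uses assumed round-number EJ and EC, not parameters of the devices reported above:

```python
import numpy as np

# Toy flux dispersion of a symmetric-junction transmon. EJ and EC are
# assumed round numbers, not fitted device parameters. The formula is
# reliable where EJ*|cos| >> EC; near half-integer flux it is indicative.
EJ, EC = 20.0, 0.3                   # Josephson / charging energies, GHz
phi = np.linspace(-1.0, 1.0, 201)    # applied flux in units of Phi0
f01 = np.sqrt(8.0 * EJ * EC * np.abs(np.cos(np.pi * phi))) - EC
print(f"sweet-spot f01 = {f01.max():.2f} GHz at integer flux quanta")
```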
Several features observed in the field dependence of qubit energy relaxation and dephasing times are not fully understood. Using nanowires with a thinner, partially covering Al shell will enable operation of these circuits up to 0.5 T, a regime relevant for topological quantum computation and other applications. 10. Coherent and Semiclassical States of a Charged Particle in Electromagnetic Fields Science.gov (United States) Pereira, A. S. 2018-03-01 In the present article, we extend our study (Bagrov et al., Braz. J. Phys. 45, 369, 2015) of generalized coherent states (GCS) of a one-dimensional particle, considering such an important physical system as a three-dimensional charged particle in electric and magnetic fields. In constructing GCS in the many-dimensional case, we meet technical complications that make the consideration nontrivial and instructive. The GCS of the system under consideration are constructed. We study the properties of these GCS, such as completeness relations, minimization of uncertainty relations, and so on. We point out which family of the obtained GCS of a charged particle in a magnetic field is related to the CS constructed first by Malkin and Man'ko. We obtain conditions under which some of the GCS can be considered as semiclassical states (SS). 12. In vivo high resolution human corneal imaging using full-field optical coherence tomography. Science.gov (United States) Mazlin, Viacheslav; Xiao, Peng; Dalimier, Eugénie; Grieve, Kate; Irsch, Kristina; Sahel, José-Alain; Fink, Mathias; Boccara, A Claude 2018-02-01 We present the first full-field optical coherence tomography (FFOCT) device capable of in vivo imaging of the human cornea. We obtained images of the epithelial structures, Bowman's layer, sub-basal nerve plexus (SNP), anterior and posterior stromal keratocytes, stromal nerves, Descemet's membrane and endothelial cells with visible nuclei. Images were acquired with a high lateral resolution of 1.7 µm and a relatively large field-of-view of 1.26 mm x 1.26 mm - a combination which, to the best of our knowledge, has not been possible with other in vivo human eye imaging methods. The latter, together with contactless operation, makes FFOCT a promising candidate for becoming a new tool in ophthalmic diagnostics. 13. Antarctic Glaciological Data at NSIDC: field data, temperature, and ice velocity Science.gov (United States) Bauer, R.; Bohlander, J.; Scambos, T.; Berthier, E.; Raup, B.; Scharfen, G.
2003-12-01 An extensive collection of many Antarctic glaciological parameters is available for the polar science community upon request. The National Science Foundation's Office of Polar Programs funds the Antarctic Glaciological Data Center (AGDC) at the National Snow and Ice Data Center (NSIDC) to archive and distribute Antarctic glaciological and cryospheric system data collected by the U.S. Antarctic Program. AGDC facilitates data exchange among Principal Investigators, preserves recently collected data useful to future research, gathers data sets from past research, and compiles continent-wide information useful for modeling and field work planning. Data sets are available via our web site, http://nsidc.org/agdc/. From here, users can access extensive documentation, citation information, locator maps, derived images and references, and the numerical data. More than 50 Antarctic scientists have contributed data to the archive. Among the compiled products distributed by AGDC are VELMAP and THERMAP. THERMAP is a compilation of over 600 shallow firn temperature measurements ('10-meter temperatures') collected since 1950. These data provide a record of mean annual temperature and potentially hold a record of climate change on the continent. The data are represented with maps showing the traverse routes, and include data sources, measurement techniques, and additional measurements made at each site, e.g., snow density and accumulation. VELMAP is an archive of surface ice velocity measurements for the Antarctic Ice Sheet. The primary objective of VELMAP is to assemble a historic record of outlet glacier and ice shelf motion over Antarctica. The collection includes both PI-contributed measurements and data generated at NSIDC using Landsat and SPOT satellite imagery. Tabular data contain position, speed, bearing, and data quality information, and related references. Two new VELMAP data sets are highlighted: the Mertz Glacier and the Institute Ice Stream. Mertz Glacier ice 14. Calculation of local characteristics of velocity field in turbulent coolant flow in fast reactor fuel assembly International Nuclear Information System (INIS) Muehlbauer, P. 1981-08-01 Experience gained with the application of the computer code VELASCO in calculating the velocity field in fast reactor fuel assemblies, taking into account configuration disturbances due to fuel pin displacement, is described. Theoretical results are compared with the results of experiments conducted by UJV on the aerodynamic models HEM-1 (a model of the fuel assembly central part) and HEM-2 (a model of the fuel assembly peripheral part). Results are reported for the calculated distribution of shear stress on wetted rod surfaces and on the assembly wall (model HEM-2), and the corresponding experimental results are shown. The shear stress distribution on wetted surfaces obtained using the VELASCO code allowed an assessment of the code's ability to capture local parameters of turbulent flow through a fuel rod bundle. The applicability of the code for calculating mean velocities in the individual zones, e.g. in elementary cells, was also tested. (B.S.) 15. Doppler-shifted fluorescence imaging of velocity fields in supersonic reacting flows Science.gov (United States) Allen, M. G.; Davis, S. J.; Kessler, W. J.; Sonnenfroh, D. M. 1992-01-01 The application of Doppler-shifted fluorescence imaging of velocity fields in supersonic reacting flows is analyzed.
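At the core of such velocimetry is the linear Doppler relation v = c·Δν/ν0 between the measured line shift and the line-of-sight velocity; a toy conversion, with all numbers illustrative rather than taken from the study, is:

```python
# Line-of-sight velocity from a measured Doppler shift, v = c * dnu / nu0.
# The wavelength and shift below are illustrative, not study values.
c = 2.998e8                  # speed of light, m/s
lam0 = 283.0e-9              # near-UV OH excitation wavelength, m (approx.)
nu0 = c / lam0               # rest frequency, Hz
dnu = 7.0e9                  # hypothetical measured shift, Hz
v_los = c * dnu / nu0        # equivalently, v = dnu * lam0
print(f"v_los = {v_los:.0f} m/s")   # ~2 km/s, a supersonic-flow scale
```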
Focussing on fluorescence of the OH molecule in typical H2-air Scramjet flows, the effects of uncharacterized variations in temperature, pressure, and collisional partner composition across the measurement plane are examined. Detailed measurements of the (1,0) band OH lineshape variations in H2-air combustion are used, along with single-pulse and time-averaged measurements of an excimer-pumped dye laser, to predict the performance of a model velocimeter with typical Scramjet flow properties. The analysis demonstrates the need for modification and control of the laser bandshape in order to permit accurate velocity measurements in the presence of multivariant flow properties. 16. Morphology and velocity field of the large flare on the solar disc on July 14, 1980 International Nuclear Information System (INIS) Xuan, J.; Li, Z. 1982-01-01 The active region morphology, the features of the solar radio bursts, and the sight-line velocity distribution of a flare of importance 3B on the solar disk (AR 2562) on July 14, 1980, are discussed. The preliminary analysis made here suggests that measurement of the velocity fields of active regions is important in treating flare mechanisms. It is noted that a few hours before the eruption of the flare, the penumbral fibers of the sunspots on the corresponding photospheric part under the flare rapidly attenuated. It is thought that the instability of the flare may derive from the rapid changes occurring in the magnetic boundary conditions of the photosphere. These in turn denote the rapid emergence, subsidence, or dissipation of the magnetic flux. The spectra of the active filament in the later period of the flare are seen as indicating that most of the ejected matter was rolling, in addition to ascending or descending. 17. Hybrid micro-/nano-particle image velocimetry for 3D3C multi-scale velocity field measurement in microfluidics International Nuclear Information System (INIS) Min, Young Uk; Kim, Kyung Chun 2011-01-01 The conventional two-dimensional (2D) micro-particle image velocimetry (micro-PIV) technique has an inherent bias error, due to the depth of focus along the optical axis, when measuring the velocity field near the wall of a microfluidic device. However, far-field measurement of velocity vectors yields good accuracy for micro-scale flows. Nano-PIV using the evanescent wave of total internal reflection fluorescence microscopy can measure near-field velocity vectors within a distance of around 200 nm from the solid surface. A micro-/nano-hybrid PIV system is proposed to measure both near- and far-field velocity vectors simultaneously in microfluidics. A near-field particle image can be obtained by total internal reflection fluorescence microscopy using nanoparticles, and the far-field velocity vectors are measured by three-hole defocusing micro-particle tracking velocimetry (micro-PTV) using micro-particles. In order to identify near- and far-field particle images, lasers of different wavelengths are adopted and tested in a straight microchannel for acquiring the three-dimensional three-component velocity field. We found that the new technique gives superior accuracy for the velocity profile near the wall compared to that of conventional nano-PIV. This method has been successfully applied to precisely measure wall shear stress in 2D microscale Poiseuille flows. 18. Quasi-direct numerical simulation of a pebble bed configuration.
Part I: Flow (velocity) field analysis International Nuclear Information System (INIS) Shams, A.; Roelofs, F.; Komen, E.M.J.; Baglietto, E. 2013-01-01 Highlights: ► Quasi-direct numerical simulations (q-DNS) of a pebble bed configuration have been performed. ► This q-DNS database may serve as a reference for the validation of different turbulence modeling approaches. ► A wide range of qualitative and quantitative data throughout the computational domain has been generated. ► Results for the mean, RMS and covariance of the velocity field are extensively reported in this paper. -- Abstract: High temperature reactors (HTR) are being considered for deployment around the world because of their excellent safety features. The fuel is embedded in a graphite moderator and can sustain very high temperatures. However, the appearance of hot spots in the pebble bed cores of HTRs may affect the integrity of the pebbles. A good prediction of the flow and heat transport in such a pebble bed core is a challenge for available turbulence models, and such models need to be validated. In the present article, quasi-direct numerical simulations (q-DNS) of a pebble bed configuration are reported, which may serve as a reference for the validation of different turbulence modeling approaches. Such approaches can be used in order to perform calculations for a randomly arranged pebble bed. Simulations are performed at a Reynolds number of 3088, based on pebble diameter, with a porosity level of 0.42. Detailed flow analyses have shown complex flow physics, which makes this case challenging for turbulence model validation. Hence, a wide range of qualitative and quantitative data for the velocity and temperature fields has been extracted for this benchmark. In the present article (part I), results related to the flow field (mean, RMS and covariance of velocity) are documented and discussed in detail. The discussion regarding the temperature field will be published in a separate article. 19. Shear-wave velocity models and seismic sources in Campanian volcanic areas: Vesuvius and Phlegraean fields Energy Technology Data Exchange (ETDEWEB) Guidarelli, M; Zille, A; Sarao, A [Dipartimento di Scienze della Terra, Universita degli Studi di Trieste, Trieste (Italy); Natale, M; Nunziata, C [Dipartimento di Geofisica e Vulcanologia, Universita di Napoli 'Federico II', Napoli (Italy); Panza, G F [Dipartimento di Scienze della Terra, Universita degli Studi di Trieste, Trieste (Italy); Abdus Salam International Centre for Theoretical Physics, Trieste (Italy)] 2006-12-15 This chapter summarizes a comparative study of shear-wave velocity models and seismic sources in the Campanian volcanic areas of Vesuvius and the Phlegraean Fields. These velocity models were obtained through the nonlinear inversion of surface-wave tomography data, using as a priori constraints the relevant information available in the literature. Local group velocity data were obtained by means of frequency-time analysis for periods between 0.3 and 2 s and were combined with group velocity data for periods between 10 and 35 s from regional events located in the Italian peninsula and bordering areas, and with two-station phase velocity data for periods between 25 and 100 s.
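Frequency-time analysis of this kind can be caricatured in a few lines: a narrow band-pass filter is centered on each period, the group arrival is picked at the envelope maximum, and U = distance/time. The trace, distance, and filter width below are synthetic stand-ins, not data from the study:

```python
import numpy as np
from scipy.signal import hilbert

# Toy multiple-filter (frequency-time) analysis for group velocity.
# Synthetic dispersed wavetrain: 2 s energy arrives at 30 s, 1 s at 37.5 s.
fs, dist = 50.0, 30e3                      # sampling rate (Hz), distance (m)
t = np.arange(0.0, 60.0, 1.0 / fs)
trace = (np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.5 * ((t - 30.0) / 3.0) ** 2)
         + np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.5 * ((t - 37.5) / 3.0) ** 2))

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
spec = np.fft.rfft(trace)
for period in (2.0, 1.0):
    fc = 1.0 / period
    gauss = np.exp(-0.5 * ((freqs - fc) / (0.1 * fc)) ** 2)  # narrow band
    filtered = np.fft.irfft(spec * gauss, t.size)
    env = np.abs(hilbert(filtered))        # envelope of the filtered trace
    U = dist / t[np.argmax(env)]           # group velocity from peak arrival
    print(f"T = {period:.1f} s  ->  U ~ {U:.0f} m/s")
```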
In order to invert the Rayleigh wave dispersion curves, we applied the nonlinear inversion method called hedgehog and retrieved average models for the first 30-35 km of the lithosphere, with the lower part of the upper mantle being kept fixed on the basis of existing regional models. A feature common to the two volcanic areas is a low shear velocity layer centered at a depth of about 10 km, while outside the cone and along a path in the northeastern part of the Vesuvius area this layer is absent. This low velocity can be associated with the presence of partial melting and, therefore, may represent a rather diffuse crustal magma reservoir, fed by a deeper one that is regional in character and located in the uppermost mantle. The study of seismic sources in terms of the moment tensor is suitable for an investigation of physical processes within a volcano; indeed, its components (double couple, compensated linear vector dipole, and volumetric) can be related to the movements of magma and fluids within the volcanic system. Although for many recent earthquake events the percentage of the double couple component is high, our results also show the presence of significant non-double couple components in both volcanic areas. (author) 20. Heliospheric magnetic field polarity inversions driven by radial velocity field structures Czech Academy of Sciences Publication Activity Database Landi, S.; Hellinger, Petr; Velli, M. 2006-01-01 Roč. 33, č. 14 (2006), L14101/1-L14101/5 ISSN 0094-8276 Grant - others: European Commission (XE) HRPN-CT-2001-00310 Institutional research plan: CEZ:AV0Z30420517 Keywords: solar wind * magnetic field polarity inversions * microstreams * turbulence Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 2.602, year: 2006 1. Vlasov-Maxwell equilibrium solutions for Harris sheet magnetic field with Kappa velocity distribution International Nuclear Information System (INIS) Fu, W.-Z.; Hau, L.-N. 2005-01-01 An exact solution of the steady-state, one-dimensional Vlasov-Maxwell equations for a plasma current sheet with oppositely directed magnetic field was found by Harris in 1962. The so-called Harris magnetic field model assumes Maxwellian velocity distributions for oppositely drifting ions and electrons and has been widely used for plasma stability studies. This paper extends the Harris solutions by using more general κ distribution functions that incorporate the Maxwellian distribution in the limit κ→∞. A new functional form for the plasma pressure as a function of the magnetic vector potential p(A) is found, and the magnetic field is a modified tanh z function. In the extended solutions the effective temperature is no longer spatially uniform as in the Harris model, and the thickness of the current layer decreases with decreasing κ. 2. Influence of relaxation on emission and excitation of coherent states of electromagnetic field in the Jaynes-Cummings model International Nuclear Information System (INIS) Verlan, E.M. 2003-01-01 A two-level atom interacting with a field mode is considered. The field frequency is assumed to be equal to the atomic transition frequency. The relaxation equations of the atom-field system are written in the basis of dressed states of the Jaynes-Cummings model, taking into account quasi-resonant pumping. Their solutions are derived for the stationary regime. The average amplitude of the coherent electromagnetic field is found. 3.
A new car-following model for autonomous vehicles flow with mean expected velocity field Science.gov (United States) Wen-Xing, Zhu; Li-Dong, Zhang 2018-02-01 Due to the development of modern scientific technology, autonomous vehicles may be able to connect with each other and share the information collected by each vehicle. An improved forward-looking car-following model with a mean expected velocity field is proposed to describe the flow behavior of autonomous vehicles. The new model has three key parameters: adjustable sensitivity, strength factor, and mean expected velocity field size. Two lemmas and one theorem are proven as criteria for judging the stability of homogeneous autonomous vehicle flow. Theoretical results show that greater parameter values mean larger stability regions. A series of numerical simulations was carried out to check the stability and the fundamental diagram of the autonomous flow. From the numerical simulation results, the profiles, hysteresis loops, and density waves of the autonomous vehicle flow are exhibited. The results show that with increased sensitivity, strength factor, or field size, the traffic jam is suppressed effectively, in good accordance with the theoretical results. Moreover, the fundamental diagrams corresponding to the three parameters were obtained. They demonstrate that these parameters play almost the same role in the traffic flux: below the critical density, the larger the parameter, the greater the flux; above the critical density, the tendency reverses. In general, the three parameters have a great influence on the stability and jam state of the autonomous vehicle flow. 4. Description of turbulent velocity and temperature fields of single phase flow through tight rod bundles International Nuclear Information System (INIS) Monir, C. 1991-02-01 A two-dimensional procedure, VANTACY-II, describing the turbulent velocity and temperature fields for single-phase flow in tight lattices is presented and validated. The flow is assumed to be steady, incompressible, and hydraulically and thermally fully developed. First, the state of the art of turbulent momentum and heat transport in tight lattices is documented. It is shown that there is a necessity for experimental investigations in the field of turbulent heat transport. The presented new procedure is based on the turbulence model VELASCO-TUBS by NEELEN. The numerical solution of the balance equations is done by the finite element method code VANTACY by KAISER. The validation of the new procedure VANTACY-II is done by comparing the numerically calculated data for the velocity and temperature fields and for natural mixing with the experimental data of SEALE. The comparison shows a good agreement of experimental and numerically computed data. The observed differences can be mainly attributed to the model of the turbulent PRANDTL number used in the new procedure. (orig.) 5. Contribution of the source velocity to the scattering of electromagnetic fields caused by airborne magnetic dipoles International Nuclear Information System (INIS) Sampaio, Edson Emanoel Starteri 2014-01-01 The velocity of controlled airborne sources in electromagnetic geophysical surveys plays an additional role in the scattering of the fields by the earth. Therefore, it is necessary to investigate its contribution to the space and time variation of the secondary electromagnetic fields. The model of a vertical magnetic dipole moving at a constant speed along a horizontal line in the air above a homogeneous conductive half-space constitutes a first approach to stress the kinematic aspect and determine the difference between the fields due to an airborne and a static source.
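As a point of reference for that comparison, the static (zero-velocity) primary field at the receiver follows from the free-space dipole formula. A minimal sketch using the towed-bird geometry quoted next (the result is the primary field only, not the scattered half-space response):

```python
import numpy as np

# Static baseline: free-space field of a vertical magnetic dipole at the
# receiver. The geometry matches the towed-bird model described here
# (10^4 A m^2 moment, 100 m horizontal / 50 m vertical offset); this is
# the primary field only, not the half-space response.
mu0 = 4.0e-7 * np.pi
m = 1.0e4                       # dipole moment, A m^2
dx, dz = 100.0, -50.0           # receiver offsets from the source, m
r = np.hypot(dx, dz)
cos_t = dz / r                  # angle from the dipole (vertical) axis
Bz = mu0 * m / (4.0 * np.pi * r**3) * (3.0 * cos_t**2 - 1.0)
print(f"primary Bz at receiver ~ {Bz * 1e12:.0f} pT")
```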
The magnetic moment of the source is equal to 10^4 A m^2, its height is 120 m, and the horizontal and vertical separations between it and the receiver are, respectively, equal to 100 and 50 m: these values of the model are typical of towed-bird airborne TDEM surveys. We employed four values for the common velocity of source and receiver (0, 60, 80, and 100 m s^-1), four values of the conductivity of the half-space (0.5, 0.1, 0.05, and 0.01 S m^-1), and two causal source currents (a box current with periods of 80 and 10 ms, and a periodic current with frequencies of 12.5 and 100 Hz). The results demonstrate that the relative velocity between source and medium yields a measurable variation compared to the static condition. Therefore, it must be taken into account by compensating for the discrepancy in measured data using the corresponding theoretical result. The results also show that it is necessary to adjust the concepts of time and frequency domain for electromagnetic measurements with traveling sources. (paper) 6. The Seismo-Generated Electric Field Probed by the Ionospheric Ion Velocity Science.gov (United States) (Tiger) Liu, Jann-Yenq 2017-04-01 The ion density, ion temperature, and ion velocity probed by IPEI (Ionospheric Plasma and Electrodynamics Instrument) onboard ROCSAT (i.e. FORMOSAT-1), and the global ionospheric map (GIM) of the total electron content (TEC) derived from measurements of ground-based GPS receivers, are employed to study seismo-ionospheric precursors (SIPs) of the 31 March 2002 M6.8 earthquake in Taiwan. The GIM TEC and the ROCSAT/IPEI ion density significantly decrease specifically over the epicenter area 1-5 days before the earthquake, which suggests that the associated SIPs have been observed. The ROCSAT/IPEI ion temperature reveals no significant changes before and after the earthquake, while the latitude-time-TEC plots extracted from the GIMs along the Taiwan longitude illustrate that the equatorial ionization anomaly significantly weakens and moves equatorward, which indicates that the daily dynamo electric field was disturbed and cancelled by a possible seismo-generated electric field 2 days (29 March) before the earthquake. Here, for the first time, a vector parameter, the ion velocity, is employed to study SIPs. It is found that the ROCSAT/IPEI ion velocity becomes significantly downward, which confirms that a westward electric field of about 0.91 mV/m, generated during the earthquake preparation period, was essential 1-5 days before the earthquake. Liu, J. Y., and C. K. Chao (2016), An observing system simulation experiment for FORMOSAT-5/AIP detecting seismo-ionospheric precursors, Terrestrial Atmospheric and Oceanic Sciences, DOI: 10.3319/TAO.2016.07.18.01(EOF5). 7. Self-Organisation and Intermittent Coherent Oscillations in the EXTRAP T2 Reversed Field Pinch Science.gov (United States) Cecconello, M.; Malmberg, J.-A.; Sallander, E.; Drake, J. R. Many reversed-field pinch (RFP) experiments exhibit a coherent oscillatory behaviour that is characteristic of discrete dynamo events and is associated with intermittent current profile self-organisation phenomena.
However, in the vast majority of the discharges in the resistive shell RFP experiment EXTRAP T2, the dynamo activity does not show global, coherent oscillatory behaviour. The internally resonant tearing modes are phase-aligned and wall-locked, resulting in a large localised magnetic perturbation. Equilibrium and plasma parameters have a level of high frequency fluctuations, but the average values are quasi-steady. For some discharges, however, the equilibrium parameters exhibit the oscillatory behaviour characteristic of discrete dynamo events. For these discharges, the trend observed in the tearing mode spectra, associated with the onset of the discrete relaxation event behaviour, is a relatively higher amplitude of m = 0 mode activity and a relatively lower amplitude of m = 1 mode activity compared with their average values. Global plasma parameters and model profile calculations for sample discharges representing the two types of relaxation dynamics are presented. 9. Comparison of Glaucoma Progression Detection by Optical Coherence Tomography and Visual Field. Science.gov (United States) Zhang, Xinbo; Dastiridou, Anna; Francis, Brian A; Tan, Ou; Varma, Rohit; Greenfield, David S; Schuman, Joel S; Huang, David 2017-12-01 To compare longitudinal glaucoma progression detection using optical coherence tomography (OCT) and visual field (VF) in a validity assessment. We analyzed subjects with more than 4 semi-annual follow-up visits (every 6 months) in the multicenter Advanced Imaging for Glaucoma Study. Fourier-domain optical coherence tomography (OCT) was used to map the thickness of the peripapillary retinal nerve fiber layer (NFL) and the ganglion cell complex (GCC). OCT-based progression detection was defined as a significant negative trend for either NFL or GCC. VF progression was declared if either the event or the trend analysis reached significance. The analysis included 356 glaucoma suspect/preperimetric glaucoma (GS/PPG) eyes and 153 perimetric glaucoma (PG) eyes. Follow-up length was 54.1 ± 16.2 months for GS/PPG eyes and 56.7 ± 16.0 for PG eyes.
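The trend criterion used here ("a significant negative trend for either NFL or GCC") amounts to an ordinary least-squares slope test on the semi-annual thickness series; a toy version with invented numbers:

```python
import numpy as np
from scipy import stats

# Toy trend-based progression call: OLS slope over semi-annual visits,
# flagged when the slope is negative and significant. The thickness
# series below is invented for illustration.
visits_yr = np.arange(0.0, 5.0, 0.5)                   # 10 visits over 5 yr
rnfl_um = np.array([95.0, 94.5, 94.0, 93.2, 93.5,
                    92.4, 92.0, 91.1, 91.3, 90.2])     # NFL thickness, um
res = stats.linregress(visits_yr, rnfl_um)
progressing = res.slope < 0 and res.pvalue < 0.05      # one common criterion
print(f"slope = {res.slope:.2f} um/yr, p = {res.pvalue:.1e}, "
      f"progression: {progressing}")
```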
Progression was detected in 62.1% of PG eyes and 59.8% of GS/PPG eyes by OCT, significantly (P glaucoma. While the utility of NFL declines in advanced glaucoma, GCC remains a sensitive progression detector from early to advanced stages. 10. Fractal analysis of en face tomographic images obtained with full field optical coherence tomography Energy Technology Data Exchange (ETDEWEB) Gao, Wanrong; Zhu, Yue [Department of Optical Engineering, Nanjing University of Science and Technology, Jiangsu (China)] 2017-03-15 The quantitative modeling of the imaging signal of pathological and healthy areas is necessary to improve the specificity of diagnosis with tomographic en face images obtained with full-field optical coherence tomography (FFOCT). In this work, we propose to use the depth-resolved change in the fractal parameter as a quantitative, specific biomarker of the stages of disease. The idea is based on the fact that tissue is a random medium and only statistical parameters that characterize tissue structure are appropriate. We successfully relate the imaging signal in FFOCT to the tissue structure in terms of the scattering function and the coherent transfer function of the system. The formula is then used to analyze the ratio of the Fourier transforms of the cancerous tissue to the normal tissue. We found that when the tissue changes from normal to cancerous, the ratio of the spectra of the index inhomogeneities takes the form of an inverse power law, and the changes in the fractal parameter can be determined by estimating the slopes of the spectra of the ratio plotted on a log-log scale. Fresh normal and cancerous liver tissues were imaged to demonstrate the potential diagnostic value of the method at early stages, when there are no significant changes in tissue microstructure. 11. Numerical correction of distorted images in full-field optical coherence tomography Science.gov (United States) Min, Gihyeon; Kim, Ju Wan; Choi, Woo June; Lee, Byeong Ha 2012-03-01 We propose a method which can numerically correct the distorted en face images obtained with a full-field optical coherence tomography (FF-OCT) system. It is shown that the FF-OCT image of a deep region of a biological sample is easily blurred or degraded because the sample generally has a refractive index (RI) much higher than that of its surrounding medium. Analysis shows that the focal plane of the imaging system is separated from the imaging plane of the coherence-gated system due to the RI mismatch. This image-blurring phenomenon is experimentally confirmed by imaging the chrome pattern of a resolution test target through its glass substrate in water. Moreover, we demonstrate that the blurred image can be appreciably corrected by using a numerical correction process based on the Fresnel-Kirchhoff diffraction theory. The proposed correction method is applied to enhance the image of a human hair, which permits the distinct identification of the melanin granules inside the cortex layer of the hair shaft. 12. The crustal velocity field mosaic of the Alpine Mediterranean area (Italy): Insights from new geodetic data Science.gov (United States) Farolfi, Gregorio; Del Ventisette, Chiara 2016-04-01 A new horizontal crustal velocity field of the Alpine Mediterranean area was determined from continuous long time series (6.5 years) of 113 Global Navigation Satellite System (GNSS) permanent stations.
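Station velocities from such processing are usually interpreted against a rigid-plate prediction, v = ω × r, from an Euler pole. A rough sketch of that step follows; the pole values are only of the order of published ITRF Eurasia estimates, and the site is illustrative, not one of the 113 stations:

```python
import numpy as np

# Rigid-plate velocity predicted at a site from an Euler pole, v = omega x r.
# Pole location and rate are rough, order-of-magnitude Eurasia values;
# the site is illustrative, not an actual processed station.
R = 6.371e6                                               # Earth radius, m
site_lat, site_lon = np.radians(43.0), np.radians(11.0)   # central Italy
r = R * np.array([np.cos(site_lat) * np.cos(site_lon),
                  np.cos(site_lat) * np.sin(site_lon),
                  np.sin(site_lat)])

pole_lat, pole_lon = np.radians(54.2), np.radians(-98.8)  # approx. pole
rate = np.radians(0.257) / (1e6 * 365.25 * 86400.0)       # deg/Myr -> rad/s
omega = rate * np.array([np.cos(pole_lat) * np.cos(pole_lon),
                         np.cos(pole_lat) * np.sin(pole_lon),
                         np.sin(pole_lat)])

v = np.cross(omega, r)                                    # ECEF velocity, m/s
print(f"predicted plate motion ~ {np.linalg.norm(v) * 3.156e10:.0f} mm/yr")
```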
The processing was performed using a state-of-the-art absolute antenna phase center correction model and the recomputed precise IGS orbits available since April 2014. Moreover, a new, more accurate tropospheric mapping function for geodetic applications was adopted. The results provide a new detailed map of the kinematics throughout the entire study area. This area is characterized by a complex tectonic setting driven by the interaction of the Eurasian and African plates. The eastern Alps, Corsica, Sardinia and the Tyrrhenian Sea (which is covered only by interpolated data) show small velocity residuals with respect to the Eurasian plate. The Apennines axis separates two different velocity patterns, the Adriatic and the Tyrrhenian areas. The area around the Messina Strait, which separates peninsular Italy and Sicily, represents a poorly understood region. The results identify an important boundary zone between two different domains, Calabria and Sicily, which are characterized by different crustal motions. The northeastern part of Sicily and Calabria move with the Adriatic area, whilst the rest of Sicily, Malta and Lampedusa are dominated by African motion. 13. Three-dimensional groundwater velocity field in an unconfined aquifer under irrigation International Nuclear Information System (INIS) Zlotnik, V. 1990-01-01 A method for three-dimensional flow velocity calculation has been developed to evaluate unconfined aquifer sensitivity to areal agricultural contamination of groundwater. The methodology of Polubarinova-Kochina is applied to an unconfined homogeneous compressible or incompressible anisotropic aquifer. It is based on a three-dimensional groundwater flow model with a boundary condition on the moving surface. Analytical solutions are obtained for the hydraulic head under the influence of areal sources of circular and rectangular shape using integral transforms. The two-dimensional Hantush formulas result from the vertical averaging of the three-dimensional solutions, and the asymptotic behavior of the solutions is analyzed. Analytical expressions for the flow velocity components are obtained from the gradient of the hydraulic head field. Areal and temporal variability of the specific yield in groundwater recharge areas is also taken into account. As a consequence of the linearization of the boundary condition, the operation of any irrigation system with respect to groundwater is represented by a superposition of the influences of the operating wells and of circular and rectangular sources. Combining the obtained solutions with the Dagan or Neuman well functions, one can develop computer codes for the analytical computation of the three-dimensional groundwater hydraulic head and velocity component distributions. Methods for practical implementation are discussed. (Author) (20 refs., 4 figs.) 14. Update schemes of multi-velocity floor field cellular automaton for pedestrian dynamics Science.gov (United States) Luo, Lin; Fu, Zhijian; Cheng, Han; Yang, Lizhong 2018-02-01 Modeling pedestrian movement is an interesting problem both in statistical physics and in computational physics. Update schemes of cellular automaton (CA) models for pedestrian dynamics govern the schedule of pedestrian movement. Usually, different update schemes make the models behave in different ways, which should be carefully recalibrated. Thus, in this paper, we investigated the influence of four different update schemes, namely the parallel/synchronous scheme, the random scheme, the order-sequential scheme and the shuffled scheme, on pedestrian dynamics.
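The effect of the update discipline can be made concrete with a much simpler toy than the FFCA: the same deterministic "hop right if the target cell is free" rule on a ring gives different mean velocities under parallel and random-sequential updates. The sketch below is a stand-in illustration, not the model of the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy exclusion process on a ring: identical hopping rule, two update
# schemes, different mean velocities. Stand-in illustration only.
L, density, steps = 100, 0.3, 2000
N = int(L * density)

def mean_velocity(parallel):
    occ = np.zeros(L, dtype=bool)
    occ[rng.choice(L, N, replace=False)] = True
    moved = 0
    for _ in range(steps):
        if parallel:                        # all cells decide simultaneously
            can = occ & ~np.roll(occ, -1)   # occupied, right neighbor free
            occ ^= can | np.roll(can, 1)    # vacate movers, fill targets
            moved += int(can.sum())
        else:                               # agents updated one at a time
            for i in rng.permutation(np.flatnonzero(occ)):
                j = (i + 1) % L
                if occ[i] and not occ[j]:
                    occ[i], occ[j] = False, True
                    moved += 1
    return moved / (steps * N)

print(f"parallel: {mean_velocity(True):.3f}, "
      f"random-sequential: {mean_velocity(False):.3f}")
```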
A multi-velocity floor field cellular automaton (FFCA), accounting for the changes of pedestrians' moving properties along walking paths and for the heterogeneity of pedestrians' walking abilities, was used. For the parallel scheme only, collision detection and resolution must be considered, resulting in a great difference from the other update schemes. For pedestrian evacuation, the evacuation time is enlarged, and the difference in pedestrians' walking abilities is better reflected, under the parallel scheme. Facing a bottleneck, for example an exit, the parallel scheme leads to a longer congestion period and a more dispersed density distribution. The exit flow and the space-time distributions of density and velocity show significant discrepancies among the four update schemes when pedestrian flow with a high desired velocity is simulated. Update schemes may have no influence on the simulated pedestrians' tendency to follow others, but the sequential and shuffled update schemes may enhance the effect of pedestrians' familiarity with the environment. 15. Dilution and Mixing in transient velocity fields: a first-order analysis Science.gov (United States) Di Dato, Mariaines; de Barros, Felipe P. J.; Fiori, Aldo; Bellin, Alberto 2017-04-01 An appealing remediation technique is in situ oxidation, whose effectiveness is hampered by difficulties in obtaining good mixing of the injected oxidant with the contaminant, particularly when the contaminant plume is contained and therefore its deformation is physically constrained. Under such conditions (i.e. containment), mixing may be augmented by inducing temporal fluctuations of the velocity field. The temporal variability of the flow field may increase the deformation of the plume such that the diffusive mass flux becomes more effective. A transient periodic velocity field can be obtained by an engineered sequence of injections and extractions from wells, which may also serve as a hydraulic barrier to confine the plume. Assessing the effectiveness of periodic flows in maximizing solute mixing is a difficult task, given the need to use a 3D setup and the large number of possible flow configurations that should be analyzed in order to identify the optimal one. This is the typical situation in which analytical solutions, though approximate, may assist modelers in screening possible alternative flow configurations such that solute dilution is maximized. To quantify dilution (i.e. a precondition that enables reactive mixing) we utilize the concept of the dilution index [1]. In this presentation, the periodic flow takes place in an aquifer with a spatially variable hydraulic conductivity field, which is modeled as a stationary spatial random function. We developed a novel first-order analytical solution for the dilution index under the hypothesis that the flow can be approximated as a sequence of steady state configurations with the mean velocity changing with time in intensity and direction. This is equivalent to assuming that the characteristic time of the transient behavior is small compared to the period characterizing the change in time of the mean velocity. A few closed paths have been analyzed, quantifying their effectiveness in enhancing dilution and thereby mixing. 16. Field-effect transistor having a superlattice channel and high carrier velocities at high applied fields Science.gov (United States) Chaffin, R.J.; Dawson, L.R.; Fritz, I.J.; Osbourn, G.C.; Zipperian, T.E.
1987-06-08 A field effect transistor comprises a semiconductor having a source, a drain, a channel and a gate in operational relationship. The semiconductor is a strained layer superlattice comprising alternating quantum well and barrier layers, the quantum well layers and barrier layers being selected from the group of layer pairs consisting of InGaAs/AlGaAs, InAs/InAlGaAs, and InAs/InAlAsP. The layer thicknesses of the quantum well and barrier layers are sufficiently thin that the alternating layers constitute a superlattice which has a superlattice conduction band energy level structure in k-vector space. The layer thicknesses of the quantum well layers are selected to provide a superlattice L_2D valley which has a shape substantially more two-dimensional than that of the bulk L-valley. 2 figs. 17. Understanding strong-field coherent control: Measuring single-atom versus collective dynamics International Nuclear Information System (INIS) Trallero-Herrero, Carlos; Weinacht, Thomas; Spanner, Michael 2006-01-01 We compare the results of two strong-field coherent control experiments: one which optimizes multi-photon population transfer in atomic sodium (from the 3s to the 4s state, measured by spontaneous emission from the 3p-3s transition) and one that optimizes stimulated emission on the 3p-3s transition in an ensemble of sodium atoms. Both experiments make use of intense, shaped ultrafast laser pulses discovered by a genetic algorithm inside a learning control loop. Optimization leads to improvements in the spontaneous and stimulated emission yields of about 4 and 10^4, respectively, over an unshaped pulse. We interpret these results by modeling both the single-atom dynamics and the stimulated emission buildup through numerical integration of the Schroedinger and Maxwell equations. Our interpretation leads to the conclusion that modest yields for controlling single quantum systems can lead to dramatic effects whenever an ensemble of such systems acts collectively following controlled impulsive excitation. 18. Full-Field Optical Coherence Tomography as a Diagnosis Tool: Recent Progress with Multimodal Imaging Directory of Open Access Journals (Sweden) Olivier Thouvenin 2017-03-01 Full-field optical coherence tomography (FF-OCT) is a variant of OCT that is able to register 2D en face views of scattering samples at a given depth. Thanks to its superior resolution, it can quickly reveal information similar to histology without the need to physically section the sample. The sensitivity and specificity levels of diagnoses performed with FF-OCT are 80% to 95% of the equivalent histological diagnosis performances and could therefore benefit from improvement. Therefore, multimodal systems have been designed to increase the diagnostic performance of FF-OCT. In this paper, we discuss which contrasts can be measured with such multimodal systems in the context of ex vivo biological tissue examination. We particularly emphasize three multimodal combinations to measure the tissue mechanics, dynamics, and molecular content, respectively. 19. Qualitative investigation of fresh human scalp hair with full-field optical coherence tomography Science.gov (United States) Choi, Woo June; Pi, Long-Quan; Min, Gihyeon; Lee, Won-Soo; Lee, Byeong Ha 2012-03-01 We have investigated depth-resolved cellular structures of unmodified fresh human scalp hairs with ultrahigh-resolution full-field optical coherence tomography (FF-OCT).
A Linnik-type white-light interference microscope was implemented in-house to observe the micro-internal layers of human hairs in their natural environment. In hair shafts, FF-OCT qualitatively revealed the cellular hair compartments of cuticle and cortex layers, including keratin filaments and melanin granules. No significant difference between black and white hair shafts was observed except for the absence of melanin granules in the white hair, reflecting that the density of the melanin granules directly affects the hair color. An anatomical description of plucked hair bulbs was also obtained with the FF-OCT in three dimensions. We expect this approach will be useful for evaluating cellular alteration of natural hairs in cosmetic assessment or diagnosis of hair diseases. 20. Tissue imaging using full field optical coherence microscopy with short multimode fiber probe Science.gov (United States) Sato, Manabu; Eto, Kai; Goto, Tetsuhiro; Kurotani, Reiko; Abe, Hiroyuki; Nishidate, Izumi 2018-03-01 In achieving minimally invasive access to deeply located regions, the size of the imaging probe is important. We demonstrated full-field optical coherence tomography (FF-OCM) using an ultrathin forward-imaging short multimode fiber (SMMF) probe, of 50 μm core diameter, 125 μm outer diameter, and 7.4 mm length, designed for optical communications. The axial resolution was measured to be 2.14 μm, and the lateral resolution was evaluated to be below 4.38 μm using a test pattern (TP). The spatial mode and polarization characteristics of the SMMF were evaluated. Inserting the SMMF into an in vivo rat brain, 3D images were measured and 2D information on nerve fibers was obtained. The feasibility of an SMMF as an ultrathin forward-imaging probe in FF-OCM has been demonstrated. 1. Constraining the optical depth of galaxies and velocity bias with cross-correlation between the kinetic Sunyaev-Zeldovich effect and the peculiar velocity field Science.gov (United States) Ma, Yin-Zhe; Gong, Guo-Dong; Sui, Ning; He, Ping 2018-03-01 We calculate the cross-correlation function between the kinetic Sunyaev-Zeldovich (kSZ) effect and the reconstructed peculiar velocity field using linear perturbation theory, with the aim of constraining the optical depth τ and peculiar velocity bias of central galaxies with Planck data. We vary the optical depth τ and the velocity bias function b_v(k) = 1 + b(k/k_0)^n, and fit the model to the data, with and without varying the calibration parameter y_0 that controls the vertical shift of the correlation function. By constructing a likelihood function and constraining the τ, b and n parameters, we find that the quadratic power-law model of velocity bias, b_v(k) = 1 + b(k/k_0)^2, provides the best fit to the data. The best-fit values are τ = (1.18 ± 0.24) × 10⁻⁴, b = -0.84 (+0.16/-0.20) and y_0 = (12.39 +3.65/-3.66) × 10⁻⁹ (68 per cent confidence level). The probability of b > 0 is only 3.12 × 10⁻⁸ for the parameter b, which clearly suggests a detection of scale-dependent velocity bias. The fitting results indicate that the large-scale (k ≤ 0.1 h Mpc⁻¹) velocity bias is unity, while on small scales the bias tends to become negative. The value of τ is consistent with the stellar mass-halo mass and optical depth relationship proposed in the literature, and the negative velocity bias on small scales is consistent with the peak background split theory. Our method provides a direct tool for studying the gaseous and kinematic properties of galaxies.
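To make the bias model of entry 1 concrete, the sketch below evaluates b_v(k) = 1 + b(k/k_0)^n and fits b and n to a handful of invented bias measurements with scipy. The pivot k_0 and all data points are placeholders; this is not the authors' likelihood pipeline, which also varies τ and y_0.

```python
import numpy as np
from scipy.optimize import curve_fit

K0 = 0.1  # pivot scale k0 [h/Mpc]; an assumed value for illustration

def velocity_bias(k, b, n):
    """b_v(k) = 1 + b (k/k0)^n, the power-law bias form quoted above."""
    return 1.0 + b * (k / K0) ** n

# hypothetical measured bias points: k [h/Mpc], b_v, and 1-sigma errors
k = np.array([0.02, 0.05, 0.1, 0.2, 0.4])
bv = np.array([1.00, 0.99, 0.97, 0.85, 0.45])
sig = np.full_like(bv, 0.05)

(b_fit, n_fit), cov = curve_fit(velocity_bias, k, bv, sigma=sig, p0=(-0.1, 2.0))
print(f"b = {b_fit:.3f} +/- {np.sqrt(cov[0, 0]):.3f}, n = {n_fit:.2f}")
```

The same least-squares machinery generalizes to the full three-parameter fit once a model for the correlation amplitude y_0 is supplied.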
2. Propagation of partially coherent fields through planar dielectric boundaries using angle-impact Wigner functions I. Two dimensions. Science.gov (United States) Petruccelli, Jonathan C; Alonso, Miguel A 2007-09-01 We examine the angle-impact Wigner function (AIW) as a computational tool for the propagation of nonparaxial quasi-monochromatic light of any degree of coherence past a planar boundary between two homogeneous media. The AIWs of the reflected and transmitted fields in two dimensions are shown to be given by a simple ray-optical transformation of the incident AIW plus a series of corrections in the form of differential operators. The radiometric and leading six correction terms are studied for Gaussian Schell-model fields of varying transverse width, transverse coherence, and angle of incidence. 3. Easy monitoring of velocity fields in microfluidic devices using spatiotemporal image correlation spectroscopy. Science.gov (United States) Travagliati, Marco; Girardo, Salvatore; Pisignano, Dario; Beltram, Fabio; Cecchini, Marco 2013-09-03 Spatiotemporal image correlation spectroscopy (STICS) is a simple and powerful technique, well established as a tool to probe protein dynamics in cells. Recently, its potential as a tool to map velocity fields in lab-on-a-chip systems was discussed. However, the lack of studies on its performance has prevented its use for microfluidics applications. Here, we systematically and quantitatively explore STICS microvelocimetry in microfluidic devices. We exploit a simple experimental setup, based on a standard bright-field inverted microscope (no fluorescence required) and a high-fps camera, and apply STICS to map liquid flow in polydimethylsiloxane (PDMS) microchannels. Our data demonstrate optimal 2D velocimetry up to 10 mm/s flow and spatial resolution down to 5 μm. 4. A method for measuring the velocity flow field in the vicinity of a moving cascade International Nuclear Information System (INIS) Bammert, K.; Mobarak, A. 1977-01-01 Centrifugal compressors and blowers are often used for recycling the coolant gas in gas-cooled reactors. To achieve the required pressure ratios, highly loaded centrifugal compressors are built. The paper deals with a method of measuring the flow field in the vicinity of a moving impeller or cascade with hot wires. The relative flow pattern induced ahead of a cascade or impeller, or the rotating wakes behind a moving cascade (which is important for loss evaluation), can now be measured with the help of a single hot wire. The wire should be rotated about the axis of the probe at 3 different inclinations with respect to the approaching flow. The method has been used for measuring the flow field in the vicinity of the inducer of a highly loaded centrifugal compressor. The results and the accuracy of the method are discussed and the mean values have been compared with the theoretically estimated velocities. (orig.)
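The correlation step behind STICS-style velocimetry (entry 3 above) can be sketched in a few lines: the displacement of the spatial cross-correlation peak between two frames, divided by the frame interval, gives the local velocity. The FFT-based toy below is a generic illustration, not the authors' implementation.

```python
import numpy as np

def correlation_displacement(frame_a, frame_b):
    """Peak of the spatial cross-correlation between two frames gives the
    mean displacement (pixels); dividing by the frame interval yields velocity.
    A bare-bones sketch of the correlation step in STICS-type velocimetry."""
    fa = frame_a - frame_a.mean()
    fb = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(fa).conj() * np.fft.fft2(fb)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap indices above N/2 back to negative displacements
    shift = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return np.array(shift)  # (dy, dx) in pixels

# toy test: a Gaussian blob moved by (3, 5) pixels between frames
y, x = np.mgrid[0:64, 0:64]
blob = lambda cy, cx: np.exp(-((y - cy)**2 + (x - cx)**2) / 20.0)
print(correlation_displacement(blob(30, 30), blob(33, 35)))  # ~ [3 5]
```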
5. Statistical analysis of the velocity and scalar fields in reacting turbulent wall-jets Science.gov (United States) Pouransari, Z.; Biferale, L.; Johansson, A. V. 2015-02-01 The concept of local isotropy in a chemically reacting turbulent wall-jet flow is addressed using direct numerical simulation (DNS) data. Different DNS databases with isothermal and exothermic reactions are examined. The chemical reaction and heat release effects on the turbulent velocity, passive scalar, and reactive species fields are studied using their probability density functions (PDFs) and higher-order moments for velocities and scalar fields, as well as their gradients. With the aid of the anisotropy invariant maps for the Reynolds stress tensor, the heat release effects on the anisotropy level at different wall-normal locations are evaluated and found to be most accentuated in the near-wall region. It is observed that the small-scale anisotropies are persistent both in the near-wall region and inside the jet flame. Two exothermic cases with different Damköhler numbers are examined, and the comparison reveals that the Damköhler number effects are most dominant in the near-wall region, where the wall cooling effects are influential. In addition, with the aid of PDFs conditioned on the mixture fraction, the significance of the reactive scalar characteristics in the reaction zone is illustrated. We argue that the combined effects of strong intermittency and strong persistence of anisotropy at the small scales in the entire domain can affect mixing and ultimately the combustion characteristics of the reacting flow. 6. VELOCITY FIELD COMPUTATION IN VIBRATED GRANULAR MEDIA USING AN OPTICAL FLOW BASED MULTISCALE IMAGE ANALYSIS METHOD Directory of Open Access Journals (Sweden) Johan Debayle 2011-05-01 Full Text Available An image analysis method has been developed in order to compute the velocity field of a granular medium (sand grains, mean diameter 600 μm) submitted to different kinds of mechanical stresses. The differential method based on optical flow conservation consists in describing a dense motion field with vectors associated to each pixel. A multiscale, coarse-to-fine, analytical approach through tailor-sized windows yields the best compromise between accuracy and robustness of the results, while enabling an acceptable computation time. The corresponding algorithm is presented and its validation discussed through different tests. The results of the validation tests of the proposed approach show that the method is satisfactory when attributing specific values to parameters in association with the size of the image analysis window. An application in the case of vibrated sand has been studied. An instrumented laboratory device provides sinusoidal vibrations and enables external optical observations of sand motion in 3D transparent boxes. At 50 Hz, by increasing the relative acceleration G, the onset and development of two convective rolls can be observed. An ultra-fast camera records the grain avalanches, and several pairs of images are analysed by the proposed method. The vertical velocity profiles are deduced and allow the dimensions of the fluidized region to be precisely quantified as a function of G.
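The differential optical-flow idea in entry 6 rests on the brightness-constancy constraint I_x·u + I_y·v + I_t = 0, solved in a least-squares sense over an analysis window. A single-window, single-scale sketch (numpy only, no multiscale pyramid, invented test pattern) might look like this:

```python
import numpy as np

def optical_flow_window(I0, I1):
    """Least-squares optical-flow estimate (u, v) for one analysis window,
    from the brightness-constancy constraint Ix*u + Iy*v + It = 0.
    A generic sketch of the differential approach, not the authors'
    multiscale coarse-to-fine implementation."""
    Iy, Ix = np.gradient(I0.astype(float))   # np.gradient: axis 0 (y) first
    It = I1.astype(float) - I0.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v  # displacement in pixels per frame

# toy test: shift a smooth pattern by ~0.5 px in x
y, x = np.mgrid[0:32, 0:32]
I0 = np.sin(0.3 * x) + np.cos(0.2 * y)
I1 = np.sin(0.3 * (x - 0.5)) + np.cos(0.2 * y)
print(optical_flow_window(I0, I1))  # approx (0.5, 0.0)
```

A coarse-to-fine scheme of the kind the abstract describes would repeat this estimate on downsampled copies of the images and warp between levels.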
7. Measurements of 3D velocity and scalar field for a film-cooled airfoil trailing edge Energy Technology Data Exchange (ETDEWEB) Benson, Michael J.; Elkins, Christopher J.; Eaton, John K. [Stanford University, Department of Mechanical Engineering, Stanford, CA (United States)] 2011-08-15 The 3D velocity and concentration fields have been measured for flow in a pressure side cutback trailing edge film cooling geometry consisting of rectangular film cooling slots separated by tapered lands. The velocity field was measured using conventional magnetic resonance velocimetry, and the concentration distribution was measured with a refined magnetic resonance concentration technique that yields experimental uncertainties for the concentration between 5 and 6%. All experiments were performed in water. A separation bubble behind the slot lip entrains coolant and promotes rapid turbulent mixing at the upper edge of the coolant jet. Vortices from inside the slot feed channel and on the upper sides of the lands rapidly distort the initially rectangular shape of the coolant stream and sweep mainstream flow toward the airfoil surface. The vortices also prevent any coolant from reaching the upper surfaces of the land. At the trailing edge, a second separation region exists in the blunt trailing edge wake. The flow forms suction side streaks behind the land tips, as well as streaks behind the slot centers on the pressure side. The peak coolant concentrations in the streaks remain above 25% through the end of the measurement domain, over 30 slot heights downstream. (orig.) 8. Gyrokinetic full f analysis of electric field dynamics and poloidal velocity in the FT2-tokamak configuration International Nuclear Information System (INIS) Leerink, S.; Heikkinen, J. A.; Janhunen, S. J.; Kiviniemi, T. P.; Nora, M.; Ogando, F. 2008-01-01 The ELMFIRE gyrokinetic simulation code has been used to perform full f simulations of the FT-2 tokamak. The dynamics of the radial electric field and the creation of poloidal velocity in the presence of turbulence are presented. 9. Strong-field spatiotemporal ultrafast coherent control in three-level atoms International Nuclear Information System (INIS) Bruner, Barry D.; Suchowski, Haim; Silberberg, Yaron; Vitanov, Nikolay V. 2010-01-01 Simple analytical approaches for implementing strong field coherent control schemes are often elusive due to the complexity of the interaction between the intense excitation field and the system of interest. Here, we demonstrate control over multiphoton excitation in a three-level resonant system using simple, analytically derived ultrafast pulse shapes. We utilize a two-dimensional spatiotemporal control technique, in which temporal focusing produces a spatially dependent quadratic spectral phase, while a second, arbitrary phase parameter is scanned using a pulse shaper. In the current work, we demonstrate weak-to-strong field excitation of ⁸⁵Rb, with a π phase step and the quadratic phase as the chosen control parameters. The intricate dependence of the multilevel dynamics on these parameters is exhibited by mapping the data onto a two-dimensional control landscape. Further insight is gained by simulating the complete landscape using a dressed-state, time-domain model, in which the influence of individual shaping parameters can be extracted using both exact and asymptotic time-domain representations of the dressed-state energies.
10. Dynamics of quantum correlation and coherence for two atoms coupled with a bath of fluctuating massless scalar field Energy Technology Data Exchange (ETDEWEB) Huang, Zhiming, E-mail: [email protected] [School of Economics and Management, Wuyi University, Jiangmen 529020 (China)]; Situ, Haozhen, E-mail: [email protected] [College of Mathematics and Informatics, South China Agricultural University, Guangzhou 510642 (China)] 2017-02-15 In this article, the dynamics of quantum correlation and coherence for two atoms interacting with a bath of fluctuating massless scalar field in the Minkowski vacuum is investigated. We first derive the master equation that describes the system evolution with an initial Bell-diagonal state. We then discuss the system evolution for three cases of different initial states: a non-zero-correlation separable state, a maximally entangled state, and a zero-correlation state. For the non-zero-correlation initial separable state, quantum correlation and coherence can be protected from vacuum fluctuations during long-time evolution when the separation between the two atoms is relatively small. For the maximally entangled initial state, quantum correlation and coherence decrease overall with evolution time. However, for the zero-correlation initial state, quantum correlation and coherence are first generated and then drop with evolution time; when the separation is sufficiently small, they can survive the vacuum fluctuations. For all three cases, quantum correlation and coherence first decline and then fluctuate to relatively stable values with increasing distance between the two atoms. Notably, for the zero-correlation initial state, quantum correlation and coherence revive periodically at fixed zero points, and the revival amplitude declines gradually with increasing separation of the two atoms. 11. Dynamics of quantum correlation and coherence for two atoms coupled with a bath of fluctuating massless scalar field International Nuclear Information System (INIS) Huang, Zhiming; Situ, Haozhen 2017-01-01
12. Sound field separation with a double layer velocity transducer array (L) DEFF Research Database (Denmark) Fernandez Grande, Efren; Jacobsen, Finn 2011-01-01 of the array. The technique has been examined and compared with direct velocity-based reconstruction, as well as with a technique based on the measurement of the sound pressure and particle velocity. The double layer velocity method circumvents some of the drawbacks of the pressure-velocity based... 13. Coherent quantum states of a relativistic particle in an electromagnetic plane wave and a parallel magnetic field International Nuclear Information System (INIS) Colavita, E.; Hacyan, S. 2014-01-01 We analyze the solutions of the Klein–Gordon and Dirac equations describing a charged particle in an electromagnetic plane wave combined with a magnetic field parallel to the direction of propagation of the wave. It is shown that the Klein–Gordon equation admits coherent states as solutions, while the corresponding solutions of the Dirac equation are superpositions of coherent and displaced-number states. Particular attention is paid to the resonant case in which the motion of the particle is unbounded. -- Highlights: •We study a relativistic electron in a particular electromagnetic field configuration. •New exact solutions of the Klein–Gordon and Dirac equations are obtained. •Coherent and displaced number states can describe a relativistic particle 14. Quantum correlations between each two-level system in a pair of atoms and general coherent fields Directory of Open Access Journals (Sweden) S. Abdel-Khalek Full Text Available The quantitative description of the quantum correlations between each two-level system in a two-atom system and coherent fields initially defined in a coherent state in the framework of power-law potentials (PLPCSs) is considered. Specifically, we consider two atoms locally interacting with PLPCSs and, taking into account the different terms of interaction, study the entanglement and quantum discord, including time-dependent coupling and photon transition effects. Using the monogamic relation between the entanglement of formation and quantum discord in tripartite systems, we show that the control and preservation of the different kinds of quantum correlations greatly benefit from the combination of the choice of the physical quantities. Finally, we explore the link between the dynamical behavior of quantum correlations and the nonclassicality of the fields, with and without the atomic motion effect. Keywords: Quantum correlations, Monogamic relation, Coherent states, Power-law potentials, Wehrl entropy 15. Velocity field in the wake of a hydropower farm equipped with Achard turbines International Nuclear Information System (INIS) Georgescu, A-M; Cosoiu, C I; Alboiu, N; Hamzu, Al; Georgescu, S C 2010-01-01 The study consists of experimental and numerical investigations of the water flow in the wake of a hydropower farm equipped with three Achard turbines. The Achard turbine is a French concept of vertical-axis cross-flow marine current turbine, with three vertical delta-blades, which operates irrespective of the water flow direction. A farm model built at 1:5 scale has been tested in a water channel. The Achard turbines run in a stabilized current, so the flow can be assumed to be almost unchanged in horizontal planes along the vertical z-axis, thus allowing 2D numerical modelling for different farm configurations: the computational domain is a cross-section of all turbines at a certain z-level.
The two-dimensional numerical model of that farm was used to compute the velocity field in the wake of the farm with COMSOL Multiphysics and FLUENT software, and to numerically evaluate the overall farm efficiency. The validation of the numerical models against experimental results is performed via the measurement of the velocity distribution, by Acoustic Doppler Velocimetry, in the wake of the middle turbine within the farm. Three basic configurations were studied experimentally and numerically, namely: with all turbines aligned in a row across the upstream flow direction; with turbines in an isosceles triangular arrangement pointing downstream; and with turbines in an isosceles triangular arrangement pointing upstream. As long as the numerical flow in the wake fits the experiments, the numerical results for the power coefficient (turbine efficiency) are trustworthy. The farm configuration with all turbines aligned in a single row leads to lower values of the experimental velocities than the numerical ones, while the farm configurations with the turbines in an isosceles triangular arrangement, pointing downstream or upstream, show a better match between numerical and experimental data.
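For entry 15, the efficiency quantity mentioned there — the power coefficient of a cross-flow water turbine — is conventionally C_p = P/(½ρAU³). A small sketch with invented model-scale numbers (the frontal area and shaft powers below are placeholders, not the paper's data):

```python
import numpy as np

def power_coefficient(power_w, rho, area_m2, v_upstream):
    """Cp = P / (0.5 * rho * A * U^3): the usual efficiency measure for a
    cross-flow water turbine. All numbers below are invented for illustration."""
    return power_w / (0.5 * rho * area_m2 * v_upstream**3)

rho = 1000.0          # water density [kg/m^3]
area = 0.5 * 0.5      # assumed frontal area of a 1:5 scale model [m^2]
for u in (0.8, 1.0, 1.2):
    p = 40.0 * u**3   # hypothetical measured shaft power [W]
    print(u, power_coefficient(p, rho, area, u))
```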
16. Precise Control of Molecular Dynamics with a Femtosecond Frequency Comb - A Weak Field Route to Strong Field Coherent Control OpenAIRE Pe'er, Avi; Shapiro, Evgeny A.; Stowe, Matthew C.; Shapiro, Moshe; Ye, Jun 2006-01-01 We present a general and highly efficient scheme for performing narrow-band Raman transitions between molecular vibrational levels using a coherent train of weak pump-dump pairs of shaped ultrashort pulses. The use of weak pulses permits an analytic description within the framework of coherent control in the perturbative regime, while coherent accumulation of many pulse pairs enables near unity transfer efficiency with a high spectral selectivity, thus forming a powerful combination of pump-d... 17. Investigation of the velocity field in a full-scale model of a cerebral aneurysm International Nuclear Information System (INIS) Roloff, Christoph; Bordás, Róbert; Nickl, Rosa; Mátrai, Zsolt; Szaszák, Norbert; Szilárd, Szabó; Thévenin, Dominique 2013-01-01 Highlights: • We investigate flow fields inside a phantom model of a full-scale cerebral aneurysm. • An artificial blood fluid is used matching viscosity and density of real blood. • We present Particle Tracking results of fluorescent tracer particles. • Instantaneous model inlet velocity profiles and volume flow rates are derived. • Trajectory fields at three of six measurement planes are presented. -- Abstract: Due to improved and now widely used imaging methods in clinical surgery practise, detection of unruptured cerebral aneurysms becomes more and more frequent. For the selection and development of a low-risk and highly effective treatment option, the understanding of the involved hemodynamic mechanisms is of great importance. Computational Fluid Dynamics (CFD), in vivo angiographic imaging and in situ experimental investigations of flow behaviour are powerful tools which could deliver the needed information. Hence, the aim of this contribution is to experimentally characterise the flow in a full-scale phantom model of a realistic cerebral aneurysm. The acquired experimental data will then be used for a quantitative validation of companion numerical simulations. The experimental methodology relies on the large-field velocimetry technique PTV (Particle Tracking Velocimetry), processing high-speed images of fluorescent tracer particles added to the flow of a blood-mimicking fluid. First, time-resolved planar PTV images were recorded at 4500 fps and processed by a complex, in-house algorithm. The resulting trajectories are used to identify Lagrangian flow structures, vortices and recirculation zones in two-dimensional measurement slices within the aneurysm sac. The instantaneous inlet velocity distribution, needed as a boundary condition for the numerical simulations, was measured with the same technique but using a higher frame rate of 20,000 fps in order to avoid ambiguous particle assignment. From this velocity distribution, the time … 18. Wall Shear Stress, Wall Pressure and Near Wall Velocity Field Relationships in a Whirling Annular Seal Science.gov (United States) Morrison, Gerald L.; Winslow, Robert B.; Thames, H. Davis, III 1996-01-01 The mean and phase-averaged pressure and wall shear stress distributions were measured on the stator wall of a 50% eccentric annular seal which was whirling in a circular orbit at the same speed as the shaft rotation. The shear stresses were measured using flush-mounted hot-film probes. Four different operating conditions were considered, consisting of Reynolds numbers of 12,000 and 24,000 and Taylor numbers of 3,300 and 6,600. At each of the operating conditions the axial distribution (from Z/L = -0.2 to 1.2) of the mean pressure, shear stress magnitude, and shear stress direction on the stator wall were measured. Also measured were the phase-averaged pressure and shear stress. These data were combined to calculate the force distributions along the seal length. Integration of the force distributions results in the net forces and moments generated by the pressure and shear stresses. The flow field inside the seal operating at a Reynolds number of 24,000 and a Taylor number of 6,600 was measured using a 3-D laser Doppler anemometer system. Phase-averaged wall pressure and wall shear stress are presented along with phase-averaged mean velocity and turbulence kinetic energy distributions located 0.16c from the stator wall, where c is the seal clearance. The relationships between the velocity, turbulence, wall pressure and wall shear stress are very complex and do not follow simple bulk flow predictions. 19. Remote Numerical Simulations of the Interaction of High Velocity Clouds with Random Magnetic Fields Science.gov (United States) Santillan, Alfredo; Hernandez-Cervantes, Liliana; Gonzalez-Ponce, Alejandro; Kim, Jongsoo The numerical simulations associated with the interaction of High Velocity Clouds (HVC) with the Magnetized Galactic Interstellar Medium (ISM) are a powerful tool to describe the evolution of the interaction of these objects in our Galaxy. In this work we present a new project referred to as Theoretical Virtual i Observatories. It is oriented toward performing numerical simulations in real time through a Web page. This is a powerful astrophysical computational tool that consists of an intuitive graphical user interface (GUI) and a database produced by numerical calculations. On this Website the user can make use of existing numerical simulations from the database or run a new simulation by introducing initial conditions such as temperatures, densities, velocities, and magnetic field intensities for both the ISM and HVC.
The prototype is programmed using Linux, Apache, MySQL, and PHP (LAMP), based on the open-source philosophy. All simulations were performed with the MHD code ZEUS-3D, which solves the ideal MHD equations by finite differences on a fixed Eulerian mesh. Finally, we present typical results that can be obtained with this tool. 20. Frequency, delay and velocity analysis for intrinsic channel region of carbon nanotube field effect transistors Directory of Open Access Journals (Sweden) P. Geetha 2014-03-01 Full Text Available The gate-wrap-around field-effect transistor is preferred for its good channel control. To study the high-frequency behaviour of the device, parameters like cut-off frequency, transit (delay) time, and velocity are calculated and plotted. Double-walled tubes and arrays of channels are considered in this work for enhanced output and for impedance matching of the device with the measuring-equipment terminal, respectively. The performance of the double-walled carbon nanotube is compared with the single-walled carbon nanotube, and the device with a double wall shows appreciable improvement in its characteristics. Analysis of these parameters is done for various values of source/drain length, gate length, tube diameter and channel density. The maximum cut-off frequency is found to be 72.3 THz, with a corresponding velocity of 5×10⁶ m/s, for a channel density of 3 and a gate length of 11 nm. The number of channels is varied from 3 to 21, and the performance of the device containing double-walled carbon nanotubes is found to be better for channel numbers less than or equal to 12. The proposed modelling can be used for designing devices to handle high-speed applications of future generations. 1. Detailed experimental investigations on flow behaviors and velocity field properties of a supersonic mixing layer Science.gov (United States) Tan, Jianguo; Zhang, Dongdong; Li, Hao; Hou, Juwei 2018-03-01 The flow behaviors and mixing characteristics of a supersonic mixing layer with a convective Mach number of 0.2 have been experimentally investigated utilizing nanoparticle-based planar laser scattering and particle image velocimetry techniques. The full development and evolution process, including the formation of Kelvin-Helmholtz vortices, breakdown of large-scale structures and establishment of self-similar turbulence, is exhibited clearly in the experiments, providing a qualitative graphical reference for DNS and LES results. Shocklets are captured for the first time at this low convective Mach number, and their generation mechanisms are elaborated and analyzed. The convective velocity derived from two images with space-time correlations is well consistent with the theoretical result. The pairing and merging process of large-scale vortices in the transition region is clearly revealed in the velocity vector field. The analysis of turbulent statistics indicates that in weakly compressible mixing layers, with increasing convective Mach number, the peak values of streamwise turbulence intensity and Reynolds shear stress experience a sharp decrease, while the anisotropy ratio seems to remain quasi-unchanged. The normalized growth rate of the present experiments shows good agreement with former experimental and DNS data. The validation of the present experimental results is important in that the present work can serve as a reference for assessing the accuracy of future numerical data. 2. Radial Peripapillary Capillary Network Visualized Using Wide-Field Montage Optical Coherence Tomography Angiography.
Science.gov (United States) Mase, Tomoko; Ishibazawa, Akihiro; Nagaoka, Taiji; Yokota, Harumasa; Yoshida, Akitoshi 2016-07-01 We quantitatively analyzed the features of the radial peripapillary capillary (RPC) network visualized using wide-field montage optical coherence tomography (OCT) angiography in healthy human eyes. Twenty eyes of 20 healthy subjects were recruited. En face 3 × 3-mm OCT angiograms at multiple locations in the posterior pole were acquired using the RTVue XR Avanti, and wide-field montage images of the RPC were created. To evaluate the RPC density, the montage images were binarized and skeletonized. The correlation between the RPC density and the retinal nerve fiber layer (RNFL) thickness measured by an OCT circle scan was investigated. The RPC at the temporal retina was detected as far as 7.6 ± 0.7 mm from the edge of the optic disc, but not around the perifoveal area within 0.9 ± 0.1 mm of the fovea. Capillary-free zones beside the first branches of the arterioles were significant (P < …); the RPC densities at three distances from the optic disc edge were 13.6 ± 0.8, 11.9 ± 0.9, and 10.4 ± 0.9 mm⁻¹. The RPC density was also significantly correlated (r = 0.64, P < …) with the RNFL thickness. The RPC is present in the superficial peripapillary retina in proportion to the RNFL thickness, supporting the idea that the RPC may be the vascular network primarily responsible for RNFL nourishment. 3. Extraction of 3D velocity and porosity fields from GeoPET data sets Energy Technology Data Exchange (ETDEWEB) Lippmann-Pipke, Johanna; Kulenkampff, Johannes [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Reactive Transport]; Eichelbaum, S. [Nemtics Visualization, Leipzig (Germany)] 2017-06-01 Geoscientific process monitoring with positron emission tomography (GeoPET) is proven to be applicable for quantitative tomographic transport process monitoring in natural geological materials. We benchmarked GeoPET by inversely fitting a numerical finite element model to a diffusive transport experiment in Opalinus clay. The obtained effective diffusion coefficients, D_e,∥ and D_e,⊥, are well in line with data from the literature. But more complex, heterogeneous migration and flow patterns cannot be similarly evaluated by inverse fitting using optimization tools. Alternatively, we started developing an algorithm that allows the quantitative extraction of velocity and porosity fields, v_i(x,y,z), i = x,y,z, and n(x,y,z), from GeoPET time series, c_PET(x,y,z,t). They may serve as constituent data sets for reactive transport modelling.
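The binarize-and-skeletonize density measurement of entry 2 above maps naturally onto scikit-image primitives. The sketch below (Otsu threshold as an assumed binarization choice, skeleton pixel count as a crude length estimate) reproduces the mm⁻¹ units quoted in that abstract on a toy image; it is not the paper's exact pipeline.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def capillary_density(angiogram, mm_per_pixel):
    """Vessel density as skeleton length per unit area (mm^-1), mirroring the
    binarize-and-skeletonize step described above."""
    binary = angiogram > threshold_otsu(angiogram)
    skeleton = skeletonize(binary)
    length_mm = skeleton.sum() * mm_per_pixel          # crude length estimate
    area_mm2 = angiogram.size * mm_per_pixel**2
    return length_mm / area_mm2

# toy image: a few bright "vessels" on a dark background
img = np.zeros((304, 304))
img[::20, :] = 1.0
print(capillary_density(img, mm_per_pixel=3.0 / 304))  # a 3 x 3-mm scan
```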
4. Velocity Vector Field Visualization of Flow in Liquid Acquisition Device Channel Science.gov (United States) McQuillen, John B.; Chao, David F.; Hall, Nancy R.; Zhang, Nengli 2012-01-01 A capillary flow liquid acquisition device (LAD) for cryogenic propellants has been developed and tested at NASA Glenn Research Center to meet the requirements of transferring cryogenic liquid propellants from storage tanks to an engine in reduced-gravity environments. The prototypical mesh screen channel LAD was fabricated with a mesh screen covering a rectangular flow channel with a cylindrical outlet tube, and was tested with liquid oxygen (LOX). In order to better understand the performance of the screen channel LAD in various gravity environments and orientations, and at different liquid submersion depths, a series of computational fluid dynamics (CFD) simulations of LOX flow through the LAD screen channel was undertaken. The resulting velocity vector field visualization for the flow in the channel has been used to reveal the gravity effects on the flow in the screen channel. 5. Tensor-based morphometry with mappings parameterized by stationary velocity fields in Alzheimer's disease neuroimaging initiative. Science.gov (United States) Bossa, Matías Nicolás; Zacur, Ernesto; Olmos, Salvador 2009-01-01 Tensor-based morphometry (TBM) is an analysis technique where anatomical information is characterized by means of the spatial transformations between a customized template and observed images. Therefore, accurate inter-subject non-rigid registration is an essential prerequisite. Further statistical analysis of the spatial transformations is used to highlight useful information, such as local statistical differences among populations. With the advent of recent and powerful non-rigid registration algorithms based on the large-deformation paradigm, TBM is being increasingly used. In this work we evaluate the statistical power of TBM using stationary velocity field diffeomorphic registration in a large population of subjects from the Alzheimer's Disease Neuroimaging Initiative project. The proposed methodology provided atrophy maps with very detailed anatomical resolution and with high significance compared with results published recently on the same data set. 6. Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel DEFF Research Database (Denmark) Pai, Akshay Sadananda Uppinakudru; Sommer, Stefan Horst; Sørensen, Lauge by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm which is a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximative because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant - the Wendland kernel - which has the same computational simplicity as B-splines. An application on the Alzheimer's Disease Neuroimaging Initiative showed that Wendland SVF-based measures separate (Alzheimer's disease vs normal controls) better than both B-Spline SVFs (p < …).
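For entry 6, one common compactly supported member of the Wendland family is the C2 kernel φ(r) = (1−r)⁴(4r+1) for r ∈ [0,1] and zero outside; whether this is the precise kernel used in the paper is an assumption on my part. A minimal evaluation:

```python
import numpy as np

def wendland_c2(r):
    """Compactly supported Wendland C2 kernel: phi(r) = (1-r)^4 (4r+1) on
    [0, 1], zero outside. One standard member of the Wendland family; the
    paper's exact choice is not specified in the abstract."""
    r = np.abs(r)
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

# shape and compact support: the kernel decays smoothly to zero at r = 1
r = np.linspace(0.0, 1.2, 7)
print(np.round(wendland_c2(r), 4))
```

The compact support is what gives such kernels B-spline-like computational cost, while, unlike splines, they come with a reproducing-kernel norm that makes the regularization term exactly minimizable.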
7. Two-dimensional calculation by finite element method of velocity field and temperature field development in fast reactor fuel assembly. II International Nuclear Information System (INIS) Schmid, J. 1985-11-01 A package of updated computer codes for velocity and temperature field calculations for a fast reactor fuel subassembly (or its part) by the finite element method is described. Isoparametric triangular elements of the second degree are used. (author) 8. PIC Simulations of Velocity-space Instabilities in a Decreasing Magnetic Field: Viscosity and Thermal Conduction Science.gov (United States) Riquelme, Mario; Quataert, Eliot; Verscharen, Daniel 2018-02-01 We use particle-in-cell (PIC) simulations of a collisionless, electron-ion plasma with a decreasing background magnetic field, B, to study the effect of velocity-space instabilities on the viscous heating and thermal conduction of the plasma. If |B| decreases, the adiabatic invariance of the magnetic moment gives rise to pressure anisotropies with p∥,j > p⊥,j (p∥,j and p⊥,j represent the pressure of species j, electron or ion, parallel and perpendicular to B). Linear theory indicates that, for sufficiently large anisotropies, different velocity-space instabilities can be triggered. These instabilities in principle have the ability to pitch-angle scatter the particles, limiting the growth of the anisotropies. Our simulations focus on the nonlinear, saturated regime of the instabilities. This is done through the permanent decrease of |B| by an imposed plasma shear. We show that, in the regime 2 ≲ β_j ≲ 20 (β_j ≡ 8π p_j/|B|²), the saturated ion and electron pressure anisotropies are controlled by the combined effect of the oblique ion firehose and the fast magnetosonic/whistler instabilities. These instabilities grow preferentially on the scale of the ion Larmor radius, and make Δp_e/p∥,e ≈ Δp_i/p∥,i (where Δp_j = p⊥,j − p∥,j). We also quantify the thermal conduction of the plasma by directly calculating the mean free path of electrons, λ_e, along the mean magnetic field, finding that λ_e depends strongly on whether |B| decreases or increases. Our results can be applied in studies of low-collisionality plasmas such as the solar wind, the intracluster medium, and some accretion disks around black holes. 9. On the extraction of pressure fields from PIV velocity measurements in turbines Science.gov (United States) Villegas, Arturo; Diez, Fancisco J. 2012-11-01 In this study, the pressure field for a water turbine is derived from particle image velocimetry (PIV) measurements. Measurements are performed in a recirculating water channel facility. The PIV measurements include calculating the tangential and axial forces applied to the turbine by solving the integral momentum equation around the airfoil. The results are compared with the forces obtained from blade element momentum theory (BEMT). Forces are calculated using three different methods. In the first method, the pressure fields are obtained from PIV velocity fields by solving the Poisson equation, with boundary conditions obtained from the Navier-Stokes momentum equations. In the second method, the pressure at the boundaries is determined by spatial integration of the pressure gradients along the boundaries. In the third method, applicable only to incompressible, inviscid, irrotational, and steady flow, the pressure is calculated using the Bernoulli equation. This approximate pressure is known to be accurate far from the airfoil and outside of the wake for steady flows. Additionally, the pressure is used to solve for the force on the blade from the integral momentum equation. Of the three methods proposed to solve for pressure and forces from PIV measurements, the first one, based on the Poisson equation, provides the best match to the BEMT calculations.
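Entry 9's first method — recovering pressure from PIV by a Poisson equation — can be sketched for steady, incompressible 2-D data: ∇²p = −ρ(u_x² + 2 v_x u_y + v_y²), solved here by plain Jacobi-style iteration with homogeneous Neumann boundaries. The grid, density, iteration count and boundary treatment are assumptions; this is not the paper's solver.

```python
import numpy as np

def pressure_from_piv(u, v, dx, rho=1000.0, iters=5000):
    """Solve the 2-D pressure Poisson equation lap(p) = -rho*div(u.grad u)
    on a PIV velocity snapshot (steady, incompressible flow), iteratively,
    with homogeneous Neumann boundaries. A minimal illustrative sketch."""
    dudy, dudx = np.gradient(u, dx)   # np.gradient: axis 0 (y) first
    dvdy, dvdx = np.gradient(v, dx)
    # div(u.grad u) reduces to this form for incompressible 2-D flow
    rhs = -rho * (dudx**2 + 2.0 * dvdx * dudy + dvdy**2)
    p = np.zeros_like(u)
    for _ in range(iters):
        p[1:-1, 1:-1] = 0.25 * (p[1:-1, 2:] + p[1:-1, :-2] +
                                p[2:, 1:-1] + p[:-2, 1:-1]
                                - dx**2 * rhs[1:-1, 1:-1])
        # zero-gradient (Neumann) boundary conditions
        p[0, :], p[-1, :], p[:, 0], p[:, -1] = p[1, :], p[-2, :], p[:, 1], p[:, -2]
    return p - p.mean()  # pressure is defined only up to a constant
```

With Neumann boundaries everywhere the level of p is arbitrary, hence the mean subtraction; the paper's second method instead fixes boundary values by integrating pressure gradients along the edges.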
10. Search for auroral belt E∥ fields with high-velocity barium ion injections International Nuclear Information System (INIS) Heppner, J.P.; Ledley, B.G.; Miller, M.L.; Marionni, P.A.; Pongratz, M.B.; Slater, D.W.; Hallinan, T.J.; Rees, D. 1989-01-01 Four high-velocity shaped-charge Ba⁺ injections were conducted from two Black Brant-10 rockets at collision-free altitudes (770-975 km) over northern Alaska (L = 7.4-10.6) in April 1984, under active auroral and magnetic disturbance (Kp 4+ and 5) conditions. The motions of the Ba⁺ pencil beams from these injections were accurately triangulated to altitudes ranging from 9,000 to 14,000 km from multistation image observations. Well-defined initial conditions and improved software for predicting the unperturbed (E = 0) trajectories in the presence of convection (E⊥) fields permitted accurate detection of changes in the motion that could be attributed to E∥ fields. Large (> 1 keV) potential changes that might be anticipated from double-layer or V-, U- and S-shaped potential structures were not encountered, even though the Ba⁺ rays were clearly located on auroral arc flux tubes on at least several occasions and were at various times in close proximity to auroral flux tubes for many minutes. Abnormally intense E⊥ fields that might also indicate such potential structures were likewise not observed. Transient accelerations and/or decelerations involving magnetic field-aligned energy changes ≤ 375 eV were, however, encountered by each of the seven principal Ba⁺ rays tracked to high altitudes. Acceleration events were only slightly more frequent than deceleration events. Interpretation, taking into account limits on the duration of the events and simultaneous auroral conditions, favors explanation in terms of propagating waves, soliton trains, or other pulse forms, provided that the propagation is primarily field-aligned. 11. Enhancement of the resolution of full-field optical coherence tomography by using a colour image sensor Energy Technology Data Exchange (ETDEWEB) Kalyanov, A L; Lychagov, V V; Smirnov, I V; Ryabukho, V P [N.G. Chernyshevsky Saratov State University, Saratov (Russian Federation)] 2013-08-31 The influence of white balance in a colour image detector on the resolution of a full-field optical coherence tomograph (FFOCT) is studied. The change in the interference pulse width depending on the white-balance tuning is estimated for a thermal radiation source (incandescent lamp) and a white light-emitting diode. It is shown that by tuning the white balance of the detector in a certain range, the FFOCT resolution can be increased by 20% as compared to the resolution attained with a monochrome detector. (optical coherence tomography) 12. Enhancement of the resolution of full-field optical coherence tomography by using a colour image sensor International Nuclear Information System (INIS) Kalyanov, A L; Lychagov, V V; Smirnov, I V; Ryabukho, V P 2013-01-01
13. Assessment of visual function by optical coherence tomography and visual field for craniopharyngioma patients Directory of Open Access Journals (Sweden) Yang Tang 2015-09-01 Full Text Available AIM: To analyze the differences and correlations between the ganglion cell complex (GCC), the peripapillary retinal nerve fiber layer (pRNFL) and the mean defect (MD) and mean sensitivity (MS) of the visual field (VF) in craniopharyngioma patients, and to evaluate the feasibility of optical coherence tomography (OCT) in diagnosing visual pathway damage in craniopharyngioma patients. METHODS: Ninety-five craniopharyngioma patients treated in Beijing Tiantan Hospital from September 2014 to April 2015 received the VF test by Octopus 900 automated perimeter with the central 30-degree program, and mean thickness measurements of the GCC and pRNFL by RTVue OCT. The Spearman rank correlation coefficient (r_s) was used to assess the correlation between GCC, pRNFL and MD, MS. The changes of VF and optic disc were analyzed. RESULTS: Abnormal pRNFL findings occurred in 53.1% (93/175), which included optic disk edema 3.4% (6/175), atrophic changes of the optic nerve 47.4% (83/175) and glaucoma-like optic neuropathy 7.4% (13/175). Various visual field defects were found in 71.4% (125/175). The average thickness of binocular pRNFL (r_s OD = -0.411, r_s OS = -0.354) and GCC (r_s OD = -0.400, r_s OS = -0.314) correlated with MD (P < …); pRNFL (r_s OD = 0.412, r_s OS = 0.342) and GCC (r_s OD = 0.414, r_s OS = 0.299) correlated with MS (P < …). CONCLUSION: The average thickness of pRNFL and GCC correlates with VF damage and can quantitatively evaluate the optic nerve damage of craniopharyngioma patients. The thinner the pRNFL and GCC are, the more serious the damage to visual function is. In clinical work, visual field testing combined with OCT is helpful to find and assess damage to the visual pathway and the prognosis. 14. Field dependence of the electron drift velocity along the hexagonal axis of 4H-SiC Energy Technology Data Exchange (ETDEWEB) Ivanov, P. A., E-mail: [email protected]; Potapov, A. S.; Samsonova, T. P.; Grekhov, I. V. [Russian Academy of Sciences, Ioffe Physical–Technical Institute (Russian Federation)] 2016-07-15 The forward current–voltage characteristics of mesa-epitaxial 4H-SiC Schottky diodes are measured in high electric fields (up to 4 × 10⁵ V/cm) in the n-type base region. A semi-empirical formula for the field dependence of the electron drift velocity in 4H-SiC along the hexagonal axis of the crystal is derived. It is shown that the saturated drift velocity is (1.55 ± 0.05) × 10⁷ cm/s in electric fields higher than 2 × 10⁵ V/cm.
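The abstract of entry 14 does not reproduce its semi-empirical formula, but saturation laws of the Caughey–Thomas type, v(E) = μ₀E/(1+(μ₀E/v_sat)^β)^(1/β), are a standard way to capture such field dependence. The sketch below plugs in the quoted v_sat = 1.55 × 10⁷ cm/s with placeholder μ₀ and β; it is a stand-in form, not the paper's derived formula.

```python
import numpy as np

def drift_velocity(E, mu0=900.0, v_sat=1.55e7, beta=1.2):
    """Caughey-Thomas-type saturation law:
    v(E) = mu0*E / (1 + (mu0*E/v_sat)**beta)**(1/beta).
    v_sat is the value quoted in the abstract [cm/s]; mu0 [cm^2/(V s)] and
    beta are placeholder values, not the paper's fitted parameters."""
    E = np.asarray(E, dtype=float)                       # [V/cm]
    x = mu0 * E / v_sat
    return mu0 * E / (1.0 + x**beta) ** (1.0 / beta)     # [cm/s]

for E in (1e4, 1e5, 2e5, 4e5):
    print(f"E = {E:.0e} V/cm  ->  v = {drift_velocity(E):.2e} cm/s")
```

At low fields the law reduces to v ≈ μ₀E; above about 2 × 10⁵ V/cm the printed values flatten toward v_sat, matching the saturation reported in the abstract.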
15. Transient-field strength measurements for ⁵²Cr traversing Fe hosts at high velocity and polarization transfer mechanisms International Nuclear Information System (INIS) Stuchbery, A.E.; Doran, C.E.; Byrne, A.P.; Bolotin, H.H.; Dracoulis, G.D. 1986-12-01 Transient-field strengths were measured for ⁵²Cr ions traversing polarized Fe hosts at velocities up to 12v₀ (v₀ = c/137 = Bohr velocity). The results are compared with predictions of various transient-field parametrizations and discussed in terms of possible mechanisms by which polarization might be transferred from the Fe host to inner vacancies of the moving Cr ions. The g-factor of the first 2⁺ state of ⁵²Cr was also measured by the transient-field technique and found to be in accord with shell-model calculations. 16. Remote Sensing Data in Wind Velocity Field Modelling: a Case Study from the Sudetes (SW Poland) Science.gov (United States) Jancewicz, Kacper 2014-06-01 The phenomenon of wind-field deformation above complex (mountainous) terrain is a popular subject of research related to numerical modelling using GIS techniques. This type of modelling requires, as input data, information on terrain roughness and a digital terrain/elevation model. Such information may be provided by remote sensing data; consequently, its accuracy and spatial resolution may affect the results of modelling. This paper represents an attempt to conduct wind-field modelling in the area of the Śnieżnik Massif (Eastern Sudetes). The modelling process was conducted in WindStation 2.0.10 software (using the computational fluid dynamics solver Canyon). Two different elevation models were used: the Global Land Survey Digital Elevation Model (GLS DEM) and Digital Terrain Elevation Data (DTED) Level 2. The terrain roughness raster was generated on the basis of Corine Land Cover 2006 (CLC 2006) data. The output data were post-processed in ArcInfo 9.3.1 software to achieve a high-quality cartographic presentation. Experimental modelling was conducted for situations on 26 November 2011, 25 May 2012, and 26 May 2012, based on a limited number of field measurements and using parameters of the atmospheric boundary layer derived from the aerological surveys provided by the closest meteorological stations. The model was run at 100-m and 250-m spatial resolutions. In order to verify the model's performance, leave-one-out cross-validation was used. The calculated indices allowed for a comparison with results of former studies pertaining to WindStation's performance. The experiment demonstrated very subtle differences between results using DTED or GLS DEM elevation data. Additionally, CLC 2006 roughness data provided more noticeable improvements in the model's performance, but only at the resolution corresponding to the original roughness data. The best input data configuration resulted in the following mean values of error measure: root mean squared error of velocity …
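The leave-one-out cross-validation used to verify the model in entry 16 is easy to state in code: withhold one station, predict it from the rest, and accumulate the errors. The sketch below uses a toy inverse-distance interpolator as a stand-in for a WindStation-style run; the stations, coordinates and wind speeds are all invented.

```python
import numpy as np

def loocv_rmse(stations, predict):
    """Leave-one-out cross-validation of a wind-field model: withhold each
    station in turn, drive the model with the rest, compare at the withheld
    site. `predict(train, target_xy)` is a placeholder for a full model run."""
    errors = []
    for i, (xy, v_obs) in enumerate(stations):
        train = stations[:i] + stations[i + 1:]
        errors.append(predict(train, xy) - v_obs)
    return float(np.sqrt(np.mean(np.square(errors))))

# toy stand-in model: inverse-distance-weighted wind-speed interpolation
def idw(train, xy, eps=1e-9):
    d = np.array([np.hypot(x - xy[0], y - xy[1]) + eps for (x, y), _ in train])
    v = np.array([v for _, v in train])
    w = 1.0 / d**2
    return float(np.sum(w * v) / np.sum(w))

obs = [((0, 0), 5.1), ((1, 0), 4.8), ((0, 1), 5.6), ((2, 2), 6.0)]
print(loocv_rmse(obs, idw))  # RMSE of velocity, as in the study's verification
```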
17. Spatial-temporal analysis of coherent offshore wind field structures measured by scanning Doppler-lidar Science.gov (United States) Valldecabres, L.; Friedrichs, W.; von Bremen, L.; Kühn, M. 2016-09-01 An analysis of the spatial and temporal power fluctuations of a simplified wind farm model is conducted on four offshore wind field data sets, two from lidar measurements and two from LES, under unstable and neutral atmospheric conditions. The integral length scales of the horizontal wind speed computed in the streamwise and cross-stream directions reveal the elongation of the structures in the direction of the mean flow. To analyse the effect of the structures on the power output of a wind turbine, the aggregated equivalent power of two wind turbines with different turbine spacings in the streamwise and cross-stream directions is analysed at time scales under 10 minutes. Summing the power of two wind turbines smooths out the fluctuations of the power output of a single wind turbine. This effect, which is stronger with increasing spacing between turbines, can be seen in the aggregation of the power of two wind turbines in the streamwise direction. Due to the anti-correlation of the coherent structures in the cross-stream direction, this smoothing effect is stronger when the aggregated power is computed with two wind turbines aligned orthogonally to the mean flow direction. 18. Using microencapsulated fluorescent dyes for simultaneous measurement of temperature and velocity fields International Nuclear Information System (INIS) Vogt, J; Stephan, P 2012-01-01 In this paper, a novel particle image thermometry method based on microcapsules filled with a fluorescent dye solution is described. The microcapsules consist of a liquid core of hexadecane in which the dye is dissolved and a solid polymer shell. The combination of a temperature-sensitive dye (Pyrromethene 597-8C9) and a dye showing a relatively smaller temperature sensitivity (Pyrromethene 567) in hexadecane makes application of ratiometric LIF possible. This is necessary to compensate for fluctuations of the illuminating pulsed Nd:YAG laser (532 nm) as well as the different particle sizes. The applicability of this measurement technique is demonstrated for a cubic test cell (10 × 10 × 10 mm³) with flow and temperature fields driven by natural convection, and for a capillary tube (1.16 mm inner diameter) inducing a temperature gradient and a Hagen–Poiseuille velocity profile. For the first case, light-sheet illumination is used, making two optical accesses necessary. In the second case an inverted microscope is used, so only one optical access is needed and volume illumination is applied. The technique facilitates high-resolution measurements (first case: 79 × 79 μm²; second case: 8 × 8 μm²). Although the measurement uncertainty is high compared to LIF measurements with dissolved dyes, temperature fields can be reproduced very well, and the experimental results are in good agreement with numerical computations. (paper) 19. Evolution of Mass and Velocity Field in the Cosmic Web: Comparison between Baryonic and Dark Matter Science.gov (United States) Zhu, Weishan; Feng, Long-Long 2017-03-01 We investigate the evolution of the cosmic web since z = 5 in grid-based cosmological hydrodynamical simulations, focusing on the mass and velocity fields of both baryonic and cold dark matter. The tidal tensor of the density is used as the main method for web identification, with λ_th = 0.2-1.2. The evolution trends in baryonic and dark matter are similar, although moderate differences are observed. Sheets appear early, and their large-scale pattern may have been set up by z = 3. In terms of mass, filaments supersede sheets as the primary collapsing structures from z ∼ 2-3. Tenuous filaments assembled with each other to form prominent ones at z … the dark matter field, and is even moderately stronger between ω and e₁, and between ω and e₃. Compared with dark matter, there is slightly less baryonic matter found residing in filaments and clusters, and its vorticity developed more significantly below 2-3 Mpc. These differences may be underestimated because of the limited resolution and lack of star formation in our simulation. The impact of the change of dominant structures in overdense regions at z ∼ 2-3 on galaxy formation and evolution is briefly discussed.
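The web classifier of entry 19 counts eigenvalues of the tidal tensor above the threshold λ_th: 0, 1, 2, or 3 eigenvalues above threshold mark voids, sheets, filaments and clusters, respectively. A schematic gridded version (potential field, grid spacing and threshold value are all assumed, and the random input is purely a smoke test):

```python
import numpy as np

def classify_web(phi, dx, lam_th=0.4):
    """Cosmic-web classification from the tidal tensor T_ij = d2(phi)/dxi dxj:
    count eigenvalues above lam_th per cell -> 0 void, 1 sheet, 2 filament,
    3 cluster. phi is a gridded potential; a schematic sketch only."""
    grads = np.gradient(phi, dx)                 # first derivatives, 3 axes
    T = np.empty(phi.shape + (3, 3))
    for i in range(3):
        second = np.gradient(grads[i], dx)       # second derivatives
        for j in range(3):
            T[..., i, j] = second[j]
    eigvals = np.linalg.eigvalsh(T)              # sorted eigenvalues per cell
    return (eigvals > lam_th).sum(axis=-1)       # 0..3 per cell

phi = np.random.default_rng(0).normal(size=(16, 16, 16))
counts = np.bincount(classify_web(phi, dx=1.0).ravel(), minlength=4)
print(dict(zip(["void", "sheet", "filament", "cluster"], counts)))
```

In practice phi would be the gravitational (or density-smoothed) potential from the simulation snapshot rather than random noise, and the λ_th sweep of 0.2-1.2 quoted in the abstract would be applied to the same tensor field.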
20. A New Velocity Field from a Dense GPS Array in the Southernmost Longitudinal Valley, Southeastern Taiwan Directory of Open Access Journals (Sweden) Horng-Yue Chen 2013-01-01 Full Text Available In the southernmost Longitudinal Valley (LV), Taiwan, we analyzed a dense GPS array composed of 10 continuous stations and 86 campaign-mode stations. By removing the effects of the four major earthquakes (one regional and three local) that occurred during the 1992-2010 observation period, we derived a new horizontal velocity field for this area, which allows us to better locate the surface traces of the major active faults, including the Longitudinal Valley Fault (LVF) system and the Central Range Fault, and to characterize the slip behaviors along the faults. Note that the LVF comprises two sub-parallel strands in the study area: the Luyeh Fault to the west and the Lichi Fault to the east. Based on the results of strain analyses, including dilatation and shear strain, and projected vectors of station velocities across the major faults, we reached the following geological interpretations. During the inter-seismic periods, the surface deformation of the southernmost LV is mainly accommodated by faulting on the two branches of the LVF; there is very little surface deformation on the Central Range Fault. The Luyeh River appears to act as a boundary across which the LVF behaves differently to the north and to the south. The Lichi Fault reveals a change of slip kinematics from oblique shearing/thrusting in the north to nearly pure shearing with minor extension in the south. Regarding the slip behavior of the Luyeh Fault, it exhibits creeping behavior in the north and partially near-surface-locked faulting behavior in the south. We interpret that the two strands of the LVF merge together in the northern Taitung alluvial plain and turn to an E-W trend toward the offshore area. 1. Experimental study of stratified jet by simultaneous measurements of velocity and density fields Science.gov (United States) Xu, Duo; Chen, Jun 2012-07-01 Stratified flows with small density differences commonly exist in geophysical and engineering applications, often involving interaction of turbulence and buoyancy effects. A combined particle image velocimetry (PIV) and planar laser-induced fluorescence (PLIF) system is developed to measure the velocity and density fields in a dense jet discharged horizontally into a tank filled with light fluid. The illumination of PIV particles and excitation of PLIF dye are achieved by a dual-head pulsed Nd:YAG laser and two CCD cameras with a set of optical filters. The procedure for matching the refractive indexes of the two fluids and the calibration of the combined system are presented, as well as a quantitative analysis of the measurement uncertainties. The flow structures and mixing dynamics within the central vertical plane are studied by examining the averaged parameters, turbulent kinetic energy budget, and modeling of momentum flux and buoyancy flux. Downstream, profiles of velocity and density display strong asymmetry with respect to the jet center. This is attributed to the fact that stable stratification reduces mixing and unstable stratification enhances mixing. In the stable stratification region, most of the turbulence production is consumed by mean-flow convection, whereas in the unstable stratification region, turbulence production is nearly balanced by viscous dissipation.
Experimental data also indicate that at downstream locations, the mixing-length model performs better in the mixing zone of the stably stratified region, whereas in other regions, eddy viscosity/diffusivity models with static model coefficients represent the momentum and buoyancy flux terms effectively. The measured turbulent Prandtl number displays strong spatial variation in the stratified jet. 2. Limited interlimb transfer of locomotor adaptations to a velocity-dependent force field during unipedal walking. Science.gov (United States) Houldin, Adina; Chua, Romeo; Carpenter, Mark G; Lam, Tania 2012-08-01 3. Stochasticity induced by coherent wavepackets International Nuclear Information System (INIS) Fuchs, V.; Krapchev, V.; Ram, A.; Bers, A. 1983-02-01 We consider the momentum transfer and diffusion of electrons periodically interacting with a coherent longitudinal wavepacket. Such a problem arises, for example, in lower-hybrid current drive. We establish the stochastic threshold, the stochastic region δv_stoch in velocity space, the associated momentum transfer j, and the diffusion coefficient D. We concentrate principally on the weak-field regime, τ_autocorrelation < τ_bounce 4. Coherent response of a two-level atom to a signal field with account of suppression of phase relaxation by a strong field International Nuclear Information System (INIS) Grishanin, B.A.; Shatalova, G.G. 1984-01-01 A calculation is made of the coherent part of the response to a weak test field of an atom located in a strong resonance field. The latter leads to a suppression of phase relaxation. This response is shown to appear both at the test field frequency ω and at a combination frequency 2ω_l - ω, where ω_l is the resonance field frequency. The spectrum of test-field absorption by such a system has a symmetric form and consists of two parts, one of which corresponds to test-field absorption and the other to its amplification 5. The impact of groundwater velocity fields on streamlines in an aquifer system with a discontinuous aquitard (Inner Mongolia, China) Science.gov (United States) Wu, Qiang; Zhao, Yingwang; Xu, Hua 2018-04-01 Many numerical methods that simulate groundwater flow, particularly the continuous Galerkin finite element method, do not produce velocity information directly. Many algorithms have been proposed to improve the accuracy of velocity fields computed from hydraulic potentials. The differences in the streamlines generated from velocity fields obtained using different algorithms are presented in this report. The superconvergence method employed by FEFLOW, a popular commercial code, and some dual-mesh methods proposed in recent years are selected for comparison. Applications that depict hydrogeologic conditions using streamlines are considered, and errors in streamlines are shown to lead to notable errors in boundary conditions, the locations of material interfaces, fluxes and conductivities. Furthermore, the effects of the procedures used in these two types of methods, including velocity integration and local conservation, are analyzed. The method of interpolating velocities across edges using fluxes is shown to be able to eliminate errors associated with refraction points that are not located along material interfaces and streamline ends at no-flow boundaries. Local conservation is shown to be a crucial property of velocity fields and can result in more accurate streamline densities.
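To make the connection between velocity post-processing and streamline quality concrete, here is a minimal sketch that derives a Darcy velocity field from a hydraulic-head array and traces a streamline through it with RK4 integration. This is a generic finite-difference stand-in, not FEFLOW's superconvergence scheme nor any of the dual-mesh methods compared in the paper; the grid, conductivity, and head field are invented for the demo.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Darcy's law: q = -K * grad(h), evaluated on a uniform grid
nx, ny, dx = 50, 50, 1.0
x = np.arange(nx) * dx
y = np.arange(ny) * dx
X, Y = np.meshgrid(x, y, indexing="ij")
h = 1.0 - X / x.max() + 0.1 * np.sin(2 * np.pi * Y / y.max())  # toy head field
K = 1e-4                                                       # uniform conductivity

dhdx, dhdy = np.gradient(h, dx, dx)
qx, qy = -K * dhdx, -K * dhdy

# Interpolators so the velocity is defined between nodes
fx = RegularGridInterpolator((x, y), qx)
fy = RegularGridInterpolator((x, y), qy)

def trace_streamline(p0, dt=2000.0, n_steps=400):
    """Trace a streamline with classical RK4; stops at the grid boundary."""
    path = [np.asarray(p0, dtype=float)]

    def vel(p):
        return np.array([fx(p).item(), fy(p).item()])

    for _ in range(n_steps):
        p = path[-1]
        try:
            k1 = vel(p)
            k2 = vel(p + 0.5 * dt * k1)
            k3 = vel(p + 0.5 * dt * k2)
            k4 = vel(p + dt * k3)
        except ValueError:      # left the interpolation domain
            break
        path.append(p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(path)

line = trace_streamline((1.0, 25.0))
print("streamline points:", line.shape[0])
```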
A case study involving both three-dimensional and two-dimensional cross-sectional models of a coal mine in Inner Mongolia, China, is used to support the conclusions presented. 6. Three-dimensional simulation of the motion of a single particle under a simulated turbulent velocity field Science.gov (United States) Moreno-Casas, P. A.; Bombardelli, F. A. 2015-12-01 A 3D Lagrangian particle tracking model is coupled to a 3D channel velocity field to simulate the saltation motion of a single sediment particle moving in saltation mode. The turbulent field is a high-resolution three-dimensional velocity field that reproduces a bypass transition to turbulence on a flat plate due to free-stream turbulence passing above the plate. In order to reduce computational costs, a decoupled approach is used, i.e., the turbulent flow is simulated independently from the tracking model and then used to feed the 3D Lagrangian particle model. The simulations are carried out using the point-particle approach. The particle tracking model contains three sub-models, namely, particle free-flight, post-collision velocity, and bed representation sub-models. The free-flight sub-model considers the action of the following forces: submerged weight, non-linear drag, lift, virtual mass, Magnus and Basset forces. The model also includes the effect of particle angular velocity. The post-collision velocities are obtained by applying conservation of angular and linear momentum. The complete model was validated with experimental results from the literature within the sand range. Results for particle velocity time series and distributions of particle turbulent intensities are presented. 7. Nerve Fiber Flux Analysis Using Wide-Field Swept-Source Optical Coherence Tomography. Science.gov (United States) Tan, Ou; Liu, Liang; Liu, Li; Huang, David 2018-02-01 To devise a method to quantify nerve fibers over their arcuate courses over an extended peripapillary area using optical coherence tomography (OCT). Participants were imaged with 8 × 8-mm volumetric OCT scans centered at the optic disc. A new quantity, nerve fiber flux (NFF), represents the cross-sectional area transected perpendicular to the nerve fibers. The peripapillary area was divided into 64 tracks with equal flux. An iterative algorithm traced the trajectory of the tracks, assuming that the relative distribution of the NFF was conserved, with compensation for fiber connections to ganglion cells on the macular side. An average trajectory was computed from the normal eyes and used to calculate the NFF maps for glaucomatous eyes. The NFF maps were divided into eight sectors that correspond to visual field regions. There were 24 healthy and 10 glaucomatous eyes enrolled. The algorithm converged on similar patterns of NFL tracks for all healthy eyes. In glaucomatous eyes, NFF correlated with visual field sensitivity in the arcuate sectors (Spearman ρ = 0.53-0.62). Focal nerve fiber loss in glaucomatous eyes appeared as uniform tracks of NFF defects that followed the expected arcuate fiber trajectory. Using an algorithm based on the conservation of flux, we derived nerve fiber trajectories in the peripapillary area. The NFF map is useful for the visualization of focal defects and quantification of sector nerve fiber loss from wide-area volumetric OCT scans. NFF provides a cumulative measure of volumetric loss along nerve fiber tracks and could improve the detection of focal glaucoma damage. 8.
Transportable and vibration-free full-field low-coherent quantitative phase microscope Science.gov (United States) Yamauchi, Toyohiko; Yamada, Hidenao; Goto, Kentaro; Matsui, Hisayuki; Yasuhiko, Osamu; Ueda, Yukio 2018-02-01 We developed a transportable Linnik-type full-field low-coherent quantitative phase microscope that is able to compensate for optical path length (OPL) disturbances due to environmental mechanical noise. Though two-beam interferometers such as Linnik ones suffer from an unstable OPL difference, we overcame this problem with a mechanical feedback system based on digital signal processing that precisely controls the OPL difference with sub-nanometer resolution and a feedback bandwidth of 4 kHz. The developed setup has a footprint of 200 mm by 200 mm, a height of 500 mm, and a weight of 4.5 kilograms. In the transmission imaging mode, cells were cultured on a reflection-enhanced glass-bottom dish, and we obtained interference images sequentially while performing stepwise quarter-wavelength phase-shifting. Real-time image processing, including retrieval of the unwrapped phase from interference images and its background correction, along with the acquisition of interference images, was performed on a laptop computer. Emulation of phase contrast (PhC) images and differential interference contrast (DIC) images was also performed in real time. Moreover, our setup was applied to full-field cell membrane imaging in the reflection mode, where the cells were cultured on an anti-reflection (AR)-coated glass-bottom dish. The phase and intensity of the light reflected by the membrane revealed the outer shape of the cells independent of the refractive index. In this paper, we show imaging results for cultured cells in both transmission and reflection modes. 9. Computation of the velocity field and mass balance in the finite-element modeling of groundwater flow International Nuclear Information System (INIS) Yeh, G.T. 1980-01-01 Darcian velocity has been conventionally calculated in the finite-element modeling of groundwater flow by taking the derivatives of the computed pressure field. This results in discontinuities in the velocity field at nodal points and element boundaries. Discontinuities become enormous when the computed pressure field is far from a linear distribution. It is proposed in this paper that the finite-element procedure that is used to simulate the pressure field or the moisture content field also be applied to Darcy's law, with the derivatives of the computed pressure field as the load function. The problem of discontinuity is then eliminated, and the error of mass balance over the region of interest is much reduced. The reduction is from 23.8 to 2.2% by one numerical scheme and from 29.7 to -3.6% by another for a transient problem 10. DEPENDENCE OF THE TURBULENT VELOCITY FIELD ON GAS DENSITY IN L1551 International Nuclear Information System (INIS) Yoshida, Atsushi; Kitamura, Yoshimi; Shimajiri, Yoshito; Kawabe, Ryohei 2010-01-01 We have carried out mapping observations of the entire L1551 molecular cloud, about 2 pc × 2 pc in size, in the ¹²CO(1-0) line with the Nobeyama 45 m radio telescope at the high effective resolution of 22'' (corresponding to 0.017 pc at the distance of 160 pc), and analyzed the ¹²CO data together with the ¹³CO(1-0) and C¹⁸O(1-0) data from the Nobeyama Radio Observatory database.
We derived the new non-thermal line width-size relations, σ_NT ∝ L^γ, for the three molecular lines, corrected for the effect of optical depth and the line-of-sight integration. To investigate the characteristic of the intrinsic turbulence, the effects of the outflows were removed. The derived relations are (σ_NT/km s⁻¹) = (0.18 ± 0.010)(L/pc)^(0.45±0.095), (0.20 ± 0.020)(L/pc)^(0.48±0.091), and (0.22 ± 0.050)(L/pc)^(0.54±0.21) for the ¹²CO, ¹³CO, and C¹⁸O lines, respectively, suggesting that the line width-size relation of the turbulence very weakly depends on our observed molecular lines, i.e., the relation does not change between the density ranges of 10²-10³ and 10³-10⁴ cm⁻³. In addition, the relations indicate that incompressible turbulence is dominant at the scales smaller than 0.6 pc in L1551. The power spectrum indices converted from the relations, however, seem to be larger than that of the Kolmogorov spectrum for incompressible flow. The disagreement could be explained by the anisotropy in the turbulent velocity field in L1551, as expected in MHD turbulence. Actually, the autocorrelation functions of the centroid velocity fluctuations show larger correlation along the direction of the magnetic field measured for the whole Taurus cloud, which is consistent with the results of numerical simulations for incompressible MHD flow. 11. Experimental Investigation on the Influence of a Double-Walled Confined Width on the Velocity Field of a Submerged Waterjet Directory of Open Access Journals (Sweden) Xiaolong Ding 2017-12-01 Full Text Available The current research on confined submerged waterjets mainly focuses on the flow field of the impinging jet and the wall jet. The double-sided-wall vertically confined waterjet, which is widely used in many fields such as mining, cleaning and surface strengthening, has rarely been studied so far. In order to explore the influence of a double-sided wall confined width on the velocity field of a submerged waterjet, an experiment was conducted with the application of 2D particle image velocimetry (PIV) technology. The distribution of mean velocity and turbulent velocity in both horizontal and vertical planes was used to characterize the flow field under various confined widths. The results show that the vertical confinement has an obvious effect on the decay rate of the mean centerline velocity. When the confined width changes from 15 to 5, the velocity is reduced by 20%. In addition, with the decrease of the confined width, the jet has a tendency to spread horizontally. The vertically confined region induces a space hysteresis effect which changes the location of the transition region, moving it downstream. There are local negative pressure zones separating the fluid and the wall. This study of a double-walled confined jet provides some valuable information with respect to its mechanism and industrial application. 12. Statistical properties of the surface velocity field in the northern Gulf of Mexico sampled by GLAD drifters OpenAIRE Mariano, A.J.; Ryan, E.H.; Huntley, H.S.; Laurindo, L.C.; Coelho, E.; Ozgokmen, TM; Berta, M.; Bogucki, D; Chen, S.S.; Curcic, M.; Drouin, K.L.; Gough, M; Haus, BK; Haza, A.C.; Hogan, P 2016-01-01 The Grand LAgrangian Deployment (GLAD) used multiscale sampling and GPS technology to observe time series of drifter positions with initial drifter separations of O(100 m) to O(10 km), and nominal 5 min sampling, during the summer and fall of 2012 in the northern Gulf of Mexico.
Histograms of the velocity field and its statistical parameters are non-Gaussian; most are multimodal. The dominant periods for the surface velocity field are 1-2 days due to inertial oscillations, tides, and the sea b... 13. Predictive Factors for Visual Field Conversion: Comparison of Scanning Laser Polarimetry and Optical Coherence Tomography. Science.gov (United States) Diekmann, Theresa; Schrems-Hoesl, Laura M; Mardin, Christian Y; Laemmer, Robert; Horn, Folkert K; Kruse, Friedrich E; Schrems, Wolfgang A 2018-02-01 The purpose of this study was to compare the ability of scanning laser polarimetry (SLP) and spectral-domain optical coherence tomography (SD-OCT) to predict future visual field conversion of subjects with ocular hypertension and early glaucoma. All patients were recruited from the Erlangen glaucoma registry and examined using standard automated perimetry, a 24-hour intraocular pressure profile, and optic disc photography. Peripapillary retinal nerve fiber layer (RNFL) thickness measurements were obtained by SLP (GDx-VCC) and SD-OCT (Spectralis OCT). Positive and negative predictive values (PPV, NPV) were calculated for morphologic parameters of SLP and SD-OCT. Kaplan-Meier survival curves were plotted and log-rank tests were performed to compare the survival distributions. Contingency tables and Venn diagrams were calculated to compare the predictive ability. The study included 207 patients: 75 with ocular hypertension, 85 with early glaucoma, and 47 controls. Median follow-up was 4.5 years. A total of 29 patients (14.0%) developed visual field conversion during follow-up. SLP temporal-inferior RNFL [0.667; 95% confidence interval (CI), 0.281-0.935] and SD-OCT temporal-inferior RNFL (0.571; 95% CI, 0.317-0.802) achieved the highest PPV; the nerve fiber indicator (0.923; 95% CI, 0.876-0.957) and SD-OCT mean (0.898; 95% CI, 0.847-0.937) achieved the highest NPV of all investigated parameters. The Kaplan-Meier curves confirmed significantly higher survival for subjects within normal limits of measurements of both devices (P<0.001). Venn diagrams tested with McNemar test statistics showed no significant difference for PPV (P=0.219) or NPV (P=0.678). Both GDx-VCC and SD-OCT demonstrate comparable results in predicting future visual field conversion, provided typical scans are obtained with GDx-VCC. In addition, the likelihood ratios suggest that GDx-VCC's nerve fiber indicator<30 may be the most useful parameter to confirm future nonconversion. (http://www.ClinicalTrials.gov number, NTC 14. Tensor-based morphometry with stationary velocity field diffeomorphic registration: application to ADNI. Science.gov (United States) Bossa, Matias; Zacur, Ernesto; Olmos, Salvador 2010-07-01 Tensor-based morphometry (TBM) is an analysis technique where anatomical information is characterized by means of the spatial transformations mapping a customized template to the observed images. Therefore, accurate inter-subject non-rigid registration is an essential prerequisite for both template estimation and image warping. Subsequent statistical analysis of the spatial transformations is performed to highlight voxel-wise differences. Most previous TBM studies did not explore the influence of the registration parameters, such as the parameters defining the deformation and the regularization models. In this work, a performance evaluation of TBM using stationary velocity field (SVF) diffeomorphic registration was performed on a subset of subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study.
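To illustrate the stationary-velocity-field idea, here is a minimal sketch of how an SVF is typically exponentiated into a diffeomorphism by scaling and squaring. This is a generic textbook construction in 2D, not the actual registration code used in the ADNI study; the grid size, smoothing, and amplitude are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter

def exp_svf(v, n_squarings=6):
    """Exponentiate a stationary velocity field v (shape (2, ny, nx), in
    pixel units) into a displacement field via scaling and squaring:
    exp(v) ~ (Id + v / 2**N) composed with itself N times."""
    ny, nx = v.shape[1:]
    grid = np.mgrid[0:ny, 0:nx].astype(float)
    disp = v / (2.0 ** n_squarings)            # small initial displacement
    for _ in range(n_squarings):
        # compose with itself: disp_new(x) = disp(x) + disp(x + disp(x))
        coords = grid + disp
        warped = np.stack([map_coordinates(disp[i], coords, order=1,
                                           mode="nearest")
                           for i in range(2)])
        disp = disp + warped
    return disp

# Demo: a smooth random SVF on a 64x64 grid
rng = np.random.default_rng(1)
v = gaussian_filter(rng.standard_normal((2, 64, 64)), sigma=(0, 8, 8)) * 20.0
phi = exp_svf(v)
print("max displacement (pixels):", float(np.abs(phi).max()))
```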
A wide range of values of the registration parameters that define the transformation smoothness and the balance between image matching and regularization was explored in the evaluation. The proposed methodology provided brain atrophy maps with very detailed anatomical resolution and with a high significance level compared with results recently published on the same data set using a non-linear elastic registration method. Copyright (c) 2010 Elsevier Inc. All rights reserved. 15. Temperature and velocity field of coolant at inlet to WWER-440 core - evaluation of experimental data International Nuclear Information System (INIS) Jirous, F.; Klik, F.; Janeba, B.; Daliba, J.; Delis, J. 1989-01-01 Coolant temperature and velocity fields at the inlet of the WWER-440 reactor core were determined experimentally. An estimate of the accuracy of the temperature measurements is presented, and the relation for determining the resulting measurement error is given. An estimate is also made of the accuracy of the solution of the system of equations for determining the coefficients B_kn using the method of least squares. The coefficients B_kn represent the relative contribution of the mass flow of the k-th fuel assembly from the n-th loop and allow the calculation of coolant temperatures at the inlet of the k-th fuel assembly when the coolant temperatures in the loops at the reactor inlet are known. A comparison is made of the results of measurements on a hydrodynamic model of a WWER-440 reactor with the results of measurements made at unit 4 of the Dukovany nuclear power plant. Full agreement was found for 32 model measurements and 6 reactor measurements. It may be assumed that the results of other model measurements obtained for other operating variants will also apply to an actual reactor. Their applicability may, however, only be confirmed by repeating the experiment on other WWER-440 reactors. (Z.M.). 5 figs., 7 refs 16. Measurements of the velocity fields by PIV method round about tilting gate Directory of Open Access Journals (Sweden) Mistrová Ivana 2012-04-01 Full Text Available The article deals with the use of the particle image velocimetry (PIV) measurement method to measure velocity fields in the flowing water in front of, above, and behind a drowned tilting weir gate. The aim was to obtain information about the velocity distribution in the area of interest for the verification or calibration of a numerical model. Experiments were carried out in an inclinable channel connected to a hydraulic circuit with a pump and storage tank at the Water Management Research Laboratory (LVV) of the Institute of Water Structures at the Faculty of Civil Engineering of Brno University of Technology. The hydraulic inclinable channel has a cross-section of 0.4 × 0.4 m and a length of 12.5 m. The measured area has a cross-section approximately 0.2 m wide and 0.4 m high, and its length is 1 m. The results of physical modelling allowed a comparison of experimental data with numerical simulation results for this type of flow in the commercial software ANSYS CFX-12.0. 17. Coherent light from E-field induced quantum coupling of exciton states in superlattice-like quantum wells DEFF Research Database (Denmark) Lyssenko, V. G.; Østergaard, John Erland; Hvam, Jørn Märcher 1999-01-01 Summary form only given. We focus on the ability to control the electronic coupling in coupled quantum wells with external E-fields, leading to a strong modification of the coherent light emission, in particular at a bias where a superlattice-like miniband is formed.
More specifically, we investigate an MBE-grown GaAs sample with a sequence of 15 single quantum wells having a successive increase of 1 monolayer in width, ranging from 62 Å to 102 Å, with AlGaAs barriers of 17 Å. 18. Full field optical coherence tomography can identify spermatogenesis in a rodent Sertoli-cell-only model. Science.gov (United States) Ramasamy, Ranjith; Sterling, Joshua; Manzoor, Maryem; Salamoon, Bekheit; Jain, Manu; Fisher, Erik; Li, Phillip S; Schlegel, Peter N; Mukherjee, Sushmita 2012-01-01 Microdissection testicular sperm extraction (micro-TESE) has replaced conventional testis biopsies as the method of choice for obtaining sperm for in vitro fertilization for men with nonobstructive azoospermia. A technical challenge of micro-TESE is that the low-magnification inspection of the tubules with a surgical microscope is insufficient to definitively identify sperm-containing tubules, necessitating tissue removal and cytologic assessment. Full field optical coherence tomography (FFOCT) uses white-light interference microscopy to generate quick high-resolution tomographic images of fresh (unprocessed and unstained) tissue. Furthermore, by using a nonlaser safe light source (150 W halogen lamp) for tissue illumination, it ensures that the sperm extracted for in vitro fertilization are not photo-damaged or mutagenized. A focal Sertoli-cell-only rodent model was created with busulfan injection in adult rats. Ex vivo testicular tissues from both normal and busulfan-treated rats were imaged with a commercial modified FFOCT system, Light-CT™, and the images were correlated with gold-standard hematoxylin and eosin staining. Light-CT™ identified spermatogenesis within the seminiferous tubules in freshly excised testicular tissue, without the use of exogenous contrast or fixation. Normal adult rats exhibited tubules with uniform size and shape (diameter 328 ± 11 μm). The busulfan-treated animals showed marked heterogeneity in tubular size and shape (diameter 178 ± 35 μm), and only 10% contained sperm within the lumen. FFOCT has the potential to facilitate real-time visualization of spermatogenesis in humans, and to aid in micro-TESE for men with infertility. 19. Cycloid scanning for wide field optical coherence tomography endomicroscopy and angiography in vivo Science.gov (United States) Liang, Kaicheng; Wang, Zhao; Ahsen, Osman O.; Lee, Hsiang-Chieh; Potsaid, Benjamin M.; Jayaraman, Vijaysekhar; Cable, Alex; Mashimo, Hiroshi; Li, Xingde; Fujimoto, James G. 2018-01-01 Devices that perform wide field-of-view (FOV) precision optical scanning are important for endoscopic assessment and diagnosis of luminal organ disease, such as in gastroenterology. Optical scanning for in vivo endoscopic imaging has traditionally relied on one or more proximal mechanical actuators, limiting scan accuracy and imaging speed. There is a need for rapid and precise two-dimensional (2D) microscanning technologies to enable the translation of benchtop scanning microscopies to in vivo endoscopic imaging. We demonstrate a new cycloid scanner in a tethered capsule for ultrahigh-speed, side-viewing optical coherence tomography (OCT) endomicroscopy in vivo.
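As context for the scan geometry detailed in the next part of this abstract, the toy sketch below generates an unwrapped cycloid-like trajectory by superposing a fast resonant oscillation on a slow circumferential rotation. The frequencies and amplitudes are illustrative assumptions, not the device parameters; only the 38 mm circumference is taken from the abstract.

```python
import numpy as np

# Toy cycloid scan: slow circumferential rotation (micromotor) plus a fast
# small-amplitude resonant scan (piezo fiber scanner) across the strip.
f_slow = 3.0          # slow rotations per second (assumed)
f_fast = 300.0        # fast scanner frequency in Hz (assumed)
R = 19.0 / np.pi      # capsule radius so circumference ~38 mm, per the abstract
A = 0.5               # fast-scan half-amplitude in mm -> ~1 mm strip width
t = np.linspace(0.0, 1.0 / f_slow, 20000)  # one slow revolution

# Unwrapped (circumferential, transverse) coordinates of the beam
u = 2 * np.pi * R * f_slow * t             # position along the circumference
w = A * np.sin(2 * np.pi * f_fast * t)     # fast oscillation across the strip
print(f"strip: {np.ptp(w):.2f} mm x {u.max():.1f} mm")
```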
The cycloid capsule incorporates two scanners: a piezoelectrically actuated resonant fiber scanner to perform a precision, small-FOV, fast scan, and a micromotor scanner to perform a wide-FOV, slow scan. Together these scanners distally scan the beam circumferentially in a 2D cycloid pattern, generating an unwrapped 1 mm × 38 mm strip FOV. Sequential strip volumes can be acquired with proximal pullback to image centimeter-long regions. Using ultrahigh-speed 1.3 μm wavelength swept-source OCT at a 1.17 MHz axial scan rate, we imaged the human rectum at 3 volumes/s. Each OCT strip volume had 166 × 2322 axial scans with 8.5 μm axial and 30 μm transverse resolution. We further demonstrate OCT angiography at 0.5 volumes/s, producing volumetric images of vasculature. In addition to OCT applications, cycloid scanning promises to enable precision 2D optical scanning for other imaging modalities, including fluorescence confocal and nonlinear microscopy. PMID:29682598 20. Characteristics of the Taylor microscale in the solar wind/foreshock. Magnetic field and electron velocity measurements Energy Technology Data Exchange (ETDEWEB) Gurgiolo, C. [Bitterroot Basic Research, Hamilton, MT (United States); Goldstein, M.L.; Vinas, A. [NASA Goddard Space Flight Center, Greenbelt, MD (United States). Heliospheric Physics Lab.; Matthaeus, W.H. [Delaware Univ., Newark, DE (United States). Bartol Research Foundation; Fazakerley, A.N. [University College London, Dorking (United Kingdom). Mullard Space Science Lab. 2013-07-01 The Taylor microscale is one of the fundamental turbulence scales. Not easily estimated in the interplanetary medium employing single-spacecraft data, it has generally been studied through two-point correlations. In this paper we present an alternative, albeit mathematically equivalent, method for estimating the Taylor microscale (λ_T). We make two independent determinations employing multi-spacecraft data sets from the Cluster mission, one using magnetic field data and a second using electron velocity data. Our results using the magnetic field data set yield a scale length of 1538 ± 550 km, slightly less than, but within the same range as, values found in previous magnetic-field-based studies. During time periods where both magnetic field and electron velocity data can be used, the two values can be compared. Relative comparisons show λ_T computed from the velocity is often significantly smaller than that from the magnetic field data. Due to a lack of events where both measurements are available, the absolute λ_T based on the electron fluid velocity could not be determined. 1. Characteristics of the Taylor microscale in the solar wind/foreshock: magnetic field and electron velocity measurements Directory of Open Access Journals (Sweden) C. Gurgiolo 2013-11-01 Full Text Available The Taylor microscale is one of the fundamental turbulence scales. Not easily estimated in the interplanetary medium employing single-spacecraft data, it has generally been studied through two-point correlations. In this paper we present an alternative, albeit mathematically equivalent, method for estimating the Taylor microscale (λ_T). We make two independent determinations employing multi-spacecraft data sets from the Cluster mission, one using magnetic field data and a second using electron velocity data.
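A common single-baseline way to picture the Taylor microscale is the parabolic behavior of the normalized two-point correlation at zero lag, R(r) ≈ 1 - r²/λ_T² (one common convention; others carry a factor of 2). The sketch below fits that parabola to a synthetic correlation function; it is a generic illustration, not the multi-spacecraft estimator of these papers.

```python
import numpy as np

def taylor_microscale(r, R, n_fit=5):
    """Estimate lambda_T from the curvature of the normalized two-point
    correlation at the origin, using the convention R(r) ~ 1 - r^2/lambda_T^2."""
    # Least-squares fit of R - 1 = c * r^2 over the first n_fit lags
    rr, RR = r[:n_fit], R[:n_fit]
    c = np.sum(rr**2 * (RR - 1.0)) / np.sum(rr**4)
    return np.sqrt(-1.0 / c)

# Synthetic correlation function with a known microscale of 1500 km
lam_true = 1500.0
r = np.linspace(0.0, 3000.0, 61)          # lag in km
R = np.exp(-(r / lam_true) ** 2)          # ~ 1 - r^2/lam^2 near r = 0
print(f"lambda_T estimate: {taylor_microscale(r, R):.0f} km (true {lam_true:.0f})")
```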
Our results using the magnetic field data set yield a scale length of 1538 ± 550 km, slightly less than, but within the same range as, values found in previous magnetic-field-based studies. During time periods where both magnetic field and electron velocity data can be used, the two values can be compared. Relative comparisons show λ_T computed from the velocity is often significantly smaller than that from the magnetic field data. Due to a lack of events where both measurements are available, the absolute λ_T based on the electron fluid velocity could not be determined. 2. Characteristics of the Taylor microscale in the solar wind/foreshock. Magnetic field and electron velocity measurements International Nuclear Information System (INIS) Gurgiolo, C.; Goldstein, M.L.; Vinas, A.; Matthaeus, W.H.; Fazakerley, A.N. 2013-01-01 The Taylor microscale is one of the fundamental turbulence scales. Not easily estimated in the interplanetary medium employing single-spacecraft data, it has generally been studied through two-point correlations. In this paper we present an alternative, albeit mathematically equivalent, method for estimating the Taylor microscale (λ_T). We make two independent determinations employing multi-spacecraft data sets from the Cluster mission, one using magnetic field data and a second using electron velocity data. Our results using the magnetic field data set yield a scale length of 1538 ± 550 km, slightly less than, but within the same range as, values found in previous magnetic-field-based studies. During time periods where both magnetic field and electron velocity data can be used, the two values can be compared. Relative comparisons show λ_T computed from the velocity is often significantly smaller than that from the magnetic field data. Due to a lack of events where both measurements are available, the absolute λ_T based on the electron fluid velocity could not be determined. 3. Exploiting LSPIV to assess debris-flow velocities in the field Science.gov (United States) Theule, Joshua I.; Crema, Stefano; Marchi, Lorenzo; Cavalli, Marco; Comiti, Francesco 2018-01-01 The assessment of flow velocity has a central role in quantitative analysis of debris flows, both for the characterization of the phenomenology of these processes and for the assessment of related hazards. Large-scale particle image velocimetry (LSPIV) can contribute to the assessment of the surface velocity of debris flows, provided that the specific features of these processes (e.g. fast stage variations and particles up to boulder size on the flow surface) are taken into account. Three debris-flow events, each consisting of several surges featuring different sediment concentrations, flow stages, and velocities, have been analysed at the inlet of a sediment trap in a stream in the eastern Italian Alps (Gadria Creek). Free software has been employed for the preliminary treatment (orthorectification and format conversion) of the video-recorded images as well as for the LSPIV application. Results show that LSPIV velocities are consistent with manual measurements on the orthorectified imagery and with the front velocity measured from the hydrographs recorded in a channel approximately 70 m upstream of the sediment trap. Horizontal turbulence, computed as the standard deviation of the flow directions at a given cross-section for a given surge, proved to be correlated with surface velocity and with visually estimated sediment concentration.
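The core of LSPIV is patch-wise cross-correlation between successive orthorectified frames. The sketch below shows the idea for one interrogation window using an FFT-based correlation; the window size, frame interval, and ground resolution are made-up parameters, and this is not the free software cited in the abstract.

```python
import numpy as np
from scipy.signal import fftconvolve

def patch_displacement(win_a, win_b):
    """Displacement (dy, dx) of win_b relative to win_a, from the peak of
    their cross-correlation (a convolution with win_b flipped)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = fftconvolve(a, b[::-1, ::-1], mode="full")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # For this correlation layout the zero-lag peak sits at (shape - 1)
    return (win_a.shape[0] - 1) - peak[0], (win_a.shape[1] - 1) - peak[1]

# Synthetic frame pair: random texture shifted by (3, -2) pixels
rng = np.random.default_rng(2)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(3, -2), axis=(0, 1))
dy, dx = patch_displacement(frame, shifted)

dt = 1.0 / 25.0   # frame interval in s (assumed)
gsd = 0.05        # ground sampling distance in m/pixel (assumed)
print(f"surface velocity ~ ({dy * gsd / dt:.2f}, {dx * gsd / dt:.2f}) m/s")
```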
The study demonstrates the effectiveness of LSPIV in the assessment of the surface velocity of debris flows and permits the most crucial aspects to be identified in order to improve the accuracy of debris-flow velocity measurements. 4. Two-dimensional, average velocity field across the Asal Rift, Djibouti from 1997-2008 RADARSAT data Science.gov (United States) Tomic, J.; Doubre, C.; Peltzer, G. 2009-12-01 Located at the western end of the Aden ridge, the Asal Rift is the first emerged section of the ridge propagating into Afar, a region of intense volcanic and tectonic activity. We construct a two-dimensional surface velocity map of the 200 × 400 km² region covering the rift using the 1997-2008 archive of InSAR data acquired from ascending and descending passes of the RADARSAT satellite. The large phase signal due to turbulent troposphere conditions over the Afar region is mostly removed from the 11-year average line-of-sight (LOS) velocity maps, revealing a clear deformation signal across the rift. We combine the ascending and descending pass LOS velocity fields with the Arabia-Somalia pole of rotation adjusted to regional GPS velocities (Vigny et al., 2007) to compute the fields of the vertical and horizontal, GPS-parallel components of the velocity over the rift. The vertical velocity field shows a ~40 km wide zone of doming centered over the Fieale caldera, associated with shoulder uplift and subsidence of the rift inner floor. Differential movement between shoulders and floor is accommodated by creep at 6 mm/yr on Fault γ and 2.7 mm/yr on Fault E. The horizontal field shows that the two shoulders open at a rate of ~15 mm/yr, while the horizontal velocity decreases away from the rift to the plate motion rate of ~11 mm/yr. Part of the opening is concentrated on faults γ (5 mm/yr) and E (4 mm/yr), and about 4 mm/yr is distributed between Fault E and Fault H in the southern part of the rift. The observed velocity field along a 60 km-long profile across the eastern part of the rift can be explained with a 2D mechanical model involving a 5-9 km-deep, vertical dyke expanding horizontally at a rate of 5 cm/yr, a 2 km-wide, 7 km-deep sill expanding vertically at 1 cm/yr, and down-dip slip and opening on faults γ and E. Results from 3D rift models describing the along-strike velocity decrease away from the Goubbet Gulf and the effects of a pressurized magma chamber will be 5. Shannon Entropy-Based Wavelet Transform Method for Autonomous Coherent Structure Identification in Fluid Flow Field Data Directory of Open Access Journals (Sweden) Kartik V. Bulusu 2015-09-01 Full Text Available The coherent secondary flow structures (i.e., swirling motions) in a curved artery model possess a variety of spatio-temporal morphologies and can be encoded over an infinitely wide range of wavelet scales. Wavelet analysis was applied to the following vorticity fields: (i) a numerically generated system of Oseen-type vortices, for which the theoretical solution is known, used for benchmarking and evaluation of the technique; and (ii) experimental two-dimensional particle image velocimetry data. The mother wavelet, a two-dimensional Ricker wavelet, can be dilated to infinitely large or infinitesimally small scales. We approached the problem of coherent structure detection by means of the continuous wavelet transform (CWT) and decomposition (or Shannon) entropy.
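The building blocks of the detection scheme just described are a 2D Ricker (Mexican-hat) wavelet convolved with the vorticity field at a set of scales, and the Shannon entropy of the normalized coefficient energies used to compare scales. Below is a minimal sketch of both pieces under invented parameters (Gaussian vortex blobs stand in for Oseen vortices); it is not the authors' algorithm, only the underlying transform.

```python
import numpy as np
from scipy.signal import fftconvolve

def ricker2d(scale, size=65):
    """2D Ricker (Mexican hat) wavelet, the Laplacian-of-Gaussian shape."""
    ax = np.arange(size) - size // 2
    X, Y = np.meshgrid(ax, ax)
    r2 = (X**2 + Y**2) / scale**2
    return (1.0 - r2) * np.exp(-r2 / 2.0)

def cwt2d(field, scales):
    """Continuous wavelet transform of a 2D field at several scales."""
    return np.stack([fftconvolve(field, ricker2d(s), mode="same")
                     for s in scales])

def shannon_entropy(coeffs):
    """Entropy of the normalized energy distribution of one scale's map."""
    p = coeffs**2 / np.sum(coeffs**2)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Synthetic vorticity: a counter-rotating pair of Gaussian vortex cores
ax = np.arange(128)
X, Y = np.meshgrid(ax, ax)
vort = (np.exp(-((X - 40.0)**2 + (Y - 64.0)**2) / 36.0)
        - np.exp(-((X - 90.0)**2 + (Y - 64.0)**2) / 36.0))

scales = [2, 4, 6, 8, 12]
W = cwt2d(vort, scales)
H = [shannon_entropy(W[i]) for i in range(len(scales))]
print("entropy by scale:", dict(zip(scales, np.round(H, 2))))
```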
The main conclusion of this study is that the encoding of coherent secondary flow structures can be achieved by an optimal number of binary digits (or bits) corresponding to an optimal wavelet scale. The optimal wavelet-scale search was driven by a decomposition entropy-based algorithmic approach and led to a threshold-free coherent structure detection method. The method presented in this paper was successfully utilized in the detection of secondary flow structures in three clinically relevant blood flow scenarios involving the curved artery model under a carotid artery-inspired, pulsatile inflow condition. These scenarios were: (i) a clean curved artery; (ii) a stent-implanted curved artery; and (iii) an idealized Type IV stent fracture within the curved artery. 6. USGS Menlo Park GPS Data Processing Techniques and Derived North America Velocity Field (Invited) Science.gov (United States) Svarc, J. L.; Murray-Moraleda, J. R.; Langbein, J. O. 2010-12-01 The U.S. Geological Survey in Menlo Park routinely conducts repeated GPS surveys of geodetic markers throughout the western United States using dual-frequency geodetic GPS receivers. We combine campaign, continuous, and semi-permanent data to present a North America-fixed velocity field for regions in the western United States. Mobile campaign-based surveys require less up-front investment than permanently monumented and telemetered GPS systems, and hence have achieved broad and dense spatial coverage. The greater flexibility and mobility come at the cost of greater uncertainties in individual daily position solutions. We also routinely process continuous GPS data collected at PBO stations operated by UNAVCO, along with data from other continuous GPS networks such as BARD, PANGA, and CORS operated by other agencies. We have broken the western US into several subnetworks containing approximately 150-250 stations each. The data are processed using JPL's GIPSY-OASIS II release 5.0 software with a modified precise positioning strategy (Zumberge and others, 1997). We use the "ambizap" code provided by Geoff Blewitt (Blewitt, 2008) to fix phase ambiguities in continuous networks. To mitigate the effect of common-mode noise, we use the positions of stations in the network with very long, clean time series (i.e. those with no large outliers or offsets) to transform all position estimates into "regionally filtered" results, following the approach of Hammond and Thatcher (2007). Velocity uncertainties from continuously operated GPS stations tend to be about 3 times smaller than those from campaign data. Langbein (2004) presents a maximum likelihood method for fitting a time series employing a variety of temporal noise models. We assume that GPS observations are contaminated by a combination of white, flicker, and random walk noise. For continuous and semi-permanent time series longer than 2 years we estimate these values, otherwise we fix the amplitudes of these 7. H0, q0 and the local velocity field. [Hubble and deceleration constants in Big Bang expansion] Science.gov (United States) Sandage, A.; Tammann, G. A. 1982-01-01 An attempt is made to find a systematic deviation from linearity for distances that are under the control of the Virgo cluster, and to determine the value of the mean random motion about the systematic flow, in order to improve the measurement of the Hubble and deceleration constants.
The velocity-distance relation for large and intermediate distances is studied, and type I supernovae are calibrated relatively as distance indicators, and absolutely, to obtain a new value for the Hubble constant. Methods of determining the deceleration constant are assessed, including determination from direct measurement, mean luminosity density, virgocentric motion, and the time-scale test. The very local velocity field is investigated, and a solution is preferred with a random peculiar radial velocity of very nearby field galaxies of 90-100 km/s and a virgocentric motion of the Local Group of 220 km/s, leading to an underlying expansion rate of 55, in satisfactory agreement with the global value. 8. Simultaneous planar measurements of soot structure and velocity fields in a turbulent lifted jet flame at 3 kHz Science.gov (United States) Köhler, M.; Boxx, I.; Geigle, K. P.; Meier, W. 2011-05-01 We describe a newly developed combustion diagnostic for the simultaneous planar imaging of soot structure and velocity fields in a highly sooting, lifted turbulent jet flame at 3000 frames per second, or two orders of magnitude faster than "conventional" laser imaging systems. This diagnostic uses short-pulse-duration (8 ns), frequency-doubled, diode-pumped solid-state (DPSS) lasers to excite laser-induced incandescence (LII) at 3 kHz, which is then imaged onto a high-framerate CMOS camera. A second (dual-cavity) DPSS laser and CMOS camera form the basis of a particle image velocimetry (PIV) system used to acquire the 2-component velocity field in the flame. The LII response curve (measured in a laminar propane diffusion flame) is presented, and the combined diagnostics are then applied in a heavily sooting, lifted turbulent jet flame. The potential challenges and rewards of application of this combined imaging technique at high speeds are discussed. 9. Dynamical system with plastic self-organized velocity field as an alternative conceptual model of a cognitive system. Science.gov (United States) Janson, Natalia B; Marsden, Christopher J 2017-12-05 It is well known that architecturally the brain is a neural network, i.e. a collection of many relatively simple units coupled flexibly. However, it has been unclear how the possession of this architecture enables higher-level cognitive functions, which are unique to the brain. Here, we consider the brain from the viewpoint of dynamical systems theory and hypothesize that the unique feature of the brain, the self-organized plasticity of its architecture, could represent the means of enabling the self-organized plasticity of its velocity vector field. We propose that, conceptually, the principle of cognition could amount to the existence of appropriate rules governing self-organization of the velocity field of a dynamical system, with an appropriate account of stimuli. To support this hypothesis, we propose a simple non-neuromorphic mathematical model with a plastic self-organized velocity field, which has no prototype in the physical world. This system is shown to be capable of basic cognition, which is illustrated numerically and with musical data. Our conceptual model could provide an additional insight into the working principles of the brain. Moreover, hardware implementations of plastic velocity fields self-organizing according to various rules could pave the way to creating artificial intelligence of a novel type. 10. On the phase velocity of plasma waves in a self-modulated laser wake-field accelerator NARCIS (Netherlands) Andreev, N. E.; Kirsanov, V. I.;
Sakharov, A. S.; van Amersfoort, P. W.; Goloviznin, V. V. 1996-01-01 The properties of the wake field excited by a flat-top laser pulse with a sharp leading edge and a power below the critical one for relativistic self-focusing are studied analytically and numerically, with emphasis on the phase velocity of the plasma wave. The paraxial model describing modulation of 11. Influence on electron coherence from quantum electromagnetic fields in the presence of conducting plates International Nuclear Information System (INIS) Hsiang, J.-T.; Lee, D.-S. 2006-01-01 The influence of electromagnetic vacuum fluctuations in the presence of a perfectly conducting plate on electrons is studied with an interference experiment. The evolution of the reduced density matrix of the electron is derived by the method of influence functionals. We find that the plate boundary anisotropically modifies the vacuum fluctuations, which in turn affect the electron coherence. The path plane of the interference is chosen either parallel or normal to the plate. In the vicinity of the plate, we show that the coherence between electrons due to the boundary is enhanced in the parallel configuration, but reduced in the normal case. The presence of a second parallel plate is found to boost these effects. The potential relation between the amplitude change and the phase shift of the interference fringes is pointed out. The finite-conductivity effect on electron coherence is discussed 12. Measuring aortic pulse wave velocity using high-field cardiovascular magnetic resonance: comparison of techniques Directory of Open Access Journals (Sweden) Shaffer Jean M 2010-05-01 Full Text Available Abstract Background The assessment of arterial stiffness is increasingly used for evaluating patients with different cardiovascular diseases, as the mechanical properties of major arteries are often altered. Aortic stiffness can be noninvasively estimated by measuring pulse wave velocity (PWV). Several methods have been proposed for measuring PWV using velocity-encoded cardiovascular magnetic resonance (CMR), including transit-time (TT), flow-area (QA), and cross-correlation (XC) methods. However, assessment and comparison of these techniques at high field strength has not yet been performed. In this work, the TT, QA, and XC techniques were clinically tested at 3 Tesla and compared to each other. Methods Fifty cardiovascular patients and six volunteers were scanned to acquire the necessary images. The six volunteer scans were performed twice to test inter-scan reproducibility. Patient images were analyzed using the TT, XC, and QA methods to determine PWV. Two observers analyzed the images to determine inter-observer and intra-observer variabilities. The PWV measurements by the three methods were compared to each other to test inter-method variability. To illustrate the importance of PWV using CMR, the degree of aortic stiffness was assessed using PWV and related to LV dysfunction in five patients with diastolic heart failure and five matched volunteers. Results The inter-observer and intra-observer variability results showed no bias between the different techniques. The TT and XC results were more reproducible than the QA; the mean (SD) inter-observer/intra-observer PWV differences were -0.12 (1.3)/-0.04 (0.4) for TT, 0.2 (1.3)/0.09 (0.9) for XC, and 0.6 (1.6)/0.2 (1.4) m/s for the QA method, respectively.
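Of the three estimators compared, the transit-time method is the simplest to sketch: PWV = Δx/Δt, with Δt taken between the upstroke arrival times of flow waveforms at two aortic locations. The code below is a hypothetical illustration using a half-maximum upstroke criterion on synthetic waveforms; it is not the study's analysis code, and all waveform parameters are invented.

```python
import numpy as np

def upstroke_time(t, q, frac=0.5):
    """Arrival time where the systolic upstroke crosses frac * peak,
    with linear interpolation between samples."""
    thresh = frac * q.max()
    i = np.argmax(q >= thresh)                 # first sample above threshold
    t0, t1, q0, q1 = t[i - 1], t[i], q[i - 1], q[i]
    return t0 + (thresh - q0) * (t1 - t0) / (q1 - q0)

def transit_time_pwv(t, q_prox, q_dist, dx):
    """PWV = dx / dt between proximal and distal waveforms (dx in m)."""
    dt = upstroke_time(t, q_dist) - upstroke_time(t, q_prox)
    return dx / dt

# Synthetic flow curves: Gaussian systolic pulses 30 ms apart, sites 12 cm apart
t = np.linspace(0.0, 0.8, 800)                  # one cardiac cycle, in s
pulse = lambda t0: np.exp(-((t - t0) / 0.05) ** 2)
pwv = transit_time_pwv(t, pulse(0.10), pulse(0.13), dx=0.12)
print(f"PWV = {pwv:.1f} m/s")                   # expect ~4 m/s
```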
The correlation coefficients (r) for the inter-observer/intra-observer comparisons were 0.94/0.99, 0.88/0.94, and 0.83/0.92 for the TT, XC, and QA methods, respectively. The inter-scan reproducibility results showed low variability between the repeated scans 13. Comparison of swirling strengths derived from two- and three-dimensional velocity fields in channel flow Science.gov (United States) Chen, Huai; Li, Danxun; Bai, Ruonan; Wang, Xingkui 2018-05-01 Swirling strength is an effective vortex indicator in wall turbulence, and it can be determined from either two-dimensional (2D) or three-dimensional (3D) velocity fields, written as λci2D and λci3D, respectively. A comparison between λci2D and λci3D has been made in this paper in sliced XY, YZ, and XZ planes using 3D DNS data of channel flow. The magnitude of λci2D in the three orthogonal planes differs in the inner region, but the difference tends to diminish in the outer flow. The magnitude of λci3D exceeds each λci2D, and the square of λci3D is greater than the summation of the squares of the three λci2D. Extraction with λci2D in the XY, YZ, and XZ planes yields different population densities and vortex sizes; i.e., in the XZ plane the vortices display the largest population density and the smallest size, while in the XY and YZ planes the vortices are similar in size but fewer vortices are extracted in the XY plane in the inner layer. Vortex size varies inversely with the threshold used for growing the vortex region from background turbulence. When identical thresholds are used, the λci3D approach leads to a slightly smaller population density and a greater vortex radius than the λci2D approach. A threshold of 0.8 for the λci3D approach is approximately equivalent to a threshold of 1.5 for the λci2D approach. 14. Semianalytical solutions for contaminant transport under variable velocity field in a coastal aquifer Science.gov (United States) 2018-05-01 Existing closed-form solutions of contaminant transport problems are limited by the mathematically convenient assumption of uniform flow. These solutions cannot be used to investigate contaminant transport in coastal aquifers, where seawater intrusion induces a variable velocity field. An adaptation of the Fourier-Galerkin method is introduced to obtain semi-analytical solutions for contaminant transport in a confined coastal aquifer in which the saltwater wedge is in equilibrium with a freshwater discharge flow. Two scenarios, dealing with contaminant leakage from the aquifer top surface and contaminant migration from a source at the landward boundary, are considered. A robust implementation of the Fourier-Galerkin method is developed to efficiently solve the coupled flow, salt, and contaminant transport equations. Various illustrative examples are generated and the semi-analytical solutions are compared against an in-house numerical code. The Fourier series are used to evaluate relevant metrics characterizing contaminant transport, such as the discharge flux to the sea, the amount of contaminant persisting in the groundwater, and the solute flux from the source. These metrics represent quantitative data for numerical code validation and are relevant to understanding the effect of seawater intrusion on contaminant transport. It is observed that, for the surface contamination scenario, seawater intrusion limits the spread of the contaminant but intensifies the contaminant discharge to the sea.
For the landward contamination scenario, moderate seawater intrusion affects only the spatial distribution of the contaminant plume, while extreme seawater intrusion can increase the contaminant discharge to the sea. The developed semi-analytical solution presents an efficient tool for the verification of numerical models. It provides a clear interpretation of the contaminant transport processes in coastal aquifers subject to seawater intrusion. For practical usage in further studies, the full 15. Shear-wave velocities beneath the Harrat Rahat volcanic field, Saudi Arabia, using ambient seismic noise analysis Science.gov (United States) Civilini, F.; Mooney, W.; Savage, M. K.; Townend, J.; Zahran, H. M. 2017-12-01 We present seismic shear velocities for Harrat Rahat, a Cenozoic bimodal alkaline volcanic field in west-central Saudi Arabia, using seismic tomography from natural ambient noise. This project is part of an overall effort by the Saudi Geological Survey and the United States Geological Survey to describe the subsurface structure and assess hazards within the Saudi Arabian shield. Volcanism at Harrat Rahat began approximately 10 Ma, with at least three pulses around 10, 5, and 2 Ma, and at least several pulses in the Quaternary from 1.9 Ma to the present. This area is instrumented with 14 broadband Nanometrics Trillium T120 instruments across an array aperture of approximately 130 kilometers. We used a year of recorded natural ambient noise to determine group and phase velocity surface-wave dispersion maps with a 0.1 decimal degree resolution for the radial-radial, transverse-transverse, and vertical-vertical components of the empirical Green's function. A grid-search method was used to carry out 1D shear-velocity inversions at each latitude-longitude point, and the results were interpolated to produce pseudo-3D shear-velocity models. The dispersion maps resolved a zone of slow surface-wave velocity south-east of the city of Medina spatially correlated with the 1256 CE eruption. A crustal layer interface at approximately 20 km depth was determined by the inversions for all components, matching the results of prior seismic-refraction studies. Cross-sections of the 3D shear-velocity models were compared to gravity measurements obtained at the south-east edge of the field. We found that measurements of low gravity qualitatively correlate with low values of shear velocity below 20 km along the cross-section profile. We apply these methods to obtain preliminary tomography results for the entire Arabian Shield. 16. Automated high resolution full-field spatial coherence tomography for quantitative phase imaging of human red blood cells Science.gov (United States) Singla, Neeru; Dubey, Kavita; Srivastava, Vishal; Ahmad, Azeem; Mehta, D. S. 2018-02-01 We developed an automated high-resolution full-field spatial coherence tomography (FF-SCT) microscope for quantitative phase imaging that is based on spatial, rather than temporal, coherence gating. Red and green laser light was used for finding the quantitative phase images of unstained human red blood cells (RBCs). This study uses morphological parameters of unstained RBC phase images to distinguish between normal and infected cells. We recorded single interferograms with the FF-SCT microscope at the red and green wavelengths and averaged the two phase images to further reduce noise artifacts.
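A minimal sketch of the two processing steps just described: phase retrieval from phase-shifted interferograms, followed by averaging two independently retrieved phase maps to suppress uncorrelated noise. The demo below uses the standard four-step (pi/2) phase-shifting formula on synthetic frames and, for simplicity, assumes the two wavelengths see the same phase map; none of this is the instrument's actual processing chain.

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Wrapped phase from four interferograms with pi/2 phase steps:
    phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(I3 - I1, I0 - I2)

def frames(phi, noise, rng):
    """Synthesize four phase-shifted interferograms of a phase map phi."""
    return [1.0 + np.cos(phi + k * np.pi / 2)
            + noise * rng.standard_normal(phi.shape) for k in range(4)]

rng = np.random.default_rng(3)
ax = np.linspace(-1, 1, 128)
X, Y = np.meshgrid(ax, ax)
phi_true = 1.2 * np.exp(-(X**2 + Y**2) / 0.2)   # cell-like phase bump, < pi

# In reality the red and green channels measure different phases for the
# same optical path difference; here both use phi_true for simplicity.
phi_red = four_step_phase(*frames(phi_true, 0.05, rng))
phi_green = four_step_phase(*frames(phi_true, 0.05, rng))
phi_avg = 0.5 * (phi_red + phi_green)           # average of the two maps

for name, p in [("red", phi_red), ("avg", phi_avg)]:
    print(name, "rms error:", float(np.sqrt(np.mean((p - phi_true) ** 2))))
```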
In order to distinguish anemia-infected from normal cells, different morphological features were extracted, and these features were used to train a machine-learning ensemble model to classify RBCs with high accuracy. 17. Rotational Coherence Encoded in an "Air-Laser" Spectrum of Nitrogen Molecular Ions in an Intense Laser Field Directory of Open Access Journals (Sweden) Haisu Zhang 2013-10-01 Full Text Available We investigate lasing action in aligned nitrogen molecular ions (N₂⁺) produced in an intense laser field. We find that, besides the population inversion between the B²Σu⁺ and X²Σg⁺ states, which is responsible for the observed stimulated amplification of a seed pulse, a rotational wave packet in the ground vibrational state (v=0) of the excited electronic B²Σu⁺ state has been created in N₂⁺. The rotational coherence can faithfully encode its characteristics into the amplified seed pulses, enabling reconstruction of the rotational wave packets of the molecules in a single-shot detection manner from the frequency-resolved laser spectrum. Our results suggest that the air laser can potentially provide a promising tool for remote characterization of coherent molecular rotational wave packets. 18. A Quantum Field Approach for Advancing Optical Coherence Tomography Part I: First Order Correlations, Single Photon Interference, and Quantum Noise. Science.gov (United States) Brezinski, M E 2018-01-01 Optical coherence tomography has become an important imaging technology in cardiology and ophthalmology, with other applications under investigation. Major advances in optical coherence tomography (OCT) imaging are likely to occur through a quantum field approach to the technology. In this paper, which is the first part in a series on the topic, the quantum basis of OCT first-order correlations is expressed in terms of full field quantization. Specifically, first-order correlations are treated as the linear sum of single-photon interferences along indistinguishable paths. Photons and the electromagnetic (EM) field are described in terms of quantum harmonic oscillators. While the author feels the study of quantum second-order correlations will lead to greater paradigm shifts in the field, as addressed in part II, advances from the study of quantum first-order correlations are given. In particular, ranging errors are discussed (with remedies) arising from vacuum fluctuations through the detector port, photon counting errors, and position probability amplitude uncertainty. In addition, the principles of quantum field theory and first-order correlations are needed for studying second-order correlations in part II. 19. A Quantum Field Approach for Advancing Optical Coherence Tomography Part I: First Order Correlations, Single Photon Interference, and Quantum Noise Science.gov (United States) Brezinski, ME 2018-01-01 Optical coherence tomography has become an important imaging technology in cardiology and ophthalmology, with other applications under investigation. Major advances in optical coherence tomography (OCT) imaging are likely to occur through a quantum field approach to the technology. In this paper, which is the first part in a series on the topic, the quantum basis of OCT first-order correlations is expressed in terms of full field quantization. Specifically, first-order correlations are treated as the linear sum of single-photon interferences along indistinguishable paths. Photons and the electromagnetic (EM) field are described in terms of quantum harmonic oscillators.
While the author feels the study of quantum second-order correlations will lead to greater paradigm shifts in the field, as addressed in part II, advances from the study of quantum first-order correlations are given. In particular, ranging errors are discussed (with remedies) arising from vacuum fluctuations through the detector port, photon counting errors, and position probability amplitude uncertainty. In addition, the principles of quantum field theory and first-order correlations are needed for studying second-order correlations in part II. 20. Behavior of impurity ion velocities during the pulsed poloidal current drive in the Madison symmetric torus reversed-field pinch International Nuclear Information System (INIS) Sakakita, Hajime; Craig, Darren; Anderson, Jay K.; Chapman, Brett E.; Den-Hartog, Daniel J.; Prager, Stewart C.; Biewer, Ted M.; Terry, Stephen D. 2003-01-01 We report on passive measurements of impurity ion velocities during pulsed poloidal current drive (PPCD) in the Madison Symmetric Torus reversed-field pinch. During PPCD, the electron temperature increased and a sudden reduction of magnetic fluctuations was observed. In light of this change, we have studied whether the plasma velocity is affected. Plasma rotation is observed to decrease during PPCD. From measurements of line intensities for several impurities along 10 poloidal chords, it is found that the impurity line emission shifts outward. The ion temperature of impurities connects reasonably, from core to edge, with that measured by charge exchange recombination spectroscopy. (author) 1. Characterization of an extrapolation chamber and radiochromic films for verifying the metrological coherence among beta radiation fields International Nuclear Information System (INIS) Castillo, Jhonny Antonio Benavente 2011-01-01 2. Effect of Gas Velocity on the Dust Sediment Layer in the Coupled Field of Corona Plasma and Cyclone International Nuclear Information System (INIS) Wei Mingshan; Ma Chaochen; Li Minghua; Danish, S N 2006-01-01 A dust sediment layer was found on the outer tube wall when the ESCP (electrostatic centrifugal precipitator) trapped diesel particulates or ganister sand. The Compton backscatter method was used to measure the sediment thickness during the experiment. The effect of the inlet gas velocity on the dust sediment layer was investigated. PIV (particle image velocimetry) was used to measure the velocity field between the inner barb tube wall and the outer tube wall. Experiments showed that the thickness of the sediment increased with time, and the sediment layer at the lower end was much thicker than that at the upper end. The agglomeration on the outer tube wall could be removed when the inlet gas velocity was increased to a certain value 3. The spatial distribution and velocity field of the molecular hydrogen line emission from the centre of the Galaxy International Nuclear Information System (INIS) Gatley, I.; Krisciunas, K.; Jones, T.J.; Hyland, A.R.; Geballe, T.R.; Rijksuniversiteit Groningen 1986-01-01 In an earlier paper the existence of a ring of molecular hydrogen line emission surrounding the nucleus of the Galaxy was demonstrated. Here we present the first detailed maps of the surface brightness and the velocity field, made in the v=1-0 S(1) line of molecular hydrogen with a spatial resolution of 18 arcsec and a velocity resolution of 130 km s⁻¹.
It is found that the molecular ring is tilted approximately 20° out of the plane of the Galaxy, has a broken and clumpy appearance, rotates at 100 km s⁻¹ in the sense of galactic rotation, and exhibits radial motion at a velocity of 50 km s⁻¹. (author)

4. Two-dimensional atom localization based on coherent field controlling in a five-level M-type atomic system.
Science.gov (United States)
Jiang, Xiangqian; Li, Jinjiang; Sun, Xiudong
2017-12-11
We study two-dimensional sub-wavelength atom localization based on microwave coupling-field control and the spontaneously generated coherence (SGC) effect. For a five-level M-type atom, introducing a microwave coupling field between two upper levels and considering the quantum interference between the two transitions from the upper levels to lower levels, the analytical expression of the conditional position probability (CPP) distribution is obtained using the iterative method. The influence of the detuning of a spontaneously emitted photon, the Rabi frequency of the microwave field, and the SGC effect on the CPP are discussed. Two-dimensional sub-half-wavelength atom localization with high precision and high spatial resolution is achieved by adjusting the detuning and the Rabi frequency, where the atom can be localized in a region smaller than λ/10 × λ/10. The spatial resolution is improved significantly compared with the case without the microwave field.

5. Combining deterministic and stochastic velocity fields in the analysis of deep crustal seismic data
Science.gov (United States)
Larkin, Steven Paul
Standard crustal seismic modeling obtains deterministic velocity models which ignore the effects of wavelength-scale heterogeneity, known to exist within the Earth's crust. Stochastic velocity models are a means to include wavelength-scale heterogeneity in the modeling. These models are defined by statistical parameters obtained from geologic maps of exposed crystalline rock, and are thus tied to actual geologic structures. Combining both deterministic and stochastic velocity models into a single model allows a realistic full wavefield (2-D) to be computed. By comparing these simulations to recorded seismic data, the effects of wavelength-scale heterogeneity can be investigated. Combined deterministic and stochastic velocity models are created for two datasets, the 1992 RISC seismic experiment in southeastern California and the 1986 PASSCAL seismic experiment in northern Nevada. The RISC experiment was located in the transition zone between the Salton Trough and the southern Basin and Range province. A high-velocity body previously identified beneath the Salton Trough is constrained to pinch out beneath the Chocolate Mountains to the northeast. The lateral extent of this body is evidence for the ephemeral nature of rifting loci as a continent is initially rifted. Stochastic modeling of wavelength-scale structures above this body indicates that little more than 5% mafic intrusion into a more felsic continental crust is responsible for the observed reflectivity. Modeling of the wide-angle RISC data indicates that coda waves following PmP are initially dominated by diffusion of energy out of the near-surface basin as the wavefield reverberates within this low-velocity layer. At later times, this coda consists of scattered body waves and P-to-S conversions. Surface waves do not play a significant role in this coda. Modeling of the PASSCAL dataset indicates that a high-gradient crust-mantle transition zone or a rough Moho interface is necessary to reduce precritical Pm
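The deterministic-plus-stochastic construction described in entry 5 above can be sketched numerically: a smooth background gradient model with a band-limited Gaussian random perturbation of prescribed correlation length and RMS strength superposed on it. The grid spacing, background gradient, correlation length, and 5% RMS level below are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def stochastic_velocity_model(nz=256, nx=256, dx=50.0, v0=5500.0,
                              corr_len=500.0, rms_frac=0.05, seed=0):
    """Background velocity plus a Gaussian-correlated random perturbation.

    dx and corr_len are in metres; rms_frac is the target RMS fractional
    heterogeneity of the stochastic component.
    """
    rng = np.random.default_rng(seed)
    # White noise in space, shaped in the wavenumber domain.
    noise = rng.standard_normal((nz, nx))
    kz = np.fft.fftfreq(nz, d=dx) * 2 * np.pi
    kx = np.fft.fftfreq(nx, d=dx) * 2 * np.pi
    KZ, KX = np.meshgrid(kz, kx, indexing="ij")
    k2 = KZ**2 + KX**2
    # Gaussian correlation function <-> Gaussian power spectrum.
    spec = np.exp(-k2 * corr_len**2 / 4.0)
    pert = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(spec)))
    pert *= rms_frac * v0 / pert.std()   # scale to the target RMS level
    depth = np.arange(nz)[:, None] * dx
    v_det = v0 + 0.6 * depth             # simple deterministic gradient
    return v_det + pert                  # combined model

v = stochastic_velocity_model()
print(v.shape, round(float(v.mean()), 1), round(float(v.std()), 1))
```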
6. Distinguished hyperbolic trajectories in time-dependent fluid flows: analytical and computational approach for velocity fields defined as data sets
Directory of Open Access Journals (Sweden)
K. Ide
2002-01-01
Full Text Available In this paper we develop analytical and numerical methods for finding special hyperbolic trajectories that govern the geometry of Lagrangian structures in time-dependent vector fields. The vector fields (or velocity fields) may have arbitrary time dependence and be realized only as data sets over finite time intervals, where space and time are discretized. While the notion of a hyperbolic trajectory is central to dynamical systems theory, much of the theoretical development for Lagrangian transport proceeds under the assumption that such a special hyperbolic trajectory exists. This brings in new mathematical issues that must be addressed in order for Lagrangian transport theory to be applicable in practice, i.e. how to determine whether or not such a trajectory exists and, if it does exist, how to identify it in a sequence of instantaneous velocity fields. We address these issues by developing the notion of a distinguished hyperbolic trajectory (DHT). We develop an existence criterion for certain classes of DHTs in general time-dependent velocity fields, based on the time evolution of Eulerian structures that are observed in individual instantaneous fields over the entire time interval of the data set. We demonstrate the concept of DHTs in inhomogeneous (or "forced") time-dependent linear systems and develop a theory and analytical formula for computing DHTs. Throughout this work the notion of linearization is very important. This is not surprising since hyperbolicity is a "linearized" notion. To extend the analytical formula to more general nonlinear time-dependent velocity fields, we develop a series of coordinate transforms including a type of linearization that is not typically used in dynamical systems theory. We refer to it as Eulerian linearization, which is related to the frame independence of DHTs, as opposed to the Lagrangian linearization, which is typical in dynamical systems theory and is used in the computation of Lyapunov exponents. We

7. Shape, size, velocity and field-aligned currents of dayside plasma injections: a multi-altitude study
Directory of Open Access Journals (Sweden)
A. Marchaudon
2009-03-01
Full Text Available On 20 February 2005, Cluster in the outer magnetosphere and Double Star-2 (TC-2) at mid-altitude are situated in the vicinity of the northern cusp/mantle, with Cluster moving sunward and TC-2 anti-sunward. Their magnetic footprints come very close together at about 15:28 UT, over the common field-of-view of SuperDARN radars. Thanks to this conjunction, we determine the velocity, the transverse sizes, perpendicular and parallel to this velocity, and the shape of three magnetic flux tubes of magnetosheath plasma injection. The velocity of the structures determined from the Cluster four-spacecraft timing analysis is almost purely antisunward, in contrast with the antisunward and duskward convection velocity inside the flux tubes. The transverse sizes are defined from the Cluster–TC-2 separation perpendicular to the magnetic field, and from the time spent by a Cluster spacecraft in one structure; they are between 0.6 and 2 R_E, in agreement with previous studies. Finally, using a comparison between the eigenvectors deduced from a variance analysis of the magnetic perturbation at the four Cluster spacecraft and at TC-2, we show that the upstream side of the injection flux tubes is magnetically well defined, with even a concave front for the third one giving a bean-like shape, whereas the downstream side is far more turbulent. We also realise the first quantitative comparison between field-aligned currents at Cluster calculated with the curlometer technique and with the single-spacecraft method, assuming infinite parallel current sheets and taking into account the velocity of the injection flux tubes. The results agree nicely, confirming the validity of both methods. Finally, we compare the field-aligned current distribution of the three injection flux tubes at the altitudes of Cluster and TC-2. Both profiles are fairly similar, with mainly a pair of opposite field-aligned currents, upward at low latitude and downward at high latitude. In terms of
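The four-spacecraft timing analysis invoked in entry 7 reduces, under a planar-front assumption, to a small linear solve: relative spacecraft positions dotted with the slowness vector equal the relative crossing times. The positions and times below are invented for illustration only.

```python
import numpy as np

# Positions (km) and crossing times (s) of one boundary at four spacecraft.
# All values are made up for illustration.
r = np.array([[0, 0, 0],
              [600, 100, -50],
              [250, 550, 80],
              [-120, 300, 620]], dtype=float)
t = np.array([0.0, 1.9, 1.1, 0.6])

# Planar-structure assumption: (r_i - r_0) . m = t_i - t_0, with m = n / V.
A = r[1:] - r[0]
b = t[1:] - t[0]
m = np.linalg.solve(A, b)         # slowness vector (s/km)
V = 1.0 / np.linalg.norm(m)       # speed along the front normal (km/s)
n = m * V                         # unit normal of the front
print(f"speed = {V:.1f} km/s, normal = {n.round(3)}")
```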
8. Magnetic field pitch angle and perpendicular velocity measurements from multi-point time-delay estimation of poloidal correlation reflectometry
Science.gov (United States)
Prisiazhniuk, D.; Krämer-Flecken, A.; Conway, G. D.; Happel, T.; Lebschy, A.; Manz, P.; Nikolaeva, V.; Stroth, U.; the ASDEX Upgrade Team
2017-02-01
In fusion machines, turbulent eddies are expected to be aligned with the direction of the magnetic field lines and to propagate in the perpendicular direction. Time-delay measurements of density fluctuations can be used to calculate the magnetic field pitch angle α and perpendicular velocity v⊥ profiles. The method is applied to poloidal correlation reflectometry installed at ASDEX Upgrade and TEXTOR, which measure density fluctuations from poloidally and toroidally separated antennas. Validation of the method is achieved by comparing the perpendicular velocity (composed of the E×B drift and the phase velocity of turbulence, v⊥ = v_{E×B} + v_ph) with Doppler reflectometry measurements and with neoclassical v_{E×B} calculations. An important condition for the application of the method is the presence of turbulence with a sufficiently long decorrelation time. It is shown that at the shear layer the decorrelation time is reduced, limiting the application of the method. The magnetic field pitch angle measured by this method shows the expected dependence on the magnetic field, plasma current and radial position. The profile of the pitch angle reproduces the expected shape and values. However, comparison with the equilibrium reconstruction code CLISTE suggests an additional inclination of turbulent eddies at the pedestal position (2–3°). This additional angle decreases towards the core and at the edge.

9. Depression storage and infiltration effects on overland flow depth-velocity-friction at desert conditions: field plot results and model
Directory of Open Access Journals (Sweden)
M. J. Rossi
2012-09-01
Full Text Available Water infiltration and overland flow are relevant in considering water partition among plant life forms, the sustainability of vegetation and the design of sustainable hydrological models and management. In arid and semi-arid regions, these processes present characteristic trends imposed by the prevailing physical conditions of the upper soil as evolved under water-limited climate.
A set of plot-scale field experiments at the semi-arid Patagonian Monte (Argentina) was performed in order to estimate the effect of depression storage areas and infiltration rates on the depths, velocities and friction of overland flows. The micro-relief of undisturbed field plots was characterized at z-scale 1 mm through close-range stereo-photogrammetry and geo-statistical tools. The overland flow areas produced by controlled water inflows were video-recorded and the flow velocities were measured with image processing software. Antecedent and post-inflow moisture were measured, and texture, bulk density and physical properties of the upper soil were estimated based on soil core analyses. Field data were used to calibrate a physically based, mass-balanced, time-explicit model of infiltration and overland flows. Modelling results reproduced the time series of observed flow areas, velocities and infiltration depths. Estimates of hydrodynamic parameters of overland flow (Reynolds-Froude numbers) are reported. To our knowledge, the study presented here is novel in combining several aspects that previous studies do not address simultaneously: (1) overland flow and infiltration parameters were obtained in undisturbed field conditions; (2) field measurements of overland flow movement were coupled to a detailed analysis of soil microtopography at 1 mm depth scale; (3) the effect of depression storage areas on infiltration rates and depth-velocity-friction of overland flows is addressed. Relevance of the results to other similar desert areas is justified by the accompanying

10. Simultaneous topography and tomography of latent fingerprints using full-field swept-source optical coherence tomography
Science.gov (United States)
Dubey, Satish Kumar; Singh Mehta, Dalip; Anand, Arun; Shakher, Chandra
2008-01-01
We demonstrate simultaneous topography and tomography of latent fingerprints using full-field swept-source optical coherence tomography (OCT). The swept-source OCT system comprises a superluminescent diode (SLD) as broad-band light source, an acousto-optic tunable filter (AOTF) as frequency tuning device, and a compact, nearly common-path interferometer. Both the amplitude and the phase map of the interference fringe signal are reconstructed. Optical sectioning of the latent fingerprint sample is obtained by selective Fourier filtering and the topography is retrieved from the phase map. Interferometry, selective filtering, low coherence and hence better resolution are some of the advantages of the proposed system over conventional fingerprint detection techniques. The present technique is non-invasive in nature and does not require any physical or chemical processing. Therefore, the quality of the sample does not alter and hence the same fingerprint can be used for other types of forensic test. Exploitation of low-coherence interferometry for fingerprint detection itself provides an edge over other existing techniques as fingerprints can even be lifted from low-reflecting surfaces. The proposed system is very economical and compact.

11. Phase-Based Adaptive Estimation of Magnitude-Squared Coherence Between Turbofan Internal Sensors and Far-Field Microphone Signals
Science.gov (United States)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross-power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data were obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude-squared coherence data. The procedure also provides an estimate of the cross-spectrum phase offset.
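A stripped-down sketch of the cross-power-spectrum-phase delay estimation described in entry 11: fit a coherence-weighted line to the unwrapped cross-spectrum phase and read the delay off the slope. The synthetic signals, sample rate, and coherence threshold are assumptions for illustration; the actual procedure adapts the delay iteratively and folds the curve-fit standard error into its cost function.

```python
import numpy as np
from scipy import signal

fs = 10_000.0                        # sample rate (Hz)
delay = 3.2e-3                       # true delay to recover (s)
rng = np.random.default_rng(1)
n = 1 << 16
x = rng.standard_normal(n)
pad = int(round(delay * fs))
y = np.roll(x, pad) + 0.5 * rng.standard_normal(n)   # delayed (circularly) + noisy copy

f, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)       # cross-power spectrum
f, Cxy = signal.coherence(x, y, fs=fs, nperseg=1024)
phase = np.unwrap(np.angle(Pxy))

# Coherence-weighted least-squares line through the phase; delay from slope.
band = (Cxy > 0.5) & (f > 0)
slope, _ = np.polyfit(f[band], phase[band], 1, w=Cxy[band])
print(f"estimated delay = {-slope / (2 * np.pi) * 1e3:.2f} ms (true {delay * 1e3:.2f} ms)")
```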
12. A gain-field encoding of limb position and velocity in the internal model of arm dynamics.
Directory of Open Access Journals (Sweden)
Eun Jung Hwang
2003-11-01
Full Text Available Adaptability of reaching movements depends on a computation in the brain that transforms sensory cues, such as those that indicate the position and velocity of the arm, into motor commands. Theoretical consideration shows that the encoding properties of neural elements implementing this transformation dictate how errors should generalize from one limb position and velocity to another. To estimate how sensory cues are encoded by these neural elements, we designed experiments that quantified spatial generalization in environments where forces depended on both position and velocity of the limb. The patterns of error generalization suggest that the neural elements that compute the transformation encode limb position and velocity in intrinsic coordinates via a gain-field; i.e., the elements have directionally dependent tuning that is modulated monotonically with limb position. The gain-field encoding makes the counterintuitive prediction of hypergeneralization: there should be growing extrapolation beyond the trained workspace. Furthermore, nonmonotonic force patterns should be more difficult to learn than monotonic ones. We confirmed these predictions experimentally.
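A toy version of the gain-field encoding in entry 12: one hypothetical neural element whose cosine directional tuning in velocity is multiplied by a gain that grows monotonically with hand position. All constants are made up; the point is only that identical velocities yield different activities at different positions, which is the property driving the hypergeneralization prediction.

```python
import numpy as np

def gain_field_activity(vel, pos, pref_dir, gain_slope=0.5):
    """Directional (cosine) tuning in movement velocity multiplied by a
    monotonic gain in limb position -- one hypothetical neural element."""
    speed = np.linalg.norm(vel)
    direction = np.arctan2(vel[1], vel[0])
    tuning = speed * np.cos(direction - pref_dir)   # directional tuning
    gain = 1.0 + gain_slope * pos[0]                # monotonic in position
    return tuning * gain

# The same hand velocity produces different activity at different positions:
v = np.array([0.2, 0.1])                  # hand velocity (m/s)
for x in (-0.2, 0.0, 0.3):                # hand positions along one axis (m)
    a = gain_field_activity(v, np.array([x, 0.0]), pref_dir=np.pi / 4)
    print(f"pos x = {x:+.1f} m -> activity {a:+.4f}")
```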
13. Velocity flow field and water level measurements in shoaling and breaking water waves
CSIR Research Space (South Africa)
Mukaro, R
2010-01-01
In this paper we report on laboratory investigations of breaking water waves. Measurements of the water levels and instantaneous fluid velocities were conducted in water waves breaking on a sloping beach within a glass flume. Instantaneous water...

14. Temperature and velocity fields in natural convection by PIV and LIF
DEFF Research Database (Denmark)
Meyer, Knud Erik; Larsen, Poul Scheel; Westergaard, C. H.
2002-01-01
plate and cooled walls is 1.4×10^10. The flow is turbulent and is similar to some indoor room flows. Combined Particle Image Velocimetry (PIV) and Planar Light Induced Fluorescence (LIF) are used to measure local velocities and temperatures. Data measured in a symmetry plane parallel to a sidewall...

15. Development of two-dimensional velocity field measurement using particle tracking velocimetry on neutron radiography
International Nuclear Information System (INIS)
Saito, Y.; Mishima, K.; Suzuki, T.; Matsubayashi, M.
2003-01-01
The structures of liquid-metal two-phase flow are investigated for analyzing the core meltdown accident of a fast reactor. Experiments on high-density-ratio two-phase flow of molten lead-bismuth and nitrogen gas were conducted to understand these structures in detail. The liquid-phase velocity distributions of molten lead-bismuth were measured by neutron radiography using Au-Cd tracer particles. Liquid-phase velocity distributions are usually obtained by applying particle image velocimetry (PIV) to the neutron radiography images; with PIV, however, it is difficult to obtain the velocity vector distribution quantitatively. In particle tracking velocimetry (PTV), each neutron radiography image is divided into two images, one of the bubbles and one of the tracer particles, which distinguishes tracers inside bubbles from those in the liquid phase. The locations of tracer particles in the liquid phase can be determined by the particle mask correlation method, in which bubble images are separated from tracer images by the Σ-scaling method. Particle tracking velocimetry gives full detail of the liquid-phase velocity vector distributions in two-phase flow, in comparison with the PIV method. (M. Suetake)

16. Velocity measurements in the near field of a diesel fuel injector by ultrafast imagery
Science.gov (United States)
Sedarsky, David; Idlahcen, Saïd; Rozé, Claude; Blaisot, Jean-Bernard
2013-02-01
This paper examines the velocity profile of fuel issuing from a high-pressure single-orifice diesel injector. Velocities of liquid structures were determined from time-resolved ultrafast shadow images, formed by an amplified two-pulse laser source coupled to a double-frame camera. A statistical analysis of the data over many injection events was undertaken to map velocities related to spray formation near the nozzle outlet as a function of time after the start of injection. These results reveal a strong asymmetry in the liquid profile of the test injector, with distinct fast and slow regions on opposite sides of the orifice. Differences of ~100 m/s can be observed between the 'fast' and 'slow' sides of the jet, resulting in different atomization conditions across the spray. On average, droplets are dispersed at a greater distance from the nozzle on the 'fast' side of the flow, and distinct macrostructure can be observed under the asymmetric velocity conditions. The changes in structural velocity and atomization behavior resemble flow structures which are often observed in the presence of string cavitation produced under controlled conditions in scaled, transparent test nozzles. These observations suggest that widely used common-rail supply configurations and modern injectors can potentially generate asymmetric interior flows which strongly influence diesel spray morphology. The velocimetry measurements presented in this work represent an effective and relatively straightforward approach to identify deviant flow behavior in real diesel sprays, providing new spatially resolved information on fluid structure and flow characteristics within the shear layers on the jet periphery.

17. Aharonov-Casher phase shift and the change in velocity of a moving magnet traversing an electric field
International Nuclear Information System (INIS)
March, N.H.
2006-08-01
Motivated by the theoretical work of Boyer [J. Phys. A: Math. Gen. 39 (2006) 3455] plus the quite recent interferometric experiment of Shinohara, Aoki and Morinaga
[Phys. Rev. A 66 (2002) 042106] in which the scalar Aharonov-Bohm effect was studied, we re-open the extension to neutral particles carrying a magnetic moment and passing through a region of intense electric field, treated theoretically by Aharonov and Casher (AC) and independently by Anandan. An alternative interpretation of results on (a) neutrons and (b) TlF molecules to that afforded by AC is shown to involve only (i) the de Broglie wavelength of matter waves and (ii) the prediction from Maxwell's equations for the change in velocity of a neutral moving magnet as it enters or leaves an electric field. The exquisite sensitivity of experiment (b) allows a fractional change in velocity of order 10⁻¹⁵ to be quantitatively determined. (author)

18. Decompositions of bubbly flow PIV velocity fields using discrete wavelets multi-resolution and multi-section image method
International Nuclear Information System (INIS)
Choi, Je-Eun; Takei, Masahiro; Doh, Deog-Hee; Jo, Hyo-Jae; Hassan, Yassin A.; Ortiz-Villafuerte, Javier
2008-01-01
Currently, wavelet transforms are widely used for the analyses of particle image velocimetry (PIV) velocity vector fields. This is because the wavelet provides not only spatial information of the velocity vectors, but also information in the time and frequency domains. In this study, a discrete wavelet transform is applied to real PIV images of bubbly flows. The vector fields obtained by a self-made cross-correlation PIV algorithm were used for the discrete wavelet transform. The performance of the discrete wavelet transforms was investigated by changing the level of power of discretization. The images decomposed by wavelet multi-resolution showed conspicuous characteristics of the bubbly flows for the different levels. A high spatial bubble-concentrated area could be evaluated by the constructed discrete wavelet transform algorithm, in which high-leveled wavelets play dominant roles in revealing the flow characteristics.

19. Potential, velocity, and density fields from redshift-distance samples: Application - Cosmography within 6000 kilometers per second
International Nuclear Information System (INIS)
Bertschinger, E.; Dekel, A.; Faber, S.M.; Dressler, A.; Burstein, D.
1990-01-01
A potential flow reconstruction algorithm has been applied to the real universe to reconstruct the three-dimensional potential, velocity, and mass density fields smoothed on large scales. The results are shown as maps of these fields, revealing the three-dimensional structure within 6000 km/s distance from the Local Group. The dominant structure is an extended deep potential well in the Hydra-Centaurus region, stretching across the Galactic plane toward Pavo, broadly confirming the Great Attractor (GA) model of Lynden-Bell et al. (1988). The Local Supercluster appears to be an extended ridge on the near flank of the GA, proceeding through the Virgo Southern Extension to the Virgo and Ursa Major clusters. The Virgo cluster and the Local Group are both falling toward the bottom of the GA potential well with peculiar velocities of 658 ± 121 km/s and 565 ± 125 km/s, respectively. 65 refs
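The potential-flow assumption behind entry 19 (and its duplicate record below) can be sketched in Fourier space, where linear gravitational-instability theory gives v(k) = i·fH·δ(k)·k/k², i.e. the velocity is the gradient of a potential sourced by the density contrast. The grid, box size, fH constant, and random density field below are placeholders, not survey data.

```python
import numpy as np

def potential_flow_velocity(delta, boxsize=12000.0, fH=50.0):
    """Linear-theory peculiar velocity v(k) = i*fH*delta(k)*k/k^2 for a
    periodic density-contrast field delta on a cubic grid. Units follow
    whatever is chosen for boxsize and fH (both illustrative here)."""
    n = delta.shape[0]
    k1 = np.fft.fftfreq(n, d=boxsize / n) * 2 * np.pi
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                     # avoid 0/0; k=0 carries no flow
    dk = np.fft.fftn(delta)
    v = []
    for ki in (kx, ky, kz):
        vk = 1j * fH * dk * ki / k2
        vk[0, 0, 0] = 0.0
        v.append(np.real(np.fft.ifftn(vk)))
    return np.stack(v)                    # shape (3, n, n, n)

rng = np.random.default_rng(2)
delta = rng.standard_normal((32, 32, 32)) * 0.1
v = potential_flow_velocity(delta)
print(v.shape, float(np.abs(v).max()))
```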
20. Potential, velocity, and density fields from redshift-distance samples: Application - Cosmography within 6000 kilometers per second
Science.gov (United States)
Bertschinger, Edmund; Dekel, Avishai; Faber, Sandra M.; Dressler, Alan; Burstein, David
1990-12-01
A potential flow reconstruction algorithm has been applied to the real universe to reconstruct the three-dimensional potential, velocity, and mass density fields smoothed on large scales. The results are shown as maps of these fields, revealing the three-dimensional structure within 6000 km/s distance from the Local Group. The dominant structure is an extended deep potential well in the Hydra-Centaurus region, stretching across the Galactic plane toward Pavo, broadly confirming the Great Attractor (GA) model of Lynden-Bell et al. (1988). The Local Supercluster appears to be an extended ridge on the near flank of the GA, proceeding through the Virgo Southern Extension to the Virgo and Ursa Major clusters. The Virgo cluster and the Local Group are both falling toward the bottom of the GA potential well with peculiar velocities of 658 ± 121 km/s and 565 ± 125 km/s, respectively.

1. Noise suppression in an atomic system under the action of a field in a squeezed coherent state
International Nuclear Information System (INIS)
Gelman, A. I.; Mironov, V. A.
2010-01-01
The interaction of a quantized electromagnetic field in a squeezed coherent state with a three-level Λ-atom is studied numerically by the quantum Monte Carlo method and analytically by the Heisenberg-Langevin method in the regime of electromagnetically induced transparency (EIT). The possibility of noise suppression in the atomic system through the quantum properties of squeezed light is considered in detail; the characteristics of the atomic system responsible for the relaxation processes and noise in the EIT band have been found. Further applications of the Monte Carlo method and the developed numerical code to the study of more complex systems are discussed.

2. Ultrafast transmission electron microscopy using a laser-driven field emitter: Femtosecond resolution with a high coherence electron beam
Energy Technology Data Exchange (ETDEWEB)
Feist, Armin; Bach, Nora; Rubiano da Silva, Nara; Danz, Thomas; Möller, Marcel; Priebe, Katharina E.; Domröse, Till; Gatzmann, J. Gregor; Rost, Stefan; Schauss, Jakob; Strauch, Stefanie; Bormann, Reiner; Sivis, Murat; Schäfer, Sascha, E-mail: [email protected]; Ropers, Claus, E-mail: [email protected]
2017-05-15
We present the development of the first ultrafast transmission electron microscope (UTEM) driven by localized photoemission from a field emitter cathode. We describe the implementation of the instrument, the photoemitter concept and the quantitative electron beam parameters achieved. Establishing a new source for ultrafast TEM, the Göttingen UTEM employs nano-localized linear photoemission from a Schottky emitter, which enables operation with freely tunable temporal structure, from continuous wave to femtosecond pulsed mode. Using this emission mechanism, we achieve record pulse properties in ultrafast electron microscopy of 9 Å focused beam diameter, 200 fs pulse duration and 0.6 eV energy width. We illustrate the possibility to conduct ultrafast imaging, diffraction, holography and spectroscopy with this instrument and also discuss opportunities to harness quantum coherent interactions between intense laser fields and free-electron beams.
Highlights:
• First implementation of an ultrafast TEM employing a nanoscale photocathode.
• Localized single-photon photoemission from a nanoscopic field emitter yields low-emittance ultrashort electron pulses.
• Electron pulses focused down to ~9 Å, with a duration of 200 fs and an energy width of 0.6 eV, are demonstrated.
• Quantitative characterization of ultrafast electron gun emittance and brightness.
• A range of applications of high-coherence ultrashort electron pulses is shown.

3. Experimental results showing the internal three-component velocity field and outlet temperature contours for a model gas turbine combustor
CSIR Research Space (South Africa)
Meyers, BC
2011-09-01
Copyright by the American Institute of Aeronautics and Astronautics Inc. All rights reserved. ISABE-2011-1129. Nomenclature (partial): c - position identifier; F - fuel; i - index; L - (combustor) liner; OP - orifice plate. Introduction: There are often inconsistencies when comparing experimental and Computational Fluid Dynamics (CFD) simulations for gas turbine combustors [1...

4. Statistically optimized near field acoustic holography using an array of pressure-velocity probes
DEFF Research Database (Denmark)
Jacobsen, Finn; Jaud, Virginie
2007-01-01
of a measurement aperture that extends well beyond the source can be relaxed. Both NAH and SONAH are based on the assumption that all sources are on one side of the measurement plane whereas the other side is source free. An extension of the SONAH procedure based on measurement with a double layer array of pressure microphones has been suggested. The double layer technique makes it possible to distinguish between sources on the two sides of the array and thus suppress the influence of extraneous noise coming from the "wrong" side. It has also recently been demonstrated that there are significant advantages in NAH based on an array of acoustic particle velocity transducers (in a single layer) compared with NAH based on an array of pressure microphones. This investigation combines the two ideas and examines SONAH based on an array of pressure-velocity intensity probes through computer simulations as well...

5. Velocity map imaging of attosecond and femtosecond dynamics in atoms and small molecules in strong laser fields
International Nuclear Information System (INIS)
Kling, M.F.; Ni, Yongfeng; Lepine, F.; Khan, J.I.; Vrakking, M.J.J.; Johnsson, P.; Remetter, T.; Varju, K.; Gustafsson, E.; L'Huillier, A.; Lopez-Martens, R.; Boutu, W.
2005-01-01
Full text: In the past decade, the dynamics of atomic and small molecular systems in strong laser fields has received enormous attention, but was mainly studied with femtosecond laser fields. We report on first applications of attosecond extreme ultraviolet (XUV) pulse trains (APTs) from high-order harmonic generation (HHG) for the study of atomic and molecular electron and ion dynamics in strong laser fields utilizing the velocity map imaging technique. The APTs were generated in argon from harmonics 13 to 35 of a 35 fs Ti:sapphire laser, and spatially and temporally overlapped with an intense IR laser field (up to 5×10¹³ W/cm²) in the interaction region of a Velocity Map Imaging (VMI) machine.
In the VMI setup, electrons and ions that were created at the crossing point of the laser fields and an atomic or molecular beam were accelerated in a dc electric field towards a two-dimensional position-sensitive detector, allowing reconstruction of the full initial three-dimensional velocity distribution. The poster will focus on results that were obtained for argon atoms. We recorded the velocity distribution of electron wave packets that were strongly driven in the IR laser field after their generation in Ar via single-photon ionization by attosecond XUV pulses. The 3D evolution of the electron wave packets was observed on an attosecond timescale. In addition to earlier experiments with APTs using magnetic-bottle electron time-of-flight spectrometers and with single attosecond pulses, the angular dependence of the electron kinetic energies can give further insight into the details of the dynamics. Initial results that were obtained for molecular systems like H₂, D₂, N₂, and CO₂ using the same powerful approach will be highlighted as well. We will show that detailed insight into the dynamics of these systems in strong laser fields can be obtained (e.g. on the alignment, above-threshold ionization, direct vs. sequential two-photon ionization, dissociation, and

6. Lr-Lp Stability of the Incompressible Flows with Nonzero Far-Field Velocity
Directory of Open Access Journals (Sweden)
Jaiok Roh
2011-01-01
Full Text Available We consider the stability of stationary solutions w for the exterior Navier-Stokes flows with a nonzero constant velocity u∞ at infinity. For u∞ = 0 with nonzero stationary solution w, Chen (1993), Kozono and Ogawa (1994), and Borchers and Miyakawa (1995) have studied the temporal stability in Lp spaces for 1 < p < ∞. Here we consider the case u∞ ≠ 0 and obtain Lr-Lp stability as Kozono and Ogawa and Borchers and Miyakawa obtained for u∞ = 0.

7. Studying the instantaneous velocity field in gas-sheared liquid films in a horizontal duct
Science.gov (United States)
Vasques, Joao; Tokarev, Mikhail; Cherdantsev, Andrey; Hann, David; Hewakandamby, Buddhika; Azzopardi, Barry
2016-11-01
In annular flow, experimental validation of the basic assumptions on the liquid velocity profile is vital for developing theoretical models of the flow. However, the study of the local velocity of liquid in gas-sheared films has proven to be a challenging task due to the highly curved and disturbed moving interface of the phases, the small scale of the area of interrogation, high velocity gradients and the irregular character of the flow. This study reports on different optical configurations and interface-tracking methods employed in a horizontal duct in order to obtain high-resolution particle image velocimetry (PIV) data in such types of complex flows. The experimental envelope includes successful measurements in 2D and 3D wave regimes, up to the disturbance wave regime. Preliminary data show the presence of complex structures in the liquid phase, which include re-circulation areas below the liquid interface due to the gas-shearing action, together with non-uniform transverse movements of the liquid phase close to the wall due to the presence of 3D waves at the interface. With the aid of the moving interface-tracking, PIV, time-resolved particle-tracking velocimetry and vorticity measurements were performed.
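The core operation behind the PIV measurements in entry 7 (and in several other records here) is a windowed cross-correlation: the displacement of the tracer pattern between two frames is read off the location of the correlation peak. A minimal FFT-based sketch with a known synthetic shift:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement of window b relative to window a, from the
    peak of their FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return np.array(peak) - center          # (dy, dx) in pixels

rng = np.random.default_rng(3)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))   # known shift to recover
print(piv_displacement(img, shifted))           # -> [ 3 -5]
```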
8. Application of photogrammetry to transforming PIV-acquired velocity fields to a moving-body coordinate system
Science.gov (United States)
Nikoueeyan, Pourya; Naughton, Jonathan
2016-11-01
Particle image velocimetry is a common choice for qualitative and quantitative characterization of unsteady flows associated with moving bodies (e.g. pitching and plunging airfoils). Characterizing the separated flow behavior is of great importance in understanding the flow physics and developing predictive reduced-order models. In most studies, the model under investigation moves within a fixed camera field-of-view, and vector fields are calculated based on this fixed coordinate system. To better characterize the genesis and evolution of vortical structures in these unsteady flows, the velocity fields need to be transformed into the moving-body frame of reference. Data converted to this coordinate system allow for a more detailed analysis of the flow field using advanced statistical tools. In this work, a pitching NACA0015 airfoil has been used to demonstrate the capability of photogrammetry for such an analysis. Photogrammetry has been used first to locate the airfoil within the image and then to determine an appropriate mask for processing the PIV data. The photogrammetry results are then further used to determine the rotation matrix that transforms the velocity fields to airfoil coordinates. Examples of the important capabilities such a process enables are discussed. P. Nikoueeyan is supported by a fellowship from the University of Wyoming's Engineering Initiative.

9. Influence of initial velocity on trajectories of a charged particle in uniform crossed electric and magnetic fields
International Nuclear Information System (INIS)
Khotimah, Siti Nurul; Viridi, Sparisoma; Widayani
2017-01-01
Magnetic and electric fields can cause a charged particle to form interesting trajectories. In general, each trajectory is discussed separately in university physics textbooks for undergraduate students. In this work, a solution for a charged particle moving in a uniform electric field at right angles to a uniform magnetic field (uniform crossed electric and magnetic fields) is reported; it is limited to particle motion in a plane. Specific solutions and their trajectories are obtained only by varying the initial particle velocity. The result shows five basic trajectory patterns, i.e., straight line, sinusoid-like, cycloid, cycloid-like with oscillation, and circle-like. The region of each trajectory is also mapped in the initial velocity space of the particle. This paper is intended for undergraduate students and describes further the trajectories of a charged particle through the regions of electric and magnetic fields influenced by the initial condition of the particle, where electromagnetic radiation of an accelerated particle is not considered. (paper)
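The trajectory families catalogued in entry 9 follow from integrating dv/dt = (q/m)(E + v×B) for different initial velocities. A rough explicit-Euler sketch, adequate at this step size for illustration (all units arbitrary), showing three of the five patterns:

```python
import numpy as np

q_m = 1.0                          # charge-to-mass ratio (arbitrary units)
E = np.array([0.0, 1.0, 0.0])      # uniform E along y
B = np.array([0.0, 0.0, 1.0])      # uniform B along z
v_drift = np.cross(E, B) / B.dot(B)   # E x B drift velocity = (1, 0, 0)

def trajectory(v0, dt=1e-3, steps=20000):
    """Explicit integration of dv/dt = (q/m)(E + v x B) in the x-y plane."""
    r = np.zeros((steps, 3))
    v = np.array(v0, dtype=float)
    for i in range(1, steps):
        v = v + q_m * (E + np.cross(v, B)) * dt
        r[i] = r[i - 1] + v * dt
    return r

# Initial velocities chosen to produce three of the basic patterns:
cases = {"straight line": v_drift,              # moving exactly with the drift
         "cycloid": [0.0, 0.0, 0.0],            # starting from rest
         "cycloid-like": [2.0, 0.0, 0.0]}       # faster than the drift (trochoid)
for name, v0 in cases.items():
    r = trajectory(v0)
    print(f"{name:>13}: net x-advance {r[-1, 0]:7.2f}, "
          f"max |y| {np.abs(r[:, 1]).max():5.2f}")
```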
10. The effect of non-uniform temperature and velocity fields on long range ultrasonic measurement systems in MYRRHA
Energy Technology Data Exchange (ETDEWEB)
Van de Wyer, Nicolas; Schram, Christophe [von Karman Institute for Fluid Dynamics (Belgium); Van Dyck, Dries; Dierckx, Marc [Belgian Nuclear Research Center (Belgium)]
2015-07-01
SCK.CEN, the Belgian Nuclear Research Center, is developing MYRRHA, a generation IV liquid-metal-cooled nuclear research reactor. As the liquid metal coolant is opaque to light, normal visual feedback during fuel manipulations is not available and must therefore be replaced by a system that is not hindered by the opacity of the coolant. In this respect, ultrasonic-based instrumentation is under development at SCK.CEN to provide feedback during operations under liquid metal. One of the tasks that will be tackled using ultrasound is the detection and localization of a potentially lost fuel assembly. In this application, the distance between ultrasonic sensor and target may be as large as 2.5 m. At these distances, non-uniform velocity and temperature fields in the liquid metal potentially influence the propagation of the ultrasonic signals, affecting the performance of the ultrasonic systems. In this paper, we investigate how relevant temperature and velocity gradients inside the liquid metal influence the propagation of ultrasonic waves. The effects of temperature and velocity gradients are simulated by means of a newly developed numerical ray-tracing model. The performance of the model is validated by dedicated water experiments. The setup is capable of creating velocity and temperature gradients representative of MYRRHA conditions. Once validated in water, the same model is used to make predictions for the effect of gradients in the MYRRHA liquid metal environment. (authors)
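Entry 10's ray-tracing idea can be sketched in two dimensions: with a depth-dependent sound speed c(z), the Snell invariant p = sin(θ)/c is conserved along the ray, so a thermal gradient bends the beam and shifts the apparent target position. The sound-speed profile and launch angles below are hypothetical, not MYRRHA parameters.

```python
import numpy as np

def trace_ray(theta0_deg, c_of_z, z_max=2.5, dz=0.005):
    """2-D ray trace through a depth-dependent sound speed c(z), using the
    Snell invariant p = sin(theta)/c (theta measured from the z axis)."""
    p = np.sin(np.radians(theta0_deg)) / c_of_z(0.0)
    x, z = 0.0, 0.0
    while z < z_max:
        s = p * c_of_z(z)
        if s >= 1.0:               # ray turns back (total refraction)
            break
        x += np.tan(np.arcsin(s)) * dz
        z += dz
    return z, x

# Hypothetical liquid-metal sound speed with a thermal gradient:
c = lambda z: 1750.0 - 40.0 * z    # m/s, z in metres
for ang in (5.0, 10.0, 20.0):
    z_end, x_end = trace_ray(ang, c)
    straight = z_end * np.tan(np.radians(ang))
    print(f"launch {ang:4.1f} deg: lateral offset {x_end:.3f} m "
          f"(straight-line {straight:.3f} m)")
```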
11. Influence of anisotropy on anomalous scaling of a passive scalar advected by the Navier-Stokes velocity field.
Science.gov (United States)
Jurcisinová, E; Jurcisin, M; Remecký, R
2009-10-01
The influence of weak uniaxial small-scale anisotropy on the stability of the scaling regime and on the anomalous scaling of the single-time structure functions of a passive scalar advected by the velocity field governed by the stochastic Navier-Stokes equation is investigated by the field-theoretic renormalization group and operator-product expansion within one-loop approximation of a perturbation theory. The explicit analytical expressions for coordinates of the corresponding fixed point of the renormalization-group equations as functions of anisotropy parameters are found, the stability of the three-dimensional Kolmogorov-like scaling regime is demonstrated, and the dependence of the borderline dimension d_c ∈ (2,3] between stable and unstable scaling regimes is found as a function of the anisotropy parameters. The dependence of the turbulent Prandtl number on the anisotropy parameters is also briefly discussed. The influence of weak small-scale anisotropy on the anomalous scaling of the structure functions of a passive scalar field is studied by the operator-product expansion and their explicit dependence on the anisotropy parameters is presented. It is shown that the anomalous dimensions of the structure functions, which are the same (universal) for the Kraichnan model, for the model with finite time correlations of the velocity field, and for the model with advection by the velocity field driven by the stochastic Navier-Stokes equation in the isotropic case, can be distinguished by the assumption of the presence of small-scale anisotropy in the systems even within one-loop approximation. The corresponding comparison of the anisotropic anomalous dimensions for the present model with those obtained within the Kraichnan rapid-change model is done.

12. The parallel-sequential field subtraction technique for coherent nonlinear ultrasonic imaging
Science.gov (United States)
Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.
2018-06-01
Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage than was previously possible and have sensitivity to partially closed defects. This study explores a coherent imaging technique based on the subtraction of two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In the parallel focusing a high intensity ultrasonic beam is formed in the specimen at the focal point. However, in sequential focusing only low intensity signals from individual elements enter the sample and the full matrix of transmit-receive signals is recorded and post-processed to form an image. Under linear elastic assumptions, both parallel and sequential images are expected to be identical. Here we measure the difference between these images and use this to characterise the nonlinearity of small closed fatigue cracks. In particular we monitor the change in relative phase and amplitude at the fundamental frequencies for each focal point and use this nonlinear coherent imaging metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g. back wall or large scatterers) effectively when instrumentation noise compensation is applied, thereby allowing damage to be detected at an early stage (c. 15% of fatigue life) and reliably quantified in later fatigue life.
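A schematic of the parallel-sequential subtraction in entry 12, reduced to a single focal point: the sequential image value is a delay-and-sum over the recorded full matrix, the parallel value is a delayed sum over the parallel-transmission receptions, and the nonlinearity metric is their difference. The array geometry, sampling, and random placeholder data are assumptions (a real test would use measured signals, and the common transmit time offset of the parallel firing is ignored here):

```python
import numpy as np

def tof(xe, focal, c):
    """One-way time of flight from an element at (xe, 0) to a focal point (x, z)."""
    return np.hypot(focal[0] - xe, focal[1]) / c

def das_pixel(fmc, elems, focal, c, fs):
    """Sequential focus at one pixel: delay-and-sum over all transmit-receive
    pairs of the full matrix capture fmc[tx, rx, t]."""
    val = 0.0
    for i in range(len(elems)):
        for j in range(len(elems)):
            k = int(round((tof(elems[i], focal, c) + tof(elems[j], focal, c)) * fs))
            if k < fmc.shape[2]:
                val += fmc[i, j, k]
    return val

def parallel_pixel(par, elems, focal, c, fs):
    """Parallel focus at the same pixel from data par[rx, t] recorded with all
    elements fired together under the focal delay law (transmit offset ignored)."""
    val = 0.0
    for j in range(len(elems)):
        k = int(round(tof(elems[j], focal, c) * fs))
        if k < par.shape[1]:
            val += par[j, k]
    return val

# Placeholder data only -- stands in for measured FMC / parallel data sets.
rng = np.random.default_rng(4)
elems = np.linspace(-0.01, 0.01, 16)           # 16-element array (m)
fmc = rng.standard_normal((16, 16, 4000))
par = rng.standard_normal((16, 4000))
focal, c, fs = (0.0, 0.03), 5900.0, 50e6
delta = parallel_pixel(par, elems, focal, c, fs) - das_pixel(fmc, elems, focal, c, fs)
print("nonlinearity metric at the focal point:", delta)
```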
13. Scanning laser polarimetry, but not optical coherence tomography predicts permanent visual field loss in acute nonarteritic anterior ischemic optic neuropathy.
Science.gov (United States)
Kupersmith, Mark J; Anderson, Susan; Durbin, Mary; Kardon, Randy
2013-08-15
Scanning laser polarimetry (SLP) reveals abnormal retardance of birefringence in locations of the edematous peripapillary retinal nerve fiber layer (RNFL), which appear thickened by optical coherence tomography (OCT), in nonarteritic anterior ischemic optic neuropathy (NAION). We hypothesize that initial sector SLP RNFL abnormalities will correlate with long-term regional visual field loss due to ischemic injury. We prospectively performed automated perimetry, SLP, and high-definition OCT (HD-OCT) of the RNFL in 25 eyes with acute NAION. We grouped visual field threshold and RNFL values into Garway-Heath inferior/superior disc sectors and corresponding superior/inferior field regions. We compared sector SLP RNFL thickness with corresponding visual field values at presentation and at >3 months. At presentation, 12 eyes had superior sector SLP reduction, 11 of which had inferior field loss. Six eyes, all with superior field loss, had inferior sector SLP reduction. No eyes had reduced OCT-derived RNFL acutely. Eyes with abnormal field regions had corresponding SLP sectors thinner (P = 0.003) than for sectors with normal field regions. During the acute phase, the SLP-derived sector correlated with presentation (r = 0.59, P = 0.02) and with >3-month after presentation (r = 0.44, P = 0.02) corresponding superior and inferior field thresholds. Abnormal RNFL birefringence occurs in sectors corresponding to regional visual field loss during acute NAION when OCT-derived RNFL shows thickening. Since the visual field deficits show no significant recovery, SLP can be an early marker for axonal injury, which may be used to assess recovery potential at RNFL locations with respect to new treatments for acute NAION.

14. Study on velocity field in a wire wrapped fuel pin bundle of sodium cooled reactor. Detailed velocity distribution in a subchannel
International Nuclear Information System (INIS)
Sato, Hiroyuki; Kobayashi, Jun; Miyakoshi, Hiroyuki; Kamide, Hideki
2009-01-01
A sodium cooled fast reactor is designed to attain a high burn-up core in a feasibility study on commercialized fast reactor cycle systems. In high burn-up fuel subassemblies, deformation of fuel pins due to swelling and thermal bowing may decrease the local flow velocity via a change of flow area in the subassembly and influence the heat removal capability. Therefore, it is of importance to obtain the flow velocity distribution in a wire-wrapped pin bundle. A 2.5-times-enlarged 7-pin bundle water model was applied to investigate the detailed velocity distribution in an inner subchannel surrounded by 3 pins with wrapping wire. The test section consisted of a hexagonal acrylic duct tube and fluorinated resin pins which had nearly the same refractive index as that of water and a high light transmission rate. The velocity distribution in an inner subchannel with the wrapping wire was measured by PIV (Particle Image Velocimetry) through the front and lateral sides of the duct tube. In the vertical velocity distribution in the narrow space between the pins, the wrapping wire decreased the velocity downstream of the wire and an asymmetric flow distribution was formed between the pin and wire. In the horizontal velocity distribution, swirl flow around the wrapping wire was clearly observed. The measured velocity data are useful for code validation of pin bundle thermal hydraulics. (author)

15. Scan-Less Line Field Optical Coherence Tomography, with Automatic Image Segmentation, as a Measurement Tool for Automotive Coatings
Directory of Open Access Journals (Sweden)
Samuel Lawman
2017-04-01
Full Text Available The measurement of the thicknesses of layers is important for the quality assurance of industrial coating systems. Current measurement techniques only provide a limited amount of information. Here, we show that spectral-domain Line Field (LF) Optical Coherence Tomography (OCT) is able to return to the user a cross-sectional B-scan image in a single shot with no mechanical moving parts. To reliably extract layer thicknesses from such images of automotive paint systems, we present an automatic graph-search image segmentation algorithm. To show that the algorithm works independently of the OCT device, the measurements are repeated with a separate time-domain Full Field (FF) OCT system. This gives matching mean thickness values within the standard deviations of the measured thicknesses across each B-scan image. The combination of an LF-OCT with graph-search segmentation is potentially a powerful technique for the quality assurance of non-opaque industrial coating layers.

16. High-field strong-focusing undulator designs for X-ray Linac Coherent Light Source (LCLS) applications
International Nuclear Information System (INIS)
Caspi, S.; Schlueter, R.; Tatchyn, R.
1995-01-01
Linac-driven X-ray Free Electron Lasers (e.g., Linac Coherent Light Sources (LCLSs)), operating on the principle of single-pass saturation in the Self-Amplified Spontaneous Emission (SASE) regime, typically require multi-GeV beam energies and undulator lengths in excess of tens of meters to attain sufficient gain in the 1 Å–0.1 Å range.
In this parameter regime, the undulator structure must provide: (1) field amplitudes B₀ in excess of 1 T within periods of 4 cm or less, (2) peak on-axis focusing gradients on the order of 30 T/m, and (3) field quality in the 0.1%–0.3% range. In this paper the authors report on designs under consideration for a 4.5–1.5 Å LCLS based on superconducting (SC), hybrid/PM, and pulsed-Cu technologies.

17. Experimental Verification of Isotropic Radiation from a Coherent Dipole Source via Electric-Field-Driven LC Resonator Metamaterials
Science.gov (United States)
Tichit, Paul-Henri; Burokur, Shah Nawaz; Qiu, Cheng-Wei; de Lustrac, André
2013-09-01
It has long been conjectured that isotropic radiation by a simple coherent source is impossible due to changes in polarization. Though hypothetical, the isotropic source is usually taken as the reference for determining a radiator's gain and directivity. Here, we demonstrate both theoretically and experimentally that an isotropic radiator can be made of a simple and finite source surrounded by electric-field-driven LC resonator metamaterials designed by space manipulation. As a proof-of-concept demonstration, we show the first isotropic source with omnidirectional radiation from a dipole source (applicable to all distributed sources), which can open up several possibilities in axion electrodynamics, optical illusion, novel transformation-optic devices, wireless communication, and antenna engineering. Owing to the electric-field-driven LC resonator realization scheme, this principle can be readily applied to higher frequency regimes where magnetism is usually not present.

18. Determining Sea-Level Rise and Coastal Subsidence in the Canadian Arctic Using a Dense GPS Velocity Field for North America
Science.gov (United States)
Craymer, M.; Forbes, D.; Henton, J.; Lapelle, E.; Piraszewski, M.; Solomon, S.
2005-12-01
With observed climate warming in the western Canadian Arctic and potential increases in regional sea level, we anticipate expansion of the coastal region subject to rising relative sea level and increased flooding risk. This is a concern for coastal communities such as Tuktoyaktuk and Sachs Harbour and for the design and safety of hydrocarbon production facilities on the Mackenzie Delta. To provide a framework in which to monitor these changes, a consistent velocity field has been determined from GPS observations throughout North America, including the Canadian Arctic Archipelago and the Mackenzie Delta region. An expanded network of continuous GPS sites and multi-epoch (episodic) sites has enabled an increased density that enhances the application to geophysical studies including the discrimination of crustal motion, other components of coastal subsidence, and sea-level rise. To obtain a dense velocity field consistent at all scales, we have combined weekly solutions of continuous GPS sites from different agencies in Canada and the USA, together with the global reference frame under the North American Reference Frame initiative. Although there is already a high density of continuous GPS sites in the conterminous United States, there are many fewer such sites in Canada. To make up for this lack of density, we have incorporated high-accuracy episodic GPS observations on stable monuments distributed throughout Canada.
By combining up to ten years of repeated, episodic GPS observations at such sites, together with weekly solutions from the continuous sites, we have obtained a highly consistent velocity field with a significantly increased spatial sampling of crustal deformation throughout Canada. This exhibits a spatially coherent pattern of uplift and subsidence in Canada that is consistent with the expected rates of glacial isostatic adjustment. To determine the contribution of vertical motion to sea-level rise under climate warming in the Canadian Arctic, we have

19. PATTERN SPEEDS OF BARS AND SPIRAL ARMS FROM Hα VELOCITY FIELDS
International Nuclear Information System (INIS)
Fathi, K.; Pinol-Ferrer, N.; Beckman, J. E.; MartInez-Valpuesta, I.; Hernandez, O.; Carignan, C.
2009-01-01
We have applied the Tremaine-Weinberg method to 10 late-type barred spiral galaxies using data cubes, in Hα emission, from the FaNTOmM and GHAFAS Fabry-Perot spectrometers. We have combined the derived bar (and/or spiral) pattern speeds with angular frequency plots to measure the corotation radii for the bars in these galaxies. We base our results on a combination of this method with a morphological analysis designed to estimate the corotation radius to bar-length ratio using two independent techniques on archival near-infrared images, and although we are aware of the limitation of the application of the Tremaine-Weinberg method using Hα observations, we find consistently excellent agreement between bar and spiral arm parameters derived using different methods. In general, the corotation radius, measured using the Tremaine-Weinberg method, is closely related to the bar length, measured independently from photometry and consistent with previous studies. Our corotation/bar-length ratios and pattern speed values are in good agreement with general results from numerical simulations of bars. In systems with identified secondary bars, we measure higher Hα velocity dispersion in the circumnuclear regions, whereas in all the other galaxies, we detect flat velocity dispersion profiles. In the galaxies where the bar is almost purely stellar, Hα measurements are missing, and the Tremaine-Weinberg method yields the pattern speeds of the spiral arms. The excellent agreement between the Tremaine-Weinberg method results and the morphological analysis and bar parameters in numerical simulations suggests that although the Hα emitting gas does not obey the continuity equation, it can be used to derive the bar pattern speed. In addition, we have analyzed the Hα velocity dispersion maps to investigate signatures of secular evolution of the bars in these galaxies. The increased central velocity dispersion in the galaxies with secondary bars suggests that the formation of inner
20. Electron drift velocity in SF₆ in strong electric fields determined from rf breakdown curves
Energy Technology Data Exchange (ETDEWEB)
Lisovskiy, V; Yegorenkov, V [Department of Physics and Technology, Kharkov National University, Svobody sq. 4, Kharkov 61077 (Ukraine); Booth, J-P [Laboratoire de Physique des Plasmas, Ecole Polytechnique, Palaiseau 91128 (France); Landry, K [Unaxis Displays Division France SAS, 5, Rue Leon Blum, Palaiseau 91120 (France); Douai, D [Physical Sciences Division, Institute for Magnetic Fusion Research, CEA Centre de Cadarache, F-13108 Saint Paul lez Durance Cedex (France); Cassagne, V, E-mail: [email protected] [Developpement Photovoltaique Couches Minces, Total, 2, place Jean Millier, La Defense 6, 92400 Courbevoie (France)]
2010-09-29
This paper presents measurements of the electron drift velocity V_dr in SF₆ gas for high reduced electric fields (E/N = 330–5655 Td; 1 Td = 10⁻¹⁷ V cm²). The drift velocities were obtained using the method of Lisovskiy and Yegorenkov (1998 J. Phys. D: Appl. Phys. 31 3349), based on the determination of the pressure and voltage of the turning points of rf capacitive discharge breakdown curves for a range of electrode spacings. The V_dr values thus obtained were in good agreement with those calculated from the cross-sections of Phelps and Van Brunt (1988 J. Appl. Phys. 64 4269) using the BOLSIG code. The validity of the Lisovskiy-Yegorenkov method is discussed and we show that it is applicable over the entire E/N range where rf discharge ignition at breakdown occurs for rf frequencies of 13.56 MHz or above.
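For entry 19 above, the Tremaine-Weinberg method amounts to intensity-weighted averages along apertures parallel to the line of nodes: Ω_p sin(i) = ∫Σ v_los dx / ∫Σ x dx. A synthetic one-aperture check with a rigidly rotating pattern; all inputs are fabricated for illustration:

```python
import numpy as np

def tw_pattern_speed(Sigma, Vlos, x, inclination_deg):
    """Tremaine-Weinberg estimate from one aperture parallel to the line of
    nodes: Omega_p*sin(i) = sum(Sigma*Vlos) / sum(Sigma*x); the uniform dx
    cancels between numerator and denominator."""
    ratio = (Sigma * Vlos).sum() / (Sigma * x).sum()
    return ratio / np.sin(np.radians(inclination_deg))

omega_true, inc = 30.0, 45.0                      # km/s/kpc, degrees
x = np.linspace(-8.0, 8.0, 401)                   # kpc along the aperture
# Asymmetric tracer surface density (a bar-like perturbation with a phase):
Sigma = np.exp(-np.abs(x) / 3.0) * (1 + 0.3 * np.cos(0.8 * x - 1.0))
Vlos = omega_true * x * np.sin(np.radians(inc))   # line-of-sight velocity
print(tw_pattern_speed(Sigma, Vlos, x, inc))      # recovers ~30.0
```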
In contrast, the earthquake swarm, which includes all of the M > 4 earthquakes to have occurred within the Salton Sea Geothermal Field in the last 15 years, ruptured the Main Central Fault (MCF) system that is localized in the heart of the geothermal anomaly. The background microseismicity induced by the geothermal operations is also concentrated in the high-temperature regions in the vicinity of operational wells. However, while this microseismicity occurs over a few-kilometer-scale region, much of it is clustered in earthquake swarms that last from …

2. Towards 3C-3D digital holographic fluid velocity vector field measurement: tomographic digital holographic PIV (Tomo-HPIV)
International Nuclear Information System (INIS)
Soria, J; Atkinson, C
2008-01-01
Most unsteady and/or turbulent flows of geophysical and engineering interest have a highly three-dimensional (3D) complex topology, and their experimental investigation is in pressing need of quantitative velocity measurement methods that are robust and can provide instantaneous 3C-3D velocity field data over a significant volumetric domain of the flow. This paper introduces and demonstrates a new method that uses multiple digital CCD array cameras to record in-line digital holograms of the same volume of seed particles from multiple orientations. The technique uses the same basic equipment as Tomo-PIV minus the camera lenses; it overcomes the depth-of-field problem of digital in-line holography and does not require the complex optical calibration of Tomo-PIV. The digital sensors can be oriented in an optimal manner to overcome the depth-of-field limitation of in-line holograms recorded using digital CCD or CMOS array cameras, resulting in a 3D reconstruction of the seed particles within the volume of interest, which can subsequently be analysed using 3D cross-correlation PIV analysis to yield a 3C-3D velocity field. A demonstration experiment of Tomo-HPIV using uniform translation with nominally 11 µm diameter seed particles shows that the 3D displacement derived from 3D cross-correlation Tomo-HPIV analysis can be measured within 5% of the imposed uniform translation, where the imposed uniform translation has an estimated standard uncertainty of 4.3%. This paper therefore proposes a multi-camera digital holographic imaging 3C-3D PIV method, identified as tomographic digital holographic PIV, or Tomo-HPIV.

3. Heat transfer enhancement through control of added perturbation velocity in flow field
International Nuclear Information System (INIS)
Wang, Jiansheng; Wu, Cui; Li, Kangning
2013-01-01
Highlights:
► Three strategies that restrain the flow drag during heat transfer are proposed.
► The added perturbation induces quasi-streamwise vortices around the controlled zone.
► The flow and heat transfer features depend on the induced quasi-streamwise vortices.
► The vertical strategy has the best synthesis performance of the three control strategies.
► The synthesis performance with a control strategy is superior to that without one.
Abstract: The characteristics of heat transfer and flow, under an added perturbation velocity, in a rectangular channel are investigated by Large Eddy Simulation (LES). Downstream, vertical, and upstream control strategies, which can suppress the lift of low-speed streaks while improving heat transfer performance, are adopted in the numerical investigation. Taking both heat transfer and flow properties into consideration, the combined (synthesis) performance of the three control strategies is evaluated.
The numerical results show that the flow structure in the boundary layer is altered appreciably by the added perturbation velocity, with induced quasi-streamwise vortices emerging around the controlled zone. The results indicate that the vertical control strategy has the best synthesis performance of the three control strategies, and also the lowest skin friction coefficient. The upstream and downstream strategies can improve heat transfer performance, but their skin friction coefficients are higher than that of the vertical control strategy.

4. Application of Migration Velocity Using Fourier Transform Approach ...
African Journals Online (AJOL)
Application of migration velocity by the Fourier transform approach to process 3-D unmigrated seismic sections has been carried out in Fabi Field, Niger Delta, Nigeria. Usually, all seismic events (sections) are characterized by spikes or noise (random or coherent), multiples and shear waves, so that when a seismic bed is dipping, the apparent …

5. Ensemble averaged coherent state path integral for disordered bosons with a repulsive interaction (derivation of mean field equations)
International Nuclear Information System (INIS)
Mieck, B.
2007-01-01
We consider bosonic atoms with a repulsive contact interaction in a trap potential for Bose-Einstein condensation (BEC) and additionally include a random potential. The ensemble averages for two models of static (I) and dynamic (II) disorder are performed and investigated in parallel. The bosonic many-body systems of the two disorder models are represented by coherent state path integrals on the Keldysh time contour, which allow exact ensemble averages for zero and finite temperatures. These ensemble averages of coherent state path integrals therefore present alternatives to replica field theories or supersymmetric averaging techniques. Hubbard-Stratonovich transformations (HST) lead to two corresponding self-energies, one for the hermitian repulsive interaction and one for the non-hermitian disorder interaction. The self-energy of the repulsive interaction is absorbed by a shift into the disorder self-energy, which comprises, as an element of a larger symplectic Lie algebra sp(4M), the self-energy of the repulsive interaction as a subalgebra (equivalent to the direct product of M x sp(2); 'M' is the number of discrete time intervals of the disorder self-energy in the generating function). After removal of the remaining Gaussian integral for the self-energy of the repulsive interaction, the first-order variations of the coherent state path integrals result in the exact mean field or saddle point equations, depending solely on the disorder self-energy matrix. These equations can be solved by continued fractions and are reminiscent of the 'Nambu-Gorkov' Green function formalism in superconductivity, because anomalous terms or pair condensates of the bosonic atoms are also included in the self-energies. The derived mean field equations of the models with static (I) and dynamic (II) disorder are particularly applicable to BEC in d=3 spatial dimensions because of the singularity of the density of states at vanishing wavevector. However, one usually starts out from …

6. Electric field dependence of the temperature and drift velocity of hot electrons in n-Si
International Nuclear Information System (INIS)
Vass, E.
2001-01-01
Full text: The average energy and momentum loss rates of hot electrons interacting simultaneously with acoustic phonons and ionized and neutral impurities in n-Si are calculated quantum-theoretically by means of a drifted hot Fermi-Dirac distribution. The drift velocity v_d and electron temperature T_e occurring in this distribution are determined self-consistently from the force and power balance equations with respect to the charge neutrality condition. The functions T_e(E) and v_d(E) calculated in this way are compared with the corresponding relations obtained with the help of the simple electron temperature model, in order to determine the range of application of this model often used in previous treatises. (author)

7. Optimization of Transverse Oscillating Fields for Vector Velocity Estimation with Convex Arrays
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2013-01-01
A method for making Vector Flow Images using the transverse oscillation (TO) approach on a convex array is presented. The paper presents optimization schemes for TO fields for convex probes and evaluates their performance using Field II simulations and measurements using the SARUS experimental … from 90 to 45 degrees in steps of 15 degrees. The optimization routine changes the lateral oscillation period l_x to yield the best possible estimates based on the energy ratio between positive and negative spatial frequencies in the ultrasound field. The basic equation for l_x gives 1.14 mm at 40 mm …

8. A Comprehensive Study of Gridding Methods for GPS Horizontal Velocity Fields
Science.gov (United States)
Wu, Yanqiang; Jiang, Zaisen; Liu, Xiaoxia; Wei, Wenxin; Zhu, Shuang; Zhang, Long; Zou, Zhenyu; Xiong, Xiaohui; Wang, Qixin; Du, Jiliang
2017-03-01
Four gridding methods for GPS velocities are compared in terms of their precision, applicability and robustness by analyzing simulated data with uncertainties from 0.0 to ±3.0 mm/a. When the input data are sampled on a 1° × 1° grid and the uncertainty of the additional error is greater than ±1.0 mm/a, the gridding results show that the least-squares collocation method is highly robust while the robustness of the Kriging method is low. In contrast, the spherical harmonics and the multi-surface function methods are moderately robust, and the regional singular values for the multi-surface function method and the edge effects for the spherical harmonics method become more significant with increasing uncertainty of the input data. When the input data (with additional errors of ±2.0 mm/a) are decimated by 50% from the 1° × 1° grid data and then erased in three 6° × 12° regions, the gridding results in these three regions indicate that the least-squares collocation and spherical harmonics methods perform well, while the multi-surface function and Kriging methods may lead to singular values. The gridding techniques are also applied to GPS horizontal velocities with an average error of ±0.8 mm/a over the Chinese mainland and the surrounding areas, and the results show that the least-squares collocation method has the best performance, followed by the Kriging and multi-surface function methods. Furthermore, the edge effects of the spherical harmonics method are significantly affected by the sparseness and geometric distribution of the input data. In general, the least-squares collocation method is superior in terms of its robustness, edge effects, error distribution and stability, while the other methods have several positive features.
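Least-squares collocation, the best-performing method in the comparison above, predicts the velocity at each grid node as a covariance-weighted combination of the scattered observations: v_grid = C_ps (C_ss + D)^-1 v_obs, where C is the signal covariance and D the observation-noise covariance. Below is a minimal sketch assuming a Gaussian covariance model C(d) = c0·exp(-d²/L²); the covariance model, its parameters, and the array names are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def lsc_grid_velocities(obs_xy, obs_v, obs_sigma, grid_xy, c0, corr_len):
    """Least-squares collocation for one velocity component.

    obs_xy    : (n, 2) observation coordinates
    obs_v     : (n,)   observed component (e.g. east velocity)
    obs_sigma : (n,)   1-sigma observation errors
    grid_xy   : (m, 2) prediction (grid-node) coordinates
    c0, corr_len : signal variance and correlation length of the model
    """
    def cov(a, b):
        # Gaussian signal covariance between two point sets
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
        return c0 * np.exp(-d2 / corr_len**2)

    c_ss = cov(obs_xy, obs_xy) + np.diag(obs_sigma**2)  # signal + noise
    c_ps = cov(grid_xy, obs_xy)                         # grid-to-obs covariance
    return c_ps @ np.linalg.solve(c_ss, obs_v)
```

Each horizontal component is gridded separately; in practice the signal variance c0 and correlation length L are estimated from an empirical covariance of the observed velocities rather than chosen by hand.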
9. Effect of an external electric field on the propagation velocity of premixed flames
KAUST Repository
Sánchez-Sanz, Mario; Murphy, Daniel C.; Fernandez-Pello, C.
2015-01-01
© 2014 The Combustion Institute. Published by Elsevier Inc. All rights reserved. There have been many experimental investigations into the ability of electric fields to enhance combustion by acting upon ion species present in flames [1] …

10. Electric field measurements at near-atmospheric pressure by coherent Raman scattering of laser beams
International Nuclear Information System (INIS)
Ito, Tsuyohito; Kobayashi, Kazunobu; Hamaguchi, Satoshi; Mueller, Sarah; Czarnetzki, Uwe
2010-01-01
Electric field measurements in near-atmospheric-pressure environments based on electric-field-induced Raman scattering are applied to repetitively pulsed nanosecond discharges. The results reveal that the peak electric field near the centre of the gap is almost independent of the applied voltage. Minimum sustainable voltage measurements suggest that, at each discharge pulse, charged particles remaining from the previous pulse serve as discharge seeds and play an important role in the generation of uniform glow-like discharges.

11. Consistency of the directionality of partially coherent beams in turbulence expressed in terms of the angular spread and the far-field average intensity
International Nuclear Information System (INIS)
Xiao-Wen, Chen; Xiao-Ling, Ji
2010-01-01
Under the quadratic approximation of Rytov's phase structure function, this paper derives general closed-form expressions for the mean-squared width and the angular spread of partially coherent beams in turbulence. It finds that, under a certain condition, different types of partially coherent beams may have the same directionality as a fully coherent Gaussian beam in free space and also in atmospheric turbulence, if the angular spread is chosen as the characteristic parameter of beam directionality. On the other hand, it shows that in general the directionality of partially coherent beams expressed in terms of the angular spread is not consistent with that expressed in terms of the normalized far-field average intensity distribution in free space, but consistency can be achieved due to turbulence. (classical areas of phenomenology)

12. Causal signal transmission by quantum fields. III: Coherent response of fermions
International Nuclear Information System (INIS)
Plimak, L.I.; Stenholm, S.
2009-01-01
Structural response properties of fermionic fields are investigated. In the presence of fermions the key technical concept becomes the response combination, or R-normal product, of field operators. It generalises the notion of the time-normal operator product to response problems. Time-normal products are a special case of R-normal products without inputs; this paper thus also generalises the concept of time-normal ordering to fermions. Explicit causality of R-normal products of arbitrary (bosonic and/or fermionic) field operators is proven, and explicit relations expressing them in terms of conventional Green's functions of quantum field theory are derived.

13. Experimental study of the spatial distribution of the velocity field of sedimenting particles: mean velocity, pseudo-turbulent fluctuations, intrinsic convection
International Nuclear Information System (INIS)
Bernard-Michel, G.
2001-01-01
This work follows previous experiments by Nicolai et al. (95), Peysson and Guazzelli (98) and Segre et al.
(97), which consisted of measurements of the velocity of particles sedimenting in a liquid at low particle Reynolds numbers. Our goal, introduced in the first part together with a bibliographic study, is to determine the properties of the particle velocity fluctuations. The fluctuations are indeed of the same order as the mean velocity. We proceed with Eulerian PIV measurements. The method is described in the second part. Its originality comes from measurements obtained in a thin laser light sheet spanning the cells, which have a square cross-section, from one side to the other: the measurements are therefore spatially localised. Four sets of cells and three sets of particles were used, giving access to ratios of cell width to particle radius ranging from about 50 up to 800. In the third part, we present the results concerning the structure of the velocity fluctuations and their spatial distribution. The intrinsic convection between two parallel vertical walls is also studied. The velocity fluctuations are organised in eddy structures. Their size (measured via correlation lengths) is independent of the volume fraction, contradicting the results of Segre et al. (97). The results concerning the spatial profiles of the velocity fluctuations, from one side of the cell to the other, confirm those published by Peysson and Guazzelli (98) in the case of stronger dilution. The evolution of the spatially averaged velocity fluctuations confirms the results obtained by Segre et al. (97). The intrinsic convection is also observed in the case of strong dilutions. (author)

14. Effect of an external electric field on the propagation velocity of premixed flames
KAUST Repository
Sánchez-Sanz, Mario
2015-01-01
© 2014 The Combustion Institute. Published by Elsevier Inc. All rights reserved. There have been many experimental investigations into the ability of electric fields to enhance combustion by acting upon ion species present in flames [1]. In this work, we examine this phenomenon using a one-dimensional model of a lean premixed flame under the influence of a longitudinal electric field. We expand upon prior two-step chain-branching reaction laminar models with reactions to model the creation and consumption of both a positively charged radical species and free electrons. Also included are the electromotive force in the conservation equation for ion species and the electrostatic form of the Maxwell equations, in order to resolve ion transport by externally applied and internally induced electric fields. The numerical solution of these equations allows us to compute changes in flame speed due to electric fields. Further, the variation of key kinetic and transport parameters modifies the electrical sensitivity of the flame. From changes in flame speed and reactant profiles we are able to gain novel, valuable insight into how and why combustion can be controlled by electric fields.

15. Long-term coherent periodicities in the mean magnetic field of the Sun
International Nuclear Information System (INIS)
Kotov, V.A.; Levitsky, L.S.
1983-01-01
To investigate periodic variations of the magnetic field of the Sun as a star, the authors have used the mean field measurements made at the Crimea, Mt. Wilson and Stanford observatories; in total, N = 5783 daily values were available for the time interval 1968-1981. In essence, these data offer a unique possibility to study the Sun as a variable magnetic star. (Auth.)
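A standard way to search such an unevenly sampled daily series for coherent periodicities is a Lomb-Scargle periodogram. The sketch below uses scipy.signal.lombscargle on hypothetical arrays t_days (observation epochs, with gaps) and b_mean (daily mean-field values); it illustrates the generic technique, not the specific analysis of Kotov and Levitsky.

```python
import numpy as np
from scipy.signal import lombscargle

def mean_field_periodogram(t_days, b_mean, periods_days):
    """Normalized Lomb-Scargle power of an unevenly sampled series,
    evaluated at the given trial periods (in days)."""
    omega = 2.0 * np.pi / np.asarray(periods_days)  # angular frequencies
    y = b_mean - b_mean.mean()                      # remove the mean level
    return lombscargle(t_days, y, omega, normalize=True)

# e.g. scan periods from a few days up to several years:
periods = np.linspace(2.0, 2000.0, 5000)
# power = mean_field_periodogram(t_days, b_mean, periods)
# candidate_period = periods[np.argmax(power)]
```

Significance of any peak would then be assessed against the false-alarm probability for the number of independent frequencies scanned.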
16. Measurement of Coherent Emission and Linear Polarization of Photons by Electrons in the Strong Fields of Aligned Crystals
CERN Document Server
Apyan, A.; Badelek, B.; Ballestrero, S.; Biino, C.; Birol, I.; Cenci, P.; Connell, S.H.; Eichblatt, S.; Fonseca, T.; Freund, A.; Gorini, B.; Groess, R.; Ispirian, K.; Ketel, T.J.; Kononets, Yu.V.; Lopez, A.; Mangiarotti, A.; van Rens, B.; Sellschop, J.P.F.; Shieh, M.; Sona, P.; Strakhovenko, V.; Uggerhøj, E.; Uggerhøj, Ulrik Ingerslev; Unel, G.; Velasco, M.; Vilakazi, Z.Z.; Wessely, O.
2004-01-01
We present new results regarding the features of high-energy photon emission by an electron beam of 178 GeV penetrating a 1.5 cm thick single Si crystal aligned at the Strings-Of-Strings (SOS) orientation. This concerns a special case of coherent bremsstrahlung in which the electron interacts with the strong fields of successive atomic strings in a plane, and for which the largest enhancement of the highest-energy photons is expected. The polarization of the resulting photon beam was measured via the asymmetry of electron-positron pair production in an aligned diamond crystal analyzer. By selecting a single pair, the energy and polarization of individual photons could be measured in an environment of multiple photons produced in the radiator crystal. Photons in the high-energy region show less than 20% linear polarization at the 90% confidence level.

17. Feasibility of full-field optical coherence microscopy in ultra-structural imaging of human colon tissues
Energy Technology Data Exchange (ETDEWEB)
Choi, Eun Seo [Chosun University, Gwangju (Korea, Republic of)]; Choi, Woo June; Ryu, Seon Young; Lee, Byeong Ha [Gwangju Institute of Science and Technology, Gwangju (Korea, Republic of)]; Lee, Jae Hyuk; Bom, Hee Seung; Lee, Byeong Il [Chonnam National University Hospital, Gwangju (Korea, Republic of)]
2010-06-15
We demonstrated the imaging feasibility of full-field optical coherence microscopy (FF-OCM) in the pathological diagnosis of human colon tissues. FF-OCM images with high transverse resolution were obtained at different depths of the samples without any dye staining or physical slicing, and detailed microstructures of human colon tissues were visualized. Morphological differences among normal tissues, cancer tissues and tissues under transition were observed and matched with results seen in conventional optical microscope images. Optical biopsy based on FF-OCM could overcome the limitation on the number of physical cuttings of tissues and could enable high-throughput mass diagnosis of diseased tissues. The demonstrated utility of FF-OCM as a comprehensive and efficient imaging modality for human tissues shows it to be a good alternative to conventional biopsy.

18. Feasibility of full-field optical coherence microscopy in ultra-structural imaging of human colon tissues
International Nuclear Information System (INIS)
Choi, Eun Seo; Choi, Woo June; Ryu, Seon Young; Lee, Byeong Ha; Lee, Jae Hyuk; Bom, Hee Seung; Lee, Byeong Il
2010-01-01
We demonstrated the imaging feasibility of full-field optical coherence microscopy (FF-OCM) in the pathological diagnosis of human colon tissues. FF-OCM images with high transverse resolution were obtained at different depths of the samples without any dye staining or physical slicing, and detailed microstructures of human colon tissues were visualized.
Morphological differences among normal tissues, cancer tissues and tissues under transition were observed and matched with results seen in conventional optical microscope images. Optical biopsy based on FF-OCM could overcome the limitation on the number of physical cuttings of tissues and could enable high-throughput mass diagnosis of diseased tissues. The proven utility of FF-OCM as a comprehensive and efficient imaging modality for human tissues shows it to be a good alternative to conventional biopsy.

19. High-dynamic-range microscope imaging based on exposure bracketing in full-field optical coherence tomography
Science.gov (United States)
Leong-Hoi, Audrey; Montgomery, Paul C.; Serio, Bruno; Twardowski, Patrice; Uhring, Wilfried
2016-04-01
By applying the proposed high-dynamic-range (HDR) technique based on exposure bracketing, we demonstrate a meaningful reduction in the spatial noise of image frames acquired with a CCD camera, so as to improve the fringe contrast in full-field optical coherence tomography (FF-OCT). This new signal processing method thus allows improved probing within transparent or semitransparent samples. The proposed method is demonstrated on 3 μm thick transparent polymer films of Mylar which, due to their transparency, produce low-contrast fringe patterns in white-light interference microscopy. High-resolution tomographic analysis is performed using the technique. After appropriate signal processing, the resulting XZ sections are observed. Submicrometer-sized defects can be lost in the noise present in the CCD images. With the proposed method, we show that by increasing the signal-to-noise ratio of the images, submicrometer-sized defect structures can be detected.

20. Velocity and concentration fields in turbulent buoyant mixing in tilted tubes
Science.gov (United States)
Znaien, J.; Moisy, F.; Hulin, J. P.; Salin, D.; Hinch, E. J.
2008-11-01
2D PIV and LIF measurements have been performed on buoyancy-driven flows of two miscible fluids of the same viscosity in a tube tilted at different angles θ from vertical and at different density contrasts (characterized by the Atwood number At). As θ increases and At decreases, the flow regime evolves, behind the front, from a turbulent shear flow towards a laminar counterflow with three layers of different concentrations. Time variations of the structure function show that both intermittent and developed turbulence occur in intermediate conditions. In the turbulent regime (Re_λ ~ 60) the magnitudes of the longitudinal u'^2 and transverse v'^2 velocity fluctuations and of the component u'v' of the Reynolds stress tensor are shown to be largest on the tube axis, while viscous stresses are only important close to the walls. The analysis of the momentum transfer in the flow, with buoyancy forces estimated from the concentration gradients, demonstrates that 3D effects are required to achieve the momentum balance. These results are discussed in the framework of classical turbulence models.

1. Modification of the quantum mechanical flux formula for electron-hydrogen ionization through Bohm's velocity field
Science.gov (United States)
Randazzo, J. M.; Ancarani, L. U.
2015-12-01
For the single differential cross section (SDCS) for hydrogen ionization by electron impact (the e-H problem), we propose a correction to the flux formula given by R. Peterkop [Theory of Ionization of Atoms by Electron Impact (Colorado Associated University Press, Boulder, 1977)].
The modification is based on an alternative way of defining the kinetic energy fraction, using Bohm's definition of velocities instead of the usual asymptotic kinematical, or geometrical, approximation. It turns out that the solution-dependent, modified energy fraction is equally related to the components of the probability flux. Compared to what is usually observed, the correction yields a finite and well-behaved SDCS value in the asymmetrical situation where one of the continuum electrons carries all the energy while the other has zero energy. We also discuss, within the S-wave model of the e-H ionization process, the continuity of the SDCS derivative at the equal-energy-sharing point, a property not so clearly observed in published benchmark results obtained with integral and S-matrix formulas with unequal final states.

2. Two different approaches for creating a prescribed opposed-flow velocity field for flame spread experiments
Directory of Open Access Journals (Sweden)
Carmignani, Luca
2015-01-01
Full Text Available. Opposed-flow flame spread over solid fuels is a fundamental area of research in fire science. Typically, combustion wind tunnels are used to generate the opposing flow of oxidizer against which a laminar flame spreads along the fuel samples. The spreading flame is generally embedded in a laminar boundary layer, which interacts with the strong buoyancy-induced flow to affect the mechanism of flame spread. In this work, two different approaches for creating the opposed flow are compared. In the first approach, a vertical combustion tunnel is used in which a thin fuel sample, thin acrylic or ashless filter paper, is held vertically along the axis of the test section, with the airflow regulated by the duty cycles of four fans. As the sample is ignited, a flame spreads downward in a steady manner along a developing boundary layer. In the second approach, the sample is held in a movable cart placed in an eight-meter-tall vertical chamber filled with air. As the sample is ignited, the cart is moved downward (through a remote-controlled mechanism) at a prescribed velocity. The results from the two approaches are compared to establish the boundary-layer effect on flame spread over thin fuels.

3. The Most Ancient Spiral Galaxy: A 2.6-Gyr-old Disk with a Tranquil Velocity Field
Science.gov (United States)
Yuan, Tiantian; Richard, Johan; Gupta, Anshu; Federrath, Christoph; Sharma, Soniya; Groves, Brent A.; Kewley, Lisa J.; Cen, Renyue; Birnboim, Yuval; Fisher, David B.
2017-11-01
We report an integral-field spectroscopic (IFS) observation of the gravitationally lensed spiral galaxy A1689B11 at redshift z = 2.54. It is the most ancient spiral galaxy discovered to date and the second kinematically confirmed spiral at z ≳ 2. Thanks to gravitational lensing, this is also by far the deepest IFS observation, with the highest spatial resolution (~400 pc), of a spiral galaxy at a cosmic time when the Hubble sequence is about to emerge. After correcting for a lensing magnification of 7.2 ± 0.8, this primitive spiral disk has an intrinsic star formation rate of 22 ± 2 M⊙ yr^-1, a stellar mass of 10^(9.8±0.3) M⊙, and a half-light radius of r_1/2 = 2.6 ± 0.7 kpc, typical of a main-sequence star-forming galaxy at z ~ 2. However, the Hα kinematics show a surprisingly tranquil velocity field, with ordered rotation (V_c = 200 ± 12 km s^-1) and uniformly small velocity dispersions (V_σ,mean = 23 ± 4 km s^-1 and V_σ,outer-disk = 15 ± 2 km s^-1).
The low gas velocity dispersion is similar to that of local spiral galaxies and is consistent with the classic density wave theory, in which spiral arms form in dynamically cold and thin disks. We speculate that A1689B11 belongs to a population of rare spiral galaxies at z ≳ 2 that mark the formation epoch of thin disks. Future observations with the James Webb Space Telescope will greatly increase the sample of these rare galaxies and unveil the earliest onset of spiral arms.

4. Two tests of electric fields, second-order in source-velocity terms of closed, steady currents: (1) an electron beam; (2) a superconducting coil
International Nuclear Information System (INIS)
Kenyon, C.S.
1980-01-01
One particular prediction of Maxwell's theory that has previously been neglected is that the motion of charges traveling in closed loops produces no constant electric fields. This study presents and analyzes the results of two new experiments designed to test for second-order, source-velocity electric fields from steady, closed currents, and analyzes another experiment in light of these fields. The first experiment employed an electron beam. The second used a niobium-titanium coil designed so that the voltage measurement configuration could be easily switched between a Faraday and a non-Faraday configuration between sets of runs. The implications of the observation of a null charge on magnetically suspended superconducting spheres vis-a-vis the second-order, source-velocity fields were discussed as the third case. The observation of a null potential, corresponding to a null effective charge from a hypothetical velocity-squared field, in both the beam and the coil experiments placed the upper bound on a field term at 0.02 with respect to a Coulomb term. An observed null charge on the suspended spheres reduced this bound to 0.001. Such an upper bound is strong evidence against alternative theories predicting a relative contribution of the order of unity for a simple velocity-squared term. A simple velocity-squared electric field would be indistinguishable from a velocity-squared charge variation. The latter test limits such a charge variation to 0.001 of the total charge. The suspended-spheres test allowed the previously neglected issue of a general second-order, source-velocity electric field to be addressed. The observed null charge in this test contradicts, and thus eliminates, a hypothesized general electric field expression containing three second-order, source-velocity terms.

5. Dark Matter Profiles in Dwarf Galaxies: A Statistical Sample Using High-Resolution Hα Velocity Fields from PCWI
Science.gov (United States)
Relatores, Nicole C.; Newman, Andrew B.; Simon, Joshua D.; Ellis, Richard; Truong, Phuongmai N.; Blitz, Leo
2018-01-01
We present high-quality Hα velocity fields for a sample of nearby dwarf galaxies (log M/M⊙ = 8.4-9.8) obtained as part of the Dark Matter in Dwarf Galaxies survey. The purpose of the survey is to investigate the cusp-core discrepancy by quantifying the variation of the inner slope of the dark matter distributions of 26 dwarf galaxies, which were selected as likely to have regular kinematics. The data were obtained with the Palomar Cosmic Web Imager, located on the Hale 5 m telescope. We extract rotation curves from the velocity fields and use optical and infrared photometry to model the stellar mass distribution. We model the total mass distribution as the sum of a generalized Navarro-Frenk-White dark matter halo along with the stellar and gaseous components.
We present the distribution of inner dark matter density profile slopes derived from this analysis. For a subset of galaxies, we compare our results to an independent analysis based on CO observations. In future work, we will compare the scatter in inner density slopes, as well as their correlations with galaxy properties, to theoretical predictions for dark matter core creation via supernova feedback.

6. Using cluster analysis to organize and explore regional GPS velocities
Science.gov (United States)
Simpson, Robert W.; Thatcher, Wayne; Savage, James C.
2012-01-01
Cluster analysis offers a simple visual exploratory tool for the initial investigation of regional Global Positioning System (GPS) velocity observations, which are providing increasingly precise mappings of actively deforming continental lithosphere. The deformation fields from dense regional GPS networks can often be concisely described in terms of relatively coherent blocks bounded by active faults, although the choice of blocks, their number and size can be subjective and is often guided by the distribution of known faults. To illustrate our method, we apply cluster analysis to GPS velocities from the San Francisco Bay Region, California, to search for spatially coherent patterns of deformation, including evidence of block-like behavior. The clustering process identifies four robust groupings of velocities that we identify with four crustal blocks. Although the analysis uses no prior geologic information other than the GPS velocities, the cluster/block boundaries track three major faults, both locked and creeping.

7. Disturbing the coherent dynamics of an excitonic polarization with strong terahertz fields
Science.gov (United States)
Drexler, M. J.; Woscholski, R.; Lippert, S.; Stolz, W.; Rahimi-Iman, A.; Koch, M.
2014-11-01
We present a study combining four-wave mixing and strong fields in the terahertz frequency range to monitor the time evolution of a disturbed excitonic polarization in a multiple-quantum-well system. Our findings not only confirm a lower field-dependent ionization threshold for higher excitonic states, but also provide experimental evidence for intraexcitonic Rabi flopping in the time domain. These measurements correspond to the picture of a reversible and irreversible transfer as previously predicted by a microscopic theory.

8. Velocity Fields Measurement of Natural Circulation Flow inside a Pool Using PIV Technique
International Nuclear Information System (INIS)
Kim, Seok; Kim, Dong Eok; Youn, Young Jung; Euh, Dong Jin; Song, Chul Hwa
2012-01-01
Thermal stratification is encountered in the large pools of water increasingly being used as heat sinks in a new generation of advanced reactors. These large pools at near-atmospheric pressure provide a heat sink for heat removal from the reactor or steam generator, and from the containment, by natural circulation, as well as a source of water for core cooling. For example, the PAFS (passive auxiliary feedwater system) is one of the advanced safety features adopted in the APR+ (Advanced Power Reactor Plus), which is intended to completely replace the conventional active auxiliary feedwater system. The PAFS cools down the steam generator secondary side and eventually removes the decay heat from the reactor core through a natural convection mechanism.
In the pool, the heat transfer from the PCHX (passive condensation heat exchanger) raises the pool temperature up to the saturation condition and induces a natural circulation flow of the PCCT (passive condensate cooling tank) pool water. When a heated rod is placed horizontally in a pool of water, the fluid adjacent to the rod heats up. In the process, its density decreases and, by virtue of the buoyancy force, the fluid in this region moves upward. After reaching the top free surface, the heated water moves along the free surface towards the opposite side wall of the pool. Since this water cools as it travels, it descends along that side wall. A natural circulation flow is thus formed above the heated rod; there is, however, no flow below the rod until the pool water temperature reaches the saturation temperature. In this study, velocity measurements were conducted to reveal the natural circulation flow structure in a small pool using the PIV (particle image velocimetry) measurement technique.

9. Effect of the depolarization field on coherent optical properties in semiconductor quantum dots
Science.gov (United States)
Mitsumori, Yasuyoshi; Watanabe, Shunta; Asakura, Kenta; Seki, Keisuke; Edamatsu, Keiichi; Akahane, Kouichi; Yamamoto, Naokatsu
2018-06-01
We study the photon echo spectrum of self-assembled semiconductor quantum dots using femtosecond light pulses. The spectrum shape changes from a single-peaked to a double-peaked structure as the time delay between the two excitation pulses is increased. The spectrum change is reproduced by numerical calculations which include the depolarization field induced by the biexciton-exciton transition as well as the conventional local-field effect for the exciton-ground-state transition in a quantum dot. Our findings suggest that various optical transitions in tightly localized systems generate a depolarization field, which renormalizes the resonant frequency with a change in the polarization itself, leading to unique optical properties.

10. Direct numerical simulation of turbulent velocity, pressure and temperature fields in channel flows
International Nuclear Information System (INIS)
Goetzbach, G.
1977-10-01
For the simulation of non-stationary, three-dimensional, turbulent flow and temperature fields in channel flows with constant properties, a method is presented which is based on a finite difference scheme for the complete conservation equations of mass, momentum and enthalpy. The fluxes of momentum and heat within the grid cells are described by sub-grid-scale models. The sub-grid-scale model for momentum introduced here is for the first time applicable to small Reynolds numbers, rather coarse grids, and channels with space-dependent roughness distributions. (orig.)

11. Field estimates of floc dynamics and settling velocities in a tidal creek with significant along-channel gradients in velocity and SPM
Science.gov (United States)
Schwarz, C.; Cox, T.; van Engeland, T.; van Oevelen, D.; van Belzen, J.; van de Koppel, J.; Soetaert, K.; Bouma, T. J.; Meire, P.; Temmerman, S.
2017-10-01
A short-term intensive measurement campaign focusing on flow, turbulence, suspended particle concentration, floc dynamics and settling velocities was carried out in a brackish intertidal creek draining into the main channel of the Scheldt estuary. We compare in situ estimates of settling velocities between a laser diffraction (LISST) and an acoustic Doppler (ADV) technique at 20 and 40 cm above bottom (cmab).
The temporal variations in estimated settling velocity were compared over one tidal cycle, with a maximum flood velocity of 0.46 m s^-1, a maximum horizontal ebb velocity of 0.35 m s^-1 and a maximum water depth at high-water slack of 2.41 m. The results suggest that flocculation processes play an important role in controlling sediment transport in the measured intertidal creek. During high-water slack, particles flocculated to sizes up to 190 μm, whereas at the maximum flood and maximum ebb tidal stages floc sizes only reached 55 μm and 71 μm, respectively. These large differences indicate that flocculation processes are mainly governed by the turbulence-induced shear rate. In this study, we specifically recognize the importance of along-channel gradients, which place constraints on the application of the acoustic Doppler technique due to conflicts with its underlying assumptions. Along-channel gradients were assessed by additional measurements at a second location and by scaling arguments, which can serve as an indication of whether the Reynolds-flux method is applicable. We further show the potential impact of along-channel advection of flocs out of equilibrium with local hydrodynamics on overall floc sizes.

12. One-dimensional three-field model of condensation in horizontal countercurrent flow with supercritical liquid velocity
International Nuclear Information System (INIS)
Trewin, Richard R.
2011-01-01
Highlights:
→ CCFL in the hot leg of a PWR with ECC injection.
→ Three-field model of a counterflowing water film and entrained droplets.
→ Flow of steam can cause a hydraulic jump in the supercritical flow of water.
→ Condensation of steam on subcooled water increases the flow required for a hydraulic jump.
→ Better agreement with UPTF experimental data than a Wallis-type correlation.
Abstract: A one-dimensional three-field model was developed to predict the flow of liquid and vapor that results from countercurrent flow of water injected into the hot leg of a PWR and the oncoming steam flowing from the upper plenum. The model solves the conservation equations for mass, momentum, and energy in a continuous-vapor field, a continuous-liquid field, and a dispersed-liquid (entrained-droplet) field. Single-effect experiments performed in the upper plenum test facility (UPTF) of the former SIEMENS KWU (now AREVA) at Mannheim, Germany, were used to validate the countercurrent flow limitation (CCFL) model in the case of emergency core cooling water injection into the hot legs. Subcooled water and saturated steam flowed countercurrent in a horizontal pipe with an inside diameter of 0.75 m. The flow of injected water was varied from 150 kg/s to 400 kg/s, and the flow of steam varied from 13 kg/s to 178 kg/s. The subcooling of the liquid ranged from 0 K to 104 K. The velocity of the water at the injection point was supercritical (greater than the celerity of a gravity wave) for all the experiments. The three-field model was successfully used to predict the experimental data, and the results from the model provide insight into the mechanisms that influence the flows of liquid and vapor during countercurrent flow in a hot leg. When the injected water was saturated and the flow of steam was small, all or most of the injected water flowed to the upper plenum. Because the velocity of the liquid remained supercritical, entrainment of droplets was suppressed. When the injected …
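For comparison, the Wallis-type flooding correlation mentioned in the highlights relates the dimensionless superficial velocities of gas and liquid through sqrt(j_g*) + m·sqrt(j_f*) = C. A minimal sketch of evaluating that flooding line follows; the constants m = 1.0 and C = 0.7 are illustrative round numbers, not values fitted to the UPTF data.

```python
import numpy as np

def wallis_jstar(j, rho_phase, rho_liquid, rho_gas, diameter, g=9.81):
    """Dimensionless superficial velocity
    j* = j * sqrt(rho_phase / (g * D * (rho_l - rho_g)))."""
    return j * np.sqrt(rho_phase / (g * diameter * (rho_liquid - rho_gas)))

def ccfl_liquid_limit(j_gas, rho_liquid, rho_gas, diameter, m=1.0, c=0.7, g=9.81):
    """Maximum countercurrent liquid superficial velocity [m/s] permitted by
    the flooding line sqrt(j_g*) + m*sqrt(j_f*) = C; zero once the gas flux
    alone reaches the flooding limit."""
    jg_star = wallis_jstar(j_gas, rho_gas, rho_liquid, rho_gas, diameter, g)
    root = (c - np.sqrt(jg_star)) / m
    jf_star = np.maximum(root, 0.0) ** 2
    # invert j_f* back to a dimensional liquid superficial velocity
    return jf_star / np.sqrt(rho_liquid / (g * diameter * (rho_liquid - rho_gas)))
```

The abstract's point is precisely that such a correlation is too coarse once condensation on subcooled water and a possible hydraulic jump matter, which is what the three-field model is meant to capture.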
13. Coherent cancellation of geometric phase for the OH molecule in external fields
Science.gov (United States)
Bhattacharya, M.; Marin, S.; Kleinert, M.
2014-05-01
The OH molecule in its ground state presents a versatile platform for precision measurement and quantum information processing. These applications vitally depend on the accurate measurement of transition energies between the OH levels. Significant sources of systematic errors in these measurements are shifts based on the geometric phase arising from the magnetic and electric fields used for manipulating OH. In this article, we present these geometric phases for fields that vary harmonically in time, as in the Ramsey technique. Our calculation of the phases is exact within the description provided by our recent analytic solution of an effective Stark-Zeeman Hamiltonian for the OH ground state. This Hamiltonian has been shown to model experimental data accurately. We find that the OH geometric phases exhibit rich structure as a function of the field rotation rate. Remarkably, we find rotation rates where the geometric phase accumulated by a specific state is zero, or where the relative geometric phase between two states vanishes. We expect these findings to be of importance to precision experiments on OH involving time-varying fields. More specifically, our analysis quantitatively characterizes an important item in the error budget for precision spectroscopy of ground-state OH.

14. Coherent Control of Photofragment Distributions Using Laser Phase Modulation in the Weak-Field Limit
DEFF Research Database (Denmark)
Garcia-Vela, Alberto; Henriksen, Niels Engholm
2015-01-01
The possibility of quantum interference control of the final-state distributions of photodissociation fragments by means of pure phase modulation of the pump laser pulse in the weak-field regime is demonstrated theoretically for the first time. The specific application involves realistic wave pac…

15. Aspects of the theory of atoms and coherent matter and their interaction with electromagnetic fields
Energy Technology Data Exchange (ETDEWEB)
Nilsen, Halvor Moell
2002-07-01
In the present work I have outlined and contributed to the time-dependent theory of the interaction between atoms and electromagnetic fields and the theory of Bose-Einstein condensates. New numerical methods and algorithms have been developed and applied in practice, and the calculations have exhibited certain new dynamical features. All these calculations are in a regime where the applied field is of the same magnitude as the atomic field. In the case of BEC, we have investigated the use of time-dependent methods to calculate the excitation frequencies. We also investigated the possibility of nonlinear coupling for a scissors mode and found no such contributions to damping, which is consistent with other studies. Special emphasis has also been paid to the gyroscopic motion of rotating BECs, for which several models were investigated. Briefly, the main conclusions are: (1) Rydberg wave packets appear for direct excitations of Rydberg atoms for long pulses. (2) The survival of just a few states is decided by the symmetry of the Hamiltonian. (3) For few-cycle intense pulses, classical and quantum mechanics show remarkable similarity. (4) Time-dependent methods for finding excitation frequencies have been shown to be very efficient. (5) New dynamical features are shown in the gyroscopic motion of BECs. (6) It was shown that no nonlinear mixing of scissors modes occurs in the standard Gross-Pitaevskii regime.
As mentioned in the introduction, this work is part of very active research fields in which new progress is constantly reported; thus, the present work cannot be regarded as a closed loop. The fast development of grid-based numerical solutions for atoms in intense fields will surely contribute greatly to solving many of today's problems. It is a very important area of research to understand both nonperturbative atomic response and highly nonlinear optics. In the field of Bose-Einstein condensation, the new experimental achievements constantly drive the field forward. The new …

16. Whole arm manipulation planning based on feedback velocity fields and sampling-based techniques
Science.gov (United States)
Talaei, B.; Abdollahi, F.; Talebi, H. A.; Omidi Karkani, E.
2013-09-01
Changing the configuration of a cooperative whole-arm manipulator is not easy while it encloses an object. This difficulty is mainly due to the risk of jamming caused by kinematic constraints. To reduce this risk, this paper proposes a feedback manipulation planning algorithm that takes grasp kinematics into account. The idea is based on a vector field that imposes a perturbation in object-motion-inducing directions when the movement is considerably along the manipulator's redundant directions. The obstacle avoidance problem is then considered by combining the algorithm with sampling-based techniques. As experimental results confirm, the proposed algorithm is effective in avoiding jamming as well as obstacles for a 6-DOF dual-arm whole-arm manipulator. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

17. Off-axis full-field swept-source optical coherence tomography using holographic refocusing
Science.gov (United States)
Hillmann, Dierck; Franke, Gesa; Hinkel, Laura; Bonin, Tim; Koch, Peter; Hüttmann, Gereon
2013-03-01
We demonstrate full-field swept-source OCT using an off-axis geometry of the reference illumination. By using holographic refocusing techniques, a uniform lateral resolution is achieved over a measurement depth of approximately 80 Rayleigh lengths. Compared to a standard on-axis setup, artifacts and autocorrelation signals are suppressed, and the measurement depth is doubled by resolving the complex conjugate ambiguity. Holographic refocusing was done efficiently by Fourier-domain resampling, as demonstrated before in inverse scattering and holoscopy. It allowed reconstruction of a complete volume with about 10 μm resolution over the complete measurement depth of more than 10 mm. Off-axis full-field swept-source OCT enables high measurement depths, spanning many Rayleigh lengths, with reduced artifacts.

18. Monitoring of interaction of low-frequency electric field with biological tissues upon optical clearing with optical coherence tomography
Science.gov (United States)
Peña, Adrián F.; Doronin, Alexander; Tuchin, Valery V.; Meglinski, Igor
2014-08-01
The influence of a low-frequency electric field applied to soft biological tissues ex vivo, at normal conditions and upon the topical application of optical clearing agents, has been studied by optical coherence tomography (OCT). The electro-kinetic response of the tissues was observed and quantitatively evaluated by the double-correlation OCT (dcOCT) approach, utilizing the consistent application of adaptive Wiener filtering and a Fourier-domain correlation algorithm. The results show that the fluctuations induced by the electric field within the biological tissues increase exponentially in time.
We demonstrate that, in comparison to impedance measurements and the mapping of the temperature profile at the surface of the tissue samples, the double-correlation OCT approach is much more sensitive to the changes associated with the tissues' electro-kinetic response. We also found that topical application of the optical clearing agent reduces the tissues' electro-kinetic response and cools the tissue, reducing the temperature rise induced by the electric current by a few degrees. We anticipate that the dcOCT approach can find new applications in bioelectrical impedance analysis and in monitoring the electric properties of biological tissues, including the resistivity of high-water-content tissues and its variations.

19. Integral Field Spectroscopy of Markarian 273: Mapping High-Velocity Gas Flows and an Off-Nucleus Seyfert 2 Nebula
Science.gov (United States)
Colina; Arribas; Borne
1999-12-10
Integral field optical spectroscopy with the INTEGRAL fiber-based system is used to map the extended ionized regions and gas flows in Mrk 273, one of the closest ultraluminous infrared galaxies. The Hβ and [O III] λ5007 maps show the presence of two distinct regions separated by 4 arcsec (3.1 kpc) along position angle (P.A.) 240°. The northeastern region coincides with the optical nucleus of the galaxy and shows the spectral characteristics of LINERs. The southwestern region is dominated by [O III] emission and is classified as a Seyfert 2. Therefore, in the optical, Mrk 273 is an ultraluminous infrared galaxy with a LINER nucleus and an extended off-nucleus Seyfert 2 nebula. The kinematics of the [O III] ionized gas shows (1) the presence of highly disturbed gas in the regions around the LINER nucleus, (2) a high-velocity gas flow with a peak-to-peak amplitude of 2.4 × 10^3 km s^-1, and (3) quiescent gas in the outer regions (at 3 kpc). We hypothesize that the high-velocity flow is the starburst-driven superwind generated in an optically obscured nuclear starburst, and that the quiescent gas is directly ionized by a nuclear source, similar to the ionization cones typically seen in Seyfert galaxies.

20. Accurate Determination of Glacier Surface Velocity Fields with a DEM-Assisted Pixel-Tracking Technique from SAR Imagery
Directory of Open Access Journals (Sweden)
Shiyong Yan
2015-08-01
Full Text Available. We obtained an accurate, detailed motion distribution of glaciers in Central Asia by applying a DEM-assisted pixel-tracking method to L-band synthetic aperture radar imagery. The paper first introduces and briefly analyzes each component of the offset field, and then describes the method used to efficiently and precisely compensate the topography-related offset caused by the large spatial baseline and rugged terrain with the help of a DEM. The results indicate that the rugged topography not only forms the complex shapes of glaciers, but also affects the glacier velocity estimation, especially with large spatial baselines. The maximum velocity, 0.85 m d^-1, was observed in the middle part of the Fedchenko Glacier, which is the world's longest mountain glacier. The motion fluctuation on its main trunk is apparently influenced by mass flowing in from tributaries, as well as by the angles between tributaries and the main stream. The approach presented in this paper proved to be highly appropriate for monitoring glacier motion and will provide valuable, sensitive indicators of current and future climate change for environmental analysis.
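At the core of such pixel tracking is estimating the patch-wise offset between two co-registered SAR amplitude images, after the topography-related component (predicted from the DEM and orbit geometry) has been removed. The sketch below shows a minimal FFT-based cross-correlation kernel for one patch pair; the function names, the sign convention and the pixel-to-metre conversion are illustrative assumptions, and real processors additionally oversample the correlation surface for sub-pixel precision.

```python
import numpy as np

def patch_offset(master, slave):
    """Integer-pixel offset of `slave` relative to `master`, from the peak
    of their FFT-based normalized cross-correlation."""
    m = (master - master.mean()) / master.std()
    s = (slave - slave.mean()) / slave.std()
    xcorr = np.fft.ifft2(np.fft.fft2(s) * np.conj(np.fft.fft2(m))).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    ny, nx = xcorr.shape
    # map the circular peak position into signed offsets
    if dy > ny // 2:
        dy -= ny
    if dx > nx // 2:
        dx -= nx
    return dy, dx

def surface_speed(d_az_px, d_rg_px, az_spacing_m, rg_spacing_m, dt_days):
    """Glacier surface speed in m/day from azimuth/range pixel offsets,
    assuming the topographic contribution has already been subtracted."""
    return np.hypot(d_az_px * az_spacing_m, d_rg_px * rg_spacing_m) / dt_days
```

Dividing the two acquisitions into a grid of such patches and repeating the estimate yields the dense offset (and hence velocity) field described in the abstract.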
1. High-resolution measurement of the unsteady velocity field to evaluate blood damage induced by a mechanical heart valve
Science.gov (United States)
Bellofiore, Alessandro; Quinlan, Nathan J.
2011-09-01
We investigate the potential of prosthetic heart valves to generate abnormal flow and stress patterns, which can contribute to platelet activation and lysis according to blood damage accumulation mechanisms. High-resolution velocity measurements of the unsteady flow field, obtained with a standard particle image velocimetry system and a scaled-up model valve, are used to estimate the shear stresses arising downstream of the valve, accounting for flow features at scales less than one order of magnitude larger than blood cells. Velocity data at an effective spatial and temporal resolution of 60 μm and 1.75 kHz, respectively, enabled accurate extraction of Lagrangian trajectories and the loading histories experienced by blood cells. Non-physiological stresses up to 10 Pa were detected, while the development of vortex flow in the wake of the valve was observed to significantly increase the exposure time, favouring platelet activation. The loading histories, combined with empirical models for blood damage, reveal that platelet activation and lysis are promoted at different stages of the heart cycle. Shear stress and blood damage estimates are shown to be sensitive to measurement resolution.

2. What Do the Hitomi Observations Tell Us About the Turbulent Velocities in the Perseus Cluster? Probing the Velocity Field with Mock Observations
Science.gov (United States)
ZuHone, J. A.; Miller, E. D.; Bulbul, E.; Zhuravleva, I.
2018-02-01
Hitomi made the first direct measurements of galaxy cluster gas motions in the Perseus cluster, which implied that its core is fairly "quiescent," with velocities less than ~200 km s^-1, despite the presence of an active galactic nucleus and sloshing cold fronts. Building on previous work, we use synthetic Hitomi/X-ray Spectrometer (SXS) observations of the hot plasma of a simulated cluster with sloshing gas motions and varying viscosity to analyze its velocity structure in a similar fashion. We find that sloshing motions can produce line shifts and widths similar to those measured by Hitomi. We find these measurements are unaffected by the value of the gas viscosity, since its effects are manifested clearly only on angular scales smaller than the SXS ~1' PSF. The PSF biases the line shift of regions near the core by as much as ~40-50 km s^-1, so it is crucial to model this effect carefully. We also infer that if sloshing motions dominate the observed velocity gradient, Perseus must be observed from a line of sight that is somewhat inclined from the plane of these motions, but one that still allows the spiral pattern to be visible. Finally, we find that assuming isotropy of motions can underestimate the total velocity and kinetic energy of the core in our simulation by as much as ~60%. However, the total kinetic energy in our simulated cluster core is still less than 10% of the thermal energy in the core, in agreement with the Hitomi observations.

3. Dynamical properties for the problem of a particle in an electric field of a wave packet: low-velocity and relativistic approaches
Energy Technology Data Exchange (ETDEWEB)
Oliveira, Diego F.M., E-mail: [email protected] [Institute for Multiscale Simulations, Friedrich-Alexander Universität, D-91052, Erlangen (Germany); Leonel, Edson D., E-mail: [email protected] [Departamento de Estatística, Matemática Aplicada e Computação, UNESP, Univ.
Estadual Paulista, Av. 24A, 1515, Bela Vista, 13506-900, Rio Claro, SP (Brazil); Departamento de Física, UNESP, Univ. Estadual Paulista, Av. 24A, 1515, 13506-900, Rio Claro, SP (Brazil)]
2012-11-01
We study some dynamical properties for the problem of a charged particle in an electric field, considering both the low-velocity and relativistic cases. The dynamics for both approaches is described in terms of a two-dimensional nonlinear mapping. The structure of the phase spaces is mixed, and we introduce a hole in the chaotic sea to let the particles escape. By changing the size of the hole, we show that the survival probability decays exponentially for both cases. Additionally, we show for the relativistic dynamics that the introduction of dissipation changes the mixed phase space and attractors appear. We study the parameter space by using the Lyapunov exponent and the average energy over the orbit, and show that the system has a very rich structure with an infinite family of self-similar shrimp shapes embedded in a chaotic region.

4. Transport analysis of rf drift-velocity filter employing crossed DC and AC electric fields for ion swarm experiments
International Nuclear Information System (INIS)
Iinuma, K.; Takebe, M.
1995-01-01
The operational characteristics of the RF drift-velocity filter developed to separate a mixture of gaseous ions are examined theoretically. The solutions of the appropriate transport equations provide an analytical formula for the transmission efficiency of the filter in terms of the mobility and diffusion coefficient of the ions, the electric field strength, the RF frequency and the filter dimension. Using the experimental transport data for Li+/Xe and Cs+/Xe, the formula was tested, and it was found that it adequately accounts for the degree of ion separation achieved by the filter at high gas pressures. The variation of the profiles of the arrival-time spectra for Li+, Na+ and Cs+ ions in CO2, obtained by drift-tube experiments, also supports this analysis. 4 refs., 10 figs.

5. Determination of Vertical Velocity Field of Southernmost Longitudinal Valley in Eastern Taiwan: A Joint Analysis of Leveling and GPS Measurements
Directory of Open Access Journals (Sweden)
Horng-Yue Chen
2012-01-01
In order to provide a detailed vertical velocity field in the southernmost Longitudinal Valley, where a complex three-fault system lies at the plate suture between the Philippine Sea plate and Eurasia, we conducted leveling and GPS measurements, compiled data from previous surveys, and combined them into a single data set. We compiled precise leveling results from 1984 to 2009, including five E-W-trending routes and one N-S-trending route. We calculated the GPS vertical component from 10 continuous stations and from 89 campaign-mode stations from 1995 to 2010. The interseismic vertical rates are estimated by removing the co- and post-seismic effects of major large regional and nearby earthquakes. A stable continuous station, S104, in the study area was adopted as the common reference station. We finally establish a map of the interseismic vertical velocity field. The interseismic vertical deformation is mainly accommodated by creeping/thrusting along two east-dipping strands of the three-fault system: the Luyeh and Lichi faults. The most dominant uplift, of 30 mm yr^-1, occurs on the hanging wall of the Lichi fault on the western Coastal Range; however, the rate diminishes away from the fault in the hanging wall.
The Quaternary tablelands inside the Longitudinal Valley reveal uplift at a rate of 5-10 mm yr-1. Outside of the tablelands, the rest of the Longitudinal Valley flat area indicates substantial subsidence of -10 to -20 mm yr-1. Finally, it appears that the west-dipping blind fault under the eastern side of the Central Range does not play a significant role in interseismic deformation, with a subsidence rate of -5 to -10 mm yr-1. 6. Coherent Baryogenesis CERN Document Server Garbrecht, B; Schmidt, M G; Garbrecht, Bjorn; Prokopec, Tomislav; Schmidt, Michael G. 2004-01-01 We propose a new baryogenesis scenario based on coherent production and mixing of different fermionic species. The mechanism is operative during phase transitions, at which the fermions acquire masses via Yukawa couplings to scalar fields. Baryon production is efficient when the mass matrix is nonadiabatically varying, nonsymmetric and when it violates CP and B-L directly, or some other charges that are eventually converted to B-L. We first consider a toy model, which involves two mixing fermionic species, and then a hybrid inflationary scenario embedded in a supersymmetric Pati-Salam GUT. We show that, quite generically, a baryon excess in accordance with observation can result. 7. Coherent hole burning and Mollow absorption effects in the cycling transition Fe=0↔Fg=1 subject to a magnetic field International Nuclear Information System (INIS) Gu Ying; Sun Qingqing; Gong Qihuang 2004-01-01 With saturation and probing by circularly polarized fields, quantum coherence effects are investigated for the cycling transition Fe=0↔Fg=1, which is subject to a linearly polarized field and a magnetic field. The saturation field is applied to the case of maximum coherence between the drive Rabi frequency and magnetic field, corresponding to the electromagnetically induced absorption (EIA) with negative dispersion found by Gu et al. For a small saturation Rabi frequency, holes are burned in two Autler–Townes peaks outside two symmetric electromagnetically induced transparency windows due to the two-photon resonance. However, when the saturation Rabi frequency is comparable with the drive Rabi frequency, holes caused by the coherent population oscillation appear in the EIA spectrum. Finally, when the saturation Rabi frequency is large enough, several emission peaks are observed due to the Mollow absorption effects. Furthermore, the dispersion at the pump-probe detuning center is kept negative with an increase in saturation field, which is a precursor of superluminal light propagation. 8. High resolution in-vivo imaging of skin with full field optical coherence tomography Science.gov (United States) Dalimier, E.; Bruhat, Alexis; Grieve, K.; Harms, F.; Martins, F.; Boccara, C. 2014-03-01 Full-field OCT (FFOCT) has the ability to provide en-face images with very good axial sectioning as well as very high transverse resolution (about 1 micron in all directions). Therefore it offers the possibility to visualize biological tissues with very high resolution both in the axial native view and in vertical reconstructed sections. Here we investigated the potential dermatological applications of in-vivo skin imaging with FFOCT. A commercial FFOCT device was adapted for the in-vivo acquisition of stacks of images on the arm, hand and finger. Several subjects with different benign and pathological skin conditions were examined.
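The abstract does not spell out the demodulation scheme of the commercial device; for orientation, here is a minimal sketch of the textbook four-frame phase-shifting demodulation often used in full-field OCT to extract the en-face tomographic amplitude (all arrays are synthetic stand-ins):

```python
# Four-frame phase-shifting demodulation for FF-OCT (illustrative only).
import numpy as np

def ffoct_amplitude(i1, i2, i3, i4):
    """Interferometric amplitude from four camera frames acquired at
    reference-arm phase steps of 0, pi/2, pi and 3*pi/2."""
    return np.sqrt((i1 - i3) ** 2 + (i2 - i4) ** 2)

frames = [np.random.rand(256, 256) for _ in range(4)]  # stand-in camera frames
enface = ffoct_amplitude(*frames)                      # en-face tomographic image
```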
The images allowed measurement of the stratum corneum and epidermis thicknesses, measurement of the stratum corneum refractive index, size measurement and count of the keratinocytes, visualization of the dermal-epidermal junction, and visualization of the melanin granules and of the melanocytes. Skins with different pigmentations could be discriminated, and skin pathologies such as eczema could be identified. The very high resolution offered by FFOCT both on axial native images and vertical reconstructed sections allows for the visualization and measurement of a set of parameters useful for cosmetology and dermatology. In particular, FFOCT is a potential tool for the understanding and monitoring of skin hydration and pigmentation, as well as skin inflammation. 9. Continuous control of light group velocity from subluminal to superluminal propagation with a standing-wave coupling field in a Rb vapor cell International Nuclear Information System (INIS) Bae, In-Ho; Moon, Han Seb 2011-01-01 We present the continuous control of the light group velocity from subluminal to superluminal propagation with an on-resonant standing-wave coupling field in the 5S1/2-5P1/2 transition of the Λ-type system of 87Rb atoms. When a coupling field was changed from a traveling-wave to a standing-wave field by adjusting the power of a counterpropagating coupling field, the probe pulse propagation continuously transformed from subluminal propagation, due to electromagnetically induced transparency with the traveling-wave coupling field, to superluminal propagation, due to narrow enhanced absorption with the standing-wave coupling field. The group velocity of the probe pulse was measured to be approximately 0.004c to -0.002c as a function of the disparity between the powers of the copropagating and the counterpropagating coupling fields. 10. Coherent and semiclassical states in a magnetic field in the presence of the Aharonov-Bohm solenoid Energy Technology Data Exchange (ETDEWEB) Bagrov, V G [Department of Physics, Tomsk State University, 634050 Tomsk (Russian Federation); Gavrilov, S P; Gitman, D M; Filho, D P Meira, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Institute of Physics, University of Sao Paulo, CP 66318, CEP 05315-970 Sao Paulo, SP (Brazil) 2011-02-04 A new approach to constructing coherent states (CS) and semiclassical states (SS) in a magnetic-solenoid field is proposed. The main idea is based on the fact that the AB solenoid breaks the translational symmetry in the xy-plane; this has a topological effect such that there appear two types of trajectories which embrace and do not embrace the solenoid. Due to this fact, one has to construct two different kinds of CS/SS which correspond to such trajectories in the semiclassical limit. Following this idea, we construct CS in two steps, first the instantaneous CS (ICS) and then the time-dependent CS/SS as an evolution of the ICS. The construction is realized for nonrelativistic and relativistic spinning particles both in (2 + 1) and (3 + 1) dimensions and gives a non-trivial example of SS/CS for systems with a nonquadratic Hamiltonian. It is stressed that CS depending on their parameters (quantum numbers) describe both pure quantum and semiclassical states. An analysis is presented that classifies the parameters of the CS in this respect. Such a classification is used for the semiclassical decompositions of various physical quantities.
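For orientation on the group-velocity figures quoted in the vapor-cell entry above (9): a measured probe-pulse delay through a cell of length L converts to a group velocity as sketched below. The cell length and delay are assumed, illustrative values, not the study's data:

```python
# Group velocity from a measured pulse delay (illustrative numbers).
c = 2.998e8      # speed of light in vacuum, m/s
L = 0.05         # vapor cell length, m (assumed)
dt = 3.0e-8      # pulse delay relative to vacuum propagation, s (assumed)

v_g = L / (L / c + dt)   # a sufficiently negative dt (pulse advance) yields v_g < 0
print(f"v_g = {v_g:.3e} m/s = {v_g / c:+.4f} c")
```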
11. Active feedback wide-field optical low-coherence interferometry for ultrahigh-speed three-dimensional morphometry International Nuclear Information System (INIS) Choi, Woo June; Choi, Hae Young; Lee, Byeong Ha; Na, Jihoon; Eom, Jonghyun 2010-01-01 A novel optical interferometric scheme for ultrahigh-speed three-dimensional morphometry is proposed. The system is based on wide-field optical coherence tomography (WF-OCT) but with optically chopped illumination. The chopping frequency is feedback-controlled to be always matched with the Doppler frequency of the OCT interferometer, which provides an efficient page-wide demodulation suitable for ultrahigh-speed volumetric imaging. To compensate for the unwanted variation in the OCT Doppler frequency of the system, the illumination frequency is phase-locked with an auxiliary laser interferometer which shares the reference arm with the OCT interferometer. The two-dimensional (2D) interference signals projected on the 2D array pixels of a 200 Hz CCD are accumulated during one imaging frame of the CCD. Then, each pixel of the CCD demodulates the OCT signal automatically. Owing to the proposed active frequency-locked illumination scheme, the demodulation does not depend on the variation in the axial scanning speed. Volumetric topograms and/or tomograms of several samples were achieved and rendered with a sensitivity of 58 dB at an axial scan speed of 0.805 mm s-1. 12. Electromagnetic spatial coherence wavelets International Nuclear Information System (INIS) Castaneda, R.; Garcia-Sucerquia, J. 2005-10-01 The recently introduced concept of spatial coherence wavelets is generalized for describing the propagation of electromagnetic fields in free space. For this aim, the spatial coherence wavelet tensor is introduced as an elementary quantity, in terms of which the formerly known quantities for this domain can be expressed. It allows analyzing the relationship between the spatial coherence properties and the polarization state of the electromagnetic wave.
This approach is completely consistent with the recently introduced unified theory of coherence and polarization for random electromagnetic beams, but it provides further insight into the causal relationship between the polarization states at different planes along the propagation path. (author) 13. 19 mm sized bileaflet valve prostheses' flow field investigated by bidimensional laser Doppler anemometry (part I: velocity profiles). Science.gov (United States) Barbaro, V; Grigioni, M; Daniele, C; D'Avenio, G; Boccanera, G 1997-11-01 The investigation of the flow field downstream of a cardiac valve prosthesis is a well established task. In particular, turbulence generation is of interest if damage to blood constituents is to be assessed. Several prosthetic valve flow studies are available in the literature, but they generally concern large-sized prostheses. The FDA draft guidance requires the study of the maximum Reynolds number conditions for a cardiac valve model, to assess the worst case in turbulence, by choosing both the minimum valve diameter and a high cardiac output value as the protocol set-up. Within the framework of a national research project regarding the characterization of cardiovascular endoprostheses, the Laboratory of Biomedical Engineering is currently conducting an in-depth study of turbulence generated downstream of bileaflet cardiac valves. Four models of 19 mm sized bileaflet valve prostheses, namely St Jude Medical HP, Edwards Tekna, Sorin Bicarbon, and CarboMedics, were studied in the aortic position. The prostheses were selected for the nominal annulus diameter reported by the manufacturers without any assessment of the valve sizing method. The hemodynamic function was investigated using a bidimensional LDA system. Results concern velocity profiles during the peak flow systolic phase, at a high cardiac output regime, highlighting the different flow field features downstream of the four small-sized cardiac valves. 14. Superradiance from an ultrathin film of three-level V-type atoms: interplay between splitting, quantum coherence and local-field effects International Nuclear Information System (INIS) Malyshev, V A; Carreno, F; Anton, M A; Calderon, Oscar G; Dominguez-Adame, F 2003-01-01 We carry out a theoretical study of the collective spontaneous emission (superradiance) from an ultrathin film comprised of three-level atoms with a V configuration of the operating transitions. As the thickness of the system is small compared to the emission wavelength inside the film, the local-field correction to the averaged Maxwell field is relevant. We show that the interplay between the low-frequency quantum coherence within the subspace of the upper doublet states and the local-field correction may drastically affect the branching ratio of the operating transitions. This effect may be used for controlling the emission process by varying the doublet splitting and the amount of low-frequency coherence. 15. Can Probability Maps of Swept-Source Optical Coherence Tomography Predict Visual Field Changes in Preperimetric Glaucoma? Science.gov (United States) Lee, Won June; Kim, Young Kook; Jeoung, Jin Wook; Park, Ki Ho 2017-12-01 To determine the usefulness of swept-source optical coherence tomography (SS-OCT) probability maps in detecting locations with significant reduction in visual field (VF) sensitivity or predicting future VF changes, in patients with classically defined preperimetric glaucoma (PPG).
In this longitudinal study, 43 eyes of 43 PPG patients, followed up every 6 months for at least 2 years, were analyzed. The patients underwent wide-field SS-OCT scanning and standard automated perimetry (SAP) at the time of enrollment. With this wide-scan protocol, probability maps originating from the corresponding thickness map and overlapped with SAP VF test points could be generated. We evaluated the vulnerable VF points with SS-OCT probability maps as well as the prevalence of locations with significant VF reduction or subsequent VF changes observed in the corresponding damaged areas of the probability maps. The vulnerable VF points formed superior and inferior arcuate patterns near the central fixation. In 19 of 43 PPG eyes (44.2%), significant reduction in baseline VF was detected within the areas of structural change on the SS-OCT probability maps. In 16 of 43 PPG eyes (37.2%), subsequent VF changes within the areas of SS-OCT probability map change were observed over the course of the follow-up. Structural changes on SS-OCT probability maps could detect or predict VF changes using SAP, in a considerable number of PPG eyes. Careful comparison of probability maps with SAP results could be useful in diagnosing and monitoring PPG patients in the clinical setting. 16. Particle transport across a circular shear layer with coherent structures International Nuclear Information System (INIS) Nielsen, A.H.; Lynov, J.P.; Juul Rasmussen, J. 1998-01-01 In the study of the dynamics of coherent structures, forced circular shear flows offer many desirable features. The inherent quantisation of circular geometries due to the periodic boundary conditions makes it possible to design experiments in which the spatial and temporal complexity of the coherent structures can be accurately controlled. Experiments on circular shear flows demonstrating the formation of coherent structures have been performed in different physical systems, including quasi-neutral plasmas, non-neutral plasmas and rotating fluids. In this paper we investigate the evolution of such coherent structures by solving the forced incompressible Navier-Stokes equations numerically using a spectral code. The model is formulated in the context of a rotating fluid but applies equally well to low frequency electrostatic oscillations in a homogeneous magnetized plasma. In order to reveal the Lagrangian properties of the flow and in particular to investigate the transport capacity in the shear layer, passive particles are traced by the velocity field. (orig.) 17. Spectral coherence in windturbine wakes Energy Technology Data Exchange (ETDEWEB) Hojstrup, J. [Riso National Lab., Roskilde (Denmark)] 1996-12-31 This paper describes an experiment at a Danish wind farm to investigate the lateral and vertical coherences in the nonequilibrium turbulence of a wind turbine wake. Two meteorological masts were instrumented for measuring profiles of mean speed, turbulence, and temperature. Results are provided graphically for turbulence intensities, velocity spectra, lateral coherence, and vertical coherence. The turbulence was somewhat influenced by the wake, or possibly by aggregated wakes further upstream, even at 14.5 diameters. Lateral coherence (separation 5 m) seemed to be unaffected by the wake at 7.5 diameters, but the flow was less coherent in the near wake. The wake appeared to have little influence on vertical coherence (separation 13 m).
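Lateral and vertical coherence estimates of the kind reported above are commonly computed as the magnitude-squared coherence between two anemometer records via Welch's method; a minimal sketch with synthetic signals (sampling rate and record length are assumed):

```python
# Magnitude-squared coherence between two turbulence records (synthetic data).
import numpy as np
from scipy.signal import coherence

fs = 20.0                                # sampling rate, Hz (assumed)
n = int(600 * fs)                        # 10 minutes of data
shared = np.random.randn(n)              # common (coherent) turbulence component
u1 = shared + 0.5 * np.random.randn(n)   # sensor 1, e.g. at 5 m separation
u2 = shared + 0.5 * np.random.randn(n)   # sensor 2
f, coh = coherence(u1, u2, fs=fs, nperseg=1024)   # coh(f) lies in [0, 1]
```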
Simple, conventional models for coherence appeared to be adequate descriptions for wake turbulence except for the near wake situation. 3 refs., 7 figs., 1 tab. 18. Experimental study of coherence vortices: Local properties of phase singularities in a spatial coherence function DEFF Research Database (Denmark) Wang, W.; Duan, Z.H.; Hanson, Steen Grüner 2006-01-01 By controlling the irradiance of an extended quasimonochromatic, spatially incoherent source, an optical field is generated that exhibits spatial coherence with phase singularities, called coherence vortices. A simple optical geometry for direct visualization of coherence vortices is proposed, an... 19. Effective ionization coefficients, electron drift velocities, and limiting breakdown fields for gas mixtures of possible interest to particle detectors International Nuclear Information System (INIS) Datskos, P.G. 1991-01-01 We have measured the gas-density, N, normalized effective ionization coefficient, ᾱ/N, and the electron drift velocity, w, as a function of the density-reduced electric field, E/N, and obtained the limiting, (E/N)lim, value of E/N for the unitary gases Ar, CO2, and CF4, the binary gas mixtures CO2:Ar (20:80), CO2:CH4 (20:80), and CF4:Ar (20:80), and the ternary gas mixtures CO2:CF4:Ar (10:10:80) and H2O:CF4:Ar (2:18:80). Addition of the strongly electron-thermalizing gas CO2 or H2O to the binary mixture CF4:Ar (1) "cools" the mixture (i.e., lowers the electron energies), (2) has only a small effect on the magnitude of w(E/N) in the E/N range employed in the particle detectors, and (3) increases ᾱ/N for E/N ≥ 50 × 10-17 V cm2. The increase in ᾱ/N, even though the electron energies are lower in the ternary mixture, is due to the Penning ionization of CO2 (or H2O) in collisions with excited Ar* atoms. The ternary mixtures -- being fast, cool, and efficient -- have potential for advanced gas-filled particle detectors such as those for the SCC muon chambers. 17 refs., 8 figs., 1 tab 1. Relationship between optical coherence tomography sector peripapillary angioflow-density and Octopus visual field cluster mean defect values. Directory of Open Access Journals (Sweden) Gábor Holló Full Text Available To compare the relationship of Octopus perimeter cluster mean-defect (cluster MD) values with the spatially corresponding optical coherence tomography (OCT) sector peripapillary angioflow vessel-density (PAFD) and sector retinal nerve fiber layer thickness (RNFLT) values. High-quality PAFD and RNFLT images acquired on the same day with the Angiovue/RTVue-XR Avanti OCT (Optovue Inc., Fremont, USA) on 1 eye of 27 stable early-to-moderate glaucoma patients, 22 medically controlled ocular hypertensive patients and 13 healthy participants were analyzed. The Octopus G2 normal visual field test was made within 3 months of the imaging. Total peripapillary PAFD and RNFLT showed similar strong positive correlation with global mean sensitivity (r-values: 0.6710 and 0.6088, P<0.0001) and similar (P = 0.9614) strong negative correlation (r-values: -0.4462 and -0.4412, P≤0.004) with global MD. Both inferotemporal and superotemporal sector PAFD were significantly (P≤0.039) lower in glaucoma than in the other groups. No significant difference between the corresponding inferotemporal and superotemporal parameters was seen.
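The r-values and significance levels quoted in this entry are plain Pearson correlations; a minimal sketch with synthetic stand-in data (sample size and values are hypothetical, not the study's):

```python
# Pearson correlation and coefficient of determination (synthetic data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
pafd = rng.uniform(20, 60, size=62)                   # hypothetical sector PAFD
cluster_md = -0.2 * pafd + rng.normal(0, 2, size=62)  # hypothetical cluster MD, dB
r, p = pearsonr(pafd, cluster_md)
print(f"r = {r:.4f}, R^2 = {r**2:.4f}, P = {p:.2g}")
```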
The coefficient of determination (R²) calculated for the relationship between inferotemporal sector PAFD and superotemporal cluster MD (0.5141, P<0.0001) was significantly greater than that between inferotemporal sector RNFLT and superotemporal cluster MD (0.2546, P = 0.0001). The R² values calculated for the relationships between superotemporal sector PAFD and RNFLT, and inferotemporal cluster MD, were similar (0.3747 and 0.4037, respectively; P<0.0001). In the current population the relationship between inferotemporal sector PAFD and superotemporal cluster MD was strong. It was stronger than that between inferotemporal sector RNFLT and superotemporal cluster MD. Further investigations are necessary to clarify if our results are valid for other populations and can be usefully applied for glaucoma research. 2. Validation of full-field optical coherence tomography in distinguishing malignant and benign tissue in resected pancreatic cancer specimens. Directory of Open Access Journals (Sweden) Labrinus van Manen Full Text Available Pancreatic cancer is the fourth leading cause of cancer-related mortality in the United States. Only a minority of patients can undergo curative-intent surgical therapy, due to the advanced disease stage at the time of diagnosis. Nonetheless, tumor involvement of surgical margins is seen in up to 70% of resections, being a strong negative prognostic factor. Real-time intraoperative imaging modalities may aid surgeons in obtaining tumor-free resection margins. Full-field optical coherence tomography (FF-OCT) is a promising diagnostic tool using high-resolution white-light interference microscopy without tissue processing. Therefore, we composed an atlas of FF-OCT images of malignant and benign pancreatic tissue, and investigated the accuracy with which the pathologists could distinguish these. One hundred FF-OCT images were collected from specimens of 29 patients who underwent pancreatic resection for various indications between 2014 and 2016. One experienced gastrointestinal pathologist and one pathologist in training independently scored the FF-OCT images as malignant or benign, blinded to the final pathology conclusion. Results were compared to those obtained with standard hematoxylin and eosin (H&E) slides. Overall, the combined test characteristics of both pathologists showed a sensitivity of 72%, a specificity of 74%, a positive predictive value of 69%, a negative predictive value of 79% and an overall accuracy of 73%. In the subset of pancreatic ductal adenocarcinoma patients, 97% of the FF-OCT images (n = 35) were interpreted as tumor by at least one pathologist. Moreover, normal pancreatic tissue was recognised in all cases by at least one pathologist. However, atrophy and fibrosis, serous cystadenoma and neuroendocrine tumors were more often wrongly scored, in 63%, 100% and 25% of cases, respectively. FF-OCT could distinguish normal pancreatic tissue from pathologic pancreatic tissue in both processed and non-processed specimens using architectural features. The accuracy in 3. Glaucoma Diagnostic Capabilities of Foveal Avascular Zone Parameters Using Optical Coherence Tomography Angiography According to Visual Field Defect Location. Science.gov (United States) Kwon, Junki; Choi, Jaewan; Shin, Joong Won; Lee, Jiyun; Kook, Michael S 2017-12-01 To assess the diagnostic ability of foveal avascular zone (FAZ) parameters to discriminate glaucomatous eyes with visual field defects (VFDs) in different locations (central vs. peripheral) from normal eyes.
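The sensitivity, specificity, predictive values and accuracy quoted in the pancreatic FF-OCT validation above follow directly from a 2x2 confusion matrix; a minimal sketch with hypothetical counts chosen only to land near the quoted percentages (they are not the study's tallies):

```python
# Diagnostic test characteristics from a 2x2 confusion matrix (hypothetical counts).
tp, fn = 36, 14        # malignant images: correctly / incorrectly scored
fp, tn = 16, 46        # benign images: incorrectly / correctly scored

sensitivity = tp / (tp + fn)                # ~0.72
specificity = tn / (tn + fp)                # ~0.74
ppv = tp / (tp + fp)                        # positive predictive value, ~0.69
npv = tn / (tn + fn)                        # negative predictive value, ~0.77
accuracy = (tp + tn) / (tp + fn + fp + tn)  # ~0.73
print(sensitivity, specificity, ppv, npv, accuracy)
```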
In total, 125 participants were separated into 3 groups: normal (n=45), glaucoma with peripheral VFD (PVFD, n=45), and glaucoma with central VFD (CVFD, n=35). The FAZ area, perimeter, and circularity and the parafoveal vessel density were calculated from optical coherence tomography angiography images. The diagnostic ability of the FAZ parameters and other structural parameters was determined according to glaucomatous VFD location. Associations between the FAZ parameters and central visual function were evaluated. A larger FAZ area and a longer FAZ perimeter were observed in the CVFD group than in the PVFD and normal groups. The FAZ area, perimeter, and circularity were better at differentiating glaucomatous eyes with CVFDs from normal eyes [areas under the receiver operating characteristic curves (AUC), 0.78 to 0.88] than at differentiating PVFDs from normal eyes (AUC, 0.51 to 0.64). The FAZ perimeter had a similar AUC value to the circumpapillary retinal nerve fiber layer and macular ganglion cell-inner plexiform layer thickness for differentiating eyes with CVFDs from normal eyes (all P>0.05, DeLong test). The FAZ area was significantly correlated with central visual function (β=-112.7, P=0.035, multivariate linear regression). The FAZ perimeter had good diagnostic capability in differentiating glaucomatous eyes with CVFDs from normal eyes, and may be a potential diagnostic biomarker for detecting glaucomatous patients with CVFDs. 4. 3D velocity field characterization of prosthetic heart valve with two different valve testers by means of stereo-PIV. Science.gov (United States) D'Avenio, Giuseppe; Grigioni, Mauro; Daniele, Carla; Morbiducci, Umberto; Hamilton, Kathrin 2015-01-01 Prosthetic heart valves can be associated with mechanical loading of blood, potentially linked to complications (hemolysis and thrombogenicity) which can be clinically relevant. In order to test such devices in pulsatile mode, pulse duplicators (PDs) have been designed and built according to different concepts. This study was carried out to compare anemometric measurements made on the same prosthetic device with two widely used PDs. The valve (a 27-mm bileaflet valve) was mounted in the aortic section of the PD. The Sheffield University PD and the RWTH Aachen PD were selected as physical models of the circulation. These two PDs differ mainly in the vertical vs horizontal realization, and in the ventricular section, which in the RWTH PD allows for storage of potential energy in the elastic walls of the ventricle. A glassblown aorta, realized according to the geometric data of the same anatomical district in healthy individuals, was positioned downstream of the valve, obtaining 1:1 geometric similarity conditions. A NaI-glycerol-water solution of suitable kinematic viscosity and, at the same time, the proper refractive index, was selected. The flow field downstream of the valve was measured by means of the stereo-PIV (Particle Image Velocimetry) technique, capable of providing the complete 3D velocity field as well as the entire Reynolds stress tensor. The measurements were carried out at the plane intersecting the valve axis. A three-jet profile was clearly found in the plane crossing the leaflets, with both PDs. The extent of the typical recirculation zone in the Valsalva sinus was much larger in the RWTH PD, on account of the different duration of the swirling motion in the ventricular chamber, caused by the elasticity of the ventricle and its geometry.
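Since stereo-PIV yields all three velocity components, the Reynolds stress tensor mentioned above can be formed as the ensemble covariance of the velocity fluctuations at each measurement point; a minimal sketch (synthetic samples, assumed fluid density):

```python
# Reynolds stress tensor from an ensemble of 3D velocity samples at one point.
import numpy as np

samples = np.random.randn(500, 3)        # stand-in (u, v, w) realizations, m/s
fluct = samples - samples.mean(axis=0)   # velocity fluctuations about the mean
rho = 1060.0                             # blood-analog fluid density, kg/m^3 (assumed)
tau = rho * (fluct[:, :, None] * fluct[:, None, :]).mean(axis=0)  # 3x3 tensor, Pa
```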
The comparison of the hemodynamical behaviour of the same bileaflet valve tested in two PDs demonstrated the role of the mock loop in affecting the valve performance. 5. Field measurements of horizontal forward motion velocities of terrestrial dust devils: Towards a proxy for ambient winds on Mars and Earth Science.gov (United States) Balme, M. R.; Pathare, A.; Metzger, S. M.; Towner, M. C.; Lewis, S. R.; Spiga, A.; Fenton, L. K.; Renno, N. O.; Elliott, H. M.; Saca, F. A.; Michaels, T. I.; Russell, P.; Verdasca, J. 2012-11-01 Dust devils - convective vortices made visible by the dust and debris they entrain - are common in arid environments and have been observed on Earth and Mars. Martian dust devils have been identified both in images taken at the surface and in remote sensing observations from orbiting spacecraft. Observations from landing craft and orbiting instruments have allowed the dust devil translational forward motion (ground velocity) to be calculated, but it is unclear how these velocities relate to the local ambient wind conditions, for (i) only model wind speeds are generally available for Mars, and (ii) on Earth only anecdotal evidence exists that compares dust devil ground velocity with ambient wind velocity. If dust devil ground velocity can be reliably correlated with the ambient wind regime, observations of dust devils could provide a proxy for wind speed and direction measurements on Mars. Hence, dust devil ground velocities could be used to probe the circulation of the martian boundary layer and help constrain climate models or assess the safety of future landing sites. We present results from a field study of terrestrial dust devils performed in the southwest USA in which we measured dust devil horizontal velocity as a function of ambient wind velocity. We acquired stereo images of more than 100 active dust devils and recorded multiple size and position measurements for each dust devil. We used these data to calculate dust devil translational velocity. The dust devils were within a study area bounded by 10 m high meteorology towers such that dust devil speed and direction could be correlated with the local ambient wind speed and direction measurements. Daily (10:00-16:00 local time) and 2-h averaged dust devil ground speeds correlate well with ambient wind speeds averaged over the same period. Unsurprisingly, individual measurements of dust devil ground speed match instantaneous measurements of ambient wind speed more poorly; a 20-min smoothing window applied to 6. Second-order Monte Carlo wave-function approach to the relaxation effects on ringing revivals in a molecular system interacting with a strongly squeezed coherent field International Nuclear Information System (INIS) Nakano, Masayoshi; Kishi, Ryohei; Nitta, Tomoshige; Yamaguchi, Kizashi 2004-01-01 We investigate the relaxation effects on the quantum dynamics in a two-state molecular system interacting with a single-mode strongly amplitude-squeezed coherent field using the second-order Monte Carlo wave-function method. The molecular population inversion (collapse-revival behavior of Rabi oscillations) is known to show echoes after each revival, which are referred to as ringing revivals, in the case of strongly squeezed coherent fields with oscillatory photon-number distributions due to the phase-space interference effect.
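For orientation, collapse-revival behavior of the population inversion can be reproduced already in the lossless Jaynes-Cummings model with a coherent (Poissonian) field; this is a simplified stand-in for the squeezed-field, relaxing system studied above, with assumed coupling and mean photon number:

```python
# Collapse and revival of Rabi oscillations, lossless Jaynes-Cummings model.
import numpy as np

g, nbar = 1.0, 25.0                 # coupling and mean photon number (assumed)
n = np.arange(200)
log_fact = np.array([np.sum(np.log(np.arange(1, k + 1))) for k in n])
p = np.exp(n * np.log(nbar) - nbar - log_fact)   # Poisson photon statistics
t = np.linspace(0, 50, 5000)
# Inversion: weighted sum of Rabi oscillations at frequencies 2*g*sqrt(n+1).
w = np.array([np.sum(p * np.cos(2 * g * np.sqrt(n + 1) * tt)) for tt in t])
```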
Two types of relaxation effects on ringing revivals, i.e., cavity relaxation (the dissipation of the internal single mode to outer modes) and molecular coherent (phase) relaxation caused by nuclear vibrations, are investigated from the viewpoint of the quantum-phase dynamics using the quasiprobability (Q function) distribution of a single-mode field and the off-diagonal molecular density matrix ρ^elec_{1,2}(t). It turns out that the molecular phase relaxation attenuates both the entire revival-collapse behavior and the increase in ρ^elec_{1,2}(t) during the quiescent region, whereas a very slight cavity relaxation suppresses the echoes in ringing revivals more significantly than the first revival, but hardly changes the primary variation in the envelope of ρ^elec_{1,2}(t) found in the nonrelaxation case. 7. Visualisation of the velocity field in a scaled water model for validation of numerical calculations for a powder fuelled boiler Energy Technology Data Exchange (ETDEWEB) Dumortier, Laurent [Luleaa Univ. of Technology (Sweden)] 2001-01-01 Validation of numerical predictions of the flow field in a powder-fired industry boiler by flow visualisation in a water model has been studied. The bark powder fired boiler at AssiDomaen Kraftliner in Piteaa has been used as a case study. A literature study covering modelling of combusting flows by water models and different flow visualisation techniques has been carried out. The main conclusion as regards the use of water models is that only qualitative information can be expected. As long as turbulent flow is assured in the model as well as in the real furnace, the same Reynolds number is not required. Geometrical similarity is important, but modelling of burner jets requires adaptation of the jet diameters in the model. Guidelines for this are available and are presented in the report. The review of visualisation techniques shows that a number of methods have been used successfully for validation of flow field predictions. The conclusion is that the Particle Image Velocimetry and Particle Tracking Velocimetry methods could be very suitable for validation purposes provided that optical access is possible. The numerical predictions include flow fields in a 1:130 scale model of the AssiDomaen furnace with water flow as well as flow and temperature fields in the actual furnace. Two burner arrangements were considered both for the model and the actual furnace, namely the present configuration with four front burners and a proposed modification where an additional burner is positioned at a side wall below the other burners. There are many similarities between the predicted flow fields in the model and the full scale furnace but there are also some differences, in particular in the region above the burners and in the effects of the lower-region recirculation on the lower burner jets. The experiments with the water model have only included the arrangement with four front burners. There were problems determining the velocities in the jets and the comparisons with predictions are 8. IN-SYNC VI. Identification and Radial Velocity Extraction for 100+ Double-Lined Spectroscopic Binaries in the APOGEE/IN-SYNC Fields Science.gov (United States) Fernandez, M. A.; Covey, Kevin R.; De Lee, Nathan; Chojnowski, S. Drew; Nidever, David; Ballantyne, Richard; Cottaar, Michiel; Da Rio, Nicola; Foster, Jonathan B.; Majewski, Steven R.; Meyer, Michael R.; Reyna, A. M.; Roberts, G.
W.; Skinner, Jacob; Stassun, Keivan; Tan, Jonathan C.; Troup, Nicholas; Zasowski, Gail 2017-08-01 We present radial velocity measurements for 70 high-confidence and 34 potential binary systems in fields containing the Perseus Molecular Cloud, Pleiades, NGC 2264, and the Orion A star-forming region. Eighteen of these systems have been previously identified as binaries in the literature. Candidate double-lined spectroscopic binaries (SB2s) are identified by analyzing the cross-correlation functions (CCFs) computed during the reduction of each APOGEE spectrum. We identify sources whose CCFs are well fit as the sum of two Lorentzians as likely binaries, and provide an initial characterization of the system based on the radial velocities indicated by that dual fit. For systems observed over several epochs, we present mass ratios and systemic velocities; for two systems with observations on eight or more epochs, and which meet our criteria for robust orbital coverage, we derive initial orbital parameters. The distribution of mass ratios for multi-epoch sources in our sample peaks at q = 1, but with a significant tail toward lower q values. Tables reporting radial velocities, systemic velocities, and mass ratios are provided online. We discuss future improvements to the radial velocity extraction method we employ, as well as limitations imposed by the number of epochs currently available in the APOGEE database. The Appendix contains brief notes from the literature on each system in the sample, and more extensive notes for select sources of interest. 9. The effect of coherent phase control in (e, 2e) process of a hydrogenic ion in presence of a two colour laser field Energy Technology Data Exchange (ETDEWEB) Sinha, C; Chattopadhyay, A; Deb, S Ghosh, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Theoretical Physics Department, Indian Association for the Cultivation of Science, Kolkata - 700032 (India)] 2009-11-01 We study the coherent phase control (CPC) of ionization (TDCS) of a hydrogenic ion under the action of a bichromatic laser field consisting of a fundamental frequency and one of its harmonics, by varying the phase difference between the two components. A significant reduction is noted, particularly in the recoil peak (dominant for the ionic target), with the two-colour field as compared to the monochromatic one, while the modification of the binary peak is comparatively smaller. The CPC is found to be more effective for the odd harmonics than the even one. 10. Manipulation of incoherent and coherent spin ensembles in diluted magnetic semiconductors via ferromagnetic fringe fields; Manipulation inkohaerenter und kohaerenter Spinensembles in verduennt-magnetischen Halbleitern mittels ferromagnetischer Streufelder Energy Technology Data Exchange (ETDEWEB) Halm, Simon 2009-05-19 In this thesis it is demonstrated that fringe fields of nanostructured ferromagnets provide the opportunity to manipulate both incoherent and coherent spin ensembles in a dilute magnetic semiconductor (DMS). Fringe fields of Fe/Tb ferromagnets with a remanent out-of-plane magnetization induce a local magnetization in a (Zn,Cd,Mn)Se DMS. Due to the sp-d exchange interaction, optically generated electron-hole pairs align their spin along the DMS magnetization. One obtains a local, remanent spin polarization which was probed by spatially resolved, polarization-sensitive photoluminescence spectroscopy.
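The dual-Lorentzian CCF fit described in the IN-SYNC entry above can be sketched with a standard least-squares fit; the CCF below is synthetic, and the initial guesses are illustrative:

```python
# Fit a cross-correlation function as a sum of two Lorentzians (synthetic CCF).
import numpy as np
from scipy.optimize import curve_fit

def two_lorentzians(v, a1, v1, w1, a2, v2, w2):
    return a1 / (1 + ((v - v1) / w1) ** 2) + a2 / (1 + ((v - v2) / w2) ** 2)

v = np.linspace(-150, 150, 301)                        # velocity lag axis, km/s
ccf = two_lorentzians(v, 1.0, -30, 15, 0.7, 40, 15)    # stand-in SB2 signature
ccf += 0.02 * np.random.randn(v.size)                  # noise
popt, _ = curve_fit(two_lorentzians, v, ccf, p0=[1, -20, 10, 0.5, 30, 10])
rv1, rv2 = popt[1], popt[4]                            # the two component RVs
```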
Fringe fields from in-plane magnetized Co ferromagnets make it possible to locally modify the precession frequency of the manganese magnetic moments of the DMS in an external magnetic field. This was probed by the time-resolved Kerr rotation technique. The inhomogeneity of the fringe field leads to a shortening of the ensemble decoherence time and to the effect of a time-dependent ensemble precession frequency. (orig.) 11. Velocity field measurements of flow inside snout of zinc plating process using a single-frame PIV technique Energy Technology Data Exchange (ETDEWEB) Lee, S.J. 2000-05-01 In the continuous hot-dip galvanizing process of steel strips, a snout is installed at the region where the feeding strip enters the molten zinc (Zn) pot. However, evaporated Zn particles in the snout cause ash imperfections on the galvanized steel strip surface. In order to resolve this problem, the flow field inside the snout, comprising both the deoxidisation gas flow above the free surface and the molten Zn flow in the Zn pot, has been investigated experimentally. For a 1/10 scale water model, flow visualization and PIV (Particle Image Velocimetry) velocity field measurements were carried out at the strip speed Vs = 1.5 m/s. Aluminum flakes (1 μm) and atomized olive oil (3 μm) were used as seeding particles to simulate the molten Zn flow and the deoxidisation gas flow, respectively. As a result, the liquid flow in the Zn pot is dominantly influenced by the up-rising diagonal flow caused by the rotating sink roll. For the gas flow in front of the strip inside the snout, the large-scale vortex formed by the downward moving strip is dominant. On the rear side of the strip, a counterclockwise vortex is formed and some of the flow following the moving strip impinges on the free surface of the molten Zn. The liquid flow in front of the strip is governed by the up-rising flow entering the snout, caused by the rotating sink roll. The moving strip dominantly affects the liquid flow behind the strip inside the snout, and large amounts of liquid are entrained and follow the moving strip toward the sink roll. A thin boundary layer is formed on the front side due to the up-rising flow; however, a relatively thick boundary layer is formed on the rear side of the strip. Inside the snout, the deoxidisation gas flow above the free surface is much faster than the liquid flow in the Zn pot. More ash imperfections are anticipated on the rear surface of the strip, where a larger influx moves toward the strip in the region near the free surface. (author) 12. Metal Abundances, Radial Velocities, and Other Physical Characteristics for the RR Lyrae Stars in The Kepler Field Science.gov (United States) Nemec, James M.; Cohen, Judith G.; Ripepi, Vincenzo; Derekas, Aliz; Moskalik, Pawel; Sesar, Branimir; Chadid, Merieme; Bruntt, Hans 2013-08-01 Spectroscopic iron-to-hydrogen ratios, radial velocities, atmospheric parameters, and new photometric analyses are presented for 41 RR Lyrae stars (and one probable high-amplitude δ Sct star) located in the field-of-view of the Kepler space telescope. Thirty-seven of the RR Lyrae stars are fundamental-mode pulsators (i.e., RRab stars), of which sixteen exhibit the Blazhko effect. Four of the stars are multiperiodic RRc pulsators oscillating primarily in the first-overtone mode.
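The P-φ31-[Fe/H] relations referred to later in this entry rest on Fourier decomposition of the folded light curve; a minimal sketch of extracting φ31 = φ3 - 3φ1 from a sine-series fit (synthetic light curve, illustrative coefficients):

```python
# Fourier phase parameter phi31 from a folded light curve (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def fourier3(phase, a0, a1, p1, a2, p2, a3, p3):
    return (a0 + a1 * np.sin(2 * np.pi * phase + p1)
               + a2 * np.sin(4 * np.pi * phase + p2)
               + a3 * np.sin(6 * np.pi * phase + p3))

phase = np.linspace(0.0, 1.0, 200)
mag = fourier3(phase, 15.0, 0.40, 1.0, 0.20, 2.0, 0.10, 2.5)   # stand-in curve
popt, _ = curve_fit(fourier3, phase, mag, p0=[15, 0.3, 0.5, 0.1, 1.5, 0.05, 2.0])
phi31 = (popt[6] - 3 * popt[2]) % (2 * np.pi)   # phi3 - 3*phi1, wrapped
```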
Spectroscopic [Fe/H] values for the 34 stars for which we were able to derive estimates range from -2.54 ± 0.13 (NR Lyr) to -0.05 ± 0.13 dex (V784 Cyg), and for the 19 Kepler-field non-Blazhko stars studied by Nemec et al. the abundances agree well with their photometric [Fe/H] values. Four non-Blazhko RR Lyrae stars that they identified as metal-rich (KIC 6100702, V2470 Cyg, V782 Cyg and V784 Cyg) are confirmed as such, and four additional stars (V839 Cyg, KIC 5520878, KIC 8832417, KIC 3868420) are also shown here to be metal-rich. Five of the non-Blazhko RRab stars are found to be more metal-rich than [Fe/H] ~ -0.9 dex, while all of the 16 Blazhko stars are more metal-poor than this value. New P-φ31^s-[Fe/H] relationships are derived based on ~970 days of quasi-continuous high-precision Q0-Q11 long- and short-cadence Kepler photometry. With the exception of some Blazhko stars, the spectroscopic and photometric [Fe/H] values are in good agreement. Several stars with unique photometric characteristics are identified, including a Blazhko variable with the smallest known amplitude and frequency modulations (V838 Cyg). Based in part on observations made at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Keck Observatory was made possible by the generous financial support of the W.M. Keck Foundation. Also, based in part on 13. Characterization of velocity and temperature fields in a 217 pin wire wrapped fuel bundle of sodium cooled fast reactor International Nuclear Information System (INIS) Naveen Raj, M.; Velusamy, K. 2016-01-01 Highlights: • We simulate flow and temperature fields in a fuel subassembly of a fast reactor. • We perform high-fidelity computations for a 217 pin bundle of 7 axial pitch lengths. • We investigate transverse and axial flows in different types of subchannels. • Correlations are proposed for transverse flow, which form an input for subchannel analysis. • Periodic variations of large magnitude are observed in subchannel flow rates. - Abstract: RANS based computational fluid dynamic (CFD) simulation of flow and temperature fields in a fast reactor fuel subassembly has been carried out. The sodium cooled prototype subassembly consists of 217 pins with helical wire spacers. An axial length of seven helical wire pitches has been considered for the study, adopting a structured mesh having 36 million points and using 84 processors in parallel. The computational model has been validated against in-house and published experimental data for friction factor and Nusselt number. Also, the transverse flow in the central subchannel and swirl flow in the peripheral subchannel are compared against reported experimental data and those computed by subchannel models. The focus of the study is investigation of transverse and axial flows in different types of subchannels. Based on the 3-dimensional CFD study, correlations have been proposed for calculation of transverse flow, which forms an important input for development of subchannel analysis codes. Periodic variations have been observed in the subchannel axial flow rates. For the subchannels located in the central region, the peak-to-peak variation in the axial flow rate is ∼21%, and it is found to be caused by the changes in flow area and hydraulic resistance due to the frequent passage of helical wires through the subchannel.
For the subchannels located in the periphery, this variation is as high as 50%. The transverse flow in the central subchannels follows a cosine profile for all the faces. However, there is a phase lag of 120 14. Estimation of vector velocity DEFF Research Database (Denmark) 2000-01-01 Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new...
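The truncated entry above refers to a modified autocorrelation approach; the unmodified, classical lag-one (Kasai) autocorrelation estimator for the axial velocity component can be sketched as follows (IQ data, center frequency and PRF are assumed stand-ins):

```python
# Classical Kasai autocorrelation velocity estimator (axial component only).
import numpy as np

c0, f0, prf = 1540.0, 5.0e6, 5.0e3   # sound speed (m/s), Tx frequency, PRF (assumed)
iq = np.exp(1j * 2 * np.pi * 0.1 * np.arange(32))     # stand-in slow-time IQ samples
r1 = np.mean(iq[1:] * np.conj(iq[:-1]))               # lag-one autocorrelation
v_axial = c0 * prf * np.angle(r1) / (4 * np.pi * f0)  # ~0.077 m/s for this input
```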
Definition of the local fields of velocity, temperature and turbulent characteristics for axial stabilized fluid in arbitrary formed rod bundle assemblies International Nuclear Information System (INIS) Sedov, A.A.; Gagin, V.L. 1995-01-01 For the temperature fields in rod clads of experimental assemblies a good agreement have been got with use of prior calculations by subchannel code COBRA-IV-I, from results of which an additional information about δt/δX 3 distribution was taken. The method of definition the local fields of velocity, turbulent kinetic energy, temperature and eddy diffusivities for one-phase axial stabilized fluids in arbitrary formed rod bundle assemblies with invariable upward geometry was developed. According to this model the AGURA code was worked out to calculate local thermal hydraulic problems in combination with temperature fields in fuel rods and constructive elements of fuel assemblies. The method does not use any prior geometric scales and is based only on invariant local flow parameters: turbulent kinetic energy, velocity field deformation tensor and specific work of inner friction. Verification of this method by available experimental data showed a good agreement of calculation data and findings of velocity and t.k.e. fields, when the secondary flows have not a substantial influence to a balance of axial momentum and turbulent kinetic energy. (author) 19. Copernicus Big Data and Google Earth Engine for Glacier Surface Velocity Field Monitoring: Feasibility Demonstration on San Rafael and San Quintin Glaciers Science.gov (United States) Di Tullio, M.; Nocchi, F.; Camplani, A.; Emanuelli, N.; Nascetti, A.; Crespi, M. 2018-04-01 The glaciers are a natural global resource and one of the principal climate change indicator at global and local scale, being influenced by temperature and snow precipitation changes. Among the parameters used for glacier monitoring, the surface velocity is a key element, since it is connected to glaciers changes (mass balance, hydro balance, glaciers stability, landscape erosion). The leading idea of this work is to continuously retrieve glaciers surface velocity using free ESA Sentinel-1 SAR imagery and exploiting the potentialities of the Google Earth Engine (GEE) platform. GEE has been recently released by Google as a platform for petabyte-scale scientific analysis and visualization of geospatial datasets. The algorithm of SAR off-set tracking developed at the Geodesy and Geomatics Division of the University of Rome La Sapienza has been integrated in a cloud based platform that automatically processes large stacks of Sentinel-1 data to retrieve glacier surface velocity field time series. We processed about 600 Sentinel-1 image pairs to obtain a continuous time series of velocity field measurements over 3 years from January 2015 to January 2018 for two wide glaciers located in the Northern Patagonian Ice Field (NPIF), the San Rafael and the San Quintin glaciers. Several results related to these relevant glaciers also validated with respect already available and renown software (i.e. ESA SNAP, CIAS) and with respect optical sensor measurements (i.e. LANDSAT8), highlight the potential of the Big Data analysis to automatically monitor glacier surface velocity fields at global scale, exploiting the synergy between GEE and Sentinel-1 imagery. 20. On coherent states International Nuclear Information System (INIS) Polubarinov, I.V. 1975-01-01 A definition of the coherent state representation is given in this paper. 
In the representation quantum theory equations take the form of classical field theory equations (with causality inherent to the latter) not only in simple cases (free field and interactions with an external current or field), but also in the general case of closed systems of interacting fields. And, conversely, a classical field theory can be transformed into a form of a quantum one 1. Optical Coherence and Quantum Optics CERN Document Server Mandel, Leonard 1995-01-01 This book presents a systematic account of optical coherence theory within the framework of classical optics, as applied to such topics as radiation from sources of different states of coherence, foundations of radiometry, effects of source coherence on the spectra of radiated fields, coherence theory of laser modes, and scattering of partially coherent light by random media. The book starts with a full mathematical introduction to the subject area and each chapter concludes with a set of exercises. The authors are renowned scientists and have made substantial contributions to many of the topi 2. Application of L.D.A. to measure instantaneous flow velocity field in the exhaust of a combustion engine International Nuclear Information System (INIS) Boutrif, M.S.; Thelliez, M. 1993-01-01 We present experimental results of instantaneous velocity measurement, which were obtained by application of the laser Doppler anemometry (L.D.A.) at the exhaust pipe of a reciprocating engine under real working conditions. First of all, we show that the instantaneous velocity is monodimensional along a straight exhaust pipe, and that the boundary layer develops within a 2 mm thickness. We also show that the cylinder discharges in two phases: the blow down period and the final part of exhaust stroke. We also make obvious, that the flow escapes very quickly: its velocity varies betwen -100 m/s and 200 m/s within a period shorter than 1 ms; thereby, we do record the acoustic resonance phenomenon, when the engine speed is greater than 3 000 rpm. Finally, we show that in the exhaust pipe the apparent fluctuation - i.e. the cyclic dispersion and the actual turbulence - may reach 15%. (orig.) 3. Experimental study of the possibility of reducing the resistance and unevenness of output field of velocities in flat diffuser channels with large opening angles Science.gov (United States) Dmitriev, S. S.; Vasil'ev, K. E.; Mokhamed, S. M. S. O.; Gusev, A. A.; Barbashin, A. V. 2017-11-01 In modern combined cycle gas turbines (CCGT), when designing the reducers from the output diffuser of a gas turbine to a boiler-utilizer, wide-angle diffusers are used, in which practically from the input a flow separation and transition to jet stream regime occurs. In such channels, the energy loss in the field of velocities sharply rise and the field of velocities in the output from them is characterized by considerable unevenness that worsens the heat transfer process in the first by motion tube bundles of the boiler-utilizer. The results of experimental research of the method for reducing the energy loss and alignment of the field of velocities at the output from a flat asymmetrical diffuser channel with one deflecting wall with the opening angle of 40° by means of placing inside the channel the flat plate parallel to the deflecting wall are presented in the paper. 
It is revealed that, at this placement of the plate in the channel, it is possible to reduce the energy loss by 20%, considerably even out the output velocity field, and decrease the dynamic loads on the walls in the output cross-section. The studied method of resistance reduction and velocity field alignment in flat diffuser channels was used for optimization of the reducer from the output diffuser of the gas turbine to the boiler-utilizer of the PGU-450T type CCGT at Kaliningrad Thermal Power Plant-2. The obtained results indicate that the configuration of the reducer installed in the PGU-450T of Kaliningrad Thermal Power Plant-2 is not optimal. It also follows from the obtained data that fine-tuning of the reducer should be based on tests of a channel consisting of the reducer model with the boiler-utilizer model installed behind it. Application of the method of evening out the output velocity field and reducing the resistance in wide-angle diffusers investigated in this work made it possible, when using the known model of diffusion 4. Spatial coherence of electron beams from field emitters and its effect on the resolution of imaged objects Energy Technology Data Exchange (ETDEWEB) Latychevskaia, Tatiana, E-mail: [email protected] 2017-04-15 Sub-nanometer and nanometer-sized tips provide high-coherence electron sources. Conventionally, the effective source size is estimated from the extent of the experimental biprism interference pattern created on the detector, by applying the van Cittert–Zernike theorem. Previously reported experimental intensity distributions on the detector exhibit a Gaussian distribution, and our simulations show that this is an indication that such electron sources must be at least partially coherent. This, in turn, means that, strictly speaking, the van Cittert–Zernike theorem cannot be applied, since it assumes an incoherent source. The approach of applying the van Cittert–Zernike theorem is examined in more detail by performing simulations of interference patterns for electron sources of different size and different coherence length, evaluating the effective source size from the extent of the simulated interference pattern and comparing the obtained result with the pre-defined value. The intensity distribution of the source is assumed to be Gaussian distributed, as is observed in experiments. The visibility, or the contrast, in the simulated holograms is found to be always less than 1, which agrees well with previously reported experimental results and thus can be explained solely by the Gaussian intensity distribution of the source. The effective source size estimated from the extent of the interference pattern turns out to be about 2–3 times larger than the pre-defined size, but it is approximately equal to the intrinsic resolution of the imaging system. A simple formula for estimating the intrinsic resolution, which could be useful when employing nano-tips in in-line Gabor holography or point-projection microscopy, is provided. - Highlights: • van Cittert–Zernike theorem for nano- and sub-nano electron emitting tips is revised. • Simulations show that nano- and sub-nano electron emitting tips are at least partially coherent. • A simple formula for evaluating
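For reference, the classical van Cittert–Zernike baseline that the entry above puts to the test: for a uniform incoherent disk source of diameter D at distance L and wavelength λ, the modulus of the complex degree of coherence across a detector-plane separation d takes the standard Airy form (stated here as a math aside, not the paper's formula):

```latex
\[
  |\mu_{12}(d)| = \left|\frac{2 J_1(v)}{v}\right|, \qquad
  v = \frac{\pi D d}{\lambda L},
\]
% with the first zero, conventionally taken as the transverse coherence width,
\[
  d_c = 1.22\,\frac{\lambda L}{D}.
\]
```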
Measurement of flow velocity fields in small vessel-mimic phantoms and vessels of small animals using micro ultrasonic particle image velocimetry (micro-EPIV) International Nuclear Information System (INIS) Qian Ming; Niu Lili; Jiang Bo; Jin Qiaofeng; Jiang Chunxiang; Zheng Hairong; Wang Yanping 2010-01-01 Determining a multidimensional velocity field within microscale opaque fluid flows is needed in areas such as microfluidic devices, biofluid mechanics and hemodynamics research in animal studies. The ultrasonic particle image velocimetry (EchoPIV) technique is appropriate for measuring opaque flows by taking advantage of PIV and B-mode ultrasound contrast imaging. However, the use of clinical ultrasound systems for imaging flows in small structures or animals has limitations associated with spatial resolution. This paper reports on the development of a high-resolution EchoPIV technique (termed as micro-EPIV) and its application in measuring flows in small vessel-mimic phantoms and vessels of small animals. Phantom experiments demonstrate the validity of the technique, providing velocity estimates within 4.1% of the analytically derived values with regard to the flows in a small straight vessel-mimic phantom, and velocity estimates within 5.9% of the computationally simulated values with regard to the flows in a small stenotic vessel-mimic phantom. Animal studies concerning arterial and venous flows of living rats and rabbits show that the micro-EPIV-measured peak velocities within several cardiac cycles are about 25% below the values measured by the ultrasonic spectral Doppler technique. The micro-EPIV technique is able to effectively measure the flow fields within microscale opaque fluid flows. 6. Measurement of flow velocity fields in small vessel-mimic phantoms and vessels of small animals using micro ultrasonic particle image velocimetry (micro-EPIV). Science.gov (United States) Qian, Ming; Niu, Lili; Wang, Yanping; Jiang, Bo; Jin, Qiaofeng; Jiang, Chunxiang; Zheng, Hairong 2010-10-21 Determining a multidimensional velocity field within microscale opaque fluid flows is needed in areas such as microfluidic devices, biofluid mechanics and hemodynamics research in animal studies. The ultrasonic particle image velocimetry (EchoPIV) technique is appropriate for measuring opaque flows by taking advantage of PIV and B-mode ultrasound contrast imaging. However, the use of clinical ultrasound systems for imaging flows in small structures or animals has limitations associated with spatial resolution. This paper reports on the development of a high-resolution EchoPIV technique (termed as micro-EPIV) and its application in measuring flows in small vessel-mimic phantoms and vessels of small animals. Phantom experiments demonstrate the validity of the technique, providing velocity estimates within 4.1% of the analytically derived values with regard to the flows in a small straight vessel-mimic phantom, and velocity estimates within 5.9% of the computationally simulated values with regard to the flows in a small stenotic vessel-mimic phantom. Animal studies concerning arterial and venous flows of living rats and rabbits show that the micro-EPIV-measured peak velocities within several cardiac cycles are about 25% below the values measured by the ultrasonic spectral Doppler technique. The micro-EPIV technique is able to effectively measure the flow fields within microscale opaque fluid flows. 7. 
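A note on records 5 and 6 above: EchoPIV inherits the core PIV step of locating the peak of the cross-correlation between two interrogation windows. The following is a minimal, self-contained sketch of that step on synthetic data (integer displacements only; a real micro-EPIV chain adds windowing, sub-pixel peak fitting and outlier validation), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "particle" image and a copy displaced by a known amount
img_a = (rng.random((64, 64)) > 0.97).astype(float)
img_b = np.roll(np.roll(img_a, -2, axis=0), 3, axis=1)  # dy = -2, dx = +3

# FFT-based cross-correlation of the two interrogation windows
fa = np.fft.fft2(img_a - img_a.mean())
fb = np.fft.fft2(img_b - img_b.mean())
corr = np.fft.ifft2(fa.conj() * fb).real

# the displacement is the location of the correlation peak (wrapped spectrum)
peak = np.unravel_index(np.argmax(corr), corr.shape)
dy, dx = (p if p < s // 2 else p - s for p, s in zip(peak, corr.shape))
print(f"estimated displacement: dx={dx}, dy={dy}")   # -> dx=3, dy=-2
```

Dividing the recovered displacement by the interframe time converts it to velocity, which is how the vector fields behind the quoted 4.1% and 5.9% error figures are produced.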
Using the Vertical Component of the Surface Velocity Field to Map the Locked Zone at Cascadia Subduction Zone Science.gov (United States) Moulas, E.; Brandon, M. T.; Podladchikov, Y.; Bennett, R. A. 2014-12-01 At present, our understanding of the locked zone at Cascadia subduction zone is based on thermal modeling and elastic modeling of horizontal GPS velocities. The thermal model by Hyndman and Wang (1995) provided a first-order assessment of where the subduction thrust might be cold enough for stick-slip behavior. The alternative approach by McCaffrey et al. (2007) is to use a Green's function that relates horizontal surface velocities, as recorded by GPS, to interseismic elastic deformation. The thermal modeling approach is limited by a lack of information about the amount of frictional heating occurring on the thrust (Molnar and England, 1990). The GPS approach is limited in that the horizontal velocity component is fairly insensitive to the structure of the locked zone. The vertical velocity component is much more useful for this purpose. We are fortunate in that vertical velocities can now be measured by GPS to a precision of about 0.2 mm/a. The dislocation model predicts that vertical velocities should range up to about 20 percent of the subduction velocity, which means maximum values of ~7 mm/a. The locked zone is generally entirely offshore at Cascadia, except for the Olympic Peninsula region, where the underlying Juan De Fuca plate has an anomalously low dip. Previous thermal and GPS modeling, as well as tide gauge data and episodic tremors indicate the locked zone there extends about 50 to 75 km onland. This situation provides an opportunity to directly study the locked zone. With that objective in mind, we have constructed a full 3D geodynamic model of the Cascadia subduction zone. At present, the model provides a full representation of the interseismic elastic deformation due to variations of slip on the subduction thrust. The model has been benchmarked against the Savage (2D) and Okada (3D) analytical solutions. This model has an important advantage over traditional dislocation modeling in that we include temperature-sensitive viscosity for the upper and 8. The nonlinear gyroresonance interaction between energetic electrons and coherent VLF waves propagating at an arbitrary angle with respect to the earth's magnetic field Science.gov (United States) Bell, T. F. 1984-01-01 A theory is presented of the nonlinear gyroresonance interaction that takes place in the magnetosphere between energetic electrons and coherent VLF waves propagating in the whistler mode at an arbitrary angle psi with respect to the earth's magnetic field B-sub-0. Particularly examined is the phase trapping (PT) mechanism believed to be responsible for the generation of VLF emissions. It is concluded that near the magnetic equatorial plane gradients of psi may play a very important part in the PT process for nonducted waves. Predictions of a higher threshold value for PT for nonducted waves generally agree with experimental data concerning VLF emission triggering by nonducted waves. 9. 
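An aside to record 7 above (Cascadia): the Green's-function approach mentioned there reduces, in its simplest two-dimensional strike-slip form, to the classic arctan profile of Savage and Burford (1973), in which a fault locked down to depth D and slipping at rate V below produces a surface velocity v(x) = (V/pi)·arctan(x/D). The snippet below is only this textbook analogue with made-up illustrative numbers; the actual study benchmarks full 3D models against the Savage and Okada dip-slip solutions.

```python
import numpy as np

V = 40.0   # mm/yr, assumed deep slip rate - illustrative only
D = 25.0   # km, assumed locking depth - illustrative only

def v_surface(x_km):
    """Interseismic surface velocity of a 2D screw dislocation locked
    above depth D (Savage & Burford 1973): v = (V/pi) * arctan(x/D)."""
    return (V / np.pi) * np.arctan(np.asarray(x_km, dtype=float) / D)

for x in (-100, -50, -10, 0, 10, 50, 100):
    print(f"x = {x:5d} km   v = {v_surface(x):7.2f} mm/yr")
```

The profile steepens as D shrinks, which is why horizontal velocities constrain the locked-zone geometry only weakly once measurement noise is included - the motivation the record gives for exploiting the vertical component.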
Information entropy properties of the atoms in the system of coupled Λ-type three-level atoms interacting with coherent field in Kerr medium International Nuclear Information System (INIS) Li Ke; Ling Weijun 2011-01-01 The information entropy properties of coupled Λ-type three-level atoms interacting with a coherent field are studied by means of quantum theory, and the time evolution of the information entropy of the atoms is discussed as a function of the average photon number, the initial state of the atoms, the detuning, the coupling constant between the atoms and the coefficient of the Kerr medium. Numerical results show that the time evolution of the information entropy of the atoms depends strongly on the initial state of the system and the average photon number. The detuning, the coupling constant between the atoms and the Kerr coefficient also influence the information entropy of the atoms. (authors) 10. Anti-Stokes effect CCD camera and SLD based optical coherence tomography for full-field imaging in the 1550nm region Science.gov (United States) Kredzinski, Lukasz; Connelly, Michael J. 2012-06-01 Full-field optical coherence tomography is an en-face interferometric imaging technology capable of carrying out high-resolution cross-sectional imaging of the internal microstructure of an examined specimen in a non-invasive manner. The presented system is based on competitively priced optical components available at the main optical communications band located in the 1550 nm region. It consists of a superluminescent diode and an anti-Stokes imaging device. The single-mode fibre-coupled SLD was connected to a multi-mode fibre inserted into a mode scrambler to obtain spatially incoherent illumination, suitable for the wide-field OCT modality in terms of crosstalk suppression and image enhancement. This relatively inexpensive system, with a moderate resolution of approximately 24 μm x 12 μm (axial x lateral), was constructed to perform 3D cross-sectional imaging of a human tooth. To our knowledge, this is the first 1550 nm full-field OCT system reported. 11. Electric field in a plasma channel in a high-pressure nanosecond discharge in hydrogen: a coherent anti-Stokes Raman scattering study. Science.gov (United States) Yatom, S; Tskhai, S; Krasik, Ya E 2013-12-20 Experimental results of a study of the electric field in a plasma channel produced during a nanosecond discharge at a H2 gas pressure of (2-3)×10^5 Pa by the coherent anti-Stokes Raman scattering method are reported. The discharge was ignited by applying a voltage pulse with an amplitude of ∼100 kV and a duration of ∼5 ns to a blade cathode placed at a distance of 10 and 20 mm from the anode. It was shown that this type of gas discharge is characterized by the presence of an electric field in the plasma channel with root-mean-square intensities of up to 30 kV/cm. Using polarization measurements, it was found that the direction of the electric field is along the cathode-anode axis. 12. Velocity Field of the McMurdo Shear Zone from Annual Three-Dimensional Ground Penetrating Radar Imaging and Crevasse Matching Science.gov (United States) Ray, L.; Jordan, M.; Arcone, S. A.; Kaluzienski, L. M.; Koons, P. O.; Lever, J.; Walker, B.; Hamilton, G. S. 2017-12-01 The McMurdo Shear Zone (MSZ) is a narrow, intensely crevassed strip tens of km long separating the Ross and McMurdo ice shelves (RIS and MIS) and an important pinning feature for the RIS.
We derive local velocity fields within the MSZ from two consecutive annual ground penetrating radar (GPR) datasets that reveal complex firn and marine ice crevassing; no englacial features are evident. The datasets were acquired in 2014 and 2015 using robot-towed 400 MHz and 200 MHz GPR over a 5 km x 5.7 km grid. 100 west-to-east transects at 50 m spacing provide three-dimensional maps that reveal the length of many firn crevasses, and their year-to-year structural evolution. Hand labeling of crevasse cross sections near the MSZ western and eastern boundaries reveals matching firn and marine ice crevasses, and more complex and chaotic features between these boundaries. By matching crevasse features from year to year both on the eastern and western boundaries and within the chaotic region, marine ice crevasses along the western and eastern boundaries are shown to align directly with firn crevasses, and the local velocity field is estimated and compared with data from strain rate surveys and remote sensing. While remote sensing provides global velocity fields, crevasse matching indicates greater local complexity attributed to faulting, folding, and rotation. 13. Field test of an all-semiconductor laser-based coherent continuous-wave Doppler lidar for wind energy applications DEFF Research Database (Denmark) Sjöholm, Mikael; Dellwik, Ebba; Hu, Qi -produced all-semiconductor laser. The instrument is a coherent continuous-wave lidar with two fixed-focus telescopes for launching laser beams in two different directions. The alternation between the telescopes is achieved by a novel switching technique without any moving parts. Here, we report results from...... signal strength from external atmospheric parameters such as relative humidity and concentrations of atmospheric particles is discussed. This novel lidar instrument design seems to offer a promising low-cost alternative for precision remote sensing of wind turbine inflow.... 14. Autonomous Observations of the Upper Ocean Stratification and Velocity Field about the Seasonally Retreating Marginal Ice Zone Science.gov (United States) 2016-12-30 fluxes of heat, salt, and momentum. Hourly GPS fixes tracked the motion of the supporting ice floes and T/C recorders sampled the ocean waters just... sampled in a range of ice conditions from full ice cover to nearly open water and observed a variety of stratification and ocean velocity signals (e.g... 15. The Reynolds number dependence of the velocity field in the BNL Jet-in-Pool water experiments International Nuclear Information System (INIS) Szczepura, R.T. 1981-02-01 The water Jet-in-Pool experiment at Berkeley Nuclear Laboratories consists of an axisymmetric sudden expansion. A series of measurements was performed in this rig, using a single-channel Laser/Doppler Anemometer system, over a Reynolds number range of 1.4×10^4 to 6.1×10^4 to determine any dependence in the flow. The mean axial velocity data showed a slight variation, but the root-mean-square fluctuations of the axial velocity had a far more pronounced dependence. This was attributed to upstream conditions in the rig, specifically the nozzle used for injecting the central portion of the flow.
The variations in the mean velocity data are sufficiently small for one set of data to act as a basis for calculations at any Reynolds number when a simple closure scheme such as a prescribed effective viscosity is used. However, the variation in turbulence parameters will complicate the use of second-order closure schemes, and this will be examined further. (author) 16. The height variation of supergranular velocity fields determined from simultaneous OSO 8 satellite and ground-based observations Science.gov (United States) November, L. J.; Toomre, J.; Gebbie, K. B.; Simon, G. W. 1979-01-01 Results are reported for simultaneous satellite and ground-based observations of supergranular velocities in the sun, which were made using a UV spectrometer aboard OSO 8 and a diode-array instrument operating at the exit slit of an echelle spectrograph attached to a vacuum tower telescope. Observations of the steady Doppler velocities seen toward the limb in the middle chromosphere and the photosphere are compared; the observed spectral lines of Si II at 1817 A and Fe I at 5576 A are found to differ in height of formation by about 1400 km. The results show that supergranular motions are able to penetrate at least 11 density scale heights into the middle chromosphere, that the patterns of motion correlate well with the cellular structure seen in the photosphere, and that the motion increases from about 800 m/s in the photosphere to at least 3000 m/s in the middle chromosphere. These observations imply that supergranular velocities should be evident in the transition region and that strong horizontal shear layers in supergranulation should produce turbulence and internal gravity waves. 17. The role of wind field induced flow velocities in destratification and hypoxia reduction at Meiling Bay of large shallow Lake Taihu, China. Science.gov (United States) Jalil, Abdul; Li, Yiping; Du, Wei; Wang, Wencai; Wang, Jianwei; Gao, Xiaomeng; Khan, Hafiz Osama Sarwar; Pan, Baozhu; Acharya, Kumud 2018-01-01 Wind-induced flow velocity patterns and associated thermal destratification can drive hypoxia reduction in large shallow lakes. The effects of wind-induced hydrodynamic changes on destratification and hypoxia reduction were investigated at Meiling Bay (N 31° 22' 56.4″, E 120° 9' 38.3″) of Lake Taihu, China. Analysis of vertical flow velocity profiles showed that surface flow velocities were consistent with the wind field, and that the lower velocity profiles were also consistent (though with a delayed response) when the wind speed was higher than 6.2 m/s. The wind field and temperature were found to be the controlling parameters for hypoxia reduction and for water quality conditions at the surface and bottom of the lake. The critical temperature for hypoxia reduction at the surface and the bottom was ≤24.1 °C (below which hypoxic conditions were found to be reduced). A strong prevailing wind field (onshore wind directions ESE, SE, SSE and E; wind speeds of 2.4-9.1 m/s) reduced the temperature (22 °C to 24.1 °C) and caused a reduction of hypoxia near the surface, with a rise in water levels, whereas a low to medium prevailing wind field did not support destratification, which increased the temperature and resulted in increased hypoxia. Non-prevailing (offshore) wind directions were not found to support the reduction of hypoxia in the study area, owing to a less variable wind field.
The daytime wind field was found to be more variable than at night, which increased thermal destratification during the daytime and supported destratification and hypoxia reduction. A second-order exponential correlation was found between surface temperature and chlorophyll-a (R² = 0.2858, adjusted R² = 0.2144, RMSE = 4.395) and dissolved oxygen (R² = 0.596, adjusted R² = 0.5942, RMSE = 0.3042) concentrations. The findings of the present study reveal the driving mechanism of wind-induced thermal destratification and hypoxic conditions, which may further help to evaluate the role of wind in eutrophication 18. Spatio-temporal coherent control of atomic systems: weak to strong field transition and breaking of symmetry in 2D maps Energy Technology Data Exchange (ETDEWEB) Suchowski, H; Natan, A; Bruner, B D; Silberberg, Y [Physics of Complex Systems, Weizmann Institute of Science, Rehovot (Israel)], E-mail: [email protected] 2008-04-14 Coherent control of resonant and non-resonant two-photon absorption processes was examined using a spatio-temporal pulse-shaping technique. By utilizing a combination of temporal focusing and femtosecond pulse-shaping techniques, we spatially control multiphoton absorption processes in a completely deterministic manner. Distinctive symmetry properties emerge through two-dimensional mapping of spatio-temporal data. These symmetries break down in the transition to strong fields, revealing details of strong-field effects such as power broadenings and dynamic Stark shifts. We also present demonstrations of chirp-dependent population transfer in atomic rubidium, as well as the spatial separation of resonant and non-resonant excitation pathways in atomic caesium. 19. Spatio-temporal coherent control of atomic systems: weak to strong field transition and breaking of symmetry in 2D maps International Nuclear Information System (INIS) Suchowski, H; Natan, A; Bruner, B D; Silberberg, Y 2008-01-01 Coherent control of resonant and non-resonant two-photon absorption processes was examined using a spatio-temporal pulse-shaping technique. By utilizing a combination of temporal focusing and femtosecond pulse-shaping techniques, we spatially control multiphoton absorption processes in a completely deterministic manner. Distinctive symmetry properties emerge through two-dimensional mapping of spatio-temporal data. These symmetries break down in the transition to strong fields, revealing details of strong-field effects such as power broadenings and dynamic Stark shifts. We also present demonstrations of chirp-dependent population transfer in atomic rubidium, as well as the spatial separation of resonant and non-resonant excitation pathways in atomic caesium 20. Coherent detectors International Nuclear Information System (INIS) Lawrence, C R; Church, S; Gaier, T; Lai, R; Ruf, C; Wollack, E 2009-01-01 Coherent systems offer significant advantages in simplicity, testability, control of systematics, and cost. Although quantum noise sets the fundamental limit to their performance at high frequencies, recent breakthroughs suggest that near-quantum-limited noise up to 150 or even 200 GHz could be realized within a few years. If the demands of component separation can be met with frequencies below 200 GHz, coherent systems will be strong competitors for a space CMB polarization mission. The rapid development of digital correlator capability now makes space interferometers with many hundreds of elements possible.
Given the advantages of coherent interferometers in suppressing systematic effects, such systems deserve serious study. 1. Coherent detectors Energy Technology Data Exchange (ETDEWEB) Lawrence, C R [M/C 169-327, Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Church, S [Room 324 Varian Physics Bldg, 382 Via Pueblo Mall, Stanford, CA 94305-4060 (United States); Gaier, T [M/C 168-314, Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Lai, R [Northrop Grumman Corporation, Redondo Beach, CA 90278 (United States); Ruf, C [1533 Space Research Building, The University of Michigan, Ann Arbor, MI 48109-2143 (United States); Wollack, E, E-mail: [email protected] [NASA/GSFC, Code 665, Observational Cosmology Laboratory, Greenbelt, MD 20771 (United States) 2009-03-01 Coherent systems offer significant advantages in simplicity, testability, control of systematics, and cost. Although quantum noise sets the fundamental limit to their performance at high frequencies, recent breakthroughs suggest that near-quantum-limited noise up to 150 or even 200 GHz could be realized within a few years. If the demands of component separation can be met with frequencies below 200 GHz, coherent systems will be strong competitors for a space CMB polarization mission. The rapid development of digital correlator capability now makes space interferometers with many hundreds of elements possible. Given the advantages of coherent interferometers in suppressing systematic effects, such systems deserve serious study. 2. A finite element solution to conjugated heat transfer in tissue using magnetic resonance angiography to measure the in vitro velocity field Science.gov (United States) Dutton, Andrew William 1993-12-01 A combined numerical and experimental system for tissue heat transfer analysis was developed. The goal was to develop an integrated set of tools for studying the problem of providing accurate temperature estimation for use in hyperthermia treatment planning in a clinical environment. The completed system combines (1) Magnetic Resonance Angiography (MRA) to non-destructively measure the velocity field in situ, (2) the Streamwise Upwind Petrov-Galerkin finite element solution to the 3D steady-state convective energy equation (CEE), (3) a medical-image-based automatic 3D mesh generator, and (4) a Gaussian-type estimator to determine unknown thermal model parameters such as thermal conductivity, blood perfusion, and blood velocities from measured temperature data. The system was capable of using any combination of three thermal models: (1) the Convective Energy Equation (CEE), (2) the Bioheat Transfer Equation (BHTE), and (3) the Effective Thermal Conductivity Equation (ETCE). Incorporation of the theoretically correct CEE was a significant theoretical advance over approximate models, made possible by the use of MRA to directly measure the 3D velocity field in situ. Experiments were carried out in a perfused, alcohol-fixed canine liver with hyperthermia induced through scanned focused ultrasound. Velocity fields were measured using Phase Contrast Angiography. The complete system was then used to (1) develop a 3D finite element model based upon user-traced outlines over a series of MR images of the liver and (2) simulate temperatures at steady state using the CEE, BHTE, and ETCE thermal models in conjunction with the Gaussian estimator.
Results of using the system on an in vitro liver preparation indicate the need for improved accuracy in the MRA scans and accurate spatial registration between the thermocouple junctions, the measured velocity field, and the scanned ultrasound power. No individual thermal model was able to meet the desired accuracy of 0.5 deg C, the resolution 3. Preliminary results from the orbiting solar observatory 8: Persistent velocity fields in the chromosphere and transition region International Nuclear Information System (INIS) Lites, B.W.; Bruner, E.C. Jr.; Chipman, E.G.; Shine, R.A.; Rottman, G.J.; White, O.R.; Athay, R.G. 1976-01-01 Velocity images, or tachograms, of the solar chromosphere and chromosphere-corona transition region were made by measuring the Si II 1816.93 A chromospheric line and the Si IV 1393.8 A transition region line with the University of Colorado spectrometer aboard OSO-8. Persistent flows are indicated in both active and quiet regions of the solar atmosphere. In quiet regions, areas of enhanced emission (the chromospheric network) are apparently systematically redshifted with respect to the areas of lower intensity. This correlation does not hold in active regions, where long-lived downflows into sunspots have been observed 4. A new velocity field for Africa from combined GPS and DORIS space geodetic Solutions: Contribution to the definition of the African reference frame (AFREF) Science.gov (United States) Saria, E.; Calais, E.; Altamimi, Z.; Willis, P.; Farah, H. 2013-04-01 We analyzed 16 years of GPS and 17 years of Doppler orbitography and radiopositioning integrated by satellite (DORIS) data at continuously operating geodetic sites in Africa and surroundings to describe the present-day kinematics of the Nubian and Somalian plates and constrain relative motions across the East African Rift. The resulting velocity field describes horizontal and vertical motion at 133 GPS sites and 9 DORIS sites. Horizontal velocities at sites located on stable Nubia fit a single plate model with a weighted root mean square residual of 0.6 mm/yr (maximum residual 1 mm/yr), an upper bound for plate-wide motions and for regional-scale deformation in the seismically active southern Africa and Cameroon volcanic line. We confirm significant southward motion (~1.5 mm/yr) in Morocco with respect to Nubia, consistent with earlier findings. We propose an updated angular velocity for the divergence between Nubia and Somalia, which provides the kinematic boundary conditions to rifting in East Africa. We update a plate motion model for the East African Rift and revise the counterclockwise rotation of the Victoria plate and clockwise rotation of the Rovuma plate with respect to Nubia. Vertical velocities range from −2 to +2 mm/yr, close to their uncertainties, with no clear geographic pattern. This study provides the first continent-wide position/velocity solution for Africa, expressed in the International Terrestrial Reference Frame (ITRF2008), a contribution to the upcoming African Reference Frame (AFREF). Except for a few regions, the African continent remains largely under-sampled by continuous space geodetic data. Efforts are needed to augment the geodetic infrastructure and openly share existing data sets so that the objectives of AFREF can be fully reached. 5. Field measurements of the atmospheric dry deposition fluxes and velocities of polycyclic aromatic hydrocarbons to the global oceans.
Science.gov (United States) González-Gaya, Belén; Zúñiga-Rival, Javier; Ojeda, María-José; Jiménez, Begoña; Dachs, Jordi 2014-05-20 The atmospheric dry deposition fluxes of 16 polycyclic aromatic hydrocarbons (PAHs) have been measured, for the first time, in the tropical and subtropical Atlantic, Pacific, and Indian Oceans. Depositional fluxes for fine (0.7-2.7 μm) and coarse (>2.7 μm) aerosol fractions were determined simultaneously with the suspended aerosol-phase concentrations, allowing the determination of PAH deposition velocities (vD). PAH dry deposition fluxes (FDD) bound to coarse aerosols were higher than those of fine aerosols for 83% of the measurements. The average FDD for total (fine + coarse) Σ16PAHs (sum of 16 individual PAHs) ranged from 8.33 ng m^-2 d^-1 to 52.38 ng m^-2 d^-1. Mean FDD values for individual PAHs on coarse aerosols ranged between 0.13 ng m^-2 d^-1 (Perylene) and 1.96 ng m^-2 d^-1 (Methyl Pyrene), and for the fine aerosol fraction these ranged between 0.06 ng m^-2 d^-1 (Dimethyl Pyrene) and 1.25 ng m^-2 d^-1 (Methyl Chrysene). The estimated deposition velocities ranged from the highest mean vD for Methyl Chrysene (0.17-13.30 cm s^-1), followed by Dibenzo(ah)Anthracene (0.29-1.38 cm s^-1) and other high-MW PAHs, to minimum values of vD for Dimethyl Pyrene (oceans. 6. Measurement of the magnetic moment of the 2₁⁺ state of ⁷²Zn via extension of the high-velocity transient-field method International Nuclear Information System (INIS) Fiori, E. 2010-12-01 Magnetic moments can provide deep insight into nuclear structure and wave-function composition, particularly when the single-particle character of the nucleus is dominant. For this reason, the magnetic moment of the first excited state of the radioactive neutron-rich ⁷²Zn was measured at the GANIL facility (Caen, France). The result of the experiment confirmed the trend predicted by the shell-model calculations, even if the measurement uncertainty did not allow a rigorous constraint of the theories. The measurement was performed using the transient field (TF) technique, and the nuclei of interest were produced in a fragmentation reaction. Before this experiment, the high-velocity TF (HVTF) technique had been used only with projectiles up to Z = 24. It was the first time that a magnetic moment of a heavy ion with Z > 24 was measured in the high-velocity regime. To further develop the technique and to gather information about the hyperfine interaction between the polarized electrons and the nucleons, two experiments were performed at LNS (Catania, Italy). In this thesis the development of the high-velocity TF technique for the experiments on g(2₁⁺; ⁷²Zn) and the field strength B_TF(Kr, Ge) is presented. The analysis of the results and their interpretation are then discussed. It was demonstrated that the HVTF technique, combined with Coulomb excitation, can be used for the measurement of g-factors of very short-lived states, with lifetimes of the order of tens of ps and lower, of heavy ions (A ∼ 80) traveling at intermediate relativistic speeds, β ∼ 0.25. The standard TF technique at low velocities (a few percent of the speed of light) has been used for a long time to provide the strong magnetic field necessary for the measurement of g-factors of very short-lived states. The breakthrough of the present development is the different velocity regime of the higher-mass projectile under which the experiment is carried out 7.
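A note on record 5 above: a dry deposition velocity is, by definition, the deposition flux divided by the airborne concentration, v_D = F_DD/C_aerosol. The arithmetic sketch below combines the upper end of the reported Σ16PAH flux range with a purely hypothetical aerosol-phase concentration (the excerpt does not state one); its only purpose is to show the unit conversion from ng m^-2 d^-1 to cm s^-1.

```python
flux_ng_m2_d = 52.38   # ng m^-2 d^-1, upper end of the reported flux range
conc_ng_m3 = 0.5       # ng m^-3, hypothetical aerosol-phase concentration

SECONDS_PER_DAY = 86400.0
v_d = (flux_ng_m2_d / SECONDS_PER_DAY) / conc_ng_m3   # m s^-1
print(f"v_D = {v_d * 100:.3f} cm s^-1")               # ~0.12 cm s^-1
```

With this assumed concentration the result falls just below the 0.17-13.30 cm s^-1 span quoted for individual compounds, underlining that v_D is as sensitive to the measured concentration as to the flux itself.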
Application of short-range dual-Doppler lidars to evaluate the coherence of turbulence Science.gov (United States) Cheynet, Etienne; Jakobsen, Jasna Bogunović; Snæbjörnsson, Jónas; Mikkelsen, Torben; Sjöholm, Mikael; Mann, Jakob; Hansen, Per; Angelou, Nikolas; Svardal, Benny 2016-12-01 Two synchronized continuous wave scanning lidars are used to study the coherence of the along-wind and across-wind velocity components. The goal is to evaluate the potential of the lidar technology for application in wind engineering. The wind lidars were installed on the Lysefjord Bridge during four days in May 2014 to monitor the wind field in the horizontal plane upstream of the bridge deck. Wind records obtained by five sonic anemometers mounted on the West side of the bridge are used as reference data. Single- and two-point statistics of wind turbulence are studied, with special emphasis on the root-coherence and the co-coherence of turbulence. A four-parameter decaying exponential function has been fitted to the measured co-coherence, and a good agreement is observed between data obtained by the sonic anemometers and the lidars. The root-coherence of turbulence is compared to theoretical models. The analytical predictions agree rather well with the measured coherence for the along-wind component. For increasing wavenumbers, larger discrepancies are, however, noticeable between the measured coherence and the theoretical predictions. The WindScanners are observed to slightly overestimate the integral length scales, which could not be explained by the laser beam averaging effect alone. On the other hand, the spatial averaging effect does not seem to have any significant effect on the coherence. 8. Shaping of an ion cloud's velocity field by differential braking due to Alfven wave dissipation in the ionosphere, 1. Coupling with an infinite ionosphere International Nuclear Information System (INIS) Nalesso, G.F.; Jacobson, A.R. 1988-01-01 We study the interaction of a plasma cloud, jetting across the geomagnetic field with the surrounding ionosphere. The cloud is assumed of finite extension in the direction normal to both the direction of motion and the magnetic field, while the ionosphere is considered a collisional anisotropic magnetized plasma. It is shown that two main mechanisms contribute to the cloud's braking: momentum exchange with the ionosphere via Alfven waves and momentum dissipation due to resistive currents. Due to the finite size of the cloud a differential braking of the different transverse harmonics of the Alfven wave appears when the momentum exchange mechanism is dominant. The result is a sharpening of the cloud's velocity field. copyright American Geophysical Union 1988 9. 3D wide field-of-view Gabor-domain optical coherence microscopy advancing real-time in-vivo imaging and metrology Science.gov (United States) Canavesi, Cristina; Cogliati, Andrea; Hayes, Adam; Tankam, Patrice; Santhanam, Anand; Rolland, Jannick P. 2017-02-01 Real-time volumetric high-definition wide-field-of-view in-vivo cellular imaging requires micron-scale resolution in 3D. Compactness of the handheld device and distortion-free images with cellular resolution are also critically required for onsite use in clinical applications. 
By integrating a custom liquid-lens-based microscope and a dual-axis MEMS scanner in a compact handheld probe, Gabor-domain optical coherence microscopy (GD-OCM) breaks the lateral resolution limit of optical coherence tomography through depth, overcoming the tradeoff between numerical aperture and depth of focus, enabling advances in biotechnology. Furthermore, distortion-free imaging with no post-processing is achieved with a compact, lightweight handheld MEMS scanner that obtained a 12-fold reduction in volume and 17-fold reduction in weight over a previous dual-mirror galvanometer-based scanner. Approaching the holy grail of medical imaging - noninvasive real-time imaging with histologic resolution - GD-OCM demonstrates invariant resolution of 2 μm throughout a volume of 1 x 1 x 0.6 mm³, acquired and visualized in less than 2 minutes with parallel processing on graphics processing units. Results on the metrology of manufactured materials and imaging of human tissue with GD-OCM are presented. 10. Effect of operating methods of wind turbine generator system on net power extraction under wind velocity fluctuations in fields Energy Technology Data Exchange (ETDEWEB) Wakui, Tetsuya; Yamaguchi, Kazuya; Hashizume, Takumi [Waseda Univ., Advanced Research Inst. for Science and Engineering, Tokyo (Japan); Outa, Eisuke [Waseda Univ., Mechanical Engineering Dept., Tokyo (Japan); Tanzawa, Yoshiaki [Nippon Inst. of Technology, Mechanical Engineering Dept., Saitama (Japan) 1999-01-01 The effect of how a wind turbine generator system is operated is discussed from the viewpoint of net power extraction under wind velocity fluctuation, in relation to the scale and the dynamic behaviour of the system. For a wind turbine generator system consisting of a Darrieus-Savonius hybrid wind turbine, a load generator and a battery, we took up two operating methods: constant tip-speed-ratio operation for a stand-alone system (Scheme 1) and synchronous operation by connecting to a grid (Scheme 2). With our simulation model, using the computed net power extraction, we clarified that Scheme 1 is more effective than Scheme 2 for small-scale systems. Furthermore, in Scheme 1, the appropriate rated power output of the system under each wind condition can be confirmed. (Author) 11. Anomalous scaling of passive scalar fields advected by the Navier-Stokes velocity ensemble: effects of strong compressibility and large-scale anisotropy. Science.gov (United States) Antonov, N V; Kostenko, M M 2014-12-01 The field theoretic renormalization group and the operator product expansion are applied to two models of passive scalar quantities (the density and the tracer fields) advected by a random turbulent velocity field. The latter is governed by the Navier-Stokes equation for a compressible fluid, subject to an external random force with the covariance ∝ δ(t-t') k^(4-d-y), where d is the dimension of space and y is an arbitrary exponent. The original stochastic problems are reformulated as multiplicatively renormalizable field theoretic models; the corresponding renormalization group equations possess infrared attractive fixed points. It is shown that various correlation functions of the scalar field, its powers and gradients, demonstrate anomalous scaling behavior in the inertial-convective range already for small values of y. The corresponding anomalous exponents, identified with scaling (critical) dimensions of certain composite fields ("operators" in the quantum-field terminology), can be systematically calculated as series in y.
The practical calculation is performed in the leading one-loop approximation, including exponents in anisotropic contributions. It should be emphasized that, in contrast to Gaussian ensembles with finite correlation time, the model and the perturbation theory presented here are manifestly Galilean covariant. The validity of the one-loop approximation and comparison with Gaussian models are briefly discussed. 12. Measurement of turbulent flow fields in an agitated vessel with four baffles by laser-Doppler velocimetry. Mean velocity fields and flow pattern; Buffle tsuki heiento kakuhan sonai nagare no LDV ni yoru keisoku. Heikin sokudoba to flow pattern Energy Technology Data Exchange (ETDEWEB) Suzukawa, K [Ube Industries, Ltd., Tokyo (Japan); Hashimoto, T [Yamaguchi University, Yamaguchi (Japan); Osaka, H [Yamaguchi University, Yamaguchi (Japan). Faculty of Engineering 1997-12-25 The three-dimensional complex turbulent flow field induced by a four-flat-blade paddle impeller in an agitated vessel was measured by laser Doppler velocimetry. The mixing vessel was a closed cylindrical tank of 490 mm diameter with a flat bottom and four vertical baffles, giving a water volume of about 100 l. The impeller was at the mid-height of the water level in the tank. The height of the liquid (water) was equal to the vessel diameter. Three components of the mean velocity were measured at three vertical sections, θ = 7.5°, 45° and 85°, in several horizontal planes. The mixing Reynolds number N_Re was 1.2×10^5. It can be found from the results that the circumferential mean velocity profiles show a symmetrical shape on the upper and lower sides of the impeller. The secondary velocity components, such as the axial and radial velocities, however, were not symmetric. For this reason, the ratio of the circulation flow volumes entering the upper and lower sides of the impeller was roughly 7/3. In both the middle and baffle regions, the mean flow velocities (flow patterns) differed among the three vertical planes at different circumferential angles measured from the baffle. 10 refs., 8 figs., 1 tab. 13. International Nuclear Information System (INIS) Agoh, Tomonori 2006-01-01 This article presents basic properties of coherent synchrotron radiation (CSR) with numerical examples and introduces the reader to important aspects of CSR in future accelerators with short bunches. We show interesting features of the single bunch instability due to CSR in storage rings and discuss the longitudinal CSR field via the impedance representation. (author) 14. Multi-scale properties of large eddy simulations: correlations between resolved-scale velocity-field increments and subgrid-scale quantities Science.gov (United States) Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca 2018-06-01 We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on the inertial-range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a priori filtered data generated from direct numerical simulations (DNS).
We find that LES data generally agree very well with filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there must be room to improve the SGS modelling to further extend the inertial-range properties for any fixed LES resolution. 15. Simultaneous inversion of seismic velocity and moment tensor using elastic-waveform inversion of microseismic data: Application to the Aneth CO2-EOR field Science.gov (United States) Chen, Y.; Huang, L. 2017-12-01 Moment tensors are key parameters for characterizing CO2-injection-induced microseismic events. Elastic-waveform inversion has the potential to provide accurate estimates of moment tensors. Microseismic waveforms contain information on the source moment tensors and on the wave propagation velocities along the wavepaths. We develop an elastic-waveform inversion method to jointly invert the seismic velocity model and the moment tensor. We first use our adaptive moment-tensor joint inversion method to estimate moment tensors of microseismic events. Our adaptive moment-tensor inversion method jointly inverts multiple microseismic events with similar waveforms within a cluster to reduce inversion uncertainty for microseismic data recorded using a single borehole geophone array. We use this inversion result as the initial model for our elastic-waveform inversion to minimize the cross-correlation-based data misfit between observed and synthetic data. We verify our method using synthetic microseismic data and obtain improved estimates of both the moment tensors and the seismic velocity model. We apply our new inversion method to microseismic data acquired at a CO2-enhanced oil recovery field in Aneth, Utah, using a single borehole geophone array. The results demonstrate that our new inversion method significantly reduces the data misfit compared to the conventional ray-theory-based moment-tensor inversion. 16. Dissipative inertial transport patterns near coherent Lagrangian eddies in the ocean. Science.gov (United States) Beron-Vera, Francisco J; Olascoaga, María J; Haller, George; Farazmand, Mohammad; Triñanes, Joaquín; Wang, Yan 2015-08-01 Recent developments in dynamical systems theory have revealed long-lived and coherent Lagrangian (i.e., material) eddies in incompressible, satellite-derived surface ocean velocity fields. Paradoxically, observed drifting buoys and floating matter tend to create dissipative-looking patterns near oceanic eddies, which appear to be inconsistent with the conservative fluid particle patterns created by coherent Lagrangian eddies. Here, we show that inclusion of inertial effects (i.e., those produced by the buoyancy and size finiteness of an object) in a rotating two-dimensional incompressible flow context resolves this paradox. Specifically, we obtain that anticyclonic coherent Lagrangian eddies attract (repel) negatively (positively) buoyant finite-size particles, while cyclonic coherent Lagrangian eddies attract (repel) positively (negatively) buoyant finite-size particles. We show how these results explain dissipative-looking satellite-tracked surface drifter and subsurface float trajectories, as well as satellite-derived Sargassum distributions. 17.
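A note on record 15 above: a cross-correlation-based data misfit of the kind mentioned there can be sketched as one minus the maximum normalized cross-correlation between observed and synthetic traces, a measure that tolerates pure time shifts but penalizes waveform distortion. The toy below uses a synthetic wavelet and is not the authors' actual objective function.

```python
import numpy as np

def cc_misfit(obs, syn):
    """1 - max normalized cross-correlation: ~0 for a time-shifted copy,
    larger for genuinely different waveforms."""
    obs = obs - obs.mean(); obs = obs / (np.linalg.norm(obs) + 1e-12)
    syn = syn - syn.mean(); syn = syn / (np.linalg.norm(syn) + 1e-12)
    return 1.0 - np.correlate(obs, syn, mode="full").max()

t = np.linspace(0.0, 1.0, 500)
wavelet = np.exp(-200.0 * (t - 0.4) ** 2) * np.sin(60.0 * t)  # toy pulse
shifted = np.roll(wavelet, 25)                                # mis-timed synthetic
noisy = wavelet + 0.3 * np.random.default_rng(1).standard_normal(t.size)

print("shifted synthetic:", round(cc_misfit(wavelet, shifted), 3))  # ~0
print("noisy observation:", round(cc_misfit(wavelet, noisy), 3))    # clearly > 0
```

Minimizing such a misfit over velocity-model and moment-tensor parameters is what "jointly invert" amounts to in practice.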
Application of a simplified calculation for full-wave microtremor H/V spectral ratio based on the diffuse field approximation to identify underground velocity structures Science.gov (United States) Wu, Hao; Masaki, Kazuaki; Irikura, Kojiro; Sánchez-Sesma, Francisco José 2017-12-01 Under the diffuse field approximation, the full-wave (FW) microtremor H/V spectral ratio (H/V) is modeled as the square root of the ratio of the sum of the imaginary parts of the Green's function for the horizontal components to that for the vertical one. For a given layered medium, the FW H/V can be well approximated by the surface-wave (SW) H/V of a "cap-layered" medium, which consists of the given layered medium and a new higher-velocity half-space (cap layer) at large depth. Because the contribution of surface waves can be simply obtained by the residue theorem, the computation of the SW H/V of the cap-layered medium is faster than that of the FW H/V evaluated by the discrete wavenumber method and the contour integration method. The simplified computation of the SW H/V was then applied to identify the underground velocity structures at six KiK-net strong-motion stations. The inverted underground velocity structures were used to evaluate FW H/Vs, which were consistent with the SW H/Vs of the corresponding cap-layered media. A previous study on surface-wave H/Vs, based on the assumption of distributed surface sources and a fixed Rayleigh-to-Love wave amplitude ratio for horizontal motions, showed good agreement with the SW H/Vs of our study. The consistency between observed and theoretical spectral ratios, such as the H/V spectral ratio of earthquake motions and the spectral ratio of horizontal motions between the surface and the bottom of a borehole, indicated that the underground velocity structures identified from the SW H/V of the cap-layered medium were well resolved by the new method. 18. Study on flow-induced vibration of large-diameter pipings in a sodium-cooled fast reactor. Influence of elbow curvature on velocity fluctuation field International Nuclear Information System (INIS) Ono, Ayako; Kimura, Nobuyuki; Kamide, Hideki; Tobita, Akira 2010-02-01 The main cooling system of the Japan Sodium-cooled Fast Reactor (JSFR) consists of two loops to reduce the plant construction cost. In the design of JSFR, the sodium coolant velocity exceeds 9 m/s in the large-diameter (1.3 m) primary hot leg pipe. The maximum Reynolds number in the piping reaches 4.2×10^7. The hot leg pipe has a 90-degree elbow with a curvature ratio of r/D = 1.0, a so-called 'short elbow', which enables a compact reactor vessel. In sodium-cooled fast reactors, the system pressure is so low that the piping in the cooling system is thinner than that in LWRs. Under such conditions, flow-induced vibration (FIV) is a concern at the short elbow. The evaluation of the structural integrity of the piping in JSFR should be based on a mechanistic understanding of FIV at the elbow. Knowledge of the fluctuation intensity and the spectra of the velocity and pressure fluctuations is essential for grasping the mechanism of the FIV. In this study, water experiments were conducted. Two types of 1/8-scale elbows with different curvature ratios, r/D = 1.0 and 1.5, were used to investigate the influence of curvature on the velocity fluctuation at the elbow. The velocity fields in the elbows were measured using a high-speed PIV method.
Unsteady behavior of the secondary flow at the elbow outlet and of the separation flow at the inner wall of the elbow was observed in the two types of elbows. It was found that the growth of the secondary flow correlated with the flow fluctuation near the inside wall of the elbow. (author) 19. Seismic microzonation and velocity models of El Ejido area (SE Spain) from the diffuse-field H/V method Science.gov (United States) García-Jerez, Antonio; Seivane, Helena; Navarro, Manuel; Piña-Flores, José; Luzón, Francisco; Vidal, Francisco; Posadas, Antonio M.; Aranda, Carolina 2016-04-01 El Ejido town is located in the Campo de Dalías coastal plain (Almería province, SE Spain), emplaced in one of the most seismically active regions of Spain. The municipality has 84000 inhabitants and presented a high growth rate during the last twenty years. The most recent intense seismic activity close to this town occurred in 1993 and 1994, with events of Mb = 4.9 and Mb = 5.0, respectively. To provide a basis for site-specific hazard analysis, we first carried out a seismic microzonation of this town in terms of predominant periods and geotechnical properties. The predominant periods map was obtained from ambient noise observations on a grid of 250 x 250 m in the main urban area, and sparser measurements on the outskirts. These broad-band records, each about 20 minutes long, were analyzed by using the horizontal-to-vertical spectral ratio technique (H/V). Dispersion curves obtained from two array measurements of ambient noise and borehole data provided additional geophysical information. All the surveyed points in the town were found to have relatively long predominant periods ranging from 0.8 to 2.3 s and growing towards the SE. Secondary high-frequency (>2 Hz) peaks were found at about 10% of the points only. On the other hand, Vs30 values of 550-650 m/s were estimated from the array records, corresponding to cemented sediments and medium-hard rocks. The local S-wave velocity structure has been inverted from the H/V curves for a subset of the measurement sites. We used an innovative full-wavefield method based on the diffuse-wavefield approximation (Sánchez-Sesma et al., 2011) combined with the simulated annealing algorithm. Shallow seismic velocities and deep borehole data were used as constraints. The results show that the low-frequency resonances are related to the impedance contrast between several hundred meters of medium-hard sedimentary rocks (marls and calcarenites) and the stiffer basement of the basin, which dips to the SE. These 20. Lagrangian motion, coherent structures, and lines of persistent material strain. Science.gov (United States) Samelson, R M 2013-01-01 Lagrangian motion in geophysical fluids may be strongly influenced by coherent structures that support distinct regimes in a given flow. The problems of identifying and demarcating Lagrangian regime boundaries associated with dynamical coherent structures in a given velocity field can be studied using approaches originally developed in the context of the abstract geometric theory of ordinary differential equations. An essential insight is that when coherent structures exist in a flow, Lagrangian regime boundaries may often be indicated as material curves on which the Lagrangian-mean principal-axis strain is large. This insight is the foundation of many numerical techniques for identifying such features in complex observed or numerically simulated ocean flows.
The basic theoretical ideas are illustrated with a simple, kinematic traveling-wave model. The corresponding numerical algorithms for identifying candidate Lagrangian regime boundaries and lines of principal Lagrangian strain (also called Lagrangian coherent structures) are divided into parcel and bundle schemes; the latter include the finite-time and finite-size Lyapunov exponent/Lagrangian strain (FTLE/FTLS and FSLE/FSLS) metrics. Some aspects and results of oceanographic studies based on these approaches are reviewed, and the results are discussed in the context of oceanographic observations of dynamical coherent structures. 1. Visualization and measurement of liquid velocity field of gas-liquid metal two-phase flow using neutron radiography International Nuclear Information System (INIS) Saito, Yasushi; Suzuki, Tohru; Matsubayashi, Masahito 2000-01-01 In a core melt accident of a fast breeder reactor, a possibility of re-criticality is anticipated in the molten fuel-steel mixture pool. One of the mechanisms to suppress the re-criticality is the boiling of steel in the molten fuel-steel mixture pool because of the negative void reactivity effect. To evaluate the reactivity change due to boiling, it is necessary to know the characteristics of gas-liquid two-phase flow in the molten fuel-steel mixture pool. For this purpose, boiling bubbles in a molten fuel-steel mixture pool were simulated by adiabatic gas bubbles in a liquid metal pool to study the basic characteristics of the gas-liquid metal two-phase mixture. Visualization of the two-phase mixture and measurements of the liquid-phase velocity and void fraction were conducted by using neutron radiography and image processing techniques. From these measurements, the basic characteristics of the gas-liquid metal two-phase mixture were clarified. (author) 2. Application of the artificial satellite of the earth to determine the velocity of the gravitational interaction within newtonian gravitational fields International Nuclear Information System (INIS) Cristea, Gh. 1975-01-01 In the first part of this paper, additional data are given concerning a gravimeter consisting of a pendulum-laser set proposed in a previous paper of the author (1). This gravimeter could have a sensitivity of 0.1 microgal or even 0.01 microgal in the case of statistical measurements. If processing by an on-line computer is used, the pendulum-laser can constitute a gravimeter which, used in statistical measurements over a long time interval, could reach a sensitivity of 10^-12 g. The second part of the paper points out the advantages resulting from determining the velocity of the gravitational interaction in an artificial satellite of the earth. The main advantage is the very fact that this measurement can be achieved by means of existing gravimeters. The massive reduction of the time error is due to the increase of the 'sinusoid' frequency resulting from the recording being made on a gravimeter set on an artificial satellite orbiting the earth in about 90 minutes 3. Coherent light microscopy CERN Document Server Ferraro, Pietro; Zalevsky, Zeev 2011-01-01 This book deals with the latest achievements in the field of optical coherent microscopy. While many other books exist on microscopy and imaging, this book provides a unique resource dedicated solely to this subject. Similarly, many books describe applications of holography, interferometry and speckle to metrology but do not focus on their use for microscopy.
The coherent light microscopy reference provided here does not focus on the experimental mechanics of such techniques but instead is meant to provide a user's manual to illustrate the strengths and capabilities of developing techniques. 4. The effect of divalent ions on the elasticity and pore collapse of chalk evaluated from compressional wave velocity and low-field Nuclear Magnetic Resonance (NMR) DEFF Research Database (Denmark) 2015-01-01 The effects of divalent ions on the elasticity and the pore collapse of chalk were studied through rock-mechanical testing and low-field Nuclear Magnetic Resonance (NMR) measurements. Chalk samples saturated with deionized water and brines containing sodium, magnesium, calcium and sulfate ions were...... subjected to petrophysical experiments, rock mechanical testing and low-field NMR spectroscopy. Petrophysical characterization involving ultrasonic elastic wave velocities in unconfined conditions, porosity and permeability measurements, specific surface and carbonate content determination and backscatter...... electron microscopy of the materials was conducted prior to the experiments. The iso-frame model was used to predict the bulk moduli in dry and saturated conditions from the compressional modulus of water-saturated rocks. The effective stress coefficient, as introduced by Biot, was also determined from... 5. Scaling Regimes in the Model of Passive Scalar Advected by the Turbulent Velocity Field with Finite Correlation Time. Influence of Helicity in Two-Loop Approximation CERN Document Server Chkhetiani, O G; Jurcisinova, E; Jurcisin, M; Mazzino, A; Repasan, M 2005-01-01 The advection of a passive scalar quantity by incompressible helical turbulent flow has been investigated in the framework of an extended Kraichnan model. Statistical fluctuations of the velocity field are assumed to have a Gaussian distribution with zero mean and a defined noise with finite-time correlation. Actual calculations have been done up to the two-loop approximation in the framework of the field-theoretic renormalization group approach. It turned out that the space parity violation (helicity) of the stochastic environment does not affect the anomalous scaling, which is the peculiar attribute of the corresponding model without helicity. However, the stability of the asymptotic regimes, where anomalous scaling takes place, and the effective diffusivity strongly depend on the amount of helicity. 6. On the estimation of wall pressure coherence using time-resolved tomographic PIV Science.gov (United States) Pröbsting, Stefan; Scarano, Fulvio; Bernardini, Matteo; Pirozzoli, Sergio 2013-07-01 Three-dimensional time-resolved velocity field measurements are obtained using a high-speed tomographic Particle Image Velocimetry (PIV) system on a fully developed flat plate turbulent boundary layer for the estimation of wall pressure fluctuations. The work focuses on the applicability of tomographic PIV to compute the coherence of pressure fluctuations, with attention to the estimation of the streamwise and spanwise coherence lengths. The latter are required for estimates of aeroacoustic noise radiation by boundary layers and trailing edge flows, but are also of interest for vibro-structural problems. The pressure field is obtained by solving the Poisson equation for incompressible flows, where the source terms are provided by time-resolved velocity field measurements.
Measured 3D velocity data are compared to results obtained from planar PIV and a Direct Numerical Simulation (DNS) at similar Reynolds number. An improved method for the estimation of the material acceleration, based on a least squares estimator of the velocity derivative along a particle trajectory, is proposed and applied. Computed surface pressure fluctuations are further verified by means of simultaneous measurements by a pinhole microphone and compared to the DNS results and a semi-empirical model available from literature. The correlation coefficient for the reconstructed pressure time series with respect to pinhole microphone measurements attains approximately 0.5 for the band-pass filtered signal over the range of frequencies resolved by the velocity field measurements. Scaled power spectra of the pressure at a single point compare favorably to the DNS results and those available from literature. Finally, the coherence of surface pressure fluctuations and the resulting span- and streamwise coherence lengths are estimated and compared to semi-empirical models and DNS results.

7. Granular activated carbon as nucleating agent for aerobic sludge granulation: Effect of GAC size on velocity field differences (GAC versus flocs) and aggregation behavior. Science.gov (United States) Zhou, Jia-Heng; Zhao, Hang; Hu, Miao; Yu, Hai-Tian; Xu, Xiang-Yang; Vidonish, Julia; Alvarez, Pedro J J; Zhu, Liang 2015-12-01 Initial cell aggregation plays an important role in the formation of aerobic granules. In this study, three parallel aerobic granular sludge reactors treating low-strength wastewater were established using granular activated carbon (GAC) of different sizes as the nucleating agent. A novel visual quantitative evaluation method was used to discern how GAC size affects velocity field differences (GAC versus flocs) and aggregation behavior during sludge granulation. Results showed that sludge granulation was significantly enhanced by addition of 0.2 mm GAC. However, there was no obvious improvement in granulation in the reactor amended with 0.6 mm GAC. Hydraulic analysis revealed that increase of GAC size enhanced the velocity field difference between flocs and GAC, which decreased the lifecycle and fraction of flocs-GAC aggregates. Overall, based on analysis of aggregation behavior, GAC of suitable sizes (0.2 mm) can serve as the nucleating agent to accelerate flocs-GAC coaggregation and formation of aerobic granules.

8. Some remarks on quantum coherence theory International Nuclear Information System (INIS) Burzynski, A. 1982-01-01 This paper is devoted to the basic topics connected with coherence in quantum mechanics and quantum theory of radiation. In particular the formalism of the normal ordered coherence functions in cases of one and many degrees of freedom is described in detail. A few examples illustrate the analysis of the coherence properties of the various quantum states of the field of radiation. (author)

9. Spin Coherence in Semiconductor Nanostructures National Research Council Canada - National Science Library Flatte, Michael E 2006-01-01 … dots, tuning of spin coherence times for electron spin, tuning of dipolar magnetic fields for nuclear spin, spontaneous spin polarization generation and new designs for spin-based teleportation and spin transistors…
10. A self-consistent two-dimensional resistive fluid theory of field-aligned potential structures including charge separation and magnetic and velocity shear International Nuclear Information System (INIS) Hesse, M.; Birn, J.; Schindler, K. 1990-01-01 A self-consistent two-fluid theory that includes the magnetic field and shear patterns therein is developed to model stationary electrostatic structures with field-aligned potential drops. Shear flow is also included in the theory since this seems to be a prominent feature of the structures of interest. In addition, Ohmic dissipation, a Hall term and pressure gradients in a generalized Ohm's law, modified for cases without quasi-neutrality, are included. In the analytic theory, the electrostatic force is balanced by field-aligned pressure gradients, i.e., thermal effects in the direction of the magnetic field, and by pressure gradients and magnetic stresses in the perpendicular direction. Within this theory simple examples of applications are presented to demonstrate the kind of solutions resulting from the model. The results show how the effects of charge separation and shear in the magnetic field and the velocity can be combined to form self-consistent structures such as are found to exist above the aurora, and suggested also in association with solar flares.

11. Enhanced rotation velocities and electric fields, sub-neoclassical energy transport and density pinch from revisited neoclassical theory International Nuclear Information System (INIS) Rogister, A. 1998-01-01 We show that the large negative radial electric fields which are measured in front of the separatrix in H-mode discharges are easily explainable on the basis of the rigorous 'revisited' neoclassical theory, including finite Larmor radii and inertia effects, that was published earlier (Rogister A 1994 Phys. Plasmas 1 619); the same theory naturally leads to sub-neoclassical energy transport and novel particle pinch terms. The calculation has so far been developed only in the high collisionality regime: step sizes comparable to gradient-scale sizes are therefore not required to explain observed properties! Based on the analysis, we conclude that the radial electric field profile develops a well in front of the separatrix when the plasma is unable to sustain ambipolar flows otherwise. (author)

12. Development and application of multiple-quantum coherence techniques for in vivo sodium MRI at high and ultra-high field strengths International Nuclear Information System (INIS) Fiege, Daniel Pascal 2014-01-01 Sodium magnetic resonance imaging (MRI) can quantify directly and non-invasively tissue sodium concentration levels in vivo. Tissue sodium concentration levels are tightly regulated and have been shown to be directly linked to cell viability. The intracellular sodium concentration is an even more specific parameter. The triple-quantum filtering (TQF) technique for sodium MRI has been suggested to detect the intracellular sodium only. Despite their huge potential, only few studies with sodium MRI have been carried out because of the long acquisition times of sodium MRI techniques, their susceptibility to static field inhomogeneities and their limited signal-to-noise ratio compared to proton MRI.
Three novel techniques that address these limitations are presented in this thesis: (a) a sodium MRI sequence that acquires simultaneously both tissue sodium concentration maps and TQF images, (b) a phase-rotation scheme that allows for the acquisition of static field inhomogeneity insensitive TQF images, and (c) the combination of the two aforementioned techniques with optimised parameters at the ultra-high field strength of 9.4 T in vivo. The SISTINA sequence - simultaneous single-quantum and triple-quantum filtered imaging of ²³Na - is presented. The sequence is based on a TQF acquisition with a Cartesian readout and a three-pulse preparation. The delay between the first two pulses is used for an additional ultra-short echo time 3D radial readout. The method was implemented on a 4 T scanner. It is validated in phantoms and in healthy volunteers that this additional readout does not interfere with the TQ preparation. The method is applied to three cases of brain tumours. The tissue sodium concentration maps and TQF images are presented and compared to ¹H MR and positron emission tomography images. The three-pulse TQF preparation is sensitive to static field inhomogeneities. This problem is caused by destructive interference of different coherence pathways. To address…

13. Coherent control of third-harmonic-generation by a waveform-controlled two-colour laser field International Nuclear Information System (INIS) Chen, W-J; Chen, W-F; Pan, C-L; Lin, R-Y; Lee, C-K 2013-01-01 We investigate generation of the third harmonic (TH; λ = 355 nm) signal by two-colour excitation (λ = 1064 nm and its second harmonic, λ = 532 nm) in argon gas, with emphasis on the influence of relative phases and intensities of the two-colour pump on the third-order nonlinear frequency conversion process. Perturbative nonlinear optics predicts that the TH signal will oscillate periodically with the relative phases of the two-colour driving laser fields due to the interference of TH signals from a direct third-harmonic-generation (THG) channel and a four-wave mixing (FWM) channel. For the first time, we show unequivocal experimental evidence of this effect. A modulation level as high as 0.35 is achieved by waveform control of the two-colour laser field. The modulation also offers a promising way to retrieve the relative phase value of the two-colour laser field. (letter)
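The two-channel interference picture in the preceding abstract lends itself to a quick numerical illustration. The sketch below is mine, not the authors' analysis: it assumes the detected third-harmonic field is a direct-THG amplitude plus an FWM amplitude with a controllable relative phase, and the amplitude ratio of 0.18 is chosen only so that the resulting modulation matches the reported level of about 0.35.

```python
import numpy as np

# Toy model (assumed, not from the paper): the detected third-harmonic field is
# the coherent sum of a direct THG channel and a four-wave-mixing channel whose
# relative phase is set by the two-colour waveform.
def th_signal(phi, a_thg=1.0, a_fwm=0.18):
    """Third-harmonic intensity versus relative phase of the two-colour field."""
    return np.abs(a_thg + a_fwm * np.exp(1j * phi)) ** 2

phi = np.linspace(0.0, 4.0 * np.pi, 400)
s = th_signal(phi)
modulation = (s.max() - s.min()) / (s.max() + s.min())
print(f"modulation level: {modulation:.2f}")  # ~0.35 for the assumed ratio
```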
14. Static and Dynamic Reservoir Characterization Using High Resolution P-Wave Velocity Data in Delhi Field, LA Science.gov (United States) Hussain, S.; Davis, T. 2012-12-01 Static and dynamic reservoir characterization was done on high resolution P-wave seismic data in Delhi Field, LA to study the complex stratigraphy of the Holt-Bryant sands and to delineate the CO2 flow path. The field is undergoing CO2 injection for enhanced oil recovery. The seismic data was bandwidth extended by Geotrace to decrease the tuning thickness effect. Once the authenticity of the added frequencies in the data was determined, the interpretation helped map thin Tuscaloosa and Paluxy sands. Cross-equalization was done on the baseline and monitor surveys to remove the non-repeatable noise in the data. Acoustic impedance (AI) inversion was done on the baseline and monitor surveys to map the changes in AI with CO2 injection in the field. Figure 1 shows the AI percentage change at Base Paluxy. The analysis helped identify areas that were not being swept by CO2. Figure 2 shows the CO2 flow paths in Tuscaloosa formation. The percentage change of AI with CO2 injection and pressure increase corresponded with the fluid substitution modeling results. Time-lapse interpretation helped in delineating the channels, high permeability zones and the bypassed zones in the reservoir.
Figure 1: P-impedance percentage difference map with a 2 ms window centered at the base of Paluxy with the production data from June 2010 overlain; the black dashed line is the oil-water contact; notice the negative impedance change below the OWC. The lighter yellow color shows the area where Paluxy is not being swept completely.
Figure 2: P-impedance percentage difference map at TUSC 7 top; the white triangles are TUSC 7 injectors and the white circles are TUSC 7 producers; the black polygons show the flow paths of CO2.

15. Characterisation of dispersive systems using a coherer Directory of Open Access Journals (Sweden) Nikolić Pantelija M. 2002-01-01 The possibility of characterization of aluminium powders using a horizontal coherer has been considered. Al powders of known dimension were treated with a high frequency electromagnetic field or with a DC electric field, which were increased until a dielectric breakdown occurred. Using a multifunctional card PC-428 Electronic Design and a suitable interface between the coherer and PC, the activation time of the coherer was measured as a function of powder dimension and the distance between the coherer electrodes. It was also shown that the average dimension of powders of unknown size could be determined using the coherer.

16. Prediction model of velocity field around circular cylinder over various Reynolds numbers by fusion convolutional neural networks based on pressure on the cylinder Science.gov (United States) Jin, Xiaowei; Cheng, Peng; Chen, Wen-Li; Li, Hui 2018-04-01 A data-driven model is proposed for the prediction of the velocity field around a cylinder by fusion convolutional neural networks (CNNs) using measurements of the pressure field on the cylinder. The model is based on the close relationship between the Reynolds stresses in the wake, the wake formation length, and the base pressure. Numerical simulations of flow around a cylinder at various Reynolds numbers are carried out to establish a dataset capturing the effect of the Reynolds number on various flow properties. The time series of pressure fluctuations on the cylinder is converted into a grid-like spatial-temporal topology to be handled as the input of a CNN. A CNN architecture composed of a fusion of paths with and without a pooling layer is designed. This architecture can capture both accurate spatial-temporal information and the features that are invariant of small translations in the temporal dimension of pressure fluctuations on the cylinder. The CNN is trained using the computational fluid dynamics (CFD) dataset to establish the mapping relationship between the pressure fluctuations on the cylinder and the velocity field around the cylinder. Adam (adaptive moment estimation), an efficient method for processing large-scale and high-dimensional machine learning problems, is employed to implement the optimization algorithm. The trained model is then tested over various Reynolds numbers. The predictions of this model are found to agree well with the CFD results, and the data-driven model successfully learns the underlying flow regimes, i.e., the relationship between wake structure and pressure experienced on the surface of a cylinder is well established.
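To make the fusion idea concrete, here is a minimal PyTorch sketch (my own illustration, not the authors' network; the input grid size, channel counts and output dimension are assumptions): one convolutional path uses pooling for robustness to small temporal shifts of the pressure topology, the other omits pooling to retain fine detail, and their features are concatenated before a dense layer regresses the velocity field.

```python
import torch
import torch.nn as nn

# Sketch of a "fusion CNN": two paths, one with pooling and one without,
# fused to map a grid of surface-pressure histories to velocity-field values.
class FusionCNN(nn.Module):
    def __init__(self, n_velocity_points: int = 2 * 64 * 64):
        super().__init__()
        self.pooled_path = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # invariance to small shifts
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.direct_path = nn.Sequential(           # no pooling: keeps detail
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Linear(2 * 32 * 8 * 8, n_velocity_points)

    def forward(self, pressure_maps):
        a = self.pooled_path(pressure_maps).flatten(1)
        b = self.direct_path(pressure_maps).flatten(1)
        return self.head(torch.cat([a, b], dim=1))

# Pressure fluctuations arranged as a (sensor x time) grid, batch of 4.
x = torch.randn(4, 1, 32, 128)
model = FusionCNN()
print(model(x).shape)  # torch.Size([4, 8192])
```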
17. Ultrahigh-Resolution Magnetic Resonance in Inhomogeneous Magnetic Fields: Two-Dimensional Long-Lived-Coherence Correlation Spectroscopy Science.gov (United States) Chinthalapalli, Srinivas; Bornet, Aurélien; Segawa, Takuya F.; Sarkar, Riddhiman; Jannin, Sami; Bodenhausen, Geoffrey 2012-07-01 A half-century quest for improving resolution in Nuclear Magnetic Resonance (NMR) and Magnetic Resonance Imaging (MRI) has enabled the study of molecular structures, biological interactions, and fine details of anatomy. This progress largely relied on the advent of sophisticated superconducting magnets that can provide stable and homogeneous fields with temporal and spatial variations below ΔB0/B0 ≈ 10⁻⁹. Two-dimensional long-lived-coherence correlation spectroscopy (LLC-COSY) opens the way to overcome both inhomogeneous and homogeneous broadening, which arise from local variations in static fields and fluctuating dipole-dipole interactions, respectively. LLC-COSY makes it possible to obtain ultrahigh resolution two-dimensional spectra, with linewidths on the order of Δν = 0.1 to 1 Hz, even in very inhomogeneous fields (ΔB0/B0 > 10 ppm, or 5000 Hz at 9.7 T), and can improve resolution by a factor up to 9 when the homogeneous linewidths are determined by dipole-dipole interactions. The resulting LLC-COSY spectra display chemical shift differences and scalar couplings in two orthogonal dimensions, like in "J spectroscopy." LLC-COSY does not require any sophisticated gradient switching or frequency-modulated pulses. Applications to in-cell NMR and to magnetic resonance spectroscopy (MRS) of selected volume elements in MRI appear promising, particularly when susceptibility variations tend to preclude high resolution.

18. Globally coherent short duration magnetic field transients and their effect on ground based gravitational-wave detectors International Nuclear Information System (INIS) Kowalska-Leszczynska, Izabela; Bulik, Tomasz; Bizouard, Marie-Anne; Robinet, Florent; Christensen, Nelson; Rohde, Maximilian; Coughlin, Michael; Gołkowski, Mark; Kubisz, Jerzy; Kulak, Andrzej; Mlynarczyk, Janusz 2017-01-01 It has been recognized that the magnetic fields from the Schumann resonances could affect the search for a stochastic gravitational-wave background by LIGO and Virgo. Presented here are the observations of short duration magnetic field transients that are coincident in the magnetometers at the LIGO and Virgo sites. Data from low-noise magnetometers in Poland and Colorado, USA, are also used and show short duration magnetic transients of global extent. We measure at least 2.3 coincident (between Poland and Colorado) magnetic transient events per day where one of the pulses exceeds 200 pT. Given the recently measured values of the magnetic coupling to differential arm motion for Advanced LIGO, there would be a few events per day that would appear simultaneously at the gravitational-wave detector sites and could move the test masses of order 10⁻¹⁸ m. We confirm that in the advanced detector era short duration transient gravitational-wave searches must account for correlated magnetic field noise in the global detector network. (paper)

19. Monitoring of the spatio-temporal change in the interplate coupling at northeastern Japan subduction zone based on the spatial gradients of surface velocity field Science.gov (United States) Iinuma, Takeshi 2018-04-01 A monitoring method to grasp the spatio-temporal change in the interplate coupling in a subduction zone based on the spatial gradients of surface displacement rate fields is proposed.
I estimated the spatio-temporal change in the interplate coupling along the plate boundary in northeastern (NE) Japan by applying the proposed method to the surface displacement rates based on global positioning system observations. The gradient of the surface velocities is calculated in each swath configured along the direction normal to the Japan Trench for time windows such as 0.5, 1, 2, 3 and 5 yr, shifted by one week, during the period of 1997-2016. The gradient of the horizontal velocities is negative and has a large magnitude when the interplate coupling at the shallow part (less than approximately 50 km in depth) beneath the profile is strong, and the sign of the gradient of the vertical velocity is sensitive to the existence of the coupling at the deep part (greater than approximately 50 km in depth). The trench-parallel variation of the spatial gradients of a displacement rate field clearly corresponds to the trench-parallel variation of the amplitude of the interplate coupling on the plate interface, as well as the rupture areas of previous interplate earthquakes. Temporal changes in the trench-parallel variation of the spatial gradient of the displacement rate correspond to the strengthening or weakening of the interplate coupling. We can monitor the temporal change in the interplate coupling state by calculating the spatial gradients of the surface displacement rate field, to some extent, without performing inversion analyses that apply certain constraint conditions, which sometimes cause over- and/or underestimation in areas of limited spatial resolution far from the observation network. The results of the calculation confirm known interplate events in the NE Japan subduction zone, such as the post-seismic slip of the 2003 M8.0 Tokachi-oki and 2005 M7.2 Miyagi…
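The swath-gradient diagnostic described above reduces, in its simplest form, to a least-squares slope of velocity against trench-normal distance. The sketch below is my minimal version with synthetic numbers, not the author's processing chain; it illustrates the sign convention: trenchward motion decaying landward, as over a locked patch, yields a negative horizontal gradient.

```python
import numpy as np

# Within one trench-normal swath, estimate the spatial gradient of horizontal
# GPS velocities as the least-squares slope of velocity vs. distance.
def swath_velocity_gradient(distance_km, velocity_mm_yr):
    """Least-squares d(velocity)/d(distance) in (mm/yr)/km within one swath."""
    A = np.vstack([distance_km, np.ones_like(distance_km)]).T
    slope, _intercept = np.linalg.lstsq(A, velocity_mm_yr, rcond=None)[0]
    return slope

# Synthetic example: landward decay of trenchward motion over a locked patch.
x = np.linspace(50.0, 300.0, 12)                  # distance from trench, km
v = 30.0 * np.exp(-x / 150.0) + np.random.default_rng(0).normal(0, 0.5, x.size)
print(f"gradient: {swath_velocity_gradient(x, v):+.3f} (mm/yr)/km")  # negative
```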
20. Difference of horizontal-to-vertical spectral ratios of observed earthquakes and microtremors and its application to S-wave velocity inversion based on the diffuse field concept Science.gov (United States) Kawase, Hiroshi; Mori, Yuta; Nagashima, Fumiaki 2018-01-01 We have been discussing the validity of using the horizontal-to-vertical spectral ratios (HVRs) as a substitute for S-wave amplifications after Nakamura first proposed the idea in 1989. Until a recent proposal based on the diffuse field concept, however, no formula for HVRs had been derived that fully utilized their physical characteristics. There is another source of confusion that comes from the mixed use of HVRs from earthquakes and microtremors, although their wave fields are hardly the same. In this study, we compared HVRs from observed microtremors (MHVR) and those from observed earthquake motions (EHVR) at one hundred K-NET and KiK-net stations. We found that MHVR and EHVR share similarities, especially up to their first peak frequency, but have significant differences in the higher frequency range. This is because microtremors mainly consist of surface waves, so that peaks associated with higher modes would not be prominent, while seismic motions mainly consist of upwardly propagating plane body waves, so that higher mode resonances can be seen at high frequencies. We defined here the spectral amplitude ratio between them as EMR and calculated their average. We categorize all the sites into five bins by their fundamental peak frequencies in MHVR. Once we obtained EMRs for the five categories, we back-calculated EHVRs from MHVRs, which we call pseudo-EHVRs (pEHVR). We found that pEHVR is much closer to EHVR than MHVR. Then we use our inversion code to invert the one-dimensional S-wave velocity structures from EHVRs based on the diffuse field concept. We also applied the same code to pEHVRs and MHVRs for comparison. We found that pEHVRs yield velocity structures much closer to those by EHVRs than those by MHVRs. This is natural since what we have done up to here is circular except for the average operation in EMRs. Finally, we showed independent examples of data not used in the EMR calculation, where better ground structures were successfully identified from p…

1. Coherence of light. 2. ed. International Nuclear Information System (INIS) Perina, J. 1985-01-01 This book puts the theory of coherence of light on a rigorous mathematical footing. It deals with the classical and quantum theories and with their inter-relationships, including many results from the author's own research. Particular attention is paid to the detection of optical fields, using the correlation functions, photocount statistics and coherent states. Radiometry with light fields of arbitrary states of coherence is discussed and the coherent state methods are demonstrated by photon statistics of radiation in random and nonlinear media, using the Heisenberg-Langevin and Fokker-Planck approaches to the interaction of radiation with matter. Many experimental and theoretical results are compared. A full list of references to theoretical and experimental literature is provided. The book is intended for researchers and postgraduate students in the fields of quantum optics, quantum electronics, statistical optics, nonlinear optics, optical communication and optoelectronics. (Auth.)

2. Challenges and advantages in wide-field optical coherence tomography angiography imaging of the human retinal and choroidal vasculature at 1.7-MHz A-scan rate Science.gov (United States) Poddar, Raju; Migacz, Justin V.; Schwartz, Daniel M.; Werner, John S.; Gorczynska, Iwona 2017-10-01 We present noninvasive, three-dimensional, depth-resolved imaging of human retinal and choroidal blood circulation with a swept-source optical coherence tomography (OCT) system at 1065-nm center wavelength. Motion contrast OCT imaging was performed with the phase-variance OCT angiography method. A Fourier-domain mode-locked light source was used to enable an imaging rate of 1.7 MHz. We experimentally demonstrate the challenges and advantages of wide-field OCT angiography (OCTA). In the discussion, we consider acquisition time, scanning area, scanning density, and their influence on visualization of selected features of the retinal and choroidal vascular networks. The OCTA imaging was performed with a field of view of 16 deg (5 mm×5 mm) and 30 deg (9 mm×9 mm). Data were presented in en face projections generated from single volumes and in en face projection mosaics generated from up to 4 datasets. OCTA imaging at 1.7 MHz A-scan rate was compared with results obtained from a commercial OCTA instrument and with conventional ophthalmic diagnostic methods: fundus photography, fluorescein, and indocyanine green angiography. Comparison of images obtained from all methods is demonstrated using the same eye of a healthy volunteer. For example, imaging of retinal pathology is presented in three cases of advanced age-related macular degeneration.
3. Coherence and Sense of Coherence DEFF Research Database (Denmark) Dau, Susanne 2014-01-01 Constraints in the implementation of models of blended learning can be explained by several causes, but in this paper, it is illustrated that lack of sense of coherence is a major factor behind these constraints, along with the referential whole of the perceived learning environments. The question exa…

4. The diagnostic use of choroidal thickness analysis and its correlation with visual field indices in glaucoma using spectral domain optical coherence tomography. Directory of Open Access Journals (Sweden) Zhongjing Lin To evaluate the quantitative characteristics of choroidal thickness in primary open-angle glaucoma (POAG), normal tension glaucoma (NTG) and in normal eyes using spectral-domain optical coherence tomography (SD-OCT). To evaluate the diagnostic ability of choroidal thickness in glaucoma and to determine the correlation between choroidal thickness and visual field parameters in glaucoma. A total of 116 subjects including 40 POAG, 30 NTG and 46 healthy subjects were enrolled in this study. Choroidal thickness measurements were acquired in the macular and peripapillary regions using SD-OCT. All subjects underwent white-on-white (W/W) and blue-on-yellow (B/Y) visual field tests using the Humphrey Field Analyzer. The receiver operating characteristic (ROC) curve and the area under the curve (AUC) were generated to assess the discriminating power of choroidal thickness for glaucoma. Pearson's correlation coefficients were calculated to assess the structure-function correlation for glaucoma patients. No significant differences were observed for macular choroidal thickness among the different groups (all P > 0.05). Regarding the peripapillary choroidal thickness (PPCT), significant differences were observed among the three groups (all P < 0.05). PPCT showed no significant correlations with W/W visual field indices (all P > 0.05), but showed significant correlations with B/Y MD (all P < 0.05). In the early glaucomatous eyes, PPCT showed significant correlations with W/W MD and B/Y MD (all P < 0.05). In our study, peripapillary choroidal thickness measured on OCT showed a low to moderate but statistically significant diagnostic power and a significant correlation with blue-on-yellow visual field indices in glaucoma. This may indicate a potential adjunct role for peripapillary choroidal thickness in glaucoma diagnosis.

5. ‘Postage-stamp PIV’: small velocity fields at 400 kHz for turbulence spectra measurements Science.gov (United States) Beresh, Steven J.; Henfling, John F.; Spillers, Russell W.; Spitzer, Seth M. 2018-03-01 Time-resolved particle image velocimetry recently has been demonstrated in high-speed flows using a pulse-burst laser at repetition rates reaching 50 kHz. Turbulent behavior can be measured at still higher frequencies if the field of view is greatly reduced and lower laser pulse energy is accepted. Current technology allows image acquisition at 400 kHz for sequences exceeding 4000 frames but for an array of only 128 × 120 pixels, giving the moniker of ‘postage-stamp PIV’. The technique has been tested far downstream of a supersonic jet exhausting into a transonic crossflow. Two-component measurements appear valid until 120 kHz, at which point a noise floor emerges whose magnitude is dependent on the reduction of peak locking. Stereoscopic measurement offers three-component data for turbulent kinetic energy spectra, but exhibits a reduced signal bandwidth and higher noise in the out-of-plane component due to the oblique camera images. The resulting spectra reveal two regions exhibiting power-law dependence describing the turbulent decay. The frequency response of the present measurement configuration exceeds nearly all previous velocimetry measurements in high speed flow.
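The spectral step implied by "turbulence spectra measurements" can be sketched as follows; the parameters and the surrogate signal are my assumptions, not the authors' data pipeline. The sketch uses Welch-averaged periodograms of a point velocity series sampled at 400 kHz, with a power-law slope fitted over an assumed band.

```python
import numpy as np
from scipy.signal import welch

# Welch estimate of a turbulence power spectrum from a 400 kHz velocity series.
fs = 400_000.0                                  # sampling rate, Hz
t = np.arange(2**18) / fs
rng = np.random.default_rng(1)
u = np.cumsum(rng.normal(size=t.size))          # surrogate velocity signal
u = u - u.mean()

f, psd = welch(u, fs=fs, nperseg=4096)          # averaged periodograms
# Power-law fit over an assumed band, e.g. 5-50 kHz:
band = (f > 5e3) & (f < 5e4)
slope = np.polyfit(np.log(f[band]), np.log(psd[band]), 1)[0]
print(f"spectral slope in band: {slope:.2f}")   # ~-2 for this surrogate
```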
6. Coherent control of D2/H2 dissociative ionization by a mid-infrared two-color laser field International Nuclear Information System (INIS) Wanie, Vincent; Ibrahim, Heide; Beaulieu, Samuel; Thiré, Nicolas; Schmidt, Bruno E; Légaré, François; Deng, Yunpei; Alnaser, Ali S; Litvinyuk, Igor V; Tong, Xiao-Min 2016-01-01 Steering the electrons during an ultrafast photo-induced process in a molecule influences the chemical behavior of the system, opening the door to the control of photochemical reactions and photobiological processes. Electrons can be efficiently localized using a strong laser field with a well-designed temporal shape of the electric component. Consequently, many experiments have been performed with laser sources in the near-infrared region (800 nm) in the interest of studying and enhancing the electron localization. However, due to its limited accessibility, the mid-infrared (MIR) range has barely been investigated, although it allows efficient control of small molecules and even more complex systems. To push further the manipulation of basic chemical mechanisms, we used a MIR two-color (1800 and 900 nm) laser field to ionize H₂ and D₂ molecules and to steer the remaining electron during the photo-induced dissociation. The study of this prototype reaction led to the simultaneous control of four fragmentation channels. The results are well reproduced by a theoretical model solving the time-dependent Schrödinger equation for the molecular ion, identifying the involved dissociation mechanisms. By varying the relative phase between the two colors, asymmetries (i.e., electron localization selectivity) of up to 65% were obtained, corresponding to enhanced or equivalent levels of control compared to previous experiments. Experimentally easier to implement, the use of a two-color laser field leads to a better electron localization than carrier-envelope phase stabilized pulses, and applying the technique in the MIR range reveals more dissociation channels than at 800 nm. (paper)

7. Effect of ion orbit loss on the structure in the H-mode tokamak edge pedestal profiles of rotation velocity, radial electric field, density, and temperature International Nuclear Information System (INIS) Stacey, Weston M. 2013-01-01 An investigation of the effect of ion orbit loss of thermal ions and the compensating return ion current directly on the radial ion flux flowing in the plasma, and thereby indirectly on the toroidal and poloidal rotation velocity profiles, the radial electric field, density, and temperature profiles, and the interpretation of diffusive and non-diffusive transport coefficients in the plasma edge, is described. Illustrative calculations for a high-confinement H-mode DIII-D [J. Luxon, Nucl. Fusion 42, 614 (2002)] plasma are presented and compared with experimental results. Taking into account ion orbit loss of thermal ions and the compensating return ion current is found to have a significant effect on the structure of the radial profiles of these quantities in the edge plasma, indicating the necessity of taking ion orbit loss effects into account in interpreting or predicting these quantities.
8. A free software for pore-scale modelling: solving Stokes equation for velocity fields and permeability values in 3D pore geometries KAUST Repository Gerke, Kirill; Vasilyev, Roman; Khirevich, Siarhei; Karsanina, Marina; Collins, Daniel; Korost, Dmitry; Mallants, Dirk 2015-01-01 In this contribution we introduce a novel free software which solves the Stokes equation to obtain velocity fields for low Reynolds-number flows within externally generated 3D pore geometries. Provided with velocity fields, one can calculate permeability for known pressure gradient boundary conditions via Darcy's equation. Finite-difference schemes of 2nd and 4th order of accuracy are used together with an artificial compressibility method to iteratively converge to a steady-state solution of Stokes' equation. This numerical approach is much faster and less computationally demanding than the majority of open-source or commercial software packages employing other algorithms (finite elements/volumes, lattice Boltzmann, etc.). The software consists of two parts: 1) a pre- and post-processing graphical interface, and 2) a solver. The latter is efficiently parallelized to use any number of available cores (the speedup on 16 threads was up to 10-12 depending on hardware). Due to parallelization and memory optimization our software can be used to obtain solutions for 300×300×300 voxel geometries on modern desktop PCs. The software was successfully verified by testing it against lattice Boltzmann simulations and analytical solutions. To illustrate the software's applicability for numerous problems in Earth Sciences, a number of case studies have been developed: 1) identifying the representative elementary volume for permeability determination within a sandstone sample, 2) derivation of permeability/hydraulic conductivity values for rock and soil samples and comparing those with experimentally obtained values, 3) revealing the influence of the amount of fine-textured material such as clay on filtration properties of sandy soil. This work was partially supported by RSF grant 14-17-00658 (pore-scale modelling) and RFBR grants 13-04-00409-a and 13-05-01176-a.
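As a pointer to how the permeability step works downstream of such a solver, here is a hedged sketch; the units, array layout and function names are my assumptions and not part of the described software. Once the Stokes velocity field has converged, Darcy's law gives k = mu * <u> * L / dP, with <u> the superficial mean velocity over the whole sample volume.

```python
import numpy as np

# Darcy post-processing: permeability from a converged Stokes velocity field.
def darcy_permeability(ux, voxel_m, mu_pa_s, dp_pa):
    """ux: 3D array of the velocity component along the pressure gradient (m/s)."""
    mean_flux = ux.mean()                  # superficial mean over solid + pore voxels
    length = ux.shape[0] * voxel_m         # sample length along the gradient
    return mu_pa_s * mean_flux * length / dp_pa

# Toy input: 64^3 field, 5-micron voxels, water, 10 Pa drop across the sample.
ux = np.zeros((64, 64, 64))
ux[:, 20:40, 20:40] = 1e-4                 # flow confined to one channel
print(f"k = {darcy_permeability(ux, 5e-6, 1e-3, 10.0):.3e} m^2")
```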
10. A time-space synchronization of coherent Doppler scanning lidars for 3D measurements of wind fields DEFF Research Database (Denmark) Vasiljevic, Nikola … the three-dimensional flow field is measured by emitting the laser beams from the three spatially separated lidars, directing them to intersect, and moving the beam intersection over an area of interest. Each individual lidar was engineered to be powered by two real servo motors, and one virtual stepper motor. The stepper motor initiates the laser pulse emission and acquisition of the backscattered light, while the two servo motors conduct the scanner head rotation that provides means to direct the laser pulses into the atmosphere. By controlling the rotation of the three motors from the motion controller, strict synchronization and time control of the emission, steering and acquisition were achieved, with the result that the complete lidar measurement process is controlled from a single hardware component. The system was formed using a novel approach, in which the master computer simultaneously coordinates the remote lidars…

11. New GPS velocity field in the northern Andes (Peru - Ecuador - Colombia): heterogeneous locking along the subduction, northeastwards motion of the Northern Andes Science.gov (United States) Nocquet, J.; Mothes, P. A.; Villegas Lanza, J.; Chlieh, M.; Jarrin, P.; Vallée, M.; Tavera, H.; Ruiz, G.; Regnier, M.; Rolandone, F. 2010-12-01 Rapid subduction of the Nazca plate beneath the northern Andes margin (~6 cm/yr) results in two different processes: (1) elastic stress is accumulating along the Nazca/South American plate interface, which is responsible for one of the largest megathrust earthquake sequences during the last century. The 500-km-long rupture zone of the 1906 (Mw = 8.8) event was partially reactivated by three events, from the 1942 (Mw = 7.8) and 1958 (Mw = 7.7) to the 1979 (Mw = 8.2) earthquakes. However, south of latitude 1°S, no M>8 earthquake has been reported in the last three centuries, suggesting that this area is slipping aseismically. (2) Permanent deformation causes opening of the Gulf of Guayaquil, with northeastwards motion of the Northern Andean Block (NAB). We present a new GPS velocity field covering the northern Andes from south of the Gulf of Guayaquil to the Caribbean plate. Our velocity field includes new continuously-recording GPS stations installed along the Ecuadorian coast, together with campaign sites observed since 1994 in the CASA project (Kellogg et al., 1989). We first estimate the long-term kinematics of the NAB in a joint inversion including GPS data, earthquake slip vectors, and quaternary slip rates on major faults. The inversion provides an Euler pole located at long. -107.8°E, lat.
36.2°N, 0.091°/Ma and indicates little internal deformation of the NAB (wrms = 1.2 mm/yr). As a consequence, 30% of the obliquity of the Nazca/South America motion is accommodated by transcurrent to transpressive motion along the eastern boundary of the NAB. Residual velocities with respect to the NAB are then modeled in terms of interseismic coupling along the subduction interface. Models indicate a patchwork of highly coupled asperities encompassed by aseismic patches over the area of rupture of the M~8.8 1906 earthquake. Very low coupling is found along the southern Ecuadorian and northern Peru subduction.

12. Crustal deformation characteristics of Sichuan-Yunnan region in China on the constraint of multi-periods of GPS velocity fields Science.gov (United States) Yue, Caiya; Dang, Yamin; Dai, Huayang; Yang, Qiang; Wang, Xiankai 2018-04-01 In order to obtain deformation parameters in each block of the Sichuan-Yunnan Region (SYG) in China by stages, and to establish a dynamic model of the variation of the strain rate fields and the surface expansion in this area, we took the Global Positioning System (GPS) site velocities in the region as constraints and took advantage of a block strain calculation model based on a spherical surface. We also analyzed the deformation of the active blocks in the whole SYG before and after the Wenchuan earthquake, and analyzed the deformation of active blocks near the epicenter of the Wenchuan earthquake in detail. The results show that, (1) Under the long-term effects of the indentation of the India plate and the compression driven by the potential energy of the Tibetan Plateau, there is a certain periodicity in crustal deformation in SYG, and the period changes agree well with earthquake occurrence. (2) The differences in GPS velocity fields relative to the Eurasian reference frame show that the Wenchuan earthquake and the Ya'an earthquake mainly affected the crustal movement in the central and southern part of SYG, and the average velocity difference is about 4-8 mm/a for the Wenchuan earthquake and 2-4 mm/a for the Ya'an earthquake. (3) For the Wenchuan earthquake, the average strain changed from 10-20 nanostrain/a before the earthquake to 40-50 nanostrain/a after the earthquake, while before and after the Ya'an earthquake the strain value increased from about 15 nanostrain/a to about 30 nanostrain/a. (4) The Wenchuan earthquake changed the strain parameters of each active block more or less, especially the Longmen and Chengdu blocks near the epicenter. The research provides fundamental material for studying the dynamic mechanism of the north-eastward push of the India plate and the compression from the Qinghai-Tibet Plateau, and it also supports the study of crustal stress variation and earthquake prediction in the Sichuan-Yunnan region.

13. Extreme sub-wavelength atom localization via coherent population trapping OpenAIRE Agarwal, Girish S.; Kapale, Kishore T. 2005-01-01 We demonstrate an atom localization scheme based on monitoring of the atomic coherences. We consider atomic transitions in a Lambda configuration where the control field is a standing wave field. The probe field and the control field produce coherence between the two ground states. We show that this coherence has the same fringe pattern as produced by a Fabry-Perot interferometer and thus measurement of the atomic coherence would localize the atom. Interestingly enough, the role of the cavity …
14. Using the phase-space imager to analyze partially coherent imaging systems: bright-field, phase contrast, differential interference contrast, differential phase contrast, and spiral phase contrast Science.gov (United States) Mehta, Shalin B.; Sheppard, Colin J. R. 2010-05-01 Various methods that use large illumination aperture (i.e. partially coherent illumination) have been developed for making transparent (i.e. phase) specimens visible. These methods were developed to provide qualitative contrast rather than quantitative measurement; coherent illumination has been relied upon for quantitative phase analysis. Partially coherent illumination has some important advantages over coherent illumination and can be used for measurement of the specimen's phase distribution. However, quantitative analysis and image computation in partially coherent systems have not been explored fully due to the lack of a general, physically insightful and computationally efficient model of image formation. We have developed a phase-space model that satisfies these requirements. In this paper, we employ this model (called the phase-space imager) to elucidate five different partially coherent systems mentioned in the title. We compute images of an optical fiber under these systems and verify some of them with experimental images. These results and simulated images of a general phase profile are used to compare the contrast and the resolution of the imaging systems. We show that, for quantitative phase imaging of a thin specimen with matched illumination, differential phase contrast offers linear transfer of specimen information to the image. We also show that the edge enhancement properties of spiral phase contrast are compromised significantly as the coherence of illumination is reduced. The results demonstrate that the phase-space imager model provides a useful framework for analysis, calibration, and design of partially coherent imaging methods.

15. Dynamical critical scaling of electric field fluctuations in the greater cusp and magnetotail implied by HF radar observations of F-region Doppler velocity Directory of Open Access Journals (Sweden) M. L. Parkinson 2006-03-01 Akasofu's solar wind ε parameter describes the coupling of solar wind energy to the magnetosphere and ionosphere. Analysis of fluctuations in ε using model independent scaling techniques, including the peaks of probability density functions (PDFs) and generalised structure function (GSF) analysis, shows the fluctuations were self-affine (mono-fractal, single exponent scaling) over 9 octaves of time scale from ~46 s to ~9.1 h. However, the peak scaling exponent α0 was a function of the fluctuation bin size, so caution is required when comparing the exponents for different data sets sampled in different ways. The same generic scaling techniques revealed the organisation and functional form of concurrent fluctuations in azimuthal magnetospheric electric fields implied by SuperDARN HF radar measurements of line-of-sight Doppler velocity, vLOS, made in the high-latitude austral ionosphere. The PDFs of vLOS fluctuation were calculated for time scales between 1 min and 256 min, and were sorted into noon sector results obtained with the Halley radar, and midnight sector results obtained with the TIGER radar. The PDFs were further sorted according to the orientation of the interplanetary magnetic field, as well as ionospheric regions of high and low Doppler spectral width. High spectral widths tend to occur at higher latitude, mostly on open field lines but also on closed field lines just equatorward of the open-closed boundary, whereas low spectral widths are concentrated on closed field lines deeper inside the magnetosphere. The vLOS fluctuations were most self-affine (i.e. like the solar wind ε parameter) on the high spectral width field lines in the noon sector ionosphere (i.e. the greater cusp), but suggested multi-fractal behaviour on closed field lines in the midnight sector (i.e. the central plasma sheet). Long tails in the PDFs imply that "microbursts" in ionospheric convection occur far more frequently, especially on open field lines, than can be…
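A minimal version of the generalised-structure-function test used in this abstract can be written in a few lines. The sketch below is mine, with assumed moment orders and lags: for a self-affine signal the moments S_q(tau) = <|x(t+tau) - x(t)|^q> scale as tau**(q*H) with a single exponent H, so plotting the fitted exponents against q separates mono-fractal (linear) from multi-fractal (curved) behaviour.

```python
import numpy as np

# Generalised structure functions: fit zeta_q in S_q(tau) ~ tau**zeta_q.
def gsf_exponents(x, qs=(1, 2, 3, 4), taus=(1, 2, 4, 8, 16, 32, 64)):
    zeta = []
    for q in qs:
        s_q = [np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus]
        zeta.append(np.polyfit(np.log(taus), np.log(s_q), 1)[0])
    return dict(zip(qs, zeta))

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(size=200_000))   # Brownian walk: H = 1/2, zeta_q = q/2
for q, z in gsf_exponents(x).items():
    print(f"q={q}: zeta={z:.2f} (self-affine prediction {q/2:.1f})")
```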
17. In vivo imaging through the entire thickness of human cornea by full-field optical coherence tomography Science.gov (United States) Mazlin, Viacheslav; Xiao, Peng; Dalimier, Eugénie; Grieve, Kate; Irsch, Kristina; Sahel, José; Fink, Mathias; Boccara, Claude 2018-02-01 Despite obvious improvements in visualization of the in vivo cornea through faster imaging speeds and higher axial resolutions, cellular imaging remains an unresolved task for OCT, as en face viewing with a high lateral resolution is required.
The latter is possible with FFOCT, a method that relies on a camera, moderate numerical aperture (NA) objectives and an incoherent light source to provide en face images with a micrometer-level resolution. Recently, we demonstrated for the first time the ability of FFOCT to capture images from the in vivo human cornea [1]. In the current paper we present an extensive study of the appearance of healthy in vivo human corneas under FFOCT examination. En face corneal images with a micrometer-level resolution were obtained from three healthy subjects. For each subject it was possible to acquire images through the entire corneal depth and visualize the epithelium structures, Bowman's layer, sub-basal nerve plexus (SNP) fibers, anterior, middle and posterior stroma, and endothelial cells with nuclei. Dimensions and densities of the structures visible with FFOCT are in agreement with those seen by other cornea imaging methods. Cellular-level details in the images obtained, together with the relatively large field-of-view (FOV) and contactless way of imaging, make this device a promising candidate for becoming a new tool in ophthalmological diagnostics.

18. 1550 nm superluminescent diode and anti-Stokes effect CCD camera based optical coherence tomography for full-field optical metrology Science.gov (United States) Kredzinski, Lukasz; Connelly, Michael J. 2011-06-01 Optical Coherence Tomography (OCT) is a promising non-invasive imaging technology capable of carrying out 3D high-resolution cross-sectional images of the internal microstructure of examined material. However, almost all of these systems are expensive, requiring the use of complex optical setups, expensive light sources and complicated scanning of the sample under test. In addition, most of these systems have not taken advantage of the competitively priced optical components available at wavelengths within the main optical communications band located in the 1550 nm region. A comparatively simple and inexpensive full-field OCT system (FF-OCT), based on a superluminescent diode (SLD) light source and an anti-Stokes imaging device, was constructed to perform 3D cross-sectional imaging. This kind of inexpensive setup with moderate resolution could easily be applied in low-level biomedical and industrial diagnostics. This paper involves calibration of the system and determines its suitability for imaging structures of biological tissues such as teeth, which have low absorption at 1550 nm.

19. Label-free characterization of vitrification-induced morphology changes in single-cell embryos with full-field optical coherence tomography Science.gov (United States) Zarnescu, Livia; Leung, Michael C.; Abeyta, Michael; Sudkamp, Helge; Baer, Thomas; Behr, Barry; Ellerbee, Audrey K. 2015-09-01 Vitrification is an increasingly popular method of embryo cryopreservation that is used in assisted reproductive technology. Although vitrification has high post-thaw survival rates compared to other freezing techniques, its long-term effects on embryo development are still poorly understood. We demonstrate an application of full-field optical coherence tomography (FF-OCT) to visualize the effects of vitrification on live single-cell (2 pronuclear) mouse embryos without harmful labels. Using FF-OCT, we observed that vitrification causes a significant increase in the aggregation of structures within the embryo cytoplasm, consistent with reports in literature based on fluorescence techniques.
We quantify the degree of aggregation with an objective metric, the cytoplasmic aggregation (CA) score, and observe a high degree of correlation between the CA scores of FF-OCT images of embryos and of fluorescence images of their mitochondria. Our results indicate that FF-OCT shows promise as a label-free assessment of the effects of vitrification on embryo mitochondria distribution. The CA score provides a quantitative metric to describe the degree to which embryos have been affected by vitrification and could aid clinicians in selecting embryos for transfer.

20. Compact akinetic swept source optical coherence tomography angiography at 1060 nm supporting a wide field of view and adaptive optics imaging modes of the posterior eye. Science.gov (United States) Salas, Matthias; Augustin, Marco; Felberer, Franz; Wartak, Andreas; Laslandes, Marie; Ginner, Laurin; Niederleithner, Michael; Ensher, Jason; Minneman, Michael P; Leitgeb, Rainer A; Drexler, Wolfgang; Levecq, Xavier; Schmidt-Erfurth, Ursula; Pircher, Michael 2018-04-01 Imaging of the human retina with high resolution is an essential step towards improved diagnosis and treatment control. In this paper, we introduce a compact, clinically user-friendly instrument based on swept source optical coherence tomography (SS-OCT). A key feature of the system is the realization of two different operation modes. The first operation mode is similar to conventional OCT imaging and provides large field of view (FoV) images (up to 45° × 30°) of the human retina and choroid with standard resolution. The second operation mode enables optical zooming into regions of interest with high transverse resolution using adaptive optics (AO). The FoV of this second operation mode (AO-OCT mode) is 3.0° × 2.8° and enables the visualization of individual retinal cells such as cone photoreceptors or choriocapillaris. The OCT engine is based on an akinetic swept source at 1060 nm and provides an A-scan rate of 200 kHz. Structural as well as angiographic information can be retrieved from the retina and choroid in both operational modes. The capabilities of the prototype are demonstrated in healthy and diseased eyes.
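As background to the angiographic modes mentioned in several of the OCT abstracts above, the core motion-contrast computation in one common variant is simply an inter-scan variance. The following is a schematic of the principle only, not any instrument's reconstruction chain; the array shapes and the use of complex-valued B-scans are my assumptions.

```python
import numpy as np

# OCT angiography motion contrast: repeated B-scans at the same position are
# compared, and the inter-scan variance of the complex OCT signal highlights
# perfused vessels while static tissue cancels out.
def angiographic_contrast(bscans: np.ndarray) -> np.ndarray:
    """bscans: complex array (n_repeats, z, x) -> variance map (z, x)."""
    return np.var(bscans, axis=0)

rng = np.random.default_rng(3)
static = np.exp(1j * 0.2) * np.ones((4, 128, 256))            # frozen speckle
flow = np.exp(1j * rng.uniform(0, 2 * np.pi, (4, 128, 256)))  # decorrelated
print(angiographic_contrast(static).mean())  # ~0: no motion
print(angiographic_contrast(flow).mean())    # ~1: flow signal
```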
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8276193141937256, "perplexity": 2768.7951932142196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823303.28/warc/CC-MAIN-20181210034333-20181210055833-00063.warc.gz"}
http://www.altdevblogaday.com/2011/12/12/mathematics-for-game-developers-5-real-numbers-part-1/
The site was down over the weekend, when this post was due. So I’m declaring today to be Saturday. In reality, it’s much too late on Monday night for me to be writing anything. So this will be short, and likely make even less sense than normal.

Last time, we ended with a definition of the rational numbers, and I observed that they should be enough for any use: they are packed densely on the number line, with infinitely many filling the gap between any two. How could there be any more numbers? Where would they go?

I’m sure you are all familiar with Pythagoras’ theorem: in a right triangle, the square of the side opposite the right angle (the hypotenuse) is equal to the sum of the squares of the other two sides. This gives us a way to calculate the length of the diagonal of a unit square: the diagonal is the hypotenuse, and the square of its length is the sum of the squares of two adjacent sides of the square. If we denote the length of the diagonal by $latex d$, we have $latex d^2 = 1^2 + 1^2$, or $latex d^2 = 2$.

So what rational number is $latex d$? Since it’s a length, it is clearly positive. So there must be positive integers $latex a$ and $latex b$ such that $latex d = \frac{a}{b}$. There will be many possible pairs, and it’s useful to choose the smallest. In that case, $latex a$ and $latex b$ will have no common divisor: if they did, we could divide both by that common divisor, and get a smaller pair.

Since $latex a$ and $latex b$ have no common divisor, it follows that at most one of them is even: they can’t both be, or they’d both be divisible by 2. Remember that bit. It’s important.

Now, let’s plug $latex d = \frac{a}{b}$ into the equation from Pythagoras’ theorem: $latex \left(\frac{a}{b}\right)^2 = 2$. After a little routine rearrangement, that gives us $latex a^2 = 2b^2$. And that means $latex a^2$ is even.

Since the square of an odd number is always an odd number, it follows that $latex a$ can’t be odd. It must be even, which means there is some integer $latex c$ such that $latex a = 2c$. We can plug that back into the equation, giving $latex (2c)^2 = 4c^2 = 2b^2$. Divide it all by 2, and we have $latex 2c^2 = b^2$. So, by the same logic, $latex b$ must be even too.

But wait. Didn’t we just say that $latex a$ and $latex b$ have no common divisor? They can’t both be even. We’ve managed to prove a contradiction, and that means our original assumption cannot be true. That assumption was that $latex d$ is a rational number.

If you’re an ancient Greek mathematician, this comes as a bit of a shock. Mathematics is geometry. Numbers are lengths and areas and volumes. But that means there are numbers that are not rational.

So how do we define these numbers? This is a difficult question. We could define them to be the solutions of equations like $latex d^2 = r$, where $latex r$ is a rational number, but this clearly doesn’t cover all possible lengths. We could define them to be the solutions of general polynomial equations (like $latex 5x^3 + 3x^2 - 18x + 7 = 0$), but it’s very hard to see how we’d define operations like addition on them (and it also turns out that numbers like $latex \pi$ wouldn’t be included).

The solution isn’t obvious. As yet, I’m not sure how to go about introducing it, and it’s too late to think very clearly, so you’ll have to be patient.

My next post is scheduled for the 25th, and for a Christmas treat I’d like to put these numbers aside and give you one of my favourite proofs.
The result is familiar – that there are infinitely many primes – but the proof is not the one you might know.
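The contradiction above is also easy to check empirically. Here is a minimal Python sketch (my own illustration, not part of the original post) that searches for positive integers $latex a$ and $latex b$ with $latex a^2 = 2b^2$ and, exactly as the proof predicts, finds none:

```python
# Brute-force search for positive integers a, b with a^2 = 2*b^2.
# The irrationality proof above says this loop can never succeed.
def search_sqrt2_fraction(max_denominator):
    for b in range(1, max_denominator + 1):
        target = 2 * b * b
        a = int(target ** 0.5)           # candidate numerator from float sqrt
        for cand in (a, a + 1):          # guard against float rounding down
            if cand * cand == target:
                return cand, b
    return None

print(search_sqrt2_fraction(1_000_000))  # prints None
```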
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9282159209251404, "perplexity": 223.1297392646996}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011267211/warc/CC-MAIN-20140305092107-00096-ip-10-183-142-35.ec2.internal.warc.gz"}
http://www.maa.org/press/maa-reviews/from-cardanos-great-art-to-lagranges-reflections-filling-a-gap-in-the-history-of-algebra
# From Cardano's Great Art to Lagrange's Reflections: Filling a Gap in the History of Algebra

###### Jacqueline Stedall

Publisher: European Mathematical Society
Publication Date: 2011
Number of Pages: 224
Format: Hardcover
Series: Heritage of European Mathematics
Price: 88.00
ISBN: 9783037190920
Category: Monograph

BLL Rating: The Basic Library List Committee recommends this book for acquisition by undergraduate mathematics libraries.

[Reviewed by Fernando Q. Gouvêa, on 11/5/2015]

Many accounts of the history of algebra focus most of their attention on the problem of solving polynomial equations in one variable. Starting with the quadratic problems of Ancient Mesopotamia, they go on to talk about Al-Khwarizmi's book on al-jabr w'al-muqabala and perhaps talk about other important texts in Arabic. Then it's Renaissance Italy and the shenanigans of Tartaglia and Cardano. At that point, methods (not "formulas"!) had been found for solving equations of degrees two through four. Maybe there follows a nod to Viète and Descartes for their creation of a proper algebraic notation. And then it's on to the unsolvability of the quintic equation, with Ruffini, Abel, and Galois taking center stage and changing the very meaning of "algebra."

The gap in Stedall's title is precisely the period between the solution of the cubic and quartic equations (the time of Cardano, in the middle of the 16th century) and the first work to make a serious contribution to understanding why the quintic was resisting solution (Lagrange's Reflections, late in the 18th century). Stedall shows us what was going on in between, paying attention in particular to the themes that led to Lagrange's ideas.

The book has two parts which are organized differently. The first, "from Cardano to Newton," is organized chronologically. This is the period when algebraic notation acquired its modern form; most importantly, for the first time one could write equations involving both unknowns and parameters. It is during this time that the subject takes on features we all take for granted. The general quadratic becomes $ax^2+bx+c=0$ instead of being expressed in words such as "squares equal to numbers and things." It is now possible to write $x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}.$

It is also in this period that the relationship between the roots and the coefficients of an equation becomes clear. Harriot, Descartes, and Newton all knew that if the roots of $x^n-s_1x^{n-1}+s_2x^{n-2}-\dots \pm s_n=0$ are $r_1,r_2,\dots,r_n$, then $s_1=r_1+r_2+\dots+r_n$, $s_2=r_1r_2+r_1r_3+\dots+r_{n-1}r_n$, etc. Newton added that some other symmetric functions of the roots could be expressed in terms of these. The existence of $n$ roots of some kind was taken for granted.

Descartes and Newton both considered the nature of the roots: how many are positive and how many negative? Can one tell a priori, from the coefficients, how many roots were imaginary? Descartes' "rule of signs" gave a precise answer to the first question as long as all the roots were known to be real. But Descartes provided no proof of his rule. Newton's rule for counting imaginary roots was only approximate. As a result, both problems were left for future mathematicians to consider.

Part II deals with the period from Newton to Lagrange, which is basically the 18th century. Here the organization is topical, which allows Stedall to usefully highlight what she feels were the central themes of 18th century algebra.
First we read about work on the nature of the roots, then on their form (were the roots always sums of radicals? Of what form?). Functions of the roots (e.g., the symmetric functions above) begin to be considered, and elimination theory is developed by Euler, Cramer, and Bézout. The last two connect directly to Lagrange's Reflections, which is examined in some detail. There is a short chapter on numerical solutions, dealing both with approximate solutions and with the problem of finding upper and lower bounds for solutions. Newton's method is discussed here (and Raphson's improvement); one of the things we learn is that the original formulation of the method did not involve the derivative and (in retrospect, obviously) did not come with the usual picture of "descending along the tangent."

Inevitably, there are a few typos and (fewer) mistakes. On page 109, the condition for choosing the cube roots that appear in Cardano's solution of the cubic is stated incorrectly, and it is unclear whether the mistake is Stedall's or Euler's. But in general the mathematics is clearly explained. In keeping with our monoglot times, Stedall has provided translations of all texts, with the originals usually given as footnotes. The writing is very clear, and occasionally beautiful. I particularly liked this description of Waring's Meditationes Algebraicae:

> The book has all the qualities of a fertile but wandering mind: ideas arise, intermingle, and coalesce apparently at random, appearing brilliant for a time but then subsiding into obscurity or lengthy algebraic calculations. The usual structures of good mathematical writing are entirely missing: there is no sense here of building from basic principles or easy examples to more general theorems. Instead, problems, lemmas, corollaries and examples tumble over one another without apparent order or reason so that the reader is left without any sense of either starting point or direction.

I think I'm going to use that quote when I next give my students a writing assignment!

Stedall certainly fills a gap in the history of the theory of polynomial equations in one variable, but I question whether she has succeeded completely when it comes to filling the gap in the history of algebra, because she gives too little attention to work in several variables. This is, after all, the period when Euler first realized that systems of $n$ equations in $n$ unknowns do not always have unique solutions, when Cramer discovered his "rule" for solving systems of linear equations (and hence the determinant) and Bézout realized the connection between determinants and elimination theory. Stedall's focus on equations in one variable means that the chapter on elimination theory is uncharacteristically weak, missing such things as the discovery of determinants (which were to become central in the following century) and Bézout's theorems on linear combinations of polynomials.

Despite not quite fulfilling the promise of its title, this is a very good book, one that provides us with real understanding of the theory of equations in this period.

Fernando Q. Gouvêa is Carter Professor of Mathematics at Colby College and editor of MAA Reviews. He is very interested in the history of algebra and number theory in the 19th century.
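The root–coefficient relations quoted in the review are easy to verify computationally. A minimal Python check (my own illustration, not from the review), using SymPy for a cubic with roots 1, 2, 3:

```python
from sympy import symbols, expand

x = symbols('x')
roots = [1, 2, 3]

# Expanding (x - 1)(x - 2)(x - 3) gives x**3 - 6*x**2 + 11*x - 6, whose
# alternating coefficients are the elementary symmetric functions s1, s2, s3.
poly = expand((x - roots[0]) * (x - roots[1]) * (x - roots[2]))

s1 = roots[0] + roots[1] + roots[2]                              # 6
s2 = roots[0]*roots[1] + roots[0]*roots[2] + roots[1]*roots[2]   # 11
s3 = roots[0] * roots[1] * roots[2]                              # 6

print(poly)        # x**3 - 6*x**2 + 11*x - 6
print(s1, s2, s3)  # 6 11 6
```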
I From Cardano to Newton: 1545 to 1707
1 From Cardano to Viète
2 From Viète to Descartes
3 From Descartes to Newton

II From Newton to Lagrange: 1707 to 1771
4 Discerning the nature of the roots
5 Roots as sums of radicals
6 Functions of the roots
7 Elimination theory
8 The degree of a resolvent
9 Numerical Solutions
10 The insights of Lagrange
11 The outsiders

III After Lagrange
12 Dissemination and new directions

Bibliography
Index
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8480755686759949, "perplexity": 1007.1778384462135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120694.49/warc/CC-MAIN-20170423031200-00289-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www2.cms.math.ca/Events/winter15/abs/dst
2015 CMS Winter Meeting
McGill University, December 4 - 7, 2015

Descriptive Set Theory
Org: Marcin Sabok (McGill) [PDF]

DANA BARTOSOVA, University of Sao Paulo
Problems about Boolean algebras in topological dynamics  [PDF]
We will recall that for a group of automorphisms of a discrete structure the universal minimal dynamical system and the greatest ambit have zero-dimensional phase spaces. We will raise questions about the Boolean algebras of clopen sets of these spaces and explain their connections to Ramsey theory.

CLINTON CONLEY, Carnegie Mellon University
Projective separability and incomparable actions of free groups  [PDF]
In 2005 Gaboriau and Popa exhibited a family of continuum-many pairwise orbit inequivalent actions of the free group on two generators. Subsequently, building upon work of Ioana, Hjorth strengthened this to obtain continuum-many actions whose equivalence relations are pairwise incomparable under Borel reducibility. We discuss how stratification techniques allow us to find such collections of actions below projectively separable equivalence relations. This is joint work with Ben Miller.

JAKUB JASINSKI
Canonical partitions of countable k-regular random hypergraphs  [PDF]
The computation of the Ramsey degrees of elements of a Fraïssé class has a "global" analogue. The latter involves constructing the so-called *canonical partitions* of the corresponding homogeneous structure. The relatively short list of structures whose canonical partitions have been found thus far includes countable homogeneous binary relational structures. We establish canonical partitions of countable k-regular random hypergraphs. This is joint work with Norbert Sauer and Claude Laflamme.

BURAK KAYA, Rutgers University
The complexity of topological conjugacy of pointed Cantor minimal systems  [PDF]
In this talk, we will analyze the Borel complexity of the topological conjugacy relation on pointed Cantor minimal systems and show that it is Borel bireducible with the Borel equivalence relation $\Delta_{\mathbb{R}}^+$, where $^+$ denotes the Friedman-Stanley jump. Moreover, $\Delta_{\mathbb{R}}^+$ turns out to be a lower bound for the Borel complexity of topological conjugacy of Cantor minimal systems. If time permits, we shall discuss some applications of our results to properly ordered Bratteli diagrams.

MARTINO LUPINI, California Institute of Technology
Polish groupoids and the classification of operator algebraic varieties  [PDF]
I will give an introduction to the classification problem for operator algebraic varieties and their multiplier algebras. I will then present the main ideas of the proof that multiplier algebras of operator algebraic varieties are not classifiable up to isomorphism by countable structures. The proof uses the theory of turbulence for Polish groupoids, which generalizes Hjorth's theory of turbulence for Polish group actions. This is joint work with Michael Hartz.

ANDREW MARKS, UCLA
Jump operations for Borel graphs  [PDF]
We introduce a jump operation on bipartite Borel graphs, defined by analogy with Louveau's jump operation on Borel equivalence relations. We show that if G is a bipartite Borel graph, then the jump of G is a bipartite Borel graph which has no Borel homomorphism to G (though G has a Borel homomorphism to its jump). We also consider a jump analogous to the Friedman-Stanley jump, where there are interesting open questions. We use this jump operation to answer a question of Kechris and the speaker. This is joint work with Adam Day.
BRICE MBOMBO, University of Sao Paulo, Brazil
On topological groups with an approximate fixed point property  [PDF]
A topological group $G$ has the Approximate Fixed Point (AFP) property on a bounded convex subset $C$ of a locally convex space if every continuous affine action of $G$ on $C$ admits a net $(x_i)$, $x_i\in C$, such that $x_{i}-gx_{i}\longrightarrow 0$ for all $g\in G$. In this work, we study the relationship between this property and amenability. This is joint work with Cleon Barroso and Vladimir Pestov.

JUSTIN MOORE, Cornell University
A finitely presented group of piecewise projective homeomorphisms  [PDF]
In this talk I will present an example of a finitely presented subgroup of the homeomorphism group of the unit interval which consists of piecewise projective homeomorphisms. This group is nonamenable and, by work of N. Monod, does not contain a nonabelian free subgroup. Its presentation contains three generators and nine relations. While Ol'shanskii and Sapir have previously constructed a finitely presented counterexample to the von Neumann-Day problem, ours is an example with a small, explicit presentation. This is joint work with Yash Lodha.

DIANA OJEDA-ARISTIZABAL, University of Toronto
Topological partition relations for countable ordinals  [PDF]
For $X,Y$ topological spaces and natural numbers $l,m$ we write $X\longrightarrow(Y)^2_{l,m}$ if for every $l$-coloring $c$ of $[X]^2$, the unordered pairs of elements of $X$, there exists $Z\subseteq X$ homeomorphic to $Y$ such that $c\upharpoonright[Z]^2$ takes at most $m$ colors. Recently C. Pina proved that $\omega^{\omega^{\omega}}$ is the least ordinal $\gamma$ that satisfies $\gamma\longrightarrow(\omega^2+1)^2_{l,4}$ for all $l\in\mathbb{N}$, where all ordinals are endowed with the order topology. The key to Pina's result is the use of certain families of finite sets to represent countable ordinals.

Using families of finite sets to represent countable ordinals, we begin our study with ordinals of the form $\omega\cdot k+1$ with $k\geq 2$. We find that for every countable ordinal $\gamma$ there exists a 3-coloring of $[\gamma]^2$ that can't be reduced in a copy of $\omega\cdot 2+1$. We set out to find for each $m\geq 3$ the least ordinal $\gamma$ such that for every $l$ we have that $\gamma\longrightarrow(\omega\cdot 2+1)^2_{l,m}$. It turns out that if $\gamma\longrightarrow(\omega\cdot 2+1)^2_{l,3}$ for every $l$, then already $\gamma\geq \omega^{\omega^{\omega}}$. We carry out a similar analysis for ordinals of the form $\omega\cdot k+1$ with $k>2$. This is joint work with William Weiss from the University of Toronto.

ARISTOTELIS PANAGIOTOPOULOS, University of Illinois at Urbana Champaign
Menger compacta and projective Fraïssé limits  [PDF]
In every dimension $n$, there exists a canonical compact, metrizable space called the $n$-dimensional Menger space. For $n=0$ it is the Cantor space and for $n=\infty$ it is the Hilbert cube. In the first part of the talk I will illustrate how basic notions of classical descriptive set theory naturally generalize into higher homotopical dimensions. In the second part of the talk I will use projective Fraïssé machinery to provide a very canonical construction of the Menger-1 space and show that this object is highly homogeneous. This is joint work with Slawomir Solecki.
KONSTANTIN SLUTSKY, University of Illinois at Chicago
Lebesgue orbit equivalence of Borel flows  [PDF]
A Borel flow is a Borel measurable action of the Euclidean space $\mathbb{R}^d$ on a standard Borel space. Free Borel flows are said to be Lebesgue orbit equivalent if there is a Borel bijection between the phase spaces which sends orbits onto orbits and preserves the Lebesgue measure within each orbit. We show that free non-smooth Borel flows are classified up to Lebesgue orbit equivalence by the number of ergodic invariant probability measures. This classification is in accordance with the classification of hyperfinite Borel equivalence relations by Dougherty, Jackson, and Kechris.

IIAN SMYTHE, Cornell University
Turbulence and Essential Equivalence of Subspaces  [PDF]
Using Hjorth's theory of turbulence, it can be shown that various equivalence relations induced by operator ideals on the space of bounded operators on a Hilbert space are not classifiable by countable structures. In particular, we examine essential equivalence of closed subspaces of a Hilbert space, realized as equivalence of the corresponding projection operators, modulo the compact operators. Even in this restricted setting, we recover non-classifiability. Similar results for non-reduction to orbit equivalence relations from Polish group actions will be discussed for equivalence modulo finite rank or finite dimension.

JURIS STEPRANS, York University
Complexity of weakly almost periodic functions  [PDF]
Given a group $G$ one can consider the Banach algebra formed by the second dual $\ell_1^{**}(G)$ of the group algebra $\ell_1(G)$. In determining when the two Arens products on this algebra agree, it is natural to consider the complexity of the weakly almost periodic functions as a subset of $\ell_\infty(G)$. This talk will discuss some calculations for specific groups.

SIMON THOMAS, Rutgers University
The isomorphism and bi-embeddability relations for finitely generated groups  [PDF]
I will discuss the isomorphism and bi-embeddability relations on the spaces of Kazhdan groups and finitely generated simple groups.

PHILLIP WESOLEK, Université catholique de Louvain
Chief factors in Polish groups  [PDF]
(Joint work with Colin Reid.) For a Polish group $G$, closed normal subgroups $L<K$ of $G$ form a chief factor $K/L$ if there is no closed normal subgroup of $G$ strictly between $L$ and $K$. Chief factors play an important role in the structure theory of finite groups. Surprisingly, the theory of chief factors admits a natural and useful extension to the setting of Polish groups. We discuss this theory and its two key ingredients, the association relation and normal compressions. We then outline a Schreier refinement theorem and a trichotomy theorem for topologically characteristically simple Polish groups. Time permitting, we discuss applications to locally compact Polish groups and finitely generated branch groups.

JINDRICH ZAPLETAL, University of Florida
Borel reducibility of ideal equivalence relations  [PDF]
I introduce several combinatorial properties of ideals $I$ on natural numbers that make it possible to prove negative results regarding the Borel reducibility of the equivalence relation modulo $I$ to other analytic equivalence relations.

JOSEPH ZIELINSKI, University of Illinois at Chicago
Compact metrizable structures and classification problems  [PDF]
We consider metrizable compact spaces equipped with closed relations. The natural notion of equivalence between such structures is homeomorphic isomorphism.
Through this notion, we establish bounds for classification problems in Borel reducibility. Portions of this talk are based on joint work with C. Rosendal.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9258596897125244, "perplexity": 507.7801482260488}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103645173.39/warc/CC-MAIN-20220629211420-20220630001420-00069.warc.gz"}
http://math.stackexchange.com/questions/60355/no-non-trivial-homomorphism-to-a-group
# No non-trivial homomorphism to a group

Let $G$ be a compact Hausdorff topological group, and let $H$ be a torsion-free group satisfying the ascending chain condition, i.e. there are no infinite strictly ascending chains $H_1<H_2<...$ of subgroups of $H.$ Prove that there is no non-trivial homomorphism of $G$ into $H.$

Note: no topology is considered on $H$ and "homomorphism" simply means "group homomorphism."

- A somewhat related question was asked and answered on MO, recently. I haven't looked at the paper mentioned in the answer, but maybe it contains something useful for this question. –  t.b. Nov 15 '11 at 22:53
- @t.b.: The given thread solves the problem. Thank you. –  Ehsan M. Kermani Dec 25 '11 at 4:46
- Ehsanmo, could you then answer your own question and accept the answer so this question doesn't show up in unanswered questions? Thank you. –  user23211 Mar 5 '12 at 9:16
- @ymar: Yes, sure. –  Ehsan M. Kermani Mar 5 '12 at 9:26

It is easy to see that such an $H$ is finitely generated, and the rest follows from the Nikolov–Segal theorem.
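The answer compresses the argument into one line; here is a sketch of how it unwinds under the stated hypotheses (this expansion is mine, not part of the original thread):

```latex
% Sketch of the argument (my expansion, not part of the original thread).
\begin{enumerate}
  \item Let $\varphi : G \to H$ be a homomorphism and $K = \varphi(G)$.
  \item The ascending chain condition forces every subgroup of $H$, in
        particular $K$, to be finitely generated: otherwise one could pick
        generators one at a time and build an infinite strictly ascending
        chain of subgroups.
  \item By the Nikolov--Segal theorem, a finitely generated abstract
        quotient of a compact Hausdorff group is finite, so $K$ is finite.
  \item A torsion-free group has no non-trivial finite subgroups, hence
        $K$ is trivial and $\varphi$ is the trivial homomorphism.
\end{enumerate}
```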
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9393435120582581, "perplexity": 413.66739192358466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645298065.68/warc/CC-MAIN-20150827031458-00219-ip-10-171-96-226.ec2.internal.warc.gz"}
http://en.wikipedia.org/wiki/Talk:Moment_magnitude_scale
# Talk:Moment magnitude scale

WikiProject Earthquakes (Rated C-class, Top-importance)
This article is part of WikiProject Earthquakes, a project to systematically present information on earthquakes, seismology, plate tectonics, and related subjects. If you would like to participate, you can choose to edit the article attached to this page (see Wikipedia:Contributing FAQ for more information), or join by visiting the project page.
C: This article has been rated as C-Class on the project's quality scale.
Top: This article has been rated as Top-importance on the project's importance scale.

## Notation

Why is moment magnitude indicated with the letters MW? What does the W stand for? -- Rod Thompson, Hilo, Hawaii

Good question... I don't immediately find the answer by googling. The notation was probably introduced in Kanamori's 1977 paper in the Journal of Geophysical Research, but I don't have that handy. The origin of notation in the sciences is often rather obscure. Gwimpey 06:14, 24 May 2004

In the equation for seismic moment there is a capital omega nought, which is equal to the spectral amplitude at low frequencies or, equivalently, on a broadband displacement seismogram, the product of pulse width and amplitude. The lower-case symbol for omega closely resembles a w. This was taken from an informal poll of workers who came in on a holiday at a large seismic network. Ref is "An introduction to the theory of seismology" by Bullen and Bolt, pages 426 and 427. (not my favorite seis book though) Anonymous

Interesting. So should we actually write MΩ or Mω? I know that some Microsoft programs convert "ω" into "w" and equivalently "Ω" into "W" when converting from Unicode to a non-Greek 8-bit legacy character set (which makes for interesting reading when having a "4.7 MW resistor" specified in a small circuit :-). In signal processing and physics (see ISO 31-2), ω = 2πf stands for the angular frequency, a measure of frequency that saves you having to write 2π in many formulas related to the Fourier transform. Is that what is used here, i.e. is the quantity meant to be frequency dependent? Markus Kuhn 11:22, 13 Jun 2005 (UTC)

The underlying question is whether we should copy this rather confusing mix of indices on the magnitude quantity at all, or look instead for an existing, more streamlined and systematic presentation and notation in the literature (a good textbook, some formal standard?) and take the notation from there, rather than from the original articles. Looking for some good books worldwide is probably more helpful than googling here. Any suggestions? Markus Kuhn 11:22, 13 Jun 2005 (UTC)

W stands for the "difference in elastic strain energy W before and after an earthquake" in: Kanamori H. (1977). The energy release in great earthquakes. J. Geophys. Res., 82, 2981-2987. This article is the reference paper for moment magnitude, where it is defined for the first time. The original formulation links the energy called W in the paper to a new magnitude called by the author Mw. The relation linking the seismic moment to this magnitude, even if it is nowadays a common formula in seismology, is a secondary product in this article. The author gives only the relation between energy W and seismic moment M0. Thus even if deducing the formula between the seismic moment and the new magnitude is straightforward, the relation is not explicitly written. It is explicitly written (equation 4) in this brief paper: Hanks T. and H. Kanamori (1979). A moment magnitude scale. J. Geophys. Res., 84, 2348-2350.
andre 09:57, 14 September 2006 (UTC)

To increase the likelihood of confusion with megawatts! Mu2 01:35, 27 May 2011 (UTC)

## More prominent: SI or CGS?

Shouldn't SI be more prominent? Do we even need the formula in CGS units? This unnecessarily complicates things. Shameer 01:14, 28 Dec 2004 (UTC)

Presumably, if developed in 1977, the scale may have been developed for cgs units. I'm not certain, but yes, I'd agree that the SI equation should be listed before the cgs equation. --ABQCat 02:28, 28 Dec 2004 (UTC)

-- Why, physicists use so many unit system variants and choose whatever system makes the math the easiest for them. My favorites are God's units or natural units.

In the sixties to eighties a variety of things were settling out. CGS was far more common in many sciences. Prior to everyday available computers (or hand calculators), many problem types were best solved using scale factors appropriate to the range of expected data... one only dealt with the first few significant digits on a slide rule, and/or log tables. Been there, did that! // FrankB 13:54, 12 November 2008 (UTC)

## SI: approximation -> exact???

Here is the approximation from the article: $M_W = {2 \over 3}\log_{10} M_{0,\mathrm{SI}} - 6.\,$ This can be changed to the following, which is exact, assuming the "cgs equation" is exact. $M_W = {2 \over 3}\log_{10} M_{0,\mathrm{SI}} - \left(6.7 - {2 \over 3}\right).\,$

## Dimensional analysis

Here, we are taking the logarithm of a quantity with units, which is forbidden by the dimensional analysis article. Brianjd 06:24, 2004 Dec 29 (UTC)

So you can't do a full dimensional analysis of all this. Sometimes you just have to accept that this is HOW they do it without understanding WHY they do it this way. I do recall a few times in my engineering classes when we needed to take logarithms of quantities with units - the professors told us not to worry and to just bring the units outside the logarithm. --ABQCat 07:47, 29 Dec 2004 (UTC)

I haven't read it fully because my studies are not advanced enough to understand it, but it doesn't seem to mention any exceptions. So it seems that it needs to be changed. Brianjd 07:42, 2005 Jan 11 (UTC)

After taking the logarithm, you have a quantity with units of the type mass×length²/time², from which you have to subtract a dimensionless number. How do you explain that? One could say that the number has the same units, in which case the final result also has those units, so how is it a "scale"? Maybe it's a really kludgy way to do this, but maybe the equation WORKS if the value of the moment is in the correct units, but for use in the calculation the units are stripped off? That would eliminate logs of units and subtracting dimensionless numbers. Again, I don't know, but it's possible. --ABQCat 23:29, 11 Jan 2005 (UTC)

The old formulas (prior to 12 June 2005) were an outstanding example of how not to use physical quantities in modern scientific writing. I've decided to be bold and have replaced them with an equivalent version that follows the ISO 31-0 convention of using division by a unit when it is necessary to convert a physical quantity into a dimension-free numerical value. This way, each formula becomes independent of the units used to write down the quantities. No more need to have "SI" indices and similar nonsense. I nevertheless left several forms of the expression standing, using both SI and CGS units in the division, for the benefit of people who are not experienced in converting between CGS and SI.
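The unit-division convention described above is easy to demonstrate in code. A minimal Python sketch (my illustration, not part of the talk page), using the SI-offset form $M_\mathrm{w} = {2 \over 3}\left(\log_{10}(M_0/\mathrm{N\,m}) - 9.1\right)$ that appears later in this discussion:

```python
import math

def moment_magnitude(m0, unit_in_newton_metres=1.0):
    """Moment magnitude from a scalar seismic moment.

    The moment is passed as a plain number plus the unit it is expressed
    in (1.0 for newton metres, 1e-7 for dyne centimetres), mirroring the
    ISO 31-0 "divide the quantity by a unit" convention discussed above.
    """
    m0_si = m0 * unit_in_newton_metres           # convert to N*m
    return (2.0 / 3.0) * (math.log10(m0_si) - 9.1)

# The same physical moment written in two unit systems gives one magnitude:
print(moment_magnitude(4.0e22))          # moment in N*m   -> ~9.0
print(moment_magnitude(4.0e29, 1e-7))    # moment in dyn*cm -> ~9.0
```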
It would be nice if someone finally taught the authors of USGS web sites how to use units properly ... Markus Kuhn 12:45, 12 Jun 2005 (UTC)

Exactly, my first reaction was "what? a logarithm of a dimensioned number?" It's silly too, because there it's unnecessary: $M_{w} = {2 \over 3}\log_{10}\left(\frac{M_0}{M_{ref}}\right)$ with $M_{ref} = 10^{10.7}\,\mathrm{dyne\,cm} = 10^{3.7}\,\mathrm{N\,m} = 5011.87\,\mathrm{N\,m}$ Wouter halswijk (talk) 12:50, 21 January 2010 (UTC)

Unfortunately, geologists (at least those that write these formulae) don't generally care about dimensions, so it isn't technically correct, but will give the correct values, and is the only version available in published sources. OrangeDog (τ • ε) 13:17, 21 January 2010 (UTC)

## Energy example

There was an example which said:

For example, M0 = 6,7,8,9 means energies of approximately 1 petajoule (1 PJ = 10^15 joules), 32 petajoules, 1 exajoule (1 EJ = 10^18 joules), 32 exajoules.

I removed it because M0 should increase linearly with energy and Mw was what was probably intended. If I replaced M0 with Mw, the energies didn't match these sources: [1] [2]. Someone else who has more knowledge of this can add it back. Shameer 00:16, 30 Dec 2004 (UTC)

-- At work sometimes I see magnitudes expressed in equivalent kilotons of TNT. Maybe we should try it here too, it was a nice representation.

Yes, I meant Mw. I changed it, now also taking into account the two conversion factors, as I understand them from the links. --Patrick 11:55, Dec 31, 2004 (UTC)

Seeking support for "The energy is 1/2000 times the moment," I read [3] but remained confused. The "magnitude 6 = 1 megaton TNT" rule in [4] suggests a formula more like $M_W = {2 \over 3}\log_{10} E_{\mathrm{SI}} - 4.4\,$ --Mgarraha 22:22, Dec 31, 2004 (UTC)

I hope the rephrased section on kt TNT comparisons makes it clear that neither earthquakes nor underground nuclear weapons tests release more than a tiny fraction of their total converted energy in the form of seismic waves, and that therefore any comparison between the two is only meaningful if you agree on a seismic efficiency coefficient for the nuclear weapon. That of course depends a lot on the design of the test and the test site. Since all conversion formulas are equally useless and misleading, I find sympathy for the established secret convention of telling any naive journalist who insists on a TNT comparison a conventional figure that is based on magnitude 0 being equivalent to 1 kg TNT. That saves the seismologists unnecessary mental arithmetic on the phone when a journalist really does not want to go away without a TNT figure, and everyone is happy. Markus Kuhn 13:19, 12 Jun 2005 (UTC)

## Formulas

see history

I notice that it was changed from the "preferred" formula to another formula. Why? I quote the following from [5]: "Another source of confusion is the form of the formula for converting from scalar moment M0 to moment magnitude, M. The preferred practice is to use M = (log Mo)/1.5-10.7, where Mo is in dyne-cm (dyne-cm = 10^-7 N-m), the definition given by Hanks and Kanamori in 1979. An alternate form in Hanks and Kanamori's paper, M = (log M0-16.1)/1.5, is sometimes used, with resulting confusion. These formulae look as if they should yield the same result, but the latter is equivalent to M = (log Mo)/1.5-10.7333. The resulting round-off error occasionally leads to differences of 0.1 in the estimates of moment magnitude released by different groups.
All USGS statements of moment magnitude should use M = (log Mo)/1.5-10.7 for converting from scalar moment Mo to moment magnitude." Anonymous

I suggest we keep for the moment the version with parentheses, where the factor 2/3 is not already multiplied into the offset. This form ensures that the SI and the CGS versions remain consistent without rounding errors when written using a fraction-free decimal offset. Otherwise, the final constant would have to change by the ugly offset of 7×(2/3). In that respect, the above quoted USGS policy is a somewhat unfortunate choice. Let's hope that it is already outdated and that they now finally also use SI, like everyone else. Markus Kuhn 13:00, 12 Jun 2005 (UTC)

So what's the evidence against the USGS formula? Is the alternative form a standard anywhere else? To be clear, I'm not concerned about the units here, I'm concerned about the numeric result. ~brad 9 Feb 2006

## Energy formula inaccurate above 8.0?

I read an article by a local Geology professor stating that the energy released by an 8.5 is over 10 times greater than the energy released by an 8.0. According to the energy formulas in the article, the energy should only be 5.62 times greater. I emailed the professor, and she said the simple formula fails for "very large" earthquakes due to:

• a non-linear increase in frictional heat release
• free oscillations generated
• impacts on rotation
• permanent deformation

This is all over my head. Is it worth mentioning in the article that the energy formula fails for large earthquakes?

Maybe. I'm not sure that I understand exactly what she meant. I think it depends which energy she is talking about. The total energy release formula is $\mathcal{E}_r = - (\mathcal{E}_e + \mathcal{E}_g + \mathcal{E}_k)$ where $\mathcal{E}_e$, $\mathcal{E}_g$, and $\mathcal{E}_k$ represent elastic, gravitational, and kinetic energy. In addition, there is the concept of seismic energy, which is the total released energy minus the energy dissipated by frictional heating of the fault. I am pretty sure that the total released energy is still estimated correctly by the moment. However, the partitioning of the energy may change in the ways the professor noted. When talking about energy here, it's important to know exactly what energy is being referred to. Note: The text I consulted is Theoretical global seismology by Dahlen and Tromp (1998). Gwimpey 18:49, Jan 19, 2005 (UTC)

## Tom Hanks

Tom Hanks is an actor (real name Thomas Jeffrey Hanks). This article points to an article about him, but I think this Tom Hanks is a different person. Thomas C. Hanks is a seismologist at the US Geological Survey in Menlo Park. —Preceding unsigned comment added by 71.146.27.92 (talk) 06:40, 14 January 2009 (UTC)

## Wretched formulas (units problem, etc.)

The "Energy" section really sucks: it is both redundant and confusing. For starters, Log(x) only makes sense if x is dimensionless. These formulas should be recast in the form Log(x/x0), where x and x0 have the same dimensions. Give x0 in several different units, and get rid of all but one of the nearly identical equations. Also, give the inverted version of the equation to give energy in terms of moment magnitude -- the more useful direction for the calculation. I'm just passing through, but it would be nice if someone would redo this section from scratch.

I've fixed the unit mess and cleaned up the wretched energy section.
I wanted to preserve the underlying information in the latter, so I split it up into separate sections on the energy magnitude and on comparing earthquake magnitudes with TNT-expressed underground nuclear detonations. If people want to keep things short, I'd be happy to agree that both sections could be moved to other articles, with appropriate cross references. If people stopped the silly and utterly meaningless practice of comparing earthquakes with nuclear detonations, that would be even better! Markus Kuhn 13:08, 12 Jun 2005 (UTC)

## Just wondering

What is the difference from a 4.0 on the Richter scale to a magnitude of 5.0?

The article explains this very clearly already. Markus Kuhn 23:51, 9 December 2005 (UTC)

Could a table be drawn up to show examples at each part of the scale like the Richter magnitude scale article does? The Basel earthquake says it had a mwm of 6.2, and I have no idea what that means in terms of power. The article needs some more for the layman, as right now it's all technical jargon to me. --SeizureDog 14:04, 5 January 2007 (UTC)

I concur. And I don't see anywhere where the comparison with Richter is explained, let alone "very clearly"! How about a small chart giving Richter and Mms values (assuming they are always the same for a given quake, which I can't tell from this convoluted article) and something like destruction or energy released.

## Strange formula

I have noticed that after the main formula the symbols N and m are not explained. Reading previous discussions, I think they stand for newton and metre, to make M0 dimensionless. It doesn't really make sense, since they look like other parameters with their own value; in any case their meaning should be clear from the context, and it's not. I would suggest a notation like this: $M_\mathrm{w} = {2 \over 3}\left(\log_{10} | M_0 | - 9.1\right)$ where $| M_0 |$ is the dimensionless modulus of the seismic moment.

About another previous discussion, I absolutely agree that in physics it is not correct to calculate the logarithm of any number that is not dimensionless, but in engineering it is quite common to represent dimension-ful functions with a very wide range using logarithmic scales. For example, the Bode plot of a voltage-to-current amplifier is the plot of a dimension-ful number [A/V] on a logarithmic scale. There is no problem in taking the logarithm of a dimension-ful number, as long as it's only for measuring or plotting sake. I can agree that the best scales use a reference to make the argument of the logarithm dimensionless (e.g. dB SPL): Tom Hanks and Hiroo Kanamori didn't make their scale properly from this point of view. We could fix it by writing: $M_\mathrm{w} = {2 \over 3}\left(\log_{10} \frac{M_0}{M_r} - 9.1\right)$ where $| M_0 |$ is the dimensionless modulus of the seismic moment and $M_r$ is the reference value of 1 N·m, but it would be original work, I guess (even if it's definitely more rigorous than the original). Alessio Damato (Talk) 10:40, 17 August 2007 (UTC)

I think the formula is currently expressed very well and should not be changed as you suggest. It currently follows the stylistic guidelines given in various International Standards related to scientific notation (e.g., ISO 31-0, SI Brochure, etc.), which specify that variable quantities are distinguished typographically from units and other constants by use of italic characters.
ISO 31-0 also specifies that a quantity is always a product of a number and a unit, and if you need a dimensionless number, you can simply access that by dividing the quantity through the unit in which you want the number to express the quantity. The current formula also uses the well-understood international standard symbols for the units used in the correct font. All these are world-wide very well-established conventions in scientific writing, that are globally taught in most secondary-school physics classes. I personally find them pretty convenient, logical, and easy to understand. On the other hand, I find your first alternative formula confusing, as it suggests calculating the logarithm of an energy quantity, which physically makes no sense. Your second formula is identical to the current one after the substitution Mr = 1 Nm, and therefore the current formula should be just as good as your second alternative. It simply avoids the unnecessary introduction of another, redundant quantity Mr. I suspect it all boils down to that you might not have been familiar with the (pretty common) typographic convention that units of measurement are not typeset in italics (NIST SP 811, etc.). Markus Kuhn 12:23, 18 August 2007 (UTC)

well, as you like, but I'll point out clearly in the description that they are not parameters with their own value. Alessio Damato (Talk) 15:06, 18 August 2007 (UTC)

Markus, I don't think the confusion was with the font. The confusion was with the notion of dividing by a unit. The concept that "if you need a dimensionless number, you can simply access that by dividing the quantity through the unit in which you want the number to express the quantity" is, I believe, not widely appreciated by educated adults in the U.S. It does make sense when you think about it, however. I have added a sentence explaining the notation. Mark Foskey (talk) 02:48, 14 May 2008 (UTC)

I see. I would have expected the notion to be pretty familiar and clear to most people in Europe who have enjoyed some form of secondary-school education in physics. But I admit that I do notice occasionally that some American authors do things with units that I would consider rather odd and cumbersome, possibly because they have not grown up with thinking of them just as factors that can be divided or multiplied like any other factor, or because an algebraic way of using units is not commonly taught there. I was taught that writing formulas like F = m · a, where F is in newtons, m is in kilos and a is in m/s², is bad style, as the units should always sort themselves out algebraically on their own in physical formulas. One should therefore never have to explicitly say which variable is measured in which unit, as they are just quantities that work no matter what the unit is. Only in ad-hoc scales that require logarithms does it actually become necessary to explicitly divide through a particular unit, as the formulas in this article do. Is the algebraic use of physical units only widely taught in countries that use mostly SI units, and can we therefore not just simply assume familiarity with it in Wikipedia articles about scientific topics? Markus Kuhn (talk) 14:08, 14 May 2008 (UTC)

While checking the sources I noticed that the equation was in fact completely wrong regarding the brackets, and $M_0$ should be given in dyne centimetres.[6] I have changed it to reflect the original paper, and explained the terms. –OrangeDog (talkedits) 02:53, 29 January 2009 (UTC)

## Richter Scale Inconsistency?
The article about the Richter Scale has the following:

The energy release of an earthquake scales with the 3⁄2 power of the shaking amplitude, and thus a difference in magnitude of 1.0 is equivalent to a factor of 31.6 in the energy released; a difference of magnitude of 2.0 is equivalent to a factor of 1000 in the energy released.

...the Richter Scale, which has a 10¹ = 10 times energy increase for a 1 step increase, and 10² = 100 times energy increase for a 2 step increase. Instead, an increase of 2 steps corresponds to a 10³ = 1000 times increase in energy.

If I assume the information about the Richter Scale in the Richter Scale article is accurate, it appears an editor of this article has confused amplitude with energy when referring to the Richter Scale. SlowJog (talk) 16:23, 13 May 2008 (UTC)

I found a USGS page that discusses the Richter magnitude scale and specifically states, "each whole number step in the magnitude scale corresponds to the release of about 31 times more energy than the amount associated with the preceding whole number value." This page was part of the external links on the Richter magnitude scale page; I changed it to a citation. Another USGS page states, "All of the currently used methods for measuring earthquake magnitude (ML, duration magnitude mD, surface-wave magnitude MS, teleseismic body-wave magnitude mb, moment magnitude M, etc.) yield results that are consistent with ML. In fact, most modern methods for measuring magnitude were designed to be consistent with the Richter scale." Added this as an external link on the Richter magnitude scale page. Gblandst (talk) 17:43, 13 May 2008 (UTC)

## On {{confusing}}

• DITTO, what he/she says!!! ... this is unfamiliar subject... comparisons needed! WORSE, the above comments have been asking for such for several years. Come on folks... is that professionalism at its best!?? // FrankB 14:15, 12 November 2008 (UTC)

## "Compared to Richter Scale"

I am just a layman who came to this page because an earthquake on the Main Page was described on the "moment" scale, and I wanted to know how "moment" scale values would correlate with the (to me) more familiar Richter scale. I was eventually able to find this information, although sort of buried beneath some very confusing mathematics. So, I added a section heading in hopes that information will be easier for future lay inquirers to find. Maybe the math equations should be moved lower in the article, rather than being the first thing your eye sees? I'm sure that, like me, they don't really convey any information at all to most people. —Preceding unsigned comment added by Blorblowthno (talkcontribs) 21:02, 15 October 2008 (UTC)

I took the silence on this issue for tacit consent, and have now gone ahead and moved the mathematical derivation lower down in the article. Blorblowthno (talk) 19:44, 29 October 2008 (UTC)

The section fails to specify whether quoted values are on MMS or Richter or both. Some example values in the ranges where the scales differ would also be useful. OrangeDog (talkedits) 00:04, 10 January 2009 (UTC)

Making up sample values is getting more into original research. It seems like that would be counterintuitive as well, because it would imply that the two scales have different magnitude values for the part of the scale where they overlap. This is wrong as far as I can tell; Local Magnitude and Moment Magnitude agree within the 3.0-7.0 or 3.5-7.0 range, and beyond that range you would only use one or the other (RMS for < 3, MMS for > 7).
Thus when the article says 6.0, you can take that to mean the same thing on both scales (as it says). BigNate37(T) 22:42, 10 January 2009 (UTC)

This explanation should be included in the article somewhere. I still think a small table or possibly a graph could show this explicitly with a few sample values across the ranges you mentioned. –OrangeDog (talkedits) 22:34, 11 January 2009 (UTC)

That graphic indeed makes no sense. What is it supposed to be showing? The differing gradients are both imprecise and unexplained. A line graph or x-y scatter would probably be better for whatever it's supposed to be. –OrangeDog (talkedits) 02:01, 26 January 2009 (UTC)

OrangeDog, thanks for continuing the conversation here. I'll tell you what I was trying to get at with the graphic, and maybe we can figure out a better way of representing it visually. Basically, the big issue on this article has always been, people want to know how to "convert" from Richter values to Moment Magnitude values. But that way of thinking is a fallacy. The scales are designed to dovetail together along the same continuum. The Richter "scale" and the Moment Magnitude "scale" aren't really different scales at all, but different mathematical formulae for computing values along that continuum. So, we can talk about a magnitude 9 earthquake... you'd never use the Richter formula to compute a magnitude estimate for that earthquake, because it's too big. With the Richter formula, no matter how large the inputs you use, your output will max out around 7.5. So, for a very large earthquake, you want to use the Moment Magnitude scale. Now, there's no sharp point where the Richter scale "stops" working and you "start" using Moment Magnitude. Just, as the earthquakes get larger and larger, the Richter formula becomes less and less helpful. In the same way, as earthquakes get smaller and smaller, the Moment Magnitude formula becomes less and less helpful. So, what I'm trying to show with the graphic is the ranges where the different "scales" are best suited/most helpful/most appropriate. —Preceding unsigned comment added by Blorblowthno (talkcontribs) 23:45, 27 January 2009 (UTC)

In response to BigNate37, deriving mathematical truth from published equations is not WP:OR. No-one would have any problem including that $\sin(2.124)=0.851$ if it were relevant, but it would be nearly impossible to find a verifiable source that explicitly states it. –OrangeDog (talkedits) 02:15, 26 January 2009 (UTC)

Re-wrote the section to simplify and include examples. Hopefully much clearer now. –OrangeDog (talkedits) 02:49, 29 January 2009 (UTC)

Please list values for both scales in the table here: Richter_magnitude_scale#Examples. The recent quake in Japan is listed as 9.0 on the Richter scale. Is it listed accurately? Would the value be higher or lower on the moment magnitude scale? - 71.179.126.205 (talk) 23:04, 10 April 2011 (UTC)

Where is the Japan earthquake listed on the Richter scale? The Richter scale has not been in use for at least 30 years. OrangeDog (τ • ε) 18:08, 11 April 2011 (UTC)

Under the "Richter Approximate Magnitude" heading on the "9.0" line: Sendai earthquake and tsunami (Japan), 2011. The M_W prefix may mean moment magnitude scale, but then the heading in the table of the article with the same name is misleading. - Ac44ck (talk) 02:03, 12 April 2011 (UTC)

It's linked to moment magnitude scale.
The Richter scale is not the problem, it's the observation using the 20 second period on the seismometer that is the problem - the scale itself has no upper limit, and Mw was designed to match Richter for lower magnitudes, so what the table is saying is that the recent earthquake has the equivalent power of a 9.0 on the Richter scale, even if it couldn't be measured in that way. Mikenorton (talk) 11:01, 12 April 2011 (UTC)

If it was 9.0 on the Richter scale, then what was it on the moment magnitude scale? Is it true that the "big ones" have higher numbers on the moment magnitude scale than they would on the Richter scale? - Ac44ck (talk) 03:37, 13 April 2011 (UTC)

It wasn't a 9.0 on the Richter scale. The article has since been corrected to remove all mention of the Richter scale, which is never used except by lazy journalists. OrangeDog (τ • ε) 17:44, 13 April 2011 (UTC)

This seems confusing to me:
• A column labeled "Richter Approximate Magnitude" in the Richter magnitude scale contains no values in the Richter magnitude scale?
• Some entries have no prefix: Lincolnshire earthquake (UK), 2008
• Some entries have the M_w prefix: Ontario-Quebec earthquake (Canada), 2010
• Some entries have an M_s prefix: Caracas earthquake (Venezuela)
Is an apples-to-apples comparison not doable? Is it true that the "big ones" have higher numbers on the moment magnitude scale than they would on the Richter scale? - Ac44ck (talk) 04:19, 14 April 2011 (UTC)

## Clarifying What the Values mean.

This article is still completely useless at explaining what a value on the scale actually means. Therefore I'm going to post a table from the Richter Scale article and maybe we can adapt it to the moment magnitude scale. If this works it can be inserted into the article. Comments? --Arnos78 (talk) 22:48, 1 June 2009 (UTC)

Yes, the scales give similar values so this would work well in this article, although generally moment magnitude is not used for quakes less than M3.5. RapidR (talk) 23:06, 1 June 2009 (UTC)

| Richter Magnitudes | Description | Earthquake Effects | Frequency of Occurrence |
| --- | --- | --- | --- |
| Less than 2.0 | Micro | Microearthquakes, not felt. | About 8,000 per day |
| 2.0-2.9 | Minor | Generally not felt, but recorded. | About 1,000 per day |
| 3.0-3.9 | | Often felt, but rarely causes damage. | 49,000 per year (est.) |
| 4.0-4.9 | Light | Noticeable shaking of indoor items, rattling noises. Significant damage unlikely. | 6,200 per year (est.) |
| 5.0-5.9 | Moderate | Can cause major damage to poorly constructed buildings over small regions. At most slight damage to well-designed buildings. | 800 per year |
| 6.0-6.9 | Strong | Can be destructive in areas up to about 160 kilometres (100 mi) across in populated areas. | 120 per year |
| 7.0-7.9 | Major | Can cause serious damage over larger areas. | 18 per year |
| 8.0-8.9 | Great | Can cause serious damage in areas several hundred miles across. | 1 per year |
| 9.0-9.9 | | Devastating in areas several thousand miles across. | 1 per 20 years |
| 10.0+ | Epic | Never recorded; see below for equivalent seismic energy yield. | Extremely rare (Unknown) |

(Based on U.S. Geological Survey documents.)[1]

Changing this to use MMS would be complete synthesis. You would need a source that explicitly describes the effects of earthquakes of differing magnitude. Even in itself this table isn't very useful, the distinctions being drawn are done so pretty arbitrarily (compare a 4.9 to a 5.0 for example). The moment magnitude scale isn't based on destructive effects or actual shaking of the ground. OrangeDog (talkedits) 08:54, 2 June 2009 (UTC)

No, this is perfectly appropriate for MMS.
In fact, as the table refers to magnitudes greater than 7, which are impossible to measure on the classical body-wave Richter scale, it is probably based on MMS (or a surface-wave magnitude scale). Remember, MMS values are 1) designed to roughly mimic the values originally used in the Richter scale, 2) provide the capability to measure earthquakes larger than 7, which classical body-wave amplitude based magnitude measurements cannot do, and 3) relate the magnitude of an earthquake to the size of the rupture. There is no reason to "compare" Richter and MMS*. They are the same thing, as best as Hanks and Kanamori could do within the constraints and within the error of the estimates, which are substantial. Given that true "Richter" magnitudes have not been calculated since the Wood-Anderson seismometer became obsolete (1960s), anyone who thinks they are familiar with the "Richter scale" is mistaken, unless they are referring to the general idea of classifying earthquake size by a small number typically in the range of 2 to 9. This needs to be clarified in the article, and I may try.

*Unless you are estimating magnitude/frequency relationships from old catalogs in southern California or some other esoteric pursuit. 146.244.227.220 (talk) 23:49, 6 January 2010 (UTC)
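The "about 31 times more energy" per whole-number step quoted from the USGS earlier in this discussion follows from the 10^(1.5·ΔM) energy scaling. A minimal Python sketch (my illustration, not part of the talk page):

```python
def energy_ratio(mw_a, mw_b):
    """Ratio of seismic energies implied by two moment magnitudes,
    assuming energy scales as 10**(1.5 * Mw)."""
    return 10 ** (1.5 * (mw_a - mw_b))

print(energy_ratio(5.0, 4.0))  # one magnitude step:  ~31.6
print(energy_ratio(8.8, 7.0))  # 1.8 magnitude steps: ~501
```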
The title and body of the section do make it clear, but they are not verifiable. I would suggest that such a reference goes where I had placed the {{cn}}. OrangeDog (τ • ε) 20:55, 17 March 2011 (UTC) The "unnecessary recentist" I understood and accepted (I don't agree with it, but I can live with it). The close relationship of which you aren't yet convinced—or anyway desire third-party confirmation—is straightforward high-school algebra, as Learjeff wrote above and the text makes plain: you start with the definition (the equation starting with $M_\mathrm{w} = \ldots$) and solve it for $M_0$. That's what's in the numerator and in the denominator, which represent the two quakes being compared. And since the numerator and denominator are both in terms of energy, the ratio connotes proportional energy. QED. As to making it clear that this is kosher, the Toronto Star reference first shows up in an even earlier version of the article that worked through one of those recentist examples, both to help any math-challenged readers and to provide verification. That version says, "As seismologists reported,[5] the Chilean quake released roughly 500 times as much energy as the Haitian quake. That comparison works out as follows: $f_{\Delta E} = 10^{(3/2) \times 1.8} \approx 501$." That was a better place to put the reference, but the example got removed by editors who thought it was too much hand holding.—PaulTanenbaum (talk) 00:38, 18 March 2011 (UTC) One can create any equation they wish using simple algebra. I could likewise construct an equation relating the magnitude of an earthquake with the weight of peanuts that would release the same amount of energy should they undergo nuclear fission. Without an actual specific and reliable source, this article disintegrates into a long list of pointless (and often wrong) equations. OrangeDog (τ • ε) 00:03, 22 March 2011 (UTC) You enumerate above two expectations of the desired reliable secondary source: (1) that it verify "the equation in question to be 'closely related,'" and (2) that it verify that the equation allows one to assess the proportional difference in energy release between earthquakes. As to (1), would you demand a reference to justify an article's describing the equation $m = F/a$ as closely related to the equation $F = ma$? And as to (2), the section in question explains that the algebra solves the earlier equation for $M_0$, which (as the definition explains) has units dyn·cm (i.e., energy), so taking the ratio of two different $M_0$ values gives you a proportional comparison of energies; for what part of that logic do you require verification? I am just not seeing the problem, and your peanut-fission example, though perhaps cute, does not help clarify your concern.—PaulTanenbaum (talk) 21:13, 29 March 2011 (UTC) Actually, yes I would, unless it was an article on elementary algebra. There is no evidence that the equation in question in this article is relevant, important or correct, in decreasing order of concern. OrangeDog (τ • ε) 21:16, 31 March 2011 (UTC)

## Converting magnitude to energy

The formula in this section does not appear to be correct. For a magnitude 4 earthquake I get 56 GJ, not 1 PJ. I used the expressions $M_\mathrm{w} = \frac{2}{3}\log_{10}M_0 - 10.7$ and $E = M_0/(2 \times 10^4)$, both from here for the energy in dyne·cm, dividing by another $10^7$ to convert to joules. My values match others that I've found in this linked powerpoint file. Mikenorton (talk) 22:30, 18 March 2011 (UTC) So do I.
As it's unreferenced, and with an insufficient definition of energy, I'm removing it. OrangeDog (τ • ε) 00:14, 22 March 2011 (UTC)

## The problem of "energy" related to earthquakes and magnitudes

I see from the text and the discussions that there exists some confusion about what "energy" is. As the article states correctly, elastic energy is released during a rupture of a fault. This energy goes into fracturing rock, generating heat, and energy radiated as seismic waves, Es. Es can be measured by integrating the spectrum of the radiated waves. Because this requires extra work, Es is often estimated from one of the magnitudes, using an empirical equation. However, that is not quite correct. Most of the energy resides in high frequency waves, and especially in the "corner frequency". There are eqs with relatively large mb and low Mw. These are high stress drop events. Generally Es is best estimated from mb, and NOT from Mw, because the latter is based on very long period, low energy waves. Perhaps I should take the trouble to explain this somewhere. I have introduced changes in several places in the text of the eq project, trying to get away from saying magnitude measures energy (because that is not quite correct) and toward saying "magnitude measures the size of an eq". That avoids the issue of whether Es or the entire elastic energy released by an earthquake is considered. The moment, Mo, has units of energy because it measures the work done. Of course there are many technical articles to cite; we do not need to go to ppt-file lecture notes. QUESTION: Should I explain this, or is someone going to remove what I write? If the latter is the case, I do not want to spend the time. MaxWyss (talk) 07:22, 18 April 2011 (UTC) What you've written so far is really good. As long as you continue to cite any possible points of contention, go right ahead. I think the Richter comparison section could do with some of its content being moved to a "History" section, as much of it isn't really about Richter any more. OrangeDog (τ • ε) 19:02, 18 April 2011 (UTC)

## Thoughts on edit as of October 2012

After listening to an introductory geology lecture in which a Ph.D. geologist was unable to put the moment magnitude scale into perspective, I took a shot at some words to make this more easily understood (and pulled the discussion earlier into the article). Am running out of time today, but will come back to pull the excellent material from the comparison with the Richter scale a bit forward into the article as well. Think this is another face of the continuing Wikipedia struggle between providing something for the unexpert reader and providing solid technical material for the more expert user. As always, if I get it wrong, Wiki my work... Skaal - Williamborg (Bill) 23:25, 30 October 2012 (UTC)

## Mw is not moment magnitude

Unfortunately, it has become common for seismologists to refer to "Moment Magnitude" as Mw. This is incorrect and Wikipedia should correct this. The following is the correct vocabulary. Hanks and Kanamori (1979) introduced the Moment Magnitude scale and they called it M. Actually it's a capital script M. They defined M == 2/3 (log M0 - 16.05), where M0 is moment in dyne-cm. This is THE definition of Moment Magnitude. Here's where it came from. Thatcher and Hanks (1973) pointed out that there was a general relationship between moment and local magnitude, Ml, that could be written Ml =~ 2/3 (log M0 - 16).
In 1978, Kanamori published his paper on the energy magnitude Mw, where he argued that it makes most sense to measure the size of an earthquake using its radiated energy. He used W to mean energy; he did this even though W is often used for strain energy density in mechanics. Radiated energy and strain energy are definitely different quantities. At any rate, Kanamori's definition of Mw is Mw == 2/3 (log W0 - 11.8), where W0 is total radiated energy in ergs (M0 is a torque and W0 is an energy). In that same paper, Kanamori recognized that it's very difficult to estimate W0, so he said that you could estimate W0 using the following approximate relationship between W0 and M0: W0 =~ M0 * 5 * 10**-5. If you replace W0 with this approximation in terms of M0 in Kanamori's relation, then Mw =~ 2/3 (log M0 - 16.1). Note that this is an approximation for Mw. After Kanamori published his 1978 paper, Hanks noted that the 1973 Ml relationship with M0 was the same as the 1978 Kanamori relationship between Mw and M0, except that Thatcher and Hanks had a constant of 16 and Kanamori used 16.1. Hanks convinced Kanamori to write a paper to define a scale based on moment. The constant used in Hanks and Kanamori is 16.05, which was a political compromise. The rest should be history, except that somehow it has become common to call Mw moment magnitude. To further complicate the issue, seismic moment is not really a unique measure of an earthquake's size. This is a long and technical issue, but the bottom line is that average slip times the rupture area is indeed a physical parameter (called potency), whereas multiplying potency by the local rigidity (the definition of seismic moment) results in a parameter that cannot be uniquely determined. Please fix this site. It is perpetuating a common misunderstanding. Heatoncaltech (talk) 22:58, 24 January 2013 (UTC) As you say, it has become common to use Mw (much more common than M in my experience), and Wikipedia has to follow usage rather than try to change it, I'm afraid. Mikenorton (talk) 23:09, 24 January 2013 (UTC)

## Vandalism

The section "The Richter scale; a former measure of earthquake magnitude" appears to have been vandalised after the final sentence. I attempted to remove this, but don't seem to be able to, as the text does not appear when I try to edit. Here's a screenshot I did of the section. The only Photoshop work I performed was to highlight the text. [7] Bwob (talk) 22:03, 10 September 2013 (UTC) I'm going to guess that it was a cache problem, leaving you viewing an old version of the page, before the vandalism was reverted by ClueBot just a minute after it was added. I looked at it and had the same problem, but I noticed a typo and when I saved that edit, the page updated. Mikenorton (talk) 22:37, 10 September 2013 (UTC)
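As a sanity check on the figures in the "Converting magnitude to energy" discussion above, here is a short Python sketch (an illustration of the quoted relations, not part of the cited sources) that chains $M_\mathrm{w} = \frac{2}{3}\log_{10}M_0 - 10.7$ (with $M_0$ in dyne-cm) with $E \approx M_0/(2 \times 10^4)$ and the conversion of $10^7$ erg per joule:

```python
def moment_dyne_cm(mw):
    """Seismic moment M0 (dyne-cm), inverting Mw = (2/3)*log10(M0) - 10.7."""
    return 10 ** (1.5 * (mw + 10.7))

def radiated_energy_joules(mw):
    """Radiated energy estimate: E ~ M0 / 2e4 (dyne-cm = erg), then 1e7 erg/J."""
    return moment_dyne_cm(mw) / 2e4 / 1e7

print(radiated_energy_joules(4.0))  # ~5.6e10 J, i.e. the 56 GJ quoted above
print(radiated_energy_joules(9.0))  # ~1.8e18 J
```

Running this reproduces the magnitude-4 value of roughly 56 GJ mentioned in the discussion, so the two quoted relations are at least mutually consistent.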
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.893288254737854, "perplexity": 1319.1489698278906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121961657.12/warc/CC-MAIN-20150124175241-00108-ip-10-180-212-252.ec2.internal.warc.gz"}
https://www.askmehelpdesk.com/skin-care/dirty-skin-9750.html
This is very embarrassing! I have skin that is much darker than my normal complexion. It is unfortunately due to carelessness from my youth. I scrub really hard but I can't seem to get rid of it. Please help. Please note that I can't afford expensive creams and laser treatments. How can I get rid of this problem?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.871546745300293, "perplexity": 820.4675879631677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815500.61/warc/CC-MAIN-20180224073111-20180224093111-00721.warc.gz"}
http://hardmath123.github.io/envelope.html
## Caustics and Casinos on the I-5 #### Friday, January 4, 2019 · 7 min read I was in Los Angeles over winter break, and on the long drive back home I began thinking about a billboard I saw just at the edge of the city advertising the “closest casino to anywhere in LA.” This is a fascinating claim. Let me rephrase it, at least the way I interpret it: the claim is that wherever you are in LA, the closest casino is the one advertised on the billboard. (This is confusingly distinct from the claim that the casino is closest to anywhere in LA, in the sense that the placement of the casino minimizes the distance to the nearest bit of LA soil. Of course, practically speaking this latter claim is useless because any casino within LA trivially has the minimal distance of zero to “anywhere in LA.”) The question is this: what region does the billboard’s claim imply is devoid of casinos? Let’s start with a simple case to get some intuition. Suppose LA is a 15-mile-radius disk, and the casino is at the center of the disk. Then, according to the billboard, there must be no other casinos within LA’s 15-mile radius (otherwise, if you were in LA, you might be closer to that other casino!). But actually, the claim is quite a bit stronger: there cannot be any casinos within thirty miles of the center. Why? Well, suppose there was a casino 20 miles from the center of LA. Then, someone just within the city borders would be 5 miles away from that other casino, but 15 miles from the city-center casino. One more simple case: suppose LA is a line segment of length 15 miles, and the casino is located at one endpoint. Can you imagine what the region in question must be? It is a disk of radius 15 miles, centered at the other endpoint! This is not entirely obvious, and you might need to draw a picture to convince yourself that this is true. Now let’s consider a much trickier case: suppose LA is a circle again, but the casino is located along the circumference. Suddenly, it’s much harder to picture what’s going on — sitting in a car without pencil or paper, I had no idea what the region might look like. My instinct was “circle centered at the diametrically-opposite point on the circumference,” but it turns out that this is wrong! In the rest of this post, we’ll build up some mathematical machinery to answer this question correctly. If you don’t want to work through the math, however, then feel free to just scroll to the red-bordered squares and enjoy the interactive demos. Definition. Given some region $R \subset \mathbb{R}^2$, the casino closure with respect to some point $p \in \mathbb{R}^2$ is defined as $R^p = \left\{ c \in \mathbb{R}^2 ~\middle|~ \exists z \in R, || p - z || > || c - z || \right\}$ Or, informally: the casino closure is the set of possible casino locations $c$ such that there exists a person $z$ in the region $R$ who is closer to $c$ than to $p$. From here on out, I’m only going to worry about “nice” regions, i.e. closed, connected regions with smooth boundaries. I’m also going to use $x, p, z, c$ to range over points in $\mathbb{R}^2$, but really most math that follows is equally applicable in $\mathbb{R}^n$. It’s just harder to visualize. Lemma. Let the disk $D(z, r)$ be the set of all points in $\mathbb{R}^2$ within $r$ of point $z$. Then, $R^p = \bigcup_{z~\in~R} D(z, ||p - z||)$ That is, we can construct $R^p$ by combining all the disks at all the points in $R$ that have $p$ on their circumference. Proof sketch. 
This should make sense “by construction.” If point $c$ is in this constructed $R^p$, then it lies in some disk $d$, and thus the point $z \in R$ that created disk $d$ fulfills the existence criterion of our definition. Lemma. If region $R$ has a boundary $B \subset R$, then $B^p = R^p$. Proof sketch. Clearly, $B^p \subset R^p$ if you believe the first lemma — adding more points to $B$ should only increase the union of disks. The harder direction to show is $R^p \subset B^p$. Consider some point $z \in R$. Extend the ray $\overrightarrow{pz}$ until it intersects with $B$ at point $z^\prime$. Then, we can relate the disks created by $z$ and $z^\prime$: $D(z, ||p-z||) \subset D(z^\prime, ||p-z^\prime||)$. Why? Because the disk from $z$ is smaller and internally tangent to the disk from $z^\prime$. Mapping this argument over the entire union, it makes sense that $R^p \subset B^p$. Okay, time for some empirical verification of all this theory. Try drawing the boundary of a region (in black) here! The disks will show up in red. Notice how filling in your region with black dots doesn’t change the red blob at all. (Click to clear.) The boundary-circle lemma gives us a nicer characterization of $R^p$: it is the region bounded by the curve that is tangent to all of the disks created by the points on $B$. It turns out that there is a very nice mathematical theory of “the curve that is tangent to all curves in a given family of curves,” and that is the theory of envelopes. The account below is paraphrased from “What is an Envelope?”, a lovely 1981 paper. Let the function $F(x, t) : \mathbb{R}^2 \times \mathbb{R} \rightarrow \mathbb{R}$ define a family of curves parameterized by $t$, in the sense that $F(x, 0) = 0$ defines a curve and $F(x, 1) = 0$ defines another curve, and so on. Then, we seek to characterize the envelope curve which is tangent to every curve in $F$. Lemma. If the boundary of $R$ can be (periodically) parameterized as $B(t)$, $t \in \mathbb{R}$, then the boundary of $R^p$ is the envelope with respect to $t$ of $F(x, t) = || x - B(t) ||^2 - || p - B(t) ||^2$. Proof sketch. Again, this should make sense “by construction”: $F$ is chosen to correspond to circles centered at $B(t)$ passing through $p$. Ah, but how do we find the envelope? Here we need a tiny bit of multivariable calculus. Let $X(t)$ be the parameterization of the envelope of $F$. Then for all $t$, we have that $F(X(t), t) = 0$ because the envelope must lie on the respective curve in the family (“tangent” means “touch”!). We also have that the curve $X(t)$ must be parallel to the member of the family $F$ at $t$. We can then express this condition by saying that the gradient (with respect to $X$) of $F$ at $t$ is perpendicular to the derivative of $X$ at $t$. Or: $d X(t) / dt \cdot \nabla_X F(X(t), t) = 0$ In two dimensions, with $X(t) = (x(t), y(t))$, this equation manifests itself as $x^\prime(t)\partial F(x, y, t) / \partial x + y^\prime(t)\partial F(x, y, t) / \partial y = 0$. The left hand side is oddly reminiscent of the multivariable chain rule. Indeed, if we took the partial derivative of our equation $F(X(t), t) = 0$ with respect to $t$, we would get: $dF(X, t)/dt = dX/dt\cdot\nabla_X F(X, t) + dt/dt\cdot \partial F(X, t)/\partial t = 0$ So we must have $\partial F(X, t) / \partial t = 0$. There is a simpler but less rigorous derivation if you believe that the envelope is exactly the points of intersection of infinitesimally close curves in the family $F$.
Then we want every point $X$ on the envelope to satisfy both $F(X, t) = 0$ and $F(X, t + \delta) = 0$ for some $t$ and some infinitesimal $\delta$. Taking the limit as $\delta$ approaches zero gives the same condition that $\partial F(X, t) / \partial t = 0$. The paper above discusses how this notion is subtly different in some strange cases, but it suffices to say that for all “nice” $R$, we’re fine. Almost-a-theorem. The boundary of $R^p$ is given by the parameterized vectors $X$ that satisfy $F(X, t) = 0$ and $\partial F (X, t) / \partial t = 0$ for the $F$ defined above. Almost-a-proof-sketch. Almost! In general, the solution for $X$ might self-intersect, so we want to take only the “outermost” part of $X$. But this is easy to work out on a case-by-case basis. Example. Suppose $R$ is the unit disk centered at $(a, 0)$, and $p$ is located at the origin. Then $B(t) = (\cos t + a, \sin t)$ and $p = (0, 0)$. We have $F((x, y), t) = ((x - (\cos t + a))^2 + (y - \sin t)^2) - ((\cos t + a)^2 + \sin^2 t)$ $F((x, y), t) = -2ax + x^2 - 2x\cos t + y^2 - 2y \sin t = 0$ We also have $\partial F((x, y), t) / \partial t = 2x\sin t - 2y\cos t = 0$ Solving these by eliminating $t$ is a simple exercise in polar coordinates. Discover from the second equation that $t = \theta$, then recall that $r^2 = x^2 + y^2$. The resulting boundary of $R^p$ is (almost!) the curve $r = 2(1 + a\cos\theta)$. In other words, it’s (almost!) a limaçon! As $a$ varies, the character of the limaçon varies, and at the critical points $a = \pm 1$, we get a cardioid with a cusp. Beyond those critical points, the curve has an inner loop that we have to ignore; hence, “almost!” Okay, time for more empiricism. Move your mouse around in the square below to see how the relative placement of LA and the casino affects the envelope. Notice also the inner loop predicted by the envelope, which we should of course ignore for the purposes of bounding $R^p$. An amazing fact is that a cardioid is the same shape you get on the surface of your coffee mug when you put it under a light! Well, not quite — it depends subtly on where the light source is. Read more about caustics at Chalkdust, from whom I also borrowed an image of the coffee-mug caustic. Further reading: Wikipedia has great diagrams to accompany this. Dan Kalman’s article “Solving the Ladder Problem on the Back of an Envelope” includes a nice pedagogically-oriented discussion of envelope subtleties. It cites Courant’s Differential and Integral Calculus (vol 2), which is freely available on Archive.org. ◊ ◊ ◊
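To complement the interactive demos, here is a small self-contained Python sketch (an illustration of the worked example above, with the casino $p$ at the origin and the region a unit disk centered at $(a, 0)$): it samples the claimed limaçon $r = 2(1 + a\cos\theta)$ and checks that every sampled point satisfies both envelope conditions $F(X, t) = 0$ and $\partial F(X, t)/\partial t = 0$.

```python
import math

a = 0.6  # offset of the disk's center from the casino p at the origin

def F(x, y, t):
    # F((x, y), t) = -2ax + x^2 - 2x cos t + y^2 - 2y sin t, as derived above
    return -2 * a * x + x * x - 2 * x * math.cos(t) + y * y - 2 * y * math.sin(t)

def dF_dt(x, y, t):
    return 2 * x * math.sin(t) - 2 * y * math.cos(t)

for k in range(360):
    t = 2 * math.pi * k / 360
    r = 2 * (1 + a * math.cos(t))            # claimed envelope, with theta = t
    x, y = r * math.cos(t), r * math.sin(t)
    assert abs(F(x, y, t)) < 1e-9
    assert abs(dF_dt(x, y, t)) < 1e-9

print("all 360 sampled limacon points satisfy F = 0 and dF/dt = 0")
```

Changing `a` past $\pm 1$ makes $r$ go negative for some $\theta$, which is exactly the inner loop the post says to ignore.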
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8770158290863037, "perplexity": 292.12398554621785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912205600.75/warc/CC-MAIN-20190326180238-20190326202238-00315.warc.gz"}
https://www.arxiv-vanity.com/papers/0805.1370/
# Isometric embeddings into the Minkowski space and new quasi-local mass

Mu-Tao Wang and Shing-Tung Yau

May 9, 2008, revised August 12, 2008

###### Abstract

The definition of quasi-local mass for a bounded space-like region in space-time is essential in several major unsettled problems in general relativity. The quasi-local mass is expected to be a type of flux integral on the boundary two-surface and should be independent of whichever space-like region it bounds. An important idea which is related to the Hamiltonian formulation of general relativity is to consider a reference surface in a flat ambient space with the same first fundamental form and derive the quasi-local mass from the difference of the extrinsic geometries. This approach has been taken by Brown-York [4][5] and Liu-Yau [16][17] (see also related works [12], [15], [6], [14], [3], [9], [28], [32]) to define such notions using the isometric embedding theorem into the Euclidean three-space. However, there exist surfaces in the Minkowski space whose quasi-local mass is strictly positive [19]. It appears that the momentum information needs to be accounted for to reconcile the difference. In order to fully capture this information, we use isometric embeddings into the Minkowski space as references. In this article, we first prove an existence and uniqueness theorem for such isometric embeddings. We then solve the boundary value problem for Jang's [13] equation as a procedure to recognize such a surface in the Minkowski space. In doing so, we discover a new expression of quasi-local mass for a large class of "admissible" surfaces (see Theorem A and Remark 1.1). The new mass is positive when the ambient space-time satisfies the dominant energy condition and vanishes on surfaces in the Minkowski space. It also has the nice asymptotic behavior at spatial infinity and null infinity. Some of these results were announced in [29].

## 1 Introduction

### 1.1 Dominant energy condition and positive mass theorem

Let $N$ be a space-time, i.e. a four-manifold with a Lorentzian metric of signature $(-,+,+,+)$ that satisfies the Einstein equation:

$$R_{\alpha\beta} - \frac{s}{2}\, g_{\alpha\beta} = 8\pi G\, T_{\alpha\beta}$$

where $R_{\alpha\beta}$ and $s$ are the Ricci curvature and the Ricci scalar curvature of $N$, respectively. $G$ is the gravitational constant and $T_{\alpha\beta}$ is the energy-momentum tensor of matter density. The metric defines space-like, time-like and null vectors on the tangent space of $N$ accordingly. A "dominant energy condition", which corresponds to a positivity condition on the matter density $T_{\alpha\beta}$, is expected to be satisfied on any realistic space-time. It means the following: for any time-like vector $u$, $T(u,u) \geq 0$ and $T(u,\cdot)$ is a non-space-like co-vector. We shall assume throughout this article the space-time satisfies the dominant energy condition.

Consider a space-like hypersurface $(\Omega, g_{ij}, p_{ij})$ in $N$, where $g_{ij}$ is the induced (Riemannian) metric and $p_{ij}$ is the second fundamental form with respect to the future-directed time-like unit normal vector field of $\Omega$. The dominant energy condition together with the compatibility conditions for submanifolds imply

$$\mu \geq |J| \tag{1.1}$$

where

$$\mu = \frac{1}{2}\left(R - p_{ij}p^{ij} + (p^k_{\;k})^2\right), \quad \text{and} \quad J^i = D_j\left(p^{ij} - p^k_{\;k}\, g^{ij}\right).$$

Here $R$ is the scalar curvature of $g_{ij}$. An important special case is when $p_{ij} = 0$ (the time-symmetric case), and the dominant energy condition implies that the scalar curvature of $\Omega$ is non-negative. The positive mass theorem proved by Schoen-Yau [22, 23, 24] (later a different proof by Witten [30]) states:

###### Theorem 1.1

Let $(\Omega, g_{ij}, p_{ij})$ be a complete three-manifold that satisfies (1.1). Suppose $\Omega$ is asymptotically flat, i.e.
there exists a compact set $K$ such that $\Omega \setminus K$ is diffeomorphic to a union of complements of balls in $\mathbb{R}^3$ (called ends), with appropriate decay conditions on $g_{ij}$ and $p_{ij}$ on each end of $\Omega$. Then the ADM mass (Arnowitt-Deser-Misner) of each end of $\Omega$ is positive, i.e.

$$E \geq |P| \tag{1.2}$$

where

$$E = \lim_{r \to \infty} \frac{1}{16\pi G} \int_{S_r} \left(\partial_j g_{ij} - \partial_i g_{jj}\right) d\Omega^i$$

is the total energy and

$$P_k = \lim_{r \to \infty} \frac{1}{16\pi G} \int_{S_r} 2\left(p_{ik} - \delta_{ik}\, p_{jj}\right) d\Omega^i$$

is the total momentum. Here $S_r$ is a coordinate sphere of radius $r$ on an end.

We notice that the conclusion of the theorem is equivalent to the statement that the four-vector $(E, P_1, P_2, P_3)$ is future-directed time-like, i.e.

$$E \geq 0 \quad \text{and} \quad -E^2 + P_1^2 + P_2^2 + P_3^2 \leq 0.$$

The asymptotic flatness condition can be considered a gauge condition to assure that $\Omega$ can be compared to the flat space $\mathbb{R}^3$. The essence of the positive mass theorem is that positive local matter density (1.1) measured pointwise should imply positive total energy-momentum (1.2) measured at infinity. In contrast, the "quasi-local mass" corresponds to the measurement of mass at in-between scales.

### 1.2 Two-surfaces in space-time and quasi-local notion of mass

Let $N$ be a time-oriented space-time. Denote the Lorentzian metric on $N$ by $\langle \cdot, \cdot \rangle$ and its covariant derivative by $\nabla^N$. Let $\Sigma$ be a closed space-like two-surface embedded in $N$. Denote the induced Riemannian metric on $\Sigma$ by $\sigma$ and the gradient and Laplace operator of $\sigma$ by $\nabla$ and $\Delta$, respectively. Given any two tangent vectors $X$ and $Y$ of $\Sigma$, the second fundamental form of $\Sigma$ in $N$ is given by $II(X,Y) = (\nabla^N_X Y)^\perp$, where $(\cdot)^\perp$ denotes the projection onto the normal bundle of $\Sigma$. The mean curvature vector is the trace of the second fundamental form, or $H = II(e_1, e_1) + II(e_2, e_2)$, where $\{e_1, e_2\}$ is an orthonormal basis of the tangent bundle of $\Sigma$. The normal bundle is of rank two with structure group $SO(1,1)$, and the induced metric on the normal bundle is of signature $(-,+)$. Since the Lie algebra of $SO(1,1)$ is isomorphic to $\mathbb{R}$, the connection form of the normal bundle is a genuine 1-form that depends on the choice of the normal frames. The curvature of the normal bundle is then given by an exact 2-form, which reflects the fact that any such bundle is topologically trivial. Connections of different choices of normal frames differ by an exact form. We define

###### Definition 1.1

Let $e_3$ be a space-like unit normal along $\Sigma$; the connection form determined by $e_3$ is defined to be

$$\alpha_{e_3}(X) = \langle \nabla^N_X e_3, e_4 \rangle \tag{1.3}$$

where $e_4$ is the future-directed time-like unit normal that is orthogonal to $e_3$. When $\Sigma$ bounds a space-like hypersurface $\Omega$, we choose $e_3$ to be the space-like outward unit normal with respect to $\Omega$, and the connection form is then denoted accordingly.

Suppose $\Sigma$ bounds a space-like hypersurface $\Omega$ in $N$; the definition of quasi-local mass asks that (see [8], [7]) (1) it be non-negative under the dominant energy condition; (2) it vanish if and only if $\Sigma$ is in the Minkowski spacetime; (3) its limit on large coordinate spheres of asymptotically flat (null) hypersurfaces should approach the ADM (Bondi) mass. The quasi-local mass is supposed to be closely related to the formation of black holes according to the hoop conjecture of Thorne. Various definitions for the quasi-local mass have been proposed (see for example the review article by Szabados [27]). In this article, we shall focus on quasi-local mass defined by the following comparison principle: anchor the intrinsic geometry (the induced metric) by isometric embeddings and compare the other extrinsic geometries. An important feature that we expect is that the definition should be a flux-type integral on $\Sigma$ and that it should depend only on the fact that $\Sigma$ bounds a space-like hypersurface, but not on which specific hypersurface it bounds.
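Several symbols in the three desiderata above were lost in extraction; for later reference, here is one standard way to restate them in LaTeX, writing $m(\Sigma)$ for the quasi-local mass (the notation $m(\Sigma)$ is an assumption of this restatement, not fixed by the text):

```latex
% The three requirements on a quasi-local mass m(\Sigma), restated:
\begin{enumerate}
  \item $m(\Sigma) \geq 0$ whenever the ambient space-time satisfies
        the dominant energy condition;
  \item $m(\Sigma) = 0$ if and only if $\Sigma$ sits in the Minkowski
        space-time $\mathbb{R}^{3,1}$;
  \item $m(\Sigma)$ approaches the ADM (respectively Bondi) mass in the
        limit of large coordinate spheres on asymptotically flat
        (respectively null) hypersurfaces.
\end{enumerate}
```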
### 1.3 Prior results

We recall the solution of Weyl's isometric embedding problem by Nirenberg [18] and, independently, Pogorelov [21]:

###### Theorem 1.2

Let $\Sigma$ be a closed surface with a Riemannian metric of positive Gauss curvature; then there exists an isometric embedding of $\Sigma$ into $\mathbb{R}^3$ that is unique up to Euclidean rigid motions. In particular, the mean curvature of the isometric embedding is uniquely determined by the metric.

Through a Hamilton-Jacobi analysis of Einstein's action, Brown and York [4] [5] introduced

###### Definition 1.2

Suppose a two-surface $\Sigma$ bounds a space-like region $\Omega$ in a space-time $N$. Let $k_\Omega$ be the mean curvature of $\Sigma$ with respect to the outward normal of $\Omega$. Assume the induced metric on $\Sigma$ has positive Gauss curvature and denote by $k_0$ the mean curvature of the isometric embedding of $\Sigma$ into $\mathbb{R}^3$. The Brown-York mass is defined to be:

$$\frac{1}{8\pi G}\left(\int_\Sigma k_0 - \int_\Sigma k_\Omega\right).$$

###### Definition 1.3

Suppose $\Sigma$ is an embedded two-surface that bounds a space-like region in a space-time $N$. Assume $\Sigma$ has positive Gauss curvature. The Liu-Yau mass is defined to be

$$\frac{1}{8\pi G}\left(\int_\Sigma k_0 - \int_\Sigma |H|\right)$$

where $|H|$ is the Lorentzian norm of the mean curvature vector.

The Brown-York and Liu-Yau masses are proved to be positive by Shi-Tam [26] in the time-symmetric case, and by Liu-Yau [16] [17], respectively.

###### Theorem 1.3

[26] Suppose $\Omega$ has non-negative scalar curvature and $\Sigma = \partial\Omega$ has positive Gauss curvature and positive mean curvature. Then the Brown-York mass of $\Sigma$ is non-negative, and it equals zero if and only if $\Omega$ is flat.

###### Theorem 1.4

[16] [17] Suppose $N$ satisfies the dominant energy condition and the mean curvature vector of $\Sigma$ is space-like. The Liu-Yau mass is non-negative, and it equals zero only if $N$ is isometric to the Minkowski space along $\Sigma$.

However, Ó Murchadha, Szabados, and Tod [19] found examples of surfaces in the Minkowski space which satisfy the assumptions but whose Liu-Yau mass, as well as Brown-York mass, is strictly positive. It seems that the missing momentum information is responsible for this inconsistency: the Euclidean space can be considered as a totally geodesic space-like hypersurface in the Minkowski space with vanishing second fundamental form, and in both the Brown-York and Liu-Yau cases, the reference is taken to be the isometric embedding into $\mathbb{R}^3$. In order to capture this momentum information, we need to take the reference surface to be a general isometric embedding into the Minkowski space. However, an intrinsic difficulty for this embedding problem is that there are four unknowns (the coordinate functions in $\mathbb{R}^{3,1}$) but only three equations (for the first fundamental form). An ellipticity condition in place of the positive Gauss curvature condition is also needed to guarantee the uniqueness of the solution. We are able to achieve these in this article, and indeed the extra unknown (corresponding to the time function) allows us to identify a canonical gauge in the physical space and define a quasi-local mass expression. We refer to our paper [29] in which this expression was derived from the more physical point of view, i.e. the Hamilton-Jacobi analysis of the gravitational action.

### 1.4 Results, organization and acknowledgement

We first state the key comparison theorem:

Theorem A Let $N$ be a space-time that satisfies the dominant energy condition. Suppose $\Sigma$ is a closed embedded space-like two-surface in $N$ with space-like mean curvature vector $H$. Let $X$ be an isometric embedding of $\Sigma$ into the Minkowski space and let $\tau$ denote the restriction of the time function to $\Sigma$. Let $\bar{e}_4$ be the future-directed time-like unit normal along $\Sigma$ such that

$$\langle H, \bar{e}_4 \rangle = -\frac{\Delta\tau}{\sqrt{1 + |\nabla\tau|^2}}$$

and $\bar{e}_3$ be the space-like unit normal along $\Sigma$ with $\langle \bar{e}_3, \bar{e}_4 \rangle = 0$ and $\langle H, \bar{e}_3 \rangle < 0$.
Let be the projection of onto and be the mean curvature of in . If is admissible (see Definition 5.1), then ∫ˆΣ^k−∫Σ−√1+|∇τ|2⟨H,¯e3⟩−α¯e3(∇τ) (1.4) is non-negative. Indeed, we show ∫ˆΣ^k=∫Σ−√1+|∇τ|2⟨H0,˘e3⟩−α˘e3(∇τ) (1.5) (see equation (3.4) ) where is the mean curvature vector of in , is the space-like unit normal along in that is orthogonal to the time direction. The expression (1.4) naturally arises as the surface term in the Hamiltonian of gravitational action (see Remark 2.1). When the reference isometric embedding lies in an with , it recovers the Liu-Yau mass. ###### Remark 1.1 If the Gauss curvature of is positive, an isometric embedding into an with is admissible (see Corollary 5.3 and the preceding remark). In general, when the Gauss curvature is close to being positive, an isometric embedding with small enough is admissible. ###### Remark 1.2 We learned the expression in (1.5) from Gibbon’s paper [10]. Indeed, we are motivated by [10] to study the projection of a space-like two-surface in the Minkowski space. The new quasi-local mass is defined to be the infimum of the expression (1.4) over all such isometric embeddings (see Definition 5.2). We prove that such embeddings are parametrized by the admissible . Theorem B Given a metric and a function on such that the condition (3.1) holds. There exists a unique space-like isometric embedding with the induced metric and the function as the time function. In §2, we study the expression for surfaces in space-time. We consider it as a generalized mean curvature and study the variation of the total integral. The gauge in Theorem A indeed minimizes the total integral (see Proposition 2.1). In §3, we prove Theorem B and study the total mean curvature of the projected surface. In particular, we prove equality (1.5). In §4, we study the boundary problem of Jang’s equation and calculate the boundary terms. This is an important step in proving Theorem A. In §5, we define the new quasi-local mass and prove the positivity. In particular, Theorem A is proved. We emphasize that though the proof involves solving Jang’s equation, the results depend only on the solvability but not on the specific solution. The Euler-Lagrange equation of the new quasi-local mass among all admissible ’s is derived in §6. We wish to thank Richard Hamilton for helpful discussions on isometric embeddings and Melissa Liu for her interest and reading an earlier version of this article. The first author would like to thank Naqing Xie for pointing out several typos in an earlier version. ## 2 A generalization of mean curvature ###### Definition 2.1 Suppose is an embedded space-like two-surface. Given a smooth function on and a space-like normal , the generalized mean curvature associated with these data is defined to be h(Σ,i,τ,e3)=−√1+|∇τ|2⟨H,e3⟩−αe3(∇τ) where is the mean curvature vector of in and is the connection form (see 1.3) of the normal bundle of in determined by and the future-directed time-like unit normal orthogonal to . ###### Remark 2.1 In the case when bounds a space-like region and is the outward unit normal of , the mean curvature vector is H=⟨H,e3⟩e3−⟨H,e4⟩e4. We can reflect along the light cone of the normal bundle to get J=⟨H,e4⟩e3−⟨H,e3⟩e4. Denote the tangent vector on dual to the one-form by , then the expression (3) in [29] is , where and . We have h(Σ,i,τ,e3)=−⟨J−V,√1+|∇τ|2e4−∇τ⟩. Notice that is again a future-directed unit time-like vector along . 
Fix a base frame for the normal bundle, any other frame can be expressed as e3=coshϕ^e3−sinhϕ^e4,e4=−sinhϕ^e3+coshϕ^e4 (2.1) for some . We compute the integral ∫Σh(Σ,i,τ,e3)dvΣ=∫Σ[√1+|∇τ|2(coshϕ⟨∇Nea^e3,ea⟩−sinhϕ⟨∇Nea^e4,ea⟩)−α^e3(∇τ)−∇τ⋅∇ϕ]dvΣ and consider this expression as a functional of . Suppose the mean curvature vector of is space-like, we may choose a base frame with ^e3=−H|H| (2.2) and the future directed time-like unit normal that is orthogonal to . This choice makes . Integration by parts, the functional becomes ∫Σ(√1+|∇τ|2coshϕ|H|−α^e3(∇τ)+ϕΔτ)dvΣ. (2.3) As is positive, this is clearly a convex functional of which achieves the minimum as sinhϕ=−Δτ|H|√1+|∇τ|2. (2.4) We notice that the minimum is achieved by such that the expression |H|sinhϕ=⟨H,¯e4⟩=−Δτ√1+|∇τ|2 (2.5) depends only on ; this is taken as the characterizing property of in [29]. ###### Definition 2.2 Given an isometric embedding into a space-time with space-like mean curvature vector . Denote H(Σ,i,τ)=∫Σ[√(Δτ)2+|H|2(1+|∇τ|2)−∇τ⋅∇ϕ−α^e3(∇τ)]dvΣ. where is defined by (2.4) and is the connection one-form on associated with in equation (2.2). In terms of the frame where is given by equation (2.5) and is the space-like unit normal with , then H(Σ,i,τ)=∫Σh(Σ,i,τ,¯e3)dvΣ=∫Σ−√1+|∇τ|2⟨H,¯e3⟩−α¯e3(∇τ)dvΣ. ###### Proposition 2.1 If the mean curvature vector of the embedding is space-like and is any space-like unit normal such that , then ∫Σh(Σ,i,τ,e3)dvΣ≥H(Σ,i,τ). ## 3 Isometric embeddings into the Minkowski space ### 3.1 Existence and uniqueness theorem Let be a two-surface diffeomorphic to . We fix a Riemannian metric on , , in local coordinates . Denote the gradient, the Hessian, and the Laplace operator with respect to the metric by , , and , respectively. We consider the isometric embedding problem of into the Minkowski space with prescribed mean curvature in a fixed time direction. Let denote the standard Lorentzian metric on and be a constant unit time-like vector in , we have the following existence and uniqueness theorem: ###### Theorem 3.1 Let be a function on with . Let be a potential function of , i.e. . Suppose K+(1+|∇τ|2)−1det(∇2τ)>0 (3.1) where is the Gauss curvature of and is the determinant of the Hessian of . Then there exists a unique space-like embedding with the induced metric and such that the mean curvature vector of the embedding satisfies ⟨H0,T0⟩=−λ. (3.2) Proof. We prove the uniqueness part first. Let , be two isometric embeddings that satisfy (3.2). Since the mean curvature vector of the embedding is , this implies ⟨Δ(X1−X2),T0⟩=0, or is a constant on . Denote , we thus have . Now consider the projection onto the orthogonal complement of ; . The Gauss curvature of the embedding can be computed as ˆKi=(1+|∇τi|2)−1[K+(1+|∇τi|2)−1det(∇2τi)] (3.3) which is positive by the assumption. We compute the induced metric on the image of the embedding ⟨dˆXi,dˆXi⟩=⟨dXi,dXi⟩+dτ2i. Since we assume , ’s are embeddings into with the same induced metrics of positive Gauss curvature. By Theorem 1.2, and are congruent in . Since and are different by a constant, , as the graphs of over , are congruent in . We turn to the existence part. We start with the metric and the function and solve for in . The Gauss curvature of the new metric is again given by (3.3). Theorem 1.2 gives an embedding with the induced metric . Now is the desired isometric embedding into that satisfies (3.2). The existence theorem can be formulated in terms of as the mean curvature vector is given by . 
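To summarize, the existence proof above reduces the space-like embedding problem to the classical Weyl problem. The following LaTeX sketch records the steps, using $\sigma$ for the given metric on $\Sigma$ and $\hat{X}$ for the Nirenberg-Pogorelov embedding; these two symbols are assumptions of this summary, since the original symbols were lost in extraction:

```latex
% Construction in the proof of Theorem 3.1, assuming the ellipticity
% condition (3.1): K + (1+|\nabla\tau|^2)^{-1}\det(\nabla^2\tau) > 0.
\begin{align*}
  \hat{\sigma} &= \sigma + d\tau \otimes d\tau
    && \text{auxiliary metric on } \Sigma \\
  \hat{K} &= (1+|\nabla\tau|^2)^{-1}
      \bigl[K + (1+|\nabla\tau|^2)^{-1}\det(\nabla^2\tau)\bigr] > 0
    && \text{its Gauss curvature, as in (3.3)} \\
  \hat{X} &\colon (\Sigma, \hat{\sigma}) \hookrightarrow \mathbb{R}^3
    && \text{embedding from Theorem 1.2} \\
  X &= (\hat{X}, \tau) \subset \mathbb{R}^{3,1}
    && \text{space-like isometric embedding of } (\Sigma, \sigma)
\end{align*}
```

The last step works because the Minkowski metric subtracts $d\tau^2$ from the induced metric of $\hat{X}$, recovering $\hat{\sigma} - d\tau \otimes d\tau = \sigma$.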
###### Corollary 3.1 (Theorem B) Given a metric and a function on such that the condition (3.1) holds. There exists a unique space-like isometric embedding with the induced metric and the function as the time function. ### 3.2 Total mean curvature of the projection In this section, we compute the total mean curvature of the projection in in term of the geometry of in . Suppose is the embedding and is the restriction of the time function associated with . The outward unit normal of in and form an orthonormal basis for the normal bundle of in . Extend along by parallel translation and denote it by . We have ###### Proposition 3.1 ∫^Σ^kdvˆΣ=∫Σ[−⟨H0,˘e3⟩√1+|∇τ|2−α˘e3(∇τ)]dvΣ (3.4) Proof. Denote by the flat connection associated with the Lorentzian metric on . Take an orthonormal basis for the tangent space of and compute ^k=⟨∇R3,1^ea^ν,^ea⟩=⟨∇R3,1^ea^ν,^ea⟩+⟨∇R3,1^ν^ν,^ν⟩−⟨∇R3,1T0^ν,T0⟩, because the last two terms are both zero. Therefore for any orthonormal frame of where is the inverse of . Now may be considered as a space-like normal vector field along . Pick an orthonormal basis tangent to . Let be the future-directed unit normal vector in the direction of the normal part of . It is not hard to see that . form an orthonormal basis for the normal bundle of . We derive ^k=⟨∇R3,1ea˘e3,ea⟩−⟨∇R3,1˘e4˘e3,˘e4⟩=−⟨H0,˘e3⟩−1√1+|∇τ|2⟨∇R3,1∇τ˘e3,˘e4⟩ (3.5) because is extended along by parallel translation. The area forms of and are related by . Integrating equation (3.5) over , we obtain (3.4) Suppose the mean curvature vector of in is space-like. Let be the unit vector in the direction of and the future-directed time-like unit normal vector with . Suppose that eH03=coshθ˘e3+sinhθ˘e4,andeH04=sinhθ˘e3+coshθ˘e4. Since and , we derive sinhθ=−Δτ|H0|√1+|∇τ|2. (3.6) These imply the following relations ˘e3=coshθeH03−sinhθeH04,% and˘e4=−sinhθeH03+coshθeH04. The integrand on the right hand side of (3.4) becomes |H0|coshθ√1+|∇τ|2−∇θ⋅∇τ−⟨∇R3,1∇τeH03,eH04⟩. Therefore we have ###### Proposition 3.2 When the mean curvature vector of in is space-like, is equal to ∫Σ[√(Δτ)2+|H0|2(1+|∇τ|2)−∇θ⋅∇τ−αeH03(∇τ)]dvΣ. (3.7) where is given by (3.6) and is the one-form on defined by . ## 4 Jang’s equation and boundary information ### 4.1 Jang’s equation Jang’s equation was proposed by Jang [13] in an attempt to solve the positive energy conjecture. Schoen and Yau came up with different geometric interpretations, studied the equation in full, and applied to their proof [24] of the positive mass theorem. Another important contribution of Schoen and Yau’s work in [24] is to understand the precise connection between the solvability of Jang’s equation and the existence of black holes. This leads to the later works on the existence of black holes due to condensation of matter and boundary effect [25] [31]. Given an initial data set where is a symmetric two-tensor that represents the second fundamental form of with respect to a future-directed time-like normal in a space-time . We consider the Riemannian product and extend by parallel translation along the direction to a symmetric tensor on . Such an extension makes where denotes the downward unit vector in the direction. Jang’s equation asks for a hypersurface in , defined as the graph of a function over , such that the mean curvature of in is the same as the the trace of the restriction of to . In terms of local coordinates on , the equation takes the form 3∑i,j=1(gij−fifj1+|Df|2)(DiDjf(1+|Df|2)1/2−pij)=0, (4.1) where is the gradient of , and is the Hessian of . 
Pick an orthonormal basis for the tangent space of along such that is tangent to and is the downward unit normal, then Jang’s equation is 3∑i=1⟨˜∇~ei~e4,~ei⟩=3∑i=1P(~ei,~ei), (4.2) here and throughout this section is the Levi-Civita connection on the product space . ### 4.2 Boundary calculations Let be a smooth function on . We consider a solution of Jang’s equation in that satisfies the Dirichlet boundary condition on . Denote the graph of over by and the graph of over by so that . We choose orthonormal frames and for and , respectively. Let be the outward normal of that is tangent to . We also choose for the normal bundle of in such that is tangent to the graph and is a downward unit normal vector of in . forms an orthonormal basis for the tangent space of , so does . All these frames are extended along the direction by parallel translation. Along , we have Df=∇τ+f3e3 where is the normal derivative of . and can be written down explicitly: ~e3=1√1+|Df|2⎡⎢ ⎢⎣√1+|∇τ|2e3−f3√1+|∇τ|2(v+∇τ)⎤⎥ ⎥⎦and~e4=1√1+|Df|2(v+Df). (4.3) We check that and are orthogonal to for . Simple calculations yield ⟨e3,~e3⟩=√1+|∇τ|2√1+|Df|2,and⟨e3,~e4⟩=f3√1+|Df|2. (4.4) Let be the mean curvature of with respect to . We are particularly interested in the following expression on : ~k−⟨˜∇~e4~e4,~e3⟩+P(~e4,~e3). (4.5) ###### Theorem 4.1 Let be a space-like embedding. Given any smooth function on and any space-like hypersurface with . Suppose the Dirichlet problem of Jang’s equation (4.1) over subject to the boundary condition that on is solvable. Then there exists a space-like unit normal along in such that the expression (4.5) at is equal to −⟨H,e′3⟩−(1+|∇τ|2)−1/2αe′3(∇τ) at q∈Σ, where In particular ∫˜Σ~k−⟨˜∇~e4~e4,~e3⟩+P(~e4,~e3)dv˜Σ=∫Σ−√1+|∇τ|2⟨H,e′3⟩−αe′3(∇τ)dvΣ. (4.6) Let be the outward unit normal of that is tangent to and is the future-directed time-like normal of in , is given by e′3=coshϕe3+sinhϕe4,wheresinhϕ=−f3√1+|∇τ|2. (4.7) Proof. The proof is through a sequence of calculations using the product structure of and Jang’s equation. It also relies on the fact that , , and are all parallel in the direction of . We first prove the following identity: ~k−⟨˜∇~e4~e4,~e3⟩+P(~e3,~e4)=⟨˜∇ea~e3,ea⟩+⟨e3,~e4⟩⟨e3,~e3⟩⟨˜∇ea~e4,ea⟩−⟨e3,~e4⟩⟨e3,~e3⟩P(ea,ea)+1⟨e3,~e3⟩P(e3,~e4−⟨~e4,e3⟩e3). (4.8) We compute the terms and in the following. ⟨˜∇ea~e3,ea⟩=3∑i=1⟨˜∇ei~e3,ei⟩−⟨˜∇e3~e3,e3⟩=4∑α=1⟨˜∇~eα~e3,~eα⟩−⟨˜∇e3~e3,e3⟩, as and are both orthonormal frames for the tangent space of and . Notice that and thus we obtain 2∑a=1⟨˜∇ea~e3,ea⟩=~k+⟨˜∇~e4~e3,~e4⟩−⟨˜∇e3~e3,e3⟩. (4.9) On the other hand, 2∑a=1⟨˜∇ea~e4,ea⟩=3∑i=1⟨˜∇ei~e4,ei⟩−⟨˜∇e3~e4,e3⟩=3∑i=1⟨˜∇~ei~e4,~ei⟩−⟨˜∇e3~e4,e3⟩. Applying Jang’s equation (4.2), we obtain 2∑a=1⟨˜∇ea~e4,ea⟩=3∑i=1P(~ei,~ei)−⟨˜∇e3~e4,e3⟩. Furthermore, we derive 3∑i=1P(~
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9934846758842468, "perplexity": 425.43462844857913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.76/warc/CC-MAIN-20230322114226-20230322144226-00554.warc.gz"}
http://math.stackexchange.com/questions/158469/if-an-endomorphism-satisfies-alpha-alpha-then-its-eigenvalues-are-pure
# If an endomorphism satisfies $\alpha^* = -\alpha$, then its eigenvalues are purely imaginary

This is another exercise from Golan's book.

Problem: Let $V$ be an inner product space over $\mathbb{C}$ and let $\alpha$ be an endomorphism of $V$ satisfying $\alpha^*=-\alpha$, where $\alpha^*$ denotes the adjoint. Show that every eigenvalue of $\alpha$ is purely imaginary.

My proposed solution is below.

- Isn't there a geometric interpretation for the notion of adjoint? –  math-visitor Jun 15 '12 at 3:18

Let me show another argument which applies to a more general setting: if $\alpha$ is a linear operator on a Hilbert space satisfying $\alpha^*=-\alpha$, then the spectrum of $\alpha$ is purely imaginary (i.e. real part equal to zero). Indeed, one simply needs to notice that $\alpha-\lambda\,\text{id}$ is invertible if and only if $(\alpha-\lambda\,\text{id})^*$ is invertible. As $$(\alpha-\lambda\,\text{id})^*=\alpha^*-\overline\lambda\,\text{id}=-\alpha-\overline\lambda\,\text{id}=-(\alpha+\overline\lambda\,\text{id}),$$ we conclude that any $\lambda$ in the spectrum of $\alpha$ satisfies $\overline\lambda=-\lambda$.

- Suppose $c$ is an eigenvalue of $\alpha$ and $v$ is a corresponding eigenvector. We have $$c \langle v,v\rangle= \langle \alpha(v),v\rangle =\langle v, \alpha^*(v)\rangle =\langle v,-\alpha(v)\rangle=\langle v, -cv\rangle=\overline{\langle -cv, v\rangle} = \overline{-c\langle v,v\rangle}$$ Because $\langle v,v\rangle$ is real and nonzero (it is the squared norm of the eigenvector $v \neq 0$), we may cancel it and obtain $c=\overline{-c}$, which immediately shows $c$ is purely imaginary (write $c$ out in terms of real and imaginary parts to see this).

Looks fine. Why though couldn't $\,c=0\,$ ? A line about this could enhance the solution. –  DonAntonio Jun 15 '12 at 0:10

@Potato: $0$ is purely imaginary (in that it has zero real part). –  Qiaochu Yuan Jun 15 '12 at 1:04
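As a quick numerical illustration of the result (a sanity check, not a proof), the finite-dimensional case can be tested with NumPy: a skew-Hermitian matrix, i.e. one satisfying $A^* = -A$ with $A^*$ the conjugate transpose, should have eigenvalues with zero real part.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
A = M - M.conj().T  # skew-Hermitian by construction: A^* = -A

eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)
# The real parts vanish up to floating-point round-off:
assert np.allclose(eigenvalues.real, 0.0, atol=1e-8)
```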
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9928297400474548, "perplexity": 125.4516592943231}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678703495/warc/CC-MAIN-20140313024503-00053-ip-10-183-142-35.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-write-an-equation-of-the-line-with-slope-3-passing-thru-2-6
# How do you write an equation of the line with slope=-3 passing thru (2,6)?

Jul 12, 2015

I found: $y = - 3 x + 12$

#### Explanation:

You can use the relationship: $y - y_0 = m \left(x - x_0\right)$, where $m$ is the slope and $\left(x_0, y_0\right)$ are the coordinates of your point. Substituting $m = -3$ and $\left(x_0, y_0\right) = \left(2, 6\right)$:

$y - 6 = - 3 \left(x - 2\right)$

$y = - 3 x + 6 + 6$

$y = - 3 x + 12$

or, rearranged into standard form, $3 x + y = 12$.
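A quick numerical check of the answer (my own sketch, not part of the original solution): the line $y = -3x + 12$ indeed has slope $-3$ and passes through $(2, 6)$.

```python
def line(x):
    return -3 * x + 12  # the equation found above

assert line(2) == 6              # passes through (2, 6)
assert line(3) - line(2) == -3   # slope is -3
print("checks pass")
```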
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8499755859375, "perplexity": 1815.2334886888325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371665328.87/warc/CC-MAIN-20200407022841-20200407053341-00254.warc.gz"}
http://www.math.iisc.ac.in/events/2019-11-06-1st-infosys-symposium.html
We cordially invite you to the 1st Math Symposium of Infosys Young Investigators: an in-house faculty symposium of the Department of Mathematics, IISc, on Wednesday, 6th November, 2019. In it, the most recent cohort of Infosys Young Investigators in IISc Mathematics will present snapshots of their research funded by the Infosys Foundation.

Date: 6th November, 2019 (Wednesday)

Venue: Lecture Hall-1, Department of Mathematics

| Time | Speaker | Title |
| --- | --- | --- |
| 2:00 pm – 2:30 pm | Subhojoy Gupta | Meromorphic geometric structures on surfaces |
| 2:35 pm – 3:05 pm | Vamsi Pritham Pingali | Interpolation, or the lack thereof, from affine hypersurfaces; a vector bundle version of the Monge–Ampère equation |
| 3:05 pm – 3:25 pm |  | Tea |
| 3:25 pm – 3:55 pm | Apoorva Khare | Polymath-14 - Groups with norms; Distance matrices and Zariski density |
| 5:00 pm |  | High Tea |

Each lecture will be 30 minutes long, with a 5 minute break for Q&A and change of speaker.

### Abstracts

#### Lecture 1

Speaker: Subhojoy Gupta

Title: Meromorphic geometric structures on surfaces

Abstract: I shall present results from two projects of mine that were supported by the Infosys Foundation. Both concern geometric structures on a punctured Riemann surface X that are associated with holomorphic quadratic differentials on X via certain differential equations. The first concerns projective structures, which are determined by quadratic differentials via the Schwarzian differential equation. If we fix the orders of the poles at the punctures, the space of such meromorphic projective structures admits a monodromy map to the space of surface-group representations to PSL(2,C). I shall discuss a recent result characterizing the image of the monodromy map in the case the poles have order at most two. This is an analogue of a theorem of Gallo-Kapovich-Marden for closed surfaces, and clarifies a remark of Poincaré. The second concerns solutions of the non-linear PDE satisfied by harmonic maps from X to a hyperbolic surface of the same topological type. In this case, a holomorphic quadratic differential is obtained as the Hopf differential of the harmonic map. In the case that X is a closed surface, this defines a homeomorphism between the Teichmüller space of X and the space of holomorphic quadratic differentials on X. This was work of M. Wolf, and independently N. Hitchin, in the mid-1980s. I shall discuss the analogue of this theorem in the case X has punctures, with the assumption that the orders of the poles are all greater than two.

#### Lecture 2

Speaker: Vamsi Pritham Pingali

Title: Interpolation, or the lack thereof, from affine hypersurfaces; a vector bundle version of the Monge–Ampère equation

Abstract: There are two themes of my research funded by the Infosys Foundation. I shall present a typical representative of each theme.

1) PDE arising from Differential Geometry and Physics: The Monge–Ampère equation is a well-studied PDE in complex geometry and its solvability has ramifications in various sub-areas. Inspired by its success, in a preprint, I introduced a vector bundle version of it, and proved a Kobayashi–Hitchin correspondence (essentially, the PDE has a solution if and only if some condition from algebraic geometry is met) in a special case, namely, for some equivariant rank-2 vortex bundles.
2) Analytic studies in Algebraic Geometry: A natural question arising from applied mathematics is, "When can one extend finite-energy analytic functions from subsets of $\mathbb{C}^n$ to all of space whilst preserving the finite-energy condition?" In joint work with D. Varolin, we discuss examples and counterexamples of such subsets arising as zeroes of polynomials.

#### Lecture 3

Speaker: Apoorva Khare

Title: Polymath-14 - Groups with norms; Distance matrices and Zariski density

Abstract: I shall present details of two disparate projects that received funding from the Infosys Foundation. We first discuss the Polymath-14 project, which arose out of a discussion literally made possible by Infosys funding! In this project, we show that a group is abelian and torsion-free if and only if it admits a "norm", or equivalently a homogeneous length function. This question was motivated by probability, connects algebra, geometry, and analysis, was solved in five days on a blog, and used a computer. (Joint work as D.H.J. Polymath, with Tobias Fritz, Siddhartha Gadgil, Pace Nielsen, Lior Silberman, and Terence Tao.)

Next, I discuss recent joint work with Projesh Nath Choudhury, in which we study distance matrices of trees. We propose a model that subsumes all previous variants to date (starting with Graham, Pollak, and Lovász). In this model, we compute the determinant, cofactor-sum, and inverse of the distance matrix (and its minors), subsuming prior results and answering an open question of Bapat et al. The proofs use Zariski density, as our results hold over all unital commutative rings.

Contact: +91 (80) 2293 2711, +91 (80) 2293 2265; E-mail: chair.math[at]iisc[dot]ac[dot]in
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8348910808563232, "perplexity": 1762.588492131131}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039568689.89/warc/CC-MAIN-20210423070953-20210423100953-00471.warc.gz"}
https://cstheory.stackexchange.com/questions/3716/is-there-any-research-on-the-notion-of-weak-isolation
Is there any research on the notion of weak isolation?

(First of all, sorry for the long article, which may make you want to skip through; but the background and motivation are important to this question, and without them the main problem would be nonsense. Forgive me if this makes you sleep. I owe you a cup of coffee.)

Background

The isolation lemma, one of the best tools in complexity theory, invented by K. Mulmuley, U. Vazirani and V. Vazirani in "Matching is as easy as matrix inversion", has been used to prove many amazing results: an RNC algorithm for matching in the above paper; the Valiant–Vazirani theorem, which is a randomized reduction from NP problems to USAT; NL/poly = UL/poly; and many others. Surveys and introductory blog posts are also all over the web.

At the center of the isolation lemma, we use randomization to assign weights to the base set $U$, such that for a family $\mathcal{F}$ of subsets of $U$, there is a unique minimum-weight subset in $\mathcal{F}$ with high probability. Formally,

Lemma. Let $U$ be a set of size $n$, and $\mathcal{F}$ be a non-empty family of subsets of $U$. Assign integer weights in $[2n]$ to the elements of $U$ uniformly at random; then there exists a unique minimum-weight subset in $\mathcal{F}$ with probability at least $1/2$.

A stronger form, which covers a collection $\mathcal{F}_1, \ldots, \mathcal{F}_m$ and is used in practical situations, can be found in the survey. There are also some variants that reduce the use of random bits, or derandomize the lemma to get a deterministic weight-assigning algorithm when the size of $\mathcal{F}$ is not too large.

Problem

After a search of the literature, it seems that all the research on this topic has focused on isolating a unique subset of $U$. I want to know whether there is any obvious reason that we don't have a weak isolation lemma, in the sense that the number of minimum-weight subsets is bounded by some particular bound, while the requirements of the weak lemma are less demanding than those of the original one. For example, do we have a lemma that only isolates $O(\log n)$ minimum-weight subsets? How about linearly or polynomially many? Since the size of $\mathcal{F}$ is at most $2^n$, when the bound is set to $2^n$ the result becomes trivial, since no isolation is needed.

Problem. Is there any research on the notion of weak isolation, which only limits the number of minimum-weight subsets instead of requiring a unique one? If so, are there any references? If not, what is the most obvious obstacle to such a result?

I have been trying to prove such a lemma with several approaches, but I always end up using exactly the same techniques (and thus the same requirements), which is clearly no better than the original lemma (since with the same requirements one can indeed isolate a unique subset). I have had no success in relaxing any of the conditions we need, even in the case where we only need a loose bound on the number of minimum-weight subsets, say polynomial.

(Please read on if you need motivation for the problem! The part below contains some technical details on the usage of the lemma, and some applications to log-space computations.)

Motivation

I am working on problems in log-space computations, precisely the relations between the classes $\mathsf{NL}$, $\mathsf{UL}$, and many other classes below and in between. Consider the above lemma, which requires $O(n \log n)$ random bits if we assign weights to $U$ accordingly. A random-bit-saving version of the lemma is presented in this paper by Chari et al., which states:
Lemma. Let $U$ be a set of size $n$, and $\mathcal{F}$ be a non-empty family of subsets of $U$. There is a way of assigning weights in $[n^7]$ to $U$ with $O(\log|\mathcal{F}|+\log n)$ random bits, such that there exists a unique minimum-weight subset in $\mathcal{F}$ with probability at least $1/4$.

That gives us a $\mathsf{USPACE}[O(\log|\mathcal{F}|+\log n)]$ algorithm for reachability. Of course this is a weak bound, since by Savitch's theorem we have $\mathsf{NL} \subseteq \mathsf{L}^2 \subseteq \mathsf{UL}^2$. But what if we consider the reachability problem with the restriction that there are at most $f(n)$ paths from the source to any node in the graph? This defines the problem $\mathtt{Reach}[f(n)]$, and $\mathtt{Reach}[2^n]$ is the ordinary reachability problem. By taking $f(n)$ to be a polynomial and applying the above lemma together with a $\mathsf{UL}$-algorithm for reachability in graphs with unique minimum paths (see the paper for more details), we can solve $\mathtt{Reach}[n^{O(1)}]$ in $\mathsf{UL}$. (The result occurs in this recent paper. In fact they use a stronger version of the lemma, which isolates every path in the graph in a constructive way.)

If a weaker isolation lemma is known to have fewer restrictions on either the size of $\mathcal{F}$ or the number of random bits being used, we can obtain better bounds on the problem $\mathtt{Reach}[f(n)]$, with the help of some modifications for solving reachability in graphs with few minimum paths. Hopefully the lemma can be made weak enough to take $f(n) = 2^n$, and we can then obtain better bounds on the important class $\mathsf{NL}$.

Any comments on the proof techniques or obstacles will give insight into the question, and I would like to know whether this idea has even a tiny chance of success, or whether it is a completely impossible concept.

• This is an interesting variant. If you limit the number to $\operatorname{poly}(n)$, then you would likely be placing NL in FewL (non-deterministic log-space with at most a polynomial number of accepting paths). – Derrick Stolee Dec 10 '10 at 18:58
• @Derrick: That is pretty much the motivation of this question. :) But that will need some additional criteria, say the number of random bits being used should be $O(\log n)$, and we cannot pose any constraint on the size of $\mathcal{F}$. Still, some weaker results may be obtained if we have further insights into this variant. Any ideas? – Hsien-Chih Chang 張顯之 Dec 10 '10 at 19:13
• Would you please explain why this "weak isolation lemma" is useful? – Dai Le Dec 11 '10 at 6:08
• @Dai Le: Sure; since there are some direct applications of the weak lemma, and maybe the current version of the article is not well-written, I'll revise the entire question to make it both more attractive and self-contained. – Hsien-Chih Chang 張顯之 Dec 11 '10 at 6:20
• That would be great! – Dai Le Dec 11 '10 at 6:26
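The isolation lemma is easy to probe experimentally. Below is a minimal Monte Carlo sketch (an editorial illustration, not from the question; the function names and the choice of family are invented) that draws uniform weights in $[2n]$ and counts how often the minimum-weight set in a family is unique; by the lemma, the empirical frequency should be at least $1/2$ for any family.

```python
import random
from itertools import combinations

def min_weight_sets(family, weights):
    """Return the members of `family` attaining the minimum total weight."""
    totals = [(sum(weights[e] for e in S), S) for S in family]
    best = min(t for t, _ in totals)
    return [S for t, S in totals if t == best]

def isolation_frequency(n, family, trials=10000):
    """Estimate Pr[unique minimum-weight set] under uniform weights in [1, 2n]."""
    unique = 0
    for _ in range(trials):
        weights = {e: random.randint(1, 2 * n) for e in range(n)}
        if len(min_weight_sets(family, weights)) == 1:
            unique += 1
    return unique / trials

# Example family: all 3-element subsets of a 6-element base set.
n = 6
family = [frozenset(c) for c in combinations(range(n), 3)]
print(isolation_frequency(n, family))  # by the lemma, at least 0.5
```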
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9270412921905518, "perplexity": 253.01687290462735}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313617.6/warc/CC-MAIN-20190818042813-20190818064813-00322.warc.gz"}
http://www.math.gatech.edu/seminars-colloquia/series/algebra-seminar/melody-chan-20140924
## Tropical K_4 curves

Series: Algebra Seminar
Wednesday, September 24, 2014 - 15:05, 1 hour (actually 50 minutes)
Location: Skiles 006
Speaker: Melody Chan, Harvard University

This is joint work with Pakwut Jiradilok. Let X be a smooth, proper curve of genus 3 over a complete and algebraically closed nonarchimedean field. We say X is a K_4-curve if the nonarchimedean skeleton G of X is a metric K_4, i.e. a complete graph on 4 vertices. We prove that X is a K_4-curve if and only if X has an embedding in P^2 whose tropicalization has a strong deformation retract to a metric K_4. We then use such an embedding to show that the 28 odd theta characteristics of X are sent to the seven odd theta characteristics of G in seven groups of four. We give an example of the 28 bitangents of a honeycomb plane quartic, computed over the field C{{t}}, which shows that in general the 4 bitangents in a given group need not have the same tropicalizations.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8516597747802734, "perplexity": 796.7831598213767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823255.12/warc/CC-MAIN-20171019065335-20171019085335-00439.warc.gz"}
http://mathoverflow.net/questions/61093/fontaine-mazur-conjecture-for-higher-local-fields
# Fontaine-Mazur conjecture for higher local fields

Hello,

For a $p$-adic local field $K$, the Fontaine-Mazur conjecture characterises the $p$-adic representations of the Galois group of $K$ which arise from geometry. Is there a conjectural generalization to higher local fields? Or to the $p$-adic representations of the étale fundamental group of a variety defined over $K$?

Thanks

- The Fontaine-Mazur conjecture is a global statement---well, I always thought it was. Can you be more precise about what you mean by the conjecture? – Kevin Buzzard Apr 8 '11 at 19:54
- Not three weeks ago, I heard Fontaine say that he didn't know how to formulate the local Fontaine-Mazur conjecture (that is, he didn't know how to characterize local representations coming from varieties over local fields). – Olivier Apr 9 '11 at 5:53
- @Olivier: right! That sounds like a really hard question. The problem is that a local representation coming from geometry has to satisfy all sorts of "arithmeticity" statements---for example, if you have a variety with good reduction, then the eigenvalues of Frobenius on the cohomology will all be algebraic numbers whose norms under embeddings into the complexes all have to satisfy various size constraints because of the Weil conjectures. In the bad reduction case there is still a similar, but more complicated, story. The miracle about F-M is that in the global setting... – Kevin Buzzard Apr 9 '11 at 6:30
- ...these constraints are conjecturally forced upon you by a natural-looking local assumption ("de Rham") plus a natural global assumption ("unramified outside a finite set of primes"). [The reason these conjectures don't appear earlier in the folklore is that it took until the 80s for complicated notions like de Rham to be formulated.] – Kevin Buzzard Apr 9 '11 at 6:34
- Many thanks for all the very informative comments. @Buzzard: You are right that the Fontaine-Mazur conjecture is global, so the question as stated does not make any sense. @Olivier: thanks for generously answering the question that should have been asked! Thanks! – SGP Apr 9 '11 at 11:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8287761807441711, "perplexity": 680.0082784621618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207932182.13/warc/CC-MAIN-20150521113212-00018-ip-10-180-206-219.ec2.internal.warc.gz"}
https://artofproblemsolving.com/wiki/index.php/Interval
Interval

Definition

An interval is a continuous range of values, such as all of the real numbers between two given numbers $a$ and $b$ inclusive. The most common uses of an interval are to specify the domain and range of a function.

Symbols

If an interval has either $($ or $)$ in it, the values at the end are NOT included in the interval. If an interval has either $[$ or $]$ in it, the values at the end ARE included. If both endpoints are not included, then the interval is open. If both endpoints are included, then the interval is closed. Note: the symbols $($ and $)$ are used with $-\infty$ and $\infty$ by convention.

Examples

• $(a, b)$ means all real numbers between $a$ and $b$, but not including $a$ or $b$.
• $(a, b]$ means all real numbers between $a$ and $b$, including $b$ but not including $a$.
• $[a, \infty)$ means all real numbers greater than or equal to $a$.

Use the LaTeX command \infty for the infinity symbol and \cup for the union ("or") symbol.
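Since the page mentions the \infty and \cup commands, here is a small illustrative LaTeX snippet (an editorial example, not from the original page) typesetting a domain written in interval notation:

```latex
% The domain of f(x) = 1/x in interval notation:
% all reals except 0, i.e. (-infinity, 0) OR (0, infinity)
\[
  \operatorname{dom}(f) = (-\infty, 0) \cup (0, \infty)
\]
```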
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8493833541870117, "perplexity": 434.16874656006945}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949642.35/warc/CC-MAIN-20230331113819-20230331143819-00482.warc.gz"}
http://www.planetmath.org/uniquefactorizationandidealsinringofintegers
# unique factorization and ideals in ring of integers

Theorem. Let $O$ be the maximal order, i.e. the ring of integers of an algebraic number field. Then $O$ is a unique factorization domain if and only if $O$ is a principal ideal domain.

Proof. $1^{\underline{o}}$. Suppose that $O$ is a PID. We first state that any prime number $\pi$ of $O$ generates a prime ideal $(\pi)$ of $O$. For if $(\pi)=\mathfrak{ab}$, then we have the principal ideals $\mathfrak{a}=(\alpha)$ and $\mathfrak{b}=(\beta)$. It follows that $(\pi)=(\alpha\beta)$, i.e. $\pi=\lambda\alpha\beta$ with some $\lambda\in O$, and since $\pi$ is prime, one of $\alpha$ and $\beta$ must be a unit of $O$. Thus one of $\mathfrak{a}$ and $\mathfrak{b}$ is the unit ideal $O$, and accordingly $(\pi)$ is a maximal ideal of $O$, hence also a prime ideal.

Let a non-zero element $\gamma$ of $O$ be split into prime factors $\pi_{i}$, $\varrho_{j}$ in two ways: $\gamma=\pi_{1}\cdots\pi_{r}=\varrho_{1}\cdots\varrho_{s}$. Then also the principal ideal $(\gamma)$ splits into principal prime ideals in two ways: $(\gamma)=(\pi_{1})\cdots(\pi_{r})=(\varrho_{1})\cdots(\varrho_{s})$. Since the prime factorization of ideals is unique, the ideals $(\pi_{1}),\,\ldots,\,(\pi_{r})$ must be, up to the order of the factors, identical with $(\varrho_{1}),\,\ldots,\,(\varrho_{s})$ (and $r=s$). Let $(\pi_{1})=(\varrho_{j_{1}})$. Then $\pi_{1}$ and $\varrho_{j_{1}}$ are associates of each other; the same may be said of all pairs $(\pi_{i},\,\varrho_{j_{i}})$. So we have seen that the factorization in $O$ is unique.

$2^{\underline{o}}$. Suppose then that $O$ is a UFD. Consider any prime ideal $\mathfrak{p}$ of $O$. Let $\alpha$ be a non-zero element of $\mathfrak{p}$ and let $\alpha$ have the prime factorization $\pi_{1}\cdots\pi_{n}$. Because $\mathfrak{p}$ is a prime ideal and divides the ideal product $(\pi_{1})\cdots(\pi_{n})$, $\mathfrak{p}$ must divide one of the principal ideals, say $(\pi_{i})=(\pi)$. This means that $\pi\in\mathfrak{p}$. We write $(\pi)=\mathfrak{pa}$, whence $\pi\in\mathfrak{p}$ and $\pi\in\mathfrak{a}$. Since $O$ is a Dedekind domain, every ideal of $O$ can be generated by two elements, one of which may be chosen freely (see the two-generator property). Therefore we can write
$\mathfrak{p}=(\pi,\,\gamma),\,\,\,\mathfrak{a}=(\pi,\,\delta).$
We multiply these, getting $\mathfrak{pa}=(\pi^{2},\,\pi\gamma,\,\pi\delta,\,\gamma\delta)$, and so $\gamma\delta\in\mathfrak{pa}=(\pi)$. Thus $\gamma\delta=\lambda\pi$ with some $\lambda\in O$. By the unique factorization, we have $\pi\,|\,\gamma$ or $\pi\,|\,\delta$.

The latter alternative means that $\delta=\delta_{1}\pi$ (with $\delta_{1}\in O$), whence $\mathfrak{a}=(\pi,\,\delta_{1}\pi)=(\pi)(1,\,\delta_{1})=(\pi)(1)=(\pi)$; thus we would have $\mathfrak{pa}=(\pi)=\mathfrak{p}(\pi)$, which would imply the absurdity $\mathfrak{p}=(1)$. But the former alternative means that $\gamma=\gamma_{1}\pi$ (with $\gamma_{1}\in O$), which shows that
$\mathfrak{p}=(\pi,\,\gamma_{1}\pi)=(\pi)(1,\,\gamma_{1})=(\pi)(1)=(\pi).$
In other words, an arbitrary prime ideal $\mathfrak{p}$ of $O$ is principal. Since every non-zero ideal of the Dedekind domain $O$ is a product of prime ideals, it follows that all ideals of $O$ are principal. Q.E.D.
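As a standard illustration of the theorem's scope (an editorial addition, not part of the PlanetMath entry): the ring of integers of $\mathbb{Q}(\sqrt{-5})$ is $\mathbb{Z}[\sqrt{-5}]$, which is neither a UFD nor a PID, as the two factorizations

```latex
\[
  6 \;=\; 2\cdot 3 \;=\; \bigl(1+\sqrt{-5}\,\bigr)\bigl(1-\sqrt{-5}\,\bigr)
\]
```

show: all four factors are irreducible and no two of them are associates, so factorization is not unique; correspondingly, the ideal $(2,\,1+\sqrt{-5})$ is prime but not principal.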
Title: unique factorization and ideals in ring of integers
Canonical name: UniqueFactorizationAndIdealsInRingOfIntegers
Date of creation: 2015-05-06 15:32:53
Last modified on: 2015-05-06 15:32:53
Owner: pahio (2872)
Last modified by: pahio (2872)
Numerical id: 17
Author: pahio (2872)
Entry type: Theorem
Classification: msc 13B22
Classification: msc 11R27
Synonym: equivalence of UFD and PID
Related topic: ProductOfFinitelyGeneratedIdeals
Related topic: PIDsAreUFDs
Related topic: NumberFieldThatIsNotNormEuclidean
Related topic: DivisorTheory
Related topic: FundamentalTheoremOfIdealTheory
Related topic: EquivalentDefinitionsForUFD
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 74, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9684226512908936, "perplexity": 1298.8580244056773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865145.53/warc/CC-MAIN-20180623171526-20180623191526-00163.warc.gz"}
http://www.maths.manchester.ac.uk/raag/index.php?preprint=0153
Real Algebraic and Analytic Geometry

153. Benoit Bertrand: Asymptotically maximal families of hypersurfaces in toric varieties.

Submission: 2005, January 25.

Abstract: A real algebraic variety is maximal (with respect to the Smith–Thom inequality) if the sum of the Betti numbers (with $\mathbb{Z}_2$ coefficients) of the real part of the variety is equal to the sum of the Betti numbers of its complex part. We prove that there exist polytopes that are not Newton polytopes of any maximal hypersurface in the corresponding toric variety. On the other hand, we show that for any polytope $\Delta$ there are families of hypersurfaces with the Newton polytopes $(\lambda\Delta)_{\lambda \in \mathbb{N}}$ that are asymptotically maximal when $\lambda$ tends to infinity. We also show that these results generalize to complete intersections.

Mathematics Subject Classification (2000): 14P25.

Keywords and Phrases: Viro method, combinatorial patchworking, toric varieties.

Full text, 18p.: dvi 105k, ps.gz 205k, pdf 257k.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8219490051269531, "perplexity": 702.188029286614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746639.67/warc/CC-MAIN-20181120191321-20181120212325-00001.warc.gz"}
http://www.physicsforums.com/showthread.php?t=719243
# FO Differential equations and account balance

P: 126
1. The problem statement, all variables and given/known data
a. Assume that y0 dollars are deposited into an account paying r percent compounded continuously. If withdrawals are at an annual rate of 200t dollars (assume these are continuous), find the amount in the account after T years.
b. Consider the special case if r = 10% and y0 = $20000.
c. When will the account be depleted if y0 = $5000? Give your answer to the nearest month.

2. Relevant equations

3. The attempt at a solution
I've realized that the rate at which the account balance varies is the following:
dy/dt = ry - 200 (where r is the percentage rate, 0.10, and y the amount of money present)
However, when I try to solve the differential equation, I keep getting that the amount of money present is the following:
y(T) = 200/r + (y0 - 200/r)e^(rT)
This would mean that the function never decreases in the case of $20000, and likewise for $5000 (meaning it would never be depleted). However, I'm pretty sure that I'm wrong on this one. Could anyone please help me with this?

My procedure:
1/(ry - 200) dy = 1 dt (integrate both parts)
(1/r) ln(ry - 200) = t + M1
ln(ry - 200) = rt + M2
M3 e^(rt) = ry - 200
y = M4 e^(rt) + 200/r
Then, if y(0) = y0:
y0 - 200/r = M4
We then plug this result into our equation:
y = 200/r + (y0 - 200/r)e^(rT)
This corresponds to the equation I've been getting. Is my procedure done right?

HW Helper, Thanks, P: 5,187
Quote by fogvajarash: [the full first post]
Your withdrawal rates are incorrect; you said that the annual withdrawal rate is 200t, so at t = 1 it is at rate 200, at t = 2 it is at rate 400, etc. In other words, the withdrawal rate varies with t, so your DE is not correct.
In the corrected problem, the value of y0 determines whether or not the account will ever be depleted, and when that will happen.

P: 126
Quote by Ray Vickson: [the reply quoted above]
So this means I can't solve the problem until I have done first-order linear DEs?

Mentor, P: 5,370
Quote by fogvajarash: So this means I can't solve the problem until I have done first-order linear DEs?
Have you learned yet about using integrating factors for first order linear ODEs with constant coefficients?

Chet
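As a supplement (an editorial sketch, not part of the thread), here is the worked solution of the corrected equation along the lines Chet suggests, using an integrating factor on $dy/dt = ry - 200t$:

```latex
\[
  \frac{dy}{dt} - r y = -200t
  \quad\Longrightarrow\quad
  \frac{d}{dt}\left(y\,e^{-rt}\right) = -200\,t\,e^{-rt}.
\]
Integrating by parts and using $y(0) = y_0$ gives
\[
  y(t) = \frac{200t}{r} + \frac{200}{r^2}
         + \left(y_0 - \frac{200}{r^2}\right)e^{rt}.
\]
```

With $r = 0.1$ the coefficient of the exponential is $y_0 - 20000$: for $y_0 = 20000$ dollars the balance grows linearly as $y(t) = 2000t + 20000$, while for $y_0 = 5000$ dollars the negative exponential term eventually dominates, and the depletion time for part (c) can then be found numerically to the nearest month.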
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8196628093719482, "perplexity": 817.7720342038522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657118950.27/warc/CC-MAIN-20140914011158-00319-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://mukemmellokma.com/ls66aaf/1aff20-formula-of-diagonal-of-square
Diagonal of a Square Formula

The diagonal of a square can be defined as the line segment that connects two opposite vertices of the square. More generally, when two non-adjacent vertices within a polygon are joined by a single line segment, that segment is called a diagonal. The diagonal formula in mathematics is used to calculate the diagonals of a polygon, including rectangles, squares, and similar shapes.

Diagonal of a square: $d = \sqrt{2}\,x$, where $x$ is the length of any side of the square.

The diagonal cuts the square into two equal triangles; in fact, each diagonal divides the square into two congruent isosceles right triangles with two angles of 45°, as the diagonal bisects the square's right angles. The diagonal is the hypotenuse of each triangle, so its length follows from the Pythagorean theorem with legs $a = b = x$: the hypotenuse is $\sqrt{x^2+x^2}=\sqrt{2}\,x$. This also tells us how to find the diagonal of a square when given the area: the side is the square root of the area, and the diagonal is $\sqrt{2}$ times the side.

Diagonal of a rectangle: a rectangle with length $l$ and width $b$ (sometimes base and height are used instead of length and width) has diagonal $d=\sqrt{l^2+b^2}$. To see this, divide the rectangle into two congruent right triangles, each with legs $l$ and $b$ and the diagonal as hypotenuse. The area of a rectangle is the product of its length and width, $A = l \times w$; if the diagonal $d$ and one side $s$ are known, the other side is $\sqrt{d^2-s^2}$, from which the area can be computed.

Diagonals of a cube: each face diagonal of a cube with edge length $x$ has length $\sqrt{2}\,x$. Apart from the diagonals on the faces, there are 4 other diagonals (main diagonals or body diagonals) that pass through the center of the cube.

Counting diagonals of a polygon: a simple polygon is any two-dimensional (flat) shape made only with straight sides that close in a space, with sides that do not cross each other (if they do, it is a complex polygon). An $n$-sided polygon has $n(n-3)/2$ diagonals. For example, if you know a polygon has 90 diagonals, solving $n(n-3)/2 = 90$ gives $n = 15$ or $n = -12$; because a polygon cannot have a negative number of sides, $n$ must be 15, so you have a 15-sided polygon (a pentadecagon, in case you're curious).
Two further formulas for the diagonal of a square: in terms of the diameter $D_c$ of the circumscribed circle, $d = D_c$; and in terms of the radius $r$ of the inscribed circle (the side being $2r$), $d = 2\sqrt{2}\,r$.
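A small code sketch of these formulas (an editorial illustration; the function names are invented):

```python
import math

def square_diagonal(side: float) -> float:
    """Diagonal of a square from its side length: d = side * sqrt(2)."""
    return side * math.sqrt(2)

def rectangle_diagonal(length: float, width: float) -> float:
    """Diagonal of a rectangle via the Pythagorean theorem."""
    return math.hypot(length, width)

def polygon_diagonal_count(n: int) -> int:
    """Number of diagonals of a simple n-gon: n(n-3)/2."""
    return n * (n - 3) // 2

print(square_diagonal(1))          # 1.4142...
print(rectangle_diagonal(3, 4))    # 5.0
print(polygon_diagonal_count(15))  # 90
```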
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9207651615142822, "perplexity": 566.7543206293783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500041.18/warc/CC-MAIN-20230202200542-20230202230542-00660.warc.gz"}
http://twistedone151.wordpress.com/2010/01/25/monday-math-105/
Monday Math 105

The Laplace Transform Part 6: Frequency shifting, time shifting, and scaling properties

Expanding upon last time's work on the Laplace transform of the exponential, we can quickly prove the "frequency shifting" property of the Laplace transform. Let the real function f(t), t≥0, have Laplace transform $F(s)=\mathcal{L}\left[f(t)\right](s)=\int_0^{\infty}e^{-st}f(t)\,dt$. Then, for a real number a, we see that
$\begin{eqnarray}\mathcal{L}\left[e^{at}f(t)\right](s)&=&\int_0^{\infty}e^{-st}e^{at}f(t)\,dt\\&=&\int_0^{\infty}e^{-(s-a)t}f(t)\,dt\\\mathcal{L}\left[e^{at}f(t)\right](s)&=&F(s-a)\end{eqnarray}$,
and thus multiplication by an exponential in the time domain corresponds to a translation in the s domain. This can be combined with our results for sine and cosine to get
$\mathcal{L}\left[e^{at}\cos(\omega{t})\right](s)=\frac{s-a}{(s-a)^2+\omega^2}$
$\mathcal{L}\left[e^{at}\sin(\omega{t})\right](s)=\frac{\omega}{(s-a)^2+\omega^2}$,
and with our results for powers of t to get
$\mathcal{L}\left[t^re^{at}\right](s)=\frac{\operatorname{\Gamma}(r+1)}{\left(s-a\right)^{r+1}}$.
Note that for all of these, the shift in the "frequency" domain entails an equal shift in the region of convergence; if $\mathcal{L}\left[f(t)\right](s)=F(s)$ for $\Re(s)\gt{b}$, then $\mathcal{L}\left[e^{at}f(t)\right](s)=F(s-a)$ for $\Re(s)\gt{b+a}$.

Next, let's consider what happens when we scale our time variable. Again, we have f(t), t≥0, with Laplace transform $F(s)=\mathcal{L}\left[f(t)\right](s)=\int_0^{\infty}e^{-st}f(t)\,dt$. Now, let us find the Laplace transform of f(at), with a>0:
$\mathcal{L}\left[f(at)\right](s)=\int_0^{\infty}e^{-st}f(at)\,dt$.
Making the u-substitution u=at, we get
$\begin{eqnarray}\mathcal{L}\left[f(at)\right](s)&=&\int_0^{\infty}e^{-s\frac{u}{a}}f(u)\,\frac{du}{a}\\&=&\frac{1}{a}\int_0^{\infty}e^{-\frac{s}{a}u}f(u)\,du\\\mathcal{L}\left[f(at)\right](s)&=&\frac{1}{a}F\left(\frac{s}{a}\right)\end{eqnarray}$.
As with our other results, the region of convergence is transformed along with the scaling.

We found what happens in the time domain when the transform is shifted in the "frequency" domain, but what happens when we shift in the time domain? Recall that our function f(t) is defined for t≥0; when we shift our function forward, we must set to zero those portions that were at t<0 before the shift, so that our translated function is f(t-τ)H(t-τ), where H(t) is the Heaviside step function and τ>0 is our time delay. Integrating as we did for the transform of the Heaviside step function,
$\begin{eqnarray}\mathcal{L}\left[f\left(t-\tau\right)H\left(t-\tau\right)\right](s)&=&\int_0^{\infty}e^{-st}f\left(t-\tau\right)H\left(t-\tau\right)\,dt\\&=&\int_{\tau}^{\infty}e^{-st}f\left(t-\tau\right)\,dt\end{eqnarray}$.
Now, making the u-substitution u=t-τ, we find
$\begin{eqnarray}\mathcal{L}\left[f\left(t-\tau\right)H\left(t-\tau\right)\right](s)&=&\int_{\tau}^{\infty}e^{-st}f\left(t-\tau\right)\,dt\\&=&\int_{0}^{\infty}e^{-s(u+\tau)}f(u)\,du\\&=&e^{-s\tau}\int_{0}^{\infty}e^{-su}f(u)\,du\\\mathcal{L}\left[f\left(t-\tau\right)H\left(t-\tau\right)\right](s)&=&e^{-s\tau}F(s)\end{eqnarray}$.
Thus, a time shift corresponds to multiplication by an exponential in the "frequency" domain.

5 Responses to "Monday Math 105": trackbacks from later posts in the series at Twisted One 151's Weblog (Monday Math 108, 111, 114, 115, and 117).
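These rules are easy to sanity-check symbolically. A minimal sketch using SymPy's laplace_transform (an editorial addition; it assumes a SymPy version that supports the noconds=True keyword and evaluates these transforms in closed form):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, w = sp.symbols('a omega', positive=True)

# Frequency shift: L[e^{at} sin(w t)](s) = w / ((s - a)^2 + w^2)
F = sp.laplace_transform(sp.exp(a*t) * sp.sin(w*t), t, s, noconds=True)
print(sp.simplify(F - w / ((s - a)**2 + w**2)))      # 0

# Scaling: L[f(at)](s) = F(s/a)/a; with f(t) = sin(t), F(s) = 1/(s^2 + 1)
G = sp.laplace_transform(sp.sin(a*t), t, s, noconds=True)
print(sp.simplify(G - (1/a) / ((s/a)**2 + 1)))       # 0
```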
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 14, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9645851254463196, "perplexity": 1083.7290271196541}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268533.31/warc/CC-MAIN-20140728011748-00356-ip-10-146-231-18.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-3-introduction-to-graphing-3-1-reading-graphs-plotting-points-and-scaling-graphs-3-1-exercise-set-page-161/39
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)

On the $x$-axis.

Since the $y$-value of the given point, $\left( -\dfrac{5}{2},0 \right)$, is $0$, the point is located on the $x$-axis.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9633266925811768, "perplexity": 3267.552726526434}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517485.8/warc/CC-MAIN-20220517130706-20220517160706-00767.warc.gz"}
http://math.stackexchange.com/questions/305989/sum-of-two-random-variables-is-random-variable
# Sum of two random variables is random variable

How to show: If $X$ and $Y$ are random variables on a probability space $(\Omega, F, \mathbb P)$, then so is $X+Y$.

The definition of a random variable is a function $X: \Omega \to \mathbb R$ with the property that $\{\omega\in\Omega: X(\omega)\leq x\}\in F$ for each $x\in\mathbb R$.

Furthermore, how should one approach $X+Y$ and $\min\{X,Y\}$?

- This was asked (and answered) pretty recently on the site. – Did Feb 17 '13 at 7:34

There are a number of ways to do it. A standard trick for proving things like this is noticing that $\{X + Y < x\} = \displaystyle \bigcup_{r \in \mathbb{Q}} \{X < r\} \cap \{Y < x - r\}$, and that showing that this is in $F$ is enough to show that $X + Y$ is measurable. Then use the properties of $X$, $Y$, and $\sigma$-algebras to deduce that this set is measurable.

Obviously, whenever $X(\omega) < r$ and $Y(\omega) < x-r$, $X(\omega)+Y(\omega) < x$. On the other hand, if $X(\omega)+Y(\omega) = z < x$, take some rational $r$ with $X(\omega) < r < X(\omega) + x - z$, and you have $Y(\omega) = z - X(\omega) < x - r$. – Robert Israel Feb 17 '13 at 6:28

In fact, a random variable is a measurable function from $\Omega$ to $\mathbb{R}$.
$$\{X+Y>x\}=\{X>x-Y\}=\bigcup_{q\in \mathbb{Q}}\{X>q>x-Y\}=\bigcup_{q\in \mathbb{Q}}(\{X>q\ \}\bigcap\{Y>x-q\})=\bigcup_{q\in \mathbb{Q}}(\{X\le q\ \}^c\bigcap\{Y\le x-q\}^c)$$
Since $\{X\le q\ \}^c\in \mathcal{F}$ and $\{Y\le x-q\}^c\in \mathcal{F}$, $\mathbb{Q}$ is countable, and $\mathcal{F}$ is a $\sigma$-field, we obtain $\{X+Y>x\}=\{X+Y\le x\}^c\in \mathcal{F}$. Then
$$\{\omega:X(\omega)+Y(\omega)\le x\}\in \mathcal{F}$$
$$\{\omega:\min\{X(\omega),Y(\omega)\}\le x\}=\{\omega:X(\omega)\le x\}\bigcup\{\omega:Y(\omega)\le x\}\in \mathcal{F}$$
By definition, $X+Y$ and $\min\{X,Y\}$ are both random variables.
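A companion remark (an editorial addition, not from the thread): the dual identities handle max and countable suprema in exactly the same way,

```latex
\[
  \{\max\{X,Y\}\le x\}=\{X\le x\}\cap\{Y\le x\},
  \qquad
  \Bigl\{\sup_{n} X_n \le x\Bigr\}=\bigcap_{n}\{X_n\le x\},
\]
```

and countable intersections of sets in $\mathcal{F}$ remain in $\mathcal{F}$.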
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9642859697341919, "perplexity": 78.05777808886829}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701146241.46/warc/CC-MAIN-20160205193906-00296-ip-10-236-182-209.ec2.internal.warc.gz"}
https://monolix.lixoft.com/graphics/individual-fits/
# Individual fits

### Purpose

The figure displays the observed data for each subject, as well as two curves from simulations using the design and the covariates of each subject:

• the predicted profile given by the estimated population model (population fits),
• the predicted profile given by the estimated individual model (individual fits).

If the EBEs and/or the conditional distribution tasks were performed, the user can choose either the conditional means or the conditional modes, estimated by MCMC, as estimators. Otherwise, approximations of the conditional means from SAEM are used. This is a good way to assess, for each subject, the validity of the model and the actual fit proposed, as well as the inter-individual variability in the kinetics. It is possible to show the computed individual parameters on the figure. Moreover, it is also possible to display an individual predictive check: the median and a confidence interval for $y_{ij}$, estimated with a Monte Carlo procedure.

### Examples

• #### Individual fits and population fits

In the example below, the concentration for the theophylline data set is shown with simulations of a one-compartment model with first-order absorption and linear elimination. For each subject, the data are displayed as blue points along with the individual fit and the population fit (the prediction using the estimated individual and population parameters, respectively).

• #### Individual parameters

Information on individual parameters can be used in two ways, as shown below. By clicking on Information (marked in green on the figure) in the General panel, individual parameter values can be displayed on each individual plot. Moreover, the plots can be sorted according to the values of a given parameter, in ascending or descending order (Sorting panel, marked in orange). By default, the individual plots are sorted by subject id, in the same order as in the data set.

• #### Individual predictive check

Individual predictive checks can be added to the plots: for each individual, a prediction interval is computed based on multiple simulations with the population parameters and the design structure of this individual. The median line of the interval is also drawn. The interval allows one to check whether the observed data are compatible with the population prediction, taking into account the inter-individual variability. The example below shows that the first subject in the theophylline data set exhibits too much variability relative to the rest of the population to be correctly described by the population model.

• #### Dosing times

Dosing times can also be overlaid, which is useful for visualizing the effect of doses on the prediction. As an example, the following figure shows the observations of an individual from the tobramycin data set along with the corresponding individual fit and multiple dosing times.

• #### Special zoom

User-defined constraints for the zoom are available. They allow zooming in along one axis only instead of both axes. Moreover, a link between plots can be set in order to perform a linked zoom on all individual plots at once. This is shown in the figure below with observations from the remifentanil example and individual fits from a two-compartment model. It is thus possible to focus on the same time range or observation values for all individuals. In this example it is used to zoom in on the elimination phase for all individuals, while keeping the Y axis (in log scale) unchanged for each plot.
• #### Censored data

When an observation is censored, it differs from a "classical" observation and thus has a different representation. We represent it as a bar from the censored value specified in the data set to the associated limit. If there is no limit column, the bar goes to infinity, as in the following example. In any case, the user can choose the limits of the plot.

### Settings

• Grid arrange. The user can define the number of subjects that are displayed, as well as the number of rows and the number of columns. Moreover, a slider is present to change the subjects under consideration.
• General
• Legend: hide/show the legend. The legend adapts automatically to the elements displayed on the plot. The same legend box applies to all subplots, and it is possible to drag and drop the legend to the desired place.
• Grid: hide/show the grid in the background of the plots.
• Information: hide/show the individual parameter values for each subject (conditional mode or conditional mean, depending on the "Individual estimates" choice in the settings section "Display").
• Dosing times: hide/show dosing times as vertical lines for each subject.
• Link between plots: activate the linked zoom for all subplots. The same zooming region can be applied to all individuals on the x-axis only, on the Y-axis only, or on both (option "none").
• Display
• Observed data: hide/show the observed data.
• Censored intervals [if censored data present]: hide/show the data marked as censored (BLQ), shown as a rectangle representing the censoring interval (for instance [0, LOQ]).
• Split occasions [if IOV present]: split the individual subplots by occasions in case of IOV.
• Individual fits: model prediction for each individual using the subject's design and the individual parameters. The individual parameters can be the conditional mode or the conditional mean, depending on the choice in the "Individual estimates" section.
• Population fits [if no covariates in the model]: model prediction for each individual using the subject's design and the population parameters.
• Population fits (individual covariates) [if covariates present in the model]: model prediction for each individual using the subject's design, the population parameters and the individual covariate values.
• Population fits (population covariates) [if covariates present in the model]: model prediction for each individual using the subject's design, the population parameters and the median covariate values (the median over all individuals in the data set).
• Individual estimates [if the EBEs task has run]: depending on the tasks that have been run, a choice between the conditional mode (given by the EBEs task), the conditional mean (approximation given by the population parameter estimation task) or the conditional mean (given by the conditional distribution task).
• Individual predictive check: for each individual, 500 (see "number of simulations" in the PLOTS task settings) data sets are simulated using the individual's design (dose and regressor values). The parameter values used for the simulation include the population parameter values, the individual covariate values and random effects sampled from the population distribution. The simulated data sets include residual errors. The prediction interval represents the interval containing 90% (see the "level" setting) of the simulated data points. The predicted median is the median of all simulated data points.
The individual predictive check allows one to visualize the inter-individual variability (unexplained by covariates) and to compare the population prediction to the individual observations.
• Sorting: sort the subjects by ID or by individual parameter values, in ascending or descending order.

By default, only the observed data and the individual fits are displayed.
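The Monte Carlo procedure behind the individual predictive check can be summarized in a few lines of code. The following is a minimal sketch in plain Python/NumPy, not Monolix code: the structural model `predict`, the parameter vectors and the log-normal inter-individual variability model are illustrative assumptions.

```python
import numpy as np

def individual_predictive_check(times, predict, pop_params, omega,
                                error_sd, n_sim=500, level=0.90, seed=0):
    """Pointwise prediction interval and median for one individual.

    predict(times, params) is a hypothetical structural model;
    pop_params are the population parameter values, omega the standard
    deviations of the (log-normal) random effects, and error_sd the
    residual error standard deviation.
    """
    rng = np.random.default_rng(seed)
    sims = np.empty((n_sim, len(times)))
    for k in range(n_sim):
        # sample individual parameters from the population distribution
        params = pop_params * np.exp(rng.normal(0.0, omega))
        # simulate observations: model prediction plus residual error
        sims[k] = predict(times, params) + rng.normal(0.0, error_sd, len(times))
    lo, hi = np.percentile(sims, [50 * (1 - level), 50 * (1 + level)], axis=0)
    return lo, np.median(sims, axis=0), hi
```

With `level=0.90` this yields the 5th and 95th percentiles of the simulated observations at each time point, i.e., the 90% prediction interval drawn on the plot, together with the predicted median.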
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8128460049629211, "perplexity": 1395.0710919744295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991413.30/warc/CC-MAIN-20210512224016-20210513014016-00105.warc.gz"}
# Proof-Theoretic Semantics

First published Wed Dec 5, 2012; substantive revision Thu Feb 1, 2018

Proof-theoretic semantics is an alternative to truth-condition semantics. It is based on the fundamental assumption that the central notion in terms of which meanings are assigned to certain expressions of our language, in particular to logical constants, is that of proof rather than truth. In this sense proof-theoretic semantics is semantics in terms of proof. Proof-theoretic semantics also means the semantics of proofs, i.e., the semantics of entities which describe how we arrive at certain assertions given certain assumptions. Both aspects of proof-theoretic semantics can be intertwined, i.e., the semantics of proofs is itself often given in terms of proofs.

Proof-theoretic semantics has several roots, the most specific one being Gentzen’s remarks that the introduction rules in his calculus of natural deduction define the meanings of logical constants, while the elimination rules can be obtained as a consequence of this definition (see section 2.2.1). More broadly, it belongs to what Prawitz called general proof theory (see section 1.1). Even more broadly, it is part of the tradition according to which the meaning of a term should be explained by reference to the way it is used in our language.

Within philosophy, proof-theoretic semantics has mostly figured under the heading “theory of meaning”. This terminology follows Dummett, who claimed that the theory of meaning is the basis of theoretical philosophy, a view which he attributed to Frege. The term “proof-theoretic semantics” was proposed by Schroeder-Heister (1991; used already in 1987 lectures in Stockholm) in order not to leave the term “semantics” to denotationalism alone—after all, “semantics” is the standard term for investigations dealing with the meaning of linguistic expressions. Furthermore, unlike “theory of meaning”, the term “proof-theoretic semantics” covers philosophical and technical aspects alike. In 1999, the first conference with this title took place in Tübingen; the second one followed in 2013. The first textbook with this title appeared in 2015.

This entry also includes two supplementary documents that are linked into the text: one on examples of proof-theoretic validity and one on definitional reflection and paradoxes.

## 1. Background

### 1.1 General proof theory: consequence vs. proofs

The term “general proof theory” was coined by Prawitz. In general proof theory, “proofs are studied in their own right in the hope of understanding their nature”, in contradistinction to Hilbert-style “reductive proof theory”, which is the “attempt to analyze the proofs of mathematical theories with the intention of reducing them to some more elementary part of mathematics such as finitistic or constructive mathematics” (Prawitz, 1972, p. 123). In a similar way, Kreisel (1971) asks for a re-orientation of proof theory. He wants to explain “recent work in proof theory from a neglected point of view. Proofs and their representations by formal derivations are treated as principal objects of study, not as mere tools for analyzing the consequence relation.” (Kreisel, 1971, p. 109) Whereas Kreisel focuses on the dichotomy between a theory of proofs and a theory of provability, Prawitz concentrates on the different goals proof theory may pursue. However, both stress the necessity of studying proofs as fundamental entities by means of which we acquire demonstrative (especially mathematical) knowledge. This means in particular that proofs are epistemic entities which should not be conflated with formal proofs or derivations.
They are rather what derivations denote when they are considered to be representations of arguments. (However, in the following we often use “proof” synonymously with “derivation”, leaving it to the reader to determine whether formal proofs or proofs as epistemic entities are meant.) In discussing Prawitz’s (1971) survey, Kreisel (1971, p. 111) explicitly speaks of a “mapping” between derivations and mental acts and considers it a task of proof theory to elucidate this mapping, including the investigation of the identity of proofs, a topic that Prawitz and Martin-Löf had put on the agenda. This means that in general proof theory we are not solely interested in whether B follows from A, but in the way by means of which we arrive at B starting from A. In this sense general proof theory is intensional and epistemological in character, whereas model theory, which is interested in the consequence relation and not in the way of establishing it, is extensional and metaphysical.

### 1.2 Inferentialism, intuitionism, anti-realism

Proof-theoretic semantics is inherently inferential, as it is inferential activity which manifests itself in proofs. It thus belongs to inferentialism (see Brandom, 2000), according to which inferences and the rules of inference establish the meaning of expressions, in contradistinction to denotationalism, according to which denotations are the primary sort of meaning. Inferentialism and the ‘meaning-as-use’ view of semantics form the broad philosophical framework of proof-theoretic semantics. This general philosophical and semantical perspective merged with constructive views which originated in the philosophy of mathematics, especially in mathematical intuitionism. Most forms of proof-theoretic semantics are intuitionistic in spirit, which means in particular that principles of classical logic such as the law of excluded middle or the double negation law are rejected or at least considered problematic. This is partly due to the fact that the main tool of proof-theoretic semantics, the calculus of natural deduction, is biased towards intuitionistic logic, in the sense that the straightforward formulation of its elimination rules is the intuitionistic one. There classical logic is only available by means of some rule of indirect proof, which, at least to some extent, destroys the symmetry of the reasoning principles (see section 3.5). If one adopts the standpoint of natural deduction, then intuitionistic logic is a natural logical system. The BHK (Brouwer-Heyting-Kolmogorov) interpretation of the logical signs also plays a significant role. This interpretation is not a unique approach to semantics, but comprises various ideas which are often described more informally than formally. Of particular importance is its functional view of implication, according to which a proof of A→B is a constructive function which, when applied to a proof of A, yields a proof of B. This functional perspective underlies many conceptions of proof-theoretic semantics, in particular those of Lorenzen, Prawitz and Martin-Löf (see sections 2.1.1, 2.2.2, 2.2.3).

According to Dummett, the logical position of intuitionism corresponds to the philosophical position of anti-realism. The realist view of a recognition-independent reality is the metaphysical counterpart of the view that all sentences are either true or false independently of our means of recognizing it. Following Dummett, major parts of proof-theoretic semantics are associated with anti-realism.
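The functional reading of the BHK clauses can be made vivid in programming terms. The following is an informal sketch (in Python, with made-up helper names), not a formal semantics: proofs of conjunctions are modeled as pairs, proofs of disjunctions as tagged proofs of one disjunct, and proofs of implications as functions from proofs to proofs.

```python
# Informal BHK sketch: how proofs of compound formulas are built.

def conj_proof(proof_a, proof_b):
    # a proof of A ∧ B is a pair of proofs
    return (proof_a, proof_b)

def disj_proof(side, proof):
    # a proof of A ∨ B is a proof of one disjunct,
    # together with the information which one
    return (side, proof)

def impl_proof(function):
    # a proof of A → B is a (constructive) function taking
    # proofs of A to proofs of B
    return function

# Example: a proof of (A ∧ B) → A is the first projection.
proof_of_weakening = impl_proof(lambda pair: pair[0])
```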
### 1.3 Gentzen-style proof theory: Reduction, normalization, cut elimination

Gentzen’s calculus of natural deduction and its rendering by Prawitz is the background to most approaches to proof-theoretic semantics. Natural deduction is based on at least three major ideas:

• Discharge of assumptions: Assumptions can be “discharged” or “eliminated” in the course of a derivation, so the central notion of natural deduction is that of a derivation depending on assumptions.
• Separation: Each primitive rule schema contains only a single logical constant.
• Introductions and eliminations: The rules for logical constants come in pairs. The introduction rule(s) allow(s) one to infer a formula with the constant in question as its main operator, the elimination rule(s) permit(s) one to draw consequences from such a formula.

In Gentzen’s natural deduction system for first-order logic, derivations are written in tree form and based on the well-known rules. For example, implication has the following introduction and elimination rules:

$$\frac{\begin{matrix}[A]\\ \vdots\\ B\end{matrix}}{A\rightarrow B}\;{\rightarrow}\mathrm{I} \qquad\qquad \frac{A\rightarrow B \qquad A}{B}\;{\rightarrow}\mathrm{E}$$

where the brackets indicate the possibility to discharge occurrences of the assumption A. The open assumptions of a derivation are those assumptions on which the end-formula depends. A derivation is called closed if it has no open assumption; otherwise it is called open. If we deal with quantifiers, we have to consider open individual variables (sometimes called “parameters”) too. Metalogical features crucial for proof-theoretic semantics, first systematically investigated and published by Prawitz (1965), include:

Reduction: For every detour consisting of an introduction immediately followed by an elimination there is a reduction step removing this detour.

Normalization: By successive applications of reductions, derivations can be transformed into normal forms which contain no detours.

For implication the standard reduction step removing detours is the following:

$$\frac{\dfrac{\begin{matrix}[A]\\ \vdots\\ B\end{matrix}}{A\rightarrow B} \qquad \begin{matrix}\vdots\\ A\end{matrix}}{B} \qquad\text{reduces to}\qquad \begin{matrix}\vdots\\ A\\ \vdots\\ B\end{matrix}$$

A simple, but very important corollary of normalization is the following: every closed derivation in intuitionistic logic can be reduced to a derivation using an introduction rule in the last step. We also say that intuitionistic natural deduction satisfies the “introduction form property”. In proof-theoretic semantics this result figures prominently under the heading “fundamental assumption” (Dummett, 1991, p. 254). The “fundamental assumption” is a typical example of a philosophical re-interpretation of a technical proof-theoretic result. (A sketch of the implication reduction step in programming terms is given at the end of this section.)

For the general orientation of proof-theoretic semantics: the special issue of Synthese (Kahle and Schroeder-Heister, 2006), the reader edited by Piecha and Schroeder-Heister (2016b), the textbook by Francez (2015), Schroeder-Heister (2008b, 2016a), and Wansing (2000). For the philosophical position and development of proof theory: the entries on Hilbert’s program and the development of proof theory, as well as Prawitz (1971). For intuitionism: the entries on intuitionistic logic, intuitionism in the philosophy of mathematics and the development of intuitionistic logic. For anti-realism: the entry on challenges to metaphysical realism, as well as Tennant (1987, 1997) and Tranchini (2010, 2012a). For Gentzen-style proof theory and the theory of natural deduction: besides Gentzen’s (1934/35) original presentation, Jaśkowski’s (1934) theory of suppositions and Prawitz’s (1965) classic monograph, Tennant (1978), Troelstra and Schwichtenberg (2000), and Negri and von Plato (2001).
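Under the Curry-Howard reading (taken up in section 2.2.3 below), the implication reduction step displayed above is beta-reduction of lambda terms: an →I step corresponds to a lambda abstraction, an →E step to an application. The following sketch (illustrative Python, ignoring variable capture) implements one such detour reduction on proof terms:

```python
from dataclasses import dataclass

@dataclass
class Var:          # an open assumption
    name: str

@dataclass
class Lam:          # →I: discharge the assumption 'var'
    var: str
    body: object

@dataclass
class App:          # →E: modus ponens
    fun: object
    arg: object

def subst(term, name, repl):
    """Replace the open assumption 'name' by the proof term 'repl'."""
    if isinstance(term, Var):
        return repl if term.name == name else term
    if isinstance(term, Lam):
        # naive: assumes bound variables are distinct from 'name'
        return term if term.var == name else Lam(term.var, subst(term.body, name, repl))
    return App(subst(term.fun, name, repl), subst(term.arg, name, repl))

def reduce_detour(term):
    """One reduction step: an introduction immediately followed by an
    elimination, (λx. b) a, reduces to b with a substituted for x."""
    if isinstance(term, App) and isinstance(term.fun, Lam):
        return subst(term.fun.body, term.fun.var, term.arg)
    return term

# ((λx. x) a) reduces to a
print(reduce_detour(App(Lam("x", Var("x")), Var("a"))))  # Var(name='a')
```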
## 2. Some versions of proof-theoretic semantics

### 2.1 The semantics of implications: Admissibility, derivability, rules

The semantics of implication lies at the heart of proof-theoretic semantics. In contradistinction to classical truth-condition semantics, implication is a logical constant in its own right. It also has the characteristic feature that it is tied to the concept of consequence. It can be viewed as expressing consequence at the sentential level due to modus ponens and to what in Hilbert-style systems is called the deduction theorem, i.e., the equivalence of Γ, A ⊢ B and Γ ⊢ A→B.

A very natural understanding of an implication A→B is to read it as expressing the inference rule which allows one to pass over from A to B. Licensing the step from A to B on the basis of A→B is exactly what modus ponens says. And the deduction theorem can be viewed as the means of establishing a rule: having shown that B can be deduced from A justifies the rule that from A we may pass over to B. A rule-based semantics of implication along such lines underlies several conceptions of proof-theoretic semantics, notably those by Lorenzen, von Kutschera and Schroeder-Heister.

#### 2.1.1 Operative logic

Lorenzen, in his Introduction to Operative Logic and Mathematics (1955), starts with logic-free (atomic) calculi, which correspond to production systems or grammars. He calls a rule admissible in such a system if it can be added to it without enlarging the set of its derivable atoms. The implication arrow → is interpreted as expressing admissibility. An implication A→B is considered to be valid if, when read as a rule, it is admissible (with respect to the underlying calculus). For iterated implications (= rules) Lorenzen develops a theory of admissibility statements of higher levels. Certain statements such as A→A or ((A→B), (B→C)) → (A→C) hold independently of the underlying calculus. They are called universally admissible [“allgemeinzulässig”] and constitute a system of positive implicational logic. In a related way, laws for universal quantification ∀ are justified using admissibility statements for rules with schematic variables.

For the justification of the laws for the logical constants ∧, ∨, ∃ and ⊥, Lorenzen uses an inversion principle (a term he coined). In a very simplified form, without taking variables in rules into account, the inversion principle says that everything that can be obtained from every defining condition of A can be obtained from A itself. For example, in the case of disjunction, let A and B each be a defining condition of A∨B, as expressed by the primitive rules A→A∨B and B→A∨B. Then the inversion principle says that A∨B→C is admissible assuming A→C and B→C, which justifies the elimination rule for disjunction. The remaining connectives are dealt with in a similar way. In the case of ⊥, the absurdity rule ⊥→A is obtained from the fact that there is no defining condition for ⊥.

#### 2.1.2 Gentzen semantics

In what he calls “Gentzen semantics”, von Kutschera (1968) gives, like Lorenzen, a semantics of logically complex implication-like statements A1,…,An→B with respect to calculi K which govern the reasoning with atomic sentences. The fundamental difference to Lorenzen is the fact that A1,…,An→B now expresses a derivability rather than an admissibility statement.

In order to turn this into a semantics of the logical constants of propositional logic, von Kutschera argues as follows: when giving up bivalence, we can no longer use classical truth-value assignments to atomic formulas.
Instead we can use calculi which prove or refute atomic sentences. Moreover, since calculi not only generate proofs or refutations but arbitrary derivability relations, the idea is to start directly with derivability in an atomic system and extend it with rules that characterize the logical connectives. For that von Kutschera gives a sequent calculus with rules for the introduction of n-ary propositional connectives in the succedent and antecedent, yielding a sequent system for generalized propositional connectives. Von Kutschera then goes on to show that the generalized connectives so defined can all be expressed by the standard connectives of intuitionistic logic (conjunction, disjunction, implication, absurdity).

#### 2.1.3 Natural deduction with higher-level rules

Within a programme of developing a general schema for rules for arbitrary logical constants, Schroeder-Heister (1984) proposed that a logically complex formula should express the content or common content of systems of rules. This means that not the introduction rules are considered basic but the consequences of defining conditions. A rule R is either a formula A or has the form R1,…,Rn ⇒ A, where R1,…,Rn are themselves rules. These so-called “higher-level rules” generalize the idea that rules may discharge assumptions to the case where these assumptions can themselves be rules. For the standard logical constants this means that A∧B expresses the content of the pair (A, B); A→B expresses the content of the rule A ⇒ B; A∨B expresses the common content of A and B; and absurdity ⊥ expresses the common content of the empty family of rule systems. In the case of arbitrary n-ary propositional connectives this leads to a natural deduction system with generalized introduction and elimination rules. These general connectives are shown to be definable in terms of the standard ones, establishing the expressive completeness of the standard intuitionistic connectives.

For Lorenzen’s approach in relation to Prawitz-style proof-theoretic semantics: Schroeder-Heister (2008a). For extensions of expressive completeness in the style of von Kutschera: Wansing (1993a).

### 2.2 The semantics of derivations as based on introduction rules

#### 2.2.1 Inversion principles and harmony

In his Investigations into Logical Deduction, Gentzen makes some, nowadays very frequently quoted, programmatic remarks on the semantic relationship between introduction and elimination inferences in natural deduction.

The introductions represent, as it were, the ‘definitions’ of the symbols concerned, and the eliminations are no more, in the final analysis, than the consequences of these definitions. This fact may be expressed as follows: In eliminating a symbol, we may use the formula with whose terminal symbol we are dealing only ‘in the sense afforded it by the introduction of that symbol’. (Gentzen, 1934/35, p. 80)

This cannot mean, of course, that the elimination rules are deducible from the introduction rules in the literal sense of the word; in fact, they are not. It can only mean that they can be justified by them in some way.

By making these ideas more precise it should be possible to display the E-inferences as unique functions of their corresponding I-inferences, on the basis of certain requirements. (ibid., p. 81)

So the idea underlying Gentzen’s programme is that we have “definitions” in the form of introduction rules and some sort of semantic reasoning which, by using “certain requirements”, validates the elimination rules.
By adopting Lorenzen’s term and adapting its underlying idea to the context of natural deduction, Prawitz (1965) formulated an “inversion principle” to make Gentzen’s remarks more precise:

Let α be an application of an elimination rule that has B as consequence. Then, deductions that satisfy the sufficient condition […] for deriving the major premiss of α, when combined with deductions of the minor premisses of α (if any), already “contain” a deduction of B; the deduction of B is thus obtainable directly from the given deductions without the addition of α. (p. 33)

Here the sufficient conditions are given by the premisses of the corresponding introduction rules. Thus the inversion principle says that a derivation of the conclusion of an elimination rule can be obtained without an application of the elimination rule if its major premiss has been derived using an introduction rule in the last step, which means that a combination of steps of the form

$$\frac{\dfrac{\vdots}{A}\ \text{I-inference} \qquad \{\mathcal{D}_i\}}{B}\ \text{E-inference}$$

where $\{\mathcal{D}_i\}$ stands for a (possibly empty) list of deductions of minor premisses, can be avoided.

The relationship between introduction and elimination rules is often described as “harmony”, or as governed by a “principle of harmony” (see, e.g., Tennant, 1978, p. 74). This terminology is not uniform and sometimes not even fully clear. It essentially expresses what is also meant by “inversion”. Even if “harmony” is a term which suggests a symmetric relationship, it is frequently understood as expressing a conception based on introduction rules, as, e.g., in Read’s (2010) “general elimination harmony” (although occasionally one includes elimination-based conceptions as well). Sometimes harmony is supposed to mean that connectives are strongest or weakest in a certain sense given their introduction or their elimination rules. This idea underlies Tennant’s (1978) harmony principle, and also Popper’s and Koslow’s structural characterizations (see section 2.4). The specific relationship between introduction and elimination rules as formulated in an inversion principle excludes alleged inferential definitions such as that of the connective tonk, which combines an introduction rule for disjunction with an elimination rule for conjunction, and which has given rise to a still ongoing debate on the format of inferential definitions (see Humberstone, 2010).

#### 2.2.2 Proof-theoretic validity

Proof-theoretic validity is the dominating approach to proof-theoretic semantics. As a technical concept it was developed by Prawitz (1971; 1973; 1974), by turning a proof-theoretic validity notion, based on ideas by Tait (1967) and originally used to prove strong normalization, into a semantical concept. Dummett provided much philosophical underpinning for this notion (see Dummett, 1991). The objects which are primarily valid are proofs as representations of arguments. In a secondary sense, single rules can be valid if they lead from valid proofs to valid proofs. In this sense, validity is a global rather than a local notion. It applies to arbitrary derivations over a given atomic system, which defines derivability for atoms. Calling a proof which uses an introduction rule in the last step canonical, the notion of validity is based on the following three ideas:

1. The priority of closed canonical proofs.
2. The reduction of closed non-canonical proofs to canonical ones.
3. The substitutional view of open proofs.

Ad 1: The definition of validity is based on Gentzen’s idea that introduction rules are ‘self-justifying’ and give the logical constants their meaning.
This self-justifying feature is only used for closed proofs, which are considered primary over open ones.

Ad 2: Non-canonical proofs are justified by reducing them to canonical ones. Thus reduction procedures (detour reductions) as used in normalization proofs play a crucial role. As they justify arguments, they are also called “justifications” by Prawitz. This definition again only applies to closed proofs, corresponding to the introduction form property of closed normal derivations in natural deduction (see section 1.3).

Ad 3: Open proofs are justified by considering their closed instances. These closed instances are obtained by replacing their open assumptions with closed proofs of them, and their open variables with closed terms. For example, a proof of B from A is considered valid if every closed proof, which is obtained by replacing the open assumption A with a closed proof of A, is valid. In this way, open assumptions are considered to be placeholders for closed proofs, for which reason we may speak of a substitutional interpretation of open proofs.

This yields the following definition of proof-theoretic validity:

1. Every closed proof in the underlying atomic system is valid.
2. A closed canonical proof is considered valid if its immediate subproofs are valid.
3. A closed non-canonical proof is considered valid if it reduces to a valid closed canonical proof or to a closed proof in the atomic system.
4. An open proof is considered valid if every closed proof obtained by replacing its open assumptions with closed proofs and its open variables with closed terms is valid.

Formally, this definition has to be relativized to the atomic system considered, and to the set of justifications (proof reductions) considered. Furthermore, proofs are here understood as candidates for valid proofs, which means that the rules from which they are composed are not fixed. They look like proof trees, but their individual steps can have an arbitrary (finite) number of premisses and can eliminate arbitrary assumptions. The definition of validity singles out those proof structures which are ‘real’ proofs on the basis of the given reduction procedures.

Validity with respect to every choice of an atomic system can be viewed as a generalized notion of logical validity. In fact, if we consider the standard reductions of intuitionistic logic, then all derivations in intuitionistic logic are valid independently of the atomic system considered. This is semantical correctness. We may ask if the converse holds, viz. whether, given that a derivation is valid for every atomic system, there is a corresponding derivation in intuitionistic logic. That intuitionistic logic is complete in this sense is known as Prawitz’s conjecture (see Prawitz, 1973; Prawitz, 2013). However, no satisfactory proof of it has been given. There are considerable doubts concerning the validity of this conjecture for systems that go beyond implicational logic. In any case it will depend on the precise formulation of the notion of validity, in particular on its handling of atomic systems.

For a more formal definition and detailed examples demonstrating validity, as well as some remarks on Prawitz’s conjecture, see the Supplement on Examples of proof-theoretic validity.

#### 2.2.3 Constructive type theory

Martin-Löf’s type theory (Martin-Löf, 1984) is a leading approach in constructive logic and mathematics.
Philosophically, it shares with Prawitz the three fundamental assumptions of standard proof-theoretic semantics mentioned in section 2.2.2: the priority of closed canonical proofs, the reduction of closed non-canonical proofs to canonical ones, and the substitutional view of open proofs. However, Martin-Löf’s type theory has at least two characteristic features which go beyond other approaches in proof-theoretic semantics:

1. The consideration of proof objects and the corresponding distinction between proofs-as-objects and proofs-as-demonstrations.
2. The view of formation rules as intrinsic to the proof system rather than as external rules.

The first idea goes back to the Curry-Howard correspondence (see de Groote, 1995; Sørensen and Urzyczyn, 2006), according to which the fact that a formula A has a certain proof can be codified as the fact that a certain term t is of type A, whereby the formula A is identified with the type A. This can be formalized in a calculus for type assignment, whose statements are of the form t : A. A proof of t : A in this system can be read as showing that t is a proof of A. Martin-Löf (1995; 1998) has put this into a philosophical perspective by distinguishing this two-fold sense of proof in the following way. First we have proofs of statements of the form t : A. These statements are called judgements; their proofs are called demonstrations. Within such judgements the term t represents a proof of the proposition A. A proof in the latter sense is also called a proof object. When demonstrating a judgement t : A, we demonstrate that t is a proof (object) for the proposition A. Within this two-layer system the demonstration layer is the layer of argumentation. Unlike proof objects, demonstrations have epistemic significance; their judgements carry assertoric force. The proof layer is the layer at which meanings are explained: the meaning of a proposition A is explained by telling what counts as a proof (object) for A. The distinction made between canonical and non-canonical proofs is a distinction at the propositional and not at the judgemental layer. This implies a certain explicitness requirement. When I have proved something, I must not only have a justification for my proof at my disposal, as in Prawitz’s notion of validity, but at the same time have to be certain that this justification fulfills its purpose. This certainty is guaranteed by a demonstration. Mathematically, this two-fold sense of proof develops its real power only when types may themselves depend on terms. Dependent types are a basic ingredient of Martin-Löf’s type theory and related approaches.

The second idea makes Martin-Löf’s approach differ strongly from all other definitions of proof-theoretic validity. The crucial difference, for example, to Prawitz’s procedure is that it is not metalinguistic in character, where “metalinguistic” means that propositions and candidates for proofs are specified first and then, by means of a definition in the metalanguage, it is fixed which of them are valid and which are not. Rather, propositions and proofs come into play only in the context of demonstrations. For example, if we assume that something is a proof of an implication A→B, we need not necessarily show that both A and B are well-formed propositions outright, but, in addition to knowing that A is a proposition, we only need to know that B is a proposition provided that A has been proved.
Being a proposition is expressed by a specific form of judgement, which is established in the same system of demonstration which is used to establish that a proof of a proposition has been achieved. In Martin-Löf’s theory, proof-theoretic semantics receives a strongly ontological component. A recent debate deals with the question of whether proof objects have a purely ontological status or whether they codify knowledge, even if they are not epistemic acts themselves.

For inversion principles see Schroeder-Heister (2007). For variants of proof-theoretic harmony see Francez (2015) and Schroeder-Heister (2016a). For Prawitz’s definition of proof-theoretic validity see Schroeder-Heister (2006). For Martin-Löf’s type theory, see the entry on type theory as well as Sommaruga (2000).

### 2.3 Clausal definitions and definitional reasoning

Proof-theoretic semantics normally focuses on logical constants. This focus is practically never questioned, apparently because it is considered so obvious. In proof theory, little attention has been paid to atomic systems, although there has been Lorenzen’s early work (see section 2.1.1), where the justification of logical rules is embedded in a theory of arbitrary rules, and Martin-Löf’s (1971) theory of iterated inductive definitions, where introduction and elimination rules for atomic formulas are proposed. The rise of logic programming has widened this perspective. From the proof-theoretic point of view, logic programming is a theory of atomic reasoning with respect to clausal definitions of atoms. Definitional reflection is an approach to proof-theoretic semantics that takes up this challenge and attempts to build a theory whose range of application goes beyond logical constants.

#### 2.3.1 The challenge from logic programming

In logic programming we are dealing with program clauses of the form

A ⇐ B1,…,Bm

which define atomic formulas. Such clauses can naturally be interpreted as describing introduction rules for atoms. From the point of view of proof-theoretic semantics the following two points are essential:

(1) Introduction rules (clauses) for logically compound formulas are not distinguished in principle from introduction rules (clauses) for atoms. Interpreting logic programming proof-theoretically motivates an extension of proof-theoretic semantics to arbitrary atoms, which yields a semantics with a much wider realm of applications.

(2) Program clauses are not necessarily well-founded. For example, the head of a clause may occur in its body. Well-founded programs are just a particular sort of programs. The use of arbitrary clauses without further requirements in logic programming is a motivation to pursue the same idea in proof-theoretic semantics, admitting just any sort of introduction rules and not just those of a special form, and in particular not necessarily ones which are well-founded. This carries the idea of definitional freedom, which is a cornerstone of logic programming, over to semantics, again widening the realm of application of proof-theoretic semantics.

The idea of considering introduction rules as meaning-giving rules for atoms is closely related to the theory of inductive definitions in its general form, according to which inductive definitions are systems of rules (see Aczel, 1977).
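To illustrate the first point, here is a small sketch (illustrative Python; the atom names are made up) of reasoning ‘along’ clausal definitions: each clause acts as an introduction rule for its atomic head, and the derivable atoms are computed by forward chaining. Note that a non-wellfounded clause whose head occurs in its own body simply never fires here.

```python
def derivable_atoms(clauses, facts):
    """clauses: list of (head, body) pairs, body a list of atoms;
    facts: the initially derivable atoms of the atomic system."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            # a clause fires when all atoms of its body are derived
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

# 'p' is defined by two clauses, read as two introduction rules:
# p <= q, r   and   p <= s
clauses = [("p", ["q", "r"]), ("p", ["s"])]
print(derivable_atoms(clauses, {"q", "r"}))   # {'p', 'q', 'r'}
```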
#### 2.3.2 Definitional reflection

The theory of definitional reflection (Hallnäs, 1991; Hallnäs, 2006; Hallnäs and Schroeder-Heister, 1990/91; Schroeder-Heister, 1993) takes up the challenge from logic programming and gives a proof-theoretic semantics not just for logical constants but for arbitrary expressions for which a clausal definition can be given. Formally, this approach starts with a list of clauses which is the definition considered. Each clause has the form

A ⇐ Δ

where the head A is an atomic formula (atom). In the simplest case, the body Δ is a list of atoms B1,…,Bm, in which case a definition looks like a definite logic program. We often consider an extended case where Δ may also contain some structural implication ‘⇒’, and sometimes even some structural universal implication, which essentially is handled by restricting substitution. If the definition of A consists of the clauses A ⇐ Δ1, …, A ⇐ Δn, then A has the following introduction and elimination rules:

$$\frac{\Delta_1}{A}\ \cdots\ \frac{\Delta_n}{A} \qquad\qquad \frac{A \qquad \begin{matrix}[\Delta_1]\\ \vdots\\ C\end{matrix}\ \cdots\ \begin{matrix}[\Delta_n]\\ \vdots\\ C\end{matrix}}{C}$$

The introduction rules, also called rules of definitional closure, express reasoning ‘along’ the clauses. The elimination rule is called the principle of definitional reflection, as it reflects upon the definition as a whole. If Δ1,…,Δn exhaust all possible conditions to generate A according to the given definition, and if each of these conditions entails the very same conclusion C, then A itself entails this conclusion. If the clausal definition is viewed as an inductive definition, this principle can be viewed as expressing the extremal clause in inductive definitions: nothing beyond the clauses given defines A. Obviously, definitional reflection is a generalized form of the inversion principles discussed above. It develops its genuine power in definitional contexts with free variables that go beyond purely propositional reasoning, and in contexts which are not well-founded. An example of a non-wellfounded definition is the definition of an atom R by its own negation:

R ⇐ ¬R

This example is discussed in detail in the Supplement on Definitional reflection and paradoxes.

For non-wellfoundedness and paradoxes see the entries on self-reference and Russell’s paradox, as well as the references quoted in the supplement linked to.

### 2.4 Structural characterization of logical constants

There is a large field of ideas and results concerning what might be called the “structural characterization” of logical constants, where “structural” is meant here both in the proof-theoretic sense of “structural rules” and in the sense of a framework that bears a certain structure, where this framework is again proof-theoretically described. Some of its authors use a semantical vocabulary and at least implicitly suggest that their topic belongs to proof-theoretic semantics. Others explicitly deny these connotations, emphasizing that they are interested in a characterization which establishes the logicality of a constant. The question “What is a logical constant?” can be answered in proof-theoretic terms, even if the semantics of the constants themselves is truth-conditional: namely by requiring that the (perhaps truth-conditionally defined) constants show a certain inferential behaviour that can be described in proof-theoretic terms. However, as some of the authors consider their characterization at the same time as a semantics, it is appropriate that we mention some of these approaches here.

The most outspoken structuralist with respect to logical constants, who explicitly understands himself as such, is Koslow.
In his Structuralist Theory of Logic (1992) he develops a theory of logical constants, in which he characterizes them by certain “implication relations”, where an implication relation roughly corresponds to a finite consequence relation in Tarski’s sense (which again can be described by certain structural rules of a sequent-style system). Koslow develops a structural theory in the precise metamathematical sense, which does not specify the domain of objects in any way beyond the axioms given. If a language or any other domain of objects equipped with an implication relation is given, the structural approach can be used to single out logical compounds by checking their implicational properties.

In his early papers on the foundations of logic, Popper (1947a; 1947b) gives inferential characterizations of logical constants in proof-theoretic terms. He uses a calculus of sequents and characterizes logical constants by certain derivability conditions of such sequents. His terminology clearly suggests that he intends a proof-theoretic semantics of logical constants, as he speaks of “inferential definitions” and the “trivialization of mathematical logic” achieved by defining constants in the way described. Although his presentation is not free from conceptual imprecision and errors, he was the first to consider the sequent-style inferential behaviour of logical constants as a means of characterizing them. This is all the more remarkable as he was probably not at all, and definitely not fully, aware of Gentzen’s sequent calculus and Gentzen’s further achievements (he was in correspondence with Bernays, though). However, against his own opinion, his work can better be understood as an attempt to define the logicality of constants and to characterize them structurally than as a proof-theoretic semantics in the genuine sense. He nevertheless anticipated many ideas now common in proof-theoretic semantics, such as the characterization of logical constants by means of certain minimality or maximality conditions with respect to introduction or elimination rules.

Important contributions to the logicality debate that characterize logical constants inferentially in terms of sequent calculus rules are those by Kneale (1956) and Hacking (1979). A thorough account of logicality is proposed by Došen (1980; 1989) in his theory of logical constants as “punctuation marks”, expressing structural features at the logical level. He understands logical constants as being characterized by certain double-line rules for sequents which can be read in both directions. For example, conjunction and disjunction are (in classical logic, with multiple-formulae succedents) characterized by the double-line rules

$$\frac{\Gamma \vdash A, \Delta \qquad \Gamma \vdash B, \Delta}{\Gamma \vdash A \wedge B, \Delta} \qquad\qquad \frac{\Gamma, A \vdash \Delta \qquad \Gamma, B \vdash \Delta}{\Gamma, A \vee B \vdash \Delta}$$

(with the inference line read in both directions). Došen is able to give characterizations which include systems of modal logic. He explicitly considers his work as a contribution to the logicality debate and not to any conception of proof-theoretic semantics. Sambin et al., in their Basic Logic (Sambin, Battilotti, and Faggian, 2000), explicitly understand what Došen calls double-line rules as fundamental meaning-giving rules. The double-line rules for conjunction and disjunction are read as implicit definitions of these constants, which by some procedure can be turned into the explicit sequent-style rules we are used to. So Sambin et al. use the same starting point as Došen, but interpret it not as a structural description of the behaviour of constants, but semantically as their implicit definition (see Schroeder-Heister, 2013).
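A double-line rule can be pictured as a pair of mutually inverse transformations on sequents. The following toy sketch (illustrative Python; the sequent encoding is made up here) renders the conjunction rule above in both directions:

```python
# Sequents are pairs (antecedent, succedent) of lists of formulas;
# compound formulas are tuples like ("and", A, B).

def conj_right_down(premiss_a, premiss_b):
    """From Γ ⊢ A, Δ and Γ ⊢ B, Δ, build Γ ⊢ A∧B, Δ."""
    gamma, (a, *delta) = premiss_a
    _,     (b, *_)     = premiss_b
    return (gamma, [("and", a, b), *delta])

def conj_right_up(conclusion):
    """From Γ ⊢ A∧B, Δ, recover the premisses Γ ⊢ A, Δ and Γ ⊢ B, Δ."""
    gamma, ((_, a, b), *delta) = conclusion
    return (gamma, [a, *delta]), (gamma, [b, *delta])

# Round trip: going up and then down gives back the conclusion.
concl = (["C"], [("and", "A", "B"), "D"])
assert conj_right_down(*conj_right_up(concl)) == concl
```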
There are several other approaches to a uniform proof-theoretic characterization of logical constants, all of which at least touch upon issues of proof-theoretic semantics. Such theories are Belnap’s Display Logic (Belnap, 1982), Wansing’s Logic of Information Structures (Wansing, 1993b), and generic proof editing systems and their implementations such as the Edinburgh logical framework (Harper, Honsell, and Plotkin, 1987) and its many successors, which allow the specification of a variety of logical systems. Since the rise of linear and, more generally, substructural logics (Di Cosmo and Miller, 2010; Restall, 2009) there have been various approaches dealing with logics that differ with respect to restrictions on their structural rules. A recent movement away from singling out a particular logic as the true one towards a more pluralist stance (see, e.g., Beall and Restall, 2006), which is interested in what different logics have in common without any preference for a particular logic, can be seen as a shift away from semantical justification towards structural characterization.

### 2.5 Categorial proof theory

There is a considerable literature on category theory in relation to proof theory, and, following seminal work by Lawvere, Lambek and others (see Lambek and Scott, 1986, and the references therein), category theory itself can be viewed as a kind of abstract proof theory. If one looks at an arrow A→B in a category as a kind of abstract proof of B from A, we have a representation which goes beyond pure derivability of B from A (as the arrow has its individuality), but does not deal with the particular syntactic structure of this proof. For intuitionistic systems, proof-theoretic semantics in categorial form comes probably closest to what denotational semantics is in the classical case.

One of the most highly developed approaches to categorial proof theory is due to Došen. He has not only advanced the application of categorial methods in proof theory (e.g., Došen and Petrić, 2004), but also shown how proof-theoretic methods can be used in category theory itself (Došen, 2000). Most important for categorial logic in relation to proof-theoretic semantics is that in categorial logic, arrows always come together with an identity relation, which in proof theory corresponds to the identity of proofs. In this way, ideas and results of categorial proof theory pertain to what may be called intensional proof-theoretic semantics, that is, the study of proofs as entities in their own right, not just as vehicles to establish consequences (Došen, 2006, 2016). Another feature of categorial proof theory is that it is inherently hypothetical in character, which means that it starts from hypothetical entities. In this way it overcomes a paradigm of standard, in particular validity-based, proof-theoretic semantics (see section 3.6 below).

For Popper’s theory of logical constants see Schroeder-Heister (2005). For logical constants and their logicality see the entry on logical constants. For categorial approaches see the entry on category theory.

## 3. Extensions and alternatives to standard proof-theoretic semantics

### 3.1 Elimination rules as basic

Most approaches to proof-theoretic semantics consider introduction rules as basic, meaning-giving, or self-justifying, whereas the elimination inferences are justified as valid with respect to the given introduction rules.
This conception has at least three roots. The first is a verificationist theory of meaning, according to which the assertibility conditions of a sentence constitute its meaning. The second is the idea that we must distinguish between what gives the meaning and what are the consequences of this meaning, as not all inferential knowledge can consist of applications of definitions. The third is the primacy of assertion over other speech acts such as assuming or denying, which is implicit in all approaches considered so far.

One might investigate how far one gets by considering elimination rather than introduction rules as the basis of proof-theoretic semantics. Some ideas towards a proof-theoretic semantics based on elimination rather than introduction rules have been sketched by Dummett (1991, Ch. 13), albeit in a very rudimentary form. A more precise definition of validity based on elimination inferences is due to Prawitz (1971; 2007; see also Schroeder-Heister, 2015). Its essential idea is that a closed proof is considered valid if the result of applying an elimination rule to its end formula is a valid proof or reduces to one. For example, a closed proof of an implication A→B is valid if, for any given closed proof of A, the result of applying modus ponens

$$\frac{A\rightarrow B \qquad A}{B}$$

to these two proofs is a valid proof of B, or reduces to such a proof. This conception keeps two of the three basic ingredients of Prawitz-style proof-theoretic semantics (see section 2.2.2): the role of proof reduction and the substitutional view of assumptions. Only the canonicity of proofs ending with introductions is changed into the canonicity of proofs ending with eliminations.

### 3.2 Negation and denial

Standard proof-theoretic semantics is assertion-centred in that assertibility conditions determine the meaning of logical constants. Corresponding to the intuitionistic way of proceeding, the negation ¬A of a formula A is normally understood as implying absurdity, A→⊥, where ⊥ is a constant which cannot be asserted, i.e., for which no assertibility condition is defined. This is an ‘indirect’ way of understanding negation. In the literature there has been a discussion of what, following von Kutschera (1969), might be called ‘direct’ negation. By that one understands a one-place primitive operator of negation which cannot be, or at least is not, reduced to implying absurdity. It is not classical negation either. It rather obeys rules which dualize the usual rules for the logical constants. Sometimes it is called the “denial” of a sentence, sometimes also “strong negation” (see Odintsov, 2008). Typical rules for the denial ~A of A are

$$\frac{\sim\!A \qquad \sim\!B}{\sim\!(A\vee B)} \qquad\qquad \frac{\sim\!A}{\sim\!(A\wedge B)} \qquad \frac{\sim\!B}{\sim\!(A\wedge B)}$$

Essentially, the denial rules for an operator correspond to the assertion rules for its dual. Several logics of denial have been investigated, in particular Nelson’s logics of “constructible falsity”, motivated first by Nelson (1949) with respect to a certain realizability semantics. The main focus has been on his systems later called N3 and N4, which differ with respect to the treatment of contradiction (N4 is N3 without ex contradictione quodlibet). Using denial, any approach to proof-theoretic semantics can be dualized by just exchanging assertion and denial and turning from logical constants to their duals. In doing so, one obtains a system based on refutation (= proof of denial) rather than proof. It can be understood as applying a Popperian view to proof-theoretic semantics.
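The dualization just described can be pictured mechanically: given an assertion rule, exchange each constant with its dual and prefix the denial sign. A toy sketch (illustrative Python; the rule encoding is made up here):

```python
# Refutation rules from assertion rules by dualization.

DUAL = {"∧": "∨", "∨": "∧"}

def dual_formula(formula):
    return "".join(DUAL.get(ch, ch) for ch in formula)

def denial_rule(assertion_rule):
    premisses, conclusion = assertion_rule
    return (["~" + dual_formula(p) for p in premisses],
            "~" + dual_formula(conclusion))

# The ∧-introduction rule  A, B / (A∧B)  yields the denial rule
# ~A, ~B / ~(A∨B) for disjunction:
print(denial_rule((["A", "B"], "(A∧B)")))  # (['~A', '~B'], '~(A∨B)')
```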
Another approach would be not just to dualize assertion-centred proof-theoretic semantics in favour of a denial-centred refutation-theoretic semantics, but to see the relation between rules for assertion and rules for denial as governed by an inversion principle or principle of definitional reflection of its own. This would be a principle of what might be called “assertion-denial-harmony”. Whereas in standard proof-theoretic semantics, inversion principles control the relationship between assertions and assumptions (or consequences), such a principle would now govern the relationship between assertion and denial. Given certain defining conditions of A, it would say that the denial of every defining condition of A leads to the denial of A itself. For conjunction and disjunction it leads to the common pairs of assertion and denial rules

$$\frac{A}{A\vee B} \quad \frac{B}{A\vee B} \quad \frac{\sim\!A \quad \sim\!B}{\sim\!(A\vee B)} \qquad\qquad \frac{A \quad B}{A\wedge B} \quad \frac{\sim\!A}{\sim\!(A\wedge B)} \quad \frac{\sim\!B}{\sim\!(A\wedge B)}$$

This idea can easily be generalized to definitional reflection, yielding a reasoning system in which assertion and denial are intertwined. It has parallels to the deductive relations between the forms of judgement studied in the traditional square of opposition (Schroeder-Heister, 2012a; Zeilberger, 2008). It should be emphasized that the denial operator is here an external sign indicating a form of judgement and not a logical operator. This means in particular that it cannot be iterated.

### 3.3 Harmony and reflection in the sequent calculus

Gentzen’s sequent calculus exhibits a symmetry between right and left introduction rules which suggests looking for a harmony principle that makes this symmetry significant to proof-theoretic semantics. At least three lines have been pursued to deal with this phenomenon. (i) Either the right-introduction or the left-introduction rules are considered to be introduction rules. The opposite rules (left-introductions and right-introductions, respectively) are then justified using the corresponding elimination rules. This means that the methods discussed before are applied to whole sequents rather than to formulas within sequents. Unlike these formulas, the sequents are not logically structured. Therefore this approach builds on definitional reflection, which applies harmony and inversion to rules for arbitrarily structured entities rather than for logical composites only. It has been pursued by de Campos Sanz and Piecha (2009). (ii) The right- and left-introduction rules are derived from a characterization in the sense of Došen’s double-line rules (section 2.4), which is then read as a definition of some sort. The top-down direction of a double-line rule is already a right- or a left-introduction rule. The other one can be derived from the bottom-up direction by means of certain principles. This is the basic meaning-theoretic ingredient of Sambin et al.’s Basic Logic (Sambin, Battilotti, and Faggian, 2000). (iii) The right- and left-introduction rules are seen as expressing an interaction between sequents using the rule of cut. Given either the right- or the left-rules, the complementary rules express that everything that interacts with its premisses in a certain way so does with its conclusion. This idea of interaction is a generalized symmetric principle of definitional reflection. It can be considered a generalization of the inversion principle, using the notion of interaction rather than the derivability of consequences (see Schroeder-Heister, 2013).
All three approaches apply to the sequent calculus in its classical form, with possibly more than one formula in the succedent of a sequent, including structurally restricted versions as investigated in linear and other logics.

### 3.4 Subatomic structure and natural language

Even if, as in definitional reflection, we are considering definitional rules for atoms, their defining conditions do not normally decompose these atoms. A proof-theoretic approach that takes the internal structure of atomic sentences into account has been proposed by Wieckowski (2008; 2011; 2016). He uses introduction and elimination rules for atomic sentences, where these atomic sentences are not just reduced to other atomic sentences, but to subatomic expressions representing the meaning of predicates and individual names. This can be seen as a first step towards natural language applications of proof-theoretic semantics. A further step in this direction has been undertaken by Francez, who developed a proof-theoretic semantics for several fragments of English (see Francez, Dyckhoff, and Ben-Avi, 2010; Francez and Dyckhoff, 2010; Francez and Ben-Avi, 2015).

### 3.5 Classical logic

Proof-theoretic semantics is intuitionistically biased. This is due to the fact that natural deduction, its preferred framework, has certain features which make it particularly suited for intuitionistic logic. In classical natural deduction the ex falso quodlibet

$$\frac{\bot}{A}$$

is replaced with the rule of classical reductio ad absurdum

$$\frac{\begin{matrix}[A\rightarrow\bot]\\ \vdots\\ \bot\end{matrix}}{A}$$

In allowing one to discharge A→⊥ in order to infer A, this rule undermines the subformula principle. Furthermore, in containing both ⊥ and A→⊥, it refers to two different logical constants in a single rule, so there is no separation of logical constants any more. Finally, as an elimination rule for ⊥ it does not follow the general pattern of introductions and eliminations. As a consequence, it destroys the introduction form property that every closed derivation can be reduced to one which uses an introduction rule in the last step.

Classical logic fits very well with the multiple-succedent sequent calculus. There we do not need any additional principles beyond those assumed in the intuitionistic case. Just the structural feature of allowing for more than one formula in the succedent suffices to obtain classical logic. As there are plausible approaches to establish a harmony between right-introductions and left-introductions in the sequent calculus (see section 3.3), classical logic appears to be perfectly justified. However, this is only convincing if reasoning is appropriately framed as a multiple-conclusion process, even if this does not correspond to our standard practice, where we focus on single conclusions. One could try to develop an appropriate intuition by arguing that reasoning towards multiple conclusions delineates the area in which truth lies rather than establishing a single proposition as true. However, this intuition is hard to maintain and cannot be formally captured without serious difficulties. Philosophical approaches such as those by Shoesmith and Smiley (1978) and proof-theoretic approaches such as proof-nets (see Girard, 1987; Di Cosmo and Miller, 2010) are attempts in this direction.

A fundamental reason for the failure of the introduction form property in classical logic is the indeterminism inherent in the laws for disjunction. A∨B can be inferred from A as well as from B.
Therefore, if the disjunction laws were the only way of inferring A∨B, the derivability of A∨¬A, which is a key principle of classical logic, would entail that of either A or ¬A, which is absurd. A way out of this difficulty is to abolish indeterministic disjunction and use instead its classical de Morgan equivalent ¬(¬A ∧ ¬B). This leads essentially to a logic without proper disjunction. In the quantifier case, there would be no proper existential quantifier either, as ∃xA would be understood in the sense of ¬∀x¬A. If one is prepared to accept this restriction, then certain harmony principles can be formulated for classical logic.

### 3.6 Hypothetical reasoning

Standard approaches to proof-theoretic semantics, especially Prawitz’s validity-based approach (section 2.2.2), take closed derivations as basic. The validity of open derivations is defined as the transmission of validity from closed derivations of the assumptions to a closed derivation of the assertion, where the latter is obtained by substituting a closed derivation for an open assumption. Therefore, if one calls closed derivations ‘categorical’ and open derivations ‘hypothetical’, one may characterize this approach as following two fundamental ideas: (I) the primacy of the categorical over the hypothetical, (II) the transmission view of consequence. These two assumptions may be viewed as dogmas of standard semantics (see Schroeder-Heister, 2012c). “Standard semantics” here means not only standard proof-theoretic semantics, but also classical model-theoretic semantics, where these dogmas are assumed as well. There one starts with the definition of truth, which is the categorical concept, and defines consequence, the hypothetical concept, as the transmission of truth from conditions to consequent. From this point of view, constructive semantics, including proof-theoretic semantics, exchanges the concept of truth for a concept of construction or proof, and interprets “transmission” in terms of a constructive function or procedure, but otherwise leaves the framework untouched.

There is nothing wrong in principle with these dogmas. However, there are phenomena that are difficult to deal with in the standard framework. Such a phenomenon is non-wellfoundedness, especially circularity, where we may have consequences without transmission of truth and provability. Another such phenomenon is the presence of substructural distinctions, where it is crucial to include the structuring of assumptions from the very beginning. Moreover, and this is most crucial, we might define things in a certain way without knowing in advance whether our definition or chain of definitions is well-founded or not. We do not first involve ourselves in the metalinguistic study of the definition we start with, but would like to start to reason immediately. This problem does not obtain if we restrict ourselves to the case of logical constants, where the defining rules are trivially well-founded. But it arises immediately when we consider more complicated cases that go beyond logical constants.

This makes it worthwhile to proceed in the other direction and start with the hypothetical concept of consequence, i.e., to characterize consequence directly without reducing it to the categorical case. Philosophically this means that the categorical concept is a limiting concept of the hypothetical one. In the classical case, truth would be a limiting case of consequence, namely consequence without hypotheses.
This program is closely related to the approach of categorial proof theory (section 2.5), which is based on the primacy of hypothetical entities (“arrows”). Formally, it would give preference to the sequent calculus over natural deduction, since the sequent calculus allows the manipulation of the assumption side of a sequent by means of left-introduction rules.

### 3.7 Intensional proof-theoretic semantics

As mentioned in the first section (1.1), proof-theoretic semantics is intensional in spirit, as it is interested in proofs and not just in provability. For proof-theoretic semantics it is not only relevant whether B follows from A, but also in which way we can establish that B follows from A. In other words, the identity of proofs is an important issue. However, though this is prima facie obvious and proof-theoretic semanticists would normally agree with the abstract claim, the practice in proof-theoretic semantics is often different, and the identity of proofs is a much neglected topic. It very frequently happens that rules which are equally powerful are identified. For example, when principles of harmony are discussed and one considers the standard introduction rule for conjunction

  A   B
 ───────
   A∧B

many proof-theoretic semanticists would consider it irrelevant whether one chooses the pair of projections

  A∧B      A∧B
  ───      ───
   A        B

or the pair

  A∧B      A∧B   A
  ───      ───────
   A          B

as the elimination rules for conjunction. The second pair of rules would often be considered to be just a more complicated variant of the pair of projections. However, from an intensional point of view, these two pairs of rules are not identical. Identifying them corresponds to identifying A∧B and A ∧ (A → B), which is only extensionally, but not intensionally, correct. As Došen has frequently argued (e.g., Došen 1997, 2006), formulas such as A∧B and A ∧ (A → B) are equivalent, but not isomorphic. Here “isomorphic” means that, when proving one formula from the other and vice versa, we obtain, by combining these two proofs, the identity proof. This is not the case in this example: composing the two interderivability proofs in one of the two possible orders yields a proof that is not reducible to the identity proof. Pursuing this idea leads to principles of harmony and inversion which are different from the standard ones. As harmony and inversion lie at the heart of proof-theoretic semantics, many of its issues are affected. Taking the topic of intensionality seriously may reshape many fields of proof-theoretic semantics. And since the identity of proofs is a basic topic of categorial proof theory, the latter will need to receive stronger attention in proof-theoretic semantics than is currently the case.

For negation and denial see Tranchini (2012b) and Wansing (2001). For natural language semantics see Francez (2015). For classical logic see the entry on classical logic. For hypothetical reasoning and intensional proof-theoretic semantics see Došen (2003, 2016) and Schroeder-Heister (2016a).

## 4. Conclusion and outlook

Standard proof-theoretic semantics has been occupied practically exclusively with logical constants. Logical constants play a central role in reasoning and inference, but they are definitely not the exclusive, and perhaps not even the most typical, sort of entities that can be defined inferentially. A framework is needed that deals with inferential definitions in a wider sense and covers both logical and extra-logical inferential definitions alike. The idea of definitional reflection with respect to arbitrary definitional rules (see 2.3.2) and also the natural language applications (see 3.4) point in this direction, but further-reaching conceptions can be imagined.
Furthermore, the concentration on harmony, inversion principles, definitional reflection and the like is somewhat misleading, as it might suggest that proof-theoretic semantics consists of only that. It should be emphasized that already when it comes to arithmetic, stronger principles are needed in addition to inversion. However, in spite of these limitations, proof-theoretic semantics has already achieved very substantial results that can compete with more widespread approaches to semantics.

## Bibliography

• Aczel, Peter (1977). “An Introduction to Inductive Definitions”, in Handbook of Mathematical Logic, John Barwise (ed.), Amsterdam: North-Holland, pp. 739–782.
• Beall, J.C. and Greg Restall (2006). Logical Pluralism, Oxford: Oxford University Press.
• Belnap, Nuel D. (1982). “Display Logic”, Journal of Philosophical Logic, 11: 375–417.
• Brandom, Robert B. (2000). Articulating Reasons: An Introduction to Inferentialism, Cambridge, MA: Harvard University Press.
• de Campos Sanz, Wagner and Thomas Piecha (2009). “Inversion by Definitional Reflection and the Admissibility of Logical Rules”, Review of Symbolic Logic, 2: 550–569.
• –––, Thomas Piecha and Peter Schroeder-Heister (2014). “Constructive semantics, admissibility of rules and the validity of Peirce’s law”, Logic Journal of the IGPL, 22: 297–308.
• de Groote, Philippe, ed. (1995). The Curry-Howard Isomorphism, Volume 8 of Cahiers du Centre de Logique, Academia-Bruylant.
• Di Cosmo, Roberto and Dale Miller (2010). “Linear Logic”, The Stanford Encyclopedia of Philosophy (Fall 2010 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/fall2010/entries/logic-linear/>
• Došen, Kosta (1980). Logical Constants: An Essay in Proof Theory, D. Phil. Thesis, Philosophy Department, Oxford University.
• ––– (1989). “Logical Constants as Punctuation Marks”, Notre Dame Journal of Formal Logic, 30: 362–381.
• ––– (1997). “Logical Consequence: A Turn in Style”, in: Dalla Chiara, M. L., K. Doets, D. Mundici, and J. van Benthem (eds.), Logic and Scientific Methods: Volume One of the Tenth International Congress of Logic, Methodology and Philosophy of Science, Florence, August 1995, Dordrecht: Kluwer, pp. 289–311.
• ––– (2000). Cut Elimination in Categories, Berlin: Springer.
• ––– (2003). “Identity of proofs based on normalization and generality”, Bulletin of Symbolic Logic, 9: 477–503.
• ––– (2006). “Models of deduction”, in: Kahle and Schroeder-Heister, eds. (2006), pp. 639–657.
• ––– (2016). “On the paths of categories”, in: Piecha and Schroeder-Heister, eds. (2016b), pp. 65–77.
• ––– and Zoran Petrić (2004). Proof-Theoretical Coherence, London: College Publications.
• Dummett, Michael (1991). The Logical Basis of Metaphysics, London: Duckworth.
• Francez, Nissim (2015). Proof-theoretic Semantics, London: College Publications.
• ––– and Gilad Ben-Avi (2015). “A proof-theoretic reconstruction of generalized quantifiers”, Journal of Semantics, 32: 313–371.
• ––– and Roy Dyckhoff (2010). “Proof-theoretic Semantics for a Natural Language Fragment”, Linguistics and Philosophy, 33: 447–477.
• –––, Roy Dyckhoff, and Gilad Ben-Avi (2010). “Proof-Theoretic Semantics for Subsentential Phrases”, Studia Logica, 94: 381–401.
• Gentzen, Gerhard (1934/35). “Untersuchungen über das logische Schließen”, Mathematische Zeitschrift, 39: 176–210, 405–431; English translation in The Collected Papers of Gerhard Gentzen, M. E. Szabo (ed.), Amsterdam: North-Holland, 1969, pp. 68–131.
• Girard, Jean-Yves (1987). “Linear Logic”, Theoretical Computer Science, 50: 1–102.
• Hacking, Ian (1979). “What is Logic?”, Journal of Philosophy, 76: 285–319.
• Hallnäs, Lars (1991). “Partial Inductive Definitions”, Theoretical Computer Science, 87: 115–142.
• ––– (2006). “On the proof-theoretic foundation of general definition theory”, Synthese, 148: 589–602.
• Hallnäs, Lars and Peter Schroeder-Heister (1990/91). “A proof-theoretic approach to logic programming: I. Clauses as rules. II. Programs as definitions”, Journal of Logic and Computation, 1: 261–283, 635–660.
• Harper, Robert, Furio Honsell, and Gordon Plotkin (1987). “A Framework for Defining Logics”, Journal of the Association for Computing Machinery, 40: 194–204.
• Humberstone, Lloyd (2010). “Sentence Connectives in Formal Logic”, The Stanford Encyclopedia of Philosophy (Summer 2010 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2010/entries/connectives-logic/>
• Jaśkowski, Stanisław (1934). “On the Rules of Suppositions in Formal Logic”, Studia Logica, 1: 5–32 (reprinted in S. McCall (ed.), Polish Logic 1920–1939, Oxford 1967, pp. 232–258).
• Jäger, Gerhard and Robert F. Stärk (1998). “A Proof-Theoretic Framework for Logic Programming”, Handbook of Proof Theory, Samuel R. Buss (ed.), Amsterdam: Elsevier, pp. 639–682.
• Kahle, Reinhard and Peter Schroeder-Heister, eds. (2006). Proof-Theoretic Semantics, Special issue of Synthese, Volume 148.
• Kneale, William (1956). “The Province of Logic”, Contemporary British Philosophy, H. D. Lewis (ed.), London: Allen and Unwin, pp. 237–261.
• Koslow, Arnold (1992). A Structuralist Theory of Logic, Cambridge: Cambridge University Press.
• Kreisel, Georg (1971). “A Survey of Proof Theory II”, Proceedings of the Second Scandinavian Logic Symposium, J. E. Fenstad (ed.), Amsterdam: North-Holland, pp. 109–170.
• Kremer, Philip (2009). “The Revision Theory of Truth”, The Stanford Encyclopedia of Philosophy (Spring 2009 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2009/entries/truth-revision/>
• Kreuger, Per (1994). “Axioms in Definitional Calculi”, Extensions of Logic Programming: Proceedings of the 4th International Workshop, ELP’93, St. Andrews, U.K., March/April 1993 (Lecture Notes in Computer Science, Volume 798), Roy Dyckhoff (ed.), Berlin: Springer, pp. 196–205.
• Lambek, J. and P.J. Scott (1986). Introduction to Higher Order Categorical Logic, Cambridge: Cambridge University Press.
• Lorenzen, Paul (1955). Einführung in die operative Logik und Mathematik, Berlin: Springer; 2nd edition, 1969.
• Martin-Löf, Per (1971). “Hauptsatz for the intuitionistic theory of iterated inductive definitions”, Proceedings of the Second Scandinavian Logic Symposium, J. E. Fenstad (ed.), Amsterdam: North-Holland, pp. 179–216.
• ––– (1984). Intuitionistic Type Theory, Napoli: Bibliopolis.
• ––– (1995). “Verificationism Then and Now”, The Foundational Debate: Complexity and Constructivity in Mathematics and Physics, Werner DePauli-Schimanovich, Eckehart Köhler, and Friedrich Stadler (eds.), Dordrecht: Kluwer, pp. 187–196.
• ––– (1998). “Truth and Knowability: On the Principles C and K of Michael Dummett”, Truth in Mathematics, Harold G. Dales and Gianluigi Oliveri (eds.), Oxford: Clarendon Press, pp. 105–114.
• Negri, Sara and Jan von Plato (2001). Structural Proof Theory, Cambridge: Cambridge University Press.
• Nelson, David (1949). “Constructible Falsity”, Journal of Symbolic Logic, 14: 16–26.
• Odintsov, Sergei P. (2008). Constructive Negations and Paraconsistency, Berlin: Springer.
• Piecha, Thomas (2016). “Completeness in Proof-Theoretic Semantics”, in: Piecha and Schroeder-Heister, eds. (2016b), pp. 231–251.
• –––, Wagner de Campos Sanz and Peter Schroeder-Heister (2015). “Failure of Completeness in Proof-Theoretic Semantics”, Journal of Philosophical Logic, 44: 321–335.
• ––– and Peter Schroeder-Heister (2016a). “Atomic Systems in Proof-Theoretic Semantics: Two Approaches”, in: Redmond, J., O. P. Martins, and Á. N. Fernández (eds.), Epistemology, Knowledge and the Impact of Interaction, Cham: Springer, pp. 47–62.
• ––– and Peter Schroeder-Heister, eds. (2016b). Advances in Proof-Theoretic Semantics, Cham: Springer (open access).
• Popper, Karl Raimund (1947a). “Logic without Assumptions”, Proceedings of the Aristotelian Society, 47: 251–292.
• ––– (1947b). “New Foundations for Logic”, Mind, 56: 193–235; corrections, Mind, 57: 69–70.
• Prawitz, Dag (1965). Natural Deduction: A Proof-Theoretical Study, Stockholm: Almqvist & Wiksell; reprinted Mineola, NY: Dover Publications, 2006.
• ––– (1971). “Ideas and Results in Proof Theory”, Proceedings of the Second Scandinavian Logic Symposium (Oslo 1970), Jens E. Fenstad (ed.), Amsterdam: North-Holland, pp. 235–308.
• ––– (1972). “The Philosophical Position of Proof Theory”, Contemporary Philosophy in Scandinavia, R. E. Olson and A. M. Paul (eds.), Baltimore and London: Johns Hopkins Press, pp. 123–134.
• ––– (1973). “Towards a Foundation of a General Proof Theory”, Logic, Methodology and Philosophy of Science IV, Patrick Suppes, et al. (eds.), Amsterdam: North-Holland, pp. 225–250.
• ––– (1974). “On the Idea of a General Proof Theory”, Synthese, 27: 63–77.
• ––– (1985). “Remarks on some Approaches to the Concept of Logical Consequence”, Synthese, 62: 152–171.
• ––– (2006). “Meaning Approached via Proofs”, Synthese, 148: 507–524.
• ––– (2007). “Pragmatist and Verificationist Theories of Meaning”, The Philosophy of Michael Dummett, Randall E. Auxier and Lewis Edwin Hahn (eds.), La Salle: Open Court, pp. 455–481.
• ––– (2013). “An Approach to General Proof Theory and a Conjecture of a Kind of Completeness of Intuitionistic Logic Revisited”, Advances in Natural Deduction, Edward Hermann Haeusler, Luiz Carlos Pereira, and Valeria de Paiva (eds.), Berlin: Springer.
• Read, Stephen (2010). “General-Elimination Harmony and the Meaning of the Logical Constants”, Journal of Philosophical Logic, 39: 557–576.
• Restall, Greg (2009). “Substructural Logics”, The Stanford Encyclopedia of Philosophy (Summer 2009 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2009/entries/logic-substructural/>.
• Sambin, Giovanni, Giulia Battilotti, and Claudia Faggian (2000). “Basic Logic: Reflection, Symmetry, Visibility”, Journal of Symbolic Logic, 65: 979–1013.
• Sandqvist, Tor (2009). “Classical Logic without Bivalence”, Analysis, 69: 211–218.
• Schroeder-Heister, Peter (1984). “A natural extension of natural deduction”, Journal of Symbolic Logic, 49: 1284–1300.
• ––– (1991). “Uniform Proof-Theoretic Semantics for Logical Constants (Abstract)”, Journal of Symbolic Logic, 56: 1142.
• ––– (1992). “Cut Elimination in Logics with Definitional Reflection”, Nonclassical Logics and Information Processing: Proceedings of the International Workshop, Berlin, November 1990 (Lecture Notes in Computer Science, Volume 619), David Pearce and Heinrich Wansing (eds.), Berlin: Springer, pp. 146–171.
• ––– (1993). “Rules of Definitional Reflection”, Proceedings of the 8th Annual IEEE Symposium on Logic in Computer Science, Los Alamitos: IEEE Press, pp. 222–232.
• ––– (2004). “On the notion of assumption in logical systems”, Selected Papers Contributed to the Sections of GAP5 (Fifth International Congress of the Society for Analytical Philosophy, Bielefeld, 22–26 September 2003), R. Bluhm and C. Nimtz (eds.), Paderborn: mentis (available online), pp. 27–48.
• ––– (2005). “Popper’s Structuralist Theory of Logic”, Karl Popper: A Centenary Assessment, Vol. III: Science, Ian Jarvie, Karl Milford, and David Miller (eds.), Aldershot: Ashgate, pp. 17–36.
• ––– (2006). “Validity Concepts in Proof-Theoretic Semantics”, Synthese, 148: 525–571.
• ––– (2007). “Generalized Definitional Reflection and the Inversion Principle”, Logica Universalis, 1: 355–376.
• ––– (2008a). “Lorenzen’s Operative Justification of Intuitionistic Logic”, One Hundred Years of Intuitionism (1907–2007): The Cerisy Conference, Mark van Atten, et al. (eds.), Basel: Birkhäuser, pp. 214–240 [references for whole volume: 391–416].
• ––– (2008b). “Proof-Theoretic versus Model-Theoretic Consequence”, The Logica Yearbook 2007, M. Peliš (ed.), Prague: Filosofia, pp. 187–200.
• ––– (2012a). “Definitional Reasoning in Proof-Theoretic Semantics and the Square of Opposition”, The Square of Opposition: A General Framework for Cognition, Jean-Yves Béziau and Gillman Payette (eds.), Bern: Peter Lang, pp. 323–349.
• ––– (2012b). “Proof-Theoretic Semantics, Self-Contradiction, and the Format of Deductive Reasoning”, Topoi, 31: 77–85.
• ––– (2012c). “The Categorical and the Hypothetical: A Critique of some Fundamental Assumptions of Standard Semantics”, Synthese, 187: 925–942.
• ––– (2012d). “Paradoxes and Structural Rules”, in: Dutilh Novaes, Catarina and Ole T. Hjortland (eds.), Insolubles and Consequences: Essays in Honour of Stephen Read, London: College Publications, pp. 203–211.
• ––– (2013). “Definitional Reflection and Basic Logic”, Annals of Pure and Applied Logic, 164(4): 491–501.
• ––– (2015). “Proof-theoretic validity based on elimination rules”, in: Haeusler, Edward Hermann, Wagner de Campos Sanz and Bruno Lopes (eds.), Why is this a Proof? Festschrift for Luiz Carlos Pereira, London: College Publications, pp. 159–176.
• ––– (2016a). “Open Problems in Proof-Theoretic Semantics”, in: Piecha and Schroeder-Heister, eds. (2016b), pp. 253–283.
• ––– (2016b). “Restricting Initial Sequents: The Trade-Offs Between Identity, Contraction and Cut”, in: Kahle, Reinhard, Thomas Strahm and Thomas Studer (eds.), Advances in Proof Theory, Basel: Birkhäuser, pp. 339–351.
• Shoesmith, D. J. and Timothy J. Smiley (1978). Multiple-Conclusion Logic, Cambridge: Cambridge University Press.
• Sommaruga, Giovanni (2000). History and Philosophy of Constructive Type Theory, Dordrecht: Kluwer.
• Sørensen, Morten Heine B. and Pawel Urzyczyn (2006). Lectures on the Curry-Howard Isomorphism, Amsterdam: Elsevier.
• Tait, W. W. (1967). “Intensional Interpretations of Functionals of Finite Type I”, Journal of Symbolic Logic, 32: 198–212.
• Tennant, Neil (1978). Natural Logic, Edinburgh: Edinburgh University Press.
• ––– (1982). “Proof and Paradox”, Dialectica, 36: 265–296.
• ––– (1987). Anti-Realism and Logic: Truth as Eternal, Oxford: Clarendon Press.
• ––– (1997). The Taming of the True, Oxford: Clarendon Press.
• Tranchini, Luca (2010). Proof and Truth: An Anti-Realist Perspective, Milano: Edizioni ETS, 2013; reprint of Ph.D. dissertation, Department of Philosophy, University of Tübingen, 2010, available online.
• ––– (2012a). “Truth from a Proof-Theoretic Perspective”, Topoi, 31: 47–57.
• ––– (2012b). “Natural Deduction for Dual Intuitionistic Logic”, Studia Logica, 100: 631–648.
• ––– (2016). “Proof-Theoretic Semantics, Paradoxes, and the Distinction between Sense and Denotation”, Journal of Logic and Computation, 26: 495–512.
• Troelstra, Anne S. and Dirk van Dalen (1988). Constructivism in Mathematics: An Introduction, Amsterdam: North-Holland.
• Troelstra, A. S. and H. Schwichtenberg (2000). Basic Proof Theory, 2nd edition, Cambridge: Cambridge University Press.
• von Kutschera, Franz (1968). “Die Vollständigkeit des Operatorensystems {¬, ∧, ∨, ⊃} für die intuitionistische Aussagenlogik im Rahmen der Gentzensemantik”, Archiv für mathematische Logik und Grundlagenforschung, 11: 3–16.
• ––– (1969). “Ein verallgemeinerter Widerlegungsbegriff für Gentzenkalküle”, Archiv für mathematische Logik und Grundlagenforschung, 12: 104–118.
• Wansing, Heinrich (1993a). “Functional Completeness for Subsystems of Intuitionistic Propositional Logic”, Journal of Philosophical Logic, 22: 303–321.
• ––– (1993b). The Logic of Information Structures (Lecture Notes in Artificial Intelligence, Volume 681), Berlin: Springer.
• ––– (2000). “The Idea of a Proof-theoretic Semantics”, Studia Logica, 64: 3–20.
• ––– (2001). “Negation”, The Blackwell Guide to Philosophical Logic, L. Goble (ed.), Cambridge, MA: Blackwell, pp. 415–436.
• Wieckowski, Bartosz (2008). “Predication in Fiction”, in The Logica Yearbook 2007, M. Peliš (ed.), Prague: Filosofia, pp. 267–285.
• ––– (2011). “Rules for Subatomic Derivation”, Review of Symbolic Logic, 4: 219–236.
• ––– (2016). “Subatomic Natural Deduction for a Naturalistic First-Order Language with Non-Primitive Identity”, Journal of Logic, Language and Information, 25: 215–268.
• Zeilberger, Noam (2008). “On the Unity of Duality”, Annals of Pure and Applied Logic, 153: 66–96.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8638104796409607, "perplexity": 2045.2633203349126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742020.26/warc/CC-MAIN-20181114125234-20181114151234-00116.warc.gz"}
http://msdn.microsoft.com/en-us/library/windows/hardware/ff545929
# Example 7: Customizing the Trace Message Prefix

Each trace message begins with a trace message prefix composed of data about the trace message. The format of the trace message prefix is stored in the %TRACE_FORMAT_PREFIX% environment variable. By changing the value of the environment variable, you can customize the trace message prefix to display the data you need about the trace message in the format that is most useful to you. The variables in the default trace message prefix, and all variables that you can use in a trace message prefix, are described in the Trace Message Prefix topic.

The following display shows the default trace message prefix. The trace messages were generated by Tracedrv, the trace-enabled sample driver in the Windows Driver Kit (WDK).

```
[0]0AF4.0C64::07/25/2003-14:55:39.998 [tracedrv]IOCTL = 1
[0]0AF4.0C64::07/25/2003-14:55:39.998 [tracedrv]Hello, 1 Hi
[0]0AF4.0C64::07/25/2003-14:55:39.998 [tracedrv]Hello, 2 Hi
...
```

The format of the default prefix is as follows.

```
[%9!d!]%8!04X!.%3!04X!::%4!s! [%1!s!]
```

which represents the following data:

```
[CPU#]ProcessID.ThreadID::SystemTime [MessageGUIDFriendlyName]
```

where the MessageGUIDFriendlyName is, by default, the name of the directory in which the trace provider was built.

To create a new trace message prefix, use the set command to reset the value of the %TRACE_FORMAT_PREFIX% environment variable. For example,

```
set TRACE_FORMAT_PREFIX=%2!s!: %!FUNC!: %8!04x!.%3!04x!: %4!s!:
```

This command sets the trace message prefix to the following format:

```
SourceFileAndLineNumber: FunctionName: ProcessID.ThreadID: SystemTime:
```

As a result, Tracefmt output uses the new trace message prefix, as shown in the following display:

```
tracedrv_c258: TracedrvDispatchDeviceControl: 0af4.0c64: 07/25/2003-13:55:39.998: IOCTL = 1
tracedrv_c264: TracedrvDispatchDeviceControl: 0af4.0c64: 07/25/2003-13:55:39.998: Hello, 1 Hi
tracedrv_c264: TracedrvDispatchDeviceControl: 0af4.0c64: 07/25/2003-13:55:39.998: Hello, 2 Hi
tracedrv_c264: TracedrvDispatchDeviceControl: 0af4.0c64: 07/25/2003-13:55:39.998: Hello, 3 Hi
...
```

Note: If you are setting the trace prefix in a command or batch file, where the percent symbol represents a variable for a command-line parameter, use two consecutive percent symbols for the prefix variables. For example, to include the system time in the prefix, type %%4.
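To illustrate the note about percent doubling, here is a minimal batch-file sketch. The file name is hypothetical, and we conservatively double every percent sign in the prefix, although the note explicitly demands it only for the parameter-like variables such as %4:

```
@echo off
rem set_trace_prefix.cmd -- hypothetical helper batch file.
rem Inside a batch file, a single % introduces a command-line parameter
rem (%1, %2, ...), so the percent signs of the prefix variables must be
rem doubled to survive batch-file expansion.
set TRACE_FORMAT_PREFIX=%%2!s!: %%!FUNC!: %%8!04x!.%%3!04x!: %%4!s!:
rem Tracefmt (or another WPP formatting tool) started from this
rem environment will now use the customized prefix.
```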
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8729071617126465, "perplexity": 2263.506732192055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663754.0/warc/CC-MAIN-20140930004103-00195-ip-10-234-18-248.ec2.internal.warc.gz"}